Simple harmonic oscillators

Our next important topic is something we've already run into a few times: oscillatory motion, which also goes by the name simple harmonic motion. This sort of motion is given by the solution of the simple harmonic oscillator (SHO) equation,

\[ \begin{aligned} m\ddot{x} = -kx \end{aligned} \]

This happens to be the equation of motion for a mass on a spring, assuming we've put our equilibrium point at \( x=0 \).

The simple harmonic oscillator is an extremely important physical system to study, because it appears almost everywhere in physics. In fact, we've already seen why it shows up everywhere: expansion around equilibrium points. If \( y_0 \) is an equilibrium point of \( U(y) \), then series expanding around that point gives

\[ \begin{aligned} U(y) = U(y_0) + \frac{1}{2} U''(y_0) (y-y_0)^2 + ... \end{aligned} \]

using the fact that \( U'(y_0) = 0 \), true for any equilibrium. If we change variables to \( x = y-y_0 \), and define \( k = U''(y_0) \), then this gives

\[ \begin{aligned} U(x) = U(y_0) + \frac{1}{2} kx^2 \end{aligned} \]

and then

\[ \begin{aligned} F(x) = -\frac{dU}{dx} = -kx \end{aligned} \]

which results in the simple harmonic oscillator equation. Since we're free to add an arbitrary constant to the potential energy, we'll ignore \( U(y_0) \) from now on, so the harmonic oscillator potential will just be \( U(x) = (1/2) kx^2 \).
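
To see the expansion in action, here's a minimal sympy sketch (not from the text; the sample potential \( U(y) = -U_0 \cos(y/L) \) and all numbers and symbols in it are made-up illustrations) that expands a potential about its equilibrium and reads off the effective spring constant \( k = U''(y_0) \):

```python
# A minimal sketch: expand a hypothetical potential U(y) = -U0*cos(y/L)
# about its equilibrium y0 = 0 and read off k = U''(y0).
import sympy as sp

y, U0, L = sp.symbols('y U_0 L', positive=True)
U = -U0 * sp.cos(y / L)

y0 = 0                                    # equilibrium point
assert sp.diff(U, y).subs(y, y0) == 0     # check U'(y0) = 0

k = sp.diff(U, y, 2).subs(y, y0)          # effective spring constant U''(y0)
print(k)                                  # -> U_0/L**2

# The series expansion reproduces U(y0) + (1/2) k y^2 + ...
print(sp.series(U, y, y0, 4))             # -> -U_0 + U_0*y**2/(2*L**2) + O(y**4)
```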

Before we turn to solving the differential equation, let's remind ourselves of what we expect for the general motion using conservation of energy. Suppose we have a fixed total energy \( E \) somewhere above the minimum of the potential at \( U=0 \), and overlay it as a horizontal line onto the potential energy curve.

Because of the symmetry of the potential, if the positive value of \( x \) at which \( E = U(x) \) is \( x = A \), so that \( U(A) = E \), then we must also have \( U(-A) = E \). Since these are the turning points, we see that \( A \), the amplitude, is the maximum possible distance that our object can reach from the equilibrium point; it must oscillate back and forth between \( x = \pm A \). We also know that \( T=0 \) at the turning points, which means that

\[ \begin{aligned} E = \frac{1}{2} kA^2. \end{aligned} \]

This relation will make it useful to write our solutions in terms of the amplitude. But to do that, first we have to find the solutions! It's conventional to rewrite the SHO equation in the form

\[ \begin{aligned} \ddot{x} = -\omega^2 x, \end{aligned} \]

where

\[ \begin{aligned} \omega^2 \equiv \frac{k}{m}. \end{aligned} \]

The combination \( \omega \) has units of 1/time, and the notation suggests it is an angular frequency - we'll see why shortly.

Solutions to the SHO equation

I've previously argued that just by inspection, we know that \( \sin(\omega t) \) and \( \cos(\omega t) \) solve this equation, because we need an \( x(t) \) that goes back to minus itself after two derivatives (and kicks out a factor of \( \omega^2 \).) But let's be more systematic this time.

Thinking back to our general theory of ODEs, we see that this equation is linear, homogeneous, and second-order, with constant coefficients.

Since it's linear, we know there exists a general solution, and since it's second-order, we know it must contain exactly two unknown constants. We haven't talked about the importance of being homogeneous yet, so let's look at that now. The most general possible homogeneous, linear, second-order ODE takes the form

\[ \begin{aligned} \frac{d^2y}{dx^2} + P(x) \frac{dy}{dx} + Q(x) y(x) = 0, \end{aligned} \]

where \( P(x) \) and \( Q(x) \) are arbitrary functions of \( x \). All solutions of this equation for \( y(x) \) have an extremely useful property known as the superposition principle: if \( y_1(x) \) and \( y_2(x) \) are solutions, then any linear combination of the form \( ay_1(x) + by_2(x) \) is also a solution. It's easy to show this: if you plug that combination into the ODE above, a bit of algebra gives

\[ \begin{aligned} a \left( \frac{d^2y_1}{dx^2} + P(x) \frac{dy_1}{dx} + Q(x) y_1(x) \right) + b \left( \frac{d^2y_2}{dx^2} + P(x) \frac{dy_2}{dx} + Q(x) y_2(x) \right) = 0 \end{aligned} \]

and since by assumption both \( y_1 \) and \( y_2 \) are solutions of the original equation, both of the expressions in parentheses are 0, so the whole left-hand side is 0 and the equation is satisfied. Notice that being homogeneous is very important for this: if there were some other function \( F(x) \) on the right-hand side of the original equation, then we would get \( aF(x) + bF(x) \) on the left-hand side for our linear combination, and superposition would not work.

Anyway, the superposition principle greatly simplifies our search for a solution: if we find any two solutions, then we can add them together with arbitrary constants and we have the general solution. Let's go back to the SHO equation:

\[ \begin{aligned} \ddot{x} = - \omega^2 x \end{aligned} \]

As I mentioned before, two functions that both satisfy this equation are \( \sin(\omega t) \) and \( \cos(\omega t) \). So the general solution, by superposition, is

\[ \begin{aligned} x(t) = B_1 \cos (\omega t) + B_2 \sin (\omega t). \end{aligned} \]
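
As a quick sanity check (a sketch in sympy, not part of the original notes), we can verify symbolically that this combination satisfies \( \ddot{x} = -\omega^2 x \) for any constants \( B_1 \) and \( B_2 \):

```python
# Sketch: symbolic check that B1*cos(omega*t) + B2*sin(omega*t) solves
# the SHO equation x'' = -omega^2 x for arbitrary constants B1, B2.
import sympy as sp

t, omega, B1, B2 = sp.symbols('t omega B_1 B_2', real=True)
x = B1 * sp.cos(omega * t) + B2 * sp.sin(omega * t)

residual = sp.diff(x, t, 2) + omega**2 * x   # vanishes if x(t) is a solution
print(sp.simplify(residual))                 # -> 0
```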


Clicker Question

Consider the following attempt to write a general solution to the SHO equation \( \ddot{x} = -\omega^2 x \):

\[ \begin{aligned} x(t) = C_1 \sin (\omega t) + C_2 \sin (-\omega t) \\ \end{aligned} \]

Is this a valid general solution?

A. Yes!

B. No, it doesn't have enough independent constants.

C. No, the second term isn't a solution to the SHO equation.

D. It depends on the initial conditions.

Answer: B

We can rule out answer D right away; the whole idea of a general solution is that it is true for any choice of initial conditions. (It is actually true that for certain initial conditions, this \( x(t) \) can be a valid solution, but it won't be general.)

As for answer C, we can check easily enough that

\[ \begin{aligned} \frac{d^2}{dt^2} \left[ C_2 \sin (-\omega t) \right] = -(-\omega)^2 C_2 \sin (-\omega t) = -\omega^2 \left[ C_2 \sin (-\omega t) \right] \end{aligned} \]

which indeed satisfies the equation \( \ddot{x} = -\omega^2 x \).

Answer B points us to the real problem. The issue is that \( \sin(-\omega t) \) is closely related to \( \sin(\omega t) \), using the trig identity

\[ \begin{aligned} \sin(-x) = -\sin(x) \end{aligned} \]

(i.e. the statement that \( \sin(x) \) is an odd function.) This lets us rewrite \( x(t) \) as

\[ \begin{aligned} x(t) = (C_1 - C_2) \sin (\omega t) \end{aligned} \]

So although we have written down two constants, we see that they are not independent: only their difference matters for the solution we write down. In other words, we might as well have just defined a single constant \( C_3 = C_1 - C_2 \). This clearly does not give us enough freedom to match arbitrary initial conditions - anything with \( x(0) \neq 0 \), for example! So this isn't a valid general solution.


The problem highlighted above is that for a general solution, we have to verify that the solutions we are adding together are linearly independent. This means simply that if we have two functions \( f(x) \) and \( g(x) \), there is no solution to the equation

\[ \begin{aligned} k_1 f(x) + k_2 g(x) = 0 \end{aligned} \]

except for \( k_1 = k_2 = 0 \). If there were such a solution, then we could solve for one of the functions in terms of the other, for example \( g(x) = -(k_1/k_2) f(x) \). Such a relation means that the linear combination \( C_1 f(x) + C_2 g(x) \) is really just a single constant multiplying \( f(x) \), as we saw in the clicker question.

If you're uncertain about whether a collection of functions is linearly independent or not, there's a simple test that you can do using what is known as the Wronskian determinant. For our present purposes we don't really need the Wronskian - we know enough about trig functions to test for linear independence directly. If you're interested, have a look at Boas 3.8.
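
Here's a quick sketch of that test, with the \( 2 \times 2 \) Wronskian determinant written out by hand in sympy (everything here is just illustration; see Boas 3.8 for the general theory):

```python
# Sketch: 2x2 Wronskian test for linear independence of two functions of t.
import sympy as sp

t, omega = sp.symbols('t omega', positive=True)

def wronskian2(f, g):
    """Wronskian determinant of two functions of t."""
    return sp.simplify(sp.Matrix([[f, g],
                                  [sp.diff(f, t), sp.diff(g, t)]]).det())

# cos(omega*t) and sin(omega*t): Wronskian is omega (nonzero), so independent.
print(wronskian2(sp.cos(omega * t), sp.sin(omega * t)))    # -> omega

# sin(omega*t) and sin(-omega*t): Wronskian vanishes, so NOT independent.
print(wronskian2(sp.sin(omega * t), sp.sin(-omega * t)))   # -> 0
```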

Now, back to our general solution:

\[ \begin{aligned} x(t) = B_1 \cos (\omega t) + B_2 \sin (\omega t). \end{aligned} \]

This is indeed oscillatory motion, since both \( \sin \) and \( \cos \) are periodic functions: specifically, if we wait for \( t = \tau = 2\pi / \omega \), then

\[ \begin{aligned} x(t+\tau) = B_1 \cos (\omega t + 2\pi) + B_2 \sin (\omega t + 2\pi) = B_1 \cos (\omega t) + B_2 \sin (\omega t) = x(t) \end{aligned} \]

i.e. after a time of \( \tau \), the system returns to exactly where it was. The time length

\[ \begin{aligned} \tau = \frac{2\pi}{\omega} \end{aligned} \]

is known as the period of oscillation, and we see that indeed \( \omega \) is the corresponding angular frequency.
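
For a concrete sense of scale, here is a short numerical example (all numbers are made up purely for illustration):

```python
# Sketch: angular frequency and period for a hypothetical mass on a spring.
import numpy as np

m = 0.50        # kg, hypothetical mass
k = 200.0       # N/m, hypothetical spring constant

omega = np.sqrt(k / m)      # angular frequency, rad/s
tau = 2 * np.pi / omega     # period, s
f = 1 / tau                 # ordinary frequency, Hz

print(f"omega = {omega:.2f} rad/s, tau = {tau:.3f} s, f = {f:.2f} Hz")
```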

To fix the arbitrary constants, we would just need to plug in initial conditions. If we have an initial position \( x_0 \) and initial speed \( v_0 \), then plugging in \( t=0 \) (where the sine term vanishes), we get the simple system of equations

\[ \begin{aligned} x_0 = x(0) = B_1 \\ v_0 = \dot{x}(0) = B_2 \omega \end{aligned} \]

so

\[ \begin{aligned} x(t) = x_0 \cos (\omega t) + \frac{v_0}{\omega} \sin (\omega t). \end{aligned} \]
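
As a sanity check on this solution (a sketch with arbitrary made-up numbers, using scipy's general-purpose integrator rather than anything specific to these notes), we can compare it against a direct numerical integration of \( m\ddot{x} = -kx \):

```python
# Sketch: compare the analytic solution x0*cos(wt) + (v0/w)*sin(wt)
# against a direct numerical integration of m x'' = -k x.
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 4.0
omega = np.sqrt(k / m)          # = 2.0 rad/s
x0, v0 = 0.3, 1.2               # hypothetical initial conditions

def rhs(t, y):
    x, v = y
    return [v, -(k / m) * x]

t_eval = np.linspace(0, 10, 500)
sol = solve_ivp(rhs, (0, 10), [x0, v0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

x_analytic = x0 * np.cos(omega * t_eval) + (v0 / omega) * np.sin(omega * t_eval)
print(np.max(np.abs(sol.y[0] - x_analytic)))   # tiny: the two agree
```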

This is a perfectly good solution, but this way of writing it makes answering certain questions inconvenient. For example, what is the amplitude \( A \), i.e. the maximum displacement from \( x=0 \)? It's possible to solve for, but certainly not obvious.

Instead of trying to find \( A \) directly, we'll just rewrite our general solution in a different form. We know that trig identities let us combine sums of sine and cosine into a single trig function. Thus, the general solution must also be writable in the form

\[ \begin{aligned} x(t) = A \cos (\omega t - \delta) \end{aligned} \]

where now \( A \) and \( \delta \) are the arbitrary constants. In this form, it's easy to see that \( A \) is exactly the amplitude, since the maximum of a single trig function is at \( \pm 1 \). We could relate \( A \) and \( \delta \) back to \( B_1 \) and \( B_2 \) using trig identities (look in Taylor), but instead we can just re-apply the initial conditions to this form:

\[ \begin{aligned} x_0 = x(0) = A \cos (-\delta) = A \cos \delta \\ v_0 = \dot{x}(0) = -\omega A \sin(-\delta) = \omega A \sin \delta \end{aligned} \]

using trig identities to simplify. (In fact, we used \( -\delta \) rather than \( +\delta \) inside our general solution precisely to ensure that there are no minus signs in these equations.) To compare the two forms a little more closely, we see that if we start from rest at positive \( x_0 \), we must have \( \delta = 0 \) and so

\[ \begin{aligned} x(t) = A \cos (\omega t) = x_0 \cos (\omega t). \end{aligned} \]

On the other hand, if we start at \( x=0 \) but with a positive initial speed \( v_0 \), we must have \( \delta = \pi/2 \) to match the initial conditions, and so

\[ \begin{aligned} x(t) = A \cos \left(\omega t - \frac{\pi}{2}\right) = A \sin (\omega t) = \frac{v_0}{\omega} \sin (\omega t). \end{aligned} \]

In between the pure-cosine and pure-sine cases, for \( \delta \) between 0 and \( \pi/2 \), both position and speed are non-zero at \( t=0 \). We can visualize the resulting motion as simply a phase-shifted version of the simple cases, with the phase shift \( \delta \) determining the offset between the initial time and the time at which \( x=A \).
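
Concretely, matching coefficients between the two forms gives \( A\cos\delta = x_0 \) and \( A\sin\delta = v_0/\omega \), so \( A = \sqrt{x_0^2 + (v_0/\omega)^2} \) and \( \tan\delta = (v_0/\omega)/x_0 \). A small numerical sketch (arbitrary numbers, purely illustrative) confirms that the two forms describe the same motion:

```python
# Sketch: convert (x0, v0) into the amplitude-phase form A*cos(omega*t - delta)
# and check it matches the two-term form.
import numpy as np

omega = 2.0
x0, v0 = 0.3, 1.2                       # hypothetical initial conditions

A = np.hypot(x0, v0 / omega)            # amplitude
delta = np.arctan2(v0 / omega, x0)      # phase (arctan2 gets the quadrant right)

t = np.linspace(0, 10, 500)
x_two_term = x0 * np.cos(omega * t) + (v0 / omega) * np.sin(omega * t)
x_phase = A * np.cos(omega * t - delta)
print(np.max(np.abs(x_two_term - x_phase)))    # ~1e-16: identical solutions
```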

Earlier, we found that the amplitude is related to the total energy as \( E = \frac{1}{2} kA^2 \). We can verify this against our solutions with an \( A \) in them: if we start at \( x=A \) from rest, then

\[ \begin{aligned} E = U_0 = \frac{1}{2} kx(0)^2 = \frac{1}{2} k A^2 \end{aligned} \]

and if we start at \( x=0 \) (so \( U=0 \)) at speed \( v_0 \), then

\[ \begin{aligned} E = T_0 = \frac{1}{2} mv_0^2 = \frac{1}{2} kA^2 \\ A = \sqrt{\frac{m}{k}} v_0 = \frac{v_0}{\omega} \end{aligned} \]

which matches what we found above comparing our two solution forms. In fact, it's easy to show using the phase-shifted cosine solution that \( E = \frac{1}{2}kA^2 \) always - I won't go through the derivation, but it's in Taylor.
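
If you'd like to see it numerically instead, here's a short sketch (arbitrary numbers) checking that \( T + U \) stays equal to \( \frac{1}{2}kA^2 \) along the phase-shifted cosine solution:

```python
# Sketch: along x(t) = A*cos(omega*t - delta), the total energy T + U
# equals (1/2) k A^2 at every time.
import numpy as np

m, k = 1.0, 4.0
omega = np.sqrt(k / m)
A, delta = 0.67, 1.1            # hypothetical amplitude and phase

t = np.linspace(0, 10, 500)
x = A * np.cos(omega * t - delta)
v = -A * omega * np.sin(omega * t - delta)

E = 0.5 * m * v**2 + 0.5 * k * x**2
print(np.allclose(E, 0.5 * k * A**2))   # -> True
```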

We're not quite done yet: there is actually yet another way to write the general solution, which will just look like a rearrangement for now but will be extremely useful when we add extra forces to the problem. But to understand it, we need to take a brief math detour into the complex numbers.

Complex numbers

This is one of those topics that I suspect many of you have seen before, but you might be rusty on the details. So I will go quickly over the basics; if you haven't seen this recently, make sure to go carefully through the assigned reading in Boas.

Complex numbers are an important and useful extension of the real numbers. In particular, they can be thought of as an extension which allows us to take the square root of a negative number. We define the imaginary unit as the number which squares to \( -1 \),

\[ \begin{aligned} i^2 = -1. \end{aligned} \]

Some books (mostly in engineering, particularly electrical engineering) will call the imaginary unit \( j \) instead - presumably because they want to reserve \( i \) for electric current. This definition allows us to identify the square root of a negative number by including \( i \), for example the combination \( 8i \) will square to

\[ \begin{aligned} (8i)^2 = (64) (-1) = -64 \end{aligned} \]

so \( \sqrt{-64} = 8i \). Actually, we should be careful: notice that \( -8i \) also squares to \( -64 \). Just as with the real numbers, there are two square roots for every number: \( +8i \) is the principal root of -64, the one with the positive sign that we take by default as a convention.

This ambiguity extends to \( i \) itself: some books will start with the definition \( \sqrt{-1} = i \), but we could also have taken \( -i \) as the square root of \( -1 \) and everything would work the same. Defining \( i \) using square roots is also dangerous, because it leads you to things like this "proof":

\[ \begin{aligned} 1 = \sqrt{1} = \sqrt{(-1) (-1)} = \sqrt{-1} \sqrt{-1} = i^2 = -1?? \end{aligned} \]

This is obviously wrong, because it uses a familiar property of square roots that isn't true for complex numbers: the rule \( \sqrt{ab} = \sqrt{a} \sqrt{b} \) no longer works if any negatives are involved under the roots.
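
Both points are easy to check with Python's cmath module (a quick illustration, not part of the notes): cmath.sqrt returns the principal root, and the bogus proof fails exactly at the \( \sqrt{ab} = \sqrt{a}\sqrt{b} \) step:

```python
import cmath

print(cmath.sqrt(-64))                     # -> 8j, the principal root
print((8j)**2 == -64, (-8j)**2 == -64)     # -> True True: both square to -64

# The bogus "proof" above fails at the sqrt(ab) = sqrt(a)*sqrt(b) step:
print(cmath.sqrt((-1) * (-1)))             # -> (1+0j)
print(cmath.sqrt(-1) * cmath.sqrt(-1))     # -> (-1+0j), not the same!
```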

Since we've only added the special object \( i \), the most general possible complex number is

\[ \begin{aligned} z = x + iy \end{aligned} \]

where \( x \) and \( y \) are both real. This observation leads to the fact that the complex numbers can be thought of as two copies of the real numbers. In fact, we can use this to graph complex numbers in the complex plane: if we define

\[ \begin{aligned} {\rm Re}(z) = x \\ {\rm Im}(z) = y \end{aligned} \]

as the real part and imaginary part of \( z \), then we can draw any \( z \) as a point in a plane with coordinates \( (x,y) = ({\rm Re}(z), {\rm Im}(z)) \):

Obviously, numbers along the real axis here are real numbers; numbers on the vertical axis are called imaginary numbers or pure imaginary.

In complete analogy to polar coordinates, we define the magnitude or modulus of \( z \) to be

\[ \begin{aligned} |z| = \sqrt{x^2 + y^2} = \sqrt{{\rm Re}(z)^2 + {\rm Im}(z)^2}. \end{aligned} \]

We can also define an angle \( \theta \), as pictured above, by taking \( \theta = \tan^{-1}(y/x) \) (being careful about quadrants.) This is very useful in a specific context - we'll come back to it in a moment.

First, one more definition: for every \( z = x + iy \), we define the complex conjugate \( z^\star \) to be

\[ \begin{aligned} z^\star = x - iy \end{aligned} \]

(More notation: in math books this is often written \( \bar{z} \) instead of \( z^\star \); physicists tend to prefer the star notation.) In terms of the complex plane, this is a mirror reflection across the \( x \)-axis. We note that

\[ \begin{aligned} zz^\star = (x+iy)(x-iy) = x^2 - i^2 y^2 = x^2 + y^2 \end{aligned} \]

so \( zz^\star = |z|^2 = |z^\star|^2 \) (the last equality holds since \( (z^\star)^\star = z \).)

Multiplying and dividing complex numbers is best accomplished by doing algebra and using the definition of \( i \). For example,

\[ \begin{aligned} (2+3i)(1-i) = 2 - 2i + 3i - 3i^2 \\ = 5 + i \end{aligned} \]

Division proceeds similarly, except that we should always use the complex conjugate to make the denominator real. For example,

\[ \begin{aligned} \frac{2+i}{3-i} = \frac{2+i}{3-i} \frac{3+i}{3+i} \\ = \frac{6 + 5i + i^2}{9 - i^2} \\ = \frac{6 + 5i - 1}{9 + 1} \\ = \frac{5+5i}{10} = \frac{1}{2} + \frac{i}{2}. \end{aligned} \]

The most important division rule to keep in mind is that dividing by \( i \) just gives an extra minus sign, e.g.

\[ \begin{aligned} \frac{2}{i} = \frac{2i}{i^2} = -2i. \end{aligned} \]
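
Python's built-in complex type is a convenient way to double-check this kind of arithmetic (just a quick sanity check of the worked examples above):

```python
# Sanity check of the multiplication and division examples above.
print((2 + 3j) * (1 - 1j))       # -> (5+1j)
print((2 + 1j) / (3 - 1j))       # -> (0.5+0.5j)
print(2 / 1j)                    # -> -2j   (dividing by i gives a minus sign)
print((2 + 1j).conjugate())      # -> (2-1j), the complex conjugate z*
print(abs(3 + 4j))               # -> 5.0, the modulus |z|
```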

Now, finally to the real reason we're introducing complex numbers: an extremely useful equation called Euler's formula. To find the formula, we start with the series expansion of the exponential function:

\[ \begin{aligned} e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + ... \end{aligned} \]

Series expansions are just as valid over the complex numbers as over the real numbers (with some caveats, but definitely and always true for the exponential.) What happens if we plug in an imaginary number instead of a real one? Let's try \( x = i\theta \), where \( \theta \) is real:

\[ \begin{aligned} e^{i\theta} = 1 + i\theta + \frac{(i\theta)^2}{2!} + \frac{(i\theta)^3}{3!} + \frac{(i\theta)^4}{4!} + ... \end{aligned} \]

Multiplying out all the factors of \( i \) and rearranging, we find this splits into two series, one with an \( i \) left over and one which is completely real:

\[ \begin{aligned} e^{i\theta} = \left(1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} + ... \right) + i \left( \theta - \frac{\theta^3}{3!} + ... \right) \end{aligned} \]

Those two series should look familiar: they are exactly the Taylor series for the trig functions \( \cos \theta \) and \( \sin \theta \)! With that recognition, we have Euler's formula:

\[ \begin{aligned} e^{i\theta} = \cos \theta + i \sin \theta. \end{aligned} \]

This is an extremely useful and powerful formula, that makes complex numbers much easier to work with! Notice first that

\[ \begin{aligned} |e^{i\theta}| = \sqrt{\cos^2 \theta + \sin^2 \theta} = 1. \end{aligned} \]

So \( e^{i\theta} \) is always on the unit circle - and as the notation suggests, the angle \( \theta \) is exactly the angle in the complex plane of this number. This leads to the polar notation for an arbitrary complex number,

\[ \begin{aligned} z = x + iy = |z| e^{i\theta}. \end{aligned} \]
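
The cmath module speaks this polar language directly (a small sketch; the particular \( z \) is an arbitrary choice):

```python
# Sketch: polar form |z| e^{i theta} via cmath.
import cmath

z = 1 + 1j
r, theta = cmath.polar(z)          # modulus |z| and angle theta
print(r, theta)                    # -> 1.4142..., 0.7853... (= pi/4)
print(cmath.rect(r, theta))        # back to (approximately) (1+1j)
print(abs(z), cmath.phase(z))      # the same |z| and theta, one at a time
```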

The polar version is much easier to work with for multiplying and dividing complex numbers:

\[ \begin{aligned} z_1 z_2 = |z_1| |z_2| e^{i(\theta_1 + \theta_2)} \\ \frac{z_1}{z_2} = \frac{|z_1|}{|z_2|} e^{i(\theta_1 - \theta_2)} \\ z^\alpha = |z|^\alpha e^{i\alpha \theta} \end{aligned} \]

For that last one, we see that taking roots is especially nice in polar coordinates. For example, since \( i = e^{i\pi/2} \), we have

\[ \begin{aligned} \sqrt{i} = (e^{i\pi/2})^{1/2} = e^{i\pi/4} = \frac{1}{\sqrt{2}} (1+i) \end{aligned} \]

which you can easily verify. (This is the principal root, actually; the other possible solution is at \( e^{5i \pi/4} \), which is \( -1 \) times the number above.)
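
You can do exactly this verification in a couple of lines (illustrative only):

```python
# Sketch: the principal square root of i, computed three equivalent ways.
import cmath

print(cmath.sqrt(1j))                  # -> (0.7071...+0.7071...j)
print((1 + 1j) / cmath.sqrt(2))        # (1+i)/sqrt(2): the same number
print(cmath.exp(1j * cmath.pi / 4))    # e^{i pi/4}: the same number again
```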