Solving the Harmonic Oscillator Equations

In this notebook, we'll be working on a classic problem: solving the harmonic oscillator equations.

Introduction: the harmonic oscillator

What's a harmonic oscillator? It's a relatively simple model for natural systems that finds wide application.

A classical example of such a system is a mass connected to a spring. If the mass $m$ is located at position $x(t)$, and the spring stiffness is $k$, then Newton's second law reads:

$$ m \ddot{x} + k x = 0 $$

Let's simplify the notation in the following way:

$$ \ddot{x} + \omega_0^2 x = 0 $$

where $\omega_0^2 = \frac{k}{m}$. The above equation is the harmonic oscillator model equation. On its own, it does not define a well-posed problem: we also need to specify initial conditions, which define the oscillator's state at the time origin.

Here, since we have a second derivative in time of the position $x$, we need to specify two things: the initial position and the initial speed of the oscillator. These are denoted as:

$$ \begin{align} x(t = 0) = x_0 \\ \dot{x}(t=0) = \dot{x}_0 \end{align} $$
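For reference, this initial value problem has a well-known closed-form solution, which all three approaches below should recover:

$$ x(t) = x_0 \cos(\omega_0 t) + \frac{\dot{x}_0}{\omega_0} \sin(\omega_0 t) $$

It is easy to check that this satisfies both initial conditions: at $t=0$ the cosine term gives $x_0$, and differentiating once gives $\dot{x}(0) = \dot{x}_0$.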

In this notebook, we will explore three options for solving the evolution problem of this harmonic oscillator:

  • solve it analytically
  • solve it numerically with our own numerical scheme
  • solve it numerically with scipy's built-in tools

Analytical solution with sympy

To solve this equation analytically, we will use sympy. Sympy provides an ordinary differential equation (ODE) module for these problems:

In [1]:
import sympy

First, we define our symbols and function:

In [98]:
m, k, x_0, v_0, omega_0, t = sympy.symbols('m, k, x_0, v_0, omega_0, t')
x = sympy.Function('x')

We can then use dsolve, which deals with differential equations:

In [99]:
sol = sympy.dsolve(sympy.Derivative(x(t), t, 2) + omega_0**2 * x(t))
$$x{\left (t \right )} = C_{1} e^{- t \sqrt{- \omega_{0}^{2}}} + C_{2} e^{t \sqrt{- \omega_{0}^{2}}}$$

And define our initial conditions and solve for them:

In [100]:
ics = [sympy.Eq(sol.args[1].subs(t, 0), x_0), sympy.Eq(sol.args[1].diff(t).subs(t, 0), v_0)]
$$\left [ C_{1} + C_{2} = x_{0}, \quad - C_{1} \sqrt{- \omega_{0}^{2}} + C_{2} \sqrt{- \omega_{0}^{2}} = v_{0}\right ]$$
In [101]:
solved_ics = sympy.solve(ics)
$$\left [ \left \{ C_{1} : - \frac{v_{0}}{2 \sqrt{- \omega_{0}^{2}}} + \frac{x_{0}}{2}, \quad C_{2} : \frac{v_{0}}{2 \sqrt{- \omega_{0}^{2}}} + \frac{x_{0}}{2}\right \}\right ]$$
In [102]:
full_sol = sol.subs(solved_ics[0])
$$x{\left (t \right )} = \left(- \frac{v_{0}}{2 \sqrt{- \omega_{0}^{2}}} + \frac{x_{0}}{2}\right) e^{- t \sqrt{- \omega_{0}^{2}}} + \left(\frac{v_{0}}{2 \sqrt{- \omega_{0}^{2}}} + \frac{x_{0}}{2}\right) e^{t \sqrt{- \omega_{0}^{2}}}$$

The above equation was obtained by sympy and contains the solution to our problem. One might wonder why it contains square roots of negative quantities. This is perfectly normal: for real values of $\omega_0$, the complex exponentials combine into sines and cosines, so the function values turn out to be real, as expected for a mass on a spring.
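As an aside, the square roots of negative quantities disappear if we tell sympy up front that $\omega_0$ is positive; sympy then simplifies the characteristic roots to $\pm i\omega_0$ and returns the real trigonometric form directly. A quick sketch, separate from the cells above:

```python
import sympy

t = sympy.symbols('t')
# Declaring omega_0 positive lets sympy simplify sqrt(-omega_0**2) to I*omega_0
omega_0 = sympy.symbols('omega_0', positive=True)
x = sympy.Function('x')

# Same model equation as above: x'' + omega_0**2 * x = 0
sol = sympy.dsolve(sympy.Derivative(x(t), t, 2) + omega_0**2 * x(t))
print(sol)  # solution expressed with sin and cos instead of complex exponentials
```

The downside is that assumptions must be chosen before building the expressions, which is why the main derivation above works with the general complex form instead.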

Let's plot the solution.

We will use two sample problems:

  • case 1 : initial position is nonzero and initial velocity is zero
  • case 2 : initial position is zero and initial velocity is nonzero
In [103]:
%matplotlib inline
In [104]:
case1 = sympy.simplify(full_sol.subs({x_0:1, v_0:0, omega_0:1}))
$$x{\left (t \right )} = \cos{\left (t \right )}$$
In [105]:
sympy.plot(case1.rhs)
<sympy.plotting.plot.Plot at 0xd55cba8>

Let's look at our second solution.

In [107]:
case2 = sympy.trigsimp(full_sol.subs({x_0:0, v_0:1, omega_0:1}))
$$x{\left (t \right )} = - \frac{i}{2} e^{i t} + \frac{i}{2} e^{- i t}$$

We can recognize that the solution is nothing other than $\sin(t)$.

In [108]:
sympy.plot(case2.rhs)
<sympy.plotting.plot.Plot at 0xd5bd048>

Okay, now on to solving the equation numerically with our own scheme.

Numerical solution using finite differences

Let's discretize our problem as an explicit finite difference scheme. The problem becomes:

$$ \left\{ \begin{aligned} x^{n+1} &= (2 - \omega_0^2 dt^2)\, x^n - x^{n-1} \\ x^0 &= x_0 \\ x^1 &= x_0 + \dot{x}_0\, dt \end{aligned} \right. $$

As we can see from the equation above, we need to keep track of the two previous positions $x^n$ and $x^{n-1}$ to compute the solution for the next time step. Also, the value $x^1$ is computed from the knowledge of the initial speed by a first order Taylor approximation.

To solve this, I wrote a class in the next cell called HarmonicOdeSolver. It provides two methods, apart from __init__:

  • step that iterates the numerical system one single step in time
  • step_until that iterates the numerical system until a given time and returns snapshots of the solution at certain points chosen by the user
In [109]:
import numpy as np

class HarmonicOdeSolver:
    def __init__(self, dt, x0, xd0, omega_squared):
        "Inits the solver."
        self.dt = dt
        self.dt_squared = dt**2
        self.t = dt
        self.omega_squared = omega_squared
        self.x0 = x0
        self.xd0 = xd0
        self.x = [xd0 * dt + x0, x0]
    def step(self):
        "Steps the solver."
        xt, xtm1 = self.x
        xtp1 = (2 - self.omega_squared * self.dt_squared) * xt - xtm1
        self.x = (xtp1, xt)
        self.t += self.dt
    def step_until(self, tmax, snapshot_dt):
        "Steps the solver until a given time, returns snapshots."
        ts = [self.t]
        vals = [self.x[0]]
        # number of elementary steps between two snapshots
        niter = max(1, int(snapshot_dt // self.dt))
        while self.t < tmax:
            for _ in range(niter):
                self.step()
            ts.append(self.t)
            vals.append(self.x[0])
        return np.array(ts), np.array(vals)

Let's test this implementation.

In [110]:
solver = HarmonicOdeSolver(2e-1, 1, 0, 1)
In [111]:
snapshot_dt = 0.3
In [112]:
ts, vals = solver.step_until(12, snapshot_dt)
In [113]:
import matplotlib.pyplot as plt
In [114]:
plt.plot(ts, vals)
[<matplotlib.lines.Line2D at 0xd69fe80>]

Let's compare this to sympy.

In [115]:
plt.plot(ts, vals, 'x', label='my own ode solver')
plt.plot(ts, sympy.lambdify(t, case1.rhs, 'numpy')(ts), label='sympy')
plt.legend(loc='upper right')
<matplotlib.legend.Legend at 0xd710898>

Okay, that's pretty good, except that the ode solver lags a bit behind the sympy solution (this is due to the computation of $x^1$, which is only a first order approximation). Let's try again with a smaller dt.
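As a side note, the lag can also be reduced without shrinking dt, by initializing $x^1$ with a second order Taylor expansion, $x^1 = x_0 + \dot{x}_0 dt - \frac{1}{2}\omega_0^2 x_0 dt^2$, where $\ddot{x}(0) = -\omega_0^2 x_0$ comes from the model equation itself. A minimal standalone sketch of this idea (the `leapfrog` helper is illustrative, not part of the notebook's solver):

```python
import numpy as np

def leapfrog(dt, x0, v0, omega_sq, tmax, second_order_start=True):
    """Central-difference scheme x^{n+1} = (2 - w^2 dt^2) x^n - x^{n-1}."""
    if second_order_start:
        # x^1 from a second-order Taylor expansion around t = 0
        x1 = x0 + v0 * dt - 0.5 * omega_sq * x0 * dt**2
    else:
        # first-order start, as in HarmonicOdeSolver above
        x1 = x0 + v0 * dt
    xs = [x0, x1]
    for _ in range(int(round(tmax / dt)) - 1):
        xs.append((2 - omega_sq * dt**2) * xs[-1] - xs[-2])
    return np.array(xs)

dt = 2e-1
ts = np.arange(len(leapfrog(dt, 1, 0, 1, 12))) * dt
exact = np.cos(ts)  # reference solution for case 1
err_first = np.abs(leapfrog(dt, 1, 0, 1, 12, second_order_start=False) - exact).max()
err_second = np.abs(leapfrog(dt, 1, 0, 1, 12, second_order_start=True) - exact).max()
```

With the improved start, the maximum error at this coarse dt drops noticeably, because the whole scheme then converges at second order instead of being polluted by the first order first step.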

In [117]:
solver = HarmonicOdeSolver(1e-3, 1, 0, 1)
ts, vals = solver.step_until(12, snapshot_dt)
In [118]:
plt.plot(ts, vals, 'x', label='my own ode solver')
plt.plot(ts, sympy.lambdify(t, case1.rhs, 'numpy')(ts), label='sympy')
plt.legend(loc='upper right')
<matplotlib.legend.Legend at 0xd78d588>

Okay, looks good. What about the second case?

In [119]:
solver = HarmonicOdeSolver(1e-3, 0, 1, 1)
ts, vals = solver.step_until(12, snapshot_dt)
In [120]:
plt.plot(ts, vals, 'x', label='my own ode solver')
plt.plot(ts, np.real(sympy.lambdify(t, case2.rhs, 'numpy')(ts)), label='sympy')
plt.legend(loc='upper right')
<matplotlib.legend.Legend at 0xd4f72e8>

Okay, good again! Let's move on to the last part of this exploration.

Numerical solution using Scipy

Scipy offers a number of tools for dealing with ordinary differential equations. Here, we'll use the odeint routine.

The documentation says that this routine solves first order differential equations. This means that we need to recast our problem as a first order system. To do this, we write out a vector of unknowns:

$$ u = \begin{pmatrix} \dot{x} \\ x \end{pmatrix} $$

Hence we can write:

$$ \frac{d}{dt} \begin{pmatrix} \dot{x} \\ x \end{pmatrix} = \begin{pmatrix} -\omega_0^2 x \\ \dot{x} \end{pmatrix} $$

We can now write out the derivative, i.e. the right hand side of the previous equation, as a function of the vector $u$:

In [121]:
def deriv(u, t, omega_squared):
    "Provides derivative of vector u."
    xdot, x = u
    return [-omega_squared * x, xdot]

Let's solve case 1 using this. We define a time vector:

In [127]:
ts = np.arange(0, 12, snapshot_dt)

And initial conditions.

In [128]:
y0 = [0, 1]

Now we integrate and look at the result.

In [129]:
from scipy.integrate import odeint
scipysol = odeint(deriv, y0, ts, args=(1,))
In [130]:
plt.plot(ts, scipysol[:, 1])
[<matplotlib.lines.Line2D at 0xd7f3668>]

Let's compare it again with our reference solution:

In [131]:
plt.plot(ts, scipysol[:, 1], 'x', label='scipy')
plt.plot(ts, sympy.lambdify(t, case1.rhs, 'numpy')(ts), label='sympy')
plt.legend(loc='upper right')
<matplotlib.legend.Legend at 0xe830438>

This works nicely. What about the second test case?

In [132]:
y0 = [1, 0]
In [133]:
scipysol = odeint(deriv, y0, ts, args=(1,))
In [134]:
plt.plot(ts, scipysol[:, 1], 'x', label='scipy')
plt.plot(ts, np.real(sympy.lambdify(t, case2.rhs, 'numpy')(ts)), label='sympy')
plt.legend(loc='upper right')
<matplotlib.legend.Legend at 0xe8ad1d0>


In this article, we've solved an ordinary differential equation in three different ways. First, we solved it exactly with an analytical approach (for which sympy did all the heavy lifting). Then, we implemented our own finite difference scheme. Finally, we used one of the built-in solvers of scipy to solve the equation. What we can say is:

  • analytical solutions, when available, are the most useful
  • implementing our own scheme is doable, but it is a lot of work if we're not specialists in numerical computation, and it gets harder for problems more complex than this one-dimensional oscillator
  • using scipy is definitely recommended since all the complicated things from the finite differences are done for us (this strategy could be described as "standing on the shoulders of giants")
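As a closing aside, more recent versions of scipy also provide solve_ivp, a more flexible interface than odeint; note that it expects the right hand side with the arguments swapped, fun(t, u). A minimal sketch for case 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

def deriv(t, u, omega_squared=1.0):
    # Same right hand side as before, but with (t, u) argument order
    xdot, x = u
    return [-omega_squared * x, xdot]

ts = np.arange(0, 12, 0.3)
# Case 1: x(0) = 1, xdot(0) = 0, i.e. u0 = [0, 1]
result = solve_ivp(deriv, (0, 12), [0, 1], t_eval=ts, rtol=1e-8, atol=1e-8)
x = result.y[1]  # second component of u is the position
```

The returned positions should match $\cos(t)$, the analytical solution of case 1.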