The final exam will be comprehensive (except for calculus of variations)! See your e-mail or Canvas inbox for details.
A functional is a "function of functions": it takes a function as input, and returns a number as output. Functionals are usually written as integrals:
\[ \begin{aligned} J[y] = \int_{x_1}^{x_2} dx\ f[x, y(x), y'(x)]. \end{aligned} \]
Extreme values (minimum, maximum, or saddle point) are given by functions that satisfy the Euler-Lagrange equation,
\[ \begin{aligned} \frac{\partial f}{\partial y} = \frac{d}{dx} \left(\frac{\partial f}{\partial y'} \right). \end{aligned} \]
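As a quick worked example (a standard one, not tied to any particular mechanical system): the shortest path between two points in a plane minimizes \( J[y] = \int dx\, \sqrt{1 + y'^2} \). Here \( \partial f/\partial y = 0 \), so the Euler-Lagrange equation says
\[ \begin{aligned} \frac{d}{dx} \left( \frac{y'}{\sqrt{1 + y'^2}} \right) = 0 \quad \Rightarrow \quad y' = \textrm{const}, \end{aligned} \]
i.e. a straight line, as expected.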
For mechanical systems, the evolution of the system always extremizes the action, which is a functional of the Lagrangian: with
\[ \begin{aligned} \mathcal{L} = T - U, \end{aligned} \]
the time evolution of all coordinates \( q_i \) obeys the Euler-Lagrange equations,
\[ \begin{aligned} \frac{\partial \mathcal{L}}{\partial q_i} = \frac{d}{dt} \left( \frac{\partial \mathcal{L}}{\partial \dot{q}_i} \right) \end{aligned} \]
The \( q_i \) are generalized coordinates: functions of the Cartesian coordinates and of time. We can only apply this approach when certain assumptions hold: any constraints must be holonomic (so the constraint forces do no work), and all of the other forces must be derivable from a potential energy \( U \).
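As a standard example of the machinery: for a plane pendulum of mass \( m \) and length \( l \), with generalized coordinate \( \phi \) measured from the vertical,
\[ \begin{aligned} \mathcal{L} = \frac{1}{2} m l^2 \dot{\phi}^2 + mgl \cos \phi \quad \Rightarrow \quad m l^2 \ddot{\phi} = -mgl \sin \phi, \end{aligned} \]
which is the familiar pendulum equation \( \ddot{\phi} = -(g/l) \sin \phi \).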
Central forces arise from any potential that only depends on distance, i.e. \( U = U(r) \) in spherical coordinates. The two-body problem can be reduced to the effective one-dimensional system for a particle with reduced mass
\[ \begin{aligned} \mu = \frac{m_1 m_2}{m_1 + m_2} \end{aligned} \]
moving subject to the effective potential
\[ \begin{aligned} U_{\rm eff} = U(r) + \frac{L_z^2}{2\mu r^2}. \end{aligned} \]
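As a quick check of how to use \( U_{\rm eff} \) (using the attractive potential \( U(r) = -\gamma/r \) treated below): setting \( dU_{\rm eff}/dr = 0 \) gives the radius of the circular orbit,
\[ \begin{aligned} \frac{dU_{\rm eff}}{dr} = \frac{\gamma}{r^2} - \frac{L_z^2}{\mu r^3} = 0 \quad \Rightarrow \quad r = \frac{L_z^2}{\gamma \mu}, \end{aligned} \]
which (using \( L_z^2 = \gamma \mu c \) from below) is exactly the scale factor \( c \).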
For gravity (or any other potential of the form \( U(r) = -\gamma / r \)), we can solve for the motion and find the equation of orbit,
\[ \begin{aligned} r(\phi) = \frac{c}{1+\epsilon \cos \phi}, \end{aligned} \]
where the distance of closest approach (periapsis) occurs at \( \phi = 0 \). \( c \) is the scale factor of the orbit, and \( \epsilon \) is the eccentricity. The orbits are shaped like conic sections, as follows: \( \epsilon = 0 \) gives a circle, \( 0 < \epsilon < 1 \) an ellipse, \( \epsilon = 1 \) a parabola, and \( \epsilon > 1 \) a hyperbola.
If the orbit is elliptical, then the semimajor (longer) axis is given by
\[ \begin{aligned} a = \frac{c}{1-\epsilon^2} \end{aligned} \]
and the semiminor (shorter) axis is
\[ \begin{aligned} b = \frac{c}{\sqrt{1-\epsilon^2}} \end{aligned} \]
For any closed orbit, the orbital period is proportional to \( a^{3/2} \) (Kepler's third law.)
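Explicitly (this is the standard two-body form, written with the \( \gamma \) and \( \mu \) defined here):
\[ \begin{aligned} \tau^2 = \frac{4\pi^2 \mu}{\gamma} a^3, \end{aligned} \]
so the proportionality constant depends only on \( \gamma \) and \( \mu \), not on the shape of the orbit.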
Conserved quantities: for a given orbit, the total energy is
\[ \begin{aligned} E = \frac{\gamma}{2c}(\epsilon^2 - 1) \end{aligned} \]
(remember: \( \gamma = Gm_1 m_2 \).) Orbits with \( \epsilon \geq 1 \) are unbound, and have non-negative total energy \( E \geq 0 \). The angular momentum \( L_z \) of an orbit is determined by the scale factor,
\[ \begin{aligned} L_z = \sqrt{\gamma \mu c}. \end{aligned} \]
Two orbits of the same objects with the same \( c \) therefore have equal \( L_z \). Closed orbits have the same energy if they have the same \( a \) (from above, \( E \sim 1/a \) for closed orbits.)
From the equation of orbit we can derive the orbital speed vs. \( \phi \),
\[ \begin{aligned} v(\phi) = \sqrt{\frac{\gamma}{\mu c}}\sqrt{\frac{c^2}{r^2} + \epsilon^2 \sin^2 \phi}. \end{aligned} \]
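A useful consistency check: for a circular orbit (\( \epsilon = 0 \)), \( r = c \) everywhere, so the speed is constant, \( v = \sqrt{\gamma/\mu c} \), and
\[ \begin{aligned} E = \frac{1}{2}\mu v^2 - \frac{\gamma}{c} = \frac{\gamma}{2c} - \frac{\gamma}{c} = -\frac{\gamma}{2c}, \end{aligned} \]
in agreement with the energy formula above at \( \epsilon = 0 \).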
Linear acceleration \( \vec{a} \) of a reference frame gives a single fictitious force, the inertial force:
\[ \begin{aligned} \vec{F}_{\textrm{inertial}} = -m \vec{a} \end{aligned} \]
One way to remember this is through the fact that standing in an elevator accelerating up at \( g \) in empty space is indistinguishable from being on the Earth's surface (ignoring smaller effects.)
Time derivatives of vectors in a rotating frame depend on the rotation \( \vec{\Omega} \):
\[ \begin{aligned} \left( \frac{d \vec{Q}}{dt} \right)_{\mathcal{S}_0} = \left( \frac{d \vec{Q}}{dt} \right)_{\mathcal{S}} + \vec{\Omega} \times \vec{Q} \end{aligned} \]
In particular, the velocity (as seen in the fixed frame) of a point at rest in the rotating frame is
\[ \begin{aligned} \vec{v} = \vec{\Omega} \times \vec{r}. \end{aligned} \]
In a frame rotating with angular velocity \( \vec{\Omega} \), we find three fictitious forces:
\[ \begin{aligned} \vec{F}_{\textrm{cf}} = m (\vec{\Omega} \times \vec{r}) \times \vec{\Omega} \\ \vec{F}_{\textrm{cor}} = 2m \dot{\vec{r}} \times \vec{\Omega} \\ \vec{F}_{\textrm{Euler}} = m \vec{r} \times \dot{\vec{\Omega}} \end{aligned} \]
If \( \vec{\Omega} \) is perpendicular to \( \vec{r} \), then \( \vec{F}_{\rm cf} \) always points radially outwards.
The Earth's rotation vector points out of the North pole. Usual spherical coordinates \( \theta, \phi \) with respect to the North pole correspond to co-latitude and longitude; note that latitude is equal to the angle from the equator, so \( 90^\circ - \theta \) in the Northern hemisphere and \( \theta - 90^\circ \) in the Southern.
The resulting centrifugal force is directed outwards from the rotation axis, and changes the apparent magnitude and direction of the gravitational force; what we call "\( g \)" includes the centrifugal term unless otherwise noted.
The Coriolis force \( (2m \vec{v} \times \vec{\Omega}) \) on Earth tends to deflect motion to the right in the Northern hemisphere, and to the left in the Southern. Near the equator, or for vertical motion, some thought is necessary. When in doubt, sketch the rotating Earth and your velocity vector!
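Here is a minimal numerical sketch of that kind of reasoning (my own example, not part of the notes): an object dropped from rest at the equator, integrated in the rotating frame keeping only gravity and the Coriolis term. It lands a couple of centimeters to the east.

```python
import numpy as np

# Local axes: x = east, y = north, z = up, at the equator.
# Assumption: the centrifugal piece is absorbed into g, as discussed above.
Omega = 7.29e-5 * np.array([0.0, 1.0, 0.0])   # Earth's rotation vector (rad/s)
g = np.array([0.0, 0.0, -9.8])                # effective gravity (m/s^2)

r = np.array([0.0, 0.0, 100.0])               # dropped from rest at 100 m
v = np.zeros(3)
dt = 1e-4
while r[2] > 0.0:
    a = g + 2.0 * np.cross(v, Omega)          # F/m = g + 2 (v x Omega)
    v += a * dt
    r += v * dt

print("eastward deflection: %.3f m" % r[0])   # roughly +0.02 m (deflected east)
```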
Angular momentum formulas:
\[ \begin{aligned} \vec{L} = \vec{r} \times \vec{p} = \mathbf{I} \vec{\omega} \end{aligned} \]
Motion of any object, rotational or translational, can be split into motion of the CM, plus motion relative to it. In particular,
\[ \begin{aligned} \vec{L} = \vec{L}_{\rm rel} + \vec{L}_{\rm CM}. \end{aligned} \]
Formula for the inertia tensor:
\[ \begin{aligned} \mathbf{I} = \int\ dV\ \rho(x,y,z) \left( \begin{array}{ccc} y^2 + z^2 & -xy & -xz \\ -xy & x^2 + z^2 & -yz \\ -xz & -yz & x^2 + y^2 \end{array} \right) \end{aligned} \]
The integral is replaced with a sum for a discrete set of point masses. This equation is valid for rotation about the origin of the coordinates only! More compactly, we can write
\[ \begin{aligned} I_{ij} = \int dV\ \rho (r^2 \delta_{ij} - x_i x_j) \end{aligned} \]
where \( \delta_{ij} \) is the Kronecker delta symbol, equal to \( 1 \) if the labels \( i \) and \( j \) are the same, and \( 0 \) otherwise.
Parallel axis theorem: if \( \mathbf{I} \) is the inertia tensor about an object's CM, then the inertia tensor \( \mathbf{J} \) through parallel coordinates about a new pivot point is
\[ \begin{aligned} J_{ij} = I_{ij} + M(a^2 \delta_{ij} - a_i a_j), \end{aligned} \]
where \( \vec{a} \) is the displacement vector from the new pivot point to the body's CM.
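A familiar special case: a uniform rod of mass \( M \) and length \( L \) along the \( x \)-axis has \( I_{zz} = ML^2/12 \) about its CM. Shifting the pivot to one end, \( \vec{a} = (L/2, 0, 0) \), so
\[ \begin{aligned} J_{zz} = \frac{1}{12}ML^2 + M\left(\frac{L^2}{4} - 0\right) = \frac{1}{3}ML^2. \end{aligned} \]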
This gives us the space-frame inertia tensor, computed with respect to some stationary axes. To switch to the body frame, we calculate the eigenvalues and eigenvectors of \( \mathbf{I} \); the normalized eigenvectors \( \hat{e}_i \) are the basis vectors of the body-frame coordinates (principal axes.) In these coordinates, the inertia tensor is diagonal, and the diagonal entries are the principal moments.
\[ \begin{aligned} \mathbf{I}_{\textrm{body}} = \left( \begin{array}{ccc} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{array} \right) \end{aligned} \]
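A minimal numerical sketch of this diagonalization (the matrix below is an invented example, not from the notes):

```python
import numpy as np

# Space-frame inertia tensor of some rigid body about its CM (made-up numbers);
# any inertia tensor is symmetric, so eigh is the right tool.
I_space = np.array([[ 3.0, -1.0, 0.0],
                    [-1.0,  3.0, 0.0],
                    [ 0.0,  0.0, 5.0]])

moments, axes = np.linalg.eigh(I_space)

print("principal moments:", moments)         # [2. 4. 5.]
print("principal axes (columns):\n", axes)   # body-frame basis vectors e_i
```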
The equivalent of Newton's second law for rotational motion (in an inertial frame!) is
\[ \begin{aligned} \vec{\Gamma} = \dot{\vec{L}} \end{aligned} \]
When we expand this out in the body frame (which is a rotating frame!), we find the Euler equations
\[ \begin{aligned} \lambda_1 \dot{\omega}_1 - (\lambda_2 - \lambda_3) \omega_2 \omega_3 = \Gamma_1 \\ \lambda_2 \dot{\omega}_2 - (\lambda_3 - \lambda_1) \omega_3 \omega_1 = \Gamma_2 \\ \lambda_3 \dot{\omega}_3 - (\lambda_1 - \lambda_2) \omega_1 \omega_2 = \Gamma_3. \end{aligned} \]
All rigid bodies are divided into categories, based on the symmetry of their principal moments: spherical tops (all three moments equal), symmetric tops (exactly two moments equal), and asymmetric tops (all three moments different).
The motion of these categories is determined by the Euler equations, which simplify in the two symmetric cases. In the asymmetric case, we found stable rotation about the axes with largest and smallest moments, and instability about the middle axis (the deck of cards.)
Coupled differential equations can arise from physical systems. If we can write them in the form
\[ \begin{aligned} \mathbf{M} \ddot{\vec{x}} = -\mathbf{K} \vec{x}, \end{aligned} \]
then we know how to solve! For \( N \) masses, these matrices are \( N \times N \). This form will arise directly from springs, but can also appear when we approximate almost any motion about equilibrium points.
To solve, we use the generalized eigenvalue equation
\[ \begin{aligned} \det (\mathbf{K} - \omega^2 \mathbf{M}) = 0. \end{aligned} \]
You can think of the solutions \( \omega^2 \) as eigenvalues of the matrix \( M^{-1} K \); they are called the (squared) normal frequencies. We also need the normal modes, which are the corresponding eigenvectors,
\[ \begin{aligned} (\mathbf{K} - \omega_i^2 \mathbf{M}) \vec{\xi}_i = 0. \end{aligned} \]
The general solution is then
\[ \begin{aligned} \vec{x}(t) = \sum_{i=1}^N A_i \vec{\xi}_i \cos (\omega_i t - \delta_i), \end{aligned} \]
where as usual the unknown constants \( A_i \) and \( \delta_i \) are determined by initial conditions \( \vec{x}(0), \dot{\vec{x}}(0) \).
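As a concrete sketch (my own example, not from the notes): two equal masses \( m \) joined to two walls and to each other by three identical springs \( k \). Solving the generalized eigenvalue problem numerically reproduces the familiar answer \( \omega^2 = k/m,\ 3k/m \), with modes \( (1,1) \) and \( (1,-1) \).

```python
import numpy as np
from scipy.linalg import eigh

m, k = 1.0, 4.0
M = m * np.eye(2)                         # mass matrix
K = k * np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])          # spring (stiffness) matrix

# Generalized eigenvalue problem  K xi = omega^2 M xi,
# equivalent to det(K - omega^2 M) = 0.
omega_sq, modes = eigh(K, M)

print("normal frequencies:", np.sqrt(omega_sq))   # sqrt(k/m), sqrt(3k/m) -> 2.0, 3.46...
print("normal modes (columns):\n", modes)         # ~ (1,1) and (1,-1), up to normalization
```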
From the Lagrangian, we define the conjugate momenta \( p_i \), one for each generalized coordinate \( q_i \):
\[ \begin{aligned} p_i = \frac{\partial \mathcal{L}}{\partial \dot{q_i}}. \end{aligned} \]
The Hamiltonian \( \mathcal{H}(q_i, p_i) \) is a function of only the coordinates and momenta, and not the velocities \( \dot{q_i} \). It is related to the Lagrangian as
\[ \begin{aligned} \mathcal{H} = \sum_i (p_i \dot{q}_i) - \mathcal{L} \end{aligned} \]
If the relation between the generalized coordinates and the underlying Cartesian coordinates does not explicitly depend on time, so (schematically) \( \partial q_i / \partial t = 0 \), then the Hamiltonian is equal to the total energy of the system,
\[ \begin{aligned} \frac{\partial q_i}{\partial t} = 0 \Rightarrow \mathcal{H} = T + U. \end{aligned} \]
We can solve for the motion by way of the Hamiltonian using Hamilton's equations:
\[ \begin{aligned} \dot{p_i} = - \frac{\partial \mathcal{H}}{\partial q_i} \\ \dot{q_i} = \frac{\partial \mathcal{H}}{\partial p_i}. \end{aligned} \]
This is a system of coupled first-order differential equations. As a result, we can plot the time derivative as a vector \( (\dot{q_i}, \dot{p_i}) \) in the phase space spanned by \( q_i \) and \( p_i \). When we can narrow the motion down to one coordinate, we can plot the trajectories of the system in phase space.
Phase space paths can never cross, as long as \( \partial \mathcal{H} / \partial t = 0 \); the evolution of the system is uniquely determined by knowing all the \( q_i \) and \( p_i \). Liouville's theorem tells us that the volume of phase space occupied by a system (e.g. a gas with some distribution of particles) doesn't change as it evolves in time.
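The standard example to keep in mind is the one-dimensional harmonic oscillator, \( \mathcal{H} = p^2/2m + \frac{1}{2} m \omega^2 q^2 \). Hamilton's equations give
\[ \begin{aligned} \dot{q} = \frac{p}{m}, \qquad \dot{p} = -m\omega^2 q, \end{aligned} \]
and the phase-space trajectories are ellipses of constant \( \mathcal{H} \) (constant energy), which never cross.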
There is a classic set of examples that are cited as the first hints of quantum mechanics: the "ultraviolet catastrophe", in which classical physics predicts that hot objects should radiate all of their energy away essentially instantly; experiments such as the double slit, which showed electrons interfering with each other, behaving like waves instead of particles; and Rutherford's experiment showing that the positive charge of an atom is concentrated in a nucleus, which caused problems, since classical physics predicts that the orbiting electrons should radiate away their orbital energy and spiral into the nucleus.
But you've learned about all of these things already. What these signals have in common is that they are all experimental signatures of a quantum world. What we're going to see today is that there are theoretical hints as well, buried deep in the mathematical structure of classical mechanics; and we now know enough of that mathematics to see them.
As we have observed before, one of the advantages of the Hamiltonian formulation of classical mechanics is that it puts the positions \( q_i \) and momenta \( p_i \) on more or less equal footing. In particular, Hamilton's equations which govern the evolution of the \( p_i \) and \( q_i \) are almost identical, up to a minus sign:
\[ \begin{aligned} \dot{p_i} = -\frac{\partial \mathcal{H}}{\partial q_i} \\ \dot{q_i} = \frac{\partial \mathcal{H}}{\partial p_i}. \end{aligned} \]
If we're interested in some more complicated dynamical quantity, like the potential energy or the angular momentum, we can just write it as a function \( f(q_i, p_i) \) on phase space, and then once we solve for \( q_i(t) \) and \( p_i(t) \) we know \( f(t) \) as well. But is there anything more general that we can do, before finding the full solution for the motion? In particular, are there equations we can write down which tell us the evolution of an arbitrary function on phase space? Such an equation would be very useful, for example letting us identify conserved quantities which don't depend on time.
The answer is yes, but to write the equation down, we have to start by defining a strange-looking object called the Poisson bracket. If \( f(q_i,p_i) \) and \( g(q_i,p_i) \) are two functions on phase space, then their Poisson bracket is defined to be
\[ \begin{aligned} \{f, g\} \equiv \sum_i \left(\frac{\partial f}{\partial q_i} \frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial q_i} \right). \end{aligned} \]
As you can probably guess, the Poisson bracket is particularly interesting if we pick \( \mathcal{H} \) itself as one of the functions. Then
\[ \begin{aligned} \{f, \mathcal{H}\} = \sum_i \left( \frac{\partial f}{\partial q_i} \frac{\partial \mathcal{H}}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial \mathcal{H}}{\partial q_i}\right) \\ = \sum_i \left( \frac{\partial f}{\partial q_i} \dot{q_i} + \frac{\partial f}{\partial p_i} \dot{p_i} \right). \end{aligned} \]
But these terms look exactly like what we would get from taking the time derivative of \( f \) and expanding with the chain rule! If we allow \( f \) to depend explicitly on time as well, then the general formula is
\[ \begin{aligned} \frac{df}{dt} = \{f, \mathcal{H}\} + \frac{\partial f}{\partial t}. \end{aligned} \]
So the time evolution of any function on phase space is given by its Poisson bracket with the Hamiltonian! Any time-independent function \( f(q_i,p_i) \) which "Poisson commutes" with the Hamiltonian is a conserved quantity,
\[ \begin{aligned} \{ f, \mathcal{H}\} = 0 \Rightarrow \frac{df}{dt} = 0. \end{aligned} \]
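The simplest example is \( f = \mathcal{H} \) itself: the bracket of anything with itself vanishes (the two terms in the definition cancel), so
\[ \begin{aligned} \frac{d\mathcal{H}}{dt} = \{\mathcal{H}, \mathcal{H}\} + \frac{\partial \mathcal{H}}{\partial t} = \frac{\partial \mathcal{H}}{\partial t}, \end{aligned} \]
i.e. energy is conserved whenever the Hamiltonian has no explicit time dependence.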
Our most familiar example of this is one of the generalized momenta being conserved; we can see right away that
\[ \begin{aligned} \{ p_i, \mathcal{H}\} = \sum_j \left((0) \frac{\partial \mathcal{H}}{\partial p_j} - (\delta_{ij}) \frac{\partial \mathcal{H}}{\partial q_j} \right) = -\frac{\partial \mathcal{H}}{\partial q_i}, \end{aligned} \]
so that \( p_i \) is conserved as long as \( \partial \mathcal{H} / \partial q_i = 0 \).
The definition in terms of partial derivatives always works, but the Poisson brackets themselves satisfy some familiar mathematical properties:
\[ \begin{aligned} \textrm{Antisymmetry:}\ \{f, g\} = -\{g, f\} \\ \textrm{Linearity:}\ \{\alpha f + \beta g, h\} = \alpha \{f, h\} + \beta \{g, h\} \\ \textrm{Leibniz:}\ \{fg, h\} = f\{g, h\} + \{f, h\} g \\ \textrm{Jacobi:}\ \{f, \{g,h\}\} + \{g,\{h,f\}\} + \{h,\{f,g\}\} = 0. \end{aligned} \]
In addition, the Poisson brackets between the \( q_i \) and \( p_i \) are very simple:
\[ \begin{aligned} \{q_i, q_j\} = 0 \\ \{p_i, p_j\} = 0 \\ \{q_i, p_j\} = \delta_{ij}. \end{aligned} \]
Often it's possible to work just with these basic properties of the brackets, instead of having to expand the derivative out. In fact, the Jacobi identity lets us make a very general observation:
\[ \begin{aligned} \{\{f,g\}, \mathcal{H}\} = \{f, \{g,\mathcal{H}\}\} + \{\{f,\mathcal{H}\},g\}. \end{aligned} \]
This equation tells us that if both \( f \) and \( g \) are conserved quantities, then so is their Poisson bracket \( \{f, g\} \). This sometimes lets us find additional conserved quantities by construction! Let's see an example: consider a single particle moving with some angular momentum about the origin. Its angular momentum vector is \( \vec{L} = \vec{r} \times \vec{p} \), or in components,
\[ \begin{aligned} L_1 = r_2 p_3 - r_3 p_2 \\ L_2 = r_3 p_1 - r_1 p_3 \\ L_3 = r_1 p_2 - r_2 p_1. \end{aligned} \]
The Poisson bracket of the first two components is
\[ \begin{aligned} \{L_1, L_2\} = \{r_2 p_3 - r_3 p_2, r_3 p_1 - r_1 p_3\} \\ = \{r_2 p_3, r_3 p_1\} + \{r_3 p_2, r_1 p_3\} - \{r_2 p_3, r_1 p_3\} - \{r_3 p_2, r_3 p_1\}. \end{aligned} \]
We can expand out any bracket of this form with the Leibniz rule:
\[ \begin{aligned} \{r_i p_j, r_k p_l\} = r_i \{p_j, r_k p_l\} + \{r_i, r_k p_l\} p_j \\ = r_i (r_k \{p_j, p_l\} + \{p_j, r_k\} p_l) + (r_k \{r_i, p_l\} + \{r_i, r_k\} p_l) p_j \\ = -r_i p_l \delta_{jk} + r_k p_j \delta_{il}. \end{aligned} \]
The two terms with minus signs above vanish since they have \( i \neq l \) and \( j \neq k \), while from the other two terms, we get
\[ \begin{aligned} \{L_1, L_2\} = -r_2 p_1 + p_2 r_1 = L_3. \end{aligned} \]
This tells us that if any two components of \( \vec{L} \) are conserved, so is the third! This makes intuitive sense for a particle; there's no way for it to move to increase only one component of the angular momentum without affecting the others. (A rigid object could just spin faster about a symmetry axis, for example.)
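If you'd rather not push the index gymnastics around by hand, here is a minimal sketch (my own check, not part of the notes) that verifies the same bracket directly from the partial-derivative definition:

```python
import sympy as sp

r1, r2, r3, p1, p2, p3 = sp.symbols('r1 r2 r3 p1 p2 p3')
q, p = [r1, r2, r3], [p1, p2, p3]

def poisson(f, g):
    # {f, g} = sum_i (df/dq_i dg/dp_i - df/dp_i dg/dq_i)
    return sum(sp.diff(f, q[i]) * sp.diff(g, p[i])
               - sp.diff(f, p[i]) * sp.diff(g, q[i]) for i in range(3))

L1 = r2*p3 - r3*p2
L2 = r3*p1 - r1*p3
L3 = r1*p2 - r2*p1

print(sp.simplify(poisson(L1, L2) - L3))   # prints 0, confirming {L1, L2} = L3
```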
Since we're speaking about angular momentum, let's change gears a little bit and talk about order of operations. Recall that from our discussion of rotations, sometimes it matters which order we do things in; a \( 90^\circ \) rotation about \( x \) and then \( y \) produces a different final orientation than a rotation about \( y \) and then \( x \).
Of course, rotation is an operation; it's something active that we do to our system. In a classical world, we can always just passively keep track of the position, momentum, angular momentum, etc. and watch how a system evolves without disturbing it. But here's one of the fundamental assumptions underlying quantum mechanics: measurement itself is an operation! Even the act of just observing a system will change it. This is, in principle, true in classical mechanics too, if our system is small enough to be sensitive to our measurement apparatus.
The typical, somewhat intuitive example that people like to use involves trying to measure the position of a microscopic billiard ball by bouncing pulses of light off of it. We can determine the position of the billiard ball by measuring the reflected light; if we measure repeatedly over time, we can find the speed of the ball too. The shorter the wavelength of light that we use, the more precisely we know its position.
(Example within an example: imagine standing in a completely dark room in a museum. You know there is an ancient Roman statue across the room from you, but you can't see it. But you have a bucket of glow-in-the-dark tennis balls next to you. If you patiently throw the tennis balls at the statue, and record the position of the ones that bounce off, over time you can build up a picture of the statue, but it will be smeared out at the edges because the tennis balls have a finite radius. If you use basketballs instead of tennis balls, you will get a much more smeared-out picture, because they are much larger. The resolution of your picture is limited by the "wavelength" (radius) of your probes.)
But the problem is that light carries momentum; it acts as a particle as well. And the shorter the wavelength, the higher the frequency and thus the higher the momentum carried! So the more precisely we try to measure the position of the billiard ball, the harder we are pushing it, and the less accurately we know its speed! Moreover, the order of operations matters; if we measure the position of our ball we change its momentum, and vice-versa.
To account for the fact that the order of operations matters, we should impose a non-commutative structure on our phase space; that is to say, we should account for the fact that \( qp \) and \( pq \) can give different results. In general, this means that functions \( f \) and \( g \) on phase space become non-commuting operators. (You can imagine matrices instead of single functions, for example; as you know, in general matrices don't commute.) It turns out that this leads to a subtle inconsistency in the mathematics of Poisson brackets, unless we're careful.
Let's see how it works. Suppose we have a Poisson bracket between four quantities \( f_1, f_2, g_1, g_2 \), like so:
\[ \begin{aligned} \{f_1 f_2, g_1 g_2\} = \{f_1, g_1 g_2\} f_2 + f_1 \{f_2, g_1 g_2\} \\ = \left[ \{f_1, g_1\} g_2 + g_1 \{f_1, g_2\} \right] f_2 + f_1 \left[ \{f_2, g_1\} g_2 + g_1 \{f_2, g_2\} \right] \end{aligned} \]
Here I've just applied the Leibniz rule twice, and I'm being careful to preserve the ordering of everything. But now notice that I could have applied the Leibniz rule to the second part first:
\[ \begin{aligned} \{f_1 f_2, g_1 g_2\} = \{f_1 f_2, g_1\} g_2 + g_1 \{f_1 f_2, g_2\} \\ = \left[ \{f_1, g_1\} f_2 + f_1 \{f_2, g_1\} \right] g_2 + g_1 \left[ f_1 \{f_2, g_2\} + \{f_1, g_2\} f_2 \right]. \end{aligned} \]
These aren't quite the same! If we set them equal and gather the Poisson brackets together, some of the terms cancel, but we're left with:
\[ \begin{aligned} \{f_1, g_1\} (f_2 g_2 - g_2 f_2) = (f_1 g_1 - g_1 f_1) \{f_2, g_2\}. \end{aligned} \]
First of all, you can see that if we go back to the classical case where the \( f \)'s and \( g \)'s are just numbers that commute with each other, then we have \( 0=0 \), and there is no inconsistency. But for non-commuting operators, we have a new constraint. Note that we want this to hold for any, completely arbitrary choice of all of the \( f \)'s and \( g \)'s; so sort of like the argument for separation of variables, we can treat the \( f_2 \) and \( g_2 \) terms as constant with respect to the \( f_1 \) and \( g_1 \) terms. In particular, we find that
\[ \begin{aligned} i \hbar \{f_1, g_1\} = f_1 g_1 - g_1 f_1, \end{aligned} \]
and the same for \( f_2 \) and \( g_2 \). The quantity \( \hbar \) is a constant, a pure number; as my notation suggests, this constant is in fact Planck's constant (divided by \( 2\pi \).) The reason that \( i \) appears has to do with the symmetry properties of quantum mechanical operators; it's beyond the scope of what we're doing here to get into too much detail, but most important quantum operators are Hermitian, which means that as a matrix they are equal to their own transposed complex conjugate, \( O^\dagger = O \). This property turns out to be important for the conservation of probability (making sure nothing happens with probability greater than 1 or less than 0 in our theory.) The \( i \) then assures that when we take the bracket of two Hermitian operators, we get back another Hermitian operator.
Anyway, we have identified the quantum version of the Poisson bracket,
\[ \begin{aligned} \{f, g\}_Q = -\frac{i}{\hbar} [f,g], \end{aligned} \]
where \( [,] \) is the commutator, a familiar expression from matrix algebra. This isn't a rigorous derivation, of course; all we've proved is that if we want to consider operations on phase space where the ordering might matter, the only thing with the same mathematical structure as the Poisson bracket is the commutator. But we shouldn't expect to be able to derive quantum mechanics from classical mechanics, anyway. If we take this commutator structure as a hint, and hypothesize that the position and momentum operators obey the same Poisson bracket relations that the classical versions do, then we have
\[ \begin{aligned} [q_i, q_j] = 0 \\ [p_i, p_j] = 0 \\ [q_i, p_j] = i\hbar \delta_{ij}. \end{aligned} \]
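Although we won't develop wave mechanics here, one familiar realization of the last relation (stated as an aside, not derived in these notes) is the position-space momentum operator \( p = -i\hbar\, d/dq \); acting on an arbitrary test function \( \psi(q) \),
\[ \begin{aligned} [q, p]\, \psi = -i\hbar\, q \frac{d\psi}{dq} + i\hbar \frac{d}{dq}(q\psi) = i\hbar\, \psi. \end{aligned} \]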
These commutation relations are sometimes listed among the fundamental postulates of quantum mechanics, depending on where you look; we can build up the rest of our quantum theory starting with these relations. And indeed, the time evolution of any operator in a quantum system is determined by our familiar friend the Hamiltonian:
\[ \begin{aligned} \frac{dO}{dt} = \frac{-i}{\hbar} [O, \mathcal{H}] + \frac{\partial O}{\partial t}. \end{aligned} \]
This is the Heisenberg equation, and you can see it has exactly the same structure as its classical counterpart, just substituting in the quantum Poisson bracket.
Two more things about these commutation relations, before we move on. First, it's a straightforward (but a bit messy) exercise in linear algebra to show that these commutation relations, especially the last one, imply the Heisenberg uncertainty principle, namely that
\[ \begin{aligned} (\Delta q) (\Delta p) \geq \frac{\hbar}{2}. \end{aligned} \]
This is a fundamental limit of quantum mechanics, following directly from the postulates. No matter how clever an experiment we design, we can never simultaneously improve our uncertainty in momentum and position of an object to arbitrary precision!
We have, of course, seen a sort of classical version of the uncertainty principle; Liouville's theorem. If the volume of an area of phase space is constant, then that means that the uncertainty in \( q \) and in \( p \) must be related; if we squeeze a gas into a tiny volume, so we know the position of all of the molecules very well, then they will be bouncing off the walls more frequently and the momentum distribution will spread out. The difference is that there is no fundamental limit, classically, to the total uncertainty; if we're clever enough, we can refine the momentum and position resolution of a classical system arbitrarily.
Let's switch gears now and have a look at the other hint for quantum mechanics buried in the math of classical mechanics. This one you've seen before; it's hidden in the principle of least action.
Recall that the idea behind Lagrangian and Hamiltonian mechanics was the principle of least action. The action is a functional of the total evolution of a mechanical system from point A to point B, or from the starting time to the ending time,
\[ \begin{aligned} S = \int_{t_i}^{t_f} dt\ \mathcal{L}(t, q_i, \dot{q}_i). \end{aligned} \]
This was always a little weird, compared to Newton's laws. In the Newtonian perspective, at any instant a particle feels some collection of forces, which push it in one direction or another, and the total motion is built up over time. But from the Lagrangian point of view, the particle somehow "knows" which path will minimize the total action. Of course, classically we've proven that these two views are equivalent, so we can rest comfortably in our localized, Newtonian particle, which happens to minimize the action.
Or maybe not! Another one of the great insights of quantum mechanics, due to Richard Feynman, is known as the path integral formulation. Quantum mechanics is inherently probabilistic; if we start at \( (x_i, t_i) \), there is only a chance that we will find our particle when we look at \( (x_f, t_f) \). Feynman's great insight was to propose that the probability amplitude for a particle to travel a certain path is determined by the action over that path:
\[ \begin{aligned} \psi(q_i, q_f) = \mathcal{N} \int_{q_i}^{q_f} \mathcal{D}q(t) e^{iS[q(t)]/\hbar}. \end{aligned} \]
(Here \( \psi \) is the wavefunction; the probability of finding a particle at \( q_f \) is proportional to \( |\psi|^2 \).) This is a very formal quantity called a path integral; since we're letting both \( q \) and \( t \) vary over the path, there is in principle an infinite number of integrals to do! (Imagine splitting the path up into a large number of small segments of timestep \( \delta t \), and integrating over position at each timestep.)
But now, we can finally see how least action arises for a classical system! For an arbitrary path, the quantity \( e^{iS/\hbar} \) will oscillate wildly, and its net contribution to the integral will be zero. As a simple one-dimensional example to illustrate this sort of integral, here's a plot of the real part of \( e^{ix^2} \) vs. \( x \):
It's only when \( S \) approaches a minimum - when it starts to vary slowly with small changes to the path - that the phase of the integral doesn't oscillate so quickly, and the paths add constructively with each other. (I'm not arguing this rigorously, but you'll just have to accept that this is true, until you study quantum mechanics in more detail yourself.)
The scale of this oscillation is set by none other than \( \hbar \). So for classical systems, where the action is very, very large compared to \( \hbar \), we essentially only see the classical, action-minimizing path with 100% probability. But in the quantum world, we've answered our quandary: the particle finds the "right" path because it is also a wave, and it is sensitive to all of the nearby paths too!