The Calculus of Variations – Part 2 of 2: Give it a Wiggle
Benjamin Skuse
In part 1, we took a tour through time: from the birth of the calculus of variations over 300 years ago, through its elaboration by Leonhard Euler and Joseph-Louis Lagrange in the 18th Century and the establishment of its firm mathematical foundations by Karl Weierstrass in the late 19th Century, to its 20th Century extensions into advanced fields such as optimal control theory. This history lesson gave us a basis for truly understanding why the technique is so powerful. In part 2, we will explore how that power has been wielded over the years, showcasing the staggering usefulness of the calculus of variations.
A Bounty for Physicists
From day one, the calculus of variations was a deeply practical branch of mathematics. The Brachistochrone problem, where our story started in part 1, is an inherently useful problem to solve, delivering the curve along which an object sliding under gravity travels between two points at different heights in the least time. For example, rollercoaster designers today trace Brachistochrone curves so that riders reach full speed in the shortest possible time on a drop between two points. Brachistochrone curves are also incorporated in the design of various sports runs and spaces, including ski jumps, luge tracks and skate parks.
More important than any single variational problem was Euler and Lagrange’s work towards building a theory of the calculus of variations in the mid-18th Century, and the formulation in the 19th Century by Irish mathematician Sir William Rowan Hamilton of the principle of stationary (often called “least”) action, otherwise known as Hamilton’s principle, which roughly means that nature always takes the most efficient path.
Physicists greedily gobbled up Hamilton’s principle to help explain a huge raft of problems in optics and mechanics, combining it with Euler–Lagrange variational calculus (extended to many dimensions, higher derivatives and constraints) to derive equations of motion. It became so useful because the trajectories of dynamical systems can be regarded as solutions to stationary-action problems. So, instead of building models of physical phenomena from more unwieldy forces and dynamics, physicists could write down action functionals with the desired symmetries, from which the dynamical equations are simpler to derive.
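To see how this works in the simplest textbook case (an illustration of the general idea, rather than one of the historical problems above), consider a single particle of mass \(m\) moving in one dimension in a potential \(V(x)\). Its action is the functional
\[ S[x] = \int_{t_1}^{t_2} \left( \tfrac{1}{2} m \dot{x}^{2} - V(x) \right) dt, \]
and demanding that \(S\) be stationary, via the Euler–Lagrange equation, gives
\[ m\ddot{x} = -\frac{dV}{dx}, \]
which is simply Newton’s second law, recovered from a single scalar functional rather than from a force-by-force analysis.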
Many apparently diverse problems, not only in physics but also in other disciplines, prove identical when posed as variational calculus problems. For example, under the right circumstances and with some simplification, the total energy of a diving board and the total profit of a company can be described by similar functionals:
\[ S[y] = \int_{0}^{A} \left( \tfrac{1}{2}(y'')^{2} + by \right) dx, \]
with associated Euler–Lagrange equation:
\[ y'''' + b = 0. \]
For the diving board example, \(x\) refers to horizontal distance and \(y\) refers to the height of the board, with the board having length \(A\), and \(b\) representing some physical constant. For the company profit example, \(x\) is time and \(y\) measures the size of the workforce, with \(A\) perhaps representing a year after some draconian policy change, and \(b\) again some constant.
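For readers who want to see the diving board equation in action, the short numerical sketch below solves \(y'''' + b = 0\) as a boundary value problem. The boundary conditions (board clamped at \(x = 0\), free at \(x = A\)) and the values of \(A\) and \(b\) are illustrative assumptions, not taken from the examples above.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative values only: board length A and load constant b.
A, b = 2.0, 1.0

# Rewrite y'''' + b = 0 as a first-order system: y0 = y, y1 = y', y2 = y'', y3 = y'''.
def rhs(x, y):
    return np.vstack([y[1], y[2], y[3], -b * np.ones_like(x)])

# Assumed cantilever-style boundary conditions:
# clamped at x = 0 (y = 0, y' = 0) and free at x = A (y'' = 0, y''' = 0).
def bc(ya, yb):
    return np.array([ya[0], ya[1], yb[2], yb[3]])

x = np.linspace(0.0, A, 50)
sol = solve_bvp(rhs, bc, x, np.zeros((4, x.size)))

# The closed-form solution for these boundary conditions is
# y(x) = -(b/24) (x^4 - 4 A x^3 + 6 A^2 x^2); check the numerical answer against it.
exact = -(b / 24.0) * (x**4 - 4 * A * x**3 + 6 * A**2 * x**2)
print("max deviation from exact solution:", np.max(np.abs(sol.sol(x)[0] - exact)))
```

The negative deflection simply reflects the board sagging under the load represented by \(b\).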
Applications Everywhere
Due to this flexibility and broad applicability, the calculus of variations took on an important role in resolving a huge number of diverse and stubbornly difficult problems in the 19th Century, assisting in the development of new theories in fields such as physics, astronomy, engineering and technology.
An attempt was even made to apply these methods to moral philosophy. In 1881, British economist Francis Edgeworth tried to apply the calculus of variations to the problem of maximising happiness. As he states in Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences: “An analogy is suggested between the Principles of Greatest Happiness, Utilitarian or Egoistic, which constitute the first principles of Ethics and Economics, and those Principles of Maximum Energy which are among the highest generalisations of Physics, and in virtue of which mathematical reasoning is applicable to physical phenomena quite as complex as human life.”
Though thought-provoking, ultimately this attempt to bring rigour to moral philosophy was a bridge too far, as even Edgeworth himself conceded at points throughout the essay. For example: “Atoms of pleasure are not easy to distinguish and discern; more continuous than sand, more discrete than liquid.”
Undergirding the Pillars of Physics
Huge progress was made in the application of the calculus of variations and the principle of least action in the 20th Century. As a fundamental physical quantity with the dimensions of energy × time, action was central to some of the most celebrated discoveries in modern physics. For example, Planck’s constant – postulated by Max Planck in 1900 – defines the quantum nature of energy, and is a quantum of action. And Werner Heisenberg’s uncertainty principle, introduced in 1927, is based on conjugate pairs of quantities, such as position and momentum or energy and time, whose products have the dimensions of action.
A less lauded but no less important discovery was made by German mathematician Emmy Noether in 1918. Her eponymous theorem was formulated in terms of variational calculus. It states that for every continuous symmetry of a physical system, i.e. every continuous transformation that leaves its action (and hence its equations of motion) unchanged, there exists a corresponding conservation law. Noether’s theorem therefore reveals the fundamental links between time-translation invariance and conservation of energy, spatial translation invariance and conservation of linear momentum, and rotational invariance and conservation of angular momentum.
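The simplest special case gives a flavour of why this is true (a standard textbook illustration, not Noether’s full argument). If a Lagrangian \(L(q, \dot{q})\) happens not to depend on the coordinate \(q\) itself (a translational symmetry), then the Euler–Lagrange equation
\[ \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0 \]
immediately reduces to \( \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = 0 \): the momentum conjugate to \(q\) is conserved.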
Though this may not sound earth-shattering, Noether’s theorem was and is a seismic result. It describes the interplay between symmetries and conservation laws, provides a practical way to identify conserved quantities or symmetries in different physical systems, and offers a guiding light to building new physical theories.
It can even be extended to symmetries and conservation laws for fields in four-dimensional spacetime. This ability was crucial in the development of gauge theories, culminating in the most famous gauge theory of all: the Standard Model of particle physics. The Standard Model, whose form was cemented in the mid-1970s and is still used by physicists today, describes three of the four known fundamental forces – electromagnetic, weak and strong forces, but not gravity – and all 17 known elementary particles.
Another pivotal breakthrough in physics that relied on variational methods was Richard Feynman’s path integral approach to quantum mechanics. Dating back to 1948, the path integral formulation replaced the classical notion of a single, unique trajectory for a system with a sum over infinitely many quantum-mechanically possible trajectories, which together determine a quantum amplitude. It has since shaped the development of quantum field theory, string theory and a host of other fields, and was foundational to quantum chromodynamics, a critical part of the aforementioned and ubiquitous Standard Model.
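Schematically (conventions differ between textbooks), the amplitude for a particle to travel from position \(x_a\) at time \(t_a\) to \(x_b\) at time \(t_b\) is written as an integral over all paths connecting the two events,
\[ K(x_b, t_b; x_a, t_a) = \int \mathcal{D}[x(t)]\, e^{\,i S[x]/\hbar}, \]
where \(S[x]\) is the classical action of each path. In the limit \(\hbar \to 0\), the contributions of paths near the stationary-action trajectory dominate, which is how classical mechanics re-emerges from the quantum sum.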
Perhaps more obscure is the role variational calculus has played in the other pillar of modern physics: relativity. By extending variational techniques to functions that map subsets of one manifold to another, these tools can be wielded to calculate, for example, the trajectories of particles and light in the presence of strong gravitational fields, helping us understand black holes better; such paths are geodesics (the natural generalisation of straight lines) on curved 4-manifolds. Variational methods can also be used to derive and approximately solve Albert Einstein’s field equations, which describe the curvature of spacetime in the presence of matter and energy, allowing insight into the large-scale behaviour of the universe.
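In schematic form (signs and conventions vary), the world line of a free massive particle is the one that makes the action
\[ S = -mc \int ds, \qquad ds^{2} = g_{\mu\nu}\, dx^{\mu} dx^{\nu}, \]
stationary, and carrying out the variation yields the geodesic equation
\[ \frac{d^{2}x^{\mu}}{d\tau^{2}} + \Gamma^{\mu}_{\;\alpha\beta}\, \frac{dx^{\alpha}}{d\tau}\, \frac{dx^{\beta}}{d\tau} = 0, \]
where the Christoffel symbols \(\Gamma^{\mu}_{\;\alpha\beta}\) are built from derivatives of the metric \(g_{\mu\nu}\).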
Providing Deeper Understanding
Complicated relativistic and quantum wave-equation problems can be, and often are, reformulated in variational form to yield solutions. Yet perhaps the most prominent use of variational methods today is in approximation. Often, whether the problem lies in fluid dynamics, chemistry, electromagnetism or some other field, minimising a functional leads to a system of partial differential equations (PDEs). A host of methods has been developed to solve these PDEs, but if they are nonlinear, no general analytical methods exist, and researchers usually have to resort to numerical approximation on (super)computers.
Though these numerical approximations are accurate and show what the solution should look like, they do not show why or how the PDEs deliver a given solution. An alternative approach is to guesstimate a relatively simple trial function, known as an ansatz, containing a few variational parameters. Substituting this ansatz into the functional and deriving the corresponding Euler–Lagrange equations then yields much more manageable ordinary differential equations (ODEs) for the variational parameters, which can be studied analytically. If the ansatz is chosen well, variations of the parameters are equivalent to perturbations of the true solution, and the analytical solution of the ODEs allows valuable insights into the formation, propagation and dynamical behaviour of the object or system under study.
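A minimal sketch of the trial-function idea in its simplest, static setting is the classic Rayleigh–Ritz calculation below: a Gaussian ansatz \(e^{-a x^{2}}\) with a single variational parameter \(a\) is used to estimate the ground-state energy of a one-dimensional quantum harmonic oscillator (the choice of system and natural units is purely illustrative and not drawn from the problems discussed above). The exact answer is 0.5, and minimising over \(a\) recovers it.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustration only: Rayleigh-Ritz variational estimate of the ground-state energy
# of a 1D quantum harmonic oscillator, H = -1/2 d^2/dx^2 + 1/2 x^2, in natural units.
x = np.linspace(-10.0, 10.0, 4001)

def energy(a):
    """Energy expectation <psi|H|psi> / <psi|psi> for the Gaussian trial function exp(-a x^2)."""
    psi = np.exp(-a * x**2)
    dpsi = np.gradient(psi, x)                     # numerical derivative of the ansatz
    kinetic = 0.5 * np.trapz(dpsi**2, x)           # integral of (1/2) |psi'|^2
    potential = np.trapz(0.5 * x**2 * psi**2, x)   # integral of (1/2) x^2 |psi|^2
    norm = np.trapz(psi**2, x)
    return (kinetic + potential) / norm

# Minimise the energy over the single variational parameter a.
best = minimize_scalar(energy, bounds=(0.05, 5.0), method="bounded")
print("optimal width a =", best.x)     # close to the exact value 0.5
print("estimated energy =", best.fun)  # close to the exact ground-state energy 0.5
```

In the dynamical problems described above, the same logic applies, except that the parameters of the ansatz are allowed to vary (for instance, with time), and the resulting Euler–Lagrange equations become ODEs governing their evolution.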
With a history encompassing some of the greatest minds in mathematics and physics who built our current description of the world around us, the calculus of variations has been pivotal to scientific development for the past 300 years. Given that researchers today continue to find new applications for it in subjects as diverse as optics and photonics, aerodynamics, and even rollercoaster design, it will no doubt remain a hugely important topic for years to come.
The post The Calculus of Variations – Part 2 of 2: Give it a Wiggle originally appeared on the HLFF SciLogs blog.