## Minimizing Quantity Part 4: Euler-Lagrange Equation

In the last three posts, I talked about paths that minimize time. In each case, an object travels along a path, and some quantity accumulated along that path gets minimized. But is there a single equation that covers every physical situation where we look for the path that minimizes a quantity? The answer is yes, and it is called the Euler-Lagrange equation.

Here is how the setup works. Say you want to go from point a to point b. There are infinitely many paths you can take, and depending on the path you take, a quantity, let's call it S, changes with it. Let's draw a random path q, and let there be an alternate path that varies infinitesimally from the actual path; call that slight variation delta q (note that delta q is itself a function of time):

As the object travels along the path, the quantity accumulates into the overall total S until the object stops at point b. That total is:

1) $S=\int^b_a F(t, q(t), \dot{q}(t)) dt$

As you can see, the quantity depends on the path and on the variable that moves the object along that path (time here, but it could just as well be the position along the x axis). I should also mention that it is possible to include higher derivatives and more variables, but we restrict ourselves to time, position, and velocity because in most problems of classical physics, those are enough to determine an object's past, present, and future. Ideally, anyway; as you know, the real world is a lot more complicated than that. I also restricted things to one dimension, but that is only for convenience. The resulting equation works in any coordinate system, with one copy of the equation per coordinate, so 3D Cartesian coordinates would give three of these equations.
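To make equation 1 concrete, here is a small numerical sketch (my own illustration, with an assumed integrand $F = \dot{q}^2/2$): discretize a path, accumulate F along it with the trapezoid rule, and compare two paths between the same endpoints.

```python
import numpy as np

def action(F, q, t):
    """Accumulate S = integral of F(t, q, qdot) dt along a discretized path."""
    qdot = np.gradient(q, t)                 # numerical derivative of the path
    f = F(t, q, qdot)
    return float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(t)))  # trapezoid rule

# Illustrative integrand (an assumption for this demo, not from the post).
F = lambda t, q, qdot: 0.5 * qdot**2

t = np.linspace(0.0, 1.0, 2001)
s_straight = action(F, t, t)                          # straight path q(t) = t
s_wiggly = action(F, t + 0.1 * np.sin(np.pi * t), t)  # same endpoints, detour

print(s_straight, s_wiggly)   # the straight path accumulates the smaller S
```

For this particular F, the straight path gives S = 0.5 and any detour gives more, which is exactly the kind of statement the machinery below proves in general.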

Back to the issue: the problem is that there are infinitely many paths, so it is not like we can test all of them. Instead, let's vary the path infinitesimally and approximate it using a Taylor series:

2) $\int^b_a F(t, q+\delta q, \dot{q}+\delta \dot{q}) dt=$

$\int^b_a [F(t, q, \dot{q}) + \frac{\partial F(t, q, \dot{q})}{\partial q} \delta q + \frac{\partial F(t, q, \dot{q})}{\partial \dot{q}} \delta \dot{q} + \mathcal{O}(\delta q^2)] dt$

As for how that Taylor expansion was done, think of a regular Taylor series with the substitution $x - a = \delta q$. Now you might be wondering about that fancy big O on the squared term. That is big O notation, and here it means that as the alternate path gets closer and closer to the actual path, the absolute value of $F(q+\delta q) - F - F' \delta q$ is less than or equal to a constant times $\delta q^2$. Basically, the error from truncating the Taylor approximation is at most second order in the variation. It is a good way to summarize the rest of the terms without getting bogged down by the infinite tail of the series.
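You can watch that big O behaviour numerically. This sketch (my own, with an assumed quadratic integrand) subtracts the zeroth- and first-order terms and checks that what is left shrinks like the square of the variation's size $\varepsilon$:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 4001)

def integral(f):
    return float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(t)))  # trapezoid rule

q = np.sin(t)                  # some path
f = t * (1 - t)                # shape of the variation (zero at the endpoints)
qd, fd = np.gradient(q, t), np.gradient(f, t)

F = lambda q, qd: qd**2 / 2 + q**2      # assumed integrand for the demo
S0 = integral(F(q, qd))
# First-order term: integral of (dF/dq) dq + (dF/dqdot) dqdot, with dq = eps*f
first = integral(2 * q * f + qd * fd)

ratios = []
for eps in [0.1, 0.05, 0.025]:
    S = integral(F(q + eps * f, qd + eps * fd))
    ratios.append((S - S0 - eps * first) / eps**2)
print(ratios)   # roughly equal values: the leftover error shrinks like eps^2
```

Because this F is quadratic, the remainder divided by $\varepsilon^2$ comes out the same for every $\varepsilon$, which is the big O bound in its cleanest form.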

Proceeding with the problem, what we want is the path that minimizes the quantity. What we can do is keep only the terms up to first order in the variation, and then set the first-order part equal to zero:

3) $S(t, q+\delta q, \dot{q} + \delta \dot{q}) = \int^b_a [F(t, q, \dot{q}) + \frac{\partial F(t, q, \dot{q})}{\partial q} \delta q + \frac{\partial F(t, q, \dot{q})}{\partial \dot{q}} \delta \dot{q}] dt$

4) $S(t, q +\delta q, \dot{q} + \delta \dot{q})=S(t, q, \dot{q}) + \int^b_a [\frac{\partial F(t, q, \dot{q})}{\partial q} \delta q + \frac{\partial F(t, q, \dot{q})}{\partial \dot{q}} \delta \dot{q}] dt$

5) $S(t, q+\delta q, \dot{q} + \delta \dot{q}) - S(t, q, \dot{q}) = \delta S =$

$\int^b_a [\frac{\partial F(t, q, \dot{q})}{\partial q} \delta q + \frac{\partial F(t, q, \dot{q})}{\partial \dot{q}} \delta \dot{q}] dt = 0$

The reason is that the path is a minimizer when the first variation equals zero. It is much like regular calculus, where you find the minimum of a function by taking the first derivative and setting it equal to zero. The rest is ignored because it is irrelevant to our objective: the error it contributes becomes negligibly small.

Now, to go further ahead, we have to do the following:

6) $\delta \dot{q}=\frac{d}{dt} \delta q$

How do I know the above is true, you say? I will give you a hint: substitute delta q with $\epsilon f$, where $\epsilon$ represents an infinitesimal constant and $f$ represents a function which, added to q, would move it to an alternate path. $\epsilon$ is a constant, $f$ is a function of time. Now go!

Did you get it? No? Well, if you don’t, then keep thinking about it.
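If you want to check your answer against a computer algebra system, here is a one-line sympy verification of the hint's substitution:

```python
import sympy as sp

t, eps = sp.symbols('t epsilon')
q, f = sp.Function('q'), sp.Function('f')

delta_q = eps * f(t)   # the hint: delta q = eps*f(t), with eps a constant
# delta qdot is the derivative of the varied path minus that of the original
delta_qdot = sp.diff(q(t) + delta_q, t) - sp.diff(q(t), t)

residual = sp.simplify(delta_qdot - sp.diff(delta_q, t))
print(residual)   # 0, i.e. delta qdot = d/dt (delta q)
```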

Anyways, equation 6 with equation 5 gives us this:

7) $\int^b_a [\frac{\partial F(t, q, \dot{q})}{\partial q} \delta q + \frac{\partial F(t, q, \dot{q})}{\partial \dot{q}} \frac{d}{dt} \delta q] dt = 0$

This part is a neat trick. Apply the first-year calculus technique called integration by parts to the second term inside the brackets:

$u=\frac{\partial F}{\partial \dot{q}}$ and $dv = \frac{d}{dt} \delta q dt$

This gets us:

8) $\int^b_a [\frac{\partial F(t, q, \dot{q})}{\partial q} - \frac{d}{dt}(\frac{\partial F(t, q, \dot{q})}{\partial \dot{q}})] \delta q dt + \frac{\partial F(t, q, \dot{q})}{\partial \dot{q}} \delta q |^b_a = 0$

Behold! The rightmost term on the left hand side is zero, because we agreed that even if we vary every single aspect of the path, the endpoints stay at a and b. The endpoints never vary, so:

$\delta q(a)=0$ and $\delta q(b)=0$

This simplifies equation 8 into:

9) $\delta S= \int^b_a [\frac{\partial F(t, q, \dot{q})}{\partial q} - \frac{d}{dt}(\frac{\partial F(t, q, \dot{q})}{\partial \dot{q}})] \delta q dt = 0$

At this point, I invoke the fundamental lemma of the calculus of variations: because the integral equals zero for every allowed variation delta q, the expression inside the brackets must itself be zero. Writing the left hand side as the functional derivative of S with respect to q:

10) $\frac{\delta S}{\delta q} = \frac{\partial F}{\partial q} - \frac{d}{dt} (\frac{\partial F}{\partial \dot{q}}) = 0$

This is the Euler-Lagrange equation, and it will not only find the path of least value, it will find paths of maximum value too; strictly speaking, it finds paths of stationary value. With this, we can do alternate derivations of the path of least time for light and of the brachistochrone problem. Let me show you.
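As a sanity check, sympy can crank the Euler-Lagrange equation mechanically. Here is a sketch on a hypothetical integrand of my own choosing (a mass on a spring, $F = m\dot{q}^2/2 - kq^2/2$, not anything from this post); the E-L equation should spit out Newton's law $m\ddot{q} = -kq$:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

# Hypothetical integrand for the demo: kinetic minus potential energy.
F = m * sp.diff(q(t), t)**2 / 2 - k * q(t)**2 / 2

eq = euler_equations(F, q(t), t)[0]
print(eq)   # equivalent to m*q'' + k*q = 0, Newton's law for the spring
```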

For Snell’s law it is easy:

1) $T(x, y, \dot{y}) = \int^{x_1}_0 \frac{ds_1}{v_1} + \int^{x_2}_{x_1} \frac{ds_2}{v_2}$

2) $T(x, y, \dot{y}) =\int^{x_1}_0 \frac{\sqrt{1+\dot{y}^2_1}}{v_1} dx + \int^{x_2}_{x_1} \frac{\sqrt{1+\dot{y}^2_2}}{v_2} dx$

This setup represents two path segments: one through a medium where light travels at speed $v_1$, and one through a medium where it travels at speed $v_2$.

Now, I set the variation to zero:

3) $\frac{\delta T}{\delta y} = 0$

So the inside of both integrals is zero thanks to the fundamental lemma of the calculus of variations, and I apply the Euler-Lagrange equation. Since the integrand does not depend on y at all, the left part of E-L is 0:

4) $\frac{\partial \sqrt{1+\dot{y}^2}}{\partial y}=0$

On the other hand, the right part leaves us with:

5) $\frac{d}{dx} \frac{\partial \sqrt{1+\dot{y}^2}}{\partial \dot{y}} = \frac{d}{dx} (\frac{\dot{y}}{\sqrt{1+\dot{y}^2}}) = 0$

Putting equations 4 and 5 into the variation of equation 2 gives:

6) $0= \frac{1}{v_1} \int^{x_1}_0 \frac{d}{dx} \frac{\dot{y}_1}{\sqrt{1+\dot{y}^2_1}} dx + \frac{1}{v_2} \int^{x_2}_{x_1} \frac{d}{dx} \frac{\dot{y}_2}{\sqrt{1+\dot{y}^2_2}} dx$

Integrating, and putting the second integral on the other side (it doesn't matter which), gives:

7) $\frac{1}{v_1} \frac{\dot{y}_1}{\sqrt{1+\dot{y}^2_1}} = \frac{1}{v_2} \frac{\dot{y}_2}{\sqrt{1+\dot{y}^2_2}}$

The interval of integration doesn't matter, because $\dot{y}$ doesn't depend on x: equation 5 forces each segment to be a straight line, and the derivative of a straight line is its slope, a constant.

Notice that sin = opposite/hypotenuse, which is exactly what the terms on the left and the right represent, so Snell's law comes out:

8) $\frac{\sin(\theta_1)}{v_1} = \frac{\sin(\theta_2)}{v_2}$

Finally, since integrating equation 5 over x gives a constant (again because $\dot{y}$ is constant and doesn't depend on x):

9) $\frac{\dot{y}}{\sqrt{1+\dot{y}^2}} = C$

You get the generally applicable:

10) $\frac{\sin(\theta_1)}{v_1} = \frac{\sin(\theta_2)}{v_2} = C$

The brachistochrone problem is more involved:

1) $T=\int^x_0 \frac{ds}{v} = \int^x_0 \sqrt{\frac{1+\dot{y}^2}{2g(h-y)}} dx$

Remember that I obtained $v=\sqrt{2g(h-y)}$ in the denominator from conservation of energy.
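Before grinding through the algebra, here is a numerical sanity check of the claim, using this time integral directly. The geometry is my own assumed setup: a cycloid of rolling radius r with drop $h = 2r$, and a straight chute between the same endpoints.

```python
import numpy as np

g, r = 9.81, 1.0
h = 2 * r   # assumed drop: the cycloid's full height

def descent_time(x, y):
    """T = sum of ds/v, with v = sqrt(2g(h-y)) from conservation of energy."""
    ds = np.hypot(np.diff(x), np.diff(y))
    v = np.sqrt(2 * g * (h - (y[:-1] + y[1:]) / 2))  # speed at segment midpoints
    return float(np.sum(ds / v))

# Cycloid from the top (theta=0) to the bottom (theta=pi); y is height.
theta = np.linspace(1e-6, np.pi, 200_001)
t_cycloid = descent_time(r * (theta - np.sin(theta)),
                         h - r * (1 - np.cos(theta)))

# Straight chute between the same two endpoints.
xs = np.linspace(1e-9, np.pi * r, 200_001)
t_line = descent_time(xs, h * (1 - xs / (np.pi * r)))

print(t_cycloid, t_line)   # the cycloid wins
```

The cycloid's time also lands on the known closed form $\pi\sqrt{r/g}$, which is a nice cross-check on the discretization.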

At this point, just like in the last problem, observe that the integrand does not depend explicitly on x. This will make solving the E-L equation more convenient. For now, I leave the $\frac{\partial F}{\partial y}$ term alone, expand the $\frac{\partial F}{\partial \dot{y}}$ one, and move it to the other side:

2) $\frac{\partial F}{\partial y}=\frac{d}{dx} \frac{\dot{y}}{\sqrt{2g(h-y)(1+\dot{y}^2)}}$

Noting that F has no explicit x dependence (so $\frac{\partial F}{\partial x} = 0$) will be useful shortly. Integrating both sides with respect to x gives us:

3) $\int \frac{\partial F}{\partial y} dx = \frac{\dot{y}}{\sqrt{2g(h-y)(1+\dot{y}^2)}}$

The left hand side can be expanded by the following trick:

4) $\frac{dF}{dx} = \frac{\partial F}{\partial x} + \frac{\partial F}{\partial y} \frac{dy}{dx} + \frac{\partial F}{\partial \dot{y}} \frac{d\dot{y}}{dx}$

5) $\frac{\partial F}{\partial y} \frac{dy}{dx} = \frac{dF}{dx} - \frac{\partial F}{\partial x} - \frac{\partial F}{\partial \dot{y}} \frac{d\dot{y}}{dx}$

Go back to the E-L equation, multiply it through by $\dot{y}$, and integrate over x. On the left, equation 5 rewrites $\frac{\partial F}{\partial y} \frac{dy}{dx}$; on the right, integrating $\dot{y} \frac{d}{dx} \frac{\partial F}{\partial \dot{y}}$ by parts gives $\dot{y} \frac{\partial F}{\partial \dot{y}} - \int \frac{\partial F}{\partial \dot{y}} d\dot{y}$:

6) $\int \frac{dF}{dx} dx - \int \frac{\partial F}{\partial x} dx - \int \frac{\partial F}{\partial \dot{y}} d\dot{y} = \frac{\dot{y}^2}{\sqrt{2g(h-y)(1+\dot{y}^2)}} - \int \frac{\partial F}{\partial \dot{y}} d\dot{y}$

Equation 6 can be greatly reduced. The $\int \frac{\partial F}{\partial \dot{y}} d\dot{y}$ terms on the two sides cancel. Of what remains on the left, the leftmost integral is just F, and the middle one vanishes because F has no explicit x dependence. Condensing the integration constants into a single constant C:

7) $F - C = \frac{\dot{y}^2}{\sqrt{2g(h-y)(1+\dot{y}^2)}}$

8) $C = F - \frac{\dot{y}^2}{\sqrt{2g(h-y)(1+\dot{y}^2)}}$

9) $C = \sqrt{\frac{1+\dot{y}^2}{2g(h-y)}} - \frac{\dot{y}^2}{\sqrt{2g(h-y)(1+\dot{y}^2)}}$

10) $C= \frac{1}{\sqrt{2g(h-y)(1+\dot{y}^2)}}$

Split $\dot{y}$ into $dy/dx$ and solve for $(dy/dx)^2$. Remember also that $C=1/\sqrt{2gh}$. The result is:

11) $(\frac{dy}{dx})^2 = \frac{y}{h-y}$

That is indeed the differential equation for the cycloid, and so we are done!
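To confirm that last step, here is a sympy sketch checking that the standard cycloid parametrization (with $h = 2r$, an assumption matching the geometry here) satisfies equation 11:

```python
import sympy as sp

theta, r = sp.symbols('theta r', positive=True)
h = 2 * r   # assumption: the drop equals the cycloid's full height

# Standard cycloid parametrization, with y measured as height.
x = r * (theta - sp.sin(theta))
y = h - r * (1 - sp.cos(theta))

dydx = sp.diff(y, theta) / sp.diff(x, theta)
residual = sp.simplify(dydx**2 - y / (h - y))
print(residual)   # 0: the cycloid satisfies (dy/dx)^2 = y/(h-y)
```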

As you can see, the E-L equation gives us the answer to all the previous problems. The E-L equation is also useful for just about any minimization or maximization problem imaginable, from proving that the straight line is the shortest path on a flat plane to finding the surface of minimum area satisfying a boundary condition. It is also absolutely fundamental to classical mechanics, which is what the next and final post of this series is about.

(Credit to these sources for helping me out)