23 Oct 17

Lagrange method: necessary condition

Consider the problem:

(1) maximize the objective function f(x,y) subject to the equality constraint g(x,y)=0.

Lagrange's idea: add a new, artificial variable \lambda and consider a new function of three variables L(x,y,\lambda )=f(x,y)+\lambda g(x,y). The solution of the constrained problem (1) should be equivalent to the solution of the unconstrained problem

(2) maximize L(x,y,\lambda ).

L(x,y,\lambda ) is called the Lagrangian. Recall the implicit function existence condition

(3) \frac{\partial g}{\partial y}\neq 0.

Under this condition we can employ a useful trick: when the implicit function exists, we can differentiate the restriction g(x,y(x))=0 to obtain

(4) \frac{\partial g}{\partial x}+\frac{\partial g}{\partial y}y^\prime(x)=0.
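Equation (4) can be verified symbolically. Here is a minimal SymPy sketch on a hypothetical constraint g(x,y)=x^2+y^2-1, whose implicit solution (upper branch) is y(x)=\sqrt{1-x^2}:

```python
# Symbolic check of (4) for the hypothetical constraint
# g(x, y) = x^2 + y^2 - 1, with implicit function y(x) = sqrt(1 - x^2).
import sympy as sp

x, y = sp.symbols('x y')
g = x**2 + y**2 - 1
y_of_x = sp.sqrt(1 - x**2)  # valid where dg/dy = 2y != 0 (condition (3))

# Left side of (4): g_x + g_y * y'(x), evaluated along y = y(x)
lhs = (sp.diff(g, x) + sp.diff(g, y) * sp.diff(y_of_x, x)).subs(y, y_of_x)
print(sp.simplify(lhs))  # 0
```

As expected, the expression simplifies to zero identically in x.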

Simple way to solve (1)

Assuming (3), we can find y=y(x) from the restriction and plug y(x) into the objective function to obtain a function of one variable

\phi (x)=f(x,y(x)).

It is enough to find the extrema of this one-variable function (the constraint has already been used in constructing \phi). At an extremum we necessarily have the first order condition

(5) \frac{d\phi }{dx}=\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}y^\prime(x)=0.
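The substitution method can be sketched in SymPy on a hypothetical example: maximize f(x,y)=xy subject to x+y-1=0. Here \partial g/\partial y=1\neq 0 everywhere, so y(x)=1-x and \phi (x)=x(1-x):

```python
# Substitution method on a hypothetical example: maximize f(x, y) = x*y
# subject to x + y - 1 = 0. Since dg/dy = 1 != 0 everywhere, the implicit
# function is simply y(x) = 1 - x.
import sympy as sp

x = sp.symbols('x')
phi = x * (1 - x)                    # phi(x) = f(x, y(x))
crit = sp.solve(sp.diff(phi, x), x)  # first order condition (5)
print(crit)  # [1/2]
```

Since \phi is concave, the single critical point x=1/2 gives the constrained maximizer (1/2, 1/2).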

Lagrange went one step further

(4)+(5) is a linear system of equations. To make this clear, let us introduce a matrix and a vector

A=\left(\begin{array}{cc}  \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y}\\  \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y}  \end{array}\right) ,~\ Y=\left(\begin{array}{c}1\\  y^\prime(x)\end{array}\right).

Then (4)+(5) becomes AY=0. This is a homogeneous system (the right-hand side is zero), and it has a nonzero solution Y (its first component equals 1). Matrix theory tells us that this is possible only if the determinant of the system is zero: \det A=0. Since the first row is nonzero (by (3), \frac{\partial g}{\partial y}\neq 0), a zero determinant means the second row is proportional to the first: \left(\begin{array}{cc}  \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y}\end{array}\right) =c\left(  \begin{array}{cc}  \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y}  \end{array}\right). This one vector equation is equivalent to two scalar ones: \frac{  \partial f}{\partial x}=c\frac{\partial g}{\partial x}, \frac{\partial f}{  \partial y}=c\frac{\partial g}{\partial y}. Denoting \lambda =-c, we obtain two first order conditions for the Lagrangian:

(6) \frac{\partial L}{\partial x}=\frac{\partial f}{\partial x}+\lambda  \frac{\partial g}{\partial x}=0, \frac{\partial L}{\partial y}=\frac{  \partial f}{\partial y}+\lambda \frac{\partial g}{\partial y}=0.

The third one is just the constraint:

(7) \frac{\partial L}{\partial \lambda }=g(x,y)=0.
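The full system (6)+(7) can be solved directly with SymPy on a hypothetical example: f(x,y)=xy with constraint g(x,y)=x+y-1=0:

```python
# Solving the first order conditions (6)+(7) for a hypothetical example:
# f(x, y) = x*y subject to g(x, y) = x + y - 1 = 0.
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f = x * y
g = x + y - 1
L = f + lam * g  # the Lagrangian

# dL/dx = 0 and dL/dy = 0 give (6); dL/dlambda = 0 recovers the constraint (7)
eqs = [sp.diff(L, v) for v in (x, y, lam)]
sol = sp.solve(eqs, [x, y, lam], dict=True)
print(sol)  # x = y = 1/2, lambda = -1/2
```

The critical point is x=y=1/2 with multiplier \lambda =-1/2, consistent with what the substitution method gives for this problem.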

We have proved the following result:

Theorem. Let the implicit function existence condition be satisfied. Then there exists a number \lambda such that the solution of the constrained problem (1) satisfies the first order conditions (6)+(7) for the Lagrangian (that is, it is a critical point of the Lagrangian).
