Oct 17

## Lagrange method: sufficient conditions

From what we know about unconstrained optimization, we expect the matrix of second derivatives to play a role here as well. To get there, we need to differentiate the objective function, with the constraint incorporated, twice.

## Summary of the necessary condition

(1) The problem is to maximize $f(x,y)$ subject to $g(x,y)=0.$

Everywhere we impose the implicit function existence condition:

(2) $\frac{\partial g}{\partial y}\neq 0.$

Since, by the implicit function theorem, (2) lets us solve the constraint locally for $y=y(x)$, differentiating the identity $g(x,y(x))=0$ with respect to $x$ gives

(3) $\frac{\partial g}{\partial x}+\frac{\partial g}{\partial y}y^\prime(x)=0.$

Let $(x,y)$ be an extremum point for (1). Then, as we proved, there exists $\lambda$ such that the Lagrangian $L(x,y,\lambda )=f(x,y)+\lambda g(x,y)$ satisfies the first-order conditions (FOCs):

(4) $\frac{\partial L}{\partial x}=0,$ $\frac{\partial L}{\partial y}=0,$ $\frac{\partial L}{\partial \lambda }=0.$
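As an illustration of solving the FOCs, here is a minimal sketch using a hypothetical example (not from the notes): $f(x,y)=xy$ subject to $g(x,y)=x+y-2=0$.

```python
import sympy as sp

# Hypothetical example: maximize f = x*y subject to g = x + y - 2 = 0.
x, y, lam = sp.symbols('x y lam', real=True)
f = x * y
g = x + y - 2

# Lagrangian L = f + lam*g, as in the text.
L = f + lam * g

# First-order conditions (4): dL/dx = dL/dy = dL/dlam = 0.
foc = [sp.diff(L, v) for v in (x, y, lam)]
sols = sp.solve(foc, [x, y, lam], dict=True)
print(sols)  # one critical point: x = 1, y = 1, lam = -1
```

Note that $\partial L/\partial \lambda = g$, so the third FOC simply restores the constraint.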

We also work with the function $\phi (x)=f(x,y(x))$, which has the constraint built into it.

We need to check the sign of the second derivative of $\phi$:

(5) $\frac{d^2\phi }{dx^2}=\frac{\partial^2f}{\partial x^2}+2\frac{\partial^2f}{\partial x\partial y}y^\prime(x)+\frac{\partial^2f}{\partial y^2}[y^\prime(x)]^2+\frac{\partial f}{\partial y}y^{\prime\prime}(x).$

Differentiating (3) once again gives

(6) $\frac{\partial^2g}{\partial x^2}+2\frac{\partial^2g}{\partial x\partial y}y^\prime(x)+\frac{\partial^2g}{\partial y^2}[y^\prime(x)]^2+\frac{\partial g}{\partial y}y^{\prime \prime}(x)=0.$

To bring in the Lagrangian, multiply (6) by $\lambda$ and add the result to (5):

(7) $\frac{d^2\phi }{dx^2}=\frac{\partial^2L}{\partial x^2}+2\frac{\partial^2L}{\partial x\partial y}y^\prime(x)+\frac{\partial^2L}{\partial y^2}[y^\prime(x)]^2+\frac{\partial L}{\partial y}y^{\prime\prime}(x).$

Here $\frac{\partial L}{\partial y}=0$ by (4), so the term $\frac{\partial L}{\partial y}y^{\prime\prime}(x)$ vanishes.
Denote

$D^2L=\left(\begin{array}{cc} \frac{\partial^2L}{\partial x^2}&\frac{\partial^2L}{\partial x\partial y}\\ \frac{\partial^2L}{\partial x\partial y}&\frac{\partial^2L}{\partial y^2}\end{array} \right),\ Y=\left(\begin{array}{c}1\\y^\prime(x)\end{array}\right).$

Then (7) can be rewritten as

(8) $\frac{d^2\phi}{dx^2}=Y^TD^2LY.$

This is a quadratic form of the Hessian of $L$ (no differentiation with respect to $\lambda$).
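Identity (8) can be verified numerically. The sketch below uses the hypothetical example $f=xy$, $g=x+y-2=0$ (not from the notes), whose critical point is $(1,1)$ with $\lambda=-1$; here the constraint solves explicitly as $y(x)=2-x$.

```python
import sympy as sp

# Hypothetical example: f = x*y, g = x + y - 2 = 0,
# critical point (1, 1) with lam = -1.
x, y, lam = sp.symbols('x y lam', real=True)
L = x * y + lam * (x + y - 2)

# The constraint solves explicitly: y(x) = 2 - x, so y'(x) = -1.
yx = 2 - x
phi = (x * y).subs(y, yx)        # phi(x) = f(x, y(x))
phi2 = sp.diff(phi, x, 2)        # left-hand side of (8)

# Hessian of L in (x, y) only (no lambda derivatives), and Y = (1, y'(x)).
D2L = sp.hessian(L, (x, y))
Y = sp.Matrix([1, sp.diff(yx, x)])
quad = (Y.T * D2L * Y)[0, 0]     # right-hand side of (8)

point = {x: 1, y: 1, lam: -1}
print(phi2.subs(point), quad.subs(point))  # both equal -2
```

Both sides equal $-2<0$, consistent with $(1,1)$ being a maximum.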

Rough sufficient condition. If we require the Hessian to be positive definite, then $h^TD^2Lh>0$ for any $h\neq 0$ and, in particular, for $h=Y$. Thus, positive definiteness of $D^2L$, together with the FOCs (4), is sufficient for $\phi$ to have a minimum at the critical point.

Refined sufficient condition. We can relax the condition by reducing the set of $h$ on which $h^TD^2Lh$ is required to be positive. Note from (3) that $Y$ belongs to the set $Z=\{(a_1,a_2):\frac{\partial g}{\partial x}a_1+\frac{\partial g}{\partial y}a_2=0\}$, where the partial derivatives are evaluated at the critical point. Using (2), for $(a_1,a_2)\in Z$ we can write $a_2=\left(-\frac{\partial g}{\partial x}/\frac{\partial g}{\partial y}\right)a_1$, so $Z$ is a straight line through the origin. Requiring $h^TD^2Lh>0$ for any nonzero $h\in Z$, we in particular obtain positivity of (8), since $h=Y$ is a nonzero element of $Z$. We summarize our findings as follows:

Theorem. Assume the implicit function existence condition (2) and consider a critical point of the Lagrangian (one that satisfies the FOCs (4)). a) If at that point $h^TD^2Lh>0$ for every nonzero $h\in Z$, then it is a local minimum point. b) If at that point $h^TD^2Lh<0$ for every nonzero $h\in Z$, then it is a local maximum point.
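To illustrate the theorem, here is a sketch of the refined check on the same hypothetical example $f=xy$, $g=x+y-2=0$: we parametrize $Z$ via $a_2=(-g_x/g_y)a_1$ and inspect the sign of $h^TD^2Lh$ on $Z$.

```python
import sympy as sp

# Hypothetical example: f = x*y, g = x + y - 2 = 0,
# critical point (1, 1) with lam = -1.
x, y, lam, a1 = sp.symbols('x y lam a1', real=True)
f, g = x * y, x + y - 2
L = f + lam * g

point = {x: 1, y: 1, lam: -1}
gx = sp.diff(g, x).subs(point)
gy = sp.diff(g, y).subs(point)
assert gy != 0  # implicit function existence condition (2)

# Z is the line a2 = -(g_x / g_y) * a1; take a generic nonzero h on it.
a2 = -(gx / gy) * a1
h = sp.Matrix([a1, a2])
D2L = sp.hessian(L, (x, y)).subs(point)
q = sp.expand((h.T * D2L * h)[0, 0])
print(q)  # -2*a1**2, negative for a1 != 0, so (1, 1) is a maximum
```

Since $h^TD^2Lh=-2a_1^2<0$ for every nonzero $h\in Z$, part b) of the theorem applies at $(1,1)$.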