Oct 17

The Lagrangian multiplier interpretation

Motivation. Consider the problem of utility maximization \max u(x,y) under the budget constraint p_x x+p_y y=M. Suppose the budget M changes a little bit, and let us see how the solution of the problem depends on that change. The maximized value of the utility depends on M, so let us reflect this dependence in the notation U(M)=u(x^\ast(M),y^\ast(M)), where, for each M, (x^\ast(M),y^\ast(M)) is the maximizing bundle. The value of the Lagrangian multiplier \lambda in the FOC's \frac{\partial L}{\partial x}=0, \frac{\partial L}{\partial y}=0, \frac{\partial L}{\partial \lambda }=0 will also depend on M: \lambda =\lambda(M). The property that we will prove is

\frac{dU}{dM}=\lambda (M),

that is, the Lagrangian multiplier measures the sensitivity of the maximized utility function to changes in the budget.
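This claim can be checked numerically. As an illustration (our example, not part of the argument), take the Cobb–Douglas utility u(x,y)=\sqrt{xy}, whose maximizing bundle under the budget constraint is known in closed form: x^\ast =M/(2p_x), y^\ast =M/(2p_y). A finite-difference estimate of dU/dM then matches \lambda =u_x/p_x evaluated at the optimum:

```python
import math

# Illustrative example: Cobb-Douglas utility u(x, y) = sqrt(x * y)
# with budget constraint p_x*x + p_y*y = M. The maximizing bundle is
# known in closed form: x* = M/(2*p_x), y* = M/(2*p_y).
p_x, p_y = 2.0, 5.0  # illustrative prices

def U(M):
    # maximized utility as a function of the budget
    x_star, y_star = M / (2 * p_x), M / (2 * p_y)
    return math.sqrt(x_star * y_star)

M0 = 100.0
x_star, y_star = M0 / (2 * p_x), M0 / (2 * p_y)
# Lagrange multiplier at the optimum: lambda = u_x / p_x,
# where u_x = 0.5 * sqrt(y/x) for this utility
lam = 0.5 * math.sqrt(y_star / x_star) / p_x

# central finite-difference estimate of dU/dM at M0
h = 1e-6
dU_dM = (U(M0 + h) - U(M0 - h)) / (2 * h)

print(dU_dM, lam)  # the two numbers should agree
```

Both quantities equal 1/(2\sqrt{p_x p_y}) for this utility, so the agreement is exact up to the finite-difference error.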

General formulation

The main problem is to maximize f(x,y) subject to g(x,y)=0. However, instead of a fixed constraint we consider a set of constraints perturbed by a constant c: g(x,y)+c=0. Here c varies in a small neighborhood (-\varepsilon ,\varepsilon) of zero (which corresponds to varying M in a neighborhood of some M_{0} in the motivating example). Now everything depends on c, and differentiation with respect to c will give the desired result.
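In the motivating example this correspondence can be made explicit; one possible choice (our notation) is

g(x,y)=M_{0}-p_x x-p_y y, \qquad c=M-M_{0},

so that g(x,y)+c=0 becomes p_x x+p_y y=M, and varying c in (-\varepsilon ,\varepsilon ) varies M in a neighborhood of M_{0}.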

Employing the constraint

As before, we assume the implicit function existence condition:

\left( \frac{\partial g}{\partial x},\frac{\partial g}{\partial y}\right)\neq 0.

Relabeling the variables, if necessary, we may assume that it is the last component of the gradient that is nonzero: \frac{\partial g}{\partial y}\neq 0. This condition applies to the perturbed constraint too and guarantees existence of the implicit function, which now depends also on c: y=y(x,c). Plugging it into the constraint we get g(x,y(x,c))+c=0, and differentiation with respect to c yields

(1) \frac{\partial g}{\partial y}\frac{\partial y}{\partial c}+1=0.
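Relation (1) can be verified on a concrete constraint (our example): for g(x,y)=x^2+y^2-1, the perturbed constraint gives the explicit implicit function y(x,c)=\sqrt{1-c-x^2} on the upper half-circle, where \frac{\partial g}{\partial y}=2y\neq 0:

```python
import math

# Example constraint g(x, y) = x^2 + y^2 - 1; the perturbed constraint
# g(x, y) + c = 0 solves for y explicitly on the upper half-circle.
def y(x, c):
    return math.sqrt(1 - c - x * x)

x0, c0 = 0.3, 0.1
g_y = 2 * y(x0, c0)  # partial of g with respect to y at the point

# central finite-difference estimate of the partial of y with respect to c
h = 1e-6
y_c = (y(x0, c0 + h) - y(x0, c0 - h)) / (2 * h)

print(g_y * y_c)  # should be close to -1, confirming (1)
```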

Employing the FOC's

Critical assumption. For each c\in (-\varepsilon ,\varepsilon ) the perturbed maximization problem has a solution. At least in our motivating example, this assumption is satisfied.

The maximized objective function is denoted F(c)=f(x,y(x,c)). For each c we may apply the FOC's for the Lagrangian L=f(x,y)+\lambda(g(x,y)+c). One of the FOC's is

\frac{\partial f}{\partial y}+\lambda (c)\frac{\partial g}{\partial y}=0.

To make use of (1), multiply this by \frac{\partial y}{\partial c}:

0=\frac{\partial f}{\partial y}\frac{\partial y}{\partial c}+\lambda(c)\frac{\partial g}{\partial y}\frac{\partial y}{\partial c}=\frac{\partial f}{\partial y}\frac{\partial y}{\partial c}-\lambda (c).

It follows that

\frac{dF}{dc}=\frac{\partial f}{\partial y}\frac{\partial y}{\partial c}=\lambda (c).

At c=0 we have \lambda(0)=\lambda, the Lagrange multiplier for the unperturbed problem, and as a result

\left. \frac{dF}{dc}\right\vert _{c=0}=\lambda.
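Note that the identity \frac{dF}{dc}=\lambda (c) holds at every c in the interval, not only at c=0. A minimal numeric sketch (our example): maximize f(x,y)=xy subject to g(x,y)+c=0 with g(x,y)=1-x-y, so the perturbed constraint is x+y=1+c and the maximizer is x=y=(1+c)/2:

```python
# Example perturbed problem: maximize f(x, y) = x*y subject to
# g(x, y) + c = 0 with g(x, y) = 1 - x - y, i.e. x + y = 1 + c.
def F(c):
    # the maximizer of x*y on the line x + y = 1 + c is x = y = (1 + c)/2
    s = (1 + c) / 2
    return s * s

def lam(c):
    # FOC in y: f_y + lambda * g_y = x - lambda = 0, so lambda(c) = x*(c)
    return (1 + c) / 2

h = 1e-6
for c in (-0.5, 0.0, 0.3):
    dF_dc = (F(c + h) - F(c - h)) / (2 * h)
    print(c, dF_dc, lam(c))  # dF/dc and lambda(c) coincide at each c
```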
