## Unconstrained optimization on the plane: necessary condition

See the very simple geometric discussion of the one-dimensional case: it reveals the Taylor decomposition as the main research tool. Therefore we give the Taylor decomposition in the 2D case. Assuming that the reader is familiar with that material, we go directly to the decomposition

(1) $f(x+h)=f(x)+\nabla f(x)\,h+\dfrac{1}{2}\,h^{T}H(x)\,h+r(h)$

Here $f$ is a twice-differentiable function, $x=(x_1,x_2)$ is an internal point of the domain $D$, $h=(h_1,h_2)$ is a small vector such that $x+h$ also belongs to the domain, $r(h)$ is a remainder that is negligible for small $h$,

$\nabla f(x)=\left(\dfrac{\partial f}{\partial x_1}(x),\ \dfrac{\partial f}{\partial x_2}(x)\right)$

is a row vector of first derivatives, and

$H(x)=\begin{pmatrix}\dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1\partial x_2}\\[4pt]\dfrac{\partial^2 f}{\partial x_2\partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2}\end{pmatrix}$

is the **Hessian** (the matrix of second-order derivatives). $T$ stands for transposition.
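Decomposition (1) can be checked numerically. In the sketch below the function $f$, the point $x$, and the step direction are invented for illustration: the gap between $f(x+h)$ and the quadratic approximation is the remainder $r(h)$, and it shrinks much faster than $\|h\|^2$ as $h$ gets smaller.

```python
import numpy as np

# Hypothetical test function; any smooth function would do.
def f(x):
    return x[0]**2 + 3*x[0]*x[1] + np.exp(x[1])

def grad(x):
    # Row vector of first derivatives of f.
    return np.array([2*x[0] + 3*x[1], 3*x[0] + np.exp(x[1])])

def hessian(x):
    # Matrix of second-order derivatives of f.
    return np.array([[2.0, 3.0],
                     [3.0, np.exp(x[1])]])

x = np.array([1.0, 0.5])
for scale in [1e-1, 1e-2, 1e-3]:
    h = scale * np.array([0.7, -0.4])
    taylor = f(x) + grad(x) @ h + 0.5 * h @ hessian(x) @ h
    # The gap below is the remainder r(h); it decays roughly like scale**3.
    print(scale, abs(f(x + h) - taylor))
```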

## When is there no local minimum or maximum?

We have seen how reduction to the 1D case can be used to study the 2D case. A similar trick is applied here. Let us represent the vector $h$ as $h=tu$, where $u$ is another vector (to be defined later) and $t$ is a small real parameter. Then $x+tu$ will be close to $x$. From (1) we get

(2) $f(x+tu)=f(x)+t\left[\nabla f(x)u\right]+\dfrac{t^2}{2}\left[u^{T}H(x)u\right]+r(tu)$

We think of $u$ as fixed, so the two expressions in square brackets are fixed numbers. Denote $\delta(t)=\dfrac{t^2}{2}\left[u^{T}H(x)u\right]+r(tu)$, the sum of the last two terms in (2). An important observation is that

*When $t$ tends to zero, $\delta(t)$ tends to zero even faster: $\delta(t)/t\to 0$.*

Therefore for small $t$ the term $\delta(t)$ is negligible compared to the second term in (2), and from (2) we obtain

(3) $f(x+tu)\approx f(x)+t\left[\nabla f(x)u\right]$ for small $t$.

**The no-extremum case**. Suppose the vector of first derivatives is not zero: $\nabla f(x)\neq 0$, which means that

(4) at least one of the numbers $\dfrac{\partial f}{\partial x_1}(x),\ \dfrac{\partial f}{\partial x_2}(x)$ is different from zero.

Select $u=\left(\nabla f(x)\right)^{T}$, the transposed gradient. Then (3) implies

(5) $f(x+tu)-f(x)\approx t\,\nabla f(x)\left(\nabla f(x)\right)^{T}=t\left\|\nabla f(x)\right\|^{2}$

From (4) it follows that $\left\|\nabla f(x)\right\|^{2}>0$. Then (5) shows that $x$ cannot be an extreme point. Indeed, for small positive $t$ we have $f(x+tu)>f(x)$, and for small negative $t$ we have $f(x+tu)<f(x)$. In any neighborhood of $x$ the values of $f$ can be both higher and lower than $f(x)$.
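This argument is easy to verify numerically. A minimal sketch, assuming a concrete function $f(x_1,x_2)=x_1^2+x_2^2$ and a point where the gradient is nonzero: stepping along $u=\left(\nabla f(x)\right)^{T}$ with small positive $t$ raises the value of $f$, and stepping with small negative $t$ lowers it.

```python
import numpy as np

# Hypothetical example: f(x1, x2) = x1**2 + x2**2 at a point with nonzero gradient.
def f(x):
    return x[0]**2 + x[1]**2

def grad(x):
    return np.array([2*x[0], 2*x[1]])

x = np.array([1.0, -2.0])   # grad(x) = (2, -4) != 0, so condition (4) holds
u = grad(x)                 # u = (grad f(x))^T, the steepest-ascent direction

t = 1e-3
print(f(x + t*u) > f(x))    # True: f increases for small positive t
print(f(x - t*u) < f(x))    # True: f decreases for small negative t
```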

**Conclusion**. In case (4) the point $x$ cannot be a local minimum or maximum. In other words, we should look for local extrema among **critical points**, which satisfy the **first-order condition** (FOC)

$\nabla f(x)=0,\quad\text{that is,}\quad \dfrac{\partial f}{\partial x_1}(x)=0,\ \dfrac{\partial f}{\partial x_2}(x)=0.$

The FOC is *necessary* for a function to have a local minimum or maximum. All of the above easily generalizes to dimensions higher than 2.
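In practice the FOC turns the search for extrema into solving a system of equations. Here is a sketch using `sympy` on an invented function (not one discussed above): both partial derivatives are set to zero and the system is solved for the critical points.

```python
import sympy as sp

# Hypothetical example: f(x1, x2) = x1**3 - 3*x1 + x2**2.
x1, x2 = sp.symbols('x1 x2')
f = x1**3 - 3*x1 + x2**2

# First-order condition: both partial derivatives must vanish.
foc = [sp.diff(f, x1), sp.diff(f, x2)]
critical = sp.solve(foc, [x1, x2], dict=True)
print(critical)   # the critical points (x1, x2) = (-1, 0) and (1, 0)
```

Note that the FOC only locates candidates: whether each critical point is a minimum, a maximum, or neither requires a second-order (Hessian) check.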