Nov 17

## Cauchy-Schwarz inequality and optimization 1

Between Cauchy and Schwarz there was also Bunyakovsky, who contributed to this inequality.

Inequalities, if they are sharp, can be used for optimization, even in the infinite-dimensional case. The method explained here does not rely on the Lagrange method, the Kuhn-Tucker theorem, or convexity. It illustrates the point that it is not a good idea to try to fit all problems into one framework; the solution method should depend on the problem.

Problem. Let $a_1,a_2\geq 0$ be fixed numbers, not both zero, let $W>0$ be a fixed bound, and let $x_1,x_2\geq 0$ be variables satisfying the restriction

(1) $x_1^2+x_2^2\leq W.$

The problem is to maximize the weighted sum $a_1x_1+a_2x_2$ subject to the above restriction. This can be done using the convexity method. However, our future application will involve infinitely many variables, in which case the convexity method becomes conceptually complex. Therefore we present here a different method, based on the Cauchy-Schwarz inequality, which in our case reads

(2) $\left\vert a_1x_1+a_2x_2\right\vert \leq \left(a_1^2+a_2^2\right)^{1/2}\left(x_1^2+x_2^2\right)^{1/2}.$
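As a quick numerical sanity check (not part of the argument), inequality (2) can be tested on random nonnegative pairs. The helper `cs_gap` below is an illustration of mine, not anything from the text.

```python
import math
import random

def cs_gap(a1, a2, x1, x2):
    """Right side of (2) minus left side; nonnegative whenever (2) holds."""
    lhs = abs(a1 * x1 + a2 * x2)
    rhs = math.sqrt(a1**2 + a2**2) * math.sqrt(x1**2 + x2**2)
    return rhs - lhs

random.seed(0)
for _ in range(1000):
    vals = [random.uniform(0.0, 10.0) for _ in range(4)]
    # Allow a tiny negative slack for floating-point rounding.
    assert cs_gap(*vals) >= -1e-12
```

Equality holds exactly when $(x_1,x_2)$ is proportional to $(a_1,a_2)$, which is the fact Step 2 below exploits.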

We solve the problem in two steps. First we bound $a_1x_1+a_2x_2$ from above and then we find the point $(x_1,x_2)$ at which the upper bound is attained.

Step 1. Since all the numbers involved are nonnegative, the absolute value in (2) can be dropped, and (1) and (2) imply

(3) $a_1x_1+a_2x_2\leq\left(a_1^2+a_2^2\right)^{1/2}W^{1/2}.$

Step 2. We need to find $(x_1,x_2)$ such that (1) is satisfied and (3) turns into an equality. This involves a little guessing. The guiding idea is that the right side of (3) contains the squares of $a_1,a_2$. For them to appear on the left, we choose $x_1=ca_1$, $x_2=ca_2$ with some constant $c$. Then we want

$ca_1^2+ca_2^2=\left(a_1^2+a_2^2\right)^{1/2}W^{1/2}$

to be true. This gives the value of the constant $c=\frac{W^{1/2}}{\left(a_1^2+a_2^2\right)^{1/2}}.$ The point we are looking for becomes

(4) $(x_1,x_2)=\frac{W^{1/2}}{\left(a_1^2+a_2^2\right)^{1/2}}(a_1,a_2).$

This point satisfies (1) with equality: $x_1^2+x_2^2=\frac{W}{a_1^2+a_2^2}\left(a_1^2+a_2^2\right)=W.$

Further, at this point $a_1x_1+a_2x_2=c\left(a_1^2+a_2^2\right)=\left(a_1^2+a_2^2\right)^{1/2}W^{1/2}$, so the bound (3) is attained and the maximized value is $\left(a_1^2+a_2^2\right)^{1/2}W^{1/2}$.
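A minimal numerical sketch of the whole solution, assuming the concrete values $a_1=3$, $a_2=4$, $W=25$ (chosen only for illustration): the closed-form point (4) is feasible, attains the bound (3) with equality, and no point of a brute-force grid over the feasible set does better.

```python
import math

def maximizer(a1, a2, W):
    """The point (4): scale (a1, a2) by c = W**0.5 / (a1**2 + a2**2)**0.5."""
    c = math.sqrt(W) / math.sqrt(a1**2 + a2**2)
    return c * a1, c * a2

a1, a2, W = 3.0, 4.0, 25.0          # illustrative values, not from the text
x1, x2 = maximizer(a1, a2, W)
bound = math.sqrt(a1**2 + a2**2) * math.sqrt(W)   # right side of (3)

assert x1**2 + x2**2 <= W + 1e-9                  # restriction (1) holds
assert abs(a1 * x1 + a2 * x2 - bound) < 1e-9      # (3) holds with equality

# Brute-force check: no feasible grid point exceeds the bound.
n = 200
grid_best = max(
    a1 * u + a2 * v
    for i in range(n + 1)
    for j in range(n + 1)
    for u in (i * math.sqrt(W) / n,)
    for v in (j * math.sqrt(W) / n,)
    if u * u + v * v <= W
)
assert grid_best <= bound + 1e-9
```

The grid search is only a check; the point of the method is precisely that the sharp inequality (2) hands us the maximizer in closed form.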