Feb 17

## Gauss-Markov theorem

The Gauss-Markov theorem states that the OLS estimator is the most efficient in a certain class of estimators. Without algebra, you cannot take a single step further, whether toward the precise theoretical statement or toward an application.

### Why do we care about linearity?

The concept of linearity has come up many times in my posts. Here we start from scratch and apply it to estimators.

The slope in simple regression

(1) $y_i=a+bx_i+e_i$

can be estimated by

$\hat{b}(y,x)=\frac{Cov_u(y,x)}{Var_u(x)}$.

Note that the notation makes explicit the dependence of the estimator on $x$ and $y$. Imagine that we have two sets of observations: $(y_1^{(1)},x_1),...,(y_n^{(1)},x_n)$ and $(y_1^{(2)},x_1),...,(y_n^{(2)},x_n)$ (the x coordinates are the same but the y coordinates are different). In addition, assume the regressor is deterministic. The x's could be spatial units and the y's temperature measurements at these units at two different moments.
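Taking $Cov_u$ and $Var_u$ to be the sample covariance and sample variance (the normalization, $1/n$ versus $1/(n-1)$, cancels in the ratio), the slope formula can be checked numerically against a standard least-squares fit. The data below are made up for illustration:

```python
import numpy as np

# Hypothetical data: deterministic x's and one set of y measurements.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

def slope(y, x):
    """OLS slope estimate: sample covariance of (y, x) over sample variance of x.

    np.cov(..., bias=True) and np.var both use the 1/n normalization,
    so the ratio equals the OLS slope regardless of the convention.
    """
    return np.cov(y, x, bias=True)[0, 1] / np.var(x)

b_hat = slope(y, x)
# Cross-check against numpy's least-squares line fit.
b_check = np.polyfit(x, y, 1)[0]
print(b_hat, b_check)
```

The two numbers agree up to floating-point error, confirming that the covariance-over-variance formula is just another way of writing the least-squares slope.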

Definition. We say that $\hat{b}(y,x)$ is linear with respect to $y$ if for any two vectors $y^{(i)}= (y_1^{(i)},...,y_n^{(i)}),$ $i=1,2,$ and numbers $c,d$ we have

$\hat{b}(cy^{(1)}+dy^{(2)},x)=c\hat{b}(y^{(1)},x)+d\hat{b}(y^{(2)},x)$.

This definition is quite similar to that of linearity of means. Linearity of the estimator with respect to $y$ easily follows from linearity of covariance

$\hat{b}(cy^{(1)}+dy^{(2)},x)=\frac{Cov_u(cy^{(1)}+dy^{(2)},x)}{Var_u(x)}=\frac{cCov_u(y^{(1)},x)+dCov_u(y^{(2)},x)}{Var_u(x)}=c\hat{b}(y^{(1)},x)+d\hat{b}(y^{(2)},x)$.
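The identity above holds exactly, not just in expectation, so it can be verified on any data. A minimal sketch with two made-up response vectors over the same deterministic $x$'s:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(1.0, 11.0)              # same deterministic x's for both samples
y1 = 2 + 3 * x + rng.normal(size=10)  # first set of y's
y2 = 1 - 0.5 * x + rng.normal(size=10)  # second set of y's
c, d = 2.5, -1.3                      # arbitrary coefficients

def slope(y, x):
    # OLS slope: sample covariance over sample variance (normalization cancels).
    return np.cov(y, x, bias=True)[0, 1] / np.var(x)

# Linearity in y: estimating on a linear combination of the y's
# equals the same linear combination of the estimates.
lhs = slope(c * y1 + d * y2, x)
rhs = c * slope(y1, x) + d * slope(y2, x)
print(np.isclose(lhs, rhs))  # True
```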

In addition to knowing how to establish linearity, it's a good idea to be able to see when something is not linear. Recall that linearity implies homogeneity of degree 1. Hence, if something is not homogeneous of degree 1, it cannot be linear. The OLS estimator is not linear in x because it is homogeneous of degree -1 in x:

$\hat{b}(y,cx)=\frac{Cov_u(y,cx)}{Var_u(cx)}=\frac{c}{c^2}\frac{Cov_u(y,x)}{Var_u(x)}=\frac{1}{c}\hat{b}(y,x)$.
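The same kind of numerical check confirms homogeneity of degree $-1$ in $x$ (again on made-up data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1, 10, 20)
y = 4 + 2 * x + rng.normal(size=20)
c = 3.0

def slope(y, x):
    # OLS slope: sample covariance over sample variance.
    return np.cov(y, x, bias=True)[0, 1] / np.var(x)

# Scaling x by c divides the slope estimate by c, so the estimator
# is homogeneous of degree -1 in x and hence cannot be linear in x.
print(np.isclose(slope(y, c * x), slope(y, x) / c))  # True
```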

### Gauss-Markov theorem

Students don't have problems remembering the acronym BLUE: the OLS estimator is the Best Linear Unbiased Estimator. Decoding this acronym starts from the end.

1. An estimator, by definition, is a function of sample data.
2. Unbiasedness of OLS estimators is thoroughly discussed here.
3. Linearity of the slope estimator with respect to $y$ has been proved above. Linearity with respect to $x$ is not required.
4. Now we look at the class of all slope estimators that are linear with respect to $y$. As an exercise, show that the instrumental variables estimator belongs to this class.
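As a hint for the exercise: in the simple setting the instrumental variables slope estimator can be written as $\hat{b}_{IV}(y,x)=Cov_u(y,z)/Cov_u(x,z)$ for an instrument $z$, and since $y$ enters only through the numerator, linearity in $y$ follows from linearity of covariance. A numerical sketch (the instrument and data are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
z = rng.normal(size=n)             # hypothetical instrument
x = z + rng.normal(size=n)         # regressor correlated with the instrument
y1 = 1 + 2 * x + rng.normal(size=n)
y2 = rng.normal(size=n)
c, d = 0.7, -2.0

def iv_slope(y, x, z):
    """Simple IV slope: Cov(y, z) / Cov(x, z). Only the numerator involves y."""
    return np.cov(y, z, bias=True)[0, 1] / np.cov(x, z, bias=True)[0, 1]

# Linearity in y, exactly as for the OLS slope.
lhs = iv_slope(c * y1 + d * y2, x, z)
rhs = c * iv_slope(y1, x, z) + d * iv_slope(y2, x, z)
print(np.isclose(lhs, rhs))  # True
```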

Gauss-Markov Theorem. Under the classical assumptions, the OLS estimator of the slope has the smallest variance in the class of all slope estimators that are linear with respect to $y$.

In particular, the OLS estimator of the slope is more efficient than the IV estimator. The beauty of this result is that you don't need expressions for their variances (even though they can be derived).
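A small Monte Carlo illustrates the efficiency ranking. Under the classical assumptions (fixed regressor, i.i.d. errors) and with a fixed, hypothetical instrument, the OLS slope estimates scatter less around the true slope than the IV estimates; the data-generating process below is made up:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 30, 5000
# Fixed regressor and a fixed instrument correlated with x
# but not proportional to it (otherwise IV would coincide with OLS).
x = np.linspace(0, 5, n)
z = x + np.sin(3 * x)
a, b = 1.0, 2.0  # true intercept and slope

def ols(y, x):
    return np.cov(y, x, bias=True)[0, 1] / np.var(x)

def iv(y, x, z):
    return np.cov(y, z, bias=True)[0, 1] / np.cov(x, z, bias=True)[0, 1]

ols_draws, iv_draws = [], []
for _ in range(reps):
    y = a + b * x + rng.normal(size=n)  # classical assumptions: i.i.d. errors
    ols_draws.append(ols(y, x))
    iv_draws.append(iv(y, x, z))

# Both estimators are unbiased, but OLS has the smaller spread.
print(np.var(ols_draws) < np.var(iv_draws))
```

Both estimators are linear in $y$ and unbiased, so the Gauss-Markov theorem predicts the variance ordering without computing either variance explicitly.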

Remark. Even the above formulation is incomplete. In fact, the pair (intercept estimator, slope estimator) is jointly efficient; proving this requires matrix algebra.