Derivation of OLS estimators: the do's and don'ts
Here I give the shortest rigorous derivation of the OLS estimators for simple regression, indicating the pitfalls along the way.
If you need just an easy way to obtain the estimators and don't care about rigor, see this post.
Definition and problem setup
Observations come in pairs $(x_i, y_i)$, $i = 1, \dots, n$. We want to approximate the $y_i$ by a linear function of the $x_i$ and therefore we are interested in minimizing the residuals $e_i = y_i - a - bx_i$. It is impossible to choose two numbers $a, b$ so as to minimize $n$ quantities at the same time. The compromise is achieved by minimizing the residual sum of squares $RSS = \sum_{i=1}^n (y_i - a - bx_i)^2$. The OLS estimators are the values $\hat{a}, \hat{b}$ that minimize $RSS$. We find them by applying the first order conditions, which are necessary for optima of differentiable functions.
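To make the objective concrete, here is a minimal Python sketch (the data are made up for illustration) that computes the RSS for any candidate pair $(a, b)$:

```python
def rss(a, b, xs, ys):
    """Residual sum of squares for the candidate line y = a + b*x."""
    return sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))

# Toy data (made up for illustration).
xs = [1, 2, 3, 4]
ys = [2, 4, 5, 8]

# A line closer to the data yields a smaller RSS.
print(rss(0, 2, xs, ys))  # line y = 2x  → 1
print(rss(0, 0, xs, ys))  # flat line y = 0  → 109
```

Minimizing `rss` over both arguments at once is exactly the problem the first order conditions solve.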
Don't square out the residuals. This leads to large expressions, which most students fail to handle.
Do apply the chain rule: $\frac{d}{dx} f(g(x)) = f'(g(x)) g'(x)$, where $f$ is called an external function and $g$ is called an internal function. In the case of the squared residual we have $f(g) = g^2$ and $g = y_i - a - bx_i$. Therefore $\frac{\partial}{\partial a}(y_i - a - bx_i)^2 = 2(y_i - a - bx_i)(-1)$, $\frac{\partial}{\partial b}(y_i - a - bx_i)^2 = 2(y_i - a - bx_i)(-x_i)$. Summing over $i$, equating the result to zero and getting rid of the factor $-2$ gives a system of equations
(1) $\sum_{i=1}^n (y_i - a - bx_i) = 0$, $\quad\sum_{i=1}^n (y_i - a - bx_i)x_i = 0$.
Don't carry the summation signs. They are the trees that prevent many students from seeing the forest.
Do replace them with sample means ASAP using the definition $\bar{z} = \frac{1}{n}\sum_{i=1}^n z_i$. Dividing equations (1) by $n$ gives
(2) $\overline{y - a - bx} = 0$, $\quad\overline{(y - a - bx)x} = 0$.
Notice that the factor $\frac{1}{n}$ has been absorbed into the means and the subscript $i$ disappears together with the summation signs. The general linearity property of expectations $E(cu + dv) = cE(u) + dE(v)$ (where $c, d$ are numbers and $u, v$ are random variables) is true for sample means too: $\overline{cu + dv} = c\bar{u} + d\bar{v}$. It is used to rewrite equations (2) as
(3) $\bar{y} - a - b\bar{x} = 0$, $\quad\overline{xy} - a\bar{x} - b\overline{x^2} = 0$.
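System (3) is just two linear equations in $a$ and $b$. As a numerical sanity check, here is a Python sketch (toy data, made up for illustration) that builds the sample means and solves the system by Cramer's rule:

```python
def mean(zs):
    return sum(zs) / len(zs)

xs = [1, 2, 3, 4]  # toy data
ys = [2, 4, 5, 8]

xbar  = mean(xs)
ybar  = mean(ys)
xybar = mean([x * y for x, y in zip(xs, ys)])
x2bar = mean([x * x for x in xs])

# System (3) in matrix form: [[1, xbar], [xbar, x2bar]] @ [a, b] = [ybar, xybar].
# Solve by Cramer's rule.
det = x2bar - xbar * xbar
a = (ybar * x2bar - xbar * xybar) / det
b = (xybar - xbar * ybar) / det

# Both equations of (3) hold at the solution (up to rounding).
print(abs(ybar - a - b * xbar) < 1e-12)         # → True
print(abs(xybar - a * xbar - b * x2bar) < 1e-12)  # → True
```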
Rearranging the result
Most students can solve the system of two equations (3) for the two unknowns $a, b$, so I skip this step. The solutions are

(4) $b = \dfrac{\overline{xy} - \bar{x}\bar{y}}{\overline{x^2} - \bar{x}^2}$,

(5) $a = \bar{y} - b\bar{x}$.
Do put the hats on the resulting $a, b$: $\hat{a}, \hat{b}$ are the estimators we have been looking for. Don't put the hats during the derivation, because there $a$ and $b$ have been variables.
Don't leave equation (4) in this form. Do use equations (5) from this post to rewrite equation (4) as

(6) $\hat{b} = \dfrac{\widehat{\mathrm{Cov}}(x, y)}{\widehat{\mathrm{Var}}(x)}$,

where $\widehat{\mathrm{Cov}}(x, y) = \overline{xy} - \bar{x}\bar{y}$ is the sample covariance and $\widehat{\mathrm{Var}}(x) = \overline{x^2} - \bar{x}^2$ is the sample variance.
Don't plug this expression into (5). In practice, (6) is calculated first and then the numerical result is plugged into (5).
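The recommended order of computation can be sketched in Python (toy data assumed; `sample_cov` and `sample_var` are hypothetical helper names implementing the uncentered-moment formulas above, not a library API):

```python
def mean(zs):
    return sum(zs) / len(zs)

def sample_cov(xs, ys):
    # \bar{xy} - \bar{x}\bar{y}
    return mean([x * y for x, y in zip(xs, ys)]) - mean(xs) * mean(ys)

def sample_var(xs):
    # \bar{x^2} - \bar{x}^2, a special case of the covariance
    return sample_cov(xs, xs)

xs = [1, 2, 3, 4]  # toy data
ys = [2, 4, 5, 8]

b_hat = sample_cov(xs, ys) / sample_var(xs)  # equation (6), computed first
a_hat = mean(ys) - b_hat * mean(xs)          # then plugged into (5)

print(b_hat, a_hat)
```

Computing the slope first and reusing its numerical value for the intercept mirrors the order recommended above.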