Derivation of OLS estimators: the do's and don'ts

Here I give the shortest rigorous derivation of the OLS estimators for simple regression, indicating the pitfalls.

If you just need an easy way to obtain the estimators and don't care about rigor, see this post.

### Definition and problem setup

Observations come in pairs $(x_i, y_i)$, $i = 1, \dots, n$. We want to approximate the y's by a linear function $a + bx_i$ of the x's and therefore we are interested in minimizing the residuals $e_i = y_i - a - bx_i$. It is impossible to choose two variables $a, b$ so as to minimize $n$ quantities at the same time. The compromise is achieved by minimizing the residual sum of squares $RSS = \sum_{i=1}^n (y_i - a - bx_i)^2$. **OLS estimators** are the values of $a, b$ that minimize RSS. We find them by applying first order conditions (FOC's), which are necessary for optima of differentiable functions.
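
The objective can be sketched in a few lines of code. This is a minimal numeric illustration with made-up data (the arrays `x`, `y` and the trial values of $a, b$ are my own assumptions, not from the post):

```python
import numpy as np

# Hypothetical sample data, for illustration only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

def rss(a, b):
    """Residual sum of squares for the line a + b*x."""
    residuals = y - (a + b * x)
    return np.sum(residuals ** 2)

# RSS depends on both a and b; a better line gives a smaller RSS
print(rss(0.0, 2.0))
print(rss(0.1, 1.95))
```

Trying different pairs $(a, b)$ by hand makes it clear why we need a systematic minimization: the OLS estimators are the pair at which this function attains its minimum.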

### Applying FOC's

**Don't** square out the residuals. This leads to large expressions that most students fail to handle.

**Do** apply the **chain rule** $\frac{d}{dt}f(g(t)) = f'(g(t))\,g'(t)$, where $f$ is called an external function and $g$ is called an internal function. In case of the squared residual we have $f(t) = t^2$ and $g_i(a, b) = y_i - a - bx_i$. Therefore

$\frac{\partial}{\partial a}(y_i - a - bx_i)^2 = -2(y_i - a - bx_i)$, $\frac{\partial}{\partial b}(y_i - a - bx_i)^2 = -2(y_i - a - bx_i)x_i$.

Summing, equating the result to zero and getting rid of $-2$ gives a system of equations

(1) $\sum_{i=1}^n (y_i - a - bx_i) = 0$, $\quad \sum_{i=1}^n (y_i - a - bx_i)x_i = 0$.
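
As a sanity check, both sums in (1) vanish at the minimizing values. The sketch below uses made-up data (my assumption) and `np.polyfit`, which solves the same least-squares problem numerically:

```python
import numpy as np

# Hypothetical sample data, for illustration only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# np.polyfit(..., 1) returns (slope, intercept) of the least-squares line
b, a = np.polyfit(x, y, 1)

# Both first order conditions in (1) should be (numerically) zero
foc1 = np.sum(y - a - b * x)
foc2 = np.sum((y - a - b * x) * x)
print(foc1, foc2)
```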

### Solving for $a, b$

**Don't** carry the summation signs. They are the trees that prevent many students from seeing the forest.

**Do** replace them with sample means ASAP using $\bar{z} = \frac{1}{n}\sum_{i=1}^n z_i$. Dividing equations (1) by $n$ gives

(2) $\overline{y - a - bx} = 0$, $\quad \overline{(y - a - bx)x} = 0$.

Notice that $n$ has been dropped and the subscript $i$ disappears together with the summation signs. The general linearity property of expectations $E(c_1 X_1 + c_2 X_2) = c_1 EX_1 + c_2 EX_2$ (where $c_1, c_2$ are numbers and $X_1, X_2$ are random variables) is true for sample means too: $\overline{c_1 x + c_2 y} = c_1\bar{x} + c_2\bar{y}$. It is used to rewrite equations (2) as

(3) $\bar{y} = a + b\bar{x}$, $\quad \overline{xy} = a\bar{x} + b\overline{x^2}$.
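
The linearity property for sample means is easy to confirm numerically. A minimal check, with arbitrary data and constants of my choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10)
y = rng.normal(size=10)
c1, c2 = 2.5, -1.3

# Linearity of the sample mean: mean(c1*x + c2*y) == c1*mean(x) + c2*mean(y)
lhs = np.mean(c1 * x + c2 * y)
rhs = c1 * np.mean(x) + c2 * np.mean(y)
print(abs(lhs - rhs))
```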

### Rearranging the result

Most students can solve the system of two equations (3) for the two unknowns, so I skip this step. The solutions are

(4) $\hat{b} = \dfrac{\overline{xy} - \bar{x}\,\bar{y}}{\overline{x^2} - (\bar{x})^2}$,

(5) $\hat{a} = \bar{y} - \hat{b}\bar{x}$.

**Do** put the hats on the resulting $a, b$, writing $\hat{a}, \hat{b}$: they are the estimators we have been looking for. **Don't** put the hats during the derivation, because until this point $a, b$ have been variables.

**Don't** leave equation (4) in this form. **Do** use equations (5) from this post (the shortcut formulas $\mathrm{Var}(x) = \overline{x^2} - (\bar{x})^2$ and $\mathrm{Cov}(x, y) = \overline{xy} - \bar{x}\,\bar{y}$) to rewrite equation (4) as

(6) $\hat{b} = \dfrac{\mathrm{Cov}(x, y)}{\mathrm{Var}(x)}$.

**Don't** plug this expression into (5). In practice, (6) is calculated first and then the result is plugged into (5).
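
The whole recipe fits in a few lines. A minimal sketch with made-up data (the arrays are my assumption), cross-checked against `np.polyfit`, which solves the same least-squares problem:

```python
import numpy as np

# Hypothetical sample data, for illustration only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Equation (6): slope = sample covariance over sample variance
b_hat = (np.mean(x * y) - np.mean(x) * np.mean(y)) / (np.mean(x ** 2) - np.mean(x) ** 2)
# Equation (5): intercept computed from the slope, in that order
a_hat = np.mean(y) - b_hat * np.mean(x)

# Cross-check against numpy's least-squares fit
b_ref, a_ref = np.polyfit(x, y, 1)
print(b_hat, a_hat)
```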
