Assumptions about simple regression
We consider the simple regression
(1) $y_i = a + bx_i + e_i, \quad i = 1, \dots, n.$
Here we derived the OLS estimators of the intercept and slope:
(2) $\hat{a} = \bar{y} - \hat{b}\,\bar{x}$,
(3) $\hat{b} = \dfrac{\operatorname{Cov}_s(x,y)}{\operatorname{Var}_s(x)}$,
where $\operatorname{Cov}_s(x,y) = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})$ and $\operatorname{Var}_s(x) = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2$ denote the sample covariance and sample variance, and $\bar{x}, \bar{y}$ are the sample means.
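To make formulas (2)-(3) concrete, here is a minimal numerical sketch in Python (the data values and variable names are made up for illustration; NumPy is assumed to be available):

```python
import numpy as np

# Hypothetical data: x is the regressor, y is the dependent variable.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_bar, y_bar = x.mean(), y.mean()

# Sample covariance and sample variance (divided by n, as in the text).
cov_xy = np.mean((x - x_bar) * (y - y_bar))
var_x = np.mean((x - x_bar) ** 2)        # must be nonzero, see assumption A1 below

b_hat = cov_xy / var_x                   # slope estimator, formula (3)
a_hat = y_bar - b_hat * x_bar            # intercept estimator, formula (2)
print(a_hat, b_hat)                      # should agree with np.polyfit(x, y, 1)[::-1]
```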
A1. Existence condition. Since division by zero is not allowed, for (3) (and hence for (2), which involves $\hat{b}$) to exist we require $\operatorname{Var}_s(x) \neq 0$. If this condition is not satisfied, then there is no variance in $x$ and all observed points lie on a vertical line.
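In the notation above this is easy to see: the sample variance is an average of squares, so it vanishes exactly when every observation of the regressor equals the common value $\bar{x}$,

$\operatorname{Var}_s(x) = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2 = 0 \quad\Longleftrightarrow\quad x_1 = \dots = x_n = \bar{x}.$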
A2. Convenience condition. The regressor $x$ is deterministic. This condition is imposed so that the properties of expectation can be applied, see equation (7) in this post. A time trend and dummy variables are examples of deterministic regressors. However, most real-life regressors are stochastic. Modifying the theory to cover stochastic regressors is the subject of two posts: finite-sample theory and large-sample theory.
A3. Unbiasedness condition. $Ee_i = 0$ for all $i$. This is the main assumption that makes sure that OLS estimators are unbiased, see equation (7) in this post.
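One quick way to see the role of A3 (a sketch, not the full unbiasedness proof referenced above): taking expectations in (1), with the regressor deterministic by A2,

$Ey_i = a + bx_i + Ee_i = a + bx_i,$

so under A3 the regression line describes the mean of the dependent variable.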
Unbiasedness is not enough
Unbiasedness characterizes the quality of an estimator, see the intuitive explanation. Unfortunately, unbiasedness is not enough to choose the best estimator because of nonuniqueness: usually, if there is one unbiased estimator of a parameter, then there are infinitely many unbiased estimators of the same parameter. For example, we know that the sample mean $\bar{X}$ unbiasedly estimates the population mean $\mu$: $E\bar{X} = \mu$. Since also $EX_1 = \mu$ ($X_1$ is the first observation), we can easily construct an infinite family of unbiased estimators $\hat{\mu} = c_1\bar{X} + c_2 X_1$, assuming $c_1 + c_2 = 1$. Indeed, using linearity of expectation,

$E\hat{\mu} = c_1 E\bar{X} + c_2 EX_1 = (c_1 + c_2)\mu = \mu.$
Variance is another measure of an estimator's quality: to have a lower spread of estimator values, among competing estimators we choose the one with the lowest variance. Knowing the estimator's variance also allows us to find the z-score and use statistical tables.
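A short simulation illustrates both points at once; the distribution, parameter values and the weights $c_1 = c_2 = 1/2$ below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 5.0, 50, 100_000            # hypothetical mean, sample size, replications

samples = rng.normal(loc=mu, scale=2.0, size=(reps, n))
mean_est = samples.mean(axis=1)                    # sample mean
alt_est = 0.5 * mean_est + 0.5 * samples[:, 0]     # c1 = c2 = 0.5, so c1 + c2 = 1

print(mean_est.mean(), alt_est.mean())    # both close to mu = 5: both estimators are unbiased
print(mean_est.var(), alt_est.var())      # the sample mean has the much smaller variance
```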
Slope estimator variance
It is not difficult to find the variance of the slope estimator using representation (6) derived here:

$\hat{b} = b + \sum_{i=1}^n a_i e_i,$ where $a_i = \dfrac{x_i - \bar{x}}{n\operatorname{Var}_s(x)}.$

Don't try to apply the definition of variance directly at this point, because there will be a square of a sum, which leads to a double sum. We need two new assumptions.
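To see where the double sum comes from, apply the definition of variance to representation (6), using A3 (so that $E\hat{b} = b$):

$\operatorname{Var}(\hat{b}) = E\Big(\sum_{i=1}^n a_i e_i\Big)^2 = \sum_{i=1}^n \sum_{j=1}^n a_i a_j E(e_i e_j).$

This involves all cross-moments $E(e_i e_j)$; the next two assumptions are exactly what is needed to collapse the double sum to a single sum.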
A4. Uncorrelatedness of errors. Assume that $\operatorname{Cov}(e_i, e_j) = 0$ for all $i \neq j$ (errors from different equations (1) are uncorrelated). Note that because of the unbiasedness condition, this assumption is equivalent to $E(e_i e_j) = 0$ for all $i \neq j$. This assumption is likely to be satisfied if we observe consumption patterns of unrelated individuals.
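The equivalence follows in one line from the shortcut formula for covariance, because $Ee_i = 0$ by A3:

$\operatorname{Cov}(e_i, e_j) = E(e_i e_j) - Ee_i\, Ee_j = E(e_i e_j).$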
A5. Homoscedasticity. All errors have the same variance: $\operatorname{Var}(e_i) = \sigma^2$ for all $i$. Again, because of the unbiasedness condition, this assumption is equivalent to $E(e_i^2) = \sigma^2$ for all $i$.
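Similarly, with $Ee_i = 0$ the variance of each error reduces to its second moment:

$\operatorname{Var}(e_i) = E(e_i^2) - (Ee_i)^2 = E(e_i^2).$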
Now we can derive the variance expression, using properties from this post:
$\operatorname{Var}(\hat{b}) = \operatorname{Var}\Big(b + \sum_{i=1}^n a_i e_i\Big) = \operatorname{Var}\Big(\sum_{i=1}^n a_i e_i\Big)$ (dropping a constant doesn't affect variance)

$= \sum_{i=1}^n \operatorname{Var}(a_i e_i)$ (for uncorrelated variables, variance is additive)

$= \sum_{i=1}^n a_i^2 \operatorname{Var}(e_i)$ (variance is homogeneous of degree 2)

$= \sigma^2 \sum_{i=1}^n a_i^2$ (applying homoscedasticity)

$= \sigma^2 \sum_{i=1}^n \frac{(x_i - \bar{x})^2}{\big(n\operatorname{Var}_s(x)\big)^2}$ (plugging in $a_i = \frac{x_i - \bar{x}}{n\operatorname{Var}_s(x)}$)

$= \frac{\sigma^2}{n\operatorname{Var}_s(x)}$ (using the notation of sample variance: $\sum_{i=1}^n (x_i - \bar{x})^2 = n\operatorname{Var}_s(x)$).
Note that canceling out the two variances in the last line is obvious with the short notation; it is less obvious if you write everything out with summation signs instead. The case of the intercept variance is left as an exercise.
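The result can be checked numerically. Below is a small Monte Carlo sketch; all parameter values are made up, and the regressor is held fixed across replications to respect A2:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, sigma, n, reps = 1.0, 2.0, 0.5, 30, 200_000    # hypothetical parameters

x = np.linspace(0.0, 10.0, n)                        # deterministic regressor (A2)
var_x = np.mean((x - x.mean()) ** 2)                 # sample variance of x

e = rng.normal(0.0, sigma, size=(reps, n))           # A3-A5: zero mean, uncorrelated, equal variances
y = a + b * x + e                                    # model (1), one replication per row

# Slope estimator (3) applied to every replication at once.
b_hat = ((x - x.mean()) * (y - y.mean(axis=1, keepdims=True))).mean(axis=1) / var_x

print(b_hat.var())                # empirical variance of the slope estimator
print(sigma**2 / (n * var_x))     # theoretical value sigma^2 / (n * Var_s(x))
```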
Conclusion
The above assumptions A1-A5 are called classical. It is necessary to remember their role in derivations because a considerable part of Econometrics is devoted to deviations from classical assumptions. Once a certain assumption is violated, you should expect the corresponding estimator property to be invalidated. For example, if $Ee_i \neq 0$ for some $i$, you should expect the estimators to be biased. If any of A4-A5 is not true, the formula we have derived,

$\operatorname{Var}(\hat{b}) = \frac{\sigma^2}{n\operatorname{Var}_s(x)},$

will not hold. Besides, the Gauss-Markov theorem, which states that the OLS estimators are efficient, will not hold (this will be discussed later). The pair A4-A5 can be called an efficiency condition.