### Regressions with stochastic regressors 1: applying conditioning

The convenience condition states that the regressor in simple regression is deterministic. Here we show how this assumption can be avoided using conditional expectation and variance. The general idea: check which parts of the proofs fail with stochastic regressors and modify the assumptions accordingly. It turns out that only the assumptions concerning the error term need to be replaced by their conditional counterparts.

### Unbiasedness in case of stochastic regressors

We consider the slope estimator for the simple regression

$y_i=a+bx_i+e_i,\quad i=1,\dots,n,$

assuming that the regressor $x_i$ is stochastic.

First grab the critical representation (6) derived here:

(1) $\hat{b}=b+\sum_{i=1}^n a_ie_i$, where $a_i=\dfrac{x_i-\bar{x}}{\sum_{j=1}^n(x_j-\bar{x})^2}$.
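Representation (1) is an exact algebraic identity, not an approximation, so it can be verified numerically to machine precision. Below is a minimal NumPy sketch; the sample size and parameter values are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of representation (1): b_hat = b + sum_i a_i e_i.
# The sample size and parameter values are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n, a_true, b_true = 50, 1.0, 2.0

x = rng.normal(size=n)              # stochastic regressor
e = rng.normal(size=n)              # error term
y = a_true + b_true * x + e

xc = x - x.mean()
b_hat = xc @ y / (xc @ xc)          # OLS slope estimator

a_coef = xc / (xc @ xc)             # the coefficients a_i from (1)
b_hat_repr = b_true + a_coef @ e    # right-hand side of (1)

print(abs(b_hat - b_hat_repr))      # zero up to floating-point error
```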

The usual linearity of means applied to prove unbiasedness doesn't work because now the coefficients $a_i$ are stochastic (in other words, they are not constant). But we have generalized linearity, which for the purposes of this proof can be written as

(2) $E\left(\sum_{i=1}^n a_ie_i\mid X\right)=\sum_{i=1}^n a_iE(e_i\mid X)$, where $X=(x_1,\dots,x_n)$.

The coefficients $a_i$ are functions of $X$, so conditionally on $X$ they behave as constants and can be pulled out of the expectation.

Let us replace the unbiasedness condition by its conditional version:

**A3'. Unbiasedness condition**. $E(e_i\mid X)=0$ for all $i$.

Then (1) and (2) give

(3) $E(\hat{b}\mid X)=b+\sum_{i=1}^n a_iE(e_i\mid X)=b,$

which can be called **conditional unbiasedness**. Next, applying the law of iterated expectations we obtain **unconditional unbiasedness**:

$E\hat{b}=E\left[E(\hat{b}\mid X)\right]=Eb=b.$
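The unbiasedness claim can be illustrated by a quick Monte Carlo sketch in which the regressor is redrawn on every replication, so its randomness is fully in play. All numbers below (sample size, slope, number of replications) are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo check of unconditional unbiasedness under A3'.
# n, the true coefficients, and the replication count are arbitrary choices.
rng = np.random.default_rng(1)
n, reps = 30, 20_000
a_true, b_true = 1.0, 2.0

b_hats = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)          # stochastic regressor, redrawn each time
    e = rng.normal(size=n)          # independent of x, so E(e_i | X) = 0
    y = a_true + b_true * x + e
    xc = x - x.mean()
    b_hats[r] = xc @ y / (xc @ xc)  # OLS slope

print(b_hats.mean())                # close to the true slope b_true = 2
```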

### Variance in case of stochastic regressors

As one can guess, we have to replace efficiency conditions by their conditional versions:

**A4'. Conditional uncorrelatedness of errors**. Assume that $Cov(e_i,e_j\mid X)=0$ for all $i\neq j$.

**A5'. Conditional homoscedasticity**. *All errors have the same conditional variances*: $Var(e_i\mid X)=\sigma^2$ for all $i$ ($\sigma^2$ is a constant).

Now we can derive the **conditional variance** expression, using properties from this post:

$Var(\hat{b}\mid X)=Var\left(b+\sum_i a_ie_i\mid X\right)=Var\left(\sum_i a_ie_i\mid X\right)$ (dropping a constant doesn't affect variance)

$=\sum_i Var(a_ie_i\mid X)$ (for conditionally uncorrelated variables, conditional variance is additive)

$=\sum_i a_i^2Var(e_i\mid X)$ (conditional variance is homogeneous of degree 2)

$=\sigma^2\sum_i a_i^2$ (applying conditional homoscedasticity)

$=\sigma^2\dfrac{\sum_i(x_i-\bar{x})^2}{\left[\sum_j(x_j-\bar{x})^2\right]^2}=\dfrac{\sigma^2}{\sum_i(x_i-\bar{x})^2}$ (plugging $a_i=\dfrac{x_i-\bar{x}}{\sum_j(x_j-\bar{x})^2}$)

$=\dfrac{\sigma^2}{n\widehat{Var}(x)}$ (using the notation of sample variance $\widehat{Var}(x)=\frac{1}{n}\sum_i(x_i-\bar{x})^2$), so

(4) $Var(\hat{b}\mid X)=\dfrac{\sigma^2}{n\widehat{Var}(x)}.$

Finally, using the law of total variance and equations (3) and (4) we obtain

(5) $Var(\hat{b})=E\left[Var(\hat{b}\mid X)\right]+Var\left[E(\hat{b}\mid X)\right]=E\left[\dfrac{\sigma^2}{n\widehat{Var}(x)}\right]$

(the second term vanishes because, by (3), $E(\hat{b}\mid X)=b$ is a constant).
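Equation (5) says that the unconditional variance of the slope estimator equals $\sigma^2$ times the *average* of $1/\sum_i(x_i-\bar{x})^2$ over the distribution of the regressor. A Monte Carlo sketch can compare the two sides directly; again, the numbers chosen are arbitrary illustrations.

```python
import numpy as np

# Monte Carlo check of (5): Var(b_hat) = sigma^2 * E[ 1 / sum_i (x_i - xbar)^2 ].
# n, sigma, and the replication count are arbitrary illustrative choices.
rng = np.random.default_rng(2)
n, reps, sigma = 30, 20_000, 1.5

b_hats = np.empty(reps)
inv_ssx = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)               # stochastic regressor, redrawn each time
    e = sigma * rng.normal(size=n)       # conditional variance sigma^2
    y = 1.0 + 2.0 * x + e
    xc = x - x.mean()
    b_hats[r] = xc @ y / (xc @ xc)       # OLS slope
    inv_ssx[r] = 1.0 / (xc @ xc)         # 1 / sum_i (x_i - xbar)^2

lhs = b_hats.var()                       # left-hand side of (5)
rhs = sigma**2 * inv_ssx.mean()          # right-hand side of (5)
print(lhs, rhs)                          # the two sides should be close
```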

### Conclusion

Replacing the three assumptions about the error term by their conditional counterparts yields almost perfect analogs of the usual properties of the OLS estimators: the usual (unconditional) unbiasedness holds, and in the expression for the estimator's variance the part containing the regressor is averaged, to account for its randomness. If you think that solving the problem of stochastic regressors requires nothing more than a couple of mathematical tricks, I agree with you.
