## All about the law of large numbers: properties and applications

### Level 1: estimation of population parameters

The law of large numbers is a statement about a type of convergence called **convergence in probability**, denoted $\text{plim}\,X_n = X$. The precise definition is rather technical, but the intuition is simple: it is convergence to a spike at the parameter being estimated. Usually, any unbiasedness statement has its analog in terms of the corresponding law of large numbers.

**Example 1**. The sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ unbiasedly estimates the population mean: $E\bar{X} = EX$. Its analog: the sample mean converges to a spike at the population mean: $\text{plim}\,\bar{X} = EX$. See the proof based on the Chebyshev inequality.
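As a quick sanity check (not part of the original argument), a short simulation shows the spike forming. The exponential population with mean 2 and the sample sizes are my own illustrative choices:

```python
import numpy as np

# Draw ever larger samples from an exponential population with mean 2
# and watch the sample mean's error shrink toward zero.
rng = np.random.default_rng(0)
pop_mean = 2.0  # illustrative choice of population

errors = []
for n in (10, 1_000, 100_000):
    sample = rng.exponential(scale=pop_mean, size=n)
    errors.append(abs(sample.mean() - pop_mean))

print(errors)  # the error shrinks as n grows
```

The distribution of $\bar{X}$ concentrates ever more tightly around $EX$: that is the "spike".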

**Example 2**. The sample variance unbiasedly estimates the population variance: $Es^2 = \text{Var}(X)$, where $s^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2$. Its analog: the sample variance converges to a spike at the population variance:

(1) $\text{plim}\,s^2 = \text{Var}(X)$.
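Equation (1) can also be checked by simulation. The uniform population on $[0,1]$, whose variance is $1/12$, is my own illustrative choice:

```python
import numpy as np

# The unbiased sample variance (ddof=1 divides by n - 1, matching the
# formula in the text) settles near the population variance 1/12.
rng = np.random.default_rng(1)
pop_var = 1 / 12  # variance of the uniform distribution on [0, 1]

sample = rng.uniform(size=200_000)
s2 = sample.var(ddof=1)
print(s2)  # close to 1/12 ≈ 0.0833
```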

**Example 3**. The **sample covariance** $\widehat{\text{Cov}}(X,Y) = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})(Y_i - \bar{Y})$ unbiasedly estimates the population covariance: $E\,\widehat{\text{Cov}}(X,Y) = \text{Cov}(X,Y)$. Its analog: the sample covariance converges to a spike at the population covariance:

(2) $\text{plim}\,\widehat{\text{Cov}}(X,Y) = \text{Cov}(X,Y)$.
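Equation (2) admits the same kind of check. The construction $y = 3x + \text{noise}$, which makes $\text{Cov}(x,y) = 3\,\text{Var}(x) = 3$, is my own illustrative choice:

```python
import numpy as np

# Sample covariance of a large paired sample settles near the
# population covariance, here equal to 3 by construction.
rng = np.random.default_rng(2)
n = 200_000
x = rng.normal(size=n)          # Var(x) = 1
y = 3 * x + rng.normal(size=n)  # Cov(x, y) = 3

sample_cov = np.cov(x, y, ddof=1)[0, 1]  # off-diagonal entry of the 2x2 matrix
print(sample_cov)  # close to 3
```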

### Up one level: convergence in probability is just convenient

Using convergence in probability (or not) is a matter of expedience. For usual limits of numerical sequences we know the properties which I call **preservation of arithmetic operations**:

$\lim (a_n \pm b_n) = \lim a_n \pm \lim b_n$,

$\lim (a_n b_n) = (\lim a_n)(\lim b_n)$,

$\lim (a_n / b_n) = \lim a_n / \lim b_n$ (provided $\lim b_n \neq 0$).

Convergence in probability has exactly the same properties; just replace $\lim$ with $\text{plim}$.
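A simulation illustrates the preservation rules for $\text{plim}$ (my own example, with arbitrarily chosen populations): since $\text{plim}\,\bar{x} = 2$ and $\text{plim}\,\bar{y} = 5$, the product of the sample means converges to $10$ and their ratio to $0.4$:

```python
import numpy as np

# plim preserves arithmetic: products and ratios of consistent
# estimators converge to the products and ratios of their limits.
rng = np.random.default_rng(3)
n = 1_000_000
x = rng.exponential(scale=2.0, size=n)  # EX = 2
y = rng.normal(loc=5.0, size=n)         # EY = 5

prod = x.mean() * y.mean()   # converges to 2 * 5 = 10
ratio = x.mean() / y.mean()  # converges to 2 / 5 = 0.4
print(prod, ratio)
```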

### Next level: making regression estimation more plausible

Using convergence in probability allows us to handle stochastic regressors and avoid the unrealistic assumption that regressors are deterministic. For instance, the OLS slope estimator is the ratio of the sample covariance of $x$ and $y$ to the sample variance of $x$; by (1), (2) and preservation of arithmetic operations, it converges in probability to $\text{Cov}(x,y)/\text{Var}(x)$ even when $x$ is random.
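A sketch of this consistency argument in simulation (the lognormal regressor and the slope/intercept values are my own illustrative choices, not from the original):

```python
import numpy as np

# With a stochastic (here lognormal, non-normal) regressor x, the OLS
# slope b = sample Cov(x, y) / sample Var(x) still converges to the
# true slope as the sample grows.
rng = np.random.default_rng(4)
true_slope, true_intercept = 1.5, -0.7

slopes = []
for n in (50, 5_000, 500_000):
    x = rng.lognormal(size=n)  # random regressor
    y = true_intercept + true_slope * x + rng.normal(size=n)
    b = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)
    slopes.append(b)

print(slopes)  # approaches 1.5 as n grows
```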

Convergence in probability and in distribution are two types of convergence of random variables that are widely used in the Econometrics course of the University of London.
