The law of large numbers: overview
I already have several posts about the law of large numbers:
- start with the intuition, which is illustrated using Excel;
- simulations in Excel show that convergence is not as fast as some textbooks claim;
- to distinguish the law of large numbers from the central limit theorem, read this;
- the ultimate purpose is the application to simple regression with a stochastic regressor.
Here we busy ourselves with the proof.
Measuring deviation of a random variable from a constant
Let $X$ be a random variable and $c$ some constant. We want a measure of $X$ differing from the constant by a given number $\varepsilon>0$ or more. The set where $X$ differs from $c$ by $\varepsilon$ or more is the outside of the segment $[c-\varepsilon,c+\varepsilon]$, that is, $\{|X-c|\ge\varepsilon\}=\{X\le c-\varepsilon\}\cup\{X\ge c+\varepsilon\}$.

Figure 1. Measuring the outside of the interval $[c-\varepsilon,c+\varepsilon]$
Now suppose $X$ has a density $p(x)$. It is natural to measure the set $\{|X-c|\ge\varepsilon\}$ by the probability $P(|X-c|\ge\varepsilon)$, that is, the area under the density outside the interval. This is illustrated in Figure 1.
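For those who like to see numbers, here is a quick sketch in Python; the standard normal choice for $X$ and the values of $c$ and $\varepsilon$ are mine, just for illustration.

```python
# Tail probability P(|X - c| >= eps) for a concrete density:
# here X is standard normal centered at c = 0 (an illustrative assumption).
from scipy.stats import norm

c, eps = 0.0, 1.5
left_tail = norm.cdf(c - eps, loc=c, scale=1.0)         # P(X <= c - eps)
right_tail = 1.0 - norm.cdf(c + eps, loc=c, scale=1.0)  # P(X >= c + eps)
print(left_tail + right_tail)  # about 0.134: the area outside [c-eps, c+eps]
```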
Convergence to a spike formalized

Figure 2. Convergence to a spike
Once again, check out the idea. Consider a sequence of random variables $T_n$ and a parameter $\theta$. Fix some $\varepsilon>0$ and consider a corridor $(\theta-\varepsilon,\theta+\varepsilon)$ of width $2\varepsilon$ around $\theta$. For $T_n$ to converge to a spike at $\theta$ we want the area $P(|T_n-\theta|\ge\varepsilon)$ outside the corridor to go to zero as we move along the sequence to infinity. This is illustrated in Figure 2, where, say, $T_1$ has a flat density and the density of $T_n$ is chisel-shaped. In the latter case the area $P(|T_n-\theta|\ge\varepsilon)$ is much smaller than in the former. The math of this phenomenon is such that $P(|T_n-\theta|\ge\varepsilon)$ should go to zero for any $\varepsilon>0$ (the narrower the corridor, the further to infinity we should move along the sequence).
Definition. Let $\theta$ be some parameter and let $T_n$ be a sequence of its estimators. We say that $T_n$ converges to $\theta$ in probability or, alternatively, $T_n$ consistently estimates $\theta$ if $P(|T_n-\theta|\ge\varepsilon)\rightarrow 0$ as $n\rightarrow\infty$ for any $\varepsilon>0$.
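To see the definition at work, here is a small Monte Carlo sketch in Python; the exponential population with $\theta=1$, $T_n$ the sample mean, and the chosen $\varepsilon$ are my toy assumptions.

```python
# Estimating P(|T_n - theta| >= eps) for growing n, where T_n is the
# mean of n exponential(1) draws, so theta = 1 (a toy assumption).
import numpy as np

rng = np.random.default_rng(0)
theta, eps, reps = 1.0, 0.1, 10_000

for n in (10, 100, 1000):
    T_n = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)
    print(n, np.mean(np.abs(T_n - theta) >= eps))
# The printed probabilities shrink toward zero as n grows.
```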
The law of large numbers in its simplest form
Let $X_1,\dots,X_n$ be an i.i.d. sample from a population with mean $\mu$ and variance $\sigma^2$. This is the situation from the standard Stats course. We need two facts about the sample mean $\bar{X}=\frac{1}{n}\sum_{i=1}^n X_i$: it is unbiased,

(1) $E\bar{X}=\mu$,

and its variance tends to zero,

(2) $Var(\bar{X})=\dfrac{\sigma^2}{n}\rightarrow 0$ as $n\rightarrow\infty$.
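For completeness: (1) is just linearity of the expected value, and (2) holds because for independent variables variances add up:

$E\bar{X}=\frac{1}{n}\sum_{i=1}^n EX_i=\frac{1}{n}\cdot n\mu=\mu,\qquad Var(\bar{X})=\frac{1}{n^2}\sum_{i=1}^n Var(X_i)=\frac{\sigma^2}{n}\rightarrow 0.$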
Now

$P(|\bar{X}-\mu|\ge\varepsilon)=P(|\bar{X}-E\bar{X}|\ge\varepsilon)$ (by (1))

$\le\dfrac{Var(\bar{X})}{\varepsilon^2}$ (by the Chebyshev inequality, see Extension 3)

$=\dfrac{\sigma^2}{n\varepsilon^2}\rightarrow 0$ (by (2))

as $n\rightarrow\infty$.
Since this is true for any $\varepsilon>0$, the sample mean is a consistent estimator of the population mean. This proves Example 1.
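The proof can also be watched numerically: the Chebyshev bound $\sigma^2/(n\varepsilon^2)$ always sits above the simulated probability, and both vanish. In the sketch below the standard normal population ($\mu=0$, $\sigma^2=1$) and the value of $\varepsilon$ are my own choices.

```python
# Simulated P(|Xbar - mu| >= eps) versus the Chebyshev bound
# sigma^2 / (n * eps^2); the standard normal population is an assumption.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, eps, reps = 0.0, 1.0, 0.5, 10_000

for n in (10, 100, 1000):
    xbar = rng.normal(mu, sigma2 ** 0.5, size=(reps, n)).mean(axis=1)
    empirical = np.mean(np.abs(xbar - mu) >= eps)
    bound = min(sigma2 / (n * eps ** 2), 1.0)  # probabilities never exceed 1
    print(n, empirical, bound)
# The bound is crude but always dominates the simulated probability,
# and both go to zero as n grows.
```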
Final remarks
The above proof applies in the following, more general situation.
Theorem. Let $\theta$ be some parameter and let $T_n$ be a sequence of its estimators such that: a) $ET_n=\theta$ for any $n$ and b) $Var(T_n)\rightarrow 0$ as $n\rightarrow\infty$. Then $T_n$ converges in probability to $\theta$.
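Indeed, nothing in the above chain is specific to the sample mean: replacing $\bar{X}$ with $T_n$ and $\mu$ with $\theta$ gives $P(|T_n-\theta|\ge\varepsilon)=P(|T_n-ET_n|\ge\varepsilon)\le Var(T_n)/\varepsilon^2\rightarrow 0$.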
This statement is often used on the Econometrics exams of the University of London.
In the unbiasedness definition the sample size is fixed; in the consistency definition it tends to infinity. The above theorem says that unbiasedness for all $n$ plus $Var(T_n)\rightarrow 0$ are sufficient for consistency.