### Central Limit Theorem versus Law of Large Numbers

**They say**: The Central Limit Theorem (CLT) describes the expected shape of the sampling distribution of the sample mean. For a random sample of size n from a population having mean μ and standard deviation σ, as the sample size n increases, the sampling distribution of the sample mean approaches an approximately normal distribution. (Agresti and Franklin, p. 321)

**I say**: There are at least three problems with this statement.

**Problem 1**. With any notion or statement, I first want to know its purpose. The primary purpose of the law of large numbers is to estimate population parameters. The Central Limit Theorem may be a nice theoretical result, but why do I need it? The motivation is similar to the one we use for introducing the z score. There is a myriad of distributions, and only some standard ones have been tabulated. Suppose we have a sequence of variables T_n, none of which have been tabulated. Suppose also that, as n increases, those variables become close to a normal variable T in the sense that the cumulative probabilities (areas under their respective densities) become close:

(1) P(T_n ≤ t) → P(T ≤ t) for all t.

Then we can use tables developed for normal variables to approximate P(T_n ≤ t). This justifies using (1) as the definition of a new convergence type called **convergence in distribution**.
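The idea in (1) can be checked numerically. Below is a minimal Python sketch (not from the post, which uses Excel): T_n is taken, as a hypothetical example, to be the standardized mean of n exponential draws, and the empirical P(T_n ≤ t) is compared with the tabulated normal P(T ≤ t) on a grid of t values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical example of (1): T_n is the standardized mean of n
# exponential(1) draws (population mean 1, standard deviation 1).
# Convergence in distribution says P(T_n <= t) -> P(T <= t) for every t,
# where T is standard normal here.
def standardized_means(n, reps=20_000):
    xbar = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
    return (xbar - 1.0) / (1.0 / np.sqrt(n))

t_grid = np.linspace(-3, 3, 13)
gaps = {}
for n in (5, 50, 500):
    tn = standardized_means(n)
    # empirical P(T_n <= t) versus the normal-table value P(T <= t)
    gaps[n] = max(abs(np.mean(tn <= t) - norm.cdf(t)) for t in t_grid)
    print(f"n={n:3d}  max CDF gap over grid = {gaps[n]:.4f}")
```

The gap shrinks as n grows, which is exactly what licenses replacing an untabulated distribution with the normal table.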

**Problem 2**. Having introduced convergence (1), we need to understand what it means in terms of densities (distributions). As illustrated in Excel, the law of large numbers means convergence to a spike: the distribution of the sample mean converges to a mass concentrated at μ (the densities contract to one point). Referring to the sample mean in the context of the CLT is therefore misleading, because the CLT is about the **stabilization of densities**.
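The contraction to a spike is easy to see in simulation. This is a minimal Python sketch (an analogue of the post's Excel illustration, with hypothetical population parameters μ = 10, σ = 2): the spread of the sample mean's distribution falls like σ/√n, so its density piles up at μ.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 10.0, 2.0  # hypothetical population parameters

# Law of large numbers: the distribution of the sample mean contracts
# to a spike at mu, because its spread sigma/sqrt(n) goes to zero.
spreads = {}
for n in (10, 100, 1000):
    means = rng.normal(mu, sigma, size=(10_000, n)).mean(axis=1)
    spreads[n] = means.std()
    print(f"n={n:5d}  center = {means.mean():.3f}  "
          f"spread = {spreads[n]:.4f}  (theory: {sigma / np.sqrt(n):.4f})")
```

The center stays at μ while the spread shrinks by a factor of √10 at each step, which is why the sample-mean densities never stabilize to any fixed curve.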

Figure 1 appeared in my posts before; I have just added n = 10,000 to show that the densities do not stabilize.

In Figure 2, for clarity I use line plots instead of histograms. The density for n = 100 is very rugged. The blue line (for n = 1,000) is more rugged than the orange one (for n = 10,000). Convergence to a normal shape is visible, although slow.

**Main problem**. It is not the sample means that converge to a normal distribution. It is their z scores

z_n = (X̄_n − μ)/(σ/√n)

that do. Specifically,

P(z_n ≤ t) → P(Z ≤ t) for all t,

where Z is a standard normal variable.
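The statement above can be sketched in Python (again a stand-in for the post's Excel simulation, using a hypothetical exponential population, which has mean 1 and standard deviation 1): the z scores of sample means keep mean 0 and standard deviation 1 for every n, while their skewness decays toward the standard normal's value of 0.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0  # exponential(1) population: mean 1, sd 1

# z scores of sample means: z_n = (xbar - mu) / (sigma / sqrt(n)).
# Unlike the sample means themselves, these stabilize: mean 0, sd 1,
# and the shape approaches the standard normal as n grows.
def z_scores(n, reps=20_000):
    xbar = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
    return (xbar - mu) / (sigma / np.sqrt(n))

skew = {}
for n in (10, 100, 1000):
    z = z_scores(n)
    skew[n] = np.mean(z**3)  # third moment; equals 0 for a standard normal
    print(f"n={n:4d}  mean={z.mean():+.3f}  sd={z.std():.3f}  "
          f"skewness={skew[n]:+.3f}")
```

The printed means stay near zero for every n (the point made about Figure 2), while the skewness shrinking toward zero shows the z scores settling onto the standard normal shape.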

In my simulations I used sample means for Figure 1 and z scores of sample means for Figure 2. In particular, z scores always have mean zero, and that can be seen in Figure 2. In your class, you can use the Excel file; as usual, you have to enable macros.
