Distributions derived from normal variables

Useful facts about independence

In the one-dimensional case the economical way to define normal variables is this: define a standard normal variable, and then define a general normal variable as its linear transformation.

In the case of many dimensions we follow the same idea. Before doing that, we state without proof two useful facts about the independence of random variables (real-valued, not vectors).

Theorem 1. Suppose variables X_1,...,X_n have densities p_1(x_1),...,p_n(x_n). Then they are independent if and only if their joint density p(x_1,...,x_n) is a product of individual densities: p(x_1,...,x_n)=p_1(x_1)...p_n(x_n).

Theorem 2. If variables X,Y are normal, then they are independent if and only if they are uncorrelated: cov(X,Y)=0.

The necessity part (independence implies uncorrelatedness) is trivial: independence gives E(XY)=(EX)(EY), so cov(X,Y)=E(XY)-(EX)(EY)=0.

Normal vectors

Let z_1,...,z_n be independent standard normal variables. A standard normal variable is defined by its density, so all of the z_i have the same density. We achieve independence, according to Theorem 1, by defining their joint density to be the product of the individual densities.

Definition 1. A standard normal vector of dimension n is defined by

z=\left(\begin{array}{c}z_1\\ \vdots\\ z_n\end{array}\right).

Properties. Ez=0 because all of the z_i have mean zero. Further, cov(z_i,z_j)=0 for i\neq j by Theorem 2, and the variance of a standard normal is 1. Therefore, from the expression for the variance of a vector we see that Var(z)=I.
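To see these properties numerically, here is a minimal simulation sketch in Python (numpy is an assumption, as the post itself relies on Mathematica; the dimension n=3, the seed and the number of draws are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
n, draws = 3, 200_000

# Each row is one realization of the standard normal vector z of dimension n.
z = rng.standard_normal((draws, n))

print(z.mean(axis=0))           # close to the zero vector, matching Ez = 0
print(np.cov(z, rowvar=False))  # close to the identity matrix, matching Var(z) = I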

Definition 2. For a matrix A and a vector \mu of compatible dimensions, a normal vector is defined by X=Az+\mu.

Properties. EX=AEz+\mu=\mu and

Var(X)=Var(Az)=E\left[(Az)(Az)^T\right]=AE(zz^T)A^T=AIA^T=AA^T

(recall that the variance of a vector is always nonnegative definite, and AA^T indeed has this property).
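The two properties can be checked numerically in the same way. A hedged sketch (the particular A and \mu below are arbitrary choices, not from the original):

import numpy as np

rng = np.random.default_rng(0)
draws = 500_000

# Arbitrary A (2x3) and mu (2-vector); any compatible dimensions would do.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 2.0, 1.0]])
mu = np.array([1.0, -2.0])

z = rng.standard_normal((draws, 3))
X = z @ A.T + mu                # each row is one realization of X = Az + mu

print(X.mean(axis=0))           # close to mu
print(np.cov(X, rowvar=False))  # close to A @ A.T
print(A @ A.T)                  # theoretical Var(X) for comparison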

Distributions derived from normal variables

In the definitions of the standard distributions (chi-square, t distribution and F distribution) there is no reference to any sample data. Unlike statistics, which by definition are functions of sample data, these and other standard distributions are theoretical constructs. Statistics are developed in such a way as to have a distribution equal, or asymptotically equal, to one of the standard distributions. This allows practitioners to use the tables developed for the standard distributions.

Exercise 1. Prove that \chi_n^2/n converges to 1 in probability.

Proof. For a standard normal z we have Ez^2=1 and Var(z^2)=2 (both properties can be verified in Mathematica). Hence, E(\chi_n^2/n)=1 and

Var(\chi_n^2/n)=\sum_{i=1}^n Var(z_i^2)/n^2=2/n\rightarrow 0.

Now the statement follows from the simple form of the law of large numbers: a constant mean and a variance tending to zero imply convergence in probability, by the Chebyshev inequality.
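Both ingredients of the proof can be illustrated by simulation. A minimal Python sketch (numpy assumed; sample sizes are arbitrary) verifying Ez^2=1, Var(z^2)=2 and the concentration of \chi_n^2/n around 1:

import numpy as np

rng = np.random.default_rng(0)

# Check Ez^2 = 1 and Var(z^2) = 2 for a standard normal z.
z = rng.standard_normal(1_000_000)
print(np.mean(z**2), np.var(z**2))       # approximately 1 and 2

# chi^2_n / n concentrates at 1: its sample variance behaves like 2/n.
for n in (10, 100, 1000):
    chi2 = (rng.standard_normal((10_000, n))**2).sum(axis=1)
    ratio = chi2 / n                     # 10,000 replications of chi^2_n / n
    print(n, ratio.mean(), ratio.var())  # mean near 1, variance near 2/n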

Exercise 1 implies that for large n the t distribution is close to a standard normal: the denominator \sqrt{\chi_n^2/n} in its definition converges to 1 in probability.


Definitions of chi-square, t statistic and F statistic

The definitions of the standard normal distribution and of independence can be combined to produce the definitions of the chi-square, t and F statistics. The similarity of these definitions makes them easier to study.

Independence of continuous random variables

The definition of independent discrete random variables easily carries over to the continuous case. Let X,Y be two continuous random variables with densities p_X,\ p_Y, respectively. We say that these variables are independent if the density p_{X,Y} of the pair (X,Y) is the product of the individual densities:

(1) p_{X,Y}(s,t)=p_X(s)p_Y(t) for all s,t.

As in this post, equation (1) can be understood in two ways. If (1) is given, then X,Y are independent. Conversely, if we want them to be independent, we can define the density of the pair by equation (1). This definition readily generalizes to the case of many variables. In particular, if we want variables z_1,...,z_n to be standard normal and independent, we say that each of them has the density defined here and that the joint density p_{z_1,...,z_n} is the product of the individual densities.
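For instance (a worked special case added here for concreteness), multiplying the standard normal densities gives the joint density of z_1,...,z_n in closed form:

p_{z_1,...,z_n}(x_1,...,x_n)=\prod_{i=1}^n\frac{1}{\sqrt{2\pi}}e^{-x_i^2/2}=(2\pi)^{-n/2}\exp\left(-\frac{1}{2}\sum_{i=1}^nx_i^2\right).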

Definition of chi-square variable

[Figure 1. Chi-square density with 1 degree of freedom]

Let z_1,...,z_n be standard normal and independent. Then the variable \chi^2_n=z_1^2+...+z_n^2 is called a chi-square variable with n degrees of freedom. Obviously, \chi^2_n\ge 0, which means that its density is zero to the left of the origin. For low degrees of freedom the density is unbounded near the origin, see Figure 1.
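A simulated chi-square built directly from this definition can be checked against tabulated values. A minimal sketch in Python (numpy and scipy assumed as stand-ins for the Mathematica file; n and the test point are arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1  # degrees of freedom; n = 1 is the case shown in Figure 1

# chi^2_n as a sum of n squared independent standard normals.
chi2_draws = (rng.standard_normal((100_000, n))**2).sum(axis=1)

# The empirical CDF at a test point is close to the scipy chi-square CDF.
print(np.mean(chi2_draws <= 1.0), stats.chi2.cdf(1.0, df=n))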

Definition of t distribution

[Figure 2. t distribution and standard normal densities compared]

Let z_0,z_1,...,z_n be standard normal and independent. Then the variable t_n=\frac{z_0}{\sqrt{(z_1^2+...+z_n^2)/n}} is called a t variable with n degrees of freedom. The density of the t distribution is bell-shaped and, for low n, has fatter tails than that of the standard normal. For high n it approaches the standard normal density, see Figure 2.
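The same can be done for the t variable, including a look at its tails (same assumptions; n=3 is an arbitrary low value chosen to make the fat tails visible):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 3  # low degrees of freedom: visibly fatter tails than the standard normal

z0 = rng.standard_normal(100_000)
denom = np.sqrt((rng.standard_normal((100_000, n))**2).sum(axis=1) / n)
t_draws = z0 / denom                     # t_n built from its definition

print(np.mean(np.abs(t_draws) > 3))      # simulated two-sided tail of t_3
print(2 * (1 - stats.t.cdf(3, df=n)))    # theoretical t_3 tail, close to above
print(2 * (1 - stats.norm.cdf(3)))       # standard normal tail, much smaller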

Definition of F distribution

[Figure 3. F distribution with (1,m) degrees of freedom]

Let u_1,...,u_n,v_1,...,v_m be standard normal and independent. Then the variable F_{n,m}=\frac{(u_1^2+...+u_n^2)/n}{(v_1^2+...+v_m^2)/m} is called an F variable with (n,m) degrees of freedom. It is nonnegative, so its density is zero to the left of the origin. When n is low, the density is unbounded in a neighborhood of zero, see Figure 3.
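A corresponding sketch for the F variable (same assumptions; the degrees of freedom (1,10) echo Figure 3 but are otherwise arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, m = 1, 10  # (n, m) degrees of freedom; low n makes the density unbounded at 0

num = (rng.standard_normal((100_000, n))**2).sum(axis=1) / n
den = (rng.standard_normal((100_000, m))**2).sum(axis=1) / m
F_draws = num / den                      # F_{n,m} built from its definition

# The empirical CDF at a test point is close to the scipy F CDF.
print(np.mean(F_draws <= 2.0), stats.f.cdf(2.0, dfn=n, dfd=m))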

The Mathematica file and video illustrate the densities of these three variables in more detail.

Consequences

  1. If \chi^2_n and \chi^2_m are independent, then \chi^2_n+\chi^2_m is distributed as \chi^2_{n+m} (addition rule). This rule is applied in the theory of ANOVA models.
  2. t_n^2=F_{1,n}. Indeed, squaring the definition of t_n gives t_n^2=\frac{z_0^2/1}{(z_1^2+...+z_n^2)/n}, which is exactly the definition of F_{1,n}. This yields an easy proof of equation (2.71) from Introduction to Econometrics by Christopher Dougherty (Oxford University Press, 2016); a numerical quantile check follows below.
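The identity t_n^2=F_{1,n} can be confirmed through quantiles: since P(t_n^2\le c)=P(|t_n|\le\sqrt{c}), the p-quantile of F_{1,n} is the square of the (1+p)/2-quantile of t_n. A sketch in Python (scipy assumed; n=7 and p=0.95 are arbitrary illustrative choices):

from scipy import stats

n, p = 7, 0.95

t_q = stats.t.ppf((1 + p) / 2, df=n)  # two-sided t quantile
f_q = stats.f.ppf(p, dfn=1, dfd=n)    # F quantile with (1, n) degrees of freedom
print(t_q**2, f_q)                    # equal up to floating-point rounding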