2 Jan 17

Conditional variance properties

Preliminaries

Review Properties of conditional expectation, especially the summary, where I introduce a new notation for conditional expectation. Everywhere I use the notation E_Y\pi for the expectation of \pi conditional on Y, instead of E(\pi|Y).

This post and the previous one on conditional expectation show that conditioning is a pretty advanced notion. Many introductory books use the condition E_xu=0 (the expected value of the error term u conditional on the regressor x is zero). Because of the complexity of conditioning, I think it's better to avoid this kind of assumption as much as possible.

Conditional variance properties

Replacing usual expectations by their conditional counterparts in the definition of variance, we obtain the definition of conditional variance:

(1) Var_Y(X)=E_Y(X-E_YX)^2.
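
For a discrete joint distribution, definition (1) can be computed directly. Here is a minimal sketch in Python with numpy (not part of the original post's toolkit); the joint probabilities below are made up for illustration.

import numpy as np

# Made-up joint distribution of (X, Y): rows index values of X, columns values of Y
x_vals = np.array([0.0, 1.0, 2.0])
p_xy = np.array([[0.10, 0.20],
                 [0.30, 0.10],
                 [0.10, 0.20]])          # entries sum to 1

p_y = p_xy.sum(axis=0)                   # marginal distribution of Y
p_x_given_y = p_xy / p_y                 # each column is the distribution of X given a value of Y
e_x_given_y = x_vals @ p_x_given_y       # E_YX, one number for each value of Y
var_x_given_y = ((x_vals[:, None] - e_x_given_y) ** 2 * p_x_given_y).sum(axis=0)  # definition (1)
print(e_x_given_y, var_x_given_y)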

Property 1. If X,Y are independent, then X-EX and Y are also independent and conditioning doesn't change variance:

Var_Y(X)=E_Y(X-EX)^2=E(X-EX)^2=Var(X),

see Conditioning in case of independence.

Property 2. Generalized homogeneity of degree 2: if a is a deterministic function, then a^2(Y) can be pulled out:

Var_Y(a(Y)X)=E_Y[a(Y)X-E_Y(a(Y)X)]^2=E_Y[a(Y)X-a(Y)E_YX]^2
=E_Y[a^2(Y)(X-E_YX)^2]=a^2(Y)E_Y(X-E_YX)^2=a^2(Y)Var_Y(X).

Property 3. Shortcut for conditional variance:

(2) Var_Y(X)=E_Y(X^2)-(E_YX)^2.

Proof.

Var_Y(X)=E_Y(X-E_YX)^2=E_Y[X^2-2XE_YX+(E_YX)^2]

(distributing conditional expectation)

=E_YX^2-2E_Y(XE_YX)+E_Y(E_YX)^2

(applying Properties 2 and 6 from this Summary with a(Y)=E_YX)

=E_YX^2-2(E_YX)^2+(E_YX)^2=E_YX^2-(E_YX)^2.

Property 4. The law of total variance:

(3) Var(X)=Var(E_YX)+E[Var_Y(X)].

Proof. By the shortcut for usual variance and the law of iterated expectations

Var(X)=EX^2-(EX)^2=E[E_Y(X^2)]-[E(E_YX)]^2

(replacing E_Y(X^2) from (2))

=E[Var_Y(X)]+E(E_YX)^2-[E(E_YX)]^2

(the last two terms give the shortcut for variance of E_YX)

=E[Var_Y(X)]+Var(E_YX).
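
On simulated data, the law of total variance (3) can be checked directly, because the group-by-group decomposition of the sample variance holds exactly (up to floating-point error). A minimal sketch in Python; the distribution of (X,Y) is made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
y = rng.integers(0, 3, size=n)                                # Y takes values 0, 1, 2
x = rng.normal(loc=2.0 * y, scale=1.0 + y)                    # given Y, X has mean 2Y and standard deviation 1+Y

p_y = np.array([(y == k).mean() for k in range(3)])           # group proportions
e_x_given_y = np.array([x[y == k].mean() for k in range(3)])  # conditional means E_YX
var_x_given_y = np.array([x[y == k].var() for k in range(3)]) # conditional variances Var_Y(X)

lhs = x.var()                                                 # Var(X)
var_of_cond_mean = (p_y * e_x_given_y ** 2).sum() - (p_y * e_x_given_y).sum() ** 2
rhs = var_of_cond_mean + (p_y * var_x_given_y).sum()          # Var(E_YX) + E[Var_Y(X)]
print(lhs, rhs)                                               # the two numbers coincide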

Before we move further we need to define conditional covariance by

Cov_Y(S,T) = E_Y(S - E_YS)(T - E_YT)

(everywhere usual expectations are replaced by conditional ones). We say that random variables S,T are conditionally uncorrelated if Cov_Y(S,T) = 0.

Property 5. Conditional variance of a linear combination. For any random variables S,T and functions a(Y),b(Y) one has

Var_Y(a(Y)S + b(Y)T)=a^2(Y)Var_Y(S)+2a(Y)b(Y)Cov_Y(S,T)+b^2(Y)Var_Y(T).

The proof is quite similar to that in the case of usual variances, so we leave it to the reader. In particular, if S,T are conditionally uncorrelated, then the interaction term disappears:

Var_Y(a(Y)S + b(Y)T)=a^2(Y)Var_Y(S)+b^2(Y)Var_Y(T).
26 Dec 16

Multiple regression through the prism of dummy variables

Agresti and Franklin on p.658 say: The indicator variable for a particular category is binary. It equals 1 if the observation falls into that category and it equals 0 otherwise. I say: For most students, this is not clear enough.

Problem statement

Figure 1. Residential power consumption in 2014 and 2015. Source: http://www.eia.gov/electricity/data.cfm

Residential power consumption in the US has a seasonal pattern. Heating in winter and cooling in summer cause the differences. We want to capture the dependence of residential power consumption PowerC on the season.

 Visual approach to dummy variables

Seasons of the year are categorical variables. We have to replace them with quantitative variables to be able to use them in any mathematical procedure that involves arithmetic operations. To this end, we define a dummy variable (indicator) D_{win} for winter such that it equals 1 in winter and 0 in any other period of the year. The dummies D_{spr},\ D_{sum},\ D_{aut} for spring, summer and autumn are defined similarly. We provide two visualizations assuming monthly observations.

Table 1. Tabular visualization of dummies
Month D_{win} D_{spr} D_{sum} D_{aut} D_{win}+D_{spr}+ D_{sum}+D_{aut}
December 1 0 0 0 1
January 1 0 0 0 1
February 1 0 0 0 1
March 0 1 0 0 1
April 0 1 0 0 1
May 0 1 0 0 1
June 0 0 1 0 1
July 0 0 1 0 1
August 0 0 1 0 1
September 0 0 0 1 1
October 0 0 0 1 1
November 0 0 0 1 1

Figure 2. Graphical visualization of D_spr

The first idea may be wrong

The first thing that comes to mind is to regress PowerC on dummies as in

(1) PowerC=a+bD_{win}+cD_{spr}+dD_{sum}+eD_{aut}+error.

Not so fast. To see the problem, let us rewrite (1) as

(2) PowerC=a\times 1+bD_{win}+cD_{spr}+dD_{sum}+eD_{aut}+error.

This shows that, in addition to the four dummies, there is a fifth variable, which equals 1 across all observations. Let us denote it T (for Trivial). Table 1 shows that

(3) T=D_{win}+D_{spr}+ D_{sum}+D_{aut}.

This makes the next definition relevant. Regressors x_1,...,x_k are called linearly dependent if one of them, say, x_1, can be expressed as a linear combination of the others: x_1=a_2x_2+...+a_kx_k. In case (3), all coefficients a_i equal 1, so we have linear dependence. Using (3), let us replace T in (2). The resulting equation is rearranged as

(4) PowerC=(a+b)D_{win}+(a+c)D_{spr}+(a+d)D_{sum}+(a+e)D_{aut}+error.

Now we see what the problem is. When regressors are linearly dependent, the model is not uniquely specified. (1) and (4) are two different representations of the same model.
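
The linear dependence in (3) is easy to see numerically: the matrix of regressors that includes both T and all four dummies does not have full column rank. Here is a minimal sketch in Python with numpy (monthly dummies constructed as in Table 1); it is an illustration, not part of the original estimation.

import numpy as np

months = np.arange(24) % 12                                  # two years of monthly data, 0 = January
d_win = ((months == 11) | (months <= 1)).astype(float)       # December, January, February
d_spr = ((months >= 2) & (months <= 4)).astype(float)        # March, April, May
d_sum = ((months >= 5) & (months <= 7)).astype(float)        # June, July, August
d_aut = ((months >= 8) & (months <= 10)).astype(float)       # September, October, November
trivial = np.ones(24)                                        # the trivial regressor T

X_all = np.column_stack([trivial, d_win, d_spr, d_sum, d_aut])
X_drop = np.column_stack([trivial, d_spr, d_sum, d_aut])     # winter dummy dropped
print(np.linalg.matrix_rank(X_all))                          # 4, not 5: linear dependence (3)
print(np.linalg.matrix_rank(X_drop))                         # 4: full column rank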

What is the way out?

If regressors are linearly dependent, drop them one after another until you get linearly independent ones. For example, dropping the winter dummy, we get

(5) PowerC=a+cD_{spr}+dD_{sum}+eD_{aut}+error.

Here is the estimation result for the two-year data in Figure 1:

PowerC=128176-27380D_{spr}+5450D_{sum}-22225D_{aut}.

This means that:

PowerC=128176 in winter, PowerC=128176-27380 in spring,

PowerC=128176+5450 in summer, and PowerC=128176-22225 in autumn.

It is revealing that cooling requires more power than heating. However, the summer coefficient is not significant. Here is the Excel file with the data and estimation result.

The category that has been dropped is called a base (or reference) category. Thus, the intercept in (5) measures power consumption in winter. The dummy coefficients in (5) measure deviations of power consumption in respective seasons from that in winter.
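
For readers who prefer code to Excel, here is a minimal sketch in Python of estimating an equation like (5) by OLS. The consumption series below is simulated around the coefficients reported above plus noise; it is not the EIA data.

import numpy as np

rng = np.random.default_rng(1)
months = np.arange(24) % 12                                  # 0 = January
d_spr = ((months >= 2) & (months <= 4)).astype(float)
d_sum = ((months >= 5) & (months <= 7)).astype(float)
d_aut = ((months >= 8) & (months <= 10)).astype(float)

# Simulated power consumption built around the reported coefficients
power = 128176 - 27380 * d_spr + 5450 * d_sum - 22225 * d_aut + rng.normal(0, 3000, 24)

X = np.column_stack([np.ones(24), d_spr, d_sum, d_aut])      # winter is the base category
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
print(coef)       # intercept ~ winter level, other coefficients ~ deviations from winter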

Here is the question I ask my students

We want to see how beer consumption BeerC depends on gender and income Inc. Let M and F denote the dummies for males and females, resp. Correct the following model and interpret the resulting coefficients:

BeerC=a+bM+cF+dM^2+eF^2+fFM+(h+iM)Inc.

Final remark

When a researcher includes all categories plus the trivial regressor, he/she falls into what is called the dummy variable trap. The problem of linear dependence among regressors is usually discussed under the heading of multiple regression. But since the trivial regressor is present in simple regression too, it might be a good idea to discuss it earlier.

Linear dependence/independence of regressors is an exact condition for existence of the OLS estimator. That is, if regressors are linearly dependent, then the OLS estimator doesn't exist, in which case the question about its further properties doesn't make sense. If, on the other hand, regressors are linearly independent, then the OLS estimator exists, and further properties can be studied, such as unbiasedness, variance and efficiency.

17 Dec 16

Testing for structural changes: a topic suitable for AP Stats

Problem statement

Economic data are volatile but sometimes changes in them look more permanent than transitory.

Figure 1. US GDP from agriculture. Source: http://www.tradingeconomics.com/united-states/gdp-from-agriculture

Figure 1 shows fluctuations of US GDP from agriculture. There have been ups and downs throughout the period of 2005-2016, but overall the trend was up until 2013 and has been down since then. We want an objective, statistical confirmation that the change in 2013 was structural (substantial) rather than a random fluctuation.

Chow test steps

  1. Divide the observed sample into two parts, A and B, at the point where you suspect the structural change (or break) has occurred. Run three regressions: one for A, another for B and the third one for the whole sample (pooled regression). Get the residual sums of squares from each of them, denoted RSS_A, RSS_B and RSS_p, respectively.
  2. Let n_A and n_B be the numbers of observations in the two subsamples and suppose there are k coefficients in your regression (for Figure 1, we would regress GDP on a time variable, so the number of coefficients would be 2, including the intercept). The Chow test statistic is defined by

 F_{k,n_A+n_B-2k}=\frac{(RSS_p-RSS_A-RSS_B)/k}{(RSS_A+RSS_B)/(n_A+n_B-2k)}.

This statistic is distributed as F with k,n_A+n_B-2k degrees of freedom. The null hypothesis is that the coefficients are the same for the two subsamples and the alternative is that they are not. If the statistic is larger than the critical value at your chosen level of significance, splitting the sample in two is beneficial (better describes the data). If the statistic is not larger than the critical value, the pooled regression better describes the data.
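
Here is a minimal computational sketch of the Chow test in Python (numpy and scipy assumed). The series is simulated with a break in the middle; it is not the GDP data from Figure 1.

import numpy as np
from scipy import stats

def rss(y, X):
    """Residual sum of squares from an OLS regression of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

def chow_test(y, X, split):
    """Chow F statistic and p-value for a break after observation `split`."""
    k = X.shape[1]
    rss_p = rss(y, X)
    rss_a = rss(y[:split], X[:split])
    rss_b = rss(y[split:], X[split:])
    n_a, n_b = split, len(y) - split
    F = ((rss_p - rss_a - rss_b) / k) / ((rss_a + rss_b) / (n_a + n_b - 2 * k))
    return F, stats.f.sf(F, k, n_a + n_b - 2 * k)

rng = np.random.default_rng(2)
t = np.arange(48, dtype=float)
y = np.where(t < 24, t, 48.0 - t) + rng.normal(0, 2, 48)     # upward trend, then downward
X = np.column_stack([np.ones(48), t])                        # k = 2: intercept and time trend
print(chow_test(y, X, split=24))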

Figure 2. Splitting is better (there is a structural change)

In Figure 2, the gray lines are the fitted lines for the two subsamples. They fit the data much better than the orange line (the fitted line for the whole sample).

Figure 3. Pooling is better

In Figure 3, pooling is better because the intercept and slope are about the same and pooling amounts to increasing the sample size.

Download the Excel file used for simulation. See the video.

14 Dec 16

It's time to modernize the AP Stats curriculum

The suggestions below are based on the College Board AP Statistics Course Description, Effective Fall 2010. Citing this description, “AP teachers are encouraged to develop or maintain their own curriculum that either includes or exceeds each of these expectations; such courses will be authorized to use the “AP” designation.” However, AP teachers are constrained by the statement that “The Advanced Placement Program offers a course description and exam in statistics to secondary school students who wish to complete studies equivalent to a one semester, introductory, non-calculus-based, college course in statistics.”

Too much material for a one-semester course

I tried to teach AP Stats in one semester following the College Board description and methodology, that is, with no derivations, giving only recipes and concentrating on applications. The students were really stretched, didn’t remember anything after completing the course, and the usefulness of the course for the subsequent calculus-based course was minimal.

Suggestion. Reduce the number of topics and concentrate on those that require going all the way from (again citing the description) Exploring Data to Sampling and Experimentation to Anticipating Patterns to Statistical Inference. Simple regression is such a topic.

I would drop the stem-and-leaf plot, because it is stupid, and the chi-square tests for goodness of fit, homogeneity of proportions and independence, along with ANOVA, because they are too advanced and look too vague without the right explanation. Instead of going wide, it is better to go deeper, building upon what students already know. I’ll post a couple of regression applications.

“Introductory” should not mean stupefying

Statistics has its specifics. Even I, with my extensive experience in Math, made quite a few discoveries for myself while studying Stats. Textbook authors, in their attempts to make exposition accessible, often replace the true statistical ideas by after-the-fact intuition or formulas by their verbal description. See, for example, the z score.

Using TI-83+ and TI-84 graphing calculators is like using a Tesla electric car in conjunction with candles for generating electricity. The sole purpose of these calculators is to prevent cheating. The inclination for cheating is a sign of low understanding and the best proof that the College Board strategy is wrong.

Once you say “this course is non-calculus-based”, you close many doors

When we format a document in Word, we don’t care how formatting is implemented technically, and we don’t need to know anything about programming. It looks like the same attitude is imparted to students of Stats, but few people notice the big difference: when we format a document, we have an idea of what we want and test the result against that idea; in Stats, the idea has to be translated into a formula, and the software output has to be translated into a formula for interpretation.

I understand that, for the majority of Stats students, the amount of algebra I use in some of my posts is not accessible. However, the opposite tendency of telling students that they don’t need to remember any formulas is unproductive. It’s only by memorizing and reproducing equations that they can augment their algebraic proficiency. Stats is largely a mental science. To improve a mental activity, you have to engage in it.

Suggestion. Instead of “this course is non-calculus-based”, say: the course develops the ability to interpret equations and translate ideas to formulas.

Follow a logical sequence

The way most AP Stats books are written does not give any idea as to what comes from where. When I was a bachelor’s student, I was looking for explanations, and I would hate reading one of today’s AP Stats textbooks. For those who think, memorizing a bunch of recipes without seeing the logical links is a nightmare. In some cases, the absence of logic leads to statements that are plain wrong. Just following the logical sequence will put the pieces of the puzzle together.

9 Dec 16

Ditch statistical tables if you have a computer

You don't need statistical tables if you have Excel or Mathematica. Here I give the relevant Excel and Mathematica functions described in Chapter 14 of my book. You can save all the formulas in one spreadsheet or notebook and use it multiple times.

Cumulative Distribution Function of the Standard Normal Distribution

For a given real z, the value of the distribution function of the standard normal is
F(z)=\frac{1}{\sqrt{2\pi }}\int_{-\infty }^{z}\exp (-t^{2}/2)dt.

In Excel, use the formula =NORM.S.DIST(z,TRUE).

In Mathematica, enter CDF[NormalDistribution[0,1],z]

Probability Function of the Binomial Distribution

For given number of successes x, number of trials n and probability p the probability is

P(Binomial=x)=C_{x}^{n}p^{x}(1-p)^{n-x}.

In Excel, use the formula =BINOM.DIST(x,n,p,FALSE)

In Mathematica, enter PDF[BinomialDistribution[n,p],x]

Cumulative Binomial Probabilities

For a given cut-off value x, number of trials n and probability p the cumulative probability is

P(Binomial\leq x)=\sum_{t=0}^{x}C_{t}^{n}p^{t}(1-p)^{n-t}.
In Excel, use the formula =BINOM.DIST(x,n,p,TRUE).

In Mathematica, enter CDF[BinomialDistribution[n,p],x]

Values of the exponential function e^{-\lambda}

In Excel, use the formula =EXP(-lambda)

In Mathematica, enter Exp[-lambda]

Individual Poisson Probabilities

For given number of successes x and arrival rate \lambda the probability is

P(Poisson=x)=\frac{e^{-\lambda }\lambda^{x}}{x!}.
In Excel, use the formula =POISSON.DIST(x,lambda,FALSE)

In Mathematica, enter PDF[PoissonDistribution[lambda],x]

Cumulative Poisson Probabilities

For given cut-off x and arrival rate \lambda the cumulative probability is

P(Poisson\leq x)=\sum_{t=0}^{x}\frac{e^{-\lambda }\lambda ^{t}}{t!}.
In Excel, use the formula =POISSON.DIST(x,lambda,TRUE)

In Mathematica, enter CDF[PoissonDistribution[lambda],x]

Cutoff Points of the Chi-Square Distribution Function

For given probability of the right tail \alpha and degrees of freedom \nu, the cut-off value (critical value) \chi _{\nu,\alpha }^{2} is a solution of the equation
P(\chi _{\nu}^{2}>\chi _{\nu,\alpha }^{2})=\alpha .
In Excel, use the formula =CHISQ.INV.RT(alpha,v)

In Mathematica, enter InverseCDF[ChiSquareDistribution[v],1-alpha]

Cutoff Points for the Student’s t Distribution

For given probability of the right tail \alpha and degrees of freedom \nu, the cut-off value t_{\nu,\alpha } is a solution of the equation P(t_{\nu}>t_{\nu,\alpha })=\alpha.
In Excel, use the formula =T.INV(1-alpha,v)

In Mathematica, enter InverseCDF[StudentTDistribution[v],1-alpha]

Cutoff Points for the F Distribution

For given probability of the right tail \alpha , degrees of freedom v_1 (numerator) and v_2 (denominator), the cut-off value F_{v_1,v_2,\alpha } is a solution of the equation P(F_{v_1,v_2}>F_{v_1,v_2,\alpha })=\alpha.

In Excel, use the formula =F.INV.RT(alpha,v1,v2)

In Mathematica, enter InverseCDF[FRatioDistribution[v1,v2],1-alpha]
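
If Python is at hand, the same quantities can be obtained with scipy.stats. This is a minimal sketch; the argument values z, x, n, p, lambda, alpha and the degrees of freedom are placeholders.

import math
from scipy import stats

z, x, n, p, lam = 1.96, 3, 10, 0.4, 2.5
alpha, v, v1, v2 = 0.05, 10, 3, 20

print(stats.norm.cdf(z))             # standard normal CDF
print(stats.binom.pmf(x, n, p))      # P(Binomial = x)
print(stats.binom.cdf(x, n, p))      # P(Binomial <= x)
print(math.exp(-lam))                # e^(-lambda)
print(stats.poisson.pmf(x, lam))     # P(Poisson = x)
print(stats.poisson.cdf(x, lam))     # P(Poisson <= x)
print(stats.chi2.isf(alpha, v))      # chi-square right-tail cutoff
print(stats.t.isf(alpha, v))         # Student's t right-tail cutoff
print(stats.f.isf(alpha, v1, v2))    # F right-tail cutoff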

1 Dec 16

Nonparametric estimation for AP Stats

Nonparametric estimation is the right topic for expanding the Stats agenda


Figure 1. Dependence of income on age

For the last several years I have been doing research in nonparametric estimation. It is intellectually rewarding and it is the best tool to show Stats students the usefulness of Statistics. Agresti and Franklin have a chapter on nonparametric estimation. However, the choice of the topics (Wilcoxon test and Kruskal-Wallis test) is unfortunate. These tests compare the locations of two or more samples. They provide just numbers (the corresponding statistics), which is not very appealing because the students see just another solution to a familiar problem.

Nonparametric techniques are at their best in nonlinear curve fitting, and this is their selling point because the result is VISUAL. The following examples explain the difference between parametric and nonparametric estimation.

Example 1

Suppose we want to use simple regression to estimate dependence of consumption on income. This is a parametric model, with two parameters (intercept and slope). Suppose the fitted line is Consumption=0.1+0.9\times Income (I just put plausible numbers). The slope 0.9 is interpreted as the marginal propensity to consume and can be used in economic modeling to find the budget multiplier. The advantage of parametric estimation is that often estimated parameters have economic meaning.

Example 2

This example has been taken from Lecture Notes by John Fox. Now let us look at dependence of income on age. It is clear that income is low for young people, then rises with age until middle age and declines after retirement. The dependence is obviously nonlinear and, a priori, no guesses can be made about the shape of the curve.

Figure 1 shows the median and quartiles of the distribution of income from wages and salaries as a function of single years of age. The data are taken from the 1990 U.S. Census one-percent Public Use Microdata Sample and represent 1.24 million observations. Income starts increasing at around 18 years, tops out at 48 and declines till the age of 65. The fitted line is approximately linear until the age of 24, so young people enjoy the highest, roughly constant income growth rate.
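
One standard nonparametric fit of this kind is lowess (locally weighted regression). Here is a minimal sketch in Python assuming statsmodels is installed; the age-income data are made up, since the census sample itself is not included here.

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
age = rng.uniform(18, 80, 2000)
# Made-up hump-shaped income profile plus noise, just to have something to smooth
income = 60 - 0.03 * (age - 48) ** 2 + rng.normal(0, 8, age.size)

fitted = lowess(income, age, frac=0.3)    # returns columns: sorted age, smoothed income
print(fitted[:5])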

Example 3


Figure 2. Density of return on Apple stock


Figure 3. Density of return on MA stock

What would have been the better 5-year investment: Apple or MasterCard? Figure 2 shows that the density of return on Apple stock has a negative mode. The density of return on MasterCard has the mode close to zero. This tells us that MasterCard would be better. Indeed, the annual return on Apple is 18%, while on MasterCard it is 29% (over the last 5 years). Nonparametric estimates of densities (kernel density estimates) are used by financial analysts to simulate stock prices to predict their future movements.
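
For readers without Eviews, a kernel density estimate can be computed in Python with scipy. This is a minimal sketch on a made-up price path; the Apple and MasterCard data themselves are not reproduced here.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0005, 0.02, 1250)))   # made-up daily price path
returns = np.diff(np.log(prices))                                  # daily log returns

kde = gaussian_kde(returns)                                        # kernel density estimate
grid = np.linspace(returns.min(), returns.max(), 200)
density = kde(grid)
print(grid[np.argmax(density)])                                    # location of the mode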

Remark. For simple statistical tasks I recommend Eviews student version for two reasons. 1) It has excellent Help. When I want my students to understand just the essence and avoid proofs, I tell them to read Eviews Help. 2) The student version is just $39.95. Figures 2 and 3 have been produced using Eviews.

26 Nov 16

Properties of correlation

Correlation coefficient: the last block of statistical foundation

Correlation has already been mentioned in

Statistical measures and their geometric roots

Properties of standard deviation

The pearls of AP Statistics 35

Properties of covariance

The pearls of AP Statistics 33

The hierarchy of definitions

Suppose random variables X,Y are not constant. Then their standard deviations are not zero and we can define their correlation as in Chart 1.


Chart 1. Correlation definition

Properties of correlation

Property 1. Range of the correlation coefficient: for any X,Y one has - 1 \le \rho (X,Y) \le 1.
This follows from the Cauchy-Schwarz inequality, as explained here.

Recall from this post that correlation is the cosine of the angle between X-EX and Y-EY.
Property 2. Interpretation of extreme cases. (Part 1) If \rho (X,Y) = 1, then Y = aX + b with a > 0.

(Part 2) If \rho (X,Y) = - 1, then Y = aX + b with a < 0.

Proof. (Part 1) \rho (X,Y) = 1 implies

(1) Cov (X,Y) = \sigma (X)\sigma (Y)

which, in turn, implies that Y is a linear function of X: Y = aX + b (this is the second part of the Cauchy-Schwarz inequality). Further, we can establish the sign of the number a. By the properties of variance and covariance

Cov(X,Y)=Cov(X,aX+b)=aCov(X,X)+Cov(X,b)=aVar(X),

\sigma (Y)=\sigma(aX + b)=\sigma (aX)=|a|\sigma (X).

Plugging this into Eq. (1) we get aVar(X) = |a|\sigma^2(X)=|a|Var(X), so a=|a|. Since a cannot be zero (otherwise Y would be constant and \rho would be undefined), a is positive.

The proof of Part 2 is left as an exercise.

Property 3. Suppose we want to measure correlation between weight W and height H of people. The measurements are either in kilos and centimeters {W_k},{H_c} or in pounds and feet {W_p},{H_f}. The correlation coefficient is unit-free in the sense that it does not depend on the units used: \rho (W_k,H_c)=\rho (W_p,H_f). Mathematically speaking, correlation is homogeneous of degree 0 in both arguments.
Proof. One measurement is proportional to another, W_k=aW_p,\ H_c=bH_f with some positive constants a,b. By homogeneity

\rho (W_k,H_c)=\frac{Cov(W_k,H_c)}{\sigma(W_k)\sigma(H_c)}=\frac{Cov(aW_p,bH_f)}{\sigma(aW_p)\sigma(bH_f)}=\frac{abCov(W_p,H_f)}{ab\sigma(W_p)\sigma (H_f)}=\rho (W_p,H_f).
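
Here is a quick numerical illustration of Property 3, a minimal sketch in Python; the weight and height data are simulated, and the conversion factors are the usual kilos-to-pounds and centimeters-to-feet ones.

import numpy as np

rng = np.random.default_rng(5)
weight_kg = rng.normal(70, 12, 500)
height_cm = 100 + 0.9 * weight_kg + rng.normal(0, 7, 500)   # made-up dependence

weight_lb = 2.20462 * weight_kg                             # positive rescaling only
height_ft = height_cm / 30.48

print(np.corrcoef(weight_kg, height_cm)[0, 1])
print(np.corrcoef(weight_lb, height_ft)[0, 1])              # the same number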

 

20 Nov 16

The pearls of AP Statistics 36

ANOVA: the artefact that survives because of the College Board

Why ANOVA should be dropped from AP Statistics

  1. The common argument in favor of using ANOVA is that "The methods introduced in this chapter [Comparing Groups: Analysis of Variance Methods] apply when a quantitative response variable has a categorical explanatory variable" (Agresti and Franklin, p. 680). However, categorical explanatory variables can be replaced by indicator (dummy) variables, and then regression methods can be used to study dependences involving categorical variables. On p. 695, the authors admit that "ANOVA can be presented as a special case of multiple regression".
  2. In terms of knowledge of basic statistical ideas (hypothesis testing, F statistics, significance level), ANOVA doesn't add any value. Those, who have mastered these basic ideas, will not have problems learning ANOVA at their workplace if they have to. There is no need to burden everybody with this stuff "just in case".
  3. The explanation of ANOVA is accompanied with definitions of the within-groups variance estimate and between-groups variance estimate (Agresti and Franklin, p. 686). Even in my courses, where I give a lot of algebra, the students don't get them unless they do a couple of theoretical exercises. At the AP Stats level, the usefulness of these definitions is nil.
  4. The requirement to remember how the F statistics and degrees of freedom are calculated, for the purpose of being able to interpret just one table with output from a statistical package, doesn't make sense. In my book, I have a whole chapter on ANOVA, with most derivations, and I don't remember a thing. Why torture the students?
  5. In the 90 years since R. Fisher invented ANOVA, many other, more precise and versatile statistical methods have been developed.

Conclusion

There are two suggestions:

1) Explain just the intuition and then jump to the interpretation of output, indicating the statistic to look at, as in Table 14.14.


2) The theory of ANOVA is useful for two reasons: there is a lot of manipulation with summation signs and there is a link to regressions. Learning all this may be the only justification to study ANOVA with definitions. In my classes, this takes 6 hours.

13 Nov 16

Statistical measures and their geometric roots

Variance, covariance, standard deviation and correlation: their definitions and properties are deeply rooted in Euclidean geometry.

Here is the why: analogy with Euclidean geometry

Euclid axiomatically described the space we live in. What we have known about the geometry of this space since ancient times has never failed us. Therefore, statistical definitions based on Euclidean geometry are sure to work.

   1. Analogy between scalar product and covariance

Geometry. See Table 2 here for operations with vectors. The scalar product of two vectors X=(X_1,...,X_n),\ Y=(Y_1,...,Y_n) is defined by

(X,Y)=\sum X_iY_i.

Statistical analog: Covariance of two random variables is defined by

Cov(X,Y)=E(X-EX)(Y-EY).

Both the scalar product and covariance are linear in one argument when the other argument is fixed.

   2. Analogy between orthogonality and uncorrelatedness

Geometry. Two vectors X,Y are called orthogonal (or perpendicular) if

(1) (X,Y)=\sum X_iY_i=0.

Exercise. How do you draw on the plane the vectors X=(1,0),\ Y=(0,1)? Check that they are orthogonal.

Statistical analog: Two random variables are called uncorrelated if Cov(X,Y)=0.

   3. Measuring lengths


Figure 1. Length of a vector

Geometry: the length of a vector X=(X_1,...,X_n) is \sqrt{\sum X_i^2}, see Figure 1.

Statistical analog: the standard deviation of a random variable X is

\sigma(X)=\sqrt{Var(X)}=\sqrt{E(X-EX)^2}.

This explains the square root in the definition of the standard deviation.

   4. Cauchy-Schwarz inequality

Geometry: |(X,Y)|\le\sqrt{\sum X_i^2}\sqrt{\sum Y_i^2}.

Statistical analog: |Cov(X,Y)|\le\sigma(X)\sigma(Y). See the proof here. The proof of its geometric counterpart is similar.

   5. Triangle inequality


Figure 2. Triangle inequality

Geometry: \sqrt{\sum (X_i+Y_i)^2}\le\sqrt{\sum X_i^2}+\sqrt{\sum Y_i^2}, see Figure 2 where the length of X+Y does not exceed the sum of the lengths of X and Y.

Statistical analog: using the Cauchy-Schwarz inequality we have

\sigma(X+Y)=\sqrt{Var(X+Y)} =\sqrt{Var(X)+2Cov(X,Y)+Var(Y)} \le\sqrt{\sigma^2(X)+2\sigma(X)\sigma(Y)+\sigma^2(Y)} =\sigma(X)+\sigma(Y).

   6. The Pythagorean theorem

Geometry: In a right triangle, the squared hypotenuse is equal to the sum of the squares of the two legs. The illustration is similar to Figure 2, except that the angle between X and Y should be right.

Proof. Taking two orthogonal vectors X,Y as legs, we have

Squared hypotenuse = \sum(X_i+Y_i)^2

(squaring out and using orthogonality (1))

=\sum X_i^2+2\sum X_iY_i+\sum Y_i^2=\sum X_i^2+\sum Y_i^2 = Sum of squared legs

Statistical analog: If two random variables are uncorrelated, then variance of their sum is a sum of variances Var(X+Y)=Var(X)+Var(Y).

   7. The most important analogy: measuring angles

Geometry: the cosine of the angle between two vectors X,Y is defined by

Cosine between X,Y = \frac{\sum X_iY_i}{\sqrt{\sum X_i^2\sum Y_i^2}}.

Statistical analog: the correlation coefficient between two random variables is defined by

\rho(X,Y)=\frac{Cov(X,Y)}{\sqrt{Var(X)Var(Y)}}=\frac{Cov(X,Y)}{\sigma(X)\sigma(Y)}.

This intuitively explains why the correlation coefficient takes values between -1 and +1.

Remark. My colleague Alisher Aldashev noticed that the correlation coefficient is the cosine of the angle between the deviations X-EX and Y-EY and not between X,Y themselves.
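
The remark is easy to check numerically: for sample data, the sample correlation coefficient is exactly the cosine of the angle between the centered data vectors. A minimal sketch in Python with simulated data.

import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)

xc, yc = x - x.mean(), y - y.mean()                          # deviations from the means
cosine = (xc @ yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))
print(cosine, np.corrcoef(x, y)[0, 1])                       # identical up to rounding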

12 Nov 16

Properties of standard deviation

Properties of standard deviation are divided into two parts. The definitions and consequences are given here. Both variance and standard deviation are used to measure variability of values of a random variable around its mean. Then why use both of them? The why will be explained in another post.

Properties of standard deviation: definitions and consequences

Definition. For a random variable X, the quantity \sigma (X) = \sqrt {Var(X)} is called its standard deviation.

    Digression about square roots and absolute values

In general, there are two square roots of a positive number, one positive and the other negative. The positive one is called an arithmetic square root. The arithmetic root is applied here to Var(X) \ge 0 (see properties of variance), so standard deviation is always nonnegative.
Definition. An absolute value of a real number a is defined by
(1) |a| =a if a is nonnegative and |a| =-a if a is negative.
This two-part definition is a stumbling block for many students, so making them plug in a few numbers is a must. It is introduced to measure the distance from point a to the origin. For example, dist(3,0) = |3| = 3 and dist(-3,0) = |-3| = 3. More generally, for any points a,b on the real line the distance between them is given by dist(a,b) = |a - b|.

By squaring both sides in Eq. (1) we obtain |a|^2=a^2. Application of the arithmetic square root gives

(2) |a|=\sqrt {a^2}.

This is the equation we need right now.

Back to standard deviation

Property 1. Standard deviation is homogeneous of degree 1. Indeed, using homogeneity of variance and equation (2), we have

\sigma (aX) =\sqrt{Var(aX)}=\sqrt{{a^2}Var(X)}=|a|\sigma(X).

Unlike homogeneity of expected values, here we have an absolute value of the scaling coefficient a.

Property 2. Cauchy-Schwarz inequality. (Part 1) For any random variables X,Y one has

(3) |Cov(X,Y)|\le\sigma(X)\sigma(Y).

(Part 2) If the inequality sign in (3) turns into equality, |Cov(X,Y)|=\sigma (X)\sigma (Y), then Y is a linear function of X: Y = aX + b, with some constants a,b.
Proof. (Part 1) If at least one of the variables is constant, both sides of the inequality are 0 and there is nothing to prove. To exclude the trivial case, let X,Y be non-constant and, therefore, Var(X),\ Var(Y) are positive. Consider a real-valued function of a real number t defined by f(t) = Var(tX + Y). Here we have variance of a linear combination

f(t)=t^2Var(X)+2tCov(X,Y)+Var(Y).

We see that f(t) is a parabola with branches pointing upward (because the leading coefficient Var(X) is positive). By nonnegativity of variance, f(t)\ge 0, so the parabola lies on or above the horizontal axis in the (t,f) plane. Hence, the quadratic equation f(t) = 0 has at most one real root. This means that the discriminant of the equation is non-positive:

D=Cov(X,Y)^2-Var(X)Var(Y)\le 0.

Applying square roots to both sides of Cov(X,Y)^2\le Var(X)Var(Y) we finish the proof of the first part.

(Part 2) In case of equality the discriminant is 0. Therefore the parabola touches the horizontal axis at some point t where f(t)=Var(tX + Y)=0. But we know that this implies tX + Y = constant, which is just another way of writing Y = aX + b.

Comment. (3) explains one of the main properties of the correlation:

-1\le\rho(X,Y)=\frac{Cov(X,Y)}{\sigma(X)\sigma(Y)}\le 1.
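
Both parts of the Cauchy-Schwarz inequality are easy to illustrate numerically. Here is a minimal sketch in Python with simulated data; the second pair of lines uses an exactly linear Y to show the equality case.

import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=1000)
y = 0.3 * x + rng.normal(size=1000)

cov_xy = np.cov(x, y, ddof=0)[0, 1]
bound = x.std() * y.std()
print(abs(cov_xy), bound)                 # |Cov(X,Y)| does not exceed sigma(X)sigma(Y)

y_lin = 2 * x + 1                         # Y a linear function of X
print(abs(np.cov(x, y_lin, ddof=0)[0, 1]), x.std() * y_lin.std())   # the two sides coincide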