
## Properties of means

Properties of means, covariances and variances are the bread and butter of professionals. Here we consider the bread: the means.

### Properties of means: as simple as playing with tables

Definition of a random variable. When my Brazilian students asked for an intuitive definition of a random variable, I said: it is a function whose values are unpredictable. Therefore it is prohibited to work with its values, and we are allowed to work only with its various means. For proofs we need a more technical definition: a random variable is a table of values and probabilities of the type shown in Table 1.

Table 1.

| Values of $X$ | Probabilities |
| --- | --- |
| $x_1$ | $p_1$ |
| ... | ... |
| $x_n$ | $p_n$ |

Note: The complete form of writing ${p_i}$ is $P(X = {x_i})$.

Definition of the mean (or expected value): $EX = x_1p_1 + ... + x_np_n = \sum\limits_{i = 1}^n x_ip_i.$ In words, this is a weighted sum of values, where the weights $p_i$ reflect the importance of the corresponding $x_i$.

Note: The expected value is a function whose argument is a complex object (described by Table 1), while its value is simple: $EX$ is just a number. And it is not a product of $E$ and $X$! See how different means fit this definition.
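
To see the definition in action, here is a minimal Python sketch (my own illustration, not from the text): the random variable is stored exactly as in Table 1, and $EX$ is computed as the weighted sum of values. The fair-die numbers are chosen just for concreteness.

```python
# A discrete random variable stored as in Table 1: values with probabilities.
values = [1, 2, 3, 4, 5, 6]   # x_1, ..., x_n (a fair die, for illustration)
probs  = [1/6] * 6            # p_1, ..., p_n; they must sum to 1

# EX = x_1*p_1 + ... + x_n*p_n: a weighted sum of values
EX = sum(x * p for x, p in zip(values, probs))
print(EX)  # 3.5
```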

Definition of a linear combination. See the financial motivation here. Suppose that $X,Y$ are two discrete random variables with the same probability distribution ${p_1},...,{p_n}$. Let $a,b$ be real numbers. The random variable $aX + bY$ is called a linear combination of $X,Y$ with coefficients $a,b$. Its special cases are $aX$ ($X$ scaled by $a$) and $X + Y$ (the sum of $X$ and $Y$). The detailed definition is given by Table 2.

Table 2.

| Values of $X$ | Values of $Y$ | Probabilities | $aX$ | $X + Y$ | $aX + bY$ |
| --- | --- | --- | --- | --- | --- |
| $x_1$ | $y_1$ | $p_1$ | $ax_1$ | $x_1 + y_1$ | $ax_1 + by_1$ |
| ... | ... | ... | ... | ... | ... |
| $x_n$ | $y_n$ | $p_n$ | $ax_n$ | $x_n + y_n$ | $ax_n + by_n$ |

Note: The situation when the probability distributions are different is reduced to the case when they are the same, see my book.
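
To make Table 2 concrete, here is a small Python sketch that builds its rows for made-up values of $X$, $Y$ and the coefficients (all the numbers are hypothetical):

```python
# Two random variables on the same probability distribution, as in Table 2.
xs    = [1, 2, 3]         # values of X (made up for illustration)
ys    = [10, 20, 30]      # values of Y
probs = [0.2, 0.5, 0.3]   # the common probabilities p_1, ..., p_n
a, b  = 2, -1             # coefficients of the linear combination

# Each row of Table 2: (x_i, y_i, p_i, a*x_i, x_i + y_i, a*x_i + b*y_i)
for x, y, p in zip(xs, ys, probs):
    print(x, y, p, a * x, x + y, a * x + b * y)
```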

Property 1. Linearity of means. For any random variables $X,Y$ and any numbers $a,b$ one has

(1) $E(aX + bY) = aEX + bEY$.

Proof. This is one of those straightforward proofs where knowing the definitions and starting with the left-hand side is enough to arrive at the result. Using the definitions in Table 2, the mean of the linear combination is
$E(aX + bY)= (a{x_1} + b{y_1}){p_1} + ... + (a{x_n} + b{y_n}){p_n}$

(distributing probabilities)
$= a{x_1}{p_1} + b{y_1}{p_1} + ... + a{x_n}{p_n} + b{y_n}{p_n}$

(grouping by variables)
$= (a{x_1}{p_1} + ... + a{x_n}{p_n}) + (b{y_1}{p_1} + ... + b{y_n}{p_n})$

(pulling out constants)
$= a({x_1}{p_1} + ... + {x_n}{p_n}) + b({y_1}{p_1} + ... + {y_n}{p_n})=aEX+bEY.$
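
The result can also be double-checked numerically. Below is a quick sanity check (not a proof) with the same kind of made-up table as above:

```python
def mean(values, probs):
    """EX = x_1*p_1 + ... + x_n*p_n, the definition of the expected value."""
    return sum(x * p for x, p in zip(values, probs))

xs, ys = [1, 2, 3], [10, 20, 30]   # values of X and Y (hypothetical)
probs  = [0.2, 0.5, 0.3]           # common probabilities
a, b   = 2, -1                     # arbitrary coefficients

lhs = mean([a * x + b * y for x, y in zip(xs, ys)], probs)  # E(aX + bY)
rhs = a * mean(xs, probs) + b * mean(ys, probs)             # aEX + bEY
print(lhs, rhs)  # both are -16.8 (up to floating-point rounding)
```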

See applications: one, and two, and three.

Generalization to the case of a linear combination of $n$ variables:

$E({a_1}{X_1} + ... + {a_n}{X_n}) = {a_1}E{X_1} + ... + {a_n}E{X_n}$.

Special cases. a) Letting $a = b = 1$ in (1) we get $E(X + Y) = EX + EY$. This is called additivity. See an application. b) Letting $b = 0$ in (1) we get $E(aX) = aEX$. This property is called homogeneity of degree 1 (you can pull a constant out of the expected value sign). Ask your students to deduce linearity from homogeneity and additivity; one possible deduction is sketched below.
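
One possible deduction applies additivity to the variables $aX$ and $bY$, and then homogeneity to each term:

$E(aX + bY) = E(aX) + E(bY) = aEX + bEY.$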

Property 2. Expected value of a constant. Everybody knows what a constant is. Ask your students what a constant is in terms of Table 1. The mean of a constant is that constant, because a constant doesn't change, rain or shine: $Ec = c{p_1} + ... + c{p_n} = c({p_1} + ... + {p_n}) = c$ (we have used the completeness axiom ${p_1} + ... + {p_n} = 1$). In particular, since $EX$ is just a number, it follows that $E(EX)=EX$.
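
A one-line Python check of this property, with an arbitrary constant and made-up probabilities:

```python
probs = [0.2, 0.5, 0.3]          # a complete set of probabilities (they sum to 1)
c = 7.0                          # a constant: the same value in every row of Table 1
Ec = sum(c * p for p in probs)   # c*(p_1 + ... + p_n) = c * 1
print(Ec)                        # 7.0 (up to floating-point rounding)
```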

Property 3. The expectation operator preserves order: if $x_i\ge y_i$ for all $i$, then $EX\ge EY$. In particular, the mean of a nonnegative random variable is nonnegative: if $x_i\ge 0$ for all $i$, then $EX\ge 0$.

Indeed, using $x_i\ge y_i$ and the fact that all probabilities are nonnegative, we get $EX = x_1p_1 + ... + x_np_n\ge y_1p_1 + ... + y_np_n=EY$.

Property 4. For independent variables we have $EXY=(EX)(EY)$ (multiplicativity), which has important implications of its own.
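
Since Property 4 is only stated here, a numeric illustration may help. Independence is modeled below by giving the pair $(x_i, y_j)$ the joint probability $p_iq_j$, the product of the marginal probabilities (this joint-table construction is my own addition, not part of the text):

```python
# Independent X and Y: the pair (x_i, y_j) has joint probability p_i * q_j.
xs, px = [0, 1], [0.4, 0.6]   # X with its marginal distribution (made up)
ys, py = [2, 5], [0.3, 0.7]   # Y with its marginal distribution

EX  = sum(x * p for x, p in zip(xs, px))
EY  = sum(y * q for y, q in zip(ys, py))
EXY = sum(x * y * p * q
          for x, p in zip(xs, px)
          for y, q in zip(ys, py))
print(EXY, EX * EY)  # both are 2.46 (up to rounding): EXY = (EX)(EY)
```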

The best thing about the above properties is that, although we proved them under simplified assumptions, they hold in general. Keep in mind that the expectation operator $E$ is the device used by Mother Nature to measure the average, and most of the time she keeps both the probabilities and the average $EX$ hidden from us.
