Properties of means, covariances and variances are the bread and butter of professionals. Here we consider the bread: the means.

### Properties of means: as simple as playing with tables

**Definition of a random variable**. When my Brazilian students asked for an intuitive definition of a random variable, I said: it is a function whose values are unpredictable. Therefore it is prohibited to work with its values and allowed to work only with its various means. For proofs we need a more technical definition: it is a table of values and probabilities of the type of Table 1.

| Values of $X$ | Probabilities |
| --- | --- |
| $x_1$ | $p_1$ |
| ... | ... |
| $x_n$ | $p_n$ |

**Note**: The complete form of writing is $P(X=x_i)=p_i$, $i=1,\dots,n$.

**Mean (or expected value) definition**. $EX=x_1p_1+\dots+x_np_n$. In words, this is a weighted sum of values, where the weights $p_i$ reflect the importance of the corresponding values $x_i$.

**Note**: The expected value is a function whose argument is a complex object (it is described by Table 1) and whose value is simple: $EX$ is just a number. And it is not a product of $E$ and $X$! See how different means fit this definition.
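To make the table definition concrete, here is a minimal numerical sketch; the values and probabilities below are made up for illustration:

```python
# A discrete random variable as a values-plus-probabilities table
# (the format of Table 1); the numbers are illustrative.
values = [1.0, 2.0, 3.0]   # x_1, ..., x_n
probs = [0.2, 0.5, 0.3]    # p_1, ..., p_n

# Completeness: the probabilities must sum to 1.
assert abs(sum(probs) - 1.0) < 1e-12

# EX = x_1*p_1 + ... + x_n*p_n: a single number, not "E times X".
EX = sum(x * p for x, p in zip(values, probs))
print(EX)  # close to 2.1
```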

**Definition of a linear combination**. See here the financial motivation. Suppose that $X,Y$ are two discrete random variables with the same probability distribution $p_1,\dots,p_n$. Let $a,b$ be real numbers. The random variable $aX+bY$ is called a **linear combination** of $X,Y$ with coefficients $a,b$. Its special cases are $aX$ ($X$ scaled by $a$) and $X+Y$ (a sum of $X$ and $Y$). The detailed definition is given by Table 2.

| Values of $X$ | Values of $Y$ | Probabilities | Values of $aX$ | Values of $bY$ | Values of $aX+bY$ |
| --- | --- | --- | --- | --- | --- |
| $x_1$ | $y_1$ | $p_1$ | $ax_1$ | $by_1$ | $ax_1+by_1$ |
| ... | ... | ... | ... | ... | ... |
| $x_n$ | $y_n$ | $p_n$ | $ax_n$ | $by_n$ | $ax_n+by_n$ |

**Note**: The situation when the probability distributions are different is reduced to the case when they are the same, see my book.

**Property 1. Linearity of means.** For any random variables $X,Y$ and any numbers $a,b$ one has

(1) $E(aX+bY)=aEX+bEY$.

**Proof**. This is one of those straightforward proofs where knowing the definitions and starting with the left-hand side is enough to arrive at the result. Using the definitions in Table 2, the mean of the linear combination is

$E(aX+bY)=(ax_1+by_1)p_1+\dots+(ax_n+by_n)p_n$ (distributing probabilities)

$=(ax_1p_1+\dots+ax_np_n)+(by_1p_1+\dots+by_np_n)$ (grouping by variables)

$=a(x_1p_1+\dots+x_np_n)+b(y_1p_1+\dots+y_np_n)=aEX+bEY$ (pulling out constants).

See applications: one, two, and three.
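Property 1 is easy to verify numerically. The sketch below uses made-up values for $X$, $Y$ and the coefficients, with one shared probability distribution as in Table 2:

```python
# Numerical check of linearity: E(aX + bY) = a*EX + b*EY for discrete
# X, Y sharing one probability distribution (the numbers are invented).
xs = [1.0, 2.0, 3.0]
ys = [4.0, 0.0, -1.0]
probs = [0.2, 0.5, 0.3]
a, b = 2.0, -3.0

def E(vals):
    """Mean of a variable given by its values and the shared probs."""
    return sum(v * p for v, p in zip(vals, probs))

lhs = E([a * x + b * y for x, y in zip(xs, ys)])  # mean of the combination
rhs = a * E(xs) + b * E(ys)                       # combination of the means
print(abs(lhs - rhs) < 1e-12)  # True
```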

**Generalization** to the case of a linear combination of $n$ variables:

$E(a_1X_1+\dots+a_nX_n)=a_1EX_1+\dots+a_nEX_n$.

**Special cases**. a) Letting $a=b=1$ in (1) we get $E(X+Y)=EX+EY$. This is called **additivity**. See an application. b) Letting $b=0$ in (1) we get $E(aX)=aEX$. This property is called **homogeneity of degree 1** (you can pull the constant out of the expected value sign). Ask your students to deduce linearity from homogeneity and additivity.

**Property 2. Expected value of a constant.** Everybody knows what a constant is. Ask your students what a constant is in terms of Table 1. *The mean of a constant is that constant*, because a constant doesn't change, rain or shine: $Ec=cp_1+\dots+cp_n=c(p_1+\dots+p_n)=c$ (we have used the completeness axiom $p_1+\dots+p_n=1$). In particular, it follows that $E(EX)=EX$.
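A one-line numerical check of Property 2; the constant and the probabilities below are arbitrary choices:

```python
# A constant c, in terms of Table 1, is a variable whose every value
# equals c; then Ec = c*p_1 + ... + c*p_n = c*(p_1 + ... + p_n) = c.
c = 7.0
probs = [0.2, 0.5, 0.3]   # any distribution summing to 1
Ec = sum(c * p for p in probs)
print(Ec)  # close to 7.0, since the probabilities sum to 1
```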

**Property 3**. The expectation operator **preserves order**: if $x_i\ge y_i$ for all $i$, then $EX\ge EY$. In particular, the mean of a nonnegative random variable is nonnegative: if $x_i\ge 0$ for all $i$, then $EX\ge 0$.

Indeed, using the fact that all probabilities are nonnegative, we get $EX=x_1p_1+\dots+x_np_n\ge 0$. The first statement follows by applying this to $X-Y$ and using linearity.
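A quick numerical check of Property 3 with made-up tables satisfying $x_i\ge y_i$:

```python
# Order preservation: if x_i >= y_i for all i, then EX >= EY.
xs = [2.0, 5.0, 3.0]
ys = [1.0, 5.0, -2.0]   # y_i <= x_i componentwise (illustrative numbers)
probs = [0.2, 0.5, 0.3]

assert all(x >= y for x, y in zip(xs, ys))
EX = sum(x * p for x, p in zip(xs, probs))
EY = sum(y * p for y, p in zip(ys, probs))
print(EX >= EY)  # True
```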

**Property 4**. For independent variables, we have $E(XY)=(EX)(EY)$ (**multiplicativity**), which has important implications on its own.
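For discrete variables, independence means the joint probability of a pair $(x_i,y_j)$ factors as $p_iq_j$, which is all the sketch below needs to check multiplicativity (the two distributions are invented):

```python
# Multiplicativity for independent variables: E(XY) = (EX)(EY).
xs, ps = [1.0, 2.0], [0.4, 0.6]   # distribution of X (illustrative)
ys, qs = [3.0, -1.0], [0.5, 0.5]  # distribution of Y, independent of X

EX = sum(x * p for x, p in zip(xs, ps))
EY = sum(y * q for y, q in zip(ys, qs))
# E(XY): sum over the joint table of x*y times the factored probability p*q.
EXY = sum(x * y * p * q
          for x, p in zip(xs, ps)
          for y, q in zip(ys, qs))
print(abs(EXY - EX * EY) < 1e-12)  # True
```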

The best thing about the above properties is that, although we proved them under simplified assumptions, they are always true. Keep in mind that the expectation operator is the device used by Mother Nature to measure the average, and most of the time she keeps hidden from us both the probabilities $p_i$ and the average $EX$.
