**Summation sign rules: identities for simple regression**

There are many sources on the Internet. This and this are relatively simple, while this one is pretty advanced. They cover the basics. My purpose is more specific: to show how to obtain a couple of identities in terms of summation signs from general properties of variance and covariance.

**Shortcut for covariance**. This is the name of the following identity:

$$\operatorname{Cov}(X,Y)=E(X-EX)(Y-EY)=E(XY)-(EX)(EY),\qquad(1)$$

where on the left we have the definition of $\operatorname{Cov}(X,Y)$ and on the right we have an alternative expression (a shortcut) for the same thing. Letting $Y=X$ in (1) we get a **shortcut for variance**:

$$\operatorname{Var}(X)=E(X-EX)^2=E(X^2)-(EX)^2,\qquad(2)$$

see the direct proof here. Again, on the left we have the definition of $\operatorname{Var}(X)$ and on the right a shortcut for the same.
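A quick numerical sanity check of (1) and (2), a sketch in plain Python on a small made-up joint distribution of $(X,Y)$ (the values and probabilities below are chosen only for illustration):

```python
# Made-up joint distribution of (X, Y): triples (x, y, p), probabilities sum to 1.
joint = [(1, 2, 0.2), (1, 5, 0.3), (4, 2, 0.1), (4, 5, 0.4)]

EX = sum(x * p for x, y, p in joint)       # mean of X
EY = sum(y * p for x, y, p in joint)       # mean of Y
EXY = sum(x * y * p for x, y, p in joint)  # mean of the product XY

# Left side of (1): the definition of covariance.
cov_def = sum((x - EX) * (y - EY) * p for x, y, p in joint)
# Right side of (1): the shortcut.
cov_short = EXY - EX * EY
assert abs(cov_def - cov_short) < 1e-12

# (2) is (1) with Y = X: variance and its shortcut.
var_def = sum((x - EX) ** 2 * p for x, y, p in joint)
EX2 = sum(x ** 2 * p for x, y, p in joint)
assert abs(var_def - (EX2 - EX ** 2)) < 1e-12
```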

In this post I mentioned that

> for a discrete uniformly distributed variable with a finite number of elements, the population mean equals the sample mean if the sample is the whole population.

This is what it means. The most useful definition of a **discrete random variable** is this: it is a table values+probabilities of the type

**Table 1**

| Values | $x_1$ | $x_2$ | ... | $x_n$ |
|---|---|---|---|---|
| Probabilities | $p_1$ | $p_2$ | ... | $p_n$ |

Here $x_1,\dots,x_n$ are the values and $p_1,\dots,p_n$ are the probabilities (they sum to one). With this table, it is easy to define the **mean** of $X$:

$$EX=\sum_{i=1}^np_ix_i.\qquad(3)$$

A variable like this is called **uniformly distributed** if all probabilities are the same, $p_1=\dots=p_n=1/n$:

**Table 2**

| Values | $x_1$ | $x_2$ | ... | $x_n$ |
|---|---|---|---|---|
| Probabilities | $1/n$ | $1/n$ | ... | $1/n$ |

In this case (3) becomes

$$EX=\frac{1}{n}\sum_{i=1}^nx_i.\qquad(4)$$
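A small check, with made-up values, that the general mean (3) reduces to the sample-mean formula (4) when all probabilities equal $1/n$:

```python
# Made-up values of a discrete variable, each observed with equal probability.
values = [3.0, 7.0, 1.0, 5.0]
n = len(values)
probs = [1 / n] * n  # artificial uniform distribution: p_i = 1/n

mean_general = sum(p * x for x, p in zip(values, probs))  # formula (3)
mean_uniform = sum(values) / n                            # formula (4)
assert abs(mean_general - mean_uniform) < 1e-12
```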

This explains the statement from my post. Using (4), equations (1) and (2) can be rewritten as

$$\operatorname{Cov}(X,Y)=\frac{1}{n}\sum_{i=1}^n(x_i-\bar{x})(y_i-\bar{y})=\frac{1}{n}\sum_{i=1}^nx_iy_i-\bar{x}\bar{y},\quad
\operatorname{Var}(X)=\frac{1}{n}\sum_{i=1}^n(x_i-\bar{x})^2=\frac{1}{n}\sum_{i=1}^nx_i^2-\bar{x}^2,\qquad(5)$$

where $\bar{x}=\frac{1}{n}\sum_{i=1}^nx_i$ and $\bar{y}=\frac{1}{n}\sum_{i=1}^ny_i$. Try to write this using summation signs only. For example, the first identity in (5) becomes

$$\frac{1}{n}\sum_{i=1}^n\left(x_i-\frac{1}{n}\sum_{j=1}^nx_j\right)\left(y_i-\frac{1}{n}\sum_{j=1}^ny_j\right)=\frac{1}{n}\sum_{i=1}^nx_iy_i-\left(\frac{1}{n}\sum_{i=1}^nx_i\right)\left(\frac{1}{n}\sum_{i=1}^ny_i\right).$$

This is crazy, and trying to prove it directly would be even crazier.
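Still, the identity is easy to confirm numerically. A sketch with arbitrary made-up data, checking that both sides of the expanded first identity in (5) coincide:

```python
# Verify the first identity in (5) on made-up data:
# (1/n) sum_i (x_i - xbar)(y_i - ybar) = (1/n) sum_i x_i*y_i - xbar*ybar
xs = [2.0, 4.0, 7.0, 1.0, 6.0]
ys = [3.0, 8.0, 2.0, 5.0, 9.0]
n = len(xs)

xbar = sum(xs) / n
ybar = sum(ys) / n

lhs = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / n
rhs = sum(x * y for x, y in zip(xs, ys)) / n - xbar * ybar
assert abs(lhs - rhs) < 1e-12
```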

**Remark**. Let $X_1,\dots,X_n$ be a sample from an arbitrary distribution. Regardless of the parent distribution, the artificial uniform distribution from Table 2 can still be applied to the sample. To avoid confusion with the expected value with respect to the parent distribution, instead of (4) we can write

$$E_uX=\frac{1}{n}\sum_{i=1}^nX_i,\qquad(6)$$

where the subscript $u$ stands for "uniform". With that understanding, equations (5) are still true. The power of this approach is that all expressions in (5) are random variables, which allows for further application of the expected value with respect to the parent distribution.
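One consequence of the remark: $E_uX$ in (6) is itself a random variable (the sample mean), and applying the parent-distribution expectation to it gives $E(E_uX)=EX$ by linearity. A simulation sketch illustrating this; the parent distribution, sample size, and seed are arbitrary choices for illustration:

```python
import random

random.seed(0)
mu = 10.0  # parent mean (exponential distribution chosen arbitrarily)

# Draw many samples; for each, E_u X is just the sample mean.
sample_means = []
for _ in range(20000):
    sample = [random.expovariate(1 / mu) for _ in range(5)]  # parent mean = mu
    sample_means.append(sum(sample) / len(sample))

# Averaging the sample means approximates E(E_u X) = EX = mu.
approx = sum(sample_means) / len(sample_means)
assert abs(approx - mu) < 0.2
```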
