18 Oct 18

## Law of iterated expectations: geometric aspect

There will be a separate post on projectors. In the meantime, we'll have a look at simple examples that explain a lot about conditional expectations.

### Examples of projectors

The name "projector" is almost self-explanatory. Imagine a point and a plane in three-dimensional space. Draw a perpendicular from the point to the plane. The intersection of the perpendicular with the plane is the point's projection onto that plane. Note that if the point already belongs to the plane, its projection equals the point itself. Besides, instead of projecting onto a plane we can project onto a straight line.

The above description translates into the following equations. For any $x\in R^3$ define

(1) $P_2x=(x_1,x_2,0)$ and $P_1x=(x_1,0,0).$

$P_2$ projects $R^3$ onto the plane $L_2=\{(x_1,x_2,0):x_1,x_2\in R\}$ (which is two-dimensional) and $P_1$ projects $R^3$ onto the straight line $L_1=\{(x_1,0,0):x_1\in R\}$ (which is one-dimensional).

Property 1. Double application of a projector amounts to single application.

Proof. We do this just for one of the projectors. Using (1) three times we get

$P_2[P_2x]=P_2(x_1,x_2,0)=(x_1,x_2,0)=P_2x.$

Property 2. A successive application of two projectors yields the projection onto a subspace of a smaller dimension.

Proof. If we apply first $P_2$ and then $P_1$, the result is

(2) $P_1[P_2x]=P_1(x_1,x_2,0)=(x_1,0,0)=P_1x.$

If we change the order of projectors, we have

(3) $P_2[P_1x]=P_2(x_1,0,0)=(x_1,0,0)=P_1x.$
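Properties 1 and 2 can be verified numerically straight from the coordinate definitions in (1). A minimal sketch in Python (plain tuples, no linear-algebra library):

```python
# Coordinate projectors from (1): P2 keeps the first two coordinates,
# P1 keeps only the first one.
def P2(x):
    return (x[0], x[1], 0.0)

def P1(x):
    return (x[0], 0.0, 0.0)

x = (3.0, -2.0, 5.0)

# Property 1: double application equals single application.
assert P2(P2(x)) == P2(x)
assert P1(P1(x)) == P1(x)

# Property 2: composing the two projectors, in either order,
# gives the projection onto the smaller subspace L1.
assert P1(P2(x)) == P1(x)
assert P2(P1(x)) == P1(x)
```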

Exercise 1. Show that both projectors are linear.

Exercise 2. Like any other linear operator in a Euclidean space, these projectors are given by some matrices. What are they?

### The simple truth about conditional expectation

In the time series setup, we have a sequence of information sets $...\subset I_t\subset I_{t+1}\subset...$ (it's natural to assume that with time the amount of available information increases). Denote by

$E_tX=E(X|I_t)$

the expectation of $X$ conditional on $I_t$. For each $t$,

$E_t$ is a projector onto the space of random functions that depend only on the information set $I_t$.

Property 1. Double application of conditional expectation gives the same result as single application:

(4) $E_t(E_tX)=E_tX$

($E_tX$ is already a function of $I_t$, so conditioning it on $I_t$ doesn't change it).

Property 2. A successive conditioning on two different information sets is the same as conditioning on the smaller one:

(5) $E_tE_{t+1}X=E_tX,$

(6) $E_{t+1}E_tX=E_tX.$

Property 3. Conditional expectation is a linear operator: for any variables $X,Y$ and numbers $a,b$

$E_t(aX+bY)=aE_tX+bE_tY.$

It's easy to see that (4)-(6) parallel the three projector equations above, respectively, but I prefer to use different names for (4)-(6). I call (4) a projector property. (5) is known as the Law of Iterated Expectations; see my post on the informational aspect for more intuition. (6) holds simply because at time $t+1$ the expectation $E_tX$ is already known and behaves like a constant.

Summary. (4)-(6) are easy to remember as one property. The smaller information set wins: $E_sE_tX=E_{\min\{s,t\}}X.$
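The "smaller information set wins" rule can be checked in a toy discrete model. Here is a minimal sketch, where each information set is modeled as a partition of a finite sample space (the partitions, the uniform probabilities, and the variable $X$ below are my own illustrative choices); conditional expectation given a partition replaces a variable by its average over each block:

```python
# Toy model of nested information sets: the sample space is {0,...,5}
# with equal probabilities, and information at time t is a partition
# of the sample space; a finer partition means more information.
partitions = {
    0: [[0, 1, 2, 3, 4, 5]],          # I_0: no information
    1: [[0, 1, 2], [3, 4, 5]],        # I_1 refines I_0
    2: [[0, 1], [2], [3, 4], [5]],    # I_2 refines I_1
}
X = [4.0, 2.0, 6.0, 1.0, 3.0, 5.0]    # X as a function of the outcome

def E(t, f):
    """Conditional expectation given I_t: replace f by its average
    over each block of the time-t partition."""
    out = [0.0] * len(f)
    for block in partitions[t]:
        avg = sum(f[w] for w in block) / len(block)
        for w in block:
            out[w] = avg
    return out

# E_s E_t X = E_{min(s,t)} X: the smaller information set wins.
assert E(1, E(1, X)) == E(1, X)   # (4): projector property
assert E(1, E(2, X)) == E(1, X)   # (5): law of iterated expectations
assert E(2, E(1, X)) == E(1, X)   # (6): E_1 X is known at time 2
```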

18 Jul 16

## Properties of conditional expectation

### Background

A company sells a product and may offer a discount. We denote by $X$ the sales volume and by $Y$ the discount amount (per unit). For simplicity, both variables take only two values. They depend on each other. If the sales are high, the discount may be larger. A higher discount, in its turn, may attract more buyers. At the same level of sales, the discount may vary depending on the vendor's costs. With the same discount, the sales vary with consumer preferences. Along with the sales and discount, we consider a third variable that depends on both of them. It can be the profit $\pi$.

### Formalization

The sales volume $X$ takes values $x_1,x_2$ with probabilities $p_i^X=P(X=x_i)$, $i=1,2$. Similarly, the discount $Y$ takes values $y_1,y_2$ with probabilities $p_i^Y=P(Y=y_i)$, $i=1,2$. The joint events have joint probabilities denoted $P(X=x_i,Y=y_j)=p_{i,j}$. The profit in the event $X=x_i,Y=y_j$ is denoted $\pi_{i,j}$. This information is summarized in Table 1.

Table 1.

|  | $y_1$ | $y_2$ |  |
|---|---|---|---|
| $x_1$ | $\pi_{1,1},\ p_{1,1}$ | $\pi_{1,2},\ p_{1,2}$ | $p_1^X$ |
| $x_2$ | $\pi_{2,1},\ p_{2,1}$ | $\pi_{2,2},\ p_{2,2}$ | $p_2^X$ |
|  | $p_1^Y$ | $p_2^Y$ |  |

Comments. In the left-most column and upper-most row we have values of the sales and discount. In the "margins" (last row and last column) we put probabilities of those values. In the main body of the table we have profit values and their probabilities. It follows that the expected profit is

(1) $E\pi=\pi_{1,1}p_{1,1}+\pi_{1,2}p_{1,2}+\pi_{2,1}p_{2,1}+\pi_{2,2}p_{2,2}.$
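Formula (1) is easy to check on concrete numbers. A minimal sketch with made-up profits and a valid joint distribution (all numbers below are hypothetical):

```python
# Hypothetical filling of Table 1: profits pi_{ij} and joint
# probabilities p_{ij}, indexed by the event (X = x_i, Y = y_j).
pi = {(1, 1): 10.0, (1, 2): 8.0, (2, 1): 15.0, (2, 2): 12.0}
p  = {(1, 1): 0.3,  (1, 2): 0.2, (2, 1): 0.1,  (2, 2): 0.4}
assert abs(sum(p.values()) - 1.0) < 1e-12  # completeness axiom

# By (1): expected profit is the probability-weighted sum of profits.
E_pi = sum(pi[ij] * p[ij] for ij in pi)   # approximately 10.9 here
```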

### Conditioning

Suppose that the vendor fixes the discount at $y_1$. Then only the column containing this value is relevant. To get numbers that satisfy the completeness axiom, we define conditional probabilities

$P(X=x_1|Y=y_1)=\frac{p_{11}}{p_1^Y},\ P(X=x_2|Y=y_1)=\frac{p_{21}}{p_1^Y}.$

This allows us to define conditional expectation

(2) $E(\pi|Y=y_1)=\pi_{11}\frac{p_{11}}{p_1^Y}+\pi_{21}\frac{p_{21}}{p_1^Y}.$

Similarly, if the discount is fixed at $y_2$,

(3) $E(\pi|Y=y_2)=\pi_{12}\frac{p_{12}}{p_2^Y}+\pi_{22}\frac{p_{22}}{p_2^Y}.$

Equations (2) and (3) are joined in the notation $E(\pi|Y)$.

Property 1. While the usual expectation (1) is a number, the conditional expectation $E(\pi|Y)$ is a function of the value of $Y$ on which the conditioning is being done. Since it is a function of $Y$, it is natural to consider it a random variable defined by the next table

Table 2.

| Values | Probabilities |
|---|---|
| $E(\pi\mid Y=y_1)$ | $p_1^Y$ |
| $E(\pi\mid Y=y_2)$ | $p_2^Y$ |

Property 2. Law of iterated expectations: the mean of the conditional expectation equals the usual mean. Indeed, using Table 2, we have

$E[E(\pi|Y)]=E(\pi|Y=y_1)p_1^Y+E(\pi|Y=y_2)p_2^Y$ (applying (2) and (3))

$=\left[\pi_{11}\frac{p_{11}}{p_1^Y}+\pi_{21}\frac{p_{21}}{p_1^Y}\right]p_1^Y+\left[\pi_{12}\frac{p_{12}}{p_2^Y}+\pi_{22}\frac{p_{22}}{p_2^Y}\right]p_2^Y$

$=\pi_{1,1}p_{1,1}+\pi_{1,2}p_{1,2}+\pi_{2,1}p_{2,1}+\pi_{2,2}p_{2,2}=E\pi.$
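The same derivation can be run numerically. A sketch using a hypothetical filling of Table 1 (the profits and probabilities are made-up numbers, not from the post):

```python
# Hypothetical profits pi_{ij} and joint probabilities p_{ij}.
pi = {(1, 1): 10.0, (1, 2): 8.0, (2, 1): 15.0, (2, 2): 12.0}
p  = {(1, 1): 0.3,  (1, 2): 0.2, (2, 1): 0.1,  (2, 2): 0.4}

pY = {j: p[(1, j)] + p[(2, j)] for j in (1, 2)}   # marginals of Y

# (2)-(3): conditional expectation of profit given Y = y_j.
E_pi_given = {j: sum(pi[(i, j)] * p[(i, j)] / pY[j] for i in (1, 2))
              for j in (1, 2)}

# Property 2: E[E(pi|Y)] equals the unconditional mean (1).
lhs = sum(E_pi_given[j] * pY[j] for j in (1, 2))
rhs = sum(pi[ij] * p[ij] for ij in pi)
assert abs(lhs - rhs) < 1e-12
```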

Property 3. Generalized homogeneity. In the usual homogeneity $E(aX)=aEX$, the factor $a$ is a number. In the generalized homogeneity

(4) $E(a(Y)\pi|Y)=a(Y)E(\pi|Y),$

$a(Y)$ is allowed to be a function of the variable on which we are conditioning. See for yourself: using (2), for instance,

$E(a(y_1)\pi|Y=y_1)=a(y_1)\pi_{11}\frac{p_{11}}{p_1^Y}+a(y_1)\pi_{21}\frac{p_{21}}{p_1^Y}$

$=a(y_1)\left[\pi_{11}\frac{p_{11}}{p_1^Y}+\pi_{21}\frac{p_{21}}{p_1^Y}\right]=a(y_1)E(\pi|Y=y_1).$

Property 4. Additivity. For any random variables $S,T$ we have

(5) $E(S+T|Y)=E(S|Y)+E(T|Y).$

The proof is left as an exercise.

Property 5. Generalized linearity. For any random variables $S,T$ and functions $a(Y),b(Y)$ equations (4) and (5) imply

$E(a(Y)S+b(Y)T|Y)=a(Y)E(S|Y)+b(Y)E(T|Y).$
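Generalized linearity can also be verified on a small example. A sketch with hypothetical variables $S,T$, coefficient functions $a(Y),b(Y)$, and joint probabilities, all chosen arbitrarily for illustration:

```python
# Hypothetical joint probabilities and two variables S, T defined
# on the four events (i, j) of a 2x2 table.
p = {(1, 1): 0.3, (1, 2): 0.2, (2, 1): 0.1, (2, 2): 0.4}
S = {(1, 1): 1.0, (1, 2): 2.0, (2, 1): 3.0, (2, 2): 4.0}
T = {(1, 1): 5.0, (1, 2): 6.0, (2, 1): 7.0, (2, 2): 8.0}
a = {1: 2.0, 2: -1.0}   # a(Y), b(Y) as functions of the value of Y
b = {1: 0.5, 2: 3.0}

pY = {j: p[(1, j)] + p[(2, j)] for j in (1, 2)}

def cond(f, j):
    """E(f | Y = y_j), computed from the joint probabilities."""
    return sum(f[(i, j)] * p[(i, j)] / pY[j] for i in (1, 2))

# Check E(a(Y)S + b(Y)T | Y) = a(Y)E(S|Y) + b(Y)E(T|Y) at each y_j.
for j in (1, 2):
    combo = {(i, k): a[k] * S[(i, k)] + b[k] * T[(i, k)]
             for (i, k) in p}
    lhs = cond(combo, j)
    rhs = a[j] * cond(S, j) + b[j] * cond(T, j)
    assert abs(lhs - rhs) < 1e-12
```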

Property 6. Conditioning in case of independence. This property has to do with the informational aspect of conditioning. The usual expectation (1) takes into account all contingencies. (2) and (3) are based on the assumption that one contingency for $Y$ has been realized, so that the other one becomes irrelevant. Therefore $E(\pi|Y)$ is considered an updated version of (1) that takes into account the arrival of new information: the value of $Y$ has been fixed. Now we can state the property itself: if $X,Y$ are independent, then $E(X|Y)=EX$, that is, conditioning on $Y$ does not improve our knowledge of $EX$.

Proof. In case of independence we have $p_{i,j}=p_i^Xp_j^Y$ for all $i,j$, so that

$E(X|Y=y_j)=x_1\frac{p_{1j}}{p_j^Y}+x_2\frac{p_{2j}}{p_j^Y}=x_1p_1^X+x_2p_2^X=EX.$
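The proof can be mirrored numerically: build the joint probabilities as products of marginals and check that each conditional expectation collapses to the unconditional one. A sketch with hypothetical sales levels and marginal probabilities:

```python
# Independence: joint probabilities factor as p_{ij} = p_i^X * p_j^Y.
x  = {1: 100.0, 2: 200.0}   # hypothetical sales levels
pX = {1: 0.7, 2: 0.3}
pY = {1: 0.4, 2: 0.6}
p  = {(i, j): pX[i] * pY[j] for i in (1, 2) for j in (1, 2)}

EX = sum(x[i] * pX[i] for i in (1, 2))

# E(X | Y = y_j) = EX for every j: conditioning adds no information.
for j in (1, 2):
    E_X_given_j = sum(x[i] * p[(i, j)] / pY[j] for i in (1, 2))
    assert abs(E_X_given_j - EX) < 1e-12
```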

Property 7. Conditioning in case of complete dependence. Conditioning of $Y$ on $Y$ gives the most precise information: $E(Y|Y)=Y$ (if we condition $Y$ on $Y$, we know about it everything and there is no averaging). More generally, $E(f(Y)|Y)=f(Y)$ for any deterministic function $f$.

Proof. If we condition $Y$ on $Y$, the conditional probabilities become

$P(Y=y_1|Y=y_1)=1,\ P(Y=y_2|Y=y_1)=0.$

Hence, (2) gives

$E(f(Y)|Y=y_1)=f(y_1)\times 1+f(y_2)\times 0=f(y_1).$

Conditioning on $Y=y_2$ is treated similarly.
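The argument is just "all probability mass sits on the observed value", which is easy to check directly. A sketch with an arbitrary function $f$ and made-up values of $Y$:

```python
# Property 7: conditioning Y on itself. Given Y = y_j, the conditional
# distribution puts all mass on y_j, so averaging f(Y) returns f(y_j).
y = {1: 5.0, 2: 20.0}        # hypothetical values of Y
f = lambda v: v ** 2 + 1.0   # any deterministic function

for j in (1, 2):
    cond_prob = {k: 1.0 if k == j else 0.0 for k in (1, 2)}
    E_f_given_j = sum(f(y[k]) * cond_prob[k] for k in (1, 2))
    assert E_f_given_j == f(y[j])   # no averaging actually happens
```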

### Summary

Not many people know that using the notation $E_Y\pi$ for conditional expectation instead of $E(\pi|Y)$ makes everything much clearer. I rewrite the above properties using this notation:

1. Law of iterated expectations: $E(E_Y\pi)=E\pi$.
2. Generalized homogeneity: $E_Y(a(Y)\pi)=a(Y)E_Y\pi$.
3. Additivity: for any random variables $S,T$ we have $E_Y(S+T)=E_YS+E_YT$.
4. Generalized linearity: for any random variables $S,T$ and functions $a(Y),b(Y)$ one has $E_Y(a(Y)S+b(Y)T)=a(Y)E_YS+b(Y)E_YT$.
5. Conditioning in case of independence: if $X,Y$ are independent, then $E_YX=EX$.
6. Conditioning in case of complete dependence: $E_Yf(Y)=f(Y)$ for any deterministic function $f$.