## Law of iterated expectations: informational aspect

The notion of Brownian motion will help us. Suppose we observe a particle that moves back and forth randomly along a straight line. The particle starts at zero at time zero. The movement can be visualized by plotting time on the horizontal axis and the position of the particle on the vertical axis. $W_t$ denotes the random position of the particle at time $t$.

In Figure 1, various paths starting at the origin are shown in different colors. The intersections of the paths with the vertical lines at times 0.5, 1, and 1.5 show the positions of the particle at those times. Deviations of those positions from zero to the upside and downside are assumed to be equally likely (more precisely, $W_t$ is a normal variable with mean zero and variance $t$).
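The picture in Figure 1 can be reproduced numerically. Below is a minimal sketch that simulates many such paths and checks the stated property that $W_t$ has mean zero and variance $t$; the number of paths, the horizon, and the step count are illustrative choices, not part of the original setup.

```python
import numpy as np

# Simulate Brownian motion paths on [0, 2] (illustrative parameters).
rng = np.random.default_rng(0)
n_paths, n_steps, horizon = 10_000, 200, 2.0
dt = horizon / n_steps

# Each increment is N(0, dt); cumulative sums give the paths W_t.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)
times = np.linspace(dt, horizon, n_steps)

# At t = 0.5 (column 49) the cross-sectional mean is near 0
# and the cross-sectional variance is near 0.5.
print(paths[:, 49].mean())
print(paths[:, 49].var())
```

Plotting a few rows of `paths` against `times` gives exactly the fan of colored trajectories described above.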

### Unconditional expectation

“In the beginning there was nothing, which exploded.” ― Terry Pratchett, Lords and Ladies

If we are at the origin (like the Big Bang), nothing has happened yet, and $EW_t=0$ is the best prediction we can make for any moment $t$ (shown by the blue horizontal line in Figure 1). The usual, unconditional expectation $EW_t$ corresponds to the empty information set.

### Conditional expectation

In Figure 2, suppose we are at $t=0.5$. The dark blue path between $t=0$ and $t=0.5$ has been realized. We know that the particle has reached the point $W_{0.5}$ at that time. With this knowledge, we see that the paths starting at this point will have the average

$$E_{0.5}(W_t)=W_{0.5},\quad t>0.5.\qquad(1)$$

This is because the particle will continue moving randomly, with the up and down moves being equally likely. Prediction (1) is shown by the horizontal light blue line to the right of $t=0.5$. In general, this prediction is better than $EW_t=0$.

Note that for different realized paths, $W_{0.5}$ takes different values. Therefore $E_{0.5}(W_t)$, for $t>0.5$, is a random variable. It is a function of the event we condition the expectation on.
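This conditional prediction can also be checked by simulation. The sketch below fixes one realized value of $W_{0.5}$, simulates many continuations of the path out to $t=1.5$, and verifies that their average equals the realized position; the sample sizes are illustrative assumptions.

```python
import numpy as np

# Condition on one realized position at t = 0.5 (drawn from N(0, 0.5)).
rng = np.random.default_rng(1)
w_half = rng.normal(0.0, np.sqrt(0.5))

# Continuations to t = 1.5: the increment W_1.5 - W_0.5 is N(0, 1),
# independent of the realized past.
continuations = w_half + rng.normal(0.0, 1.0, size=100_000)

# The average of the continuations recovers E_0.5(W_1.5) = W_0.5.
print(continuations.mean(), "vs", w_half)
```

Rerunning with a different seed changes `w_half`, which illustrates the point in the text: the conditional expectation is itself a random variable, driven by the realized path.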

### Law of iterated expectations

Suppose you are at time $t=0.5$ (see Figure 3). You send many agents to the future, to time $t=1$, to fetch information about what will happen. They bring you the data on the means they see, $E_1(W_t)$ (shown by horizontal lines to the right of $t=1$). Since there are many possible future realizations, you have to average the future means. For this, you will use the distributional belief you have at time $t=0.5$. The result is $E_{0.5}[E_1(W_t)]$. Since the up and down moves are equally likely, your distribution at time $t=0.5$ is symmetric around $W_{0.5}$. Therefore the above average will be equal to $E_{0.5}(W_t)$. This is the **Law of Iterated Expectations**, also called the **tower property**:

$$E_{0.5}\left[E_1(W_t)\right]=E_{0.5}(W_t),\quad t>1.\qquad(2)$$

The knowledge of all of the future predictions $E_1(W_t)$, upon averaging, does not improve or change our current prediction $E_{0.5}(W_t)$.
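The tower property can be verified numerically along the lines of the agent story above. In this sketch the realized value of $W_{0.5}$ is an assumed number, and each "agent" reports the mean it sees at $t=1$, namely $E_1(W_t)=W_1$; averaging those reports under the distribution held at $t=0.5$ reproduces $W_{0.5}=E_{0.5}(W_t)$.

```python
import numpy as np

# Numeric check of E_0.5[E_1(W_t)] = E_0.5(W_t).
rng = np.random.default_rng(2)
w_half = 0.3  # an assumed realized value of W_0.5

# Each agent's report is a future mean E_1(W_t) = W_1,
# where W_1 = W_0.5 + N(0, 0.5) under the belief held at t = 0.5.
future_means = w_half + rng.normal(0.0, np.sqrt(0.5), size=200_000)

# Averaging the future conditional means gives back the current one, W_0.5.
print(future_means.mean(), "vs", w_half)
```

The average of the reported future means matches `w_half` up to simulation noise, which is exactly equation (2): iterating expectations adds nothing to today's prediction.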

For a full mathematical treatment of conditional expectation see Lecture 10 by Gordan Zitkovic.
