Properties of conditional expectation
Background
A company sells a product and may offer a discount. We denote by $X$ the sales volume and by $Y$ the discount amount (per unit). For simplicity, both variables take only two values. They depend on each other. If the sales are high, the discount may be larger. A higher discount, in its turn, may attract more buyers. At the same level of sales, the discount may vary depending on the vendor's costs. With the same discount, the sales vary with consumer preferences. Along with the sales and discount, we consider a third variable that depends on both of them. It can be the profit $\pi$.
Formalization
The sales volume $X$ takes values $x_1, x_2$ with probabilities $p_1^X = P(X = x_1)$, $p_2^X = P(X = x_2)$. Similarly, the discount $Y$ takes values $y_1, y_2$ with probabilities $p_1^Y = P(Y = y_1)$, $p_2^Y = P(Y = y_2)$. The joint events have joint probabilities denoted $p_{ij} = P(X = x_i, Y = y_j)$. The profit in the event $X = x_i, Y = y_j$ is denoted $\pi_{ij}$. This information is summarized in Table 1.

Table 1. Sales, discount, profit, and their probabilities

|         | $y_1$               | $y_2$               | Margins |
|---------|---------------------|---------------------|---------|
| $x_1$   | $\pi_{11},\ p_{11}$ | $\pi_{12},\ p_{12}$ | $p_1^X$ |
| $x_2$   | $\pi_{21},\ p_{21}$ | $\pi_{22},\ p_{22}$ | $p_2^X$ |
| Margins | $p_1^Y$             | $p_2^Y$             | $1$     |
Comments. In the left-most column and upper-most row we have values of the sales and discount. In the "margins" (last row and last column) we put probabilities of those values. In the main body of the table we have profit values and their probabilities. It follows that the expected profit is
$$E\pi = \pi_{11}p_{11} + \pi_{12}p_{12} + \pi_{21}p_{21} + \pi_{22}p_{22}. \qquad (1)$$
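To make (1) concrete, here is a minimal Python sketch; all probabilities and profit values are invented for illustration, and only the structure of Table 1 comes from the text.

```python
# Hypothetical joint probabilities p[i][j] = P(X = x_i, Y = y_j)
# and profits profit[i][j]; the numbers are made up for illustration.
p = [[0.3, 0.1],
     [0.2, 0.4]]
profit = [[100.0, 80.0],
          [150.0, 120.0]]

# Expected profit, formula (1): profit times probability, summed over all cells.
E_profit = sum(profit[i][j] * p[i][j] for i in range(2) for j in range(2))
print(E_profit)  # 100*0.3 + 80*0.1 + 150*0.2 + 120*0.4 = 116.0
```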
Conditioning
Suppose that the vendor fixes the discount at $y_1$. Then only the column of Table 1 containing this value is relevant. To get numbers that satisfy the completeness axiom, we define conditional probabilities

$$P(X = x_i \mid Y = y_1) = \frac{p_{i1}}{p_1^Y}, \quad i = 1, 2.$$

Since $p_1^Y = p_{11} + p_{21}$, these conditional probabilities indeed sum to one.
This allows us to define the conditional expectation

$$E(\pi \mid Y = y_1) = \pi_{11}\frac{p_{11}}{p_1^Y} + \pi_{21}\frac{p_{21}}{p_1^Y}. \qquad (2)$$

Similarly, if the discount is fixed at $y_2$,

$$E(\pi \mid Y = y_2) = \pi_{12}\frac{p_{12}}{p_2^Y} + \pi_{22}\frac{p_{22}}{p_2^Y}. \qquad (3)$$

Equations (2) and (3) are joined in the notation $E(\pi \mid Y)$.
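Continuing with the same invented numbers, a sketch of (2) and (3): each marginal $p_j^Y$ is a column sum, and each column of profits is averaged with the renormalized probabilities.

```python
p = [[0.3, 0.1],
     [0.2, 0.4]]
profit = [[100.0, 80.0],
          [150.0, 120.0]]

def cond_exp_given_y(j):
    """E(pi | Y = y_j): average the j-th profit column with the
    conditional probabilities p_ij / p_j^Y, as in (2) and (3)."""
    p_Y = p[0][j] + p[1][j]  # marginal P(Y = y_j), a column sum
    return sum(profit[i][j] * p[i][j] / p_Y for i in range(2))

print(cond_exp_given_y(0))  # E(pi | Y = y_1) = (100*0.3 + 150*0.2)/0.5 = 120.0
print(cond_exp_given_y(1))  # E(pi | Y = y_2) = (80*0.1 + 120*0.4)/0.5 = 112.0
```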
Property 1. While the usual expectation (1) is a number, the conditional expectation $E(\pi \mid Y)$ is a function of the value of $Y$ on which the conditioning is being done. Since it is a function of $Y$, it is natural to consider it a random variable defined by the next table

Table 2. Conditional expectation as a random variable

| Values                | Probabilities |
|-----------------------|---------------|
| $E(\pi \mid Y = y_1)$ | $p_1^Y$       |
| $E(\pi \mid Y = y_2)$ | $p_2^Y$       |
Property 2. Law of iterated expectations: the mean of the conditional expectation equals the usual mean, $E[E(\pi \mid Y)] = E\pi$. Indeed, using Table 2, we have

$$E[E(\pi \mid Y)] = E(\pi \mid Y = y_1)p_1^Y + E(\pi \mid Y = y_2)p_2^Y$$

(applying (2) and (3))

$$= \left(\pi_{11}\frac{p_{11}}{p_1^Y} + \pi_{21}\frac{p_{21}}{p_1^Y}\right)p_1^Y + \left(\pi_{12}\frac{p_{12}}{p_2^Y} + \pi_{22}\frac{p_{22}}{p_2^Y}\right)p_2^Y = \pi_{11}p_{11} + \pi_{21}p_{21} + \pi_{12}p_{12} + \pi_{22}p_{22} = E\pi.$$
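With the invented numbers used above, the law can be checked in two lines; the conditional means and marginals are the ones computed in the previous sketch.

```python
cond_means = [120.0, 112.0]  # E(pi | Y = y_1), E(pi | Y = y_2) from the sketch above
p_Y = [0.5, 0.5]             # marginal probabilities of y_1 and y_2

# Mean of Table 2: should reproduce E(pi) = 116.0 from formula (1)
print(sum(m * q for m, q in zip(cond_means, p_Y)))  # 120*0.5 + 112*0.5 = 116.0
```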
Property 3. Generalized homogeneity. In the usual homogeneity $E(a\pi) = aE\pi$, $a$ is a number. In the generalized homogeneity

$$E(f(Y)\pi \mid Y) = f(Y)E(\pi \mid Y), \qquad (4)$$

$f(Y)$ is allowed to be a function of the variable on which we are conditioning. See for yourself: using (2), for instance,

$$E(f(Y)\pi \mid Y = y_1) = f(y_1)\pi_{11}\frac{p_{11}}{p_1^Y} + f(y_1)\pi_{21}\frac{p_{21}}{p_1^Y} = f(y_1)E(\pi \mid Y = y_1).$$
Property 4. Additivity. For any random variables $S$ and $T$ we have

$$E(S + T \mid Y) = E(S \mid Y) + E(T \mid Y). \qquad (5)$$
The proof is left as an exercise.
Property 5. Generalized linearity. For any random variables $S, T$ and deterministic functions $f, g$, equations (4) and (5) imply

$$E(f(Y)S + g(Y)T \mid Y) = f(Y)E(S \mid Y) + g(Y)E(T \mid Y).$$
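Properties 3-5 can be checked mechanically. The sketch below, still with invented numbers, conditions on $Y = y_1$; $S$ and $T$ are two hypothetical random variables defined on the same four joint events, and $f$, $g$ are arbitrary deterministic functions.

```python
p = [[0.3, 0.1],
     [0.2, 0.4]]
y = [10.0, 20.0]              # values y_1, y_2 (invented)
S = [[1.0, 2.0], [3.0, 4.0]]  # values of S in the four events (invented)
T = [[5.0, 6.0], [7.0, 8.0]]  # values of T (invented)
f = lambda v: 2.0 * v         # deterministic functions of Y
g = lambda v: v + 1.0

def cond_exp(values, j):
    """E(V | Y = y_j) for a variable with value table `values`."""
    p_Y = p[0][j] + p[1][j]
    return sum(values[i][j] * p[i][j] / p_Y for i in range(2))

j = 0  # condition on Y = y_1
# Value table of the random variable f(Y)S + g(Y)T:
fS_gT = [[f(y[k]) * S[i][k] + g(y[k]) * T[i][k] for k in range(2)]
         for i in range(2)]
# Generalized linearity (which contains (4) and (5) as special cases):
lhs = cond_exp(fS_gT, j)
rhs = f(y[j]) * cond_exp(S, j) + g(y[j]) * cond_exp(T, j)
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)  # both equal 99.8 with these numbers
```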
Property 6. Conditioning in case of independence. This property has to do with the informational aspect of conditioning. The usual expectation (1) takes into account all contingencies. (2) and (3) are based on the assumption that one contingency for $Y$ has been realized, so that the other one becomes irrelevant. Therefore $E(\pi \mid Y)$ is considered an updated version of (1) that takes into account the arrival of new information that the value of $Y$ has been fixed. Now we can state the property itself: if $X$ and $Y$ are independent, then $E(X \mid Y) = EX$, that is, conditioning on $Y$ does not improve our knowledge of $X$.

Proof. In case of independence we have $p_{ij} = p_i^X p_j^Y$ for all $i, j$, so that

$$E(X \mid Y = y_j) = x_1\frac{p_{1j}}{p_j^Y} + x_2\frac{p_{2j}}{p_j^Y} = x_1 p_1^X + x_2 p_2^X = EX.$$
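A sketch of this property: the joint probabilities are built as products of invented marginals, so the conditional mean of $X$ is the same whichever value of $Y$ we condition on.

```python
x = [10.0, 30.0]   # values x_1, x_2 (invented)
p_X = [0.6, 0.4]   # marginal distribution of X (invented)
p_Y = [0.5, 0.5]   # marginal distribution of Y (invented)

# Independence: p_ij = p_i^X * p_j^Y
p = [[p_X[i] * p_Y[j] for j in range(2)] for i in range(2)]

EX = sum(x[i] * p_X[i] for i in range(2))  # unconditional mean, 18.0
for j in range(2):
    cond = sum(x[i] * p[i][j] / p_Y[j] for i in range(2))
    assert abs(cond - EX) < 1e-12          # E(X | Y = y_j) = EX for each j
print(EX)
```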
Property 7. Conditioning in case of complete dependence. Conditioning of $Y$ on $Y$ gives the most precise information: $E(Y \mid Y) = Y$ (if we condition $Y$ on $Y$, we know everything about it and there is no averaging). More generally, $E(f(Y) \mid Y) = f(Y)$ for any deterministic function $f$.

Proof. If we condition $Y$ on $Y = y_1$, the conditional probabilities become

$$P(Y = y_1 \mid Y = y_1) = 1, \quad P(Y = y_2 \mid Y = y_1) = 0.$$

Hence, the analog of (2) gives

$$E(f(Y) \mid Y = y_1) = f(y_1) \cdot 1 + f(y_2) \cdot 0 = f(y_1).$$

Conditioning on $Y = y_2$ is treated similarly.
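A one-loop sketch of Property 7, again with invented values: conditioning on $Y = y_j$ puts all conditional probability on one value, so the "average" collapses to a single term.

```python
y = [10.0, 20.0]        # values y_1, y_2 (invented)
f = lambda v: v ** 2    # any deterministic function of Y

for j in range(2):
    # Given Y = y_j, the value y_j has conditional probability 1 and the
    # other value has conditional probability 0: no averaging takes place.
    cond = f(y[j]) * 1.0 + f(y[1 - j]) * 0.0
    assert cond == f(y[j])  # E(f(Y) | Y = y_j) = f(y_j)
```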
Summary
Not many people know that using the notation $E_Y\pi$ for conditional expectation instead of $E(\pi \mid Y)$ makes everything much clearer. I rewrite the above properties using this notation:

- Law of iterated expectations: $E(E_Y\pi) = E\pi$
- Generalized homogeneity: $E_Y(f(Y)\pi) = f(Y)E_Y\pi$
- Additivity: For any random variables $S$ and $T$ we have $E_Y(S + T) = E_YS + E_YT$
- Generalized linearity: For any random variables $S, T$ and deterministic functions $f, g$ one has $E_Y(f(Y)S + g(Y)T) = f(Y)E_YS + g(Y)E_YT$
- Conditioning in case of independence: if $X$ and $Y$ are independent, then $E_YX = EX$
- Conditioning in case of complete dependence: $E_Yf(Y) = f(Y)$ for any deterministic function $f$.