Conditional expectation generalized to continuous random variables
The definition of conditional expectation needs to be generalized to make it applicable to continuous random variables. The generalization is accompanied by an example and will later be applied to expected shortfall.
Generalizing the definition of conditional expectation
Suppose $X$ can take values $x_1,\dots,x_n$ with probabilities $P(X=x_i)=p_i$ and $Y$ can take values $y_1,\dots,y_m$ with probabilities
(1) $q_j=P(Y=y_j),\ j=1,\dots,m$.
Denote the joint probabilities $p_{ij}=P(X=x_i,\,Y=y_j)$. The definition from this post gives
(2) $E(X|Y=y_j)=\frac{1}{q_j}\sum_{i=1}^n x_i p_{ij}$,
where $y_j$ is a fixed value of $Y$. The drawback of this definition is its dependence on the indexation of the values $x_1,\dots,x_n$. Our purpose is to show how from this definition one can obtain a definition that does not use indexation and can therefore be applied to continuous random variables. Denote by $A$ the event $\{Y=y_j\}$ and by $1_A$ its indicator ($1_A(\omega)=1$ for $\omega\in A$ and $1_A(\omega)=0$ otherwise).
Then the sum can be expanded by including zero terms:
(3) $\sum_{i=1}^n x_i p_{ij}=\sum_{\omega}X(\omega)1_A(\omega)P(\omega)=E(X1_A)$
(the sum in the middle includes all points $\omega$ of the sample space). Using (1) and (3) we can rewrite (2) as
$E(X|Y=y_j)=\frac{1}{P(A)}E(X1_A)$.
Replacing here the conditioning on $A=\{Y=y_j\}$ by conditioning on a general set $A$ whose probability is not zero, we obtain the definition of conditional expectation:
(4) $E(X|A)=\frac{1}{P(A)}E(X1_A)$.
Example. If $z$ is standard normal, $\phi$ is its density and $\Phi$ is its distribution function, then for any number $a$ one has
$E(z|z>a)=\frac{1}{P(z>a)}E\big(z1_{\{z>a\}}\big)=\frac{\phi(a)}{1-\Phi(a)}.$
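Definition (4) is easy to check numerically. Below is a minimal Monte Carlo sketch in Python (the threshold $a=1$ and the sample size are arbitrary choices made for illustration):

```python
# A Monte Carlo check of definition (4), E(X|A) = E(X 1_A) / P(A),
# for the example above: z standard normal, A = {z > a}.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)
a = 1.0

indicator = z > a                                         # 1_A as a boolean array
mc_estimate = (z * indicator).mean() / indicator.mean()   # E(z 1_A) / P(A)
closed_form = norm.pdf(a) / (1 - norm.cdf(a))             # phi(a) / (1 - Phi(a))

print(mc_estimate, closed_form)                           # both close to ~1.525
```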
A company sells a product and may offer a discount. We denote by $S$ the sales volume and by $D$ the discount amount (per unit). For simplicity, both variables take only two values. They depend on each other. If the sales are high, the discount may be larger. A higher discount, in its turn, may attract more buyers. At the same level of sales, the discount may vary depending on the vendor's costs. With the same discount, the sales vary with consumer preferences. Along with the sales and the discount, we consider a third variable that depends on both of them. It can be the profit $\pi$.
Formalization
The sales volume $S$ takes values $S_i$ with probabilities $p_i$, $i=1,2$. Similarly, the discount $D$ takes values $D_j$ with probabilities $q_j$, $j=1,2$. The joint events $\{S=S_i,\ D=D_j\}$ have joint probabilities denoted $p_{ij}$. The profit in the event $\{S=S_i,\ D=D_j\}$ is denoted $\pi_{ij}$. This information is summarized in Table 1.
Table 1. Values and probabilities of the profit function

|             | $D=D_1$             | $D=D_2$             | Prob of $S$ |
| $S=S_1$     | $\pi_{11},\,p_{11}$ | $\pi_{12},\,p_{12}$ | $p_1$       |
| $S=S_2$     | $\pi_{21},\,p_{21}$ | $\pi_{22},\,p_{22}$ | $p_2$       |
| Prob of $D$ | $q_1$               | $q_2$               |             |
Comments. In the left-most column and the upper-most row we have the values of the sales and the discount. In the "margins" (last row and last column) we put the probabilities of those values. In the main body of the table we have the profit values and their probabilities. It follows that the expected profit is
(1) $E\pi=\pi_{11}p_{11}+\pi_{12}p_{12}+\pi_{21}p_{21}+\pi_{22}p_{22}.$
Conditioning
Suppose that the vendor fixes the discount at $D_1$. Then only the column containing this value is relevant. To get numbers that satisfy the completeness axiom, we define conditional probabilities
$P(S=S_i|D=D_1)=\frac{p_{i1}}{q_1},\quad i=1,2.$
This allows us to define the conditional expectation
(2) $E(\pi|D=D_1)=\pi_{11}\frac{p_{11}}{q_1}+\pi_{21}\frac{p_{21}}{q_1}.$
Similarly, if the discount is fixed at $D_2$,
(3) $E(\pi|D=D_2)=\pi_{12}\frac{p_{12}}{q_2}+\pi_{22}\frac{p_{22}}{q_2}.$
Equations (2) and (3) are joined in the notation $E(\pi|D)$.
Property 1. While the usual expectation (1) is a number, the conditional expectation $E(\pi|D)$ is a function of the value of $D$ on which the conditioning is being done. Since it is a function of $D$, it is natural to consider it a random variable defined by the next table
Table 2. Conditional expectation is a random variable
Values: $E(\pi|D=D_1)$, $E(\pi|D=D_2)$
Probabilities: $q_1$, $q_2$
Property 2. Law of iterated expectations: the mean of the conditional expectation equals the usual mean. Indeed, using Table 2, we have
$E[E(\pi|D)]=E(\pi|D=D_1)q_1+E(\pi|D=D_2)q_2$
(applying (2) and (3))
$=\pi_{11}p_{11}+\pi_{21}p_{21}+\pi_{12}p_{12}+\pi_{22}p_{22}=E\pi.$
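Here is a small numerical illustration of Table 1 and Property 2. The joint probabilities $p_{ij}$ and profits $\pi_{ij}$ below are hypothetical numbers chosen only to make the check concrete:

```python
# Verify the law of iterated expectations on a 2x2 table with made-up numbers.
import numpy as np

p = np.array([[0.3, 0.2],          # p_ij = P(S = S_i, D = D_j)
              [0.1, 0.4]])
profit = np.array([[10.0, 8.0],    # pi_ij = profit in the event {S = S_i, D = D_j}
                   [ 6.0, 3.0]])

q = p.sum(axis=0)                              # marginal probabilities of the discount, q_j
E_pi = (profit * p).sum()                      # unconditional expected profit, formula (1)
E_pi_given_D = (profit * p).sum(axis=0) / q    # E(pi | D = D_j), formulas (2) and (3)

# Law of iterated expectations: average the conditional means with weights q_j
print(E_pi, (E_pi_given_D * q).sum())          # the two numbers coincide
```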
Property 3. Generalized homogeneity. In the usual homogeneity $E(aX)=aEX$, $a$ is a number. In the generalized homogeneity
(4) $E(a(D)\pi|D)=a(D)E(\pi|D),$
$a(D)$ is allowed to be a function of the variable $D$ on which we are conditioning. See for yourself: using (2), for instance,
$E(a(D_1)\pi|D=D_1)=a(D_1)\pi_{11}\frac{p_{11}}{q_1}+a(D_1)\pi_{21}\frac{p_{21}}{q_1}=a(D_1)E(\pi|D=D_1).$
Property 4. Additivity. For any random variables $X,Y$ we have
(5) $E(X+Y|D)=E(X|D)+E(Y|D).$
The proof is left as an exercise.
Property 5. Generalized linearity. For any random variables $X,Y$ and functions $a(D),b(D)$, equations (4) and (5) imply
$E(a(D)X+b(D)Y|D)=a(D)E(X|D)+b(D)E(Y|D).$
Property 6. Conditioning in case of independence. This property has to do with the informational aspect of conditioning. The usual expectation (1) takes into account all contingencies. (2) and (3) are based on the assumption that one contingency for $D$ has been realized, so that the other one becomes irrelevant. Therefore $E(\pi|D)$ is considered an updated version of (1) that takes into account the arrival of new information that the value of $D$ has been fixed. Now we can state the property itself: if $S,D$ are independent, then $E(S|D)=ES$, that is, conditioning on $D$ does not improve our knowledge of $S$.
Proof. In case of independence we have $p_{ij}=P(S=S_i)P(D=D_j)=p_iq_j$ for all $i,j$, so that
$E(S|D=D_j)=S_1\frac{p_1q_j}{q_j}+S_2\frac{p_2q_j}{q_j}=S_1p_1+S_2p_2=ES.$
Property 7. Conditioning in case of complete dependence. Conditioning of $D$ on $D$ gives the most precise information: $E(D|D)=D$ (if we condition on $D$, we know about it everything and there is no averaging). More generally, $E(f(D)|D)=f(D)$ for any deterministic function $f$.
Proof. If we condition on $D=D_1$, the conditional probabilities become
$P(D=D_1|D=D_1)=1,\quad P(D=D_2|D=D_1)=0.$
Hence, (2) gives
$E(D|D=D_1)=D_1\times1+D_2\times0=D_1.$
Conditioning on $D=D_2$ is treated similarly.
Summary
Not many people know that using the notation $E_DX$ for conditional expectation instead of $E(X|D)$ makes everything much clearer. I rewrite the above properties using this notation:
Law of iterated expectations: $E(E_DX)=EX$
Generalized homogeneity: $E_D(a(D)X)=a(D)E_DX$
Additivity: For any random variables $X,Y$ we have $E_D(X+Y)=E_DX+E_DY$
Generalized linearity: For any random variables $X,Y$ and functions $a(D),b(D)$ one has $E_D(a(D)X+b(D)Y)=a(D)E_DX+b(D)E_DY$
Conditioning in case of independence: if $X$ and $D$ are independent, then $E_DX=EX$
Conditioning in case of complete dependence: $E_Df(D)=f(D)$ for any deterministic function $f$.
There will be a separate post on projectors. In the meantime, we'll have a look at simple examples that explain a lot about conditional expectations.
Examples of projectors
The name "projector" is almost self-explanatory. Imagine a point and a plane in the three-dimensional space. Draw a perpendicular from the point to the plane. The intersection of the perpendicular with the plane is the points's projection onto that plane. Note that if the point already belongs to the plane, its projection equals the point itself. Besides, instead of projecting onto a plane we can project onto a straight line.
The above description translates into the following equations. For any point $(x,y,z)$ define
(1) $P(x,y,z)=(x,y,0)$ and $Q(x,y,z)=(x,0,0).$
$P$ projects onto the $xy$-plane (which is two-dimensional) and $Q$ projects onto the $x$-axis (which is one-dimensional).
Property 1. Double application of a projector amounts to single application.
Proof. We do this just for one of the projectors. Using (1) three times we get
$P(P(x,y,z))=P(x,y,0)=(x,y,0)=P(x,y,z).$
Property 2. A successive application of two projectors yields the projection onto a subspace of a smaller dimension.
Proof. If we apply first $P$ and then $Q$, the result is
(2) $Q(P(x,y,z))=Q(x,y,0)=(x,0,0)=Q(x,y,z).$
If we change the order of the projectors, we have
(3) $P(Q(x,y,z))=P(x,0,0)=(x,0,0)=Q(x,y,z).$
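Properties 1 and 2 can be checked mechanically. The sketch below treats $P$ and $Q$ simply as functions on $\mathbb{R}^3$ as written above; the matrix representation is left for Exercise 2 further down.

```python
# Check the projector identities on an arbitrary point of R^3.
def P(v):
    x, y, z = v
    return (x, y, 0.0)     # projection onto the xy-plane

def Q(v):
    x, y, z = v
    return (x, 0.0, 0.0)   # projection onto the x-axis

v = (1.0, 2.0, 3.0)
print(P(P(v)) == P(v))     # Property 1: double application = single application
print(Q(P(v)) == Q(v))     # Property 2: QP = Q (the smaller subspace wins)
print(P(Q(v)) == Q(v))     #             PQ = Q
```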
Exercise 1. Show that both projectors are linear.
Exercise 2. Like any other linear operator in a Euclidean space, these projectors are given by some matrices. What are they?
The simple truth about conditional expectation
In the time series setup, we have a sequence of information sets $\{I_t\}$ (it's natural to assume that with time the amount of available information increases). Denote
$E_tX=E(X|I_t)$
the expectation of $X$ conditional on $I_t$. For each $t$,
$E_t$ is a projector onto the space of random functions that depend only on the information set $I_t$.
Property 1. Double application of conditional expectation gives the same result as single application:
(4) $E_t(E_tX)=E_tX$
($E_tX$ is already a function of $I_t$, so conditioning it on $I_t$ doesn't change it).
Property 2. A successive conditioning on two different information sets is the same as conditioning on the smaller one: for $s<t$,
(5) $E_s(E_tX)=E_sX,$
(6) $E_t(E_sX)=E_sX.$
Property 3. Conditional expectation is a linear operator: for any variables $X,Y$ and numbers $a,b$,
$E_t(aX+bY)=aE_tX+bE_tY.$
It's easy to see that (4)-(6) are similar to (1)-(3), respectively, but I prefer to use different names for (4)-(6). I call (4) the projector property. (5) is known as the Law of Iterated Expectations; see my post on the informational aspect for more intuition. (6) holds simply because at time $t$ the expectation $E_sX$ is already known and behaves like a constant.
Summary. (4)-(6) are easy to remember as one property. The smaller information set wins:
$E_s(E_tX)=E_t(E_sX)=E_{\min(s,t)}X.$
Law of iterated expectations: informational aspect
The notion of Brownian motion will help us. Suppose we observe a particle that moves back and forth randomly along a straight line. The particle starts at zero at time zero. The movement can be visualized by plotting time on the horizontal axis and the position of the particle on the vertical axis. $W_t$ denotes the random position of the particle at time $t$.
Figure 1. Unconditional expectation
In Figure 1, various paths starting at the origin are shown in different colors. The intersections of the paths with the vertical lines at times 0.5, 1 and 1.5 show the positions of the particle at these times. The deviations of those positions from zero to the upside and downside are assumed to be equally likely (more precisely, they are normal variables with mean zero and variance $t$).
Unconditional expectation
“In the beginning there was nothing, which exploded.” ― Terry Pratchett, Lords and Ladies
If we are at the origin (like the Big Bang), nothing has happened yet and $EW_t=0$ is the best prediction we can make for any moment $t$ (shown by the blue horizontal line in Figure 1). The usual, unconditional expectation corresponds to the empty information set.
Conditional expectation
Figure 2. Conditional expectation
In Figure 2, suppose we are at $t=0.5$. The dark blue path between $t=0$ and $t=0.5$ has been realized. We know that the particle has reached the point $W_{0.5}$ at that time. With this knowledge, we see that the paths starting at this point will have the average
(1) $E(W_t|W_{0.5})=W_{0.5}$ for $t>0.5$.
This is because the particle will continue moving randomly, with the up and down moves being equally likely. Prediction (1) is shown by the horizontal light blue line to the right of $t=0.5$. In general, this prediction is better than $EW_t=0$.
Note that for different realized paths, $W_{0.5}$ takes different values. Therefore $E(W_t|W_{0.5})$, for fixed $t$, is itself a random variable. It is a function of the event we condition the expectation on.
Law of iterated expectations
Figure 3. Law of iterated expectations
Suppose you are at time $s$ (see Figure 3). You send many agents to a future time $t>s$ to fetch the information about what will happen. They bring you the data on the means $E_tW_u$ they see for still later times $u>t$ (shown by the horizontal lines in Figure 3). Since there are many possible future realizations, you have to average the future means. For this, you will use the distributional belief you have at time $s$. The result is $E_s(E_tW_u)$. Since the up and down moves are equally likely, your distribution at time $s$ is symmetric around $W_s$. Therefore the above average will be equal to $E_sW_u=W_s$. This is the Law of Iterated Expectations, also called the tower property:
(2) $E_s(E_tW_u)=E_sW_u,\quad s<t<u.$
The knowledge of all of the future predictions $E_tW_u$, upon averaging, does not improve or change our current prediction $E_sW_u$.
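The picture can be reproduced by simulation. The sketch below uses the standard property that Brownian increments are independent and normal; the times $s=0.5$, $t=1.5$ and the conditioning value $0.3$ are arbitrary choices:

```python
# Monte Carlo illustration: the unconditional mean of W_t is 0,
# while the mean of W_t over paths observed near W_s = 0.3 is close to 0.3.
import numpy as np

rng = np.random.default_rng(1)
s, t, n_paths = 0.5, 1.5, 1_000_000

W_s = rng.normal(0.0, np.sqrt(s), n_paths)              # position at time s
W_t = W_s + rng.normal(0.0, np.sqrt(t - s), n_paths)    # independent increment to time t

print(W_t.mean())                                       # unconditional mean: close to 0
mask = np.abs(W_s - 0.3) < 0.02                         # paths with W_s observed near 0.3
print(W_t[mask].mean())                                 # close to 0.3: E(W_t | W_s) = W_s
```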
For a full mathematical treatment of conditional expectation see Lecture 10 by Gordan Zitkovic.
Conditional-mean-plus-remainder representation: we separate the main part from the remainder and find out the remainder properties. My post on properties of conditional expectation is an elementary introduction to conditioning. This is my first post in Quantitative Finance.
A brush-up on conditional expectations
Notation. Let $X$ be a random variable and let $I$ be an information set. Instead of the usual notation $E(X|I)$ for conditional expectation, in large expressions it's better to use the notation with $I$ in the subscript: $E_IX=E(X|I)$.
Generalized homogeneity. If $f(I)$ depends only on the information $I$, then $E_I[f(I)X]=f(I)E_IX$ (a function of known information is known and behaves like a constant). A special case is $E_If(I)=f(I)$. With $f(I)=E_IX$ we get $E_I(E_IX)=E_IX$. This shows that conditioning is a projector: if you project a point in a 3D space onto a 2D plane and then project the image of the point onto the same plane, the result will be the same image as from a single projection.
Additivity. $E_I(X+Y)=E_IX+E_IY$.
Law of iterated expectations (LIE). If we know about two information sets that $I_1\subset I_2$, then $E_{I_1}(E_{I_2}X)=E_{I_1}X$. I like the geometric explanation in terms of projectors: projecting a point onto a plane and then projecting the result onto a straight line lying in that plane is the same as projecting the point directly onto that straight line.
Review Properties of conditional expectation, especially the summary, where I introduce a new notation for conditional expectation. Everywhere I use the notation $E_IX$ for the expectation of $X$ conditional on the information set $I$, instead of $E(X|I)$.
This post and the previous one on conditional expectation show that conditioning is a pretty advanced notion. Many introductory books use conditions of the type $E(u|X)=0$ (the expected value of the error term conditional on the regressor is zero). Because of the complexity of conditioning, I think it's better to avoid this kind of assumption as much as possible.
Conditional variance properties
Replacing usual expectations by their conditional counterparts in the definition of variance, we obtain the definition of conditional variance:
(1) $Var_I(X)=E_I\big(X-E_IX\big)^2.$
Property 1. If $X$ and $I$ are independent, then $X^2$ and $I$ are also independent and conditioning doesn't change the variance:
$Var_I(X)=E_IX^2-(E_IX)^2=EX^2-(EX)^2=Var(X)$
(the last two terms give the shortcut for the variance of $X$).
Before we move further we need to define the conditional covariance by
$Cov_I(X,Y)=E_I\big[(X-E_IX)(Y-E_IY)\big]$
(everywhere usual expectations are replaced by conditional ones). We say that random variables $X,Y$ are conditionally uncorrelated if $Cov_I(X,Y)=0$.
Property 5. Conditional variance of a linear combination. For any random variables $X,Y$ and functions $a(I),b(I)$ one has
$Var_I\big(a(I)X+b(I)Y\big)=a^2(I)Var_I(X)+2a(I)b(I)Cov_I(X,Y)+b^2(I)Var_I(Y).$
The proof is quite similar to that in the case of usual variances, so we leave it to the reader. In particular, if $X,Y$ are conditionally uncorrelated, then the interaction term disappears:
$Var_I\big(a(I)X+b(I)Y\big)=a^2(I)Var_I(X)+b^2(I)Var_I(Y).$
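Property 5 is also easy to confirm by simulation. In the sketch below the conditioning variable takes two values, and the distributions and the functions $a(I),b(I)$ are hypothetical choices of mine, used only to exercise the formula:

```python
# Monte Carlo check of the conditional variance of a linear combination.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
I = rng.integers(0, 2, size=n)                 # information: I = 0 or 1
X = rng.normal(0, 1 + I)                       # conditional distribution of X depends on I
Y = 0.5 * X + rng.normal(0, 1, size=n)         # X and Y are conditionally correlated

a = lambda i: 2.0 + i                          # a(I), b(I): functions of the condition
b = lambda i: 1.0 - 0.5 * i

for i in (0, 1):
    x, y = X[I == i], Y[I == i]
    lhs = np.var(a(i) * x + b(i) * y)
    rhs = (a(i) ** 2 * np.var(x)
           + 2 * a(i) * b(i) * np.cov(x, y, bias=True)[0, 1]
           + b(i) ** 2 * np.var(y))
    print(i, lhs, rhs)                         # the two columns agree up to sampling error
```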
Unlike most UoL exams, here I tried to relate the theory to practical issues.
KBTU International School of Economics
Compiled by Kairat Mynbaev
The total for this exam is 41 points. You have two hours.
Everywhere provide detailed explanations. When answering please clearly indicate question numbers. You don’t need a calculator. As long as the formula you provide is correct, the numerical value does not matter.
Question 1. (12 points)
a) (2 points) At a casino, two players are playing on slot machines. Their payoffs $X$ and $Y$ are standard normal and independent. Find the joint density of the payoffs.
b) (4 points) Two other players watch the first two players and start to argue about what will be larger: the sum $X+Y$ or the difference $X-Y$. Find the joint density of $(X+Y,\,X-Y)$. Are these variables independent? Find their marginal densities.
c) (2 points) Are $X+Y$ and $X-Y$ normal? Why? What are their means and variances?
d) (2 points) Which probability is larger: or ?
e) (2 points) In this context interpret the conditional expectation $E(X+Y\mid X-Y)$. How much is it?
Reminder. The density of a normal variable $X\sim N(\mu,\sigma^2)$ is $f_X(t)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(t-\mu)^2}{2\sigma^2}\right)$.
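A simulation is not a substitute for the analytical answer, but it shows what to expect in parts b) and c). The names S and D below are just shorthand for the sum and the difference:

```python
# Simulate the sum and difference of two independent standard normal payoffs.
import numpy as np

rng = np.random.default_rng(3)
X, Y = rng.standard_normal((2, 1_000_000))
S, D = X + Y, X - Y

print(np.corrcoef(S, D)[0, 1])                # near 0: S and D are uncorrelated
print(S.mean(), S.var(), D.mean(), D.var())   # means near 0, variances near 2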
Question 2. (9 points) The distribution of the call duration $X$ of one Kcell [the largest mobile operator in KZ] customer is exponential: $f_X(t)=\lambda e^{-\lambda t}$, $t>0$. The number of customers $N$ making calls simultaneously is distributed as Poisson: $P(N=n)=e^{-\mu}\frac{\mu^n}{n!}$, $n=0,1,2,\dots$ Thus the total call duration for all customers is $S_N=X_1+\dots+X_N$ for $N\ge1$. We put $S_0=0$. Assume that customers make their decisions about calling independently.
a) (3 points) Find the general formula (when $X_1,X_2,\dots$ are identically distributed and $X_1,X_2,\dots,N$ are independent, but not necessarily exponential and Poisson, as above) for the moment generating function of $S_N$, explaining all steps.
b) (3 points) Find the moment generating functions of $X$, $N$ and $S_N$ for your particular distributions.
c) (3 points) Find the mean and variance of $S_N$. Based on the equations you obtained, can you suggest estimators of the parameters $\lambda,\mu$?
Remark. Direct observations on the exponential and Poisson distributions are not available. We have to infer their parameters by observing $S_N$. This explains the importance of the technique used in Question 2.
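For intuition about part c), here is a simulation sketch of the compound sum; the parameter values and the comparison formulas (the standard compound-Poisson moments for exponential summands) are included only as a plausibility check, not as the exam solution:

```python
# Simulate S_N = X_1 + ... + X_N with exponential X_i and Poisson N.
import numpy as np

rng = np.random.default_rng(4)
lam, mu, n_sim = 2.0, 3.0, 100_000             # arbitrary parameter choices

N = rng.poisson(mu, size=n_sim)
S = np.array([rng.exponential(1 / lam, size=n).sum() for n in N])

print(S.mean(), mu / lam)                      # E(S_N) = mu / lam
print(S.var(), 2 * mu / lam ** 2)              # Var(S_N) = 2 mu / lam^2
```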
Question 3. (8 points)
a) (2 points) For a non-negative random variable $X$ prove the Markov inequality $P(X>c)\le\frac{EX}{c}$, $c>0$.
b) (2 points) Prove the Chebyshev inequality $P(|X-EX|>c)\le\frac{Var(X)}{c^2}$ for an arbitrary random variable $X$.
c) (4 points) We say that a sequence of random variables $X_n$ converges in probability to a random variable $X$ if $P(|X_n-X|>\varepsilon)\to0$ as $n\to\infty$ for any $\varepsilon>0$. Suppose that $EX_n=a$ for all $n$ and that $Var(X_n)\to0$ as $n\to\infty$. Prove that then $X_n$ converges in probability to $a$.
Remark. Question 3 leads to the simplest example of a law of large numbers: if $X_1,X_2,\dots$ are i.i.d. with finite variance, then their sample mean converges to their population mean in probability.
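The Remark can be illustrated numerically. The distribution (exponential with mean 1) and the tolerance $\varepsilon$ below are arbitrary; the point is only that the frequency of large deviations of the sample mean shrinks with the sample size:

```python
# Frequency of |sample mean - population mean| > eps for growing sample sizes.
import numpy as np

rng = np.random.default_rng(5)
eps, n_rep = 0.1, 10_000

for n in (10, 100, 1000):
    means = rng.exponential(1.0, size=(n_rep, n)).mean(axis=1)   # population mean is 1
    print(n, np.mean(np.abs(means - 1.0) > eps))                 # shrinks toward 0
```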
Question 4. (8 points)
a) (4 points) Define a distribution function. Give its properties, with intuitive explanations.
b) (4 points) Is a sum of two distribution functions a distribution function? Is a product of two distribution functions a distribution function?
Remark. The answer for part a) is here and the one for part b) is based on it.
Question 5. (4 points) The Rakhat factory prepares prizes for kids for the upcoming New Year event. Each prize contains one type of chocolates and one type of candies. The chocolates and candies are chosen randomly from two production lines, the total number of items is always 10 and all selections are equally likely.
a) (2 points) What proportion of prepared prizes contains three or more chocolates?
b) (2 points) 100 prizes have been sent to an orphanage. What is the probability that 50 of those prizes contain no more than two chocolates?
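For a quick numerical check of Question 5, one can read "all selections are equally likely" as saying that the number of chocolates in a prize is equally likely to be $0,1,\dots,10$; that reading is my assumption, and the code below is only a sketch under it:

```python
# Question 5 under the uniform-composition assumption (each of 11 compositions has prob 1/11).
from math import comb

p_three_or_more = sum(1 / 11 for k in range(3, 11))       # part a): 8/11
p_at_most_two = 3 / 11                                     # one prize has <= 2 chocolates
p_fifty_of_hundred = comb(100, 50) * p_at_most_two**50 * (1 - p_at_most_two)**50   # part b)

print(p_three_or_more, p_fifty_of_hundred)
```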
There is a problem I gave on the midterm that does not require much imagination: just know the definitions and do the technical work. I was hoping we could put this behind us; it turned out we could not, and thus you see this post.
Problem. Suppose the joint density of variables is given by
I. Find the normalizing constant.
II. Find the marginal densities of $X,Y$. Are $X,Y$ independent?
III. Find the conditional densities $f_{X|Y}$ and $f_{Y|X}$.
IV. Find $EX$, $EY$, $E(X|Y)$ and $E(Y|X)$.
When solving a problem like this, the first thing to do is to give the theory. You may not be able to finish the long calculations without errors, but your grade will be determined by the opening theoretical remarks.
I. Finding the normalizing constant
Any density should satisfy the completeness axiom: the area under the density curve (or, in this case, the volume under the density surface) must be equal to one: $\iint f(x,y)\,dxdy=1$. The constant chosen to satisfy this condition is called a normalizing constant. The integration, in general, is over the whole plane, and the first task is to express the above integral as an iterated integral. This is where the domain where the density is not zero should be taken into account. There is little you can do without geometry. One example of how to do this is here.
The shape of the area is determined by a) the extreme values of $x,y$ and b) the relationship between them. The extreme values are 0 and 1 for both $x$ and $y$, meaning that the support is contained in the square $\{0\le x\le1,\ 0\le y\le1\}$. The inequality $x\le y$ means that we cut out of this square the triangle below the line $y=x$ (it is really the lower triangle because if from a point on the line we move down vertically, $x$ will stay the same and $y$ will become smaller than $x$).
In the iterated integral:
a) the lower and upper limits of integration for the inner integral are the boundaries for the inner variable; they may depend on the outer variable but not on the inner variable.
b) the lower and upper limits of integration for the outer integral are the extreme values for the outer variable; they must be constant.
This is illustrated in Pane A of Figure 1.
Figure 1. Integration order
Always take the inner integral in parentheses to show that you are dealing with an iterated integral.
a) In the inner integral, integrating over $x$ means moving along the blue arrows from the boundary $x=0$ to the boundary $x=y$. The boundaries may depend on $y$ but not on $x$ because the outer integral is over $y$.
b) In the outer integral put the extreme values for the outer variable. Thus,
$\int_0^1\left(\int_0^y f(x,y)\,dx\right)dy=1.$
Check that if we first integrate over $y$ (vertically along the red arrows, see Pane B in Figure 1) then the equation
$\int_0^1\left(\int_x^1 f(x,y)\,dy\right)dx=1$
results.
In fact, from the definition of the support one can see that the inner interval for $x$ is $[0,y]$ and for $y$ it is $[x,1]$.
II. Marginal densities
The marginal densities are obtained by integrating out the other variable: $f_X(x)=\int f(x,y)\,dy$, $f_Y(y)=\int f(x,y)\,dx$. The condition for independence of $X,Y$ is $f(x,y)=f_X(x)f_Y(y)$ (this is a direct analog of the independence condition for events: $P(A\cap B)=P(A)P(B)$). In words: the joint density decomposes into a product of individual densities.
III. Conditional densities
In this case the easiest way is to recall the definition of conditional probability $P(A|B)=\frac{P(A\cap B)}{P(B)}$. The definition of conditional densities is quite similar:
(2) $f_{X|Y}(x|y)=\frac{f(x,y)}{\int f(x,y)\,dx},\qquad f_{Y|X}(y|x)=\frac{f(x,y)}{\int f(x,y)\,dy}$.
Of course, here the integrals can be replaced by their marginal equivalents $f_Y(y)$ and $f_X(x)$.
IV. Finding expected values of $X,Y$
The usual definition $EX=\iint xf(x,y)\,dxdy$ takes an equivalent form using the marginal density:
$EX=\int xf_X(x)\,dx.$
Which equation to use is a matter of convenience.
Another replacement in the usual definition gives the definition of conditional expectations:
$E(X|Y=y)=\int xf_{X|Y}(x|y)\,dx,\qquad E(Y|X=x)=\int yf_{Y|X}(y|x)\,dy.$
Note that these are random variables: $E(X|Y)$ depends on $Y$ and $E(Y|X)$ depends on $X$.
Solution to the problem
Being a lazy guy, for the problem this post is about I provide answers found in Mathematica:
I.
II. for for
It is readily seen that the independence condition is not satisfied.
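Since the density from the midterm is not reproduced above, here is the same workflow in code for a hypothetical density $f(x,y)=c\,xy$ on the triangle $\{0\le x\le y\le1\}$; it is a sketch of the mechanics (normalizing constant, a marginal, an expected value), not the solution to the original problem:

```python
# Iterated integration over the triangle 0 <= x <= y <= 1 for f(x, y) = c * x * y.
from scipy.integrate import dblquad, quad

# inner variable x runs from 0 to y, outer variable y runs from 0 to 1
volume, _ = dblquad(lambda x, y: x * y, 0, 1, lambda y: 0, lambda y: y)
c = 1 / volume                                        # normalizing constant (here c = 8)

f = lambda x, y: c * x * y
f_Y = lambda y: quad(lambda x: f(x, y), 0, y)[0]      # marginal of Y: integrate x out
EX, _ = dblquad(lambda x, y: x * f(x, y), 0, 1, lambda y: 0, lambda y: y)

print(c, f_Y(0.5), EX)
```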
There are three companies, called A, B, and C, and each has a 4% chance of going bankrupt. The event that one of the three companies will go bankrupt is independent of the event that any other company will go bankrupt.
Company A has outstanding bonds, and a bond will have a net return of 0% if the corporation does not go bankrupt, but it will have a net return of -100%, i.e., losing everything invested, if it goes bankrupt. Suppose an investor buys $1000 worth of bonds of company A, which we will refer to as portfolio A.
Suppose also that there exists a security whose payout depends on the bankruptcy of companies B and C in a joint fashion. In particular, if neither B nor C goes bankrupt, this derivative will have a net return of 0%. If exactly one of B or C goes bankrupt, it will have a net return of -50%, i.e., losing half of the investment. If both B and C go bankrupt, it will have a net return of -100%, i.e., losing the whole investment. Suppose an investor buys $1000 worth of this derivative, which is then called portfolio S.
(a) Calculate the VaR at the critical level for portfolios A and S. [30 marks]
Independence of events. Denote by $A$ and $\bar A$ the events that company A goes bankrupt and does not go bankrupt, respectively. A similar notation will be used for the other two companies. The simple definition of independence of bankruptcy events would be too difficult to apply to prove independence of all events that we need. A general definition of independence of variables is that their sigma-fields are independent (it will not be explained here). This general definition implies that in all cases below we can use multiplicativity of probability, such as
$P(\bar B\cap\bar C)=P(\bar B)P(\bar C)=0.96^2=0.9216,\quad P(B\cap C)=0.04^2=0.0016,$
$P\big((B\cap\bar C)\cup(\bar B\cap C)\big)=2\times0.96\times0.04=0.0768.$
The events here have a simple interpretation: the first is that "neither B nor C fails", the second is that "both B and C fail", and the third is that "either (B fails and C does not) or (B does not fail and C does)" (the last two sub-events do not intersect and additivity of probability applies).
Let $R_A$, $R_S$ be the returns on A and the security S, respectively. From the problem statement it follows that these returns are described by the tables below.
Table 1
$R_A$, %   Prob
0          0.96
-100       0.04
Table 2
$R_S$, %   Prob
0          0.9216
-50        0.0768
-100       0.0016
Everywhere we will be working with percentages, so the dollar values don’t matter.
From Table 1 we conclude that the distribution function of return on A looks as follows:
Figure 1. Distribution function of portfolio A
At $-100$ the function jumps up by 0.04, at $0$ by another 0.96. The dashed line at the critical level is used in the definition of the VaR through the generalized inverse.
From Table 2 we see that the distribution function of return on S looks like this:
The first jump is at $-100$, the second at $-50$ and the third one at $0$. As above, the VaR follows from the definition of the generalized inverse.
(b) Calculate the VaR at the critical level for the joint portfolio A+S. [20 marks]
To find the return distribution for A+S, we have to consider all pairs of events from Tables 1 and 2, using independence.
1. $R_A=0$, $R_S=0$: probability $0.96\times0.9216=0.884736$.
2. $R_A=-100$, $R_S=0$: probability $0.04\times0.9216=0.036864$.
3. $R_A=0$, $R_S=-50$: probability $0.96\times0.0768=0.073728$.
4. $R_A=-100$, $R_S=-50$: probability $0.04\times0.0768=0.003072$.
5. $R_A=0$, $R_S=-100$: probability $0.96\times0.0016=0.001536$.
6. $R_A=-100$, $R_S=-100$: probability $0.04\times0.0016=0.000064$.
Since we deal with a joint portfolio, percentages for separate portfolios should be translated into ones for the whole portfolio. For example, the loss of 100% on one portfolio and 0% on the other means 50% on the joint portfolio (investments are equal). There are two such losses, in lines 2 and 5, so the probabilities should be added. Thus, we obtain the table for the return on the joint portfolio:
Table 3
$R_{A+S}$, %   Prob
0              0.884736
-25            0.073728
-50            0.0384
-75            0.003072
-100           0.000064
Here only the first probability exceeds 0.1, and the VaR of the joint portfolio follows from the definition of the generalized inverse.
(c) Is VaR sub-additive in this example? Explain why the absence of sub-additivity may be a concern for risk managers. [20 marks]
To check sub-additivity, we need to pass to positive numbers, as explained in other posts. Zeros remain zeros, the inequality is true, so sub-additivity holds in this example. Lack of sub-additivity is an undesirable property for risk managers because, without it, keeping the VaR low for the parts of a portfolio does not guarantee a low VaR for the whole portfolio.
(d) The expected shortfall at the critical level $\alpha$ can be defined as
$ES^\alpha_t=E_t\big[X_{t+1}\,\big|\,X_{t+1}\le-VaR^\alpha_{t+1}\big],$
where $X_{t+1}$ is a return or dollar amount. Calculate the expected shortfall at the critical level for portfolio A+S. Is this risk measure sub-additive? [30 marks]
Using the definition of conditional expectation and Table 3, the expected shortfall can be computed directly (the time subscript can be omitted because the problem is static).
There is a theoretical property that the expected shortfall is sub-additive.
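The whole computation fits in a few lines of code. The tables are the ones above; the critical level used below (5%) is my assumption for illustration, since the level from the exam is not reproduced here, so the printed numbers are a sketch rather than the official answers:

```python
# VaR via the generalized inverse and expected shortfall for discrete return tables.
import numpy as np

def quantile(returns, probs, alpha):
    """Generalized inverse of the distribution function: smallest r with F(r) >= alpha."""
    order = np.argsort(returns)
    r, F = np.array(returns, float)[order], np.cumsum(np.array(probs)[order])
    return r[np.searchsorted(F, alpha)]

A  = ([0, -100], [0.96, 0.04])                                   # Table 1
S  = ([0, -50, -100], [0.9216, 0.0768, 0.0016])                  # Table 2
AS = ([0, -25, -50, -75, -100],
      [0.884736, 0.073728, 0.0384, 0.003072, 0.000064])          # Table 3

alpha = 0.05                                                     # assumed critical level
for name, (r, p) in {"A": A, "S": S, "A+S": AS}.items():
    q = quantile(r, p, alpha)
    r_arr, p_arr = np.array(r, float), np.array(p)
    tail = r_arr <= q                                            # returns at or below the quantile
    es = (r_arr[tail] * p_arr[tail]).sum() / p_arr[tail].sum()   # E(R | R <= q)
    print(name, "VaR =", -q, "ES =", -es)
```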
Students of FN3142 often think that they can get by with a few technical tricks. The questions below are mostly about the intuition that helps to understand and apply those tricks.
Everywhere we assume that $X_t$ is a time series and $I_t$ is a sequence of corresponding information sets. It is natural to assume that $I_t\subset I_{t+1}$ for all $t$. We use the short conditional expectation notation: $E_tX=E(X|I_t)$.
Questions
Question 1. How do you calculate conditional expectation in practice?
Question 2. How do you explain ?
Question 3. Simplify each of and and explain intuitively.
Question 4. $\varepsilon_t$ is a shock at time $t$. Positive and negative shocks are equally likely. What is your best prediction now for tomorrow's shock? What is your best prediction now for the shock that will happen the day after tomorrow?
Question 5. How and why do you predict $X_{t+1}$ at time $t$? What is the conditional mean of your prediction?
Question 6. What is the error of such a prediction? What is its conditional mean?
Question 7. Answer the previous two questions replacing $X_{t+1}$ by $X_{t+p}$.
Question 8. What is the mean-plus-deviation-from-mean representation (conditional version)?
Question 9. How is the representation from Q.8 reflected in variance decomposition?
Question 10. What is a canonical form? State and prove all properties of its parts.
Question 11. Define conditional variance for white noise process and establish its link with the unconditional one.
Question 12. How do you define the conditional density in case of two variables, when one of them serves as the condition? Use it to prove the LIE.
Question 13. Write down the joint distribution function for a) independent observations and b) for serially dependent observations.
Question 14. If one variable is a linear function of another, what is the relationship between their densities?
Question 15. What can you say about the relationship between if ? Explain geometrically the definition of the quasi-inverse function.
Answers
Answer 1. Conditional expectation is a complex notion. There are several definitions of differing levels of generality and complexity. See one of them here and another in Answer 12.
The point of this exercise is that any definition requires a lot of information, and in practice there is no way to apply any of them to actually calculate the conditional expectation. Then why do they juggle conditional expectation in theory? The efficient market hypothesis comes to the rescue: it is posited that all observed market data incorporate all available information and, in particular, stock prices are already conditioned on $I_t$.
Answer 4. Since positive and negative shocks are equally likely, the best prediction is $E_t\varepsilon_{t+1}=0$ (I call this equation a martingale condition). Similarly, $E_t\varepsilon_{t+2}=0$, but in this case I prefer to see an application of the LIE: $E_t\varepsilon_{t+2}=E_t(E_{t+1}\varepsilon_{t+2})=E_t(0)=0$.
Answer 5. The best prediction is $E_tX_{t+1}$ because it minimizes $E_t(X_{t+1}-f)^2$ among all functions $f$ of the current information $I_t$. Formally, you can use the first-order condition $\frac{d}{df}E_t(X_{t+1}-f)^2=-2\big(E_tX_{t+1}-f\big)=0$, which gives $f=E_tX_{t+1}$.
Answer 6. It is natural to define the prediction error by
$e_{t+1}=X_{t+1}-E_tX_{t+1}.$
By the projector property, $E_te_{t+1}=E_tX_{t+1}-E_t(E_tX_{t+1})=E_tX_{t+1}-E_tX_{t+1}=0$.
Answer 7. To generalize, just change the subscripts. For the prediction we have to use two subscripts: the notation $\hat X_{t,t+p}=E_tX_{t+p}$ means that we are trying to predict what happens at the future date $t+p$ based on the info set $I_t$ (time $t$ is like today). Then by definition the prediction error is $e_{t,t+p}=X_{t+p}-E_tX_{t+p}$, and, as in Answer 6, $E_te_{t,t+p}=0$.
Answer 8. Answer 7, obviously, implies the conditional mean-plus-remainder representation $X_{t+p}=E_tX_{t+p}+e_{t,t+p}$. The simple case is here.
Answer 12. The conditional density is defined similarly to the conditional probability. Let $X,Y$ be two random variables. Denote by $f_Y$ the density of $Y$ and by $f_{X,Y}$ the joint density of the pair. Then the conditional density of $X$ conditional on $Y$ is defined as $f_{X|Y}(x|y)=\frac{f_{X,Y}(x,y)}{f_Y(y)}$. After this we can define the conditional expectation $E(X|Y=y)=\int xf_{X|Y}(x|y)\,dx$. With these definitions one can prove the Law of Iterated Expectations:
$E[E(X|Y)]=\int\Big(\int xf_{X|Y}(x|y)\,dx\Big)f_Y(y)\,dy=\iint xf_{X,Y}(x,y)\,dxdy=EX.$
This is an illustration to Answer 1 and a prelim to Answer 13.
Answer 13. Understanding this answer is essential for Section 8.6 on maximum likelihood of Patton's guide.
a) In case of independent observations the joint density of the vector $X=(X_1,\dots,X_n)$ is a product of individual densities:
$f_X(x_1,\dots,x_n)=f_{X_1}(x_1)\cdots f_{X_n}(x_n).$
b) In the time series context it is natural to assume that the next observation depends on the previous ones, that is, for each $t$, $X_t$ depends on $X_1,\dots,X_{t-1}$ (serially dependent observations). Therefore we should work with conditional densities $f_{X_t|X_1,\dots,X_{t-1}}$. From Answer 12 we can guess how to make conditional densities appear:
$f_{X_1,\dots,X_n}=\frac{f_{X_1,\dots,X_n}}{f_{X_1,\dots,X_{n-1}}}\cdot\frac{f_{X_1,\dots,X_{n-1}}}{f_{X_1,\dots,X_{n-2}}}\cdots\frac{f_{X_1,X_2}}{f_{X_1}}\cdot f_{X_1}.$
The fractions on the right are recognized as conditional densities. The resulting expression is pretty awkward:
$f_{X_1,\dots,X_n}(x_1,\dots,x_n)=f_{X_n|X_1,\dots,X_{n-1}}(x_n|x_1,\dots,x_{n-1})\,f_{X_{n-1}|X_1,\dots,X_{n-2}}(x_{n-1}|x_1,\dots,x_{n-2})\cdots f_{X_2|X_1}(x_2|x_1)\,f_{X_1}(x_1).$
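Awkward as it looks, this factorization is exactly what maximum likelihood uses in the time series context. As a sketch (my own toy example, not Patton's), for a Gaussian AR(1) model the conditional densities depend only on the previous observation, so the log-likelihood is the log density of the first observation plus a sum of log conditional densities:

```python
# Log-likelihood of a Gaussian AR(1) sample built from the conditional-density factorization.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
rho, sigma, n = 0.6, 1.0, 500

x = np.zeros(n)
x[0] = rng.normal(0, sigma / np.sqrt(1 - rho**2))        # stationary start
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal(0, sigma)

def ar1_loglik(rho, sigma, x):
    ll = norm.logpdf(x[0], 0, sigma / np.sqrt(1 - rho**2))      # f_{X_1}
    ll += norm.logpdf(x[1:], rho * x[:-1], sigma).sum()         # sum of f_{X_t | X_{t-1}}
    return ll

print(ar1_loglik(0.6, 1.0, x), ar1_loglik(0.0, 1.0, x))   # the true rho typically scores higher
```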
Answer 14. The answer given here helps one understand how to pass from the density of the standard normal to that of the general normal.
Answer 15. This elementary explanation of the inverse function definition can be used in the fifth grade. Note that the conditions sufficient for the existence of the inverse are not satisfied in a case as simple as the distribution function of a Bernoulli variable (the graph of the function has flat pieces and is not continuous). Therefore we need a more general definition of an inverse. Those who think that this question is too abstract can check out UoL exams, where examinees are required to find the Value at Risk when the distribution function is a step function. To understand the idea, do the following:
a) Draw a graph of a good function $y=F(x)$ (continuous and increasing).
b) Fix some value $y_0$ in the range of this function and identify the region $\{y:\ y\ge y_0\}$ on the vertical axis.
c) Find the solution $x_0$ of the equation $F(x)=y_0$. By definition, $x_0=F^{-1}(y_0)$. Identify the region $\{x:\ F(x)\ge y_0\}$ on the horizontal axis.
d) Note that $x_0=\min\{x:\ F(x)\ge y_0\}$. In general, for bad functions the minimum here may not exist. Therefore the minimum is replaced by the infimum, which gives us the definition of the quasi-inverse:
$F^{-1}(y_0)=\inf\{x:\ F(x)\ge y_0\}.$
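For a step function, the quasi-inverse is easy to compute on a grid. The sketch below uses a Bernoulli distribution function as in the note above; the grid search is just one simple way to approximate the infimum:

```python
# Quasi-inverse F^{-1}(y0) = inf{x : F(x) >= y0} for a step distribution function.
import numpy as np

def quasi_inverse(F, y0, grid):
    """First grid point x with F(x) >= y0 (approximates the infimum)."""
    values = np.array([F(x) for x in grid])
    return grid[np.argmax(values >= y0)]

p = 0.3
F_bernoulli = lambda x: 0.0 if x < 0 else (1 - p if x < 1 else 1.0)

grid = np.linspace(-1, 2, 3001)
print(quasi_inverse(F_bernoulli, 0.5, grid))    # 0.0: F jumps to 0.7 >= 0.5 at x = 0
print(quasi_inverse(F_bernoulli, 0.9, grid))    # 1.0: only at x = 1 does F reach 0.9
```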