## Checklist for Quantitative Finance FN3142

Students of FN3142 often think that they can get by with a few technical tricks. The questions below are mostly about the intuition that helps to understand and apply those tricks.

Everywhere we assume that $\{y_t\}$ is a time series and $\{I_t\}$ is a sequence of corresponding information sets. It is natural to assume that $I_t \subset I_{t+1}$ for all $t$. We use the short conditional expectation notation: $E_t[X] = E[X|I_t]$.

### Questions

**Question 1**. How do you calculate conditional expectation in practice?

**Question 2**. How do you explain $E_t[E_t[X]] = E_t[X]$?

**Question 3**. Simplify each of $E_t[E_{t+1}[X]]$ and $E_{t+1}[E_t[X]]$ and explain intuitively.

**Question 4**. Suppose $\varepsilon_t$ is a *shock* at time $t$. Positive and negative shocks are equally likely. What is your best prediction now for tomorrow's shock? What is your best prediction now for the shock that will happen the day after tomorrow?

**Question 5**. How and why do you predict $y_{t+1}$ at time $t$? What is the conditional mean of your prediction?

**Question 6**. What is the error of such a prediction? What is its conditional mean?

**Question 7**. Answer the previous two questions replacing $y_{t+1}$ by $y_{t+s}$.

**Question 8**. What is the mean-plus-deviation-from-mean representation (conditional version)?

**Question 9**. How is the representation from Q.8 reflected in variance decomposition?

**Question 10**. What is a canonical form? State and prove all properties of its parts.

**Question 11**. Define the conditional variance of a white noise process and establish its link with the unconditional one.

**Question 12**. How do you define the conditional density in case of two variables, when one of them serves as the condition? Use it to prove the LIE.

**Question 13**. Write down the joint distribution function for a) independent observations and b) for serially dependent observations.

**Question 14**. If one variable is a linear function of another, what is the relationship between their densities?

**Question 15**. What can you say about the relationship between a distribution function and its inverse when the usual inverse may fail to exist? Explain geometrically the definition of the quasi-inverse function.

### Answers

**Answer 1**. Conditional expectation is a complex notion. There are several definitions of differing levels of generality and complexity. See one of them here and another in Answer 12.

The point of this exercise is that any definition requires a lot of information and in practice there is no way to apply any of them to actually calculate conditional expectation. Then why do they juggle conditional expectation in theory? The efficient market hypothesis comes to the rescue: it is posited that all observed market data incorporate all available information, and, in particular, stock prices are already conditioned on $I_t$.
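That said, with enough data one can *approximate* a conditional expectation, for example by averaging $y$ over observations whose conditioning variable falls near the value of interest. A minimal sketch (the data-generating process and window width are invented for illustration):

```python
import random

random.seed(0)

# Simulated data with E[Y | X = x] = x**2 (illustrative choice).
xs = [random.uniform(0.0, 2.0) for _ in range(100_000)]
ys = [x * x + random.gauss(0.0, 0.1) for x in xs]

def cond_mean_estimate(x0, halfwidth=0.05):
    """Estimate E[Y | X = x0] by averaging Y over a small window around x0."""
    window = [y for x, y in zip(xs, ys) if abs(x - x0) < halfwidth]
    return sum(window) / len(window)

estimate = cond_mean_estimate(1.0)  # true value is 1.0
```

This local-averaging idea is the crudest nonparametric regression; it already shows why real conditional expectations are out of reach: the window must contain many observations with essentially the same conditioning information.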

**Answers 2 and 3**. This is the best explanation I have.

**Answer 4**. Since positive and negative shocks are equally likely, the best prediction is $E_t[\varepsilon_{t+1}] = 0$ (I call this equation a **martingale condition**). Similarly, $E_t[\varepsilon_{t+2}] = 0$, but in this case I prefer to see an application of the LIE:

$E_t[\varepsilon_{t+2}] = E_t\big[E_{t+1}[\varepsilon_{t+2}]\big] = E_t[0] = 0.$
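A quick simulation makes the martingale condition tangible (the ±1 shock distribution is an illustrative choice; any symmetric distribution works):

```python
import random

random.seed(1)

# Shocks are +1 or -1 with equal probability, independent across days.
n_paths = 100_000
shock_tomorrow = [random.choice([1.0, -1.0]) for _ in range(n_paths)]
shock_day_after = [random.choice([1.0, -1.0]) for _ in range(n_paths)]

# The best prediction today of either future shock is its mean, which is 0.
mean_tomorrow = sum(shock_tomorrow) / n_paths
mean_day_after = sum(shock_day_after) / n_paths
```

Both sample means are close to zero, matching the prediction $E_t[\varepsilon_{t+1}] = E_t[\varepsilon_{t+2}] = 0$.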

**Answer 5**. The **best prediction** is $\hat{y}_{t+1} = E_t[y_{t+1}]$ because it minimizes $E_t\big[(y_{t+1} - f)^2\big]$ among all functions $f$ of current information $I_t$. Formally, you can use the first-order condition

$\dfrac{d}{df}\,E_t\big[(y_{t+1} - f)^2\big] = -2\,E_t[y_{t+1} - f] = 0$

to find that $f = E_t[y_{t+1}]$ is the minimizing function. By the projector property, $E_t[\hat{y}_{t+1}] = E_t\big[E_t[y_{t+1}]\big] = E_t[y_{t+1}]$.

**Answer 6**. It is natural to define the **prediction error** by

$e_{t+1} = y_{t+1} - \hat{y}_{t+1} = y_{t+1} - E_t[y_{t+1}].$

By the projector property, $E_t[e_{t+1}] = E_t[y_{t+1}] - E_t[y_{t+1}] = 0$.

**Answer 7**. To generalize, just change the subscripts. For the prediction we have to use two subscripts: the notation $\hat{y}_{t+s,t}$ means that we are trying to predict what happens at a future date $t+s$ based on the info set $I_t$ (time $t$ is like today). Then by definition $\hat{y}_{t+s,t} = E_t[y_{t+s}]$, $e_{t+s,t} = y_{t+s} - E_t[y_{t+s}]$, and $E_t[e_{t+s,t}] = 0$.

**Answer 8**. Answer 7, obviously, implies $y_{t+s} = E_t[y_{t+s}] + e_{t+s,t}$: the variable splits into its conditional mean plus the deviation from that mean. The simple case is here.

**Answer 9**. See the law of total variance and change it to reflect conditioning on $I_t$.
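The law of total variance, $V(Y) = E[V(Y|X)] + V(E[Y|X])$, can be verified exactly on a small discrete example (the coin-plus-die setup below is invented for illustration; exact rational arithmetic avoids rounding):

```python
from fractions import Fraction
from itertools import product

# X is a fair coin (0 or 1), Y = X + D where D is a fair die roll.
outcomes = [(x, x + d, Fraction(1, 12)) for x, d in product([0, 1], range(1, 7))]

def e(f):
    """Expectation of f(X, Y) under the joint distribution."""
    return sum(p * f(x, y) for x, y, p in outcomes)

ey = e(lambda x, y: y)
var_y = e(lambda x, y: (y - ey) ** 2)

def cond(x0):
    """Return P(X = x0), E[Y | X = x0], V(Y | X = x0)."""
    pts = [(y, p) for x, y, p in outcomes if x == x0]
    px = sum(p for _, p in pts)
    m = sum(p * y for y, p in pts) / px
    v = sum(p * (y - m) ** 2 for y, p in pts) / px
    return px, m, v

p0, m0, v0 = cond(0)
p1, m1, v1 = cond(1)
e_cond_var = p0 * v0 + p1 * v1                              # E[V(Y|X)]
var_cond_mean = p0 * (m0 - ey) ** 2 + p1 * (m1 - ey) ** 2   # V(E[Y|X])
```

Here the decomposition holds exactly: the within-group variance $E[V(Y|X)]$ and the between-group variance $V(E[Y|X])$ sum to $V(Y)$.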

**Answer 10**. See canonical form.

**Answer 11**. Combine the conditional variance definition with the white noise definition.

**Answer 12**. The conditional density is defined similarly to the conditional probability. Let $X, Y$ be two random variables. Denote by $p_X$ the density of $X$ and by $p_{X,Y}$ the joint density. Then the **conditional density** of $Y$ conditional on $X$ is defined as

$p_{Y|X}(y|x) = \dfrac{p_{X,Y}(x,y)}{p_X(x)}.$

After this we can define the **conditional expectation**

$E[Y|X=x] = \displaystyle\int y\, p_{Y|X}(y|x)\,dy.$

With these definitions one can prove the Law of Iterated Expectations:

$E\big[E[Y|X]\big] = E[Y].$
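The proof is a short calculation (a sketch using the densities $p_X$, $p_{X,Y}$, $p_{Y|X}$ defined above):

```latex
E\big[E[Y|X]\big]
  = \int E[Y|X=x]\, p_X(x)\,dx
  = \int\!\!\int y\, p_{Y|X}(y|x)\, p_X(x)\,dy\,dx
  = \int\!\!\int y\, p_{X,Y}(x,y)\,dy\,dx
  = E[Y].
```

The middle step just multiplies the conditional density back by the marginal, recovering the joint density.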

This is an illustration to Answer 1 and a prelude to Answer 13.

**Answer 13**. Understanding this answer is essential for Section 8.6 on maximum likelihood in Patton's guide.

a) In case of independent observations the joint density of the vector $(y_1, \dots, y_n)$ is a product of individual densities:

$p(y_1, \dots, y_n) = p(y_1) \cdots p(y_n).$

b) In the time series context it is natural to assume that the next observation depends on the previous ones, that is, for each $t$, $y_t$ depends on $y_1, \dots, y_{t-1}$ (**serially dependent observations**). Therefore we should work with conditional densities $p(y_t | y_1, \dots, y_{t-1})$. From Answer 12 we can guess how to make conditional densities appear:

$p(y_1, \dots, y_n) = \dfrac{p(y_1, \dots, y_n)}{p(y_1, \dots, y_{n-1})} \cdot \dfrac{p(y_1, \dots, y_{n-1})}{p(y_1, \dots, y_{n-2})} \cdots \dfrac{p(y_1, y_2)}{p(y_1)} \cdot p(y_1).$

The fractions on the right are recognized as conditional densities. The resulting expression is pretty awkward:

$p(y_1, \dots, y_n) = p(y_n | y_1, \dots, y_{n-1}) \cdot p(y_{n-1} | y_1, \dots, y_{n-2}) \cdots p(y_2 | y_1) \cdot p(y_1).$
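The factorization $p(x, y) = p(y|x)\,p(x)$ can be checked numerically in the bivariate normal case, where both sides have closed forms (the correlation value and check points are arbitrary):

```python
import math

def phi(z):
    """Standard normal density."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def joint_pdf(x, y, rho):
    """Standard bivariate normal density with correlation rho."""
    q = (x * x - 2 * rho * x * y + y * y) / (1 - rho ** 2)
    return math.exp(-q / 2) / (2 * math.pi * math.sqrt(1 - rho ** 2))

def cond_pdf(y, x, rho):
    """Density of Y | X = x, which is N(rho * x, 1 - rho^2)."""
    s = math.sqrt(1 - rho ** 2)
    return phi((y - rho * x) / s) / s

# Verify p(x, y) = p(y | x) * p(x) at a few points.
rho = 0.6
checks = [(0.5, -1.2), (1.0, 2.0), (-0.3, 0.4)]
max_gap = max(abs(joint_pdf(x, y, rho) - cond_pdf(y, x, rho) * phi(x))
              for x, y in checks)
```

For a Gaussian AR(1) this is exactly the two-observation case of the factorization above, and the log of the product is the log-likelihood used in maximum likelihood estimation.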

**Answer 14**. The answer given here helps one understand how to pass from the density of the standard normal to that of the general normal distribution.
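Concretely, if $Y = \mu + \sigma X$ with $\sigma > 0$, the densities are related by $p_Y(y) = p_X\big((y-\mu)/\sigma\big)/\sigma$. A numeric check for the normal case (the values of $\mu$ and $\sigma$ are arbitrary):

```python
import math

def std_normal_pdf(z):
    """Density of N(0, 1)."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def normal_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2), written directly."""
    return math.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# p_Y(y) = p_X((y - mu) / sigma) / sigma should reproduce the N(mu, sigma^2) density.
mu, sigma = 1.5, 2.0
gap = max(abs(normal_pdf(y, mu, sigma) - std_normal_pdf((y - mu) / sigma) / sigma)
          for y in [-2.0, 0.0, 1.5, 4.0])
```

The two expressions agree at every check point, which is exactly the passage from the standard normal density to the general one.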

**Answer 15**. This elementary explanation of the inverse function definition can be used in the fifth grade. Note that the conditions sufficient for the existence of the inverse are not satisfied in a case as simple as the distribution function of a Bernoulli variable (the graph of that function has flat pieces and is not continuous). Therefore we need a more general definition of an inverse. Those who think that this question is too abstract can check out UoL exams, where examinees are required to find the Value at Risk when the distribution function is a step function. To understand the idea, do the following:

a) Draw a graph of a good function $F$ (continuous and increasing).

b) Fix some value $y_0$ in the range of this function and identify the region $\{y : y \ge y_0\}$.

c) Find the solution $x_0$ of the equation $F(x) = y_0$. By definition, $x_0 = F^{-1}(y_0)$. Identify the region $\{x : F(x) \ge y_0\}$.

d) Note that $F^{-1}(y_0) = \min\{x : F(x) \ge y_0\}$. In general, for bad functions the minimum here may not exist. Therefore minimum is replaced by infimum, which gives us the definition of the **quasi-inverse**:

$F^{-1}(y) = \inf\{x : F(x) \ge y\}.$
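The recipe can be tried on the Bernoulli example mentioned above. A minimal sketch (the parameter $p = 0.3$ and the grid are arbitrary choices; the grid approximates the infimum):

```python
def bernoulli_cdf(x, p=0.3):
    """Distribution function of a Bernoulli(p) variable: a step function."""
    if x < 0:
        return 0.0
    if x < 1:
        return 1.0 - p
    return 1.0

def quasi_inverse(cdf, u, grid):
    """F^{-1}(u) = inf{x : F(x) >= u}, approximated on a finite grid."""
    return min(x for x in grid if cdf(x) >= u)

grid = [i / 100 for i in range(-100, 201)]  # x from -1.0 to 2.0
q_50 = quasi_inverse(bernoulli_cdf, 0.50, grid)  # F(0) = 0.7 >= 0.5, so the inf is 0
q_95 = quasi_inverse(bernoulli_cdf, 0.95, grid)  # F(x) >= 0.95 first holds at x = 1
```

The equation $F(x) = 0.95$ has no solution here, yet the quasi-inverse still returns a well-defined quantile; this is precisely the situation in the step-function Value at Risk exam questions.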