14 May 19

Question 1 from UoL exam 2016, Zone B, Post 2

For the problem statement and first part of the solution see Question 1 from UoL exam 2016, Zone B, Post 1.

Let R denote the return on P_1+P_2. From Table 1 and the independence assumption we can derive the joint distribution of the returns on the separate portfolios:

Table 2. Joint table of returns on separate portfolios

                    R_1 = 0              R_1 = -100
R_2 = 0             0.96^2               0.04\cdot 0.96
R_2 = -100          0.04\cdot 0.96       0.04^2

From Table 2 we conclude that the return on the combined portfolio looks as follows:

Table 3. Total return

R Prob
0 0.96^2
-50 2\cdot 0.04\cdot 0.96
-100 0.04^2

Table 3 shows that

F_R(x)=0 for x<-100,

F_R(x)=0.04^2=0.0016 for -100\leq x<-50,

F_R(x)=2\cdot 0.04\cdot 0.96+0.0016=0.0784 for -50\leq x<0 and

F_R(x)=0.96^2+0.0784=1 for x\geq 0.
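
For readers who like to check such tables numerically, here is a minimal Python sketch; the probabilities are those of Table 3, and the function name cdf_R is mine.

```python
import numpy as np

# Outcomes and probabilities of the combined return R (Table 3)
p_default = 0.04
outcomes = np.array([0.0, -50.0, -100.0])
probs = np.array([(1 - p_default) ** 2,
                  2 * p_default * (1 - p_default),
                  p_default ** 2])
assert np.isclose(probs.sum(), 1.0)

def cdf_R(x):
    """Distribution function F_R(x) = P(R <= x) of the combined return."""
    return probs[outcomes <= x].sum()

for x in [-150, -100, -75, -50, -10, 0, 10]:
    print(f"F_R({x}) = {cdf_R(x):.4f}")
```

The printed values reproduce the four cases listed above.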

Try to follow the procedure used in Post 1 and you will see that

F_R^{-1}(y)=+\infty for y>1,

F_R^{-1}(y)=1 for 0.0784<y\leq 1,

F_R^{-1}(y)=0.0784 for 0.0016<y\leq 0.0784,

F_R^{-1}(y)=0.0016 for 0<y\leq 0.0016 and

F_R^{-1}(y)=-\infty for y\leq 0.

This implies VaR_R^\alpha=F_R^{-1}(0.05)=0.0784. In statistics, we always have to check whether the numbers we get make sense. The last number doesn't, and in fact it leads to a contradiction: if the defining equation P(R\leq VaR^\alpha)=\alpha held for this value, we would have 1=P(R\leq 0)\leq P(R\leq 0.0784)=0.05, which is impossible. This is because the quasi-inverse notion has nothing to do with probabilities. With a more realistic return, the VaR should be negative for small values of \alpha.

(b) The sub-additivity definition requires amounts opposite in sign to ours. That is, we define \widetilde{VaR^\alpha} from P(X\leq -\widetilde{VaR^\alpha})=\alpha and then say that VaR thus defined is sub-additive if \widetilde{VaR^\alpha}(P_1+P_2)\leq \widetilde{VaR^\alpha}(P_1)+\widetilde{VaR^\alpha}(P_2). We have been using the definition P(X\leq VaR^\alpha)=\alpha. It's easy to see that \widetilde{VaR^\alpha}=-VaR^\alpha. Thus, in our case we have \widetilde{VaR_R^\alpha}=-0.0784, which is not smaller than \widetilde{VaR_{R_1}^\alpha}+\widetilde{VaR_{R_2}^\alpha}=-2. Sub-additivity does not hold in this example. The absence of sub-additivity means that the riskiness of the whole portfolio, as measured by VaR, may exceed the sum of the riskinesses of its parts.

(c) The problem uses the definition of the expected shortfall that yields positive values. I use everywhere the definition that gives negative values: ES^\alpha=E_t[R|R\leq VaR_{t+1}^\alpha]. Since the setup is static, this is the same as ES^\alpha=E[R|R\leq VaR^\alpha]. By definition, E(X|A)=\frac{E(X1_A)}{P(A)}, so ES^\alpha=\frac{E(R1_{\{R\leq VaR^\alpha\}})}{P(R\leq VaR^\alpha)}.

In Post 1 we found that VaR^\alpha=1 for each of R_1,R_2. The condition R_i\leq VaR^\alpha places no restriction on R_i, so from Table 1

E(R_i1_{\{R_i\leq VaR^\alpha \}})=ER_i=0\cdot 0.96-100\cdot 0.04=-4.

As a result, ES_i^\alpha=-4\%.

Since VaR_R^\alpha=F_R^{-1}(0.05)=0.0784, from Table 3

E(R1_{\{R\leq 0.0784\}})=0\cdot 0.96^2-50\cdot 2\cdot 0.04\cdot 0.96-100\cdot 0.04^2=-4.

Therefore ES_R^\alpha=-4/0.0784=-5.8808\%. Converting everything to positive values, we have 5.8808\leq 4+4, so that sub-additivity holds.

The returns in percentages can be easily converted to those in dollars.

7 May 19

Question 1 from UoL exam 2016, Zone B, Post 1

There is a hidden mine in this question, and it is caused by discreteness of the distribution function. We had a lively discussion of this oddity in my class. The answer will be given in two posts.

Question. Two corporations each have a 4% chance of going bankrupt, and the event that one of the two companies will go bankrupt is independent of the event that the other company will go bankrupt. Each company has outstanding bonds. A bond from either of the two companies will return R=0% if the corporation does not go bankrupt, and if it goes bankrupt you lose the face value of the investment, i.e., R=-100%. Suppose an investor buys $1000 worth of bonds of the first corporation, which is then called portfolio P_1, and similarly, an investor buys $1000 worth of bonds of the second corporation, which is then called portfolio P_2.

(a) [40 marks] Calculate the VaR at \alpha=5\% critical level for each portfolio and for the joint portfolio P_1+P_2.

(b) [30 marks] Is VaR sub-additive in this example? Explain why the absence of sub-additivity may be a concern for risk managers.

(c) [30 marks] The expected shortfall ES^\alpha at the \alpha =5\% critical level can be defined as ES^\alpha=-E_t[R|R<-VaR_{t+1}^\alpha]. Calculate the expected shortfall for the portfolios P_1, P_2 and P_1+P_2. Is this risk measure sub-additive?

Solution. a) The return on each portfolio is a binary variable described by Table 1:

Table 1. Return on separate portfolios

R_i Prob
0 0.96
-100 0.04

Therefore the distribution function F_{R_i}(x) of the return is a piece-wise constant function equal to 0 for x<-100, to 0.04 for -100\leq x<0 and to 1 for x\geq 0, see Example 3. For instance, if x\geq 0 we can write

F_{R_i}(x)=P(R_i\leq x)=P(R_i<-100)+P(R_i=-100)+P(-100<R_i<0)+P(R_i=0)+P(0<R_i\leq  x)=0.04+0.96=1.


Diagram 1. Return distribution function

Since this function is not one-to-one, its usual inverse does not exist and we have to use the quasi-inverse, see Answer 15. As with the distribution function, we need to look at different cases.

If y>1, drawing a horizontal line at y we see that the set \{x:F_{R_i}(x)\geq y\} is empty, and the infimum of an empty set is by definition +\infty (can you guess why?).

For any 1\geq y>0.04, we have \{x:F_{R_i}(x)\geq y\}=[1,+\infty ), so F_{R_i}^{-1}(y)=1.

Next, for 0<y\leq 0.04 we have \{x:F_{R_i}(x)\geq y\}=[-100,+\infty ) and F_{R_i}^{-1}(y)=-100.

Finally, if y\leq 0, we get \{x:F_{R_i}(x)\geq  y\}=(-\infty ,+\infty ) and F_{R_i}^{-1}(y)=-\infty .


Diagram 2. Quasi-inverse function

 

The resulting function is bad in two ways. Firstly, it takes infinite values. In applications to VaR this should not concern us, because the infinite values occur only in the ranges y>1 and y\leq 0, while \alpha lies strictly between 0 and 1.

Secondly, for practically interesting values of y\in (0,1), the graph of F_{R_i}^{-1} has flat pieces, which may be problematic. By definition, VaR^\alpha is the solution to the equation P(R_i\leq VaR^\alpha)=\alpha . This means that we should have VaR^\alpha=F_{R_i}^{-1}(\alpha). When we plug this value in F_{R_i}(x), we are supposed to get \alpha . However, here we don't, in general. For example, VaR^{0.02}=F_{R_i}^{-1}(0.02)=-100, while F_{R_{i}}(-100)=0.04\neq 0.02.

This happens because the usual inverse does not exist. When the usual inverse exists, we have two identities F^{-1}(F(x))=x and F(F^{-1}(y))=y. Here both are violated.

Now the definition of VaR gives VaR_{R_i}^\alpha=F_{R_i}^{-1}(0.05)=1 for each portfolio.

To be continued.

24 Apr 19

Solution to Question 3b) from UoL exam 2018, Zone A

I thought that after all the work we've done with my students the answer to this question would be obvious. It was not, so I am sharing it.

Question. Consider a position consisting of a $20,000 investment in asset X and a $20,000 investment in asset Y. Assume that returns on these two assets are i.i.d. normal with mean zero, that the daily volatilities of both assets are 3%, and that the correlation coefficient between their returns is 0.4. What is the 10-day VaR at the \alpha =1\% critical level for the portfolio?

Solution. First we have to work with returns and then translate the result into dollars.

Let R_X, R_Y be the daily returns on the two assets. We are given that ER_X=ER_Y=0, \sigma (R_X)=\sigma(R_Y)=0.03, \rho(R_X,R_Y)=0.4.

Since the total investment is $40,000, the shares of the investment are s_X=s_Y=20,000/40,000=0.5. Therefore the daily return on the portfolio is R=0.5R_X+0.5R_Y, see Exercise 2.

It follows that ER=0.5ER_X+0.5ER_Y=0,

Var(R)=0.5^2\sigma^2(R_X)+2\cdot 0.5\cdot 0.5\rho (R_X,R_Y)\sigma (R_X)\sigma(R_Y)+0.5^2\sigma^2(R_Y)=0.5^2\cdot  0.03^2(1+0.8+1)=0.015^2\cdot 2.8.

These figures are for daily returns. We need to make sure that R is normally distributed. The sufficient condition for this is that the returns R_X, R_Y are jointly normally distributed. It is not mentioned in the problem statement, and we have to assume that it is satisfied.

Let R_i denote the return on day i. Under continuous compounding the daily returns are summed: if we invest M_0 initially, after the first day we have M_1=M_0e^{R_1}, after the second day we have M_2=M_1e^{R_2}=M_0e^{R_1+R_2} and so on. So the 10-day return is r=R_1+...+R_{10}.

Since the daily returns are independent and identically distributed, by additivity of variance we have

Var(r)=\sum Var(R_i)=10\cdot 0.015^2\cdot 2.8, \sigma (r)=0.015\sqrt{28}=0.079, Er=0.

r is normally distributed because it is a sum of independent normal variables. It remains to apply the VaR formula for normal distributions

VaR^\alpha=\mu+\sigma\Phi^{-1}(\alpha).

From the table of the distribution function of the standard normal, \Phi^{-1}(0.01)=-2.33. Thus, VaR^{\alpha}=0.079\cdot (-2.33)=-0.184. In dollar terms this is a loss of 0.079\cdot 2.33\cdot 40,000\approx 7,362. Thus, with probability 1% the loss can be $7,362 or more.
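
The whole calculation fits into a few lines of Python. This is a minimal sketch of the steps above (the variable names are mine, and scipy is used only for the standard normal quantile); up to the rounding of \Phi^{-1}(0.01), it reproduces the numbers in the solution.

```python
import numpy as np
from scipy.stats import norm

sigma_daily = 0.03          # daily volatility of each asset
rho = 0.4                   # correlation between the two returns
w = np.array([0.5, 0.5])    # investment shares
investment = 40_000         # total dollar investment
alpha = 0.01                # critical level
horizon = 10                # days

# Daily variance of the portfolio return: w'Vw with V the covariance matrix
V = sigma_daily ** 2 * np.array([[1.0, rho], [rho, 1.0]])
var_daily = w @ V @ w                       # = 0.015^2 * 2.8
var_10day = horizon * var_daily             # i.i.d. daily returns add up
sigma_10day = np.sqrt(var_10day)            # = 0.015 * sqrt(28) ≈ 0.079

VaR_return = sigma_10day * norm.ppf(alpha)  # the mean return is zero
VaR_dollar = VaR_return * investment

print(f"10-day sigma: {sigma_10day:.4f}")
print(f"10-day 1% VaR (return):  {VaR_return:.4f}")
print(f"10-day 1% VaR (dollars): {VaR_dollar:,.0f}")
```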

13 Apr 19

Checklist for Quantitative Finance FN3142

Students of FN3142 often think that they can get by with a few technical tricks. The questions below are mostly about the intuition that helps to understand and apply those tricks.

Everywhere we assume that ...,Y_{t-1},Y_t,Y_{t+1},... is a time series and ...,I_{t-1},I_t,I_{t+1},... is a sequence of corresponding information sets. It is natural to assume that I_t\subset I_{t+1} for all t. We use the short conditional expectation notation: E_tX=E(X|I_t).

Questions

Question 1. How do you calculate conditional expectation in practice?

Question 2. How do you explain E_t(E_tX)=E_tX?

Question 3. Simplify each of E_tE_{t+1}X and E_{t+1}E_tX and explain intuitively.

Question 4. \varepsilon _t is a shock at time t. Positive and negative shocks are equally likely. What is your best prediction now for tomorrow's shock? What is your best prediction now for the shock that will happen the day after tomorrow?

Question 5. How and why do you predict Y_{t+1} at time t? What is the conditional mean of your prediction?

Question 6. What is the error of such a prediction? What is its conditional mean?

Question 7. Answer the previous two questions replacing Y_{t+1} by Y_{t+p} .

Question 8. What is the mean-plus-deviation-from-mean representation (conditional version)?

Question 9. How is the representation from Q.8 reflected in variance decomposition?

Question 10. What is a canonical form? State and prove all properties of its parts.

Question 11. Define conditional variance for white noise process and establish its link with the unconditional one.

Question 12. How do you define the conditional density in case of two variables, when one of them serves as the condition? Use it to prove the LIE.

Question 13. Write down the joint distribution function for a) independent observations and b) for serially dependent observations.

Question 14. If one variable is a linear function of another, what is the relationship between their densities?

Question 15. What can you say about the relationship between a,b if f(a)=f(b)? Explain geometrically the definition of the quasi-inverse function.

Answers

Answer 1. Conditional expectation is a complex notion. There are several definitions of differing levels of generality and complexity. See one of them here and another in Answer 12.

The point of this exercise is that any definition requires a lot of information, and in practice there is no way to apply any of them to actually calculate a conditional expectation. Then why do they juggle conditional expectations in theory? The efficient market hypothesis comes to the rescue: it is posited that all observed market data incorporate all available information and, in particular, stock prices are already conditioned on I_t.

Answers 2 and 3. This is the best explanation I have.

Answer 4. Since positive and negative shocks are equally likely, the best prediction is E_t\varepsilon _{t+1}=0 (I call this equation a martingale condition). Similarly, E_t\varepsilon _{t+2}=0 but in this case I prefer to see an application of the LIE: E_{t}\varepsilon _{t+2}=E_t(E_{t+1}\varepsilon _{t+2})=E_t0=0.

Answer 5. The best prediction is \hat{Y}_{t+1}=E_tY_{t+1} because it minimizes E_t(Y_{t+1}-f(I_t))^2 among all functions f of current information I_t. Formally, you can use the first order condition

\frac{d}{df(I_t)}E_t(Y_{t+1}-f(I_t))^2=-2E_t(Y_{t+1}-f(I_t))=0

to find that f(I_t)=E_tf(I_t)=E_tY_{t+1} is the minimizing function. By the projector property
E_t\hat{Y}_{t+1}=E_tE_tY_{t+1}=E_tY_{t+1}=\hat{Y}_{t+1}.
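
A quick simulation makes the first order condition tangible. The setup below is my own toy example, not from the guide: the information set is represented by a single observed variable X, and Y_{t+1}=X+\varepsilon, so the best prediction is E_tY_{t+1}=X.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(size=n)              # information available at time t
Y = X + rng.normal(size=n)          # Y_{t+1} = X + shock, so E_t Y_{t+1} = X

candidates = {
    "f(X) = E_t Y_{t+1} = X": X,
    "f(X) = 0.5 X": 0.5 * X,
    "f(X) = 0 (unconditional mean)": np.zeros(n),
}
for name, pred in candidates.items():
    # mean squared prediction error of each candidate predictor
    print(f"{name:32s} MSE = {np.mean((Y - pred) ** 2):.3f}")
```

The conditional mean comes out with the smallest mean squared error, as the first order condition predicts.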

Answer 6. It is natural to define the prediction error by

\hat{\varepsilon}_{t+1}=Y_{t+1}-\hat{Y}_{t+1}=Y_{t+1}-E_tY_{t+1}.

By the projector property E_t\hat{\varepsilon}_{t+1}=E_tY_{t+1}-E_tY_{t+1}=0.

Answer 7. To generalize, just change the subscripts. For the prediction we have to use two subscripts: the notation \hat{Y}_{t,t+p} means that we are trying to predict what happens at a future date t+p based on info set I_t (time t is like today). Then by definition \hat{Y} _{t,t+p}=E_tY_{t+p}, \hat{\varepsilon}_{t,t+p}=Y_{t+p}-E_tY_{t+p}.

Answer 8. Answer 7, obviously, implies Y_{t+p}=\hat{Y}_{t,t+p}+\hat{\varepsilon}_{t,t+p}. The simple case is here.

Answer 9. See the law of total variance and change it to reflect conditioning on I_t.

Answer 10. See canonical form.

Answer 11. Combine conditional variance definition with white noise definition.

Answer 12. The conditional density is defined similarly to the conditional probability. Let X,Y be two random variables. Denote p_X the density of X and p_{X,Y} the joint density. Then the conditional density of Y conditional on X is defined as p_{Y|X}(y|x)=\frac{p_{X,Y}(x,y)}{p_X(x)}. After this we can define the conditional expectation E(Y|X)=\int yp_{Y|X}(y|x)dy. With these definitions one can prove the Law of Iterated Expectations:

E[E(Y|X)]=\int E(Y|x)p_X(x)dx=\int \left( \int yp_{Y|X}(y|x)dy\right)  p_X(x)dx

=\int \int y\frac{p_{X,Y}(x,y)}{p_X(x)}p_X(x)dxdy=\int \int  yp_{X,Y}(x,y)dxdy=EY.

This is an illustration to Answer 1 and a prelim to Answer 13.
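
The same argument can be checked with sums instead of integrals. The joint probabilities below are made up purely for illustration.

```python
import numpy as np

# A made-up joint distribution p_{X,Y}(x, y) on a small grid
x_vals = np.array([0.0, 1.0])
y_vals = np.array([-1.0, 0.0, 2.0])
p_xy = np.array([[0.10, 0.25, 0.15],    # row i: P(X = x_i, Y = y_j)
                 [0.20, 0.05, 0.25]])
assert np.isclose(p_xy.sum(), 1.0)

p_x = p_xy.sum(axis=1)                   # marginal distribution of X
p_y_given_x = p_xy / p_x[:, None]        # conditional distribution of Y given X
E_y_given_x = p_y_given_x @ y_vals       # E(Y | X = x_i) for each x_i

lhs = (E_y_given_x * p_x).sum()          # E[E(Y|X)]
rhs = (p_xy.sum(axis=0) * y_vals).sum()  # EY from the marginal of Y
print(lhs, rhs)                          # the two numbers coincide (LIE)
```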

Answer 13. Understanding this answer is essential for Section 8.6 on maximum likelihood of Patton's guide.

a) In case of independent observations X_1,...,X_n the joint density of the vector X=(X_1,...,X_n) is a product of individual densities:

p_X(x_1,...,x_n)=p_{X_1}(x_1)...p_{X_n}(x_n).

b) In the time series context it is natural to assume that the next observation depends on the previous ones, that is, for each t, X_t depends on X_1,...,X_{t-1} (serially dependent observations). Therefore we should work with conditional densities p_{X_1,...,X_t|X_1,...,X_{t-1}}. From Answer 12 we can guess how to make conditional densities appear:

p_{X_1,...,X_n}(x_1,...,x_n)=\frac{p_{X_1,...,X_n}(x_1,...,x_n)}{  p_{X_1,...,X_{n-1}}(x_1,...,x_{n-1})}\frac{p_{X_1,...,X_{n-1}}(x_1,...,x_{n-1})}{  p_{X_1,...,X_{n-2}}(x_1,...,x_{n-2})}...\frac{p_{X_1,X_2}(x_1,x_2)}{p_{X_1}(x_1)}p_{X_1}(x_1).

The fractions on the right are recognized as conditional densities. The resulting expression is pretty awkward:

p_{X_1,...,X_n}(x_1,...,x_n)=p_{X_1,...,X_n|X_1,...,X_{n-1}}(x_1,...,x_n|x_1,...,x_{n-1})\times p_{X_1,...,X_{n-1}|X_1,...,X_{n-2}}(x_1,...,x_{n-1}|x_1,...,x_{n-2})\times ...\times p_{X_1,X_2|X_1}(x_1,x_2|x_1)p_{X_1}(x_1).
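
A small numerical check of this factorization: for a Gaussian AR(1) series (my example, chosen only because its joint and conditional densities are easy to write down) the product of conditional densities equals the joint density.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

phi, sigma = 0.7, 1.0           # AR(1): X_t = phi*X_{t-1} + eps_t, stationary start
v0 = sigma**2 / (1 - phi**2)    # stationary variance of X_1

# Joint covariance of (X_1, X_2, X_3): Cov(X_i, X_j) = v0 * phi^{|i-j|}
V = v0 * phi ** np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
x = np.array([0.3, -0.5, 1.1])  # an arbitrary point

joint = multivariate_normal(mean=np.zeros(3), cov=V).pdf(x)
chain = (norm.pdf(x[0], 0, np.sqrt(v0))
         * norm.pdf(x[1], phi * x[0], sigma)   # p(x_2 | x_1)
         * norm.pdf(x[2], phi * x[1], sigma))  # p(x_3 | x_1, x_2) = p(x_3 | x_2) for AR(1)
print(joint, chain)             # the two numbers agree
```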

Answer 14. The answer given here helps one understand how to pass from the density of the standard normal to that of the general normal.

Answer 15. This elementary explanation of the function definition can be used in the fifth grade. Note that conditions sufficient for existence of the inverse are not satisfied in a case as simple as the distribution function of the Bernoulli variable (when the graph of the function has flat pieces and is not continuous). Therefore we need a more general definition of an inverse. Those who think that this question is too abstract can check out UoL exams, where examinees are required to find Value at Risk when the distribution function is a step function. To understand the idea, do the following:

a) Draw a graph of a good function f (continuous and increasing).

b) Fix some value y_0 in the range of this function and identify the region \{y:y\ge y_0\}.

c) Find the solution x_0 of the equation f(x)=y_0. By definition, x_0=f^{-1}(y_0). Identify the region \{x:f(x)\ge y_0\}.

d) Note that x_0=\min\{x:f(x)\ge y_0\}. In general, for bad functions the minimum here may not exist. Therefore minimum is replaced by infimum, which gives us the definition of the quasi-inverse:

x_0=\inf\{x:f(x)\ge y_0\}.
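
Here is a minimal Python sketch of this definition for a variable taking finitely many values (the function name quasi_inverse is mine). Applied to the bond return from the VaR posts above, it reproduces F^{-1}(0.02)=-100.

```python
import numpy as np

def quasi_inverse(values, probs, y):
    """F^{-1}(y) = inf{x : F(x) >= y} for a variable taking finitely many values."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    F = np.cumsum(np.asarray(probs, dtype=float)[order])
    F[-1] = 1.0                          # guard against rounding in the cumulative sum
    if y <= 0:
        return -np.inf                   # every x satisfies F(x) >= y
    if y > 1:
        return np.inf                    # the set {x : F(x) >= y} is empty
    return v[np.searchsorted(F, y)]      # smallest jump point where F reaches y

# The return on a single bond portfolio from Table 1
print(quasi_inverse([0, -100], [0.96, 0.04], 0.02))   # -100, as in the VaR posts above
print(quasi_inverse([0, -100], [0.96, 0.04], 1.5))    # inf
print(quasi_inverse([0, -100], [0.96, 0.04], -0.1))   # -inf
```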

18 Oct 18

Law of iterated expectations: geometric aspect

There will be a separate post on projectors. In the meantime, we'll have a look at simple examples that explain a lot about conditional expectations.

Examples of projectors

The name "projector" is almost self-explanatory. Imagine a point and a plane in the three-dimensional space. Draw a perpendicular from the point to the plane. The intersection of the perpendicular with the plane is the point's projection onto that plane. Note that if the point already belongs to the plane, its projection equals the point itself. Besides, instead of projecting onto a plane we can project onto a straight line.

The above description translates into the following equations. For any x\in R^3 define

P_2x=(x_1,x_2,0) and P_1x=(x_1,0,0).

P_2 projects R^3 onto the plane L_2=\{(x_1,x_2,0):x_1,x_2\in R\} (which is two-dimensional) and P_1 projects R^3 onto the straight line L_1=\{(x_1,0,0):x_1\in R\} (which is one-dimensional).

Property 1. Double application of a projector amounts to single application.

Proof. We do this just for one of the projectors. Applying the definition of P_2 three times, we get

(1) P_2[P_2x]=P_2(x_1,x_2,0)=(x_1,x_2,0)=P_2x.

Property 2. A successive application of two projectors yields the projection onto a subspace of a smaller dimension.

Proof. If we apply first P_2 and then P_1, the result is

(2) P_1[P_2x]=P_1(x_1,x_2,0)=(x_1,0,0)=P_1x.

If we change the order of projectors, we have

(3) P_2[P_1x]=P_2(x_1,0,0)=(x_1,0,0)=P_1x.

Exercise 1. Show that both projectors are linear.

Exercise 2. Like any other linear operator in a Euclidean space, these projectors are given by some matrices. What are they?
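
Here is a small numeric sketch that also answers Exercise 2 in the standard basis: the projectors are diagonal 0-1 matrices, and Properties 1 and 2 can be verified directly.

```python
import numpy as np

P2 = np.diag([1.0, 1.0, 0.0])   # projector onto the plane L_2
P1 = np.diag([1.0, 0.0, 0.0])   # projector onto the line L_1

x = np.array([3.0, -2.0, 5.0])
print(P2 @ x, P1 @ x)                     # projections of x, as in the definitions above

assert np.allclose(P2 @ P2, P2)           # Property 1: double application
assert np.allclose(P1 @ P2, P1)           # Property 2, in either order
assert np.allclose(P2 @ P1, P1)
```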

The simple truth about conditional expectation

In the time series setup, we have a sequence of information sets ...\subset I_t\subset I_{t+1}\subset... (it's natural to assume that with time the amount of available information increases). Denote

E_tX=E(X|I_t)

the expectation of X conditional on I_t. For each t,

E_t is a projector onto the space of random functions that depend only on the information set I_t.

Property 1. Double application of conditional expectation gives the same result as single application:

(4) E_t(E_tX)=E_tX

(E_tX is already a function of I_t, so conditioning it on I_t doesn't change it).

Property 2. A successive conditioning on two different information sets is the same as conditioning on the smaller one:

(5) E_tE_{t+1}X=E_tX,

(6) E_{t+1}E_tX=E_tX.

Property 3. Conditional expectation is a linear operator: for any variables X,Y and numbers a,b

E_t(aX+bY)=aE_tX+bE_tY.

It's easy to see that (4)-(6) are similar to (1)-(3), respectively, but I prefer to use different names for (4)-(6). I call (4) a projector property. (5) is known as the Law of Iterated Expectations, see my post on the informational aspect for more intuition. (6) holds simply because at time t+1 the expectation E_tX is known and behaves like a constant.

Summary. (4)-(6) are easy to remember as one property: the smaller information set wins, E_sE_tX=E_{\min\{s,t\}}X.
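
A toy illustration of (4)-(6), my own and not from the post: think of four equally likely states, where I_t tells us only which half of the states occurred and I_{t+1} identifies the state exactly. Conditioning replaces X by its average over the states that the information set cannot distinguish.

```python
import numpy as np

X = np.array([1.0, 5.0, 2.0, 10.0])      # values of X in four equally likely states

def condition(values, blocks):
    """Average values within each block of states that the information set cannot tell apart."""
    out = np.empty_like(values)
    for b in blocks:
        out[list(b)] = values[list(b)].mean()
    return out

I_t  = [(0, 1), (2, 3)]                  # coarser information: only the half is known
I_t1 = [(0,), (1,), (2,), (3,)]          # finer information: the exact state is known

E_t  = lambda v: condition(v, I_t)
E_t1 = lambda v: condition(v, I_t1)

print(E_t(E_t(X)),  E_t(X))              # (4): double application changes nothing
print(E_t(E_t1(X)), E_t(X))              # (5): the smaller information set wins
print(E_t1(E_t(X)), E_t(X))              # (6)
```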

13 Oct 18

Law of iterated expectations: informational aspect

The notion of Brownian motion will help us. Suppose we observe a particle that moves back and forth randomly along a straight line. The particle starts at zero at time zero. The movement can be visualized by plotting time on the horizontal axis and the position of the particle on the vertical axis. W(t) denotes the random position of the particle at time t.


Figure 1. Unconditional expectation

In Figure 1, various paths starting at the origin are shown in different colors. The intersections of the paths with vertical lines at times 0.5, 1 and 1.5 show the positions of the particle at these times. The deviations of those positions from y=0 to the upside and downside are assumed to be equally likely (more precisely, they are normal variables with mean zero and variance t).

Unconditional expectation

“In the beginning there was nothing, which exploded.” ― Terry Pratchett, Lords and Ladies

If we are at the origin (like the Big Bang), nothing has happened yet and EW(t)=0 is the best prediction for any moment t>0 we can make (shown by the blue horizontal line in Figure 1). The usual, unconditional expectation EX corresponds to the empty information set.

Conditional expectation


Figure 2. Conditional expectation

In Figure 2, suppose we are at t=2. The dark blue path between t=0 and t=2 has been realized. We know that the particle has reached the point W(2) at that time. With this knowledge, we see that the paths starting at this point will have the average

(1) E(W(t)|W(2))=W(2), t>2.

This is because the particle will continue moving randomly, with the up and down moves being equally likely. Prediction (1) is shown by the horizontal light blue line between t=2 and t=4. In general, this prediction is better than EW(t)=0.

Note that for different realized paths, W(2) takes different values. Therefore E(W(t)|W(2)), for t>2, is a random variable: it is a function of the value we condition on.

Law of iterated expectations


Figure 3. Law of iterated expectations

Suppose you are at time t=2 (see Figure 3). You send many agents to the future t=3 to fetch the information about what will happen. They bring you the data on the means E(W(t)|W(3)) they see (shown by horizontal lines between t=3 and t=4). Since there are many possible future realizations, you have to average the future means. For this, you will use the distributional belief you have at time t=2. The result is E[E(W(t)|W(3))|W(2)]. Since the up and down moves are equally likely, your distribution at time t=2 is symmetric around W(2). Therefore the above average will be equal to E(W(t)|W(2)). This is the Law of Iterated Expectations, also called the tower property:

(2) E[E(W(t)|W(3))|W(2)]=E(W(t)|W(2)).

The knowledge of all of the future predictions E(W(t)|W(3)), upon averaging, does not improve or change our current prediction E(W(t)|W(2)).
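
A simulation sketch of (2): fix a value of W(2) (the number below is made up), simulate many continuations of the path, and compare the average of the future conditional means with the direct prediction made at t=2.

```python
import numpy as np

rng = np.random.default_rng(1)
W2 = 0.7                          # realized value of the path at t = 2 (made up)
n = 200_000

W3 = W2 + rng.normal(0, 1, n)     # W(3) = W(2) + increment with mean 0 and variance 1
W4 = W3 + rng.normal(0, 1, n)     # W(4) = W(3) + another such increment

# E(W(4) | W(3)) = W(3); averaging these future means over the t = 2 distribution:
print(W3.mean())                  # ≈ W2: estimates E[E(W(4)|W(3)) | W(2)]
print(W4.mean())                  # ≈ W2: direct estimate of E(W(4) | W(2))
```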

For a full mathematical treatment of conditional expectation see Lecture 10 by Gordan Zitkovic.

23 Sep 18

Portfolio analysis: return on portfolio

Exercise 1. Suppose a portfolio contains n_1 shares of stock 1 whose price is S_1 and n_2 shares of stock 2 whose price is S_2. Stock prices fluctuate and are random variables. Numbers of shares are assumed fixed and are deterministic. What is the expected value of the portfolio?

Solution. The portfolio value is its market price V=n_1S_1+n_2S_2. Since this is a linear combination, the expected value is EV=n_1ES_1+n_2ES_2.

In fact, the portfolio analysis is a little bit different than suggested by Exercise 1. To explain the difference, we start with fixing two points of view.

View 1. I hold a portfolio of stocks. I may have inherited it, and it does not matter how much it cost at the moment it was formed. If I want to sell it, I am interested in knowing its market value. In this situation the numbers of shares in my portfolio, which are constant, and the market prices of stocks, which are random, determine the market value of the portfolio, defined in Exercise 1. The value of the portfolio is a linear combination of stock prices.

View 2. I have a certain amount of money M^0 to invest. Being a gambler, I am not interested in holding a portfolio forever. I am thinking about buying a portfolio of stocks now and selling it, say, in a year at price M^1. In this case I am interested in the rate of return defined by r=\frac{M^1-M^0}{M^0}. M^0 is considered deterministic (current prices are certain) and M^1 is random (future prices are unpredictable). Thus the rate of return is random.

We pursue the second view (prevalent in finance). As often happens in economics and finance, the result depends on how one understands things. Suppose the initial amount M^0 is invested in n assets. Denoting by M_i^0 the amount invested in asset i, we have M^0=\sum\limits_{i = 1}^nM_i^0. Denoting by s_i=M_i^0/M^0 the share (percentage) of M_i^0 in the total investment M^0, we have

(1) M_i^0=s_iM^0,\ M^0=\sum\limits_{i = 1}^ns_iM^0.

The initial shares s_i are deterministic.

Let M_i^1 be what becomes of M_i^0 in one year and let M^1=\sum\limits_{i = 1}^nM_i^1 be the total value of the investment at the end of the year. Since different assets grow at different rates, generally it is not true that M_i^1 =s_iM^1. Denote r_i=\frac{M_i^1-M_i^0}{M_i^0} the rate of return on asset i. Then

(2) M_i^1=(1+r_i)M_i^0, M^1=\sum\limits_{i = 1}^n(1+r_i)M_i^0.

Exercise 2. Show that the rate of return on the portfolio is a linear combination of the rates of return on the separate assets, with coefficients equal to the initial investment shares.

Solution. Using Equations (1) and (2) we get

(3) r=\frac{M^1-M^0}{M^0}=\frac{\sum(1+r_i)M_i^0-\sum M_i^0}{M^0}=\frac{\sum r_iM_i^0}{M^0}=\frac{\sum r_is_iM^0}{M^0}=\sum s_ir_i .

Once you know this equation you can find the mean and variance of the rate of return on the portfolio in terms of investment shares and rates of return on assets.
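
A quick numerical check of (3) with made-up dollar amounts and returns: the portfolio return computed directly from values coincides with the share-weighted sum of asset returns.

```python
import numpy as np

M0_i = np.array([6_000.0, 4_000.0])      # initial dollar amounts in two assets (made up)
r_i = np.array([0.10, -0.05])            # one-year rates of return on the assets

M0 = M0_i.sum()
s = M0_i / M0                            # initial investment shares
M1 = ((1 + r_i) * M0_i).sum()            # end-of-year portfolio value, equation (2)

r_direct = (M1 - M0) / M0                # definition of the portfolio return
r_formula = (s * r_i).sum()              # equation (3)
print(r_direct, r_formula)               # the two coincide
```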

22 Sep 18

Applications of the diagonal representation IV

Principal component analysis is a general method based on diagonalization of the variance matrix. We consider it in a financial context. The variance matrix measures riskiness of the portfolio.  We want to see which stocks contribute most to the portfolio risk. The surprise is that the answer is given not in terms of the vector of returns but in terms of its linear transformation.

8. Principal component analysis (PCA)

Let R be a column-vector of returns on p stocks with the variance matrix V(R)=E(R-ER)(R-ER)^{T}. The idea is to find an orthogonal matrix W such that W^{-1}V(R)W=D is a diagonal matrix D=diag[\lambda_1,...,\lambda_p] with \lambda_1\geq...\geq\lambda_p.

With such a matrix, instead of R we can consider its transformation Y=W^{-1}R for which

V(Y)=W^{-1}V(R)(W^{-1})^T=W^{-1}V(R)W=D.

We know that V(Y) has variances V(Y_1),...,V(Y_p) on the main diagonal. It follows that V(Y_i)=\lambda_i for all i. Variance is a measure of riskiness. Thus, the transformed variables Y_1,...,Y_p are put in the order of declining risk. What follows is the realization of this idea using sample data.

In a sampling context, all population means should be replaced by their sample counterparts. Let R^{(t)} be a p\times 1 vector of observations on R at time t. These observations are put side by side into a matrix \mathbb{R}=(R^{(1)},...,R^{(n)}) where n is the number of moments in time. The population mean ER is estimated by the sample mean

\bar{\mathbb{R}}=\frac{1}{n}\sum_{t=1}^nR^{(t)}.

The variance matrix V(R) is estimated by

\hat{V}=\frac{1}{n-1}(\mathbb{R}-\bar{\mathbb{R}}l)(\mathbb{R}-\bar{\mathbb{R}}l)^T

where l is a 1\times n vector of ones. It is this matrix that is diagonalized: W^{-1}\hat{V}W=D.

In general, the eigenvalues in D are not ordered. Ordering them and at the same time changing places of the rows of W^{-1} correspondingly we get a new orthogonal matrix W_1 (this requires a small proof) such that the eigenvalues in W_1^{-1}\hat{V}W_1=D_1 will be ordered. There is a lot more to say about the method and its applications.
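
Here is a minimal sketch of the sample PCA just described, on simulated returns (the dimensions and the covariance matrix are made up). numpy's eigh returns eigenvalues in ascending order, so the reordering step is explicit.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 4, 500                             # 4 stocks, 500 time periods (made up)
R = rng.multivariate_normal(mean=np.zeros(p),
                            cov=0.01 * (np.eye(p) + 0.5),
                            size=n).T     # p x n matrix of return observations

R_bar = R.mean(axis=1, keepdims=True)
V_hat = (R - R_bar) @ (R - R_bar).T / (n - 1)   # sample variance matrix

eigval, W = np.linalg.eigh(V_hat)               # W orthogonal, eigenvalues ascending
order = np.argsort(eigval)[::-1]                # reorder: largest risk first
eigval, W = eigval[order], W[:, order]

Y = W.T @ (R - R_bar)                           # transformed (demeaned) returns Y = W^{-1}R
print(eigval)                                   # lambda_1 >= ... >= lambda_p
print(np.var(Y, axis=1, ddof=1))                # sample variances of Y_i match the lambdas
```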

16 May 18

Efficient market hypothesis is subject to interpretation

The formulation on Investopedia seems to me the best:

The efficient market hypothesis (EMH) is an investment theory that states it is impossible to "beat the market" because stock market efficiency causes existing share prices to always incorporate and reflect all relevant information. According to the EMH, stocks always trade at their fair value on stock exchanges, making it impossible for investors to either purchase undervalued stocks or sell stocks for inflated prices. As such, it should be impossible to outperform the overall market through expert stock selection or market timing, and the only way an investor can possibly obtain higher returns is by purchasing riskier investments.

This is not Math, and the EMH interpretation is subjective. My purpose is not to discuss the advantages and drawbacks of various versions of the EMH but indicate some errors students make on exams.

Best(?) way to answer questions related to EMH

Since the answer involves a lot of verbal discussion, it is best to use the appropriate key words.

Start with "The EMH states that it is impossible to make economic profit".

Then explain why: The stock market is efficient in the sense that stocks trade at their fair value, so that undervalued or overvalued stocks don't exist.

Then specify that "to obtain economic profit, we subtract from revenues not only direct costs, such as transaction fees, but also opportunity (hidden) costs". What on the surface seems to be a profitable activity may in fact be balancing at break-even.

Next, address Malkiel's specification that the EMH depends on the information set \Omega_t available at time t.

Weak form of EMH. The information set \Omega_t^1 contains only historical values of asset prices, dividends (and possibly volume) up until time t. This is basically what an investor sees on a stock price chart. Many students say "historical information" but fail to mention that it is about prices of financial assets. The birthdays of celebrities are also historical information but they are not in this info set.

Semi-strong form of EMH. The info set \Omega_t^2 is all publicly available information. Some students don't realize that it includes \Omega_t^1. The risk-free rate is in \Omega_t^2 but not in \Omega_t^1 because 1) it is publicly known and 2) it is not traded (it is fixed by the central bank for extended periods of time).

Strong form of EMH. The info set \Omega_t^3 includes all publicly available info plus private company information. Firstly, this info set includes the previous two: \Omega_t^1\subset\Omega_t^2\subset\Omega_t^3. Secondly, whether a certain piece of information belongs to \Omega_t^2 or \Omega_t^3 depends on time. For example, the number of shares of a stock that Warren Buffett purchased today is in \Omega_t^3, but over time it becomes a part of \Omega_t^2 because large holdings must be reported within 45 days of the end of a calendar quarter. If there are nuances like this, you have to explain them.

Implications for time series analysis

Conditional expectation is a relatively complex mathematical construct. The simplest definition is accessible to basic statistics students. The mid-level definition in case of conditioning on a set of positive probability already raises questions about practical calculation. The most general definition is based on Radon-Nikodym  derivatives. Moreover, nobody knows exactly any of those \Omega_t. So how do you apply time series models which depend so heavily on conditioning? The answer is simple: since by the EMH the stock price "reflects all relevant information", that price is already conditioned on that information, and you don't need to worry about theoretical complexities of conditioning in applications.

10 May 18

The Newey-West estimator: uncorrelated and correlated data

I hate long posts, but here we have to go through all the ideas and calculations to understand what is going on. In my rendition, one page of formulas in A. Patton's guide to FN3142 Quantitative Finance becomes three posts.

Preliminaries and autocovariance function

Let X_1,...,X_n be random variables. We need to recall that the variance of the vector X=(X_1,...,X_n)^T is

(1) V(X)=\left(\begin{array}{cccc}V(X_1)&Cov(X_1,X_2)&...&Cov(X_1,X_n)\\Cov(X_2,X_1)&V(X_2)&...&Cov(X_2,X_n)\\...&...&...&...\\Cov(X_n,X_1)&Cov(X_n,X_2)&...&V(X_n)\end{array}\right).

With the help of this matrix we derived two expressions for variance of a linear combination:

(2) V\left(\sum_{i=1}^na_iX_i\right)=\sum_{i=1}^na_i^2V(X_i)

for uncorrelated variables and

(3) V\left(\sum_{i=1}^na_iX_i\right)=\sum_{i=1}^na_i^2V(X_i)+2\sum_{i=1}^{n-1}\sum_{j=i+1}^na_ia_jCov(X_i,X_j)

when there is autocorrelation.

In a time series context X_1,...,X_n are observations along time. 1,...,n stand for moments in time and the sequence X_1,...,X_n is called a time series. We need to recall the definition of a stationary process. Of that definition, we will use only the part about covariances: Cov(X_i,X_j) depends only on the distance |i-j| between the time moments i,j. For example, in the top right corner of (1) we have Cov(X_1,X_n), which depends only on n-1.

Preamble. Let X_1,...,X_n be a stationary time series. Firstly, Cov(X_i,X_{i+k}) depends only on k. Secondly, for all integer k=0,\pm 1,\pm 2,..., denoting j=i+k we have

(4) Cov(X_i,X_{i+k})=Cov(X_{j-k},X_j)=Cov(X_j,X_{j-k}).

Definition. The autocovariance function is defined by

(5) \gamma_k=Cov(X_i,X_{k+i}) for all integer k=0,\pm 1,\pm 2,...

In particular,

(6) \gamma_0=Cov(X_i,X_i)=V(X_i) for all i.

The preamble shows that definition (5) is correct (the right side in (5) depends only on k and not on i). Because of (4) we have symmetry \gamma_{-k}=\gamma _k, so negative k can be excluded from consideration.

With (5) and (6) for a stationary series (1) becomes

(7) V(X)=\left(\begin{array}{cccc}\gamma_0&\gamma_1&...&\gamma_{n-1}\\ \gamma_1&\gamma_0&...&\gamma_{n-2}\\...&...&...&...\\  \gamma_{n-1}&\gamma_{n-2}&...&\gamma_0\end{array}\right).

Estimating variance of a sample mean

Uncorrelated observations. Suppose X_1,...,X_n are uncorrelated observations from the same population with variance \sigma^2. From (2)
we get

(8) V\left(\frac{1}{n}\sum_{i=1}^nX_i\right) =\frac{1}{n^2}\sum_{i=1}^nV(X_i)=\frac{n\sigma^2}{n^2}=\frac{\sigma^2}{n}.

This is a theoretical relationship. To actually obtain an estimator of the variance of the sample mean, we need to replace \sigma^2 by some estimator. It is known that

(9) s^2=\frac{1}{n}\sum_{i=1}^n(X_i-\bar{X})^2

consistently estimates \sigma^2. Plugging it in (8) we see that variance of the sample mean is consistently estimated by

\hat{V}=\frac{1}{n}s^2=\frac{1}{n^2}\sum_{i=1}^n(X_i-\bar{X})^2.

This is the estimator derived on p.151 of Patton's guide.

Correlated observations. In this case we use (3):

V\left( \frac{1}{n}\sum_{i=1}^nX_i\right) =\frac{1}{n^{2}}\left[\sum_{i=1}^nV(X_i)+2\sum_{i=1}^{n-1}\sum_{j=i+1}^nCov(X_i,X_j)\right].

Here visualization comes in handy. The sums in the square brackets include all terms on the main diagonal of (7) and above it. That is, we have n copies of \gamma_0, n-1 copies of \gamma_{1},..., 2 copies of \gamma _{n-2} and 1 copy of \gamma _{n-1}. The sum in the brackets is

\sum_{i=1}^nV(X_i)+2\sum_{i=1}^{n-1}\sum_{j=i+1}^nCov(X_i,X_j)=n\gamma_0+2[(n-1)\gamma_1+...+2\gamma_{n-2}+\gamma _{n-1}]=n\gamma  _0+2\sum_{k=1}^{n-1}(n-k)\gamma_k.

Thus we obtain the first equation on p.152 of Patton's guide (it's up to you to match the notation):

(10) V(\bar{X})=\frac{1}{n}\gamma_0+\frac{2}{n}\sum_{k=1}^{n-1}(1-\frac{k}{n})\gamma_k.

As above, this is just a theoretical relationship. \gamma_0=V(X_i)=\sigma^2 is estimated by (9). Ideally, the estimator of \gamma_k=Cov(X_i,X_{k+i}) is obtained by replacing all population means by sample means:

(11) \hat{\gamma}_k=\frac{1}{n}\sum_{i=1}^n(X_i-\bar{X})(X_{k+i}-\bar{X}).

There are two problems with this estimator, though. The first problem is that when i runs from 1 to n, k+i runs from k+1 to k+n. To exclude out-of-sample values, the summation in (11) is reduced:

(12) \hat{\gamma}_k=\frac{1}{n-k}\sum_{i=1}^{n-k}(X_i-\bar{X})(X_{k+i}-\bar{X}).

The second problem is that the sum in (12) becomes too small when k is close to n. For example, for k=n-1 (12) contains just one term (there is no averaging). Therefore the upper limit of summation n-1 in (10) is replaced by some function M(n) that tends to infinity slower than n. The result is the estimator

\hat{V}=\frac{1}{n}\hat{\gamma}_0+\frac{2}{n}\sum_{k=1}^{M(n)}(1-\frac{k}{M(n)})\hat{\gamma}_k

where \hat{\gamma}_0 is given by (9) and \hat{\gamma}_k is given by (12). This is almost the Newey-West estimator from p.152. The only difference is that instead of \frac{k}{M(n)} they use \frac{k}{M(n)+1}, and I have no idea why. One explanation is that for small n, M(n) can be zero, so they just wanted to avoid division by zero.
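
Putting the pieces together, here is a sketch of the estimator in Python (my own code following the formulas above; the truncation rule M(n)=\lfloor 4(n/100)^{2/9}\rfloor is just one common rule of thumb, not something from the guide). On a simulated AR(1) series with positive autocorrelation, the naive formula for uncorrelated data understates the variance of the sample mean, while this estimator corrects for the autocorrelation.

```python
import numpy as np

def newey_west_variance(x, M):
    """Estimate Var(sample mean of x) allowing autocorrelation up to lag M.

    Uses gamma_hat_k from (12) and the weights (1 - k/(M+1)) of the Newey-West version.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    gamma0 = (xc @ xc) / n                       # equation (9)
    v = gamma0 / n
    for k in range(1, M + 1):
        gamma_k = (xc[:-k] @ xc[k:]) / (n - k)   # equation (12)
        v += 2.0 / n * (1 - k / (M + 1)) * gamma_k
    return v

# Sanity check on a simulated AR(1) series, where autocorrelation matters
rng = np.random.default_rng(3)
n, phi = 2_000, 0.6
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

M = int(4 * (n / 100) ** (2 / 9))      # a common rule-of-thumb truncation lag
naive = x.var() / n                    # formula for uncorrelated data
nw = newey_west_variance(x, M)
print(naive, nw)                       # the naive estimate is noticeably smaller here
```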