20 Apr 21

Put debit spread


This post parallels the one about the call debit spread. A combination of several options in one trade is called a strategy. Here we discuss a strategy called a put debit spread. The word "debit" in this name means that a trader has to pay for it. The rule of thumb is that if it is a debit (you pay for a strategy), then it is less risky than if it is a credit (you are paid). Let p(K) denote the price of the put with the strike K, suppressing all other variables that influence the put price.

Assumption. The market assigns higher value to events of higher probability. This is true if investors are rational and the market correctly aggregates the views of different investors.

We need the following property: if K_{1}<K_{2} are two strike prices, then for the corresponding put prices (with the same expiration and underlying asset) one has p(K_{1})<p(K_{2}).

Proof.  A put price is higher if the probability of it being in the money at expiration is higher. Let S(T) be the stock price at expiration T. Since T is a moment in the future, S(T) is a random variable. For a given strike K, the put is said to be in the money at expiration if S(T)<K. If K_{1}<K_{2} and S(T)<K_{1}, then S(T)<K_{2}. It follows that the set \{ S(T)<K_{1}\} is a subset of the set \{S(T)<K_{2}\} . Hence the probability of the event \{S(T)<K_{2}\} is higher than that of the event \{S(T)<K_{1}\} and p(K_{2})>p(K_{1}).

Put debit spread strategy. Select two strikes K_{1}<K_{2}, buy p(K_{2}) (take a long position) and sell p(K_{1}) (take a short position). You pay p=p(K_{2})-p(K_{1})>0 for this.

Our purpose is to derive the payoff for this strategy. We remember that if S(T)\ge K, then the put p(K) expires worthless.

Case S(T)\ge K_{2}. In this case both options expire worthless and the payoff equals minus the initial outlay: payoff =-p.

Case K_{1}\leq S(T)<K_{2}. Exercising the put p(K_{2}), you sell the stock at K_{2} instead of the market price S(T), gaining K_{2}-S(T). The second option expires worthless. The payoff is: payoff =K_{2}-S(T)-p.

Case S(T)<K_{1}. Both options are exercised. The gain from p(K_{2}) is, as above, K_{2}-S(T). The holder of the long put p(K_{1}) sells you stock at price K_{1}. Since your position is short, you have nothing to do but comply. You buy at K_{1} stock that is worth S(T) in the market, so your gain from the short put is S(T)-K_{1}<0. The payoff is: payoff =\left(K_{2}-S(T)\right) +\left( S(T)-K_{1}\right) -p=K_{2}-K_{1}-p.

Summarizing, we get:

payoff =\left\{\begin{array}{ll}  -p, & K_2\le S(T) \\  K_{2}-S(T)-p, & K_{1}\leq S(T)<K_{2}\\  K_{2}-K_{1}-p, & S(T)<K_{1}  \end{array}\right.

Normally, the strikes are chosen so that K_{2}-K_{1}>p. From the payoff expression we see then that the maximum profit is K_{2}-K_{1}-p>0, the maximum loss is -p and the breakeven stock price is S(T)=K_{2}-p. This is illustrated in Figure 1, where the stock price at expiration is on the horizontal axis.


Figure 1. Payoff from put debit spread. Source: https://www.optionsbro.com/
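The three cases above can be checked numerically. A minimal sketch, with hypothetical strikes (K_1=90, K_2=100) and premium (p=3) chosen for illustration:

```python
def put_debit_spread_payoff(s_t, k1, k2, premium):
    """Payoff at expiration of a put debit spread:
    long the K2 put, short the K1 put (K1 < K2), net premium paid."""
    long_put = max(k2 - s_t, 0)    # exercised when S(T) < K2
    short_put = -max(k1 - s_t, 0)  # assigned when S(T) < K1
    return long_put + short_put - premium

assert put_debit_spread_payoff(110, 90, 100, 3) == -3  # S(T) >= K2
assert put_debit_spread_payoff(95, 90, 100, 3) == 2    # K1 <= S(T) < K2: K2-S(T)-p
assert put_debit_spread_payoff(80, 90, 100, 3) == 7    # S(T) < K1: K2-K1-p
assert put_debit_spread_payoff(97, 90, 100, 3) == 0    # breakeven at K2-p
```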

Conclusion. For the strategy to be profitable, the price at expiration should satisfy S(T)< K_{2}-p. Buying a put debit spread is appropriate when the price is expected to fall below that level.

In comparison with the long put position p(K_{2}), taking at the same time the short put position -p(K_{1}) allows one to reduce the initial outlay. This is especially important when the stock volatility is high, resulting in high put prices. In the difference p(K_{2})-p(K_{1}) that volatility component partially cancels out.

Remark. There is an important issue of choosing the strikes. Let S denote the stock price now. The payoff expression allows us to rank the choices in order of increasing risk: 1) S<K_1<K_2 (both options are in the money, least risk), 2) K_1<S<K_2 and 3) K_1<K_2<S (both options are out of the money, highest risk). Also remember that a put debit spread executed as a single trade is usually cheaper, in commissions and margin requirements, than buying p(K_{2}) and selling p(K_{1}) in two separate transactions.

Exercise. Analyze a put credit spread, in which you sell p(K_{2}) and buy p(K_{1}).

21 Mar 21

Call debit spread


A combination of several options in one trade is called a strategy. Here we discuss a strategy called a call debit spread. The word "debit" in this name means that a trader has to pay for it. The rule of thumb is that if it is a debit (you pay for a strategy), then it is less risky than if it is a credit (you are paid). Let c(K) denote the call price with the strike K, suppressing all other variables that influence the call price.

Assumption. The market assigns higher value to events of higher probability. This is true if investors are rational and the market correctly aggregates the views of different investors.

We need the following property: if K_{1}<K_{2} are two strike prices, then for the corresponding call prices (with the same expiration and underlying asset) one has c(K_{1})>c(K_{2}).

Proof.  A call price is higher if the probability of it being in the money at expiration is higher. Let S(T) be the stock price at expiration T. Since T is a moment in the future, S(T) is a random variable. For a given strike K, the call is said to be in the money at expiration if S(T)>K. If K_{1}<K_{2} and S(T)>K_{2}, then S(T)>K_{1}. It follows that the set \{ S(T)>K_{2}\} is a subset of the set \{S(T)>K_{1}\} . Hence the probability of the event \{S(T)>K_{2}\} is lower than that of the event \{S(T)>K_{1}\} and c(K_{1})>c(K_{2}).

Call debit spread strategy. Select two strikes K_{1}<K_{2}, buy c(K_{1}) (take a long position) and sell c(K_{2}) (take a short position). You pay p=c(K_{1})-c(K_{2})>0 for this.

Our purpose is to derive the payoff for this strategy. We remember that if S(T)\leq K, then the call c(K) expires worthless.

Case S(T)\leq K_{1}. In this case both options expire worthless and the payoff equals minus the initial outlay: payoff =-p.

Case K_{1}<S(T)\leq K_{2}. Exercising the call c(K_{1}) and immediately selling the stock at the market price, you gain S(T)-K_{1}. The second option expires worthless. The payoff is: payoff =S(T)-K_{1}-p. (In fact, upon exercise you receive the stock, and selling it is up to you.)

Case K_{2}<S(T). Both options are exercised. The gain from c(K_{1}) is, as above, S(T)-K_{1}. The holder of the long call c(K_{2}) buys from you at price K_{2}. Since your position is short, you have nothing to do but comply. You buy at S(T) and sell at K_{2}, so your gain from -c(K_{2}) is K_{2}-S(T)<0. The payoff is: payoff =\left(S(T)-K_{1}\right) +\left( K_{2}-S(T)\right) -p=K_{2}-K_{1}-p.

Summarizing, we get:

payoff =\left\{\begin{array}{ll}  -p, & S(T)\leq K_{1} \\  S(T)-K_{1}-p, & K_{1}<S(T)\leq K_{2} \\  K_{2}-K_{1}-p, & K_{2}<S(T)  \end{array}\right.

Normally, the strikes are chosen so that K_{2}-K_{1}>p. From the payoff expression we see then that the maximum profit is K_{2}-K_{1}-p>0, the maximum loss is -p and the breakeven stock price is S(T)=K_{1}+p. This is illustrated in Figure 1, where the stock price at expiration is on the horizontal axis.


Figure 1. Payoff for call debit strategy. Source: https://www.optionsbro.com/
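As for the put spread, the three cases can be checked numerically. A minimal sketch, with hypothetical strikes (K_1=90, K_2=100) and premium (p=3):

```python
def call_debit_spread_payoff(s_t, k1, k2, premium):
    """Payoff at expiration of a call debit spread:
    long the K1 call, short the K2 call (K1 < K2), net premium paid."""
    long_call = max(s_t - k1, 0)    # exercised when S(T) > K1
    short_call = -max(s_t - k2, 0)  # assigned when S(T) > K2
    return long_call + short_call - premium

assert call_debit_spread_payoff(85, 90, 100, 3) == -3   # S(T) <= K1
assert call_debit_spread_payoff(95, 90, 100, 3) == 2    # K1 < S(T) <= K2: S(T)-K1-p
assert call_debit_spread_payoff(110, 90, 100, 3) == 7   # K2 < S(T): K2-K1-p
assert call_debit_spread_payoff(93, 90, 100, 3) == 0    # breakeven at K1+p
```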

Conclusion. For the strategy to be profitable, the price at expiration should satisfy S(T)> K_{1}+p. Buying a call debit spread is appropriate when the price is expected to rise above that level.

In comparison with the long call position c(K_{1}), taking at the same time the short call position -c(K_{2}) allows one to reduce the initial outlay. This is especially important when the stock volatility is high, resulting in a high call price. In the difference c(K_{1})-c(K_{2}) that volatility component partially cancels out.

Remark. There is an important issue of choosing the strikes. Let S denote the stock price now. The payoff expression allows us to rank the choices in order of increasing risk: 1) K_1<K_2<S (both options are in the money, least risk), 2) K_1<S<K_2 and 3) S<K_1<K_2 (both options are out of the money, highest risk). Also remember that a call debit spread executed as a single trade is usually cheaper, in commissions and margin requirements, than buying c(K_{1}) and selling c(K_{2}) in two separate transactions.

Exercise. Analyze a call credit spread, in which you sell c(K_{1}) and buy c(K_{2}).

24 Jun 20

Solution to Question 2 from UoL exam 2018, Zone B


There are three companies, called A, B, and C, and each has a 4% chance of going bankrupt. The event that one of the three companies will go bankrupt is independent of the event that any other company will go bankrupt.

Company A has outstanding bonds, and a bond will have a net return of r = 0\% if the corporation does not go bankrupt, but it will have a net return of r = -100\%, i.e., losing everything invested, if it goes bankrupt. Suppose an investor buys $1000 worth of bonds of company A, which we will refer to as portfolio {P_1}.

Suppose also that there exists a security whose payout depends on the bankruptcy of companies B and C in a joint fashion. In particular, if neither B nor C goes bankrupt, this derivative will have a net return of r = 0\%. If exactly one of B or C goes bankrupt, it will have a net return of r = -50\%, i.e., losing half of the investment. If both B and C go bankrupt, it will have a net return of r = -100\%, i.e., losing the whole investment. Suppose an investor buys $1000 worth of this derivative, which is then called portfolio {P_2}.

(a) Calculate the VaR at the \alpha = 10\% critical level for portfolios P_1 and {P_2}. [30 marks]

Independence of events. Denote A,{A^c} the events that company A goes bankrupt and does not go bankrupt, resp. A similar notation will be used for the other two companies. The simple definition of independence of bankruptcy events P(A \cap B) = P(A)P(B) would be too difficult to apply to prove independence of all events that we need. A general definition of independence of variables is that their sigma-fields are independent (it will not be explained here). This general definition implies that in all cases below we can use multiplicativity of probability such as

P(B \cap C) = P(B)P(C) = {0.04^2} = 0.0016,
P({B^c} \cap {C^c}) = {0.96^2} = 0.9216,
P((B \cap {C^c}) \cup ({B^c} \cap C)) = P(B \cap {C^c}) + P({B^c} \cap C) = 2 \times 0.04 \times 0.96 = 0.0768.

The events here have a simple interpretation: the first is that “both B and C fail”, the second is that “neither B nor C fails”, and the third is that “either (B fails and C does not) or (B does not fail and C does)” (the latter two events do not intersect, so additivity of probability applies).

Let {r_A},{r_S} be returns on A and the security S, resp. From the problem statement it follows that these returns are described by the tables
Table 1

{r_A} Prob
0 0.96
-100 0.04

Table 2

{r_S} Prob
0 0.9216
-50 0.0768
-100 0.0016

Everywhere we will be working with percentages, so the dollar values don’t matter.

From Table 1 we conclude that the distribution function of return on A looks as follows:


Figure 1. Distribution function of portfolio A

At x=-100 the function jumps up by 0.04, at x=0 by another 0.96. The dashed line at y=0.1 is used in the definition of the VaR using the generalized inverse:

VaR_A^{0.1} = \inf \{ {x:{F_A}(x) \ge 0.1}\} = 0.

From Table 2 we see that the distribution function of return on S looks like this:

The first jump is at x=-100, the second at x=-50 and third one at x=0. As above, it follows that

VaR_S^{0.1} = \inf\{ {x:{F_S}(x) \ge 0.1}\} = 0.

(b) Calculate the VaR at the \alpha=10\% critical level for the joint portfolio {P_1} + {P_2}. [20 marks]

To find the return distribution for P_1 + P_2, we have to consider all pairs of events from Tables 1 and 2 using independence.

1. P({r_A}=0,\ {r_S}=0)=0.96\times 0.9216=0.884736

2. P({r_A}=-100,\ {r_S}=0)=0.04\times 0.9216=0.036864

3. P({r_A}=0,\ {r_S}=-50)=0.96\times 0.0768=0.073728

4. P({r_A}=-100,\ {r_S}=-50)=0.04\times 0.0768=0.003072

5. P({r_A}=0,\ {r_S}=-100)=0.96\times 0.0016=0.001536

6. P({r_A}=-100,\ {r_S}=-100)=0.04\times 0.0016=0.000064

Since we deal with a joint portfolio, percentages for separate portfolios should be translated into ones for the whole portfolio. For example, the loss of 100% on one portfolio and 0% on the other means 50% on the joint portfolio (investments are equal). There are two such losses, in lines 2 and 5, so the probabilities should be added. Thus, we obtain the table for the return r on the joint portfolio:

Table 3

r Prob
0 0.884736
-25 0.073728
-50 0.0384
-75 0.003072
-100 0.000064

Here the cumulative probabilities are F_r(-100)=0.000064, F_r(-75)=0.003136, F_r(-50)=0.041536, F_r(-25)=0.115264 and F_r(0)=1. Since F_r(-25)=0.115264\ge 0.1, the definition of the generalized inverse gives

VaR_r^{0.1} = \inf \{ {x:{F_r}(x) \ge 0.1}\} = -25.

(c) Is VaR sub-additive in this example? Explain why the absence of sub-additivity may be a concern for risk managers. [20 marks]

To check sub-additivity, we need to pass to positive numbers, as explained in other posts. For the separate portfolios the (positive) VaR values are 0, while for the joint portfolio it is 25. The inequality 25 \le 0 + 0 is false, so sub-additivity does not hold in this example. Lack of sub-additivity is an undesirable property for risk managers, because for them keeping the VaR at low levels for portfolio parts doesn’t mean having low VaR for the whole portfolio.

(d) The expected shortfall E{S^\alpha } at the \alpha critical level can be defined as

ES^\alpha= - E_t[R|R < - VaR_{t + 1}^\alpha],

where R is a return or dollar amount. Calculate the expected shortfall at the \alpha = 10\% critical level for portfolio P_2. Is this risk measure sub-additive? [30 marks]

Using the definition of conditional expectation and Table 2, we have (the time subscript can be omitted because the problem is static; from part (a), VaR_S^{0.1}=0, so the condition is {r_S}<0):
ES^{0.1}=-E[r_S|r_S<0]=-\frac{Er_S1_{\{r_S<0\}}}{{P(r_S<0)}}=
=-\frac{-50\times 0.0768-100\times 0.0016}{0.0768+0.0016}=\frac{4}{0.0784}=51.02.

There is a theoretical property that the expected shortfall is sub-additive.
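The probability calculations in Tables 1-3 can be checked mechanically. Here is a short sketch (the function names are mine) that builds the joint return distribution from the two marginals, using independence, and evaluates the generalized inverse for the separate portfolios:

```python
from math import isclose

def convolve(d1, d2):
    """Joint return distribution of two independent, equally weighted
    portfolios (returns in percent of the whole portfolio)."""
    out = {}
    for r1, p1 in d1.items():
        for r2, p2 in d2.items():
            r = (r1 + r2) / 2          # equal $1000 investments
            out[r] = out.get(r, 0.0) + p1 * p2
    return out

def var(dist, alpha):
    """Generalized inverse: inf{x : F(x) >= alpha}."""
    total = 0.0
    for x in sorted(dist):
        total += dist[x]
        if total >= alpha:
            return x

rA = {0: 0.96, -100: 0.04}                   # Table 1
rS = {0: 0.9216, -50: 0.0768, -100: 0.0016}  # Table 2
joint = convolve(rA, rS)                     # Table 3

assert isclose(joint[-50], 0.0384)           # lines 2 and 5 combined
assert var(rA, 0.1) == 0 and var(rS, 0.1) == 0
```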

22 Jun 20

Solution to Question 2 from UoL exam 2019, zone B


Suppose the parameters in a GARCH (1,1) model

\sigma _{t + 1}^2 = \omega + \beta \sigma _t^2 + \alpha \varepsilon _t^2   (1)

are \omega = 0.000004,\ \alpha = 0.06,\ \beta = 0.93, the index t refers to days and {\varepsilon _t} is zero-mean white noise with conditional variance \sigma _t^2.

(a) What are the requirements for this process to be covariance stationary, and are they satisfied here? [20 marks]

If the coefficients satisfy the positivity condition \omega>0,\ \alpha,\beta\ge0, then the condition for covariance stationarity is \alpha + \beta < 1. Here \alpha+\beta=0.06+0.93=0.99<1, so the conditions are satisfied, though barely.

(b) What is the long-run average volatility? [20 marks]

We use the fact that, by the law of iterated expectations, E\varepsilon _{t+1}^2 = E\left[ {E(\varepsilon _{t + 1}^2|{F_t})} \right] = E\sigma _{t + 1}^2; denote this common unconditional value {\sigma ^2} (under stationarity it does not depend on t). Applying the unconditional mean to regression (1) and using the LIE we get

{\sigma ^2}=E\sigma _{t+1}^2=E\left[{\omega+\beta\sigma _t^2+\alpha\varepsilon _t^2}\right]=\omega+\beta{\sigma^2}+\alpha{\sigma^2}

and

{\sigma^2}=\frac{\omega }{{1-\alpha-\beta}}=\frac{{0.000004}}{{1-0.06-0.93}}=0.0004.

This is the long-run variance; the long-run average volatility is \sigma=\sqrt{0.0004}=0.02, i.e., 2% per day.

(c) If the current volatility is 2.5% per day, what is your estimate of the volatility in 20, 40, and 60 days? [20 marks]

On p.107 of the Guide there is the derivation of the equation

\sigma _{t + h,t}^2 = \sigma _y^2 + {(\alpha + \beta )^{h - 1}}(\sigma _{t + 1,t}^2 - \sigma _y^2),\,\,h \ge 1.    (2)

I gave you a slightly easier derivation in my class, please use that one. Note that (2) works in variances, so the current volatility of 2.5% per day must first be squared: \sigma _{t+1}^2=0.025^2=0.000625. If we interpret "current" as t+1 and "in twenty days" as t+21, then

\sigma _{t+21}^2=\sigma^2+(\alpha + \beta )^{20}(\sigma _{t+1}^2-\sigma^2) = 0.0004+0.99^{20}(0.000625-0.0004) = 0.000584,

so the 20-day volatility forecast is \sqrt{0.000584}=0.0242, or about 2.42% per day. Replacing the exponent 20 by 40 and 60 gives variances 0.000551 and 0.000523, that is, volatilities of about 2.35% and 2.29% per day, resp. The forecasts decay toward the long-run level of 2%. I did it in Excel and don't envy you if you have to do it during an exam.

(d) Suppose that there is an event that decreases the current volatility by 1.5% to 1% per day. Estimate the effect on the volatility in 20, 40, and 60 days. [20 marks]

Calculations are the same with the new current variance \sigma _{t+1}^2=0.01^2=0.0001. Now \sigma _{t+1}^2-\sigma^2=-0.0003 is negative, so the forecasts approach the long-run level from below: the variances are 0.0004-0.99^{k}\times 0.0003 for k=20,40,60, that is 0.000155, 0.000199 and 0.000236, giving volatilities of about 1.24%, 1.41% and 1.54% per day, resp.
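These forecasts take a few lines of code. A sketch (the function name is mine) that works in variances, as formula (2) requires, and returns the volatility as the square root of the forecast variance:

```python
import math

def garch_vol_forecast(current_vol, omega, alpha, beta, horizon):
    """h-step-ahead volatility forecast from GARCH(1,1):
    sigma^2_{t+h} = sigma^2 + (alpha+beta)^(h-1) * (sigma^2_{t+1} - sigma^2),
    where sigma^2 = omega / (1 - alpha - beta) is the long-run variance.
    current_vol enters as a variance (squared); the result is a volatility."""
    long_run_var = omega / (1 - alpha - beta)
    current_var = current_vol ** 2
    var_h = long_run_var + (alpha + beta) ** (horizon - 1) * (current_var - long_run_var)
    return math.sqrt(var_h)

# parameters from the problem: omega=0.000004, alpha=0.06, beta=0.93
for days in (20, 40, 60):
    print(days, round(garch_vol_forecast(0.025, 0.000004, 0.06, 0.93, days + 1), 4))
```

As the horizon grows, the forecast converges to the long-run volatility of 2% per day regardless of the starting value.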

(e) Explain what volatility should be used to price 20-, 40-and 60-day options, and explain how you would calculate the values. [20 marks]

The only unobservable input to the Black-Scholes option pricing formula is the stock price volatility. In the derivation of the formula the volatility is assumed to be constant. The value of the constant should depend on the forecast horizon. If we, say, forecast 20 days ahead, we should use a constant value for all 20 days. This constant can be obtained as an average of daily forecasts obtained from the GARCH model.

If the GARCH is not used, a simpler approach is applied. If the average daily volatility is {\sigma _d}, then assuming independent returns, over a period of n days volatility is {\sigma _{nd}} = \sqrt n {\sigma _d}.
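For instance, with an average daily volatility of 2% (a number of my choosing), the square-root scaling gives:

```python
import math

sigma_d = 0.02  # average daily volatility
for n in (20, 40, 60):
    # volatility over n days under independent returns: sqrt(n) * sigma_d
    print(n, round(math.sqrt(n) * sigma_d, 4))
# prints 0.0894, 0.1265, 0.1549
```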

In practice, traders go back from option prices to volatility. That is, they use observed option prices to solve the Black-Scholes formula for volatility (find the root of an equation with the price given). The resulting value is called implied volatility. If it is plugged back into the Black-Scholes formula, the observed option price will result.
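To illustrate the last paragraph, here is a minimal sketch of backing a volatility out of an observed call price by bisection (the function names and the parameter values in the round-trip check are mine; the call price is increasing in volatility, so bisection applies):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-8):
    """Solve bs_call(..., sigma) = price for sigma by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bs_call(S, K, r, T, mid) < price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# round-trip check: price an option at sigma = 0.2, then invert
p = bs_call(100, 100, 0.02, 0.25, 0.2)
assert abs(implied_vol(p, 100, 100, 0.02, 0.25) - 0.2) < 1e-6
```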

21 Jun 20

Solution to Question 3 from UoL exam 2019, zone A


(a) Define the concept of trade duration in financial markets and explain briefly why this concept is economically useful. What features do trade durations typically exhibit and how can we model these features? [25 marks]

High frequency traders (HFT) may trade every millisecond. Orders from traders arrive at random moments and therefore the trade times are not evenly spaced. It makes sense to model the differences

{x_j} = TIME_j - TIME_{j - 1}

between transaction times. (The Guide talks about differences between times of returns but I don’t like this because on small time frames people are interested in prices, not returns.) Those differences are called durations. They are economically interesting because 1) they tell us something about liquidity: periods of intense trading are generally periods of greater market liquidity than periods of sparse trading (there is also after-hours trading between 16:00 and 20:30, New York time, when trading may be intense but liquidity is low) and 2) durations relate directly to news arrivals and the adjustment of prices to news, and so have some use in discussions of market efficiency.

The trading session in the USA is from 9:30 to 16:00, New York time. Durations exhibit diurnality (that is, intraday seasonality): transactions are more frequent (durations are shorter) in the first and last hour of the trading session and less frequent around lunch, see Figure 16.6 from the Guide.

Higher frequency in the first hour results from traders rebalancing their portfolios after overnight news and in the last hour – from anticipation of news during the next night.

The main decomposition of durations is

{x_j} = {s_j}x_j^*,
so \log {x_j} = \log {s_j} + \log x_j^*,
\log {s_j} = \sum\limits_{i = 1}^{13} {\beta _i}{D_{i,j}} or
\log {s_j} = \gamma _0 + \gamma _1HR{S_j} + \gamma _2HRS_j^2.

In the first equation {s_j} is the diurnal component and x_j^*=x_j/s_j is called the de-seasonalized duration. The second equation follows from the first by taking logs.

I am not sure that you need the third equation. The fourth equation is used below. In the third equation \log {s_j} is regressed on dummies of half-hour periods (there are 13 of them in the trading session; the constant is not included to avoid the dummy trap). In the fourth equation it is regressed on the first and second power of the time variable HRS_j, which measures time in hours starting from the previous midnight. This is called a polynomial regression. Both regressions can capture diurnality.

(b) Describe the Engle and Russell (1998) autoregressive conditional duration (ACD) model. [25 marks]

Instead of the duration model considered in part (a) Engle and Russell suggest the ACD model

{x_j} = {s_j}x_j^*,
x_j^* = {\psi _j}{\varepsilon _j},
where \log {s_j} = {\gamma _0} + {\gamma _1}HR{S_j} + {\gamma _2}HRS_j^2,
x_j^* = \frac{{x_j}}{{s_j}},\ {\varepsilon _j}|{F_{j-1}}\sim i.i.d.(1),
\psi _j = \omega + \beta \psi _{j - 1} + \alpha x_{j - 1}^*,\,\,\omega > 0,\,\,\alpha ,\beta \ge 0   (1)

The first decomposition is the same as above. The second equation decomposes the de-seasonalized duration into a product of deterministic and stochastic components. To understand the idea, we can compare (1) with the GARCH(1,1) model:

y_{t + 1} = {\mu _{t + 1}} + {\varepsilon _{t + 1}},
{\varepsilon _{t + 1}} = {\sigma _{t + 1}}{v_{t + 1}},
{v_{t + 1}}|{F_t}\sim F(0,1),
\sigma _{t + 1}^2 = \omega + \beta \sigma _t^2 + \alpha \varepsilon _t^2.   (2)

Equations (1) and (2) are similar. The assumptions about the random components are different: in (1) we have E{\varepsilon _j} = 1, in (2) E{v_j} = 0. This is because in (2) the epsilons are deviations from the mean and may change sign; in (1) the epsilons come from durations and should be positive. To obtain the last equation in (1) from the GARCH(1,1) in (2) one has to make replacements

\sigma _t^2\sim {\psi _j},\ \varepsilon _t^2 = {({y_t} - {\mu _t})^2}\sim x_{j - 1}^*. (3)

This is important to know, to understand the comparison of the ML method for the two models below.

(c) Compare the conditions for covariance stationarity, identification and positivity of the duration series for the ACD(1,1) to those for the GARCH(1,1). [25 marks]

Those conditions for GARCH are

Condition 1: \omega > 0,\ \alpha,\beta \ge 0, for positive variance,

Condition 2: \beta = 0 if \alpha = 0, for identification,

Condition 3: \alpha + \beta < 1, for covariance stationarity.

For ACD they are the same, because both are essentially ARMA models.

(d) Illustrate the relationship between the log-likelihood of the ACD(1,1) model and the estimation of a GARCH(1,1) model using the normal likelihood function. [25 marks]

Because of the assumption E{\varepsilon _j} = 1 in (1) we cannot use the normal distribution for (1). Instead the exponential random variable is used. It takes only positive values; its density is zero on the left half-axis and is an exponential function on the right half-axis:

Z\sim Exponential(\gamma ),

so f(z|\gamma ) = \frac{1}{\gamma }\exp\left(-\frac{z}{\gamma } \right) for z\ge 0 and EZ = \gamma .

Here \gamma is a positive number and f is the density. We take \gamma= 1 as required by the ACD model. This implies Ex_j^* = {\psi _j}E{\varepsilon _j} = {\psi _j} so x_j^* is distributed as Exponential({\psi _j}). Its density is

f(x_j^*|\psi _j)=f(x_j^*|\psi _j(\omega,\beta,\alpha))=\frac{1}{\psi _j}\exp\left(-\frac{x_j^*}{\psi _j}\right).

The rest is logical: plug \psi_j from the ACD model (1), then take log and then add those logs to obtain the log-likelihood. A. Patton gives the log-likelihood for GARCH, whose derivation I could not find in the book. But from (3) we know that there should be similarity after replacement \sigma _t^2\sim{\psi _j},\ x_{j - 1}^*\sim{({r_t} - {\mu _t})^2}. To this Patton adds that the GARCH likelihood is simply a linear transformation of the ACD likelihood.
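To make the construction concrete, here is a sketch (parameter values are mine) of the exponential ACD(1,1) log-likelihood: since x_j^*|F_{j-1} is Exponential(psi_j), each observation contributes -log psi_j - x_j^*/psi_j, with psi_j updated by recursion (1):

```python
import math
import random

def acd_loglik(durations, omega, alpha, beta):
    """Exponential ACD(1,1) log-likelihood for de-seasonalized durations."""
    psi = omega / (1 - alpha - beta)   # start at the unconditional mean
    ll = 0.0
    for x in durations:
        ll += -math.log(psi) - x / psi
        psi = omega + beta * psi + alpha * x   # recursion (1)
    return ll

# simulate a short series under the model, then evaluate the likelihood
random.seed(0)
omega, alpha, beta = 0.1, 0.06, 0.93
psi, xs = omega / (1 - alpha - beta), []
for _ in range(1000):
    x = psi * random.expovariate(1.0)   # eps_j ~ Exponential(1), mean 1
    xs.append(x)
    psi = omega + beta * psi + alpha * x
print(acd_loglik(xs, omega, alpha, beta))
```

Replacing x_j^* by (r_t - mu_t)^2 and psi_j by sigma_t^2, as in (3), turns this loop into the corresponding GARCH(1,1) likelihood recursion.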

20 Apr 20

FN3142 Chapter 14 Risk management and Value-at-Risk: Backtesting


Here I added three videos and corresponding pdf files on three topics:

Chapter 14. Part 1. Evaluating VAR forecasts

Chapter 14. Part 2. Conditional coverage tests

Chapter 14. Part 3. Diebold-Mariano test for VaR forecasts

Be sure to watch all of them because sometimes I make errors and correct them in later videos.

All files are here.

Unconditional coverage test

7 Apr 20

FN3142 Chapter 13. Risk management and Value-at-Risk: Models


Chapter 13 is divided into 5 parts. For each part, there is a video with the supporting pdf file. Both have been created in Notability using an iPad. All files are here.

Part 1. Distribution function with two examples and generalized inverse function.

Part 2. Value-at-Risk definition

Part 3. Empirical distribution function and its estimation

Part 4. Models based on flexible distributions

Part 5. Semiparametric models, nonparametric estimation of densities and historical simulation.

In addition, the subchapter named Expected shortfall contains material that is not in the guide but was required by one of the past UoL exams.

29 Mar 20

FN3142 Chapter 12. Forecast comparison and combining


Like most universities, we switched to online teaching because of COVID-19. This is what I prepared for my Quantitative Finance course and want to make available to everybody. The lecture is divided into three parts. Download the pdf and watch the corresponding video.

Diebold-Mariano test


Chapter 12. Part 1.mp4

Chapter 12. Forecast comparison and combination. Part 1.pdf

Chapter 12. Part 2.mp4

Chapter 12. Forecast comparison and combination. Part 2.pdf

Chapter 12. Part 3.mp4

Chapter 12. Part 3. Forecast encompassing and combining

See also the post on the Newey-West estimator.

 

28 Oct 19

Leverage effect: the right definition and explanation


The guide by Andrew Patton for Quantitative Finance FN3142 states that "stock returns are negatively correlated with changes in volatility: that is, volatility tends to rise following bad news (a negative return) and fall following good news (a positive return)", with reference to Black (1976). This is not quite so, as can be seen from the following chart.


S&P500 versus VIX: leverage effect

 

The candlebars (in green and light red) show the index S&P 500, which is an average of the stock prices of the 500 largest publicly traded companies. The continuous purple line shows the VIX, one of the widely used measures of volatility. Between the yellow vertical lines at A and B the return was predominantly positive, yet at the beginning of that period volatility was high. The graph clearly shows that the negative correlation is between asset prices and volatility, not between returns and volatility. Thus the proper definition of the leverage effect is "negative correlation between asset prices and volatility". There are different explanations of the effect; here is the one I prefer.

At all times, market participants try to maximize profits and minimize losses. This motivation results in different behaviors during the market cycle.

Near the bottom (below line at E)

During the slump to the left of point A. Out of fear that a great depression is coming, everybody is dumping stocks, trying to stay in cash and gold as much as possible. That's why the price drops quickly and volatility is high.

During the recovery to the right of point A. Some investors consider many stocks cheap and try to load up, buying stocks in large quantities. Others are not convinced that the recovery has started. Opposing opinions and swift purchases, made possible by large amounts of cash on hand, increase volatility.

Near the top (above line at F)

To the left of point B. Stocks are bought in small quantities, for the following reasons: 1) to avoid a sharp increase in price, 2) out of fear that the rally will soon end, and 3) not much cash is left in portfolios, so it's mainly portfolio rebalancing (buy stocks with potential to grow, sell those that have stalled).

To the right of point B. Investors sell stocks in small quantities, to take profits and in anticipation of a new downturn.

The main difference between what happens at the bottom and at the top is in the relative amount of cash and stocks in portfolios. This is why near the top there is little volatility. Of course, there are other reasons. Look at what happened between points C and D. The S&P level was relatively high, but there was a lot of uncertainty about the US-China trade war. Trump with his tweets contributed a lot to that volatility. This article claims that somebody made billions of dollars trading on his tweets. Insider trading is prohibited but not for the mighty of this world.

Final remark. If the recovery from the trough at point A took just three months, why worry and sell stocks during the fall? There are three answers. Firstly, after the big recession of 2008, it took the markets five years to fully recover to the pre-2008 level, and that's what scares everybody. Secondly, the best stocks after the recovery will not be the same as before the fall. Thirdly, one can make money on the way down too, provided one has spare cash.

14 May 19

Question 1 from UoL exam 2016, Zone B, Post 2


For the problem statement and first part of the solution see Question 1 from UoL exam 2016, Zone B, Post 1.

Let R denote the return on P_1+P_2. From Table 1 we can derive the probabilities table for this return:

Table 2. Joint table of returns on separate portfolios

          R_1=0            R_1=-100
R_2=0     0.96^2           0.04\cdot 0.96
R_2=-100  0.04\cdot 0.96   0.04^2

From Table 2 we conclude that the return on the combined portfolio looks as follows:

Table 3. Total return

R Prob
0 0.96^2
-50 2\cdot 0.04\cdot 0.96
-100 0.04^2

Table 3 shows that

F_R(x)=0 for x<-100,

F_R(x)=0.04^2=0.0016 for -100\leq x<-50,

F_R(x)=2\cdot 0.04\cdot 0.96+0.0016=0.0784 for -50\leq x<0 and

F_R(x)=0.96^2+0.0784=1 for x\geq 0.

Try to follow the procedure used in Post 1 and you will see that

F_R^{-1}(y)=+\infty for y>1,

F_R^{-1}(y)=0 for 0.0784<y\leq 1,

F_R^{-1}(y)=-50 for 0.0016<y\leq 0.0784,

F_R^{-1}(y)=-100 for 0<y\leq 0.0016 and

F_R^{-1}(y)=-\infty for y\leq 0.

This implies VaR_R^\alpha=F_R^{-1}(0.05)=-50.

(b) The subadditivity definition requires amounts opposite in sign to ours. That is, we define \widetilde{VaR^\alpha} from P(X\leq -\widetilde{VaR^\alpha})=\alpha and then say that VaR thus defined is sub-additive if \widetilde{VaR^\alpha}(P_1+P_2)\leq \widetilde{VaR^\alpha}(P_1)+\widetilde{VaR^\alpha}(P_2). We have been using the definition P(X\leq VaR^\alpha)=\alpha. It's easy to see that \widetilde{VaR^\alpha}=-VaR^\alpha. Thus, in our case we have \widetilde{VaR_R^{0.05}}=50 which is not smaller than \widetilde{VaR_{R_1}^{0.05}}+\widetilde{VaR_{R_2}^{0.05}}=0. Sub-additivity does not hold in this example. Absence of sub-additivity means that riskiness of the whole portfolio, as measured by VaR, may exceed riskiness of the sum of the portfolio parts.

(c) The problem uses the definition of the expected shortfall that yields positive values. I use everywhere the definition that gives negative values: ES^\alpha=E_t[R|R\leq VaR_{t+1}^\alpha]. Since the setup is static, this is the same as ES^\alpha=E[R|R\leq VaR^\alpha]. By definition, E(X|A)=\frac{E(X1_A)}{P(A)}, so ES^\alpha=\frac{E(R1_{\{R\leq VaR^\alpha\}})}{P(R\leq VaR^\alpha)}.

In Post 1 we found that VaR^\alpha=0 for each of R_1,R_2. The condition R_i\leq VaR^\alpha=0 places no restriction on R_i, so from Table 1

E(R_i1_{\{R_i\leq VaR^\alpha \}})=ER_i=0\cdot 0.96-100\cdot 0.04=-4,\ P(R_i\leq VaR^\alpha)=1.

As a result, ES_i^\alpha=-4\%.

Since VaR_R^\alpha=F_R^{-1}(0.05)=-50, from Table 3

E(R1_{\{R\leq -50\}})=-50\cdot 2\cdot 0.04\cdot 0.96-100\cdot 0.04^2=-4, P(R\leq -50)=1-0.9216=0.0784.

Therefore ES_R^\alpha=-51.02. Converting everything to positive values, we have 51.02>4+4, so that sub-additivity does not hold. In fact, there is a theoretical property that it should hold. In this example it does not because of the bad behavior of the generalized inverse for distribution functions of discrete random variables.

The returns in percentages can be easily converted to those in dollars.
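The VaR and ES computations above can be verified with a short script (the function names are mine; the negative-values convention of this post is used throughout):

```python
def gen_inverse_var(dist, alpha):
    """VaR as the generalized inverse inf{x : F(x) >= alpha}."""
    total = 0.0
    for x in sorted(dist):
        total += dist[x]
        if total >= alpha:
            return x

def expected_shortfall(dist, alpha):
    """ES^alpha = E[R | R <= VaR^alpha] for a discrete distribution."""
    v = gen_inverse_var(dist, alpha)
    tail = {x: p for x, p in dist.items() if x <= v}
    mass = sum(tail.values())
    return sum(x * p for x, p in tail.items()) / mass

# return distributions in percent (Tables 1 and 3 of this post)
r_i = {0: 0.96, -100: 0.04}
r_joint = {0: 0.9216, -50: 2 * 0.04 * 0.96, -100: 0.04 ** 2}

assert gen_inverse_var(r_i, 0.05) == 0
assert gen_inverse_var(r_joint, 0.05) == -50
assert abs(expected_shortfall(r_i, 0.05) - (-4)) < 1e-9
assert abs(expected_shortfall(r_joint, 0.05) - (-51.0204)) < 1e-3
```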