18 Oct 20

People need real knowledge

Traffic analysis

The number of visits to my website has exceeded 206,000. This figure depends on what counts as a visit. An external counter, visible to everyone, writes a cookie to the reader's computer and counts repeated visits by the same reader as one; by its count, the number of individual readers has reached 23,000. The external counter does not give any further statistics, so all the numbers below come from the internal counter, which is visible only to the site owner.

A high percentage of my content is complex. After reading one post, the reader finds that the answer he is looking for depends on preliminary material, so he starts digging into it and has to go deeper and deeper. Hence the figure of 206,000: one reader visits the site on average 9 times, on different days. Sometimes a visitor follows a link from one post to another on the same day, which gives another figure: 310,000 page reads.

I originally wrote simple things about basic statistics. Then I began to write accompanying materials for each advanced course that I taught at Kazakh-British Technical University (KBTU). The shift in the number and level of readers shows that people need deep knowledge, not one-day clickbait.

For example, my simple post on basic statistics has been read 2,300 times. In comparison, the more complex post on the Cobb-Douglas function has been read 7,100 times. This function is widely used in economics to model consumer preferences (utility function) and producer capabilities (production function). All textbooks teach it using two-dimensional graphs, as P. Samuelson proposed 85 years ago. In fact, the two-dimensional graphs are projections of a three-dimensional graph, which I show, making everything clear and obvious.
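To illustrate the projection point, here is a minimal Python/matplotlib sketch (my own illustration, not the figure from the post) that plots a Cobb-Douglas surface together with its level curves; the exponents 0.3 and 0.7 are arbitrary values chosen for the example.

import numpy as np
import matplotlib.pyplot as plt

# Cobb-Douglas Q = K^a * L^b with illustrative (hypothetical) exponents
a, b = 0.3, 0.7
K, L = np.meshgrid(np.linspace(0.1, 10, 100), np.linspace(0.1, 10, 100))
Q = K**a * L**b

fig = plt.figure(figsize=(7, 5))
ax = fig.add_subplot(projection="3d")
ax.plot_surface(K, L, Q, cmap="viridis", alpha=0.8)              # the 3D graph
ax.contour(K, L, Q, levels=10, zdir="z", offset=0, colors="k")   # its projection: the familiar 2D isoquants
ax.set_xlabel("K"); ax.set_ylabel("L"); ax.set_zlabel("Q")
ax.set_zlim(0, Q.max())
plt.show()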

The answer to one of the University of London (UoL) exam problems attracted 14,300 readers. It is so complicated that I split the answer into two parts, and there are links to additional material. On the UoL exam, students have to solve this problem in 20-30 minutes, which even I would not be able to do.

Why my site is unique

My site is unique in several ways. Firstly, I tell the truth about AP Statistics books. AP Statistics is a basic statistics course for those who need to interpret tables, graphs and simple statistics. If you have a head on your shoulders, rather than a Google search engine, all you need is to read a small book and look at the solutions. I praise one such book in my reviews. You don't need to attend a two-semester course or read an 800-page book. Nor do you need 140 high-quality color photographs that have nothing to do with science and double the price of the book.

Many AP Statistics consumers believe that learning should be fun. Such people are attracted by a book with anecdotes that have no relation to statistics or to the life of scientists. In the West, everyone depends on everyone else, and therefore reviews are written in superlatives and carefully polished. Thank God, I do not depend on the Western labor market, and therefore I tell the truth. Part of my criticism, including criticism of the statistics textbook selected for the "100 Textbooks" program of the Ministry of Education and Science of Kazakhstan (MES), is on Facebook.

Secondly, I have the world's only online, free, complete matrix algebra tutorial with all the proofs. Free courses on Udemy, Coursera and edX are not far from AP Statistics in terms of level. Courses at MIT and Khan Academy are also simpler than mine, but have the advantage of being given in video format.

The third distinctive feature is that I help UoL students. UoL is a huge organization comprising 17 universities and colleges in the UK, with many branches in other parts of the world. The Economics program was developed by the London School of Economics (LSE), one of the world's leading universities.

The problem with LSE courses is that they are very difficult. After the exams, LSE posts short recommendations online for solving the problems, along the lines of: here you need to use such and such theory and such and such idea. Complete solutions are not given, for two reasons: LSE does not want to help future examinees, and sometimes the problems or solutions contain errors (who doesn't make errors?). Even the short recommendations are deleted after a year. My site is the only place in the world with complete solutions to the most difficult problems of the last few years. It is not for nothing that the solution to the problem mentioned above attracted 14,000 visits.

Fourthly, my site is unique in terms of the variety of material: statistics, econometrics, algebra, optimization, finance.

The average number of visits is about 100 per day. When it's time for students to take exams, it jumps to 1-2 thousand. The total amount of materials created in 5 years is equivalent to 5 textbooks. It takes from 2 hours to one day to create one post, depending on the level. After I published this analysis of the site traffic on Facebook, my colleague Nurlan Abiev decided to write posts for the site. I pay for the domain myself, $186 per year. It would be nice to make the site accessible to students and schoolchildren of Kazakhstan, but I don't have time to translate from English.

Once I looked at the MES requirements for approval of electronic textbooks. They want several printed copies of all (!) materials and a hefty fee for the examination of the site. As a result, all my efforts to create and maintain the site have so far been a personal initiative without any support from the MES and its Committee on Science.

24 Jun 20

Solution to Question 2 from UoL exam 2018, Zone B

There are three companies, called A, B, and C, and each has a 4% chance of going bankrupt. The event that one of the three companies will go bankrupt is independent of the event that any other company will go bankrupt.

Company A has outstanding bonds, and a bond will have a net return of r = 0\% if the corporation does not go bankrupt, but it will have a net return of r = -100\%, i.e., losing everything invested, if it goes bankrupt. Suppose an investor buys $1000 worth of bonds of company A, which we will refer to as portfolio {P_1}.

Suppose also that there exists a security whose payout depends on the bankruptcy of companies B and C in a joint fashion. In particular, if neither B nor C go bankrupt, this derivative will have a net return of r = 0\%. If exactly one of B or C go bankrupt, it will have a net return of r = -50\%, i.e., losing half of the investment. If both B and C go bankrupt, it will have a net return of r = -100\%, i.e., losing the whole investment. Suppose an investor buys $1000 worth of this derivative, which is then called portfolio {P_2}.

(a) Calculate the VaR at the \alpha = 10\% critical level for portfolios P_1 and {P_2}. [30 marks]

Independence of events. Denote by A and {A^c} the events that company A goes bankrupt and does not go bankrupt, resp. A similar notation will be used for the other two companies. The simple definition of independence of the bankruptcy events, P(A \cap B) = P(A)P(B), would be too difficult to apply to establish independence of all the events we need. A more general definition of independence of random variables is that their sigma-fields are independent (it will not be explained here). This general definition implies that in all cases below we can use multiplicativity of probability, such as

P(B \cap C) = P(B)P(C) = {0.04^2} = 0.0016,\quad P({B^c} \cap {C^c}) = {0.96^2} = 0.9216,
P((B \cap {C^c}) \cup ({B^c} \cap C)) = P(B \cap {C^c}) + P({B^c} \cap C) = 2 \times 0.04 \times 0.96 = 0.0768.

The events here have a simple interpretation: the first is that “both B and C fail”, the second is that “neither B nor C fails”, and the third is that “either (B fails and C does not) or (B does not fail and C does)” (the last two events do not intersect, so additivity of probability applies).

Let {r_A},{r_S} be returns on A and the security S, resp. From the problem statement it follows that these returns are described by the tables
Table 1

{r_A} Prob
0 0.96
-100 0.04

Table 2

{r_S} Prob
0 0.9216
-50 0.0768
-100 0.0016

Everywhere we will be working with percentages, so the dollar values don’t matter.

From Table 1 we conclude that the distribution function of return on A looks as follows:

Figure 1. Distribution function of portfolio A

At x=-100 the function jumps up by 0.04, at x=0 by another 0.96. The dashed line at y=0.1 is used in the definition of the VaR using the generalized inverse:

VaR_A^{0.1} = \inf \{ {x:{F_A}(x) \ge 0.1}\} = 0.

From Table 2 we see that the distribution function of return on S looks like this:

The first jump is at x=-100, the second at x=-50 and the third at x=0. As above, it follows that

VaR_S^{0.1} = \inf\{ {x:{F_S}(x) \ge 0.1}\} = 0.
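To make the generalized inverse concrete, here is a minimal Python sketch (my own check, not part of the exam) that computes the 10% VaR directly from Tables 1 and 2:

# Discrete return distributions (returns in percent) from Tables 1 and 2
P1 = {0: 0.96, -100: 0.04}                   # bonds of company A
P2 = {0: 0.9216, -50: 0.0768, -100: 0.0016}  # security on B and C

def var(dist, alpha=0.10):
    # Generalized inverse: inf{x : F(x) >= alpha}
    total = 0.0
    for x in sorted(dist):                   # accumulate F from the smallest return up
        total += dist[x]
        if total >= alpha:
            return x
    return max(dist)

print(var(P1), var(P2))                      # both are 0, as found above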

(b) Calculate the VaR at the \alpha=10\% critical level for the joint portfolio {P_1} + {P_2}. [20 marks]

To find the return distribution for P_1 + P_2, we have to consider all pairs of events from Tables 1 and 2 using independence.

1. P({r_A}=0,\ {r_S}=0)=0.96\times 0.9216=0.884736

2. P({r_A}=-100,\ {r_S}=0)=0.04\times 0.9216=0.036864

3. P({r_A}=0,\ {r_S}=-50)=0.96\times 0.0768=0.073728

4. P({r_A}=-100,\ {r_S}=-50)=0.04\times 0.0768=0.003072

5. P({r_A}=0,\ {r_S}=-100)=0.96\times 0.0016=0.001536

6. P({r_A}=-100,\ {r_S}=-100)=0.04\times 0.0016=0.000064

Since we deal with a joint portfolio, percentages for separate portfolios should be translated into ones for the whole portfolio. For example, the loss of 100% on one portfolio and 0% on the other means 50% on the joint portfolio (investments are equal). There are two such losses, in lines 2 and 5, so the probabilities should be added. Thus, we obtain the table for the return r on the joint portfolio:

Table 3

r Prob
0 0.884736
-25 0.073728
-50 0.0384
-75 0.003072
-100 0.000064

Here only the first probability exceeds 0.1, so the definition of the generalized inverse gives

VaR_r^{0.1} = \inf \{ {x:{F_r}(x) \ge 0.1}\} = 0.

(c) Is VaR sub-additive in this example? Explain why the absence of sub-additivity may be a concern for risk managers. [20 marks]

To check sub-additivity, we need to pass to positive numbers, as explained in other posts. Zeros remain zeros, the inequality 0 \le 0 + 0 is true, so sub-additivity holds in this example. Lack of sub-additivity is undesirable for risk managers because then keeping the VaR of each part of a portfolio low does not guarantee a low VaR for the whole portfolio.

(d) The expected shortfall E{S^\alpha } at the \alpha critical level can be defined as

ES^\alpha= - E_t[R|R < - VaR_{t + 1}^\alpha],

where R is a return or dollar amount. Calculate the expected shortfall at the \alpha = 10\% critical level for portfolio P_2. Is this risk measure sub-additive? [30 marks]

Using the definition of conditional expectation and Table 3, we have (the time subscript can be omitted because the problem is static)

ES^{0.1}=-E[r \mid r< -VaR_r^{0.1}]=-\frac{E\left[r\,1_{\{r< -VaR_r^{0.1}\}}\right]}{P(r< -VaR_r^{0.1})}=-\frac{-25\times 0.073728-50\times 0.0384-75\times 0.003072-100\times 0.000064}{0.073728+0.0384+0.003072+0.000064}=\frac{4}{0.115264}=34.7029.
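For readers who want to verify the arithmetic, here is a short Python sketch (my own addition) that rebuilds Table 3 from Tables 1 and 2 using independence and evaluates the expected shortfall at the 10% level:

# Returns (in percent) and probabilities from Tables 1 and 2
P1 = {0: 0.96, -100: 0.04}
P2 = {0: 0.9216, -50: 0.0768, -100: 0.0016}

# Joint portfolio: equal dollar amounts, so the joint return is the average of the two returns
joint = {}
for rA, pA in P1.items():
    for rS, pS in P2.items():
        r = (rA + rS) / 2
        joint[r] = joint.get(r, 0.0) + pA * pS
print(joint)                                  # reproduces Table 3

var_r = 0                                     # VaR of the joint portfolio at the 10% level, from part (b)

# Expected shortfall: -E[r | r < -VaR] for the discrete distribution (here -VaR = 0)
tail = {r: p for r, p in joint.items() if r < -var_r}
es = -sum(r * p for r, p in tail.items()) / sum(tail.values())
print(es)                                     # approximately 34.7029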

There is a theoretical result that the expected shortfall is sub-additive.

22 Jun 20

Solution to Question 2 from UoL exam 2019, zone B

Suppose the parameters in a GARCH (1,1) model

\sigma _{t + 1}^2 = \omega + \beta \sigma _t^2 + \alpha \varepsilon _t^2   (1)

are \omega = 0.000004,\ \alpha = 0.06,\ \beta = 0.93, the index t refers to days and {\varepsilon _t} is zero-mean white noise with conditional variance \sigma _t^2.

(a) What are the requirements for this process to be covariance stationary, and are they satisfied here? [20 marks]

If the coefficients satisfy the positivity conditions \omega>0,\ \alpha,\beta\ge0, then the condition for covariance stationarity is \alpha + \beta < 1. Here \alpha+\beta=0.99, so the conditions are satisfied, though only barely.

(b) What is the long-run average volatility? [20 marks]

We use the fact that, under covariance stationarity, {\sigma ^2} = E\sigma _{t + 1}^2 = E\left[ {E(\varepsilon _{t + 1}^2|{F_t})} \right] = E\varepsilon _{t+1}^2 for all t. Applying the unconditional expectation to equation (1) and using the LIE we get

{\sigma ^2}=E\sigma _{t+1}^2=E\left[{\omega+\beta\sigma _t^2+\alpha\varepsilon _t^2}\right]=\omega+\beta{\sigma^2}+\alpha{\sigma^2}

and

{\sigma^2}=\frac{\omega }{{1-\alpha-\beta}}=\frac{{0.000004}}{{1-0.06-0.93}}=0.0004, which corresponds to a long-run volatility of \sqrt{0.0004}=0.02, that is, 2% per day.

(c) If the current volatility is 2.5% per day, what is your estimate of the volatility in 20, 40, and 60 days? [20 marks]

On p.107 of the Guide there is the derivation of the equation

\sigma _{t + h,t}^2 = \sigma^2 + {(\alpha + \beta )^{h - 1}}(\sigma _{t + 1,t}^2 - \sigma^2),\quad h \ge 1.    (2)

I gave you a slightly easier derivation in my class, please use that one. If we interpret "current" as t+1 and "in twenty days" as t+1+20, then

\sigma _{t+21}^2=\sigma^2+(\alpha + \beta )^{20}(\sigma _{t+1}^2-\sigma^2) = 0.0004+\exp\left[ 20\ln(0.06+0.93)\right](0.025-0.0004) = 0.020521.

For h=41,61 use the same formula to get 0.016692, 0.013725, resp. I did it in Excel and don't envy you if you have to do it during an exam.
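Here is a minimal Python sketch of formula (2) with the numbers plugged in as above (the current level 0.025 is used exactly as in the text); the third decimal place of the longer-horizon figures depends on whether the exponent is taken as h-1 or h, which explains small differences between hand and spreadsheet calculations. The same function also handles part (d): just pass 0.01 as the current value.

omega, alpha, beta = 0.000004, 0.06, 0.93
sigma2_bar = omega / (1 - alpha - beta)      # long-run level 0.0004 from part (b)

def forecast(h, current=0.025):
    # Formula (2): sigma^2_{t+h} = sigma^2 + (alpha + beta)^(h-1) * (sigma^2_{t+1} - sigma^2)
    return sigma2_bar + (alpha + beta) ** (h - 1) * (current - sigma2_bar)

for h in (21, 41, 61):
    print(h, round(forecast(h), 6))          # 0.020521 at h=21; see the remark above on h vs h-1

# Part (d): the current value drops to 0.01
for h in (21, 41, 61):
    print(h, round(forecast(h, current=0.01), 6))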

(d) Suppose that there is an event that decreases the current volatility by 1.5% to 1% per day. Estimate the effect on the volatility in 20, 40, and 60 days. [20 marks]

Calculations are the same, just replace 0.025 by 0.01. Alternatively, one can see that the previous values go down by \exp[(h-1)\ln(0.06+0.93)]\times 0.015, which gives decreases of approximately 0.012, 0.010 and 0.008, resp.

(e) Explain what volatility should be used to price 20-, 40-and 60-day options, and explain how you would calculate the values. [20 marks]

The only unobservable input to the Black-Scholes option pricing formula is the stock price volatility. In the derivation of the formula the volatility is assumed to be constant. The value of the constant should depend on the forecast horizon. If we, say, forecast 20 days ahead, we should use a constant value for all 20 days. This constant can be obtained as an average of daily forecasts obtained from the GARCH model.

If the GARCH is not used, a simpler approach is applied. If the average daily volatility is {\sigma _d}, then assuming independent returns, over a period of n days volatility is {\sigma _{nd}} = \sqrt n {\sigma _d}.

In practice, traders go back from option prices to volatility. That is, they use observed option prices to solve the Black-Scholes formula for volatility (find the root of an equation with the price given). The resulting value is called implied volatility. If it is plugged back into the Black-Scholes formula, the observed option price will result.
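As an illustration of the last paragraph (a sketch with made-up inputs, not data from the Guide), the following Python snippet prices a European call with the Black-Scholes formula, annualizes a daily volatility by the square-root-of-time rule, and backs the implied volatility out of a hypothetical observed price by root finding:

from math import exp, log, sqrt
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    # Solve the Black-Scholes formula for sigma given the observed option price
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

sigma_daily = 0.02                       # hypothetical average daily volatility
sigma_20d = sqrt(20) * sigma_daily       # 20-day volatility assuming independent returns
print(sigma_20d)                         # about 0.0894

observed = 4.5                           # hypothetical market price of a 20-day at-the-money call
print(implied_vol(observed, S=100, K=100, T=20 / 252, r=0.01))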

21 Jun 20

Solution to Question 3 from UoL exam 2019, zone A

(a) Define the concept of trade duration in financial markets and explain briefly why this concept is economically useful. What features do trade durations typically exhibit and how can we model these features? [25 marks]

High frequency traders (HFT) may trade every millisecond. Orders from traders arrive at random moments and therefore the trade times are not evenly spaced. It makes sense to model the differences

{x_j} = TIME_j - TIME_{j - 1}

between transaction times. (The Guide talks about differences between times of returns but I don’t like this because on small time frames people are interested in prices, not returns.) Those differences are called durations. They are economically interesting because 1) they tell us something about liquidity: periods of intense trading are generally periods of greater market liquidity than periods of sparse trading (there is also after-hours trading between 16:00 and 20:30, New York time, when trading may be intense but liquidity is low) and 2) durations relate directly to news arrivals and the adjustment of prices to news, and so have some use in discussions of market efficiency.

The trading session in the USA is from 9:30 to 16:00, New York time. Durations exhibit diurnality (that is, intraday seasonality): transactions are more frequent (durations are shorter) in the first and last hour of the trading session and less frequent around lunch, see Figure 16.6 from the Guide.

Higher frequency in the first hour results from traders rebalancing their portfolios after overnight news and in the last hour – from anticipation of news during the next night.

The main decomposition of durations is

{x_j} = {s_j}x_j^*,
so \log {x_j} = \log {s_j} + \log x_j^*,
\log {s_j} = \sum\limits_{i = 1}^{13} {\beta _i}{D_{i,j}}
\log {s_j} = \gamma _0 + \gamma _1HR{S_j} + \gamma _2HRS_j^2.

In the first equation {s_j} is the diurnal component and x_j^* is called a de-seasonalized duration (it has not been defined here). The second follows from the first.

I am not sure that you need the third equation. The fourth equation is used below. In the third equation \log {s_j} is regressed on dummies of half-hour periods (there are 13 of them in the trading session; the constant is not included to avoid the dummy trap). In the fourth equation it is regressed on the first and second power of the time variable HRS_j, which measures time in hours starting from the previous midnight. This is called a polynomial regression. Both regressions can capture diurnality.
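Here is a minimal Python sketch of the polynomial (fourth-equation) regression; the durations are simulated with a made-up U-shaped intensity, since no data come with the Guide, and in practice one regresses log x_j and takes the fitted values as log s_j:

import numpy as np

rng = np.random.default_rng(0)

# Simulate one trading session, 9:30-16:00 New York time, with shorter durations
# near the open and close (a hypothetical U-shaped diurnal pattern)
times = [9.5]
while times[-1] < 16.0:
    h = times[-1]
    mean_dur = 5.0 + 20.0 * (1 - ((h - 12.75) / 3.25) ** 2)   # mean duration in seconds
    times.append(h + rng.exponential(mean_dur) / 3600)
times = np.array(times[1:-1])             # trade times in hours since midnight
x = np.diff(times) * 3600                 # durations x_j in seconds
hrs = times[1:]                           # HRS_j for each duration

# Polynomial diurnal regression: log x_j on a constant, HRS_j and HRS_j^2
X = np.column_stack([np.ones_like(hrs), hrs, hrs**2])
gammas, *_ = np.linalg.lstsq(X, np.log(x), rcond=None)
s = np.exp(X @ gammas)                    # diurnal component s_j
x_star = x / s                            # de-seasonalized durations x_j^*
print(gammas)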

(b) Describe the Engle and Russell (1998) autoregressive conditional duration (ACD) model. [25 marks]

Building on the decomposition considered in part (a), Engle and Russell suggest the ACD model

{x_j} = {s_j}x_j^*,
x_j^* = {\psi _j}{\varepsilon _j},
where \log {s_j} = {\gamma _0} + {\gamma _1}HR{S_j} + {\gamma _2}HRS_j^2,
x_j^* = \frac{{x_j}}{{s_j}},\ {\varepsilon _j}|{F_{j-1}}\sim i.i.d.(1),
\psi _j = \omega + \beta \psi _{j - 1} + \alpha x_{j - 1}^*,\quad \omega > 0,\ \alpha ,\beta \ge 0.   (1)

The first decomposition is the same as above. The second equation decomposes the de-seasonalized duration into the product of {\psi _j}, which is determined by past information, and a random component {\varepsilon _j}. To understand the idea, we can compare (1) with the GARCH(1,1) model:

y_{t + 1} = {\mu _{t + 1}} + {\varepsilon _{t + 1}},
{\varepsilon _{t + 1}} = {\sigma _{t + 1}}{v_{t + 1}},
{v_{t + 1}}|{F_t}\sim F(0,1),
\sigma _{t + 1}^2 = \omega + \beta \sigma _t^2 + \alpha \varepsilon _t^2.   (2)

Equations (1) and (2) are similar. The assumptions about the random components are different: in (1) we have E{\varepsilon _j} = 1, in (2) E{v_j} = 0. This is because in (2) the epsilons are deviations from the mean and may change sign; in (1) the epsilons come from durations and should be positive. To obtain the last equation in (1) from the GARCH(1,1) in (2) one has to make replacements

\sigma _t^2\sim {\psi _j},\quad \varepsilon _t^2 = {({y_t} - {\mu _t})^2}\sim x_{j - 1}^*.   (3)

This correspondence is important for understanding the comparison of the ML estimation of the two models below.

(c) Compare the conditions for covariance stationarity, identification and positivity of the duration series for the ACD(1,1) to those for the GARCH(1,1). [25 marks]

Those conditions for GARCH are

Condition 1: \omega > 0,\ \alpha,\beta \ge 0, for positive variance,

Condition 2: \beta = 0 if \alpha = 0, for identification,

Condition 3: \alpha + \beta < 1, for covariance stationarity.

For ACD they are the same, because both are essentially ARMA models.

(d) Illustrate the relationship between the log-likelihood of the ACD(1,1) model and the estimation of a GARCH(1,1) model using the normal likelihood function. [25 marks]

Because of the assumption E{\varepsilon _j} = 1 in (1) we cannot use the normal distribution for (1). Instead the exponential random variable is used. It takes only positive values; its density is zero on the left half-axis and is an exponential function on the right half-axis:

Z\sim Exponential(\gamma ),

so f(z|\gamma ) = \frac{1}{\gamma }\exp\left(-\frac{z}{\gamma } \right) and EZ = \gamma .

Here \gamma is a positive number and f is the density. We take \gamma= 1 as required by the ACD model. This implies Ex_j^* = {\psi _j}E{\varepsilon _j} = {\psi _j} so x_j^* is distributed as Exponential({\psi _j}). Its density is

f(x_j^*|\psi _j)=f(x_j^*|\psi _j(\omega,\beta,\alpha))=\frac{1}{\psi _j}\exp\left(-\frac{x_j^*}{\psi _j}\right).

The rest is logical: plug {\psi _j} from the ACD model (1) into this density, take the log of each density and sum the logs to obtain the log-likelihood. A. Patton gives the log-likelihood for GARCH, whose derivation I could not find in the book. But from (3) we know that there should be similarity after the replacement \sigma _t^2\sim{\psi _j},\ x_{j - 1}^*\sim{({r_t} - {\mu _t})^2}. To this Patton adds that the GARCH likelihood is simply a linear transformation of the ACD likelihood.
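To make the last step concrete, here is a sketch (my own, using the notation above) of the exponential ACD(1,1) log-likelihood that would be maximized over (omega, alpha, beta); the initialization of psi and the placeholder durations are arbitrary choices for the example:

import numpy as np

def acd_loglik(params, x_star):
    # Exponential ACD(1,1) log-likelihood: sum of log f(x_j* | psi_j), where
    # f(x | psi) = (1/psi) * exp(-x/psi) and psi_j = omega + beta*psi_{j-1} + alpha*x*_{j-1}
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0:
        return -np.inf
    psi = np.empty_like(x_star)
    psi[0] = x_star.mean()                # initialize at the sample mean (a common choice)
    for j in range(1, len(x_star)):
        psi[j] = omega + beta * psi[j - 1] + alpha * x_star[j - 1]
    return np.sum(-np.log(psi) - x_star / psi)

# Evaluate at some hypothetical parameter values on placeholder de-seasonalized durations
rng = np.random.default_rng(1)
x_star = rng.exponential(1.0, 1000)
print(acd_loglik((0.1, 0.05, 0.85), x_star))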

20 Apr 20

FN3142 Chapter 14 Risk management and Value-at-Risk: Backtesting

Here I added three videos and corresponding pdf files on three topics:

Chapter 14. Part 1. Evaluating VAR forecasts

Chapter 14. Part 2. Conditional coverage tests

Chapter 14. Part 3. Diebold-Mariano test for VaR forecasts

Be sure to watch all of them because sometimes I make errors and correct them in later videos.

All files are here.

Unconditional coverage test

7 Apr 20

FN3142 Chapter 13. Risk management and Value-at-Risk: Models

Chapter 13 is divided into 5 parts. For each part, there is a video with the supporting pdf file. Both have been created in Notability using an iPad. All files are here.

Part 1. Distribution function with two examples and generalized inverse function.

Part 2. Value-at-Risk definition

Part 3. Empirical distribution function and its estimation

Part 4. Models based on flexible distributions

Part 5. Semiparametric models, nonparametric estimation of densities and historical simulation.

In addition, the subchapter named Expected shortfall contains extra material. It is not in the Guide, but it was required by one of the past UoL exams.

29 Mar 20

FN3142 Chapter 12. Forecast comparison and combining

Like most universities, we switched to online teaching because of COVID-19. This is what I prepared for my Quantitative Finance course and want to make available to everybody. The lecture is divided into three parts. Download the pdf and watch the corresponding video.

Diebold-Mariano test

Chapter 12. Part 1.mp4

Chapter 12. Forecast comparison and combination. Part 1.pdf

Chapter 12. Part 2.mp4

Chapter 12. Forecast comparison and combination. Part 2.pdf

Chapter 12. Part 3.mp4

Chapter 12. Part 3. Forecast encompassing and combining

See also the separate post on the Newey-West estimator.

 

28 Oct 19

Leverage effect: the right definition and explanation

The guide by Andrew Patton for Quantitative Finance FN3142 states that "stock returns are negatively correlated with changes in volatility: that is, volatility tends to rise following bad news (a negative return) and fall following good news (a positive return)", with reference to Black (1976). This is not quite so, as can be seen from the following chart.

Figure. S&P500 versus VIX: leverage effect

The candlesticks (in green and light red) show the S&P 500 index, a capitalization-weighted index of 500 of the largest publicly traded US companies. The continuous purple line shows the VIX, one of the most widely used measures of volatility. Between the yellow vertical lines at A and B the returns were predominantly positive, yet at the beginning of that period volatility was high. The chart clearly shows that the negative correlation is between asset prices and volatility, not between returns and volatility. Thus the proper definition of the leverage effect is "negative correlation between asset prices and volatility". There are different explanations of the effect; here is the one I prefer.

At all times, market participants try to maximize profits and minimize losses. This motivation results in different behaviors during the market cycle.

Near the bottom (below line at E)

During the slump to the left of point A. Out of fear that a great depression is coming, everybody is dumping stocks, trying to stay in cash and gold as much as possible. That's why the price drops quickly and volatility is high.

During the recovery to the right of point A. Some investors consider many stocks cheap and try to load up, buying in large quantities. Others are not convinced that the recovery has started. Opposing opinions and swift purchases, made possible by large amounts of cash on hand, increase volatility.

Near the top (above line at F)

To the left of point B. Stocks are bought in small quantities, for the following reasons: 1) to avoid a sharp increase in price, 2) out of fear that the rally will soon end, and 3) not much cash is left in portfolios, so it's mainly portfolio rebalancing (buy stocks with potential to grow, sell those that have stalled).

To the right of point B. Investors sell stocks in small quantities, to take profits and in anticipation of a new downturn.

The main difference between what happens at the bottom and at the top is in the relative amount of cash and stocks in portfolios. This is why near the top there is little volatility. Of course, there are other reasons. Look at what happened between points C and D. The S&P level was relatively high, but there was a lot of uncertainty about the US-China trade war. Trump with his tweets contributed a lot to that volatility. This article claims that somebody made billions of dollars trading on his tweets. Insider trading is prohibited but not for the mighty of this world.

Final remark. If the recovery from the trough at point A took just three months, why worry and sell stocks during the fall? There are three answers. Firstly, after the big recession of 2008, it took the markets five years to fully recover to the pre-2008 level, and that's what scares everybody. Secondly, the best stocks after the recovery will not be the same as before the fall. Thirdly, one can make money on the way down too, provided one has spare cash.

24 Aug 19

Sylvester's criterion

Exercise 1. Suppose A=CBC^{T}, where B=diag[d_{1},...,d_{n}] and \det C\neq 0. Then A is positive if and only if all d_{i} are positive.

Proof. For any x\neq 0 we have x^{T}Ax=x^{T}CBC^{T}x=(C^{T}x)^{T}B(C^{T}x). Let y=C^{T}x\neq 0. Then x^{T}Ax=y^{T}By=\sum_{j}d_{j}y_{j}^{2}. This is positive for all y\neq 0 if and only if \min_{j}d_{j}>0.

Exercise 2 (modified Gaussian elimination). Suppose that A is a real symmetric matrix with nonzero leading principal minors d_{1},...,d_{n}. Then B=CAC^{T}, where B=diag[d_{1},d_{2}/d_{1},...,d_{n}/d_{n-1}] and \det C=1.

Proof. Recall the transformation used in Gaussian elimination to obtain a triangular form. There we eliminated the element a_{21} below a_{11} by premultiplying A by the matrix C=I-\frac{a_{21}}{a_{11}} e_{2}^{T}e_{1}. After this we can post-multiply by the matrix C^{T}=I-\frac{a_{21}}{a_{11}}e_{1}^{T}e_{2}. Because of the assumed symmetry of A, we have C^{T}=I-\frac{a_{12}}{a_{11}}e_{1}^{T}e_{2}, so this post-multiplication eliminates the element a_{12} to the right of a_{11}. Since a_{21} in the first column has already been made zero, the diagonal element a_{22} does not change.

We can modify this procedure by eliminating a_{1j} immediately after eliminating a_{j1}. The right sequencing of the transformations is necessary to be able to apply Exercise 1: the matrix used for post-multiplication should be the transpose of the matrix used for premultiplication. If C=C_{m}...C_{1}, then C^{T}=C_{1}^{T}...C_{m}^{T}, which means that premultiplication by C_{i} should be followed by post-multiplication by C_{i}^{T}. In this way we can make all off-diagonal elements zero. The resulting matrix B is related to A through B=CAC^{T}; since C is lower triangular with unit diagonal, the leading principal minors of B and A coincide, so B=diag[d_{1},d_{2}/d_{1},...,d_{n}/d_{n-1}].

Theorem (Sylvester) Suppose that A is a real symmetric matrix. Then A is positive if and only if all its leading principal minors are positive.

Proof. First assume that all leading principal minors are positive. By Exercise 2, we have A=C^{-1}B(C^{-1})^{T}, where \det C=1 and B=diag[d_{1},d_{2}/d_{1},...,d_{n}/d_{n-1}]. All diagonal elements of B are positive, so applying Exercise 1 (with C^{-1} in place of C) we see that A is positive.

Now suppose that A is positive, that is, x^{T}Ax=\sum_{i,j=1}^{n}a_{ij}x_{i}x_{j}>0 for any x\neq 0. Consider the cut-off matrices A_{k}=\left( a_{ij}\right) _{i,j=1}^{k}. The corresponding cut-off quadratic forms x^{T}A_{k}x=\sum_{i,j=1}^{k}a_{ij}x_{i}x_{j}, k=1,...,n, are positive for nonzero x\in R^{k}. It follows that the A_{k} are non-singular: if a nonzero x belonged to N(A_{k}), we would have x^{T}A_{k}x=0, a contradiction. Hence their determinants d_{k}=\det A_{k}, k=1,...,n, are nonzero. This allows us to apply the modified Gaussian elimination (Exercise 2) and then Exercise 1 with B=diag[d_{1},d_{2}/d_{1},...,d_{n}/d_{n-1}]. By Exercise 1 all diagonal elements of B are positive, so consecutively d_{1}>0, then d_{2}>0 (since d_{2}/d_{1}>0), and so on up to d_{n}>0.

Exercise 3. A is negative if and only if the leading principal minors change signs, starting with minus: d_{1}<0, d_{2}>0, d_{3}<0,...

Proof. By definition, A is negative if -A is positive. Because of homogeneity of determinants, when we pass from A to -A, the minor of order k gets multiplied by (-1)^{k}. Thus, by Sylvester's criterion A is negative if and only if (-1)^{k}d_{k}>0, as required.
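A quick numerical check of the criterion (a sketch using numpy, with a randomly generated symmetric matrix): compare the signs of the leading principal minors with positivity determined from the eigenvalues, and observe the alternating signs of Exercise 3 for -A.

import numpy as np

def leading_minors(A):
    # Determinants of the k-by-k upper-left blocks, k = 1, ..., n
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

def is_positive(A):
    # Positive (definite) means all eigenvalues of the symmetric matrix are positive
    return bool(np.all(np.linalg.eigvalsh(A) > 0))

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M.T @ M + np.eye(4)                  # symmetric and positive by construction

print(leading_minors(A))                 # all positive
print(is_positive(A))                    # True
print(leading_minors(-A))                # signs alternate, starting with minus (Exercise 3)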