5 May 22

Vector autoregression (VAR)

Suppose we are observing two stocks and their respective returns are x_{t},y_{t}. To take into account their interdependence, we consider a vector autoregression

(1) \left\{\begin{array}{c}x_{t}=a_{1}x_{t-1}+b_{1}y_{t-1}+u_{t} \\ y_{t}=a_{2}x_{t-1}+b_{2}y_{t-1}+v_{t}\end{array}\right.

Try to repeat for this system the analysis from Section 3.5 (Application to an AR(1) process) of the Guide by A. Patton and you will see that the difficulties are insurmountable. Matrix algebra, however, allows one to overcome them with the proper adjustments.

Problem

A) Write this system in a vector format

(2) Y_{t}=\Phi Y_{t-1}+U_{t}.

What should be Y_{t},\Phi ,U_{t} in this representation?

B) Assume that the error U_{t} in (1) satisfies

(3) E_{t-1}U_{t}=0,\ EU_{t}U_{t}^{T}=\Sigma ,~EU_{t}U_{s}^{T}=0 for t\neq s with some symmetric matrix \Sigma =\left(\begin{array}{cc}\sigma _{11} & \sigma _{12} \\ \sigma _{12} & \sigma _{22}\end{array}\right) .

What does this assumption mean in terms of the components of U_{t} from (2)? What is \Sigma if the errors in (1) satisfy

(4) E_{t-1}u_{t}=E_{t-1}v_{t}=0,~Eu_{t}^{2}=Ev_{t}^{2}=\sigma ^{2}, Eu_{s}u_{t}=Ev_{s}v_{t}=0 for t\neq s, Eu_{s}v_{t}=0 for all s,t?

C) Suppose (1) is stationary. The stationarity condition is expressed in terms of the eigenvalues of \Phi, but we don't need it here. However, we do need its implication:

(5) \det \left( I-\Phi \right) \neq 0.

Find \mu =EY_{t}.

D) Find Cov(Y_{t-1},U_{t}).

E) Find \gamma _{0}\equiv V\left( Y_{t}\right) .

F) Find \gamma _{1}=Cov(Y_{t},Y_{t-1}).

G) Find \gamma _{2}.

Solution

A) It takes some practice to see that with the notation

Y_{t}=\left(\begin{array}{c}x_{t} \\y_{t}\end{array}\right) , \Phi =\left(\begin{array}{cc}a_{1} & b_{1} \\a_{2} & b_{2}\end{array}\right) , U_{t}=\left(\begin{array}{c}u_{t} \\v_{t}\end{array}\right)

the system (1) becomes (2).
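For readers who prefer to see the algebra at work, here is a minimal NumPy sketch of simulating (2). The coefficient values in \Phi and the unit error variance are my illustrative choices (picked so that the process is stationary), not part of the problem.

```python
import numpy as np

# Minimal sketch: simulate Y_t = Phi Y_{t-1} + U_t.
# The coefficients below are illustrative assumptions chosen so the VAR is stationary.
rng = np.random.default_rng(0)
Phi = np.array([[0.5, 0.1],    # [[a1, b1],
                [0.2, 0.3]])   #  [a2, b2]]
sigma = 1.0                    # error standard deviation, so Sigma = sigma^2 * I as in (4)
T = 100_000

Y = np.zeros((T, 2))           # rows are Y_t = (x_t, y_t)
for t in range(1, T):
    U_t = rng.normal(scale=sigma, size=2)   # independent errors satisfying (4)
    Y[t] = Phi @ Y[t - 1] + U_t
```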

B) The equations in (3) look like this:

E_{t-1}U_{t}=\left(\begin{array}{c}E_{t-1}u_{t} \\ E_{t-1}v_{t}\end{array}\right) =0, EU_{t}U_{t}^{T}=\left(\begin{array}{cc}Eu_{t}^{2} & Eu_{t}v_{t} \\ Eu_{t}v_{t} & Ev_{t}^{2}\end{array}\right) =\left(\begin{array}{cc}\sigma _{11} & \sigma _{12} \\ \sigma _{12} & \sigma _{22}\end{array}\right) , EU_{t}U_{s}^{T}=\left(\begin{array}{cc}Eu_{t}u_{s} & Eu_{t}v_{s} \\Ev_{t}u_{s} & Ev_{t}v_{s}\end{array}\right) =0.

Equalities of matrices are understood element-wise, so we get a series of scalar equations E_{t-1}u_{t}=0,...,Ev_{t}v_{s}=0 for t\neq s.

Conversely, the scalar equations from (4) give

E_{t-1}U_{t}=0,\ EU_{t}U_{t}^{T}=\left(\begin{array}{cc}\sigma ^{2} & 0 \\0 & \sigma ^{2}\end{array}\right) ,~EU_{t}U_{s}^{T}=0 for t\neq s.

C) (2) implies EY_{t}=\Phi EY_{t-1}+EU_{t}=\Phi EY_{t-1} or by stationarity \mu =\Phi \mu or \left( I-\Phi \right) \mu =0. Hence (5) implies \mu =0.
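As a quick numerical check (with the illustrative \Phi and \sigma from the sketch above), the sample mean of a long simulated path should indeed be close to \mu =0:

```python
import numpy as np

# Sketch: the sample mean of a long simulated path should be close to mu = 0
# (Phi and sigma are the same illustrative values as above).
rng = np.random.default_rng(1)
Phi = np.array([[0.5, 0.1], [0.2, 0.3]])
T = 100_000
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = Phi @ Y[t - 1] + rng.normal(size=2)

print(Y.mean(axis=0))   # both components should be close to 0
```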

D) From (2) we see that Y_{t-1} is determined by information available at time t-1 (the information set on which E_{t-1} conditions). Therefore by the LIE

Cov(Y_{t-1},U_{t})=E\left( Y_{t-1}-EY_{t-1}\right) U_{t}^{T}=E\left[ E_{t-1}\left( Y_{t-1}-EY_{t-1}\right) U_{t}^{T}\right] =E\left[ \left( Y_{t-1}-EY_{t-1}\right) E_{t-1}U_{t}^{T}\right] =0, Cov\left( U_{t},Y_{t-1}\right) =\left[ Cov(Y_{t-1},U_{t})\right] ^{T}=0.
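The same simulation can be used to check this numerically: the sample cross-covariance between Y_{t-1} and U_{t} should be close to the zero matrix (again with the illustrative \Phi and \sigma ).

```python
import numpy as np

# Sketch: the sample covariance between Y_{t-1} and U_t should be close to zero.
rng = np.random.default_rng(2)
Phi = np.array([[0.5, 0.1], [0.2, 0.3]])
T = 100_000
U = rng.normal(size=(T, 2))
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = Phi @ Y[t - 1] + U[t]

Ylag, Ucur = Y[:-1], U[1:]                       # pairs (Y_{t-1}, U_t)
cross = (Ylag - Ylag.mean(0)).T @ (Ucur - Ucur.mean(0)) / (T - 1)
print(cross)                                     # entries should be small (sampling error only)
```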

E) Using Property 2 from the previous post

\gamma _{0}\equiv V\left( \Phi Y_{t-1}+U_{t}\right) =\Phi V\left(Y_{t-1}\right) \Phi ^{T}+Cov\left( U_{t},Y_{t-1}\right) \Phi ^{T}+\Phi Cov(Y_{t-1},U_{t})+V\left( U_{t}\right) =\Phi \gamma _{0}\Phi ^{T}+\Sigma

(by stationarity, part D and (3)). Thus, \gamma _{0}-\Phi \gamma _{0}\Phi ^{T}=\Sigma and \gamma _{0}=\sum_{s=0}^{\infty }\Phi ^{s}\Sigma \left( \Phi ^{T}\right) ^{s} (see the previous post).
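Here is a small sketch of how one might compute \gamma _{0} in practice: truncate the series (the terms decay geometrically for a stationary VAR) and check that the result satisfies \gamma _{0}-\Phi \gamma _{0}\Phi ^{T}=\Sigma . The \Phi below is my illustrative choice and \Sigma =I corresponds to (4) with \sigma =1.

```python
import numpy as np

# Sketch: approximate gamma_0 = sum_{s>=0} Phi^s Sigma (Phi^T)^s by a truncated series
# and verify gamma_0 - Phi gamma_0 Phi^T = Sigma (illustrative Phi, Sigma = I).
Phi = np.array([[0.5, 0.1], [0.2, 0.3]])
Sigma = np.eye(2)

gamma0 = np.zeros((2, 2))
term = Sigma.copy()
for s in range(200):                 # terms decay geometrically, so 200 is plenty here
    gamma0 += term
    term = Phi @ term @ Phi.T

print(gamma0)
print(np.allclose(gamma0 - Phi @ gamma0 @ Phi.T, Sigma))   # True
```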

F) Using the previous result we have

\gamma _{1}=Cov(Y_{t},Y_{t-1})=Cov(\Phi Y_{t-1}+U_{t},Y_{t-1})=\Phi Cov(Y_{t-1},Y_{t-1})+Cov(U_{t},Y_{t-1}) =\Phi Cov(Y_{t-1},Y_{t-1})=\Phi \gamma _{0}=\Phi \sum_{s=0}^{\infty }\Phi^{s}\Sigma\left( \Phi ^{T}\right) ^{s}.

G) Similarly,

\gamma _{2}=Cov(Y_{t},Y_{t-2})=Cov(\Phi Y_{t-1}+U_{t},Y_{t-2})=\Phi Cov(Y_{t-1},Y_{t-2})+Cov(U_{t},Y_{t-2}) =\Phi Cov(Y_{t-1},Y_{t-2})=\Phi \gamma _{1}=\Phi ^{2}\sum_{s=0}^{\infty}\Phi ^{s}\Sigma\left( \Phi ^{T}\right) ^{s}.
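A combined numerical check of F) and G): simulate a long path and compare the sample autocovariances at lags 1 and 2 with \Phi \gamma _{0} and \Phi ^{2}\gamma _{0}. Again, \Phi and \Sigma =I are illustrative.

```python
import numpy as np

# Sketch: check gamma_1 = Phi gamma_0 and gamma_2 = Phi^2 gamma_0 against
# sample autocovariances (illustrative Phi, Sigma = I).
rng = np.random.default_rng(3)
Phi = np.array([[0.5, 0.1], [0.2, 0.3]])
T = 200_000
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = Phi @ Y[t - 1] + rng.normal(size=2)

Yc = Y - Y.mean(0)
gamma0_hat = Yc.T @ Yc / T

def sample_autocov(k):
    # estimate of Cov(Y_t, Y_{t-k}) = E Y_t Y_{t-k}^T (since mu = 0)
    return Yc[k:].T @ Yc[:-k] / (T - k)

print(np.abs(sample_autocov(1) - Phi @ gamma0_hat).max())        # small (sampling error)
print(np.abs(sample_autocov(2) - Phi @ Phi @ gamma0_hat).max())  # small (sampling error)
```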

Autocorrelations require a little more effort and I leave them out.

 

5 May 22

Vector autoregressions: preliminaries

Suppose we are observing two stocks and their respective returns are x_{t},y_{t}. A vector autoregression for the pair x_{t},y_{t} is one way to take into account their interdependence. This theory is undeservedly omitted from the Guide by A. Patton.

Required minimum in matrix algebra

Matrix notation and summation are very simple.

Matrix multiplication is a little more complex. Make sure to read Global idea 2 and the compatibility rule.

The general approach to studying matrices is to compare them to numbers. Here you see the first big No: matrices do not commute, that is, in general AB\neq BA.

The idea behind matrix inversion is pretty simple: we want an analog of the property a\times \frac{1}{a}=1 that holds for numbers.

Some facts about determinants have very complicated proofs and it is best to stay away from them. But a couple of ideas should be clear from the very beginning. Determinants are defined only for square matrices. Their relationship to matrix invertibility explains the role of determinants: if A is square, it is invertible if and only if \det A\neq 0 (this is the analog of the condition a\neq 0 for numbers).

Here is an illustration of how determinants are used. Suppose we need to solve the equation AX=Y for X, where A and Y are known. Assuming that \det A\neq 0, we can premultiply the equation by A^{-1} to obtain A^{-1}AX=A^{-1}Y. (Because of the lack of commutativity, we need to preserve the order of the factors.) Using the intuitive properties A^{-1}A=I and IX=X, we obtain the solution: X=A^{-1}Y. In particular, we see that if \det A\neq 0, then the equation AX=0 has a unique solution X=0.
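In code the same logic looks as follows; A and Y below are arbitrary illustrative values, and in numerical practice one calls a solver instead of forming A^{-1} explicitly.

```python
import numpy as np

# Sketch: solve AX = Y after checking invertibility (A and Y are illustrative).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
Y = np.array([[1.0],
              [4.0]])

assert np.linalg.det(A) != 0        # A is invertible
X = np.linalg.solve(A, Y)           # computes A^{-1} Y without forming the inverse
print(np.allclose(A @ X, Y))        # True
```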

Let A be a square matrix and let X,Y be square matrices of the same size as A. A,Y are assumed to be known and X is unknown. We want to check that X=\sum_{s=0}^{\infty }A^{s}Y\left( A^{T}\right) ^{s} solves the equation X-AXA^{T}=Y, assuming the series converges. (Note that for this equation the trick used to solve AX=Y does not work.) Just plug in this X:

\sum_{s=0}^{\infty }A^{s}Y\left( A^{T}\right) ^{s}-A\sum_{s=0}^{\infty }A^{s}Y\left( A^{T}\right) ^{s}A^{T} =Y+\sum_{s=1}^{\infty }A^{s}Y\left(A^{T}\right) ^{s}-\sum_{s=1}^{\infty }A^{s}Y\left( A^{T}\right) ^{s}=Y

(write out the first couple of terms in the sums if the summation signs frighten you).
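Here is a quick numerical confirmation: truncate the series and check that it satisfies the equation. The matrix A below is an illustrative choice with eigenvalues inside the unit circle, so the series converges. (If SciPy is available, scipy.linalg.solve_discrete_lyapunov(A, Y) should solve the same equation directly.)

```python
import numpy as np

# Sketch: verify that X = sum_{s=0}^{N} A^s Y (A^T)^s approximately solves X - A X A^T = Y.
# A is illustrative, with eigenvalues 0.5 and 0.2, so the series converges.
A = np.array([[0.4, 0.1],
              [0.2, 0.3]])
Y = np.array([[1.0, 0.5],
              [0.5, 2.0]])

X = np.zeros_like(Y)
term = Y.copy()
for s in range(200):
    X += term
    term = A @ term @ A.T

print(np.allclose(X - A @ X @ A.T, Y))   # True
```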

Transposition is a geometrically simple operation. We need only the property \left( AB\right) ^{T}=B^{T}A^{T}.

Variance and covariance

Property 1. The variance of a random vector X and the covariance of two random vectors X,Y are defined by

V\left( X\right) =E\left( X-EX\right) \left( X-EX\right) ^{T}, Cov\left(X,Y\right) =E\left( X-EX\right) \left( Y-EY\right) ^{T},

respectively.

Note that when EX=0, the variance becomes

V\left( X\right) =EXX^{T}=\left(\begin{array}{ccc}EX_{1}^{2} & ... & EX_{1}X_{n} \\ ... & ... & ... \\ EX_{1}X_{n} & ... & EX_{n}^{2}\end{array}\right) .
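As a sanity check on this formula, here is a small sketch: draw a zero-mean random vector with a known (illustrative) covariance matrix and average the outer products XX^{T}.

```python
import numpy as np

# Sketch: for a zero-mean random vector, the average of the outer products X X^T
# estimates V(X); true_V is an illustrative covariance matrix.
rng = np.random.default_rng(4)
true_V = np.array([[2.0, 0.5],
                   [0.5, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=true_V, size=200_000)

V_hat = X.T @ X / len(X)       # average of X_t X_t^T over the sample
print(np.round(V_hat, 2))      # should be close to true_V
```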

Property 2. Let X,Y be random vectors and suppose A,B are constant matrices. We want an analog of V\left( aX+bY\right) =a^{2}V\left( X\right) +2abCov\left( X,Y\right) +b^{2}V\left( Y\right) . In the next calculation we have to remember that the multiplication order cannot be changed.

V\left( AX+BY\right) =E\left[ AX+BY-E\left( AX+BY\right) \right] \left[AX+BY-E\left( AX+BY\right) \right] ^{T} =E\left[ A\left( X-EX\right) +B\left( Y-EY\right) \right] \left[ A\left(X-EX\right) +B\left( Y-EY\right) \right] ^{T} =E\left[ A\left( X-EX\right) \right] \left[ A\left( X-EX\right) \right]^{T}+E\left[ B\left( Y-EY\right) \right] \left[ A\left( X-EX\right) \right]^{T} +E\left[ A\left( X-EX\right) \right] \left[ B\left( Y-EY\right) \right]^{T}+E\left[ B\left( Y-EY\right) \right] \left[ B\left( Y-EY\right) \right]^{T}

(applying \left( AB\right) ^{T}=B^{T}A^{T})

=AE\left( X-EX\right) \left( X-EX\right) ^{T}A^{T}+BE\left( Y-EY\right)\left( X-EX\right) ^{T}A^{T} +AE\left( X-EX\right) \left( Y-EY\right) ^{T}B^{T}+BE\left( Y-EY\right)\left( Y-EY\right) ^{T}B^{T} =AV\left( X\right) A^{T}+BCov\left( Y,X\right)A^{T}+ACov(X,Y)B^{T}+BV\left( Y\right) B^{T}.
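This result can be checked exactly, without any simulation: stack Z=\left( X,Y\right) , note that AX+BY=\left( A\ B\right) Z, so V\left( AX+BY\right) =\left( A\ B\right) V\left( Z\right) \left( A\ B\right) ^{T}, and compare with the block formula above. The matrices below are arbitrary illustrative values; S plays the role of V\left( Z\right) .

```python
import numpy as np

# Sketch: exact check of
# V(AX+BY) = A V(X) A^T + B Cov(Y,X) A^T + A Cov(X,Y) B^T + B V(Y) B^T
# using an illustrative joint covariance S of the stacked vector Z = (X, Y).
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[0.5, 0.0], [1.0, 1.0]])
S = np.array([[2.0, 0.3, 0.1, 0.0],   # blocks: [[V(X),     Cov(X,Y)],
              [0.3, 1.0, 0.2, 0.1],   #          [Cov(Y,X), V(Y)   ]]
              [0.1, 0.2, 1.5, 0.4],
              [0.0, 0.1, 0.4, 1.2]])
VX, CXY = S[:2, :2], S[:2, 2:]
CYX, VY = S[2:, :2], S[2:, 2:]

AB = np.hstack([A, B])
lhs = AB @ S @ AB.T                                              # V((A B) Z)
rhs = A @ VX @ A.T + B @ CYX @ A.T + A @ CXY @ B.T + B @ VY @ B.T
print(np.allclose(lhs, rhs))                                     # True
```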