23 Oct 17

Canonical form for time series

We start with a doubly-infinite time series \{Y_t : t=0,\pm 1,\pm 2,\dots\}. At each point in time t, in addition to Y_t, we are given an information set I_t. It is natural to assume that with time we know more and more: I_t\subset I_{t+1} for all t. We want to apply the idea used before in two simpler situations:

1) Mean-plus-deviation-from-mean representation: Y=\mu+\varepsilon, where \mu=EY, \varepsilon=Y-EY, E\varepsilon=0.

2) Conditional-mean-plus-remainder representation: having some information set I, we can write Y=E_IY+\varepsilon, where E_IY=E(Y|I), \varepsilon=Y-E_IY, E_I\varepsilon=0.

Notation: for any random variable X, the conditional mean E(X|I_t) will be denoted E_{t}X.

Following the above idea, we can write Y_{t+1}=E_tY_{t+1}+(Y_{t+1}-E_tY_{t+1}). Hence, denoting

\mu_{t+1}=E_tY_{t+1}, \varepsilon_{t+1}=Y_{t+1}-E_tY_{t+1}, we get the canonical form

(1) Y_{t+1}=\mu_{t+1}+\varepsilon_{t+1}.
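
To make the canonical form concrete, here is a minimal Python sketch (my illustration, not part of the derivation): for a stationary AR(1) process with i.i.d. innovations, the conditional mean is available in closed form, \mu_{t+1}=E_tY_{t+1}=\varphi Y_t, so both terms of (1) can be computed from a simulated path. The parameter values are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, T = 0.7, 1.0, 10_000          # assumed AR(1) parameters

# Simulate Y_{t+1} = phi * Y_t + innovation
y = np.zeros(T)
for t in range(T - 1):
    y[t + 1] = phi * y[t] + sigma * rng.standard_normal()

mu = phi * y[:-1]       # mu_{t+1} = E_t Y_{t+1} = phi * Y_t
eps = y[1:] - mu        # eps_{t+1} = Y_{t+1} - E_t Y_{t+1}

# Canonical form (1): Y_{t+1} = mu_{t+1} + eps_{t+1}, exact by construction
assert np.allclose(y[1:], mu + eps)
```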

Properties

a) Conditional mean of the remainder: E_t\varepsilon_{t+1}=E_t(Y_{t+1}-E_tY_{t+1})=0, because E_tE_t=E_t. By the LIE this implies E\varepsilon_{t+1}=E(E_t\varepsilon_{t+1})=0 for the unconditional mean.

b) Conditional variances of Y_{t+1} and \varepsilon_{t+1} are the same:

V_t(\varepsilon_{t+1})=E_t(\varepsilon_{t+1}-E_t\varepsilon_{t+1})^2=E_t\varepsilon_{t+1}^2=E_t(Y_{t+1}-E_tY_{t+1})^2=V_t(Y_{t+1}).

c) The two terms in (1) are conditionally uncorrelated:

Cov_t(\mu_{t+1},\varepsilon_{t+1})=E_t[(\mu_{t+1}-E_t\mu_{t+1})\varepsilon_{t+1}]=E_t[(\mu_{t+1}-\mu_{t+1})\varepsilon_{t+1}]=0

(\mu_{t+1} is known at time t).

They are also unconditionally uncorrelated: by the LIE

Cov(\mu_{t+1},\varepsilon_{t+1})=E[(\mu_{t+1}-E\mu_{t+1})\varepsilon_{t+1}]=EE_t[(\mu_{t+1}-E\mu_{t+1})\varepsilon_{t+1}]=E[(\mu_{t+1}-E\mu_{t+1})E_t\varepsilon_{t+1}]=0.

d) The full (long-term) variance of Y_{t+1}, in addition to V(\varepsilon_{t+1}), includes the variance of the conditional mean \mu_{t+1}:

V(Y_{t+1})=V(\mu_{t+1}+\varepsilon_{t+1})=V(\mu_{t+1})+2Cov(\mu_{t+1},\varepsilon_{t+1})+V(\varepsilon_{t+1})=V(\mu_{t+1})+V(\varepsilon_{t+1}).

e) The remainders are uncorrelated. When considering Cov(\varepsilon_t,\varepsilon_s) for s\neq t, by symmetry of covariance we can assume that t\leq s-1. Since E\varepsilon_t=E\varepsilon_s=0, the covariance reduces to E(\varepsilon_t\varepsilon_s). Then, remembering that \varepsilon_t is known at time s-1, by the LIE we have

Cov(\varepsilon_t,\varepsilon_s)=E[E_{s-1}(\varepsilon_t\varepsilon_s)]=E[\varepsilon_tE_{s-1}\varepsilon_s] =0.
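
Staying with the AR(1) illustration (a self-contained sketch with assumed parameters), sample moments approximate properties a), c), d) and e); the printed values should be close to their theoretical counterparts.

```python
import numpy as np

rng = np.random.default_rng(1)
phi, T = 0.7, 200_000                          # assumed AR(1) parameters
y = np.zeros(T)
for t in range(T - 1):
    y[t + 1] = phi * y[t] + rng.standard_normal()

mu = phi * y[:-1]                              # conditional mean
eps = y[1:] - mu                               # remainder

print(eps.mean())                              # a) E eps ~ 0
print(np.cov(mu, eps)[0, 1])                   # c) Cov(mu, eps) ~ 0
print(y[1:].var(), mu.var() + eps.var())       # d) V(Y) ~ V(mu) + V(eps)
print(np.corrcoef(eps[:-1], eps[1:])[0, 1])    # e) Corr(eps_t, eps_{t+1}) ~ 0
```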

Question: do the remainders represent white noise?

4 Oct 17

Conditional-mean-plus-remainder representation

We separate the main part from the remainder and derive the properties of the remainder. My post on properties of conditional expectation is an elementary introduction to conditioning. This is my first post in Quantitative Finance.

A brush-up on conditional expectations

  1. Notation. Let X be a random variable and let I be an information set. Instead of the usual notation E(X|I) for conditional expectation, in large expressions it's better to use the notation with I in the subscript: E_IX=E(X|I).

  2. Generalized homogeneity. If f(I) depends only on the information I, then E_I(f(I)X)=f(I)E_I(X) (a function of known information is known and behaves like a constant). A special case is E_I(f(I))=f(I)E_I(1)=f(I). With f(I)=E_I(X) we get E_I(E_I(X))=E_I(X). This shows that conditioning is a projector: if you project a point in 3D space onto a 2D plane and then project the image onto the same plane, the result is the same as after a single projection.

  3. Additivity. E_I(X+Y)=E_IX+E_IY.

  4. Law of iterated expectations (LIE). If we know about two information sets that I_1\subset I_2, then E_{I_1}E_{I_2}X=E_{I_1}X. I like the geometric explanation in terms of projectors: projecting a point onto a plane and then projecting the result onto a straight line lying in that plane is the same as projecting the point directly onto that line (see the numerical sketch below).
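
A numerical sketch of the LIE with nested discrete information sets (the coins A, B and the payoff rule below are invented for illustration): I_1 reveals only A, while I_2 reveals the pair (A, B). Empirical conditional expectations, computed as within-cell sample means, satisfy the LIE exactly on the sample.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
a = rng.integers(0, 2, n)                  # I_1 reveals only A
b = rng.integers(0, 2, n)                  # I_2 reveals (A, B)
x = 2.0 * a + b + rng.standard_normal(n)   # assumed payoff rule

def cond_mean(v, labels):
    """Empirical conditional expectation: replace each value of v
    by the mean of v within its information cell."""
    out = np.empty_like(v)
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = v[mask].mean()
    return out

e2 = cond_mean(x, 2 * a + b)       # E_{I_2} X, one cell per (A, B) pair
e1_of_e2 = cond_mean(e2, a)        # E_{I_1} E_{I_2} X
e1 = cond_mean(x, a)               # E_{I_1} X

assert np.allclose(e1_of_e2, e1)   # the LIE holds exactly on the sample
```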

Conditional-mean-plus-remainder representation

This is a direct generalization of the mean-plus-deviation-from-mean decomposition. There we wrote X=EX+(X-EX) and denoted \mu=EX,~\varepsilon=X-EX to obtain X=\mu+\varepsilon with the property E\varepsilon=0.

Here we write X=E_IX+(X-E_IX) and denote \varepsilon=X-E_IX the remainder. Then the representation is

(1) X=E_IX+\varepsilon.

Properties. 1) E_I\varepsilon=E_IX-E_IX=0 (remember, this is a random variable identically equal to zero, not the number zero).

2) Conditional covariance is obtained from the usual covariance by replacing all usual expectations by conditional ones. Thus, by definition,

Cov_I(X,Y)=E_I(X-E_IX)(Y-E_IY).

For the components in (1) we have

Cov_I(E_IX,\varepsilon)=E_I(E_IX-E_IE_IX)(\varepsilon-E_I\varepsilon)=E_I(E_IX-E_IX)\varepsilon=0.

3) Var_I(\varepsilon)=E_I(\varepsilon-E_I\varepsilon)^{2}=E_I(X-E_IX)^2=Var_I(X).
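
All three properties can be checked numerically when I is a discrete information set, with group-wise sample means playing the role of E_I (a sketch with an invented example; the three-state variable i below is an assumption).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
i = rng.integers(0, 3, n)                      # information set I: which of 3 states occurred
x = i.astype(float) + rng.standard_normal(n)

# E_I X: within each information cell, the sample mean of X
e_i_x = np.empty(n)
for s in range(3):
    e_i_x[i == s] = x[i == s].mean()

eps = x - e_i_x                                # remainder in representation (1)

for s in range(3):
    print(eps[i == s].mean())                  # 1) E_I eps = 0 in every cell
print(np.cov(e_i_x, eps)[0, 1])                # 2) hence Cov(E_I X, eps) ~ 0
for s in range(3):
    print(eps[i == s].var(), x[i == s].var())  # 3) Var_I(eps) = Var_I(X), cell by cell
```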