Autoregressive–moving-average (ARMA) models were suggested in 1951 by Peter Whittle in his PhD thesis. Do you think he played with data and then came up with his model? No, he was guided by theory. The same model may describe visually very different data sets, and visualization rarely leads to model formulation.
Recall that the main idea behind autoregressive processes is to regress the variable on its own past values. In the case of moving averages, we form linear combinations of elements of white noise. Combining the two ideas, we obtain the definition of the autoregressive–moving-average process:
\[
y_t=\mu+\beta_1 y_{t-1}+\dots+\beta_p y_{t-p}+\varepsilon_t+\theta_1\varepsilon_{t-1}+\dots+\theta_q\varepsilon_{t-q}. \qquad (1)
\]
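As a concrete illustration (not part of the original argument), here is a minimal sketch that simulates the ARMA(1,1) special case of (1) directly from the recursion; the parameter values $\mu=0.5$, $\beta_1=0.7$, $\theta_1=0.4$ are assumptions chosen only for the example.

```python
import numpy as np

# Simulate y_t = mu + beta_1*y_{t-1} + eps_t + theta_1*eps_{t-1},
# the ARMA(1,1) special case of (1); the parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
mu, beta1, theta1 = 0.5, 0.7, 0.4

n = 500
eps = rng.standard_normal(n)  # white noise shocks
y = np.zeros(n)

for t in range(1, n):
    y[t] = mu + beta1 * y[t - 1] + eps[t] + theta1 * eps[t - 1]

print(y[100:].mean())  # sample mean after a burn-in period
```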
It is denoted ARMA(p,q), where p is the number of included past values and q is the number of included past errors (a.k.a. shocks to the system). We should expect a few facts to hold for this process.
If the characteristic polynomial of its autoregressive part has all roots outside the unit circle, the process is stable and should be stationary.
A stable process can be represented as an infinite moving average. Such a representation is in fact used to analyze its properties (see the sketch below).
The coefficients of the moving average part (the thetas) and the constant have no effect on stationarity.
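These facts can be checked numerically. The following sketch is my own illustration with assumed coefficients $\beta_1=0.7$ and $\theta_1=0.4$; it uses statsmodels, which expects the AR and MA lag polynomials written as [1, -$\beta_1$] and [1, $\theta_1$].

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

# ARMA(1,1) with assumed coefficients beta_1 = 0.7 and theta_1 = 0.4.
# statsmodels takes the lag polynomials, so the AR coefficient enters with a minus sign.
ar = np.array([1, -0.7])
ma = np.array([1, 0.4])
process = ArmaProcess(ar, ma)

print(process.arroots)           # AR root(s): stationarity requires them to lie outside the unit circle
print(process.isstationary)      # True for these coefficients
print(process.arma2ma(lags=10))  # first 10 coefficients of the infinite moving-average representation
```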
The coefficient $\beta_1$ can be called the instantaneous effect of $y_{t-1}$ on $y_t$. This effect accumulates over time (the value at $t$ is influenced by the value at $t-1$, which, in turn, is influenced by the value at $t-2$, and so on). Therefore the long-run interpretation of the coefficients is complicated. Comparison of Figures 1 and 2 illustrates this point.
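To see the accumulation numerically, consider a pure AR(1) with an assumed coefficient $\beta_1=0.7$ (this example is mine, not taken from the figures). A one-time unit shock moves $y_t$ by 1, $y_{t+1}$ by $\beta_1$, $y_{t+2}$ by $\beta_1^2$, and so on, so the instantaneous coefficient understates the total effect.

```python
import numpy as np

# Effect of a one-time unit shock on current and future values of an AR(1)
# with an assumed coefficient beta_1 = 0.7.
beta1 = 0.7
impulse_responses = beta1 ** np.arange(100)  # 1, 0.7, 0.49, ...

print(impulse_responses[:5])
print(impulse_responses.sum())  # accumulates to about 1 / (1 - 0.7) = 3.33
```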
Exercise. For the model in (1), find the mean $Ey_t$ (just modify this argument).
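A sketch of how such an argument can go, assuming the process in (1) is stationary (so that $Ey_t$ is the same for all $t$) and that $\beta_1+\dots+\beta_p\neq 1$: take expectations of both sides of (1); the errors have zero mean, and stationarity gives $Ey_t=Ey_{t-1}=\dots=Ey_{t-p}$, so

\[
Ey_t=\mu+\beta_1Ey_{t-1}+\dots+\beta_pEy_{t-p}
\quad\Longrightarrow\quad
Ey_t=\frac{\mu}{1-\beta_1-\dots-\beta_p}.
\]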

Figure 1. Simulated AR process


Question. Why in (1) does the current error $\varepsilon_t$ enter with a unit coefficient, while the past errors receive coefficients $\theta_1,\dots,\theta_q$?
1) We never used the current error with a nontrivial coefficient.
2) It is logical to assume that past shocks may have an aftereffect (measured by the thetas) on the current value $y_t$.
3) Mathematically, the case when $\theta_0\varepsilon_t$ appears instead of $\varepsilon_t$ reduces to (1), as shown below.
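Spelling out point 3 (my own one-line sketch): if the model contained $\theta_0\varepsilon_t$ with some $\theta_0\neq 0$, define a new error $\tilde\varepsilon_t=\theta_0\varepsilon_t$. It is again white noise, only with a rescaled variance, and

\[
\mathrm{Var}(\tilde\varepsilon_t)=\theta_0^2\,\mathrm{Var}(\varepsilon_t),
\qquad
y_t=\mu+\beta_1y_{t-1}+\dots+\beta_py_{t-p}+\tilde\varepsilon_t+\tilde\theta_1\tilde\varepsilon_{t-1}+\dots+\tilde\theta_q\tilde\varepsilon_{t-q},
\]

with $\tilde\theta_j=\theta_j/\theta_0$, which is again of the form (1).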