## Saturday, January 21, 2012

### The Dynamic Stability of AR Models - Tricking EViews

In this post I'm going to focus on the extent to which two different ways of estimating an AR(p) model for a time-series, Yt, in EViews are equivalent, and on the information that is generated in each case.

In particular, I want to show you how you can "trick" EViews into showing you whether your estimated dynamic regression model is "dynamically stable". That is, whether the estimated coefficients for the lagged values of Y are such that the model is stationary. If the lag order is above 2, this isn't something that's always easy to determine just by looking at the estimated coefficient values.

By way of example, consider the case of an AR(2) model, with an intercept (or “drift”) term included to allow for a non-zero mean in the series. The results apply equally to the general AR(p) case, as will become apparent in the empirical example that's given below.
AR(2) Model:

Yt = α + φ1Yt-1 + φ2Yt-2 + εt    ,                (1)

where εt is a “white noise” series.
In the case of EViews, we could estimate the model using OLS with the specification:

Y  C   Y(-1)   Y(-2)

Consider an alternative model for Y:

Yt = α + ut    ;      ut = φ1ut-1 + φ2ut-2 + εt .         (2)

That is, we just explain Y in terms of a level and an error term that is itself AR(2). From (2), notice that:

φ1Yt-1 = φ1 α + φ1ut-1                           (3)

and

φ2Yt-2 = φ2 α + φ2ut-2 .              (4)

So, subtracting (3) and (4) from (2), we have:

Yt - φ1Yt-1 - φ2Yt-2 = α (1- φ1 - φ2) + (ut - φ1ut-1 - φ2ut-2),   (5)

or, from the definition of ut in (2):

Yt = α (1 - φ1 - φ2) + φ1Yt-1 + φ2Yt-2 + εt .             (6)

We see that (1) and (6) (and hence (1) and (2)) are the same, except for the intercept term. To estimate model (2) in EViews we'd use the specification:

Y    C    AR(1)    AR(2)

We see that if we do this then the estimates of φ1 and φ2 will be the same as if we had used the EViews specification for (1), above. Of course, we could also recover the estimate of α in this second case by dividing the estimated intercept coefficient by the estimate of (1 - φ1 - φ2).
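The intercept relationship can be checked with a small simulation sketch (mine, not from the post; the parameter values, seed, and sample size are invented for illustration): generate data from model (2), estimate specification (1) by OLS, and recover α by division.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented parameters for model (2): Y_t = alpha + u_t, with u_t an AR(2) error
alpha, phi1, phi2 = 10.0, 0.5, 0.3
n = 20000

# Simulate the AR(2) error and construct Y
eps = rng.standard_normal(n)
u = np.zeros(n)
for t in range(2, n):
    u[t] = phi1 * u[t - 1] + phi2 * u[t - 2] + eps[t]
y = alpha + u

# OLS for specification (1): regress Y_t on a constant, Y(-1) and Y(-2)
X = np.column_stack([np.ones(n - 2), y[1:-1], y[:-2]])
c_hat, phi1_hat, phi2_hat = np.linalg.lstsq(X, y[2:], rcond=None)[0]

# From (6), the OLS intercept estimates alpha*(1 - phi1 - phi2),
# so alpha is recovered by division:
alpha_hat = c_hat / (1.0 - phi1_hat - phi2_hat)
print(alpha_hat)  # should be close to 10
```

With a sample this large, the estimated slope coefficients sit very close to the true φ1 and φ2, and the rescaled intercept recovers α.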

What would be the point of using this second specification? We should certainly use it only with care, given that the estimated intercept coefficient has to be re-interpreted. However, if we set up this specification with the AR(.) terms in EViews, the package provides us with an extra item in the regression output that we can VIEW - an item that's not available otherwise.

And that's what we're looking for!

Specifically, if there are any AR terms in the model specification (even if they're not all successive, as in AR(1), AR(4)), then we can VIEW the ARMA STRUCTURE. This provides information about the dynamic stability of the estimated AR model, in terms of whether or not the inverted roots of the characteristic polynomial lie within the unit circle. In addition, we can see how the estimated model responds, dynamically, to various types of “shocks” to the data.
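Outside EViews, the same check is easy to sketch. The following snippet (my illustration, not part of the post; the φ values are made up) computes the inverse roots that the ARMA-structure view reports:

```python
import numpy as np

def inverse_ar_roots(phi):
    """Inverse roots of the AR polynomial 1 - phi1*z - ... - phip*z^p.

    The inverse roots solve lambda^p - phi1*lambda^(p-1) - ... - phip = 0;
    the model is dynamically stable (stationary) iff they all lie strictly
    inside the unit circle -- the condition the EViews view checks for us.
    """
    return np.roots(np.concatenate(([1.0], -np.asarray(phi, dtype=float))))

# A stationary AR(2): phi1 = 0.5, phi2 = 0.3
print(np.abs(inverse_ar_roots([0.5, 0.3])))        # both moduli below 1

# A borderline case: phi1 = 0.5, phi2 = 0.5 puts an inverse root on the circle
print(np.abs(inverse_ar_roots([0.5, 0.5])).max())  # 1.0, so not stable
```

The same function handles any lag order, which is exactly the situation where eyeballing the coefficients stops working.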

The mathematical details relating to the derivation of the stationarity (dynamic stability) conditions for the AR(2) can be found here.

One important thing to notice: we're really interested in a model that is autoregressive in the variable, Y. The second version of the model appears to lack this feature, and to have an autoregressive error term instead. The point, though, is that this second version can be re-written as the first version, apart from the intercept - and the intercept doesn't affect the dynamic stability.

An Example:
Consider the following simple autoregressive model for quarterly Japanese consumption data. These data are available on the Data page that goes with this blog, and the EViews file that I used for the results below is available on the Code page. The model is not intended to be particularly sophisticated - I just want to illustrate a simple point here.

The following transformations have been applied to the data:

Wt = log(CONSt)   ;     to linearize the trend

Yt = (Wt - Wt-4)  ;     to remove the seasonality.

The second transformation also has the effect of removing most of the trend, as we can see:
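In code, the two transformations amount to the sketch below (the series here is a made-up stand-in with an exponential trend and a quarterly seasonal, since the actual data live on the blog's Data page):

```python
import numpy as np
import pandas as pd

# Made-up quarterly "consumption" series: exponential trend times a seasonal
cons = pd.Series(
    100.0 * np.exp(0.01 * np.arange(40))
    * (1.0 + 0.05 * np.tile([0.0, 1.0, -0.5, -0.5], 10))
)

w = np.log(cons)                # W_t = log(CONS_t): linearizes the trend
y = (w - w.shift(4)).dropna()   # Y_t = W_t - W_{t-4}: removes the seasonal
                                # (and, as noted, most of the trend)

print(len(y))  # 4 observations are lost to the seasonal difference
```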

Now let’s model the {Yt} series using an AR(4) model, along the lines of equation (1):

Then, consider the alternative approach, along the lines of equation (2):

Note that, apart from the intercept coefficient, the various estimated coefficients and their standard errors are the same in each output. Also, note the additional information that is provided about the (complex) inverse roots of the characteristic equation at the bottom of this output. Let’s explore this information in more detail. This is what we see if we select VIEW / ARMA Structure:

The estimated AR(4) process is stationary – and that's something that may not be altogether obvious just by looking at the estimated coefficients in the first OLS output.
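To replicate this check outside EViews, one can fit the AR(4) by OLS and inspect the inverse roots directly. A sketch with simulated data (the Japanese series isn't reproduced in this post, so the coefficients below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a stationary AR(4) as a stand-in for the transformed Y series
phi_true = np.array([0.5, 0.2, 0.1, 0.05])
n = 2000
y = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(4, n):
    y[t] = phi_true @ y[t - 4:t][::-1] + eps[t]

# OLS on a constant and four lags -- the "Y C Y(-1) ... Y(-4)" specification
X = np.column_stack([np.ones(n - 4)] + [y[4 - j:n - j] for j in range(1, 5)])
beta = np.linalg.lstsq(X, y[4:], rcond=None)[0]
phi_hat = beta[1:]

# Inverse roots of the estimated characteristic polynomial
inv_roots = np.roots(np.concatenate(([1.0], -phi_hat)))
print(np.abs(inv_roots).max())  # below 1 => estimated model is stationary
```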

Returning to the ARMA Structure option, we might also choose the impulse response function. In this case we're going to “shock” the Yt series by an amount equal to one sample standard deviation, and then see how the predicted series “settles down” over a 24-quarter (6-year) period:

Because the estimated specification is stationary, things settle down to their original state after about 5 years.
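The impulse-response calculation itself is just the AR recursion applied to a one-off shock. A minimal sketch (the φ values are invented; the post uses the estimated AR(4) coefficients and a one-standard-deviation shock):

```python
import numpy as np

def ar_impulse_response(phi, horizon=24, shock=1.0):
    """Response of an AR(p) process to a single shock at t = 0.

    For a dynamically stable model the response decays back toward zero,
    which is the "settling down" visible in the EViews plot.
    """
    p = len(phi)
    resp = np.zeros(horizon + p)
    resp[p] = shock  # the impulse hits at t = 0
    for t in range(p + 1, horizon + p):
        resp[t] = sum(phi[j] * resp[t - 1 - j] for j in range(p))
    return resp[p:]

irf = ar_impulse_response([0.5, 0.3], horizon=24)
print(irf[0], irf[-1])  # starts at the shock size, then dies away
```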

So, sometimes with a bit of thought we can "trick" our favourite econometrics package into providing us with information that it might otherwise be reluctant to yield up. This can save us a lot of work!

1. Hi Dave,

Is it possible to "trick" Eviews to give us IRFs of a transfer function? I mean something like:

Y_t C X_t AR(1) AR(2)

Where I would like to get the IRF of a shock to the X_t series.

1. Itamar: Good question! I've been playing around with EViews and I can't find a way. I had hoped that maybe I could trick it by setting up a VAR model with a "redundant" equation for X, but that didn't work.

Maybe someone else has some suggestions?

2. I think RATS has a built-in procedure for transfer function analysis, but I haven't had the chance to check it out yet.

BTW, I'm a big fan of your blog!

3. Itamar: Thanks for the very helpful suggestion (and the kind comment).

4. I stumbled upon your blog. Very nice. Autobox has automatic and non-automatic transfer functions. It will also identify lead, contemporaneous, and lagged effects of the x variable, and it will detect outliers, level shifts, and changes in trend and seasonality. They have been around since the mid-1970s and were one of the first.

We would be glad to benchmark to show you how it does it compared to your current tools (SAS, R, etc.). Email us at sales@autobox.com to discuss.

5. Tom: I should have remembered Autobox from way back! Everyone else - definitely take up Tom's suggestion.

6. Thanks for the kudos.

We took a look at your example above (108 values). Your approach used two transformations as up-front filters. Transformations are like medicine: some are good for you and some are bad for you.

One check not considered here is to determine when and if the model parameters changed over time (i.e., a Chow test). This is the elephant in the room: no attempt whatsoever was made to validate the amount of data used for the analysis. Too much data is sometimes counterproductive. Some think that taking logs fixes this, when it can actually cause more problems.

For example, in this case, the 47 values at the beginning of the data set were found to be different from the last 61 observations. Consequently, the older observations should be truncated.

Another possible transformation is to detect unusual values and to render a cleansed series free of outliers (see Tsay's work: www.unc.edu/~jbhill/tsay.pdf).

A possible model, using the last 61 obs, would have the following form:

[(1-B**4)]Y(T) = 2.7299 japan
+[X1(T)][(1-B**4)][(- 12.1978)] :PULSE 1974/ 1
+[X2(T)][(1-B**4)][(- 9.4258)] :PULSE 1974/ 4
+ [(1- .759B** 1)]**-1 [A(T)]

Model Fit statistics had to be withheld due to space limitations.

Here is the model from the first 47 obs

Differencing: 1

| # | Model Component | Lag (BOP) | Coeff | Std. Error | P-value | T-value |
|---|---|---|---|---|---|---|
| 1 | Autoregressive Factor #1 | 4 | 1.04 | 0.0166 | 0.0000 | 62.61 |
| 2 | Autoregressive Factor #2 | 1 | 0.116 | 0.163 | 0.4799 | 0.71 |

This is the model for the last 67 obs

Differencing: 1

| # | Model Component | Lag (BOP) | Coeff | Std. Error | P-value | T-value |
|---|---|---|---|---|---|---|
| 1 | Autoregressive Factor #1 | 4 | 1.02 | 0.00923 | 0.0000 | 99.99 |
| 2 | Autoregressive Factor #2 | 1 | 0.378 | 0.123 | 0.0033 | 3.07 |

7. Part #2

Here is the Chow test

ERROR SUM OF SQUARES GLOBAL : 626.

ERROR SUM OF SQUARES REGIME 1 : 205.
ERROR SUM OF SQUARES REGIME 2 : 327.

REDUCTION DUE TO LOCAL : 94.4
MEAN SQUARE ERROR DUE TO LOCAL: 47.2 WITH 2

MEAN SQUARE UNDER LOCAL : 5.11 WITH 104

BREAKPOINT= 48 F VALUE= 9.24 P VALUE= .0002 WITH 2, 104 DF
CONSTANCY TEST FOR PARAMETERS WAS PERFORMED AND IMPLEMENTED.
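The F statistic quoted above can be reconstructed from the reported sums of squares (a sketch; small discrepancies are just rounding in the quoted output):

```python
# Figures quoted in the Chow-test output above
ess_global = 626.0                       # restricted (pooled) error sum of squares
ess_regime1, ess_regime2 = 205.0, 327.0  # the two regimes' error sums of squares
df_num, df_denom = 2, 104

ess_local = ess_regime1 + ess_regime2  # unrestricted ESS, ~532
reduction = ess_global - ess_local     # ~94, quoted as 94.4

f_stat = (reduction / df_num) / (ess_local / df_denom)
print(round(f_stat, 2))  # close to the reported F = 9.24
```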

Final residuals withheld due to space limitations

As you well know, the assumption of constant variance is on the error term and has nothing to do with the observed series; it has everything to do with the residuals from a tentative model. The residuals above exhibit no change in variance; otherwise the Tsay variance procedure would have been invoked to construct a GLS estimator. Furthermore, there is no observable linkage between the residuals and the predicted values, so there was no need for any power transformation.

Feel free to contact us at sales@autobox.com to discuss time series further.

2. Thanks, this was very useful!

3. Mikko - glad to hear that. Thanks for the feedback.

4. Tom: Thanks for taking the time to run the data through AUTOBOX. This was most interesting.

However, the whole point of the post was expressed in its title - simply to illustrate to users of EViews how they can "trick" the package into providing a useful piece of information that otherwise would not be transparent - namely, the inverse roots of the characteristic equation.

This was never intended to be a full modelling exercise for the time-series that I used for illustrative purposes. I could have used some purely artificial data, but I think that would have made the post less interesting.

5. Thanks! That was really useful information!

6. Hi Dave, Just found your blog and really like it. :-) I've got a question for you. Does your example of tricking Eviews into reporting the inverse roots of the characteristic equation work when there are explanatory variables in the equation (say, for an ARDL model)?

1. No - not without further manipulations.

7. Thank you, this is a very helpful post. Probably most people learn AR(p) as in your specification 1, but the Eviews help pages seem to relate solely to your specification 2.

8. Thanks, this is a great post. Just looking at the derivation, it would seem that equation (2) should be a second-order error process, rather than first-order?

1. No - everything is correct as it is.

9. This is a really helpful post :). But I'm still confused about the inverted roots. May I ask: if the regression output shows an inverted MA root of 1, but the ARMA structure table says the process is invertible, what should we conclude?

1. You'll have to provide a more specific example.

10. I think there's a minor algebra issue here. Just before eq. (6) we read, "or, from the definition of __ in (2):" Something is missing; from context it looks like it's 'ut.' But if so, in substituting out ut, instead of simplifying to εt, we get (εt - φ2ut-2). I couldn't figure out a way to simplify further. So could it be that the two lagged forms are identical, except for the intercept *and* the error term? Thanks, JS

1. Correct - the error must be different too. I'll fix the wording when I get a chance.

11. Hi Dave,

If one of the AR roots falls on the unit circle, would you call the VAR stable?
Thanks.

1. No, it's not in this case.