Monday, May 26, 2014

Unit Root Testing: Sample Size vs. Sample Span

The more the merrier when it comes to the number of observations we have for our economic time-series data - right? Well, not necessarily. 

There are several reasons to be cautious, not the least of which include the possibility of structural breaks or regime-switching in the data-generating process. However, these are topics for future posts. Here, I want to discuss a different issue - namely, the impact of data frequency on the properties of tests for the stationarity of economic time-series.

To be specific, let's consider the following question: "Which is better when I'm applying the (augmented) Dickey-Fuller test - 20 annual observations for the series, or 80 quarterly observations?"

The short answer is that it really doesn't matter.

However, a couple of caveats are in order. I'm going to assume that any unit roots that may be present in the series are at the zero frequency. That is, I'm not going to consider unit roots at the seasonal frequencies. Clearly, we can't test for these using annual data. Moreover, if seasonal unit roots are present, then this should be taken into account when testing for unit roots at the zero frequency. For example, see Hylleberg et al. (1990).

The question that I've posed has received quite a bit of consideration in the time-series econometrics literature. A slightly more general way of posing the question is to ask if tests for unit roots are affected when we either temporally aggregate or selectively sample the data. The former process arises with flow variables when, for example, we add up monthly flows to get a quarterly or annual flow. The second process is associated with stock variables where we use a particular higher-frequency observation to represent the value of the lower-frequency variable. An example would be where we use the March-month unemployment rate as a measure of the rate for the March quarter.

In general, let "m" denote the frequency at which we selectively sample, or the number of periods that are aggregated in the case of a flow variable. As a detail, note that if, in the case of a stock variable, we average some higher-frequency values to get a low-frequency value, then this constitutes m-period temporal aggregation, with a scaling of (1 / m). An example would be using the average of four end-of-quarter values for the CPI to get an annual CPI measure (m = 4).
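In code, the distinction between the two schemes can be sketched as follows. This is a hypothetical numpy illustration (not part of the original post), using simulated monthly series and m = 3:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3  # three months per quarter
flow_m = rng.normal(size=120).cumsum()   # a monthly flow series (simulated random walk)
stock_m = rng.normal(size=120).cumsum()  # a monthly stock series (simulated random walk)

# Temporal aggregation (flow variable): sum each block of m monthly values.
flow_q = flow_m.reshape(-1, m).sum(axis=1)

# Selective sampling (stock variable): keep only the last month of each quarter.
stock_q = stock_m.reshape(-1, m)[:, -1]

# Averaging a stock over the quarter is m-period aggregation scaled by (1 / m).
stock_avg_q = stock_m.reshape(-1, m).mean(axis=1)
```

Either way, 120 monthly observations collapse to 40 quarterly ones; the question in the post is whether that collapse matters for unit root testing.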

Now, suppose that we have a time series generated according to:

                          yt = ρ yt-1 + ut    ;  t = 1, 2, ...., T

where ut follows a finite-order stationary ARMA process, the largest modulus of whose AR roots is less than ρ.

We want to test H0: ρ = 1 against a sequence of local alternatives, HA: ρ = exp(-c / T), where c > 0.
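As a concrete illustration of this data-generating process, here is how one might simulate a series under the local alternative (a sketch with white-noise errors standing in for the general ARMA case; all names are my own):

```python
import numpy as np

rng = np.random.default_rng(42)
T, c = 200, 5.0
rho = np.exp(-c / T)  # local-to-unity alternative; here rho is close to, but below, 1

# Generate y_t = rho * y_{t-1} + u_t, with white-noise u_t for simplicity.
u = rng.normal(size=T)
y = np.empty(T)
y[0] = u[0]
for t in range(1, T):
    y[t] = rho * y[t - 1] + u[t]
```

Setting c = 0 recovers the null of an exact unit root; as T grows with c fixed, the alternative drifts toward the null, which is what makes the asymptotics informative about local power.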

In this context, perhaps the most important result for you to be aware of is the following, from Pierse and Snell (1995, p. 336):
“Any test that is asymptotically independent of nuisance parameters under both H0 and HA has a limiting distribution under both H0 and HA that is independent of m.”
So, this means that, asymptotically, temporal aggregation or selective sampling has no consequences in terms of size distortion, or loss of power, for the ADF test, the Phillips-Perron test, or Hall's (1992) IV-based unit root test. The same is true for several other related tests.

Pierse and Snell also show that this result holds up quite well even in finite samples, and that it also applies to tests of non-cointegration, such as those of Engle and Granger, and of Johansen.
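A quick Monte Carlo sketch of the finite-sample version of this invariance, using the simplest possible Dickey-Fuller regression (no constant, no augmentation, white-noise errors) and selective sampling with m = 4 — all of the names and settings here are my own, not from the paper:

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic from regressing Δy_t on y_{t-1} (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    beta = (ylag @ dy) / (ylag @ ylag)
    resid = dy - beta * ylag
    se = np.sqrt((resid @ resid) / (len(dy) - 1) / (ylag @ ylag))
    return beta / se

rng = np.random.default_rng(1)
reps, T, m = 2000, 400, 4
stats_hi, stats_lo = [], []
for _ in range(reps):
    y = rng.normal(size=T).cumsum()          # random walk at the high frequency (H0 true)
    stats_hi.append(df_tstat(y))
    stats_lo.append(df_tstat(y[m - 1::m]))   # selectively sampled: every m-th observation

# Under H0, both frequencies should reject at a similar rate against the
# (approximate) 5% DF critical value of -1.95.
rej_hi = np.mean(np.array(stats_hi) < -1.95)
rej_lo = np.mean(np.array(stats_lo) < -1.95)
```

In repeated runs the two empirical rejection rates sit close to the nominal 5% and close to each other, which is the finite-sample behaviour that Pierse and Snell document much more carefully.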

Let's look at a simple, illustrative example. The data I'll use are for imports of merchandise (c.i.f. basis) into New Zealand. The data are in nominal values, and I've seasonally adjusted them using the Census X-13 method, assuming a multiplicative decomposition of the time-series. The data were downloaded using the excellent, free, Infoshare facility provided by Statistics New Zealand.

(Snide remark: I hate to say it, but Statistics Canada could learn a few things from Statistics N.Z., and from plenty of other such agencies, when it comes to providing easy access to long-term economic time-series data.)

The fact that the data have been seasonally adjusted using the ratio-to-moving average method, prior to our unit root testing, can raise some issues of its own. Again, this is something that's best left for a separate post.

The data that I've used are available on this blog's data page, and the EViews workfile is on the code page. This is what the monthly, quarterly and annual data look like:

Now let's apply some unit root tests. I've used both the ADF test (where the null hypothesis is that the series is I(1), and the alternative is that it is I(0)); and the KPSS test (where the null hypothesis is that the series is I(0), and the alternative is that it is I(1)).

(None of the series were found to be I(2).)

The ADF tests have been applied with an allowance for a drift and trend in the data, and the SIC was used to select the degree of augmentation, k. For the KPSS tests, the bandwidth, b, was selected using the Newey-West method with the Bartlett kernel.

Here are the results:


Sample               T      k     ADF (p-val.)        b      KPSS
-----------------------------------------------------------------
1960M1 - 2014M3     651     2    -2.050 (0.572)      21     0.709*
1960Q1 - 2014Q1     217     0    -2.027 (0.583)      11     0.456*
1960 - 2013          54     4    -1.409 (0.846)       5     0.252*
-----------------------------------------------------------------
* Significant at the 1% level, based on asymptotic critical values.

As you can see, the results strongly support the presence of a unit root in the imports time-series, regardless of the degree of aggregation of the data (and hence the sample size, T) over the 54-year time-span under consideration here.


References

Hall, A., 1992. Testing for a unit root in time series using instrumental variable estimators with pretest data-based model selection. Journal of Econometrics, 54, 223-250.

Hylleberg, S., R. F. Engle, C. W. J. Granger, and B. S. Yoo, 1990. Seasonal integration and cointegration. Journal of Econometrics, 44, 215-238.     

Pierse, R. G. and J. Snell, 1995. Temporal aggregation and the power of tests for a unit root. Journal of Econometrics, 65, 333-345.



© 2014, David E. Giles

15 comments:

  1. Dear Professor,

    All your posts help students a lot.

    If you can write a post on "Time-varying Parameter Estimation", it would be really helpful.

    Thanking you.

  2. Yes Santosh,

    Time-varying coefficients are also important in detecting Granger causality among non-stationary variables. Dave Giles explains the topics very nicely. I hope he will cover it soon.

  3. Dear professor, again I must thank you for the opportunity you give us in finding the best and correct answer to (almost all) our questions regarding econometrics. I was troubled yesterday by this very question related to small sample size of annual data vs. monthly/quarterly data in unit root testing and cointegration procedure and now I found my "peace" :))

  4. This result precludes stochastic volatility.

  5. Dear Dave Giles,

    First let me thank you for a very insightful blog! I find your explanations extremely concise and have used them often for my work.

    I wish to ask you a question somewhat related to the post above. In practice we often face short series. Suppose, for example, that we only have 5 years of quarterly data on two series, GDP growth and interest rate spread. Suppose in addition that we know, based on theory, that both should be stationary and that, due to a low power in small samples, unit root tests cannot reject the null of unit root (DF, ADF, PP) and at the same time cannot reject the null of stationarity (KPSS).

    Could I ask you, what do you recommend in such cases, to model the two variables in levels or rather difference them further? What are the consequences of using the first or the second option? Do you know of any references that investigate this issue?

    I sincerely thank you for any help you might provide.

    Kind regards,
    Vasja

    Replies
    1. Vasja - thanks for your comment. I'd say, "If in doubt, difference the data." The reason for saying this is:

      1. If any of the data are non-stationary and you fail to difference them, then unless they happen to be cointegrated the results will be meaningless.

      2. If the data are actually stationary, but you difference them, they will still be stationary, although not I(0). "Over-differencing" the data is not totally desirable, but the penalties associated with this are much less than those associated with case 1 above.

  6. Dear Prof.,
    Thank you for sharing with us your knowledge, it is of great help.
    If i'm using ADF and got this warning "Warning: Probabilities and critical values calculated for 20 observations and may not be accurate for a sample size of 18"....
    I use KPSS and it shows that the time series is stationary, is that enough? and reliable?
    Thank you for your help.
    Best REgards,
    Yasmine

    Replies
    1. Yasmine - thanks for the question. First, make sure that the critical values used for the KPSS test were the finite-sample critical values for n=18, and not just the asymptotic critical values. You can get the exact critical values for the ADF test when n=18 from http://qed.econ.queensu.ca/working_papers/papers/qed_wp_1227.pdf .
      There's only a small difference between these and the ones for n=20, so you're probably OK, as long as your ADF test results aren't "borderline". My (probably unhelpful) suggestion - get a larger sample!

    2. Thank you professor for your prompt reply, that was very helpful. It is difficult to get a larger sample.

  7. Respected Prof.
    Your blog is really very helpful. Your explanations are easy to understand and concise. Even a scholar from a non-econometrics background, like me, can understand it.
    I am dealing with time series of very few observations. Company wise it is varying say from 6-11 years. Before running regression I need to check stationarity. But different adf models are giving different results. My questions are do I need to check stationarity for such small sample? If yes then which method e.g. adf, kpss, pp should I use? If adf, then which model to be considered? I am using numXl software, an add-on to excel.
    I am stuck badly in my research for good many months as unable to get any workable solution. Kindly suggest me. It will be of great help. Thanking you in anticipation of prompt reply.

    Replies
    1. Thanks for your comments. You certainly do have a very short span of data, and I'd be surprised if you can model anything sensibly with so few observations. Having said that..... (1) Yes, stationarity is always important; (2) I'm not surprised that the different tests are giving conflicting results; (3) There's no simple choice of test; (4) With ADF, the inclusion of drift and/or trend in the ADF regression depends on the features of the data.
      My suggestion - don't worry about testing for stationarity. Just first-difference all the series and then estimate the model using these data. Why? If any series is in fact non-stationary, differencing will deal with this. What if a series is I(0)? Then differencing the data is unnecessary, but the resulting series will still be stationary (but not I(0)). It's better to over-difference than to run the risk of a "spurious regression" by using un-differenced I(1) data.

  8. Respected Sir,
    Thank you so much for your prompt reply. It really helped me. I have one more query. You have suggested differencing of all the series irrespective of stationarity test but after differencing I am getting negative values to a large number. Now for log transformation, I need to add minimum positive value to almost all series. On the other hand if log transformation is followed by stationarity test, then differencing log transformed data will actually mean rate/ ratio. What do you suggest? I am in dilemma. In anticipation of your advice.

    Warm Regards

    Arjun

    Replies
    1. With the negative values you simply can't use logs. You SHOULD NOT add a value to make the numbers positive - NEVER!
