Thursday, August 30, 2012

The Cauchy Estimator & Unit Root Tests

As we all know, there's more than one way to estimate a regression equation. Some of the estimators that we frequently use include OLS, GLS, IV, GMM, LAD, and ML. Some of these estimators are special cases of some of the others, depending on the circumstances.

But have you ever used the Cauchy estimator? Probably not, even though it's been around (at least) since 1836.

In that year, Augustin-Louis Cauchy published a paper in the highly influential journal, Philosophical Magazine. (The same issue included letters to the Editor from Michael Faraday, and the journal is still published today.) The paper by Monsieur Cauchy began as follows:

[Image: the opening passage of Cauchy (1836).]
What Cauchy provided was an approach to fitting a curve to data points. You can think of his method as a competitor to least squares. Interestingly, it is still used today in spectrometry to fit relationships between the index of refraction (n) and the wavelength (λ). This gives rise to the so-called "n(λ) curve" (e.g., here).
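To see what such a fit looks like in practice, here's a minimal sketch of fitting Cauchy's dispersion equation, n(λ) = A + B/λ², which is linear in 1/λ² and so can be fitted by ordinary least squares. The refractive-index values below are purely illustrative (loosely resembling an optical glass), not real measurements:

```python
import numpy as np

# Hypothetical refractive-index measurements at wavelengths in micrometres;
# the values are illustrative only, loosely resembling an optical glass.
wavelength = np.array([0.4047, 0.4358, 0.4861, 0.5461, 0.5893, 0.6563])
n_index = np.array([1.5302, 1.5267, 1.5224, 1.5187, 1.5168, 1.5145])

# Cauchy's dispersion equation n(lambda) = A + B/lambda^2 is linear in
# x = 1/lambda^2, so a first-degree polynomial fit recovers B and A.
x = 1.0 / wavelength**2
B, A = np.polyfit(x, n_index, 1)
fitted = A + B / wavelength**2
```

In practice, higher-order terms (C/λ⁴, ...) are added in the same way when the simple two-parameter form doesn't fit well enough.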

In econometrics, the Cauchy estimator has been used in the context of estimating the coefficients of an autoregressive time-series, which may or may not be stationary, by So and Shin (1999), Shin and So (2000), and Phillips et al. (2004). The first of these papers establishes the following (surprising?) result.

Suppose we have the autoregression,

                                        y_t = ρ y_{t-1} + u_t  .

Let s_t = sign(y_{t-1}) = -1 if y_{t-1} < 0; = +1, otherwise. The Cauchy estimator of ρ turns out to be just the IV estimator, using s_t as the instrument for y_{t-1}. Importantly, the (suitably normalized) estimator is asymptotically normal, and the associated t-test statistic is asymptotically standard normal in distribution, regardless of the value of ρ. In particular, this is the case even if ρ = 1!
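The IV interpretation makes the estimator very easy to compute: ρ̂ = Σ s_t y_t / Σ s_t y_{t-1}, with the usual IV standard error. Here's a minimal sketch in Python (the function name is my own; the post's own computations were done in EViews):

```python
import numpy as np

def cauchy_unit_root_test(y):
    """Cauchy (IV) estimate of rho in y_t = rho*y_{t-1} + u_t, using
    s_t = sign(y_{t-1}) as the instrument for y_{t-1}, together with
    the t-statistic for H0: rho = 1 (asymptotically N(0,1))."""
    y_lag, y_cur = y[:-1], y[1:]
    s = np.where(y_lag < 0, -1.0, 1.0)              # the sign instrument
    rho_hat = np.sum(s * y_cur) / np.sum(s * y_lag)  # IV estimator
    resid = y_cur - rho_hat * y_lag
    sigma_hat = np.sqrt(np.mean(resid**2))
    n = len(y_cur)
    # Standard IV s.e.: sigma * sqrt(sum s_t^2) / |sum s_t y_{t-1}|, s_t^2 = 1
    se = sigma_hat * np.sqrt(n) / np.abs(np.sum(s * y_lag))
    return rho_hat, (rho_hat - 1.0) / se

# Example: a simulated random walk (rho = 1)
rng = np.random.default_rng(42)
y = np.cumsum(rng.standard_normal(500))
rho_hat, t_stat = cauchy_unit_root_test(y)
```

Note that Σ s_t y_{t-1} = Σ |y_{t-1}|, so the denominator is always positive; the key to the result is that s_t u_t is a martingale difference sequence with |s_t| = 1, whatever the value of ρ.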

Compare this to the usual situation where we estimate the autoregression by OLS. Then, the asymptotic distribution of the estimator of ρ is Normal if |ρ| < 1. However, if ρ = 1, we know from the results of White (1958), Dickey and Fuller (1979), and others that the asymptotic distribution of the OLS estimator is completely non-standard. White (1958) proved that if ρ > 1, then the asymptotic distribution of the OLS estimator is Cauchy (that name keeps popping up!) - that is, Student-t with one degree of freedom.

The implication of the So and Shin (1999) result is that if we want to conduct inference after estimating an autoregression using the Cauchy estimator, we don't have to worry about the time-series being non-stationary. We can construct tests and confidence intervals in the usual way, using the asymptotic Normality.

In particular, as they show, we can construct a unit root test by just computing the associated t-statistic, and using the fact that it will follow a standard normal distribution in large samples.

To illustrate this, I've undertaken a small simulation experiment, using EViews. You'll find the workfile and the code I've used here (including a text file for easy reading).

I've simulated the sampling distributions of the Cauchy unit root test statistic for samples of size n = 10, 25, 50, 100, and 1,000. As you can see, the sampling distribution settles down very quickly to its asymptotic form - namely standard normal:

[Figure: simulated sampling distributions of the Cauchy unit root test statistic, n = 10 to 1,000.]
Using an assumed significance level of 5% when testing H0: ρ = 1 vs. HA: ρ < 1, the critical value based on the standard normal distribution is -1.645. Using this critical value, and computing the proportion of rejections (out of 10,000) for the Cauchy test, we get the empirical size of the test. For n = 10, 25, 50, and 1,000 the empirical sizes are 7.3%, 5.7%, 5.3%, and 4.8% respectively. We see that there is minimal size distortion if the asymptotic distribution is used, even when the sample size is as small as n = 50.
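For readers who'd rather not fire up EViews, the size experiment is easy to replicate. Here's a hedged sketch in Python (function name and replication count of my choosing): simulate random walks under H0: ρ = 1, compute the Cauchy t-statistic for each, and count rejections against the N(0,1) critical value of -1.645:

```python
import numpy as np

def empirical_size(n, n_rep=10000, seed=0):
    """Monte Carlo rejection rate of the Cauchy unit root t-test under
    H0: rho = 1, for a one-sided 5% test using the N(0,1) critical
    value of -1.645."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_rep):
        y = np.cumsum(rng.standard_normal(n))    # random walk under H0
        y_lag, y_cur = y[:-1], y[1:]
        s = np.where(y_lag < 0, -1.0, 1.0)       # sign instrument
        rho_hat = np.sum(s * y_cur) / np.sum(s * y_lag)
        resid = y_cur - rho_hat * y_lag
        sigma_hat = np.sqrt(np.mean(resid**2))
        se = sigma_hat * np.sqrt(len(y_cur)) / np.sum(s * y_lag)
        t_stat = (rho_hat - 1.0) / se
        if t_stat < -1.645:                      # reject H0: rho = 1
            rejections += 1
    return rejections / n_rep

size_100 = empirical_size(n=100, n_rep=2000)
```

With enough replications, the rejection rate should settle near the nominal 5% for moderate and large n, in line with the empirical sizes reported above.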

I've also compared these sampling distributions with the distributions of the corresponding (no drift/no trend) Dickey-Fuller test statistic. As we know, the distribution of the DF test statistic is "non-standard" (and is usually expressed as functionals of Brownian motions). Even asymptotically, it is "shifted" to the left, relative to the standard normal distribution. We see this in the following sequence of graphs, for increasing values of "n":

[Figure: sampling distributions of the Cauchy and Dickey-Fuller test statistics, for increasing values of n.]
Although I've focused on the "no drift/no trend" case here when testing for a unit root, this isn't restrictive. So and Shin (1999) and Shin and So (2000) show that it's quite simple to extend the basic idea of Cauchy estimation to more general unit root tests, including ones for seasonal unit roots.

In summary, the construction of tests for unit roots based on the Cauchy estimator is very straightforward, and it enables us to use the usual (standard normal) asymptotics.

I'm currently working on a paper that uses the Cauchy estimator for a different testing problem - but more on that in due course!


References

Cauchy, A-L., 1836. On a new formula for solving the problem of interpolation in a manner applicable to physical investigations. Philosophical Magazine (Series 3), 8, 459-468.

Dickey, D.A. and W.A. Fuller, 1979. Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74, 427–431.

Phillips, P. C. B., J. Y. Park, and Y. Chang, 2004. Nonlinear instrumental variable estimation of an autoregression. Journal of Econometrics, 118, 219-246.

Shin, D. W. and B. S. So, 2000. Gaussian tests for seasonal unit roots based on Cauchy estimation and recursive mean adjustments. Journal of Econometrics, 99, 107-137.

So, B. S. and D. W. Shin, 1999. Cauchy estimators for autoregressive processes with applications to unit root tests and confidence intervals. Econometric Theory, 15, 165-176.

White, J. S., 1958. The limiting distribution of the serial correlation coefficient in the explosive case. Annals of Mathematical Statistics, 29, 1188-1197.


© 2012, David E. Giles
