Friday, May 19, 2017

When Everything Old is New Again

Some ideas are so good that they re-appear again and again. In other words, they stand the test of time, and prove to be useful in lots of different contexts – sometimes in situations that we couldn’t have imagined when the idea first came to light.

This certainly happens in econometrics, and here are just a few examples that come to mind.

Principal Components

Principal components analysis (PCA) was proposed by Pearson (1901), and then developed independently by Hotelling in various papers in the 1930's. As I discussed in this earlier post, PCA was used by econometricians in the 1970's as a "dimension reduction" device in the context of the 2SLS estimator. In many of the large-scale simultaneous equations models that were being estimated, the number of predetermined variables in the full system exceeded the sample size, so the 2SLS estimator was not defined. Replacing the full set of predetermined variables with just the first few associated principal components circumvented this problem, at the cost of a small loss of (asymptotic) efficiency in the estimator.

This idea was "re-discovered" by Ng and Bai (2009), and Winkelried and Smith (2011).

More recently, PC analysis has become one of the standard ways of undertaking regression (and other statistical) analysis with "fat" data-sets. These are data-sets in which the number of explanatory variables is very large relative to the number of observations. For instance, see Halko et al. (2011), and Yang et al. (2014).
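
To make the 2SLS application concrete, here's a minimal sketch (in Python, with purely synthetic data and a hand-rolled 2SLS, rather than the routines from any particular package) of replacing a large instrument set with its first few principal components. The sample size, the number of instruments, and the choice of three components are all just for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic (made-up) setup: one endogenous regressor x, a large set of
# instruments Z, and a sample smaller than the number of instruments,
# so 2SLS with the full instrument set breaks down.
n, n_instruments = 60, 80
Z = rng.standard_normal((n, n_instruments))
v = rng.standard_normal(n)
u = 0.8 * v + rng.standard_normal(n)           # error correlated with v => x is endogenous
x = (Z @ rng.standard_normal(n_instruments)) * 0.1 + v
y = 1.0 + 2.0 * x + u

# Step 1: replace the full instrument set with its first few principal components.
Zc = Z - Z.mean(axis=0)
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
k = 3
pcs = Zc @ Vt[:k].T                            # first k principal components of Z

# Step 2: ordinary 2SLS, using the principal components as the instruments.
W = np.column_stack([np.ones(n), pcs])         # instruments (plus intercept)
X = np.column_stack([np.ones(n), x])           # regressors (plus intercept)
X_hat = W @ np.linalg.lstsq(W, X, rcond=None)[0]      # first-stage fitted values
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]  # second-stage estimates
print("2SLS estimates (intercept, slope):", beta_2sls)
```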

Almon Distributed Lag Model

This model was proposed originally to facilitate the estimation of the coefficients of a finite distributed lag model, while avoiding the perils of high multicollinearity among the successive lagged regressors. It was shown subsequently to be a particular (and rather innovative) application of Restricted Least Squares.

For more details, see my earlier posts, here and here.
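
For readers who like to see the mechanics, here's a minimal sketch of Almon's device, with synthetic data. The lag length, the polynomial degree, and the "true" coefficients are made up purely for illustration; the point is that the (q + 1) lag coefficients get estimated through just (p + 1) polynomial parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic finite distributed lag: y_t = sum_i beta_i * x_{t-i} + e_t,
# where the true beta_i lie on a quadratic in the lag index i.
T, q, p = 200, 8, 2                      # sample size, lag length, polynomial degree
i = np.arange(q + 1)
beta_true = 0.5 + 0.4 * i - 0.05 * i**2  # coefficients on a degree-2 polynomial
x = rng.standard_normal(T)
X_lags = np.column_stack([np.roll(x, lag) for lag in i])[q:]   # rows are [x_t, x_{t-1}, ..., x_{t-q}]
y = X_lags @ beta_true + 0.5 * rng.standard_normal(T - q)

# Almon's device: write beta_i = sum_j alpha_j * i**j, i.e. beta = H @ alpha,
# and estimate the (p+1) alphas instead of the (q+1) betas.
H = np.vander(i, p + 1, increasing=True)       # (q+1) x (p+1) polynomial basis
Z = X_lags @ H                                 # transformed regressors
alpha_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
beta_hat = H @ alpha_hat                       # implied (restricted) lag coefficients
print("Estimated lag coefficients:", np.round(beta_hat, 3))
print("True lag coefficients:     ", np.round(beta_true, 3))
```

Because beta = H * alpha amounts to a set of exact linear restrictions on the (q + 1) lag coefficients, this really is just Restricted Least Squares in disguise.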

In more recent times, Shirley Almon’s polynomial distributed lag scheme has been used as a key ingredient in the so-called MIDAS (Mixed-Data Sampling) model proposed by Eric Ghysels and his colleagues. (For example, see Ghysels et al. (2004), and my earlier post here.)

The Almon lag is used there to “spread” high frequency observations over lower frequency time periods, while imposing a positivity constraint. This is a really nice example of an old idea being used innovatively in a new setting.  

Instrumental Variables

If you know anything about the history of econometrics, you’ll know that instrumental variables (IV) estimation played an absolutely central role in defining our discipline. Appropriate inference in the context of endogeneity, simultaneity, and measurement errors was what distinguished econometrics from other areas of applied statistics, right from the start.

Even so, there was a time when it wasn't that easy to convince some practitioners that they should use IV estimators, rather than OLS.

So, it's rather ironic to see a new bunch of applied economists "instrumenting" everything in sight, and carrying on as if they've just made some great new methodological discovery. They haven't!

They say that there's no-one more zealous than a recent convert. This is a case in point.

Sargan’s Test

I get a little irritated when people talk about "Hansen's J test". Yes, Hansen's (1982) GMM estimator and the associated inferential procedures were major contributions. No doubt about that, and this was recognized in the form of a Nobel Prize in 2013. However, let's not forget that the J test is really a re-formulation of Sargan's test of the over-identifying restrictions in IV estimation. And that test was introduced by Denis Sargan in 1958!
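
In case you haven't seen it written down, here's a minimal sketch of Sargan's statistic computed "by hand" after 2SLS, using synthetic data; the instrument count and the parameter values are just illustrative. Under conditional homoskedasticity, this n times R-squared statistic is (a version of) Hansen's J.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic over-identified IV setup: one endogenous regressor and three
# valid instruments, so there are 3 - 1 = 2 over-identifying restrictions.
n = 500
Z = rng.standard_normal((n, 3))
v = rng.standard_normal(n)
u = 0.6 * v + rng.standard_normal(n)              # x is endogenous
x = Z @ np.array([1.0, 0.5, 0.5]) + v
y = 1.0 + 2.0 * x + u

W = np.column_stack([np.ones(n), Z])              # instruments (plus intercept)
X = np.column_stack([np.ones(n), x])              # regressors (plus intercept)

# 2SLS estimates and the associated IV residuals.
X_hat = W @ np.linalg.lstsq(W, X, rcond=None)[0]
beta_iv = np.linalg.lstsq(X_hat, y, rcond=None)[0]
e = y - X @ beta_iv

# Sargan (1958) statistic: n * R^2 from a regression of the IV residuals on the
# full instrument set; asymptotically chi-square with
# (#instruments - #endogenous regressors) degrees of freedom under the null.
e_hat = W @ np.linalg.lstsq(W, e, rcond=None)[0]
r2 = 1.0 - np.sum((e - e_hat) ** 2) / np.sum((e - e.mean()) ** 2)
sargan = n * r2
dof = 3 - 1
print(f"Sargan statistic = {sargan:.3f}, p-value = {stats.chi2.sf(sargan, dof):.3f}")
```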

Recently, some writers do seem to be referring to the "Sargan-Hansen test". Good - it's about time that we gave credit where credit is due. Do your reading; know the history of our discipline. It won't hurt you to do so!

LIML & the Anderson-Rubin Test

The Limited Information Maximum Likelihood (LIML) estimator was developed in a seminal paper by Anderson and Rubin (1949). That same paper included a test of the hypothesis that the structural equation being estimated is over-identified. These contributions by Anderson and Rubin were among the most important to the literature on inference for simultaneous equations models.

It should be noted that the LIML estimator is a specific member of the k-class family of estimators, and all of the members of that family are themselves IV estimators.

The "weak instrument" problem that is common with IV estimation led to a recognition that the LIML estimator and the Anderson-Rubin test have extremely important uses. Specifically, in the context of weak instruments, the LIML estimator is superior to the 2SLS estimator in terms of bias. In addition, the Anderson-Rubin test turns out to be robust to the presence of weak instruments.

Not only have Anderson and Rubin's contributions withstood the test of time, but their value has increased with the recognition of new problems associated with the estimation of structural equations.

ARDL Models

The Autoregressive Distributed Lag (ARDL) model has been around for decades. It was a popular “work horse” regression model in the 1960’s and 1970’s. See here for a brief discussion of this.

Recently, Hashem Pesaran and various co-authors have shown that this old model can be used to deal with the problem of testing for cointegration when we have a mixture of I(0) and I(1) variables. Now, when we talk about ARDL models it's this latter context that we generally have in mind.
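
To illustrate, here's a minimal sketch of the conditional error-correction ("unrestricted ECM") form of an ARDL model and the associated bounds F test, using synthetic data and a plain OLS regression rather than any dedicated ARDL routine. The lag orders and the parameter values are arbitrary, and in practice the F statistic has to be compared with the Pesaran, Shin and Smith (2001) critical bounds, not with the usual F tables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Synthetic data: x is a random walk (I(1)) and y is generated from an
# error-correction mechanism, so y and x share a level relationship.
T = 300
x = np.cumsum(rng.standard_normal(T))
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):
    y[t] = y[t - 1] + 0.3 * (2.0 + 0.5 * x[t - 1] - y[t - 1]) + rng.standard_normal()

df = pd.DataFrame({"y": y, "x": x})
df["dy"] = df["y"].diff()
df["dx"] = df["x"].diff()
df["Ly"] = df["y"].shift(1)        # lagged levels
df["Lx"] = df["x"].shift(1)
df["Ldy"] = df["dy"].shift(1)      # one lagged difference of each variable, chosen arbitrarily
df["Ldx"] = df["dx"].shift(1)
df = df.dropna()

# Conditional error-correction ("unrestricted ECM") form of an ARDL model.
res = smf.ols("dy ~ Ly + Lx + Ldy + Ldx + dx", data=df).fit()

# Bounds test: joint F test that the coefficients on the lagged LEVELS are zero.
# Compare the statistic with the Pesaran-Shin-Smith (2001) lower and upper
# critical bounds (not reproduced here), not with standard F tables.
ftest = res.f_test("Ly = 0, Lx = 0")
print("Bounds-test F statistic:", np.asarray(ftest.fvalue).item())
```

If you'd rather not construct the regression by hand, recent releases of statsmodels also include dedicated ARDL and UECM classes that automate much of this.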


For more details, see my earlier posts on modern ARDL modelling, and Bounds Testing, here and here. In addition, see the posts on the EViews blog, here and here.


Some of the examples given above involve the creative use of an old tool to solve a new problem. However, some of them seem to involve people getting excited about using an old tool to solve the original problem, without realizing that's what they're doing.

As Yogi Berra said: "It's like déjà vu all over again."


© 2017, David E. Giles

4 comments:

  1. Great article Dave. Similarly, one could argue that GMM, although clearly not exactly the same, has its origins in the method of moments, which was discovered by Karl Pearson, a statistician, in 1894. In the same way, some would argue that ARCH is an AR model (which was discovered by Box et al. in the 70's) where X_t is the volatility. Hansen and Engle are incredibly talented and deserving of their discoveries but, in one way or another, everyone stands on the shoulders of past giants. All the best.




    1. Mark - thanks for the thoughtful comments. All of them right on target. I guess I could summarize my feelings behind this post as follows: 1. (Positive) - it's great to see "old" tools getting a new lease of life when new problems come along. 2. (Negative) - I wish that people would be less excited when they're using an old tool, but they think that it's a new one!

    2. «less excited when they're using an old tool, but they think that it's a new one!»

      In another field with which I am more familiar, old ideas, concepts and tools get rediscovered all the time and published as original under new names.
      The difficulty is that every PhD student, postdoc (and associate professor) has to find a constant stream of "original" ideas for writing papers.
      If only truly original ideas were publishable, very few people would get published, and the whole "tenure track" scheme to use PhD students and postdocs as casual low-cost project labour would crash.
      One interesting statistic I read somewhere is that the number of PhD students and postdocs keeps increasing faster than the number of articles published in "top journals", creating a massive crunch.

  2. Thank you for sharing information with us.
