Thursday, January 26, 2012

Hot Topics in Econometrics

Last week, Takamitsu Kurita asked me, "What do you think will be the big developments in econometrics over the next decade?" We were having a drink following his seminar, and I really didn't have a good answer. I think those of us present ducked the question by saying that, as econometricians, we know only too well the pitfalls associated with forecasting! But Taka's question was a good one, and it certainly deserved a better response than I had at the time.

I don't think I'm at all qualified to provide a decent answer, but the question got me thinking. And we all know where that leads!

So, here are some thoughts about some topics that are hot, even if they don't remain that way for a decade:
  • Modelling extremely large data-sets.
  • Modelling ultra-high frequency financial data.
  • Information theory and entropy econometrics.
  • The econometrics of networks.
  • Treatment effect models.
  • An increasing acceptance of Bayesian methods in econometrics.
  • Nonlinear time-series analysis.

Perhaps you'd like to add to this very short list?

© 2012, David E. Giles


  1. Long memory misspecification tests,
    Identification of breaks,
    Bootstrapping methods
    Bootstrapping methods

    all seem to be quite relevant.

  2. That's right. Identification of (several) structural breaks in extremely large data-sets has already been an issue for me in empirical work.

  3. Dooruj & Georg: Right on.

    More and more use of computationally intensive methods has totally changed the face of the statistics community in the past few years - it's been spilling over into econometrics too.

    The whole inter-twining of structural breaks and non-stationarity will, I think, continue to get a lot of attention.

    Thanks for your thoughts!

  4. Data analytics methods: Decision trees, random forests, neural networks, etc. Though I suspect the uptake in time-series work will be slow.

    Also, model comparison methods will improve, as you say, using information theory.

  5. Real-time data - that is, inference and forecasting with data that will be subsequently revised.

    We almost always punt on this issue.

    1. Stephen - absolutely! Really crucial in macroeconometrics, in particular.

  6. David,
    I think we all agree we live in data-rich times. Some commentators have talked about the data deluge. But does that necessarily imply it's a hot topic for econometricians? I can see important issues around data storage and retrieval, but these typically don't concern econometricians. If my desktop version of Stata or EViews can't cope with my data set, then I simply take a (still large) sample from my massive data set and proceed as usual.

    It seems that it is not size per se that is going to lead to interesting times for econometricians; it is data quality. We may now be able to address questions that previously could not be considered because the data were simply not up to the task, even with the multitude of assumptions we typically make in order to make progress with applied work. Having masses more of the same type of data is not terribly exciting; we just get more precise estimates of previously estimated parameters. So the challenges will arise when we ask more of the data in attempting to resolve more nuanced questions. This will happen with better data, probably arising from different or previously inaccessible data sources, possibly merged in creative ways with existing data.
    Cheers, Denzil

    1. Denzil: Fair comment - point taken. I guess I'd add that we are starting to get "richer" (more highly dimensioned) data-sets, and this certainly presents some interesting modelling challenges.

      Great to hear from you!

  7. Hi Dave,

    Nice post. Sorry, I had not read it before.

    I am doing research on the econometrics of networks, but the great difficulty is that there are not many datasets available for the purpose.