Monday, November 28, 2016

David Hendry on "Economic Forecasting"

Today I was reading a recent discussion paper by Neil Ericsson, titled "Economic Forecasting in Theory and Practice: An Interview With David F. Hendry". The interview is to be published in the International Journal of Forecasting.

Here's the abstract:

"David Hendry has made major contributions to many areas of economic forecasting. He has developed a taxonomy of forecast errors and a theory of unpredictability that have yielded valuable insights into the nature of forecasting. He has also provided new perspectives on many existing forecast techniques, including mean square forecast errors, add factors, leading indicators, pooling of forecasts, and multi-step estimation. In addition, David has developed new forecast tools, such as forecast encompassing; and he has improved existing ones, such as nowcasting and robustification to breaks. This interview for the International Journal of Forecasting explores David Hendry’s research on forecasting."

Near the end of the wide-ranging and thought-provoking interview, David makes the following point:
"Many top econometricians are now involved in the theory of forecasting, including Frank Diebold, Hashem Pesaran, Peter Phillips, Lucrezia Reichlin, Jim Stock, Timo Teräsvirta, KenWallis, and MarkWatson. Their technical expertise as well as their practical forecasting experience is invaluable in furthering the field. A mathematical treatment can help understand economic forecasts, as the taxonomy illustrated. Recent developments are summarized in the books by Hendry and Ericsson (2001), Clements and Hendry (2002), Elliott, Granger, and Timmermann (2006), and Clements and Hendry (2011b). Forecasting is no longer an orphan of the profession."
(My emphasis added; DG)

Neil's interview makes great reading, and I commend it to you.


© 2016, David E. Giles

Friday, November 18, 2016

The Dead Grandmother/Exam Syndrome

Anyone who's had to deal with students will be familiar with the problem that biologist Mike Adams discussed in his 1999 piece, "The Dead Grandmother/Exam Syndrome", in the Annals of Improbable Research. 😊

As Mike noted,
"The basic problem can be stated very simply:
A student’s grandmother is far more likely to die suddenly just before the student takes an exam, than at any other time of year."
Based on his data, Mike observed that:
"Overall, a student who is failing a class and has a final coming up is more than 50 times more likely to lose a family member than is an A student not facing any exams. Only one conclusion can be drawn from these data. Family members literally worry themselves to death over the outcome of their relative's performance on each exam.
Naturally, the worse the student’s record is, and the more important the exam, the more the family worries; and it is the ensuing tension that presumably causes premature death."
I'll leave you to read the rest, and to find out why grandmothers are more susceptible to this problem than are grandfathers.

Enjoy Mike's research - and then make sure that you put a link to his paper on your course outlines!


© 2016, David E. Giles

Thursday, November 17, 2016

Inside Interesting Integrals

In some of my research - notably that relating to statistical distribution theory, and that in Bayesian econometrics - I spend quite a bit of time dealing with integration problems. As I noted in this recent post, integration is something that we really can't avoid in econometrics - even if it's effectively just "lurking behind the scenes", and not right in our face.

Contrary to what you might think, this can be rather interesting!

We can use software, such as Maple or Mathematica, to help us evaluate many complicated integrals. Of course, that wasn't always so, and in any case it's a pity to let your computer have all the fun when you could get in there and get your hands dirty. Is there anything more thrilling than "cracking" a nasty-looking integral?
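Just to illustrate what I mean, here's a tiny sketch of my own (in Python's SymPy library, which is just my choice for illustration here; the numbers and examples are mine, not from any book mentioned in this post) that evaluates two classic definite integrals symbolically:

    # A minimal illustrative sketch using SymPy (my choice here; the post mentions
    # Maple and Mathematica). It evaluates two classic definite integrals symbolically.
    import sympy as sp

    x = sp.symbols('x')

    # The Gaussian integral: the integral of exp(-x^2) over the real line equals sqrt(pi).
    print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))    # sqrt(pi)

    # A Gamma-function example: the integral of x^2 * exp(-x) from 0 to infinity is Gamma(3) = 2.
    print(sp.integrate(x**2 * sp.exp(-x), (x, 0, sp.oo)))     # 2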

I rely a lot on the classic book, Table of Integrals, Series, and Products, by Gradshteyn and Ryzhik. It provides a systematic tabulation of thousands of integrals and other functions. I know that there are zillions of books that discuss various standard methods (and non-standard tricks) to help us evaluate integrals. I'm not qualified to judge which ones are the best, but here's one that caught my attention some time back, and which I've enjoyed delving into in recent months.

It's written by an electrical engineer, Paul J. Nahin, and it's called Inside Interesting Integrals.

I just love Paul's style, and I think that you will too. For instance, he describes his book in the following way -
"A Collection of Sneaky Tricks, Sly Substitutions, and Numerous Other Stupendously Clever, Awesomely Wicked, and Devilishly Seductive Maneuvers for Computing Nearly 200 Perplexing Definite Integrals From Physics, Engineering, and Mathematics. (Plus 60 Challenge Problems with Complete, Detailed Solutions.)"
Well, that certainly got my attention!

And then there's the book's "dedication":
"This book is dedicated to all who, when they read the following line from John le Carre´’s 1989 Cold War spy novel The Russia House, immediately know they have encountered a most interesting character:
'Even when he didn’t follow what he was looking at, he could relish a good page of mathematics all day long.'
as well as to all who understand how frustrating is the lament in Anthony Zee’s book Quantum Field Theory in a Nutshell:
'Ah, if we could only do the integral … . But we can’t.' "
What's not to love about that?

Take a look at Inside Interesting Integrals - it's a gem.

© 2016, David E. Giles

Saturday, November 12, 2016

Monte Carlo Simulation Basics, II: Estimator Properties

In the early part of my recent post in this series about Monte Carlo (MC) simulation, I made the following comments regarding its potential usefulness in econometrics:
".....we usually avoid using estimators that are are "inconsistent". This implies that our estimators are (among other things) asymptotically unbiased. ......however, this is no guarantee that they are unbiased, or even have acceptably small bias, if we're working with a relatively small sample of data. If we want to determine the bias (or variance) of an estimator for a particular finite sample size (n), then once again we need to know about the estimator's sampling distribution. Specifically, we need to determine the mean and the variance of that sampling distribution. 
If we can't figure the details of the sampling distribution for an estimator or a test statistic by analytical means - and sometimes that can be very, very, difficult - then one way to go forward is to conduct some sort of MC simulation experiment."
Before proceeding further, let's recall just what we mean by a "sampling distribution". It's a very specific concept, and not all statisticians agree that it's even an interesting one.
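Just to fix ideas before we get to that, here's a minimal sketch (in Python, purely for illustration; the estimator, numbers, and code are my own choices, not the code behind this series) of how an MC experiment can approximate the bias and variance of an estimator for a fixed sample size n. The example uses the maximum likelihood estimator of a normal population's variance, which is biased in finite samples:

    # A minimal illustrative sketch (not the code used in this series): approximate
    # the bias and variance of an estimator for a fixed sample size n by simulation.
    import numpy as np

    rng = np.random.default_rng(123)
    n, n_reps = 20, 100_000
    true_var = 4.0                                   # the population variance being estimated

    # The ML estimator of the variance (divisor n) has expectation ((n-1)/n)*sigma^2,
    # so its finite-sample bias is -sigma^2/n.
    estimates = np.empty(n_reps)
    for r in range(n_reps):
        sample = rng.normal(loc=0.0, scale=np.sqrt(true_var), size=n)
        estimates[r] = np.mean((sample - sample.mean()) ** 2)    # MLE, divisor n

    print("MC estimate of bias:     ", estimates.mean() - true_var)
    print("Theoretical bias:        ", -true_var / n)
    print("MC estimate of variance: ", estimates.var())

The MC estimate of the bias should come out very close to the theoretical value of -0.2 when the number of replications is this large.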

Tuesday, November 8, 2016

Monte Carlo Simulation Basics, I: Historical Notes

Monte Carlo (MC) simulation provides us with a very powerful tool for solving all sorts of problems. In classical econometrics, we can use it to explore the properties of the estimators and tests that we use. More specifically, MC methods enable us to mimic (computationally) the sampling distributions of estimators and test statistics in situations that are of interest to us. In Bayesian econometrics we use this tool to actually construct the estimators themselves. I'll put the latter to one side in what follows.
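As a concrete (and purely illustrative) example of what "mimicking" a sampling distribution means, here's a small sketch in Python. Everything in it is my own choice for illustration; it simulates the usual t-statistic under the null hypothesis and checks the test's empirical rejection rate against the nominal 5% level:

    # A minimal illustrative sketch: mimic the sampling distribution of the usual
    # t-statistic for H0: mu = 0 by simulation, and check the empirical size of the test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2016)
    n, n_reps = 25, 50_000
    t_stats = np.empty(n_reps)

    for r in range(n_reps):
        sample = rng.normal(loc=0.0, scale=1.0, size=n)        # data generated under H0
        t_stats[r] = sample.mean() / (sample.std(ddof=1) / np.sqrt(n))

    crit = stats.t.ppf(0.975, df=n - 1)                        # two-sided 5% critical value
    print("Empirical rejection rate:", np.mean(np.abs(t_stats) > crit))   # close to 0.05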

Sunday, November 6, 2016

The BMST Package for Gretl

As a follow-up to this recent post, I heard again from Artur Tarassow.

You'll see from his email message below that he's extended his earlier work and has prepared a new package for Gretl called "Binary Models Specification Tests".

It's really good to see tests of this type being made available for users of different software - especially free software such as Gretl.

Artur writes:

Saturday, November 5, 2016

Snakes in a Room

Teachers frequently use analogies when explaining new concepts. In fact, most people do. A good analogy can be quite eye-opening.

The other day my wife was in the room while I was on the 'phone explaining to someone why we often like to apply BOTH the ADF test and the KPSS test when we're trying to ascertain whether a particular time-series is stationary or non-stationary. (More specifically, whether it is I(0) or I(1).) The conversation was, not surprisingly, relatively technical in nature.
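For readers who'd like to see the mechanics rather than the analogy, here's a minimal sketch of applying both tests to the same series. It uses Python's statsmodels with artificial data of my own; it's just an illustration, not what I was discussing on the call:

    # A minimal illustrative sketch: apply BOTH the ADF test (null: unit root) and
    # the KPSS test (null: stationarity) to the same artificial series.
    import numpy as np
    from statsmodels.tsa.stattools import adfuller, kpss

    rng = np.random.default_rng(42)
    y = np.cumsum(rng.normal(size=250))              # a random walk, so I(1) by construction

    adf_stat, adf_p = adfuller(y)[:2]                # H0: the series has a unit root
    kpss_stat, kpss_p = kpss(y, regression="c", nlags="auto")[:2]   # H0: the series is I(0)

    print(f"ADF  p-value: {adf_p:.3f}")    # a large p-value => can't reject the unit-root null
    print(f"KPSS p-value: {kpss_p:.3f}")   # a small p-value => reject the null of stationarity
    # Because the two nulls point in opposite directions, agreement between the tests
    # gives a more convincing verdict on I(0) vs. I(1) than either test alone.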

After the call was over, it occurred to me that my wife (who is an artist, and not an econometrician) might ask me what the heck all that gobbledygook was about. As it happened, she didn't - she had more important things on her mind, no doubt. But it forced me to think about a useful analogy that one might use in this particular instance.

I'm not suggesting that what I came up with is the best possible analogy, but for what it's worth I'll share it with you.

Friday, November 4, 2016

November Reading

You'll see that this month's reading list relates, in part, to my two recent posts about Ted Anderson and David Cox.
  • Acharya, A., M. Blackwell, & M. Sen, 2015. Explaining causal findings without bias: Detecting and assessing direct effects. RWP15-194, Harvard Kennedy School.
  • Anderson, T.W., 2005. Origins of the limited information maximum likelihood and two-stage least squares estimators. Journal of Econometrics, 127, 1-16.
  • Anderson, T.W. & H. Rubin, 1949. Estimation of the parameters of a single equation in a complete system of stochastic equations. Annals of Mathematical Statistics, 20, 46-63.
  • Cox, D.R., 1972. Regression models and life-tables (with discussion). Journal of the Royal Statistical Society B, 34, 187–220.
  • Malsiner-Walli, G. & H. Wagner, 2011. Comparing spike and slab priors for Bayesian variable selection. Austrian Journal of Statistics, 40, 241-264.
  • Psaradakis, Z. & M. Vavra, 2016. Portmanteau tests for linearity of stationary time series. Working Paper 1/2016, National Bank of Slovakia.
© 2016, David E. Giles

Thursday, November 3, 2016

T. W. Anderson: 1918-2016

Unfortunately, this post deals with the recent loss of one of the great statisticians of our time - Theodore (Ted) W. Anderson.

Ted passed away on 17 September of this year, at the age of 98.

I'm hardly qualified to discuss the numerous, path-breaking, contributions that Ted made as a statistician. You can read about those in De Groot (1986), for example.

However, it would be remiss of me not to devote some space to reminding readers of this blog about the seminal contributions that Ted Anderson made to the development of econometrics as a discipline. In one of the "ET Interviews", Peter Phillips talks with Ted about his career, his research, and his role in the history of econometrics.  I commend that interview to you for a much more complete discussion than I can provide here.

(See this post for information about other ET Interviews).

Ted's path-breaking work on the estimation of simultaneous equations models, under the auspices of the Cowles Commission, was enough in itself to put him in the Econometrics Hall of Fame. He gave us the LIML estimator, and the Anderson and Rubin (1949, 1950) papers are classics of the highest order. It's been interesting to see those authors' test for over-identification being "resurrected" recently by a new generation of econometricians. 

There are all sorts of other "snippets" that one can point to as instances where Ted Anderson left his mark on the history and development of econometrics.

For instance, have you ever wondered why we have so many different tests for serial independence of regression errors? Why don't we just use the uniformly most powerful (UMP) test and be done with it? Well, the reason is that no such test (against the alternative of a first-order autoregressive process) exists.

That was established by Anderson (1948), and it led directly to the efforts of Durbin and Watson to develop an "approximately UMP test" for this problem.

As another example, consider the "General-to-Specific" testing methodology that we associate with David Hendry, Grayham Mizon, and other members of the (former?) LSE school of thought in econometrics. Why should we "test down", and not "test up" when developing our models? In other words, why should we start with the most general form of the model, and then successively test and impose restrictions on the model, rather than starting with a simple model and making it increasingly complex? The short answer is that if we take the former approach, and "nest" the successive null and alternative hypotheses in the appropriate manner, then we can appeal to a theorem of Basu to ensure that the successive test statistics are independent. In turn, this means that we can control the overall significance level for the set of tests to what we want it to be. In contrast, this isn't possible if we use a "Simple-to-General" testing strategy.
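Just to spell out that last step with a back-of-the-envelope calculation (my own illustrative numbers, not anything from Anderson's papers): if the k successive tests are independent and each is run at individual level gamma, the probability of at least one false rejection is 1 - (1 - gamma)^k, so gamma can be chosen to deliver any desired overall level alpha.

    # A sketch of the arithmetic: with k independent tests each at level gamma,
    # P(at least one false rejection) = 1 - (1 - gamma)^k. Choose gamma to hit
    # a desired overall level alpha. (Illustrative numbers only.)
    k = 4           # number of nested tests in the "testing down" sequence
    alpha = 0.05    # desired overall significance level

    gamma = 1 - (1 - alpha) ** (1 / k)
    print(f"Per-test level needed for an overall level of {alpha:.0%}: {gamma:.4f}")

    # Check that the implied overall level recovers alpha.
    print(f"Implied overall level: {1 - (1 - gamma) ** k:.4f}")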

All of this is spelled out in Anderson (1962) in the context of polynomial regression, and is discussed further in Ted's classic time-series book (Anderson, 1971). The LSE school referred to this work in promoting the "General-to-Specific" methodology.

Ted Anderson published many path-breaking papers in statistics and econometrics and he wrote several books - arguably, the two most important are Anderson (1958, 1971). He was a towering figure in the history of econometrics, and with his passing we have lost one of our founding fathers.

References

Anderson, T.W., 1948. On the theory of testing serial correlation. Skandinavisk Aktuarietidskrift, 31, 88-116.

Anderson, T.W., 1958. An Introduction to Multivariate Statistical Analysis. Wiley, New York (2nd ed., 1984).

Anderson, T.W., 1962. The choice of the degree of a polynomial regression as a multiple decision problem. Annals of Mathematical Statistics, 33, 255-265.

Anderson, T.W., 1971. The Statistical Analysis of Time Series. Wiley, New York.

Anderson, T.W. & H. Rubin, 1949. Estimation of the parameters of a single equation in a complete system of stochastic equations. Annals of Mathematical Statistics, 20, 46-63.

Anderson, T.W. & H. Rubin, 1950. The asymptotic properties of the parameters of a single equation in a complete system of stochastic equations. Annals of Mathematical Statistics, 21, 570-582.

De Groot, M.H., 1986. A Conversation with T.W. Anderson: An interview with Morris De Groot. Statistical Science, 1, 97–105.

© 2016, David E. Giles

I Was Just Kidding......!

Back in 2011 I wrote a post that I titled, "Dummies for Dummies". It began with the suggestion that I'd written a book of that name, and it included this mock-up of the "cover":


(Thanks to former grad. student, Jacob Schwartz, for helping with the pic.)

Although I did say, "O.K., I'm (uncharacteristically) exaggerating just a tad", apparently a few people took me too seriously/literally. I've had a couple of requests for the book, and even one of my colleagues asked me when it would be appearing.

Sadly, no book was ever intended - somehow, I just don't think there's the market for it!

© 2016, David E. Giles

Wednesday, November 2, 2016

Specification Tests for Logit Models Using Gretl

In various earlier posts I've commented on the need for conducting specification tests when working with Logit and Probit models. (For instance, see here, here, and here.)

One of the seminal references on this topic is Davidson and MacKinnon (1984). On my primary website, you can find a comprehensive list of other related references, together with EViews files that will enable you to conduct various specification tests with LDV models.

The link for that material is here.

Today I had an email from Artur Tarassow at the University of Hamburg. He wrote:
I know that you're already aware of the open-source econometric software called "Gretl". 
I would like to let you know that I updated my package "LOGIT_HETERO.gfn". This package runs both the tests of homoskedasticity and correct functional form based on your nice program "Logit_hetero.prg" written for EViews.
If you want to have a look at it, simply run:
    set echo off
    set messages off
    install LOGIT_HETERO.gfn
    include LOGIT_HETERO.gfn
    open http://web.uvic.ca/~dgiles/downloads/binary_choice/Logistic_Burr.wf1
    logit Y 0 X1 X2
    matrix M = LOGIT_HETERO(Y,$xlist,$coeff,1)
    print M
Thanks for this, Artur - I'm sure it will be very helpful to many readers of this blog.

Footnote: See Artur's comment below, and the more recent post here. In particular, note his remark: "As a note to your blog readers: The two Logit model related packages “logit_burr.gfn” and “LOGIT_HETERO.gfn” are not available any more, as BMST includes both of them."

Reference

Davidson, R. & J. G. MacKinnon, 1984. Convenient specification tests for logit and probit models. Journal of Econometrics, 25, 241-262.

© 2016, David E. Giles

Tuesday, November 1, 2016

International Prize in Statistics


A few days ago, the inaugural winner of the biennial International Prize in Statistics was announced.

The first recipient of the new award is Sir David Cox, whose work is, of course, well known to econometricians.

The award was made to Sir David for his "Survival Analysis Model Applied in Medicine, Science, and Engineering".