Monday, December 26, 2016

Irving Fisher & Distributed Lags

Some time back, Mike Belongia (U. Mississippi) emailed me as follows: 
"I enjoyed your post on Shirley Almon;  her name was very familiar to those of us of a certain age.
With regard to your planned follow-up post, I thought you might enjoy the attached piece by Irving Fisher who, in 1925, was attempting to associate variations in the price level with the volume of trade.  At the bottom of p. 183, he claims that "So far as I know this is the first attempt to distribute a statistical lag" and then goes on to explain his approach to the question.  Among other things, I'm still struck by the fact that Fisher's "computer" consisted of his intellect and a pencil and paper."
The 1925 paper by Fisher that Mike is referring to can be found here. Here are pages 183 and 184:



Thanks for sharing this interesting bit of econometrics history, Mike. And I haven't forgotten that I promised to prepare a follow-up post on the Almon estimator!

© 2016, David E. Giles

Saturday, December 24, 2016

Top New Posts of 2016

Thank you to all readers of this blog for your continued involvement during 2016.

Of the new posts released this year, the Top Five in terms of page-views were:
  1. Forecasting From an Error Correction Model
  2. I Was Just Kidding......!
  3. Choosing Between the Logit and Probit Models
  4. The Forecasting Performance of Models for Cointegrated Data
  5. A Quick Illustration of Pre-Testing Bias
Season's greetings!

© 2016, David E. Giles

Sunday, December 18, 2016

Not All Measures of GDP are Created Equal

A big hat-tip to one of my former grad. students, Ryan MacDonald at Statistics Canada, for bringing to my attention a really informative C.D. Howe Institute Working Paper by Philip Cross (former Chief Economic Analyst at Statistics Canada).


We all know what's meant by Gross Domestic Product (GDP), don't we? O.K., but do you know that there are lots of different ways of calculating GDP, including the six that Philip discusses in detail in his paper, namely:
  • GDP by industry
  • GDP by expenditure
  • GDP by income
  • The quantity equation
  • GDP by input/output
  • GDP by factor input
So why does this matter?

Well, for one thing - and this is one of the major themes of Philip's paper - how we view (and compute) GDP has important implications for policy-making. And, it's important to be aware that different ways of measuring GDP can result in different numbers.

For instance, consider this chart from p.16 of Philip's paper:


My first reaction when I saw this was "it's not flat". However, as RMM has commented below, "the line actually shows us the fluctuations of industries that are more intermediate compared with industries (or the total) that includes only final goods. Interesting and useful for business cycle analysis..."

Here's my take-away (p.18 of the paper):
"For statisticians, the different measures of GDP act as an internal check on their conceptual and empirical consistency. For economists, the different optics for viewing economic activity lead to a more profound understanding of the process of economic growth. Good analysis and policy prescription often depend on finding the right optic to understand a particular problem."
Let's all keep this in mind when we look at the "raw numbers".

© 2016, David E. Giles

Wednesday, December 14, 2016

Stephen E. Fienberg, 1942-2016

The passing of Stephen Fienberg today is another huge loss for the statistics community. Carnegie Mellon University released this obituary this morning.

Steve was born and raised in Toronto, and completed his undergraduate training in mathematics and statistics at the University of Toronto before moving to Harvard University for his Ph.D. His contributions to statistics, and to the promotion of statistical science, were immense.

As the CMU News noted:
"His many honors include the 1982 Committee of Presidents of Statistical Societies President's Award for Outstanding Statistician Under the Age of 40; the 2002 ASA Samuel S. Wilks Award for his distinguished career in statistics; the first Statistical Society of Canada's Lise Manchester Award in 2008 to recognize excellence in state-of-the-art statistical work on problems of public interest; the 2015 National Institute of Statistical Sciences Jerome Sacks Award for Cross-Disciplinary Research; the 2015 R.A. Fisher Lecture Award from the Committee of Presidents of Statistical Societies; and the ISBA 2016 Zellner Medal.
Fienberg published more than 500 technical papers, brief papers, editorials and discussions.  He edited 19 books, reports and other volumes and co-authored seven books, including 1999's "Who Counts? The Politics of Census-Taking in Contemporary America," which he called "one of his proudest achievements." " 
There are at least three terrific interviews with Steve that we have to remind us of the breadth of his contributions:



© 2016, David E. Giles

Monday, December 5, 2016

Monte Carlo Simulation Basics, III: Regression Model Estimators

This post is the third in a series of posts that I'm writing about Monte Carlo (MC) simulation, especially as it applies to econometrics. If you've already seen the first two posts in the series (here and here) then you'll know that my intention is to provide a very elementary introduction to this topic. There are lots of details that I've been avoiding, deliberately.

In this post we're going to pick up where the previous post, on estimator properties and the sampling distribution, left off. Specifically, I'll apply the ideas introduced in that post in the context of regression analysis. We'll take a look at the properties of the Least Squares estimator in three different situations. In doing so, I'll be able to illustrate, through simulation, some "text book" results that you'll know about already.

If you haven't read the immediately preceding post in this series already, I urge you to do so before continuing. The material and terminology that follow will assume that you have.
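To give a taste of what such an experiment looks like, here's a minimal sketch (my own illustration, not code from the post) that simulates the sampling distribution of the OLS slope estimator in a simple linear regression with well-behaved errors. The sample size, number of replications, and parameter values are all assumptions chosen for the example:

```python
import numpy as np

# Minimal Monte Carlo sketch (illustrative; parameter values are assumptions):
# simulate the sampling distribution of the OLS slope estimator in the model
# y = b0 + b1*x + e, with fixed regressors and i.i.d. N(0, sigma^2) errors.
rng = np.random.default_rng(42)

n, n_rep = 25, 10_000          # sample size; number of MC replications
b0, b1, sigma = 1.0, 2.0, 1.5  # "true" parameter values for the experiment
x = rng.uniform(0.0, 10.0, n)  # regressors held fixed across replications

slopes = np.empty(n_rep)
for r in range(n_rep):
    e = rng.normal(0.0, sigma, n)
    y = b0 + b1 * x + e
    # OLS slope for a single regressor: sample cov(x, y) / sample var(x)
    slopes[r] = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# The mean of the simulated slopes should sit very close to the true
# value 2.0 (OLS is unbiased under these assumptions), and their variance
# approximates the slope estimator's sampling variance for this n.
print(np.mean(slopes))
print(np.var(slopes, ddof=1))
```

Re-running this with different values of n shows the sampling variance shrinking as the sample grows, which is the kind of "text book" result the simulations in the post illustrate.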

Saturday, December 3, 2016

December Reading List

Goodness me! November went by really quickly!
 
© 2016, David E. Giles

Monday, November 28, 2016

David Hendry on "Economic Forecasting"

Today I was reading a recent discussion paper by Neil Ericsson, titled "Economic Forecasting in Theory and Practice: An Interview With David F. Hendry". The interview is to be published in the International Journal of Forecasting.

Here's the abstract:

"David Hendry has made major contributions to many areas of economic forecasting. He has developed a taxonomy of forecast errors and a theory of unpredictability that have yielded valuable insights into the nature of forecasting. He has also provided new perspectives on many existing forecast techniques, including mean square forecast errors, add factors, leading indicators, pooling of forecasts, and multi-step estimation. In addition, David has developed new forecast tools, such as forecast encompassing; and he has improved existing ones, such as nowcasting and robustification to breaks. This interview for the International Journal of Forecasting explores David Hendry’s research on forecasting."

Near the end of the wide-ranging and thought-provoking interview, David makes the following point:
"Many top econometricians are now involved in the theory of forecasting, including Frank Diebold, Hashem Pesaran, Peter Phillips, Lucrezia Reichlin, Jim Stock, Timo Teräsvirta, Ken Wallis, and Mark Watson. Their technical expertise as well as their practical forecasting experience is invaluable in furthering the field. A mathematical treatment can help understand economic forecasts, as the taxonomy illustrated. Recent developments are summarized in the books by Hendry and Ericsson (2001), Clements and Hendry (2002), Elliott, Granger, and Timmermann (2006), and Clements and Hendry (2011b). Forecasting is no longer an orphan of the profession."
(My emphasis added; DG)

Neil's interview makes great reading, and I commend it to you.


© 2016, David E. Giles

Friday, November 18, 2016

The Dead Grandmother/Exam Syndrome

Anyone who's had to deal with students will be familiar with the well-known problem that biologist Mike Adams discussed in his 1999 piece, "The Dead Grandmother/Exam Syndrome", in the Annals of Improbable Research. 😊

As Mike noted,
"The basic problem can be stated very simply:
A student’s grandmother is far more likely to die suddenly just before the student takes an exam, than at any other time of year."
Based on his data, Mike observed that:
"Overall, a student who is failing a class and has a final coming up is more than 50 times more likely to lose a family member than is an A student not facing any exams. Only one conclusion can be drawn from these data. Family members literally worry themselves to death over the outcome of their relative's performance on each exam.
Naturally, the worse the student’s record is, and the more important the exam, the more the family worries; and it is the ensuing tension that presumably causes premature death."
I'll leave you to read the rest, and to find out why grandmothers are more susceptible to this problem than are grandfathers.

Enjoy Mike's research - and then make sure that you put a link to his paper on your course outlines!


© 2016, David E. Giles

Thursday, November 17, 2016

Inside Interesting Integrals

In some of my research - notably that relating to statistical distribution theory, and that in Bayesian econometrics - I spend quite a bit of time dealing with integration problems. As I noted in this recent post, integration is something that we really can't avoid in econometrics - even if it's effectively just "lurking behind the scenes", and not right in our face.

Contrary to what you might think, this can be rather interesting!

We can use software, such as Maple, or Mathematica, to help us to evaluate many complicated integrals. Of course, that wasn't always so, and in any case it's a pity to let your computer have all the fun when you could get in there and get your hands dirty with some hands-on work. Is there anything more thrilling than "cracking" a nasty looking integral?

I rely a lot on the classic book, Table of Integrals, Series and Products, by Gradshteyn and Ryzhik. It provides a systematic tabulation of thousands of integrals and other functions. I know that there are zillions of books that discuss various standard methods (and non-standard tricks) to help us evaluate integrals. I'm not qualified to judge which ones are the best, but here's one that caught my attention some time back and which I've enjoyed delving into in recent months.

It's written by an electrical engineer, Paul J. Nahin, and it's called Inside Interesting Integrals.

I just love Paul's style, and I think that you will too. For instance, he describes his book in the following way -
"A Collection of Sneaky Tricks, Sly Substitutions, and Numerous Other Stupendously Clever, Awesomely Wicked, and Devilishly Seductive Maneuvers for Computing Nearly 200 Perplexing Definite Integrals From Physics, Engineering, and Mathematics. (Plus 60 Challenge Problems with Complete, Detailed Solutions.)"
Well, that certainly got my attention!

And then there's the book's "dedication":
"This book is dedicated to all who, when they read the following line from John le Carré's 1989 Cold War spy novel The Russia House, immediately know they have encountered a most interesting character:
'Even when he didn’t follow what he was looking at, he could relish a good page of mathematics all day long.'
as well as to all who understand how frustrating is the lament in Anthony Zee’s book Quantum Field Theory in a Nutshell:
'Ah, if we could only do the integral … . But we can’t.' "
What's not to love about that?

Take a look at Inside Interesting Integrals - it's a gem.

© 2016, David E. Giles

Saturday, November 12, 2016

Monte Carlo Simulation Basics, II: Estimator Properties

In the early part of my recent post introducing this series of posts about Monte Carlo (MC) simulation, I made the following comments regarding its potential usefulness in econometrics:
".....we usually avoid using estimators that are "inconsistent". This implies that our estimators are (among other things) asymptotically unbiased. ......however, this is no guarantee that they are unbiased, or even have acceptably small bias, if we're working with a relatively small sample of data. If we want to determine the bias (or variance) of an estimator for a particular finite sample size (n), then once again we need to know about the estimator's sampling distribution. Specifically, we need to determine the mean and the variance of that sampling distribution. 
If we can't figure the details of the sampling distribution for an estimator or a test statistic by analytical means - and sometimes that can be very, very, difficult - then one way to go forward is to conduct some sort of MC simulation experiment."
Before proceeding further, let's recall just what we mean by a "sampling distribution". It's a very specific concept, and not all statisticians agree that it's even an interesting one.
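As a quick refresher on the idea, here's a tiny sketch (my own illustration, not code from the post): draw many samples of a fixed size n from one known population, compute the sample mean each time, and look at how that statistic varies across samples. The population parameters below are assumptions chosen for the example:

```python
import numpy as np

# Illustrative sketch of a "sampling distribution" (assumed parameters):
# draw many samples of size n from a N(5, 2^2) population and record the
# sample mean from each draw.
rng = np.random.default_rng(0)

n, n_rep = 20, 50_000
pop_mean, pop_sd = 5.0, 2.0   # assumed population parameters

# Each row is one sample of size n; take the mean of each row.
means = rng.normal(pop_mean, pop_sd, size=(n_rep, n)).mean(axis=1)

# The empirical distribution of `means` approximates the sampling
# distribution of the sample mean: centred at 5.0, with standard
# deviation near pop_sd / sqrt(n) = 2 / sqrt(20) ≈ 0.447.
print(means.mean(), means.std(ddof=1))
```

The point of the exercise: the histogram of `means` is not the population's distribution, and not any one sample's distribution; it's the distribution of the estimator itself across repeated samples, which is exactly the object whose mean and variance tell us about bias and precision.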