Friday, April 29, 2011

Testing for Granger Causality

Several people have asked me for more details about testing for Granger (non-) causality in the context of non-stationary data. This was prompted by my brief description of some testing that I did in my "C to Shining C" posting of 21 March this year. I have an example to go through here that will illustrate the steps that I usually take when testing for causality, and I'll use them to explain some of the pitfalls to avoid. If you're an EViews user, then I can also show you a little trick to help you go about things in an appropriate way with minimal effort.

In my earlier posting, I mentioned that I had followed the Toda and Yamamoto (1995) procedure to test for Granger causality. If you check out this reference, you'll find you really only need to read the excellent abstract to get the message for practitioners. In that sense, it's a rare paper!

It's important to note that there are other approaches that can be taken to make sure that your causality testing is done properly when the time-series you're using are non-stationary (& possibly cointegrated). For instance, see Lütkepohl (2007, Ch. 7).

The first thing that has to be emphasised is the following:

Tuesday, April 26, 2011

Drawing Inferences From Very Large Data-Sets

It's not that long ago that one of the biggest complaints to be heard from applied econometricians was that there were never enough data of sufficient quality to enable us to do our job well. I'm thinking back to a time when I lived in a country that had only annual national accounts, with the associated publication delays. Do you remember when monthly data were hard to come by; and when surveys were patchy in quality, or simply not available to academic researchers? As for longitudinal studies - well, they were just something to dream about!

Now, of course that situation has changed dramatically. The availability, quality and timeliness of economic data are all light years away from what a lot of us were brought up on. Just think about our ability, now, to access vast cross-sectional and panel data-sets, electronically, from our laptops via wireless internet.

Obviously things have changed for the better, in terms of us being able to estimate our models and provide accurate policy advice. Right? Well, before we get too complacent, let's think a bit about what this flood of data means for the way in which we interpret our econometric/statistical results. Could all of these data possibly be too much of a good thing? 
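To preview one flavour of the issue with my own made-up numbers (not any real data set): with a very large sample, even a trivially small effect shows up as "statistically significant" at conventional levels.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
effect = 0.01     # an economically negligible mean shift
for n in (100, 10_000, 1_000_000):
    sample = rng.normal(loc=effect, scale=1.0, size=n)
    t_stat, p_val = stats.ttest_1samp(sample, popmean=0.0)
    print(f"n = {n:>9,}: p-value = {p_val:.4f}")
```

At n = 100 the effect is invisible; by n = 1,000,000 the p-value is essentially zero, even though nothing of economic substance has changed.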

Monday, April 25, 2011

What a Difference a Graph Makes

I've previously made a case for plotting your data before you model them. But not all graphs are created equal. In this regard, I liked Nick Gruen's re-working of Paul Krugman's graph on the tax burden in the U.S. Same data - opposite conclusion! Take a look at Nick's posting,  'Zombie Tax Lies' at the Australian site,

The Devil is in the details!

p.s.: Thanks to Sinclair Davidson for letting me know that the re-drawn graph actually originated with him - see the comments to this posting.

© 2011, David E. Giles

Sunday, April 24, 2011

How Many Weeks are There in a Year?

What a question (you'd think)! Well, bear with me, because I have some news for you. I've previously blogged about the decline and fall of our higher education system, but there's always room for a little embellishment. One thing that I've learned over my years of teaching econometrics and economic statistics is that you should always expect the unexpected. It goes with the territory.

Here's a situation that I encountered a few years ago when I was teaching a first-level undergraduate economic statistics course. I recount it simply to illustrate the point that I learn something every time I step into the classroom,  and certainly not out of disrespect for any particular individual(s). This is how it went down.

Wednesday, April 20, 2011

Thursday Magazine

In this town we have a weekly publication, named after a particular day of the week - but not the day on which it's published. Go figure! I'm sure there's a history to this, but like lots of things in life, it escapes me.

I used to find the movie reviews section of this publication quite helpful. If they gave a movie five stars, I'd avoid it like the plague; if they gave it just one star I'd be in the line-up to see it at the crack of dawn. Worked like a charm - every time! Then one week they declared both Furry Vengeance and MacGruber to be one-star movies, and they were dead right! The rest, as they say, is history - there was an unusually civil parting of the ways.

The magazine in question also contains a must-read section that appears under the title: "You P*****d Me Off ". Well, the title actually has all of its letters intact in the magazine, but if I included them for you here I'd be breaking Google's Blogspot rules, and I'd have to pack up shop. So have fun filling in the blanks yourself. Various individuals use this section to vent, anonymously, about real or perceived personal affronts. In some places they use guns to deal with this sort of thing, but we Canadians know that the pen is mightier than the sword - right?

It occurred to me that a YPMO section could be a useful addition to certain academic journals - probably after the "Notes and Comments" section. To reduce the number of lawsuits, the section title could be de-personalized to "Things That P*** Me Off", or TTPMO for short. In fact, given the degree of specialization now associated with academic research and publications, there'd be lots of scope for some really neat titles for this new section, specifically tailored to the journal in question. Editors could go nuts dreaming up appropriate names. For instance, consider the following economics journals:
Journal of Economic Theory - Data (DTPMO)
Economic Inquiry - Questions (QTPMO)
Journal of Economic Surveys - Tele-Marketers (TMWPMO)
Economic Modelling - Dress Designers (DDWPMO)
Well,...... maybe not.

Anyway, what are some of the TTPMO after 35 years or so in the 'Econometrics business'? Oh boy - where to start?

  • Econometrica - why are there virtually no econometrics papers published in this journal any more?
  • Applied economists who think it's O.K. to ignore any developments in econometric theory since they were in grad. school.
  • Econometrics courses or books that tell students what to do, but not why.
  • Economists & econometricians who 're-discover' results that are long-established in the Statistics literature. 
  • Practitioners who think they're testing for Granger (non-) causality, but continue to screw up.
  • Seminar presenters who insist on going through the math. in gory detail.

Y'know - some of these things might just be worth blogging about some time! Feel free to let me know your additions to the list.

© 2011, David E. Giles

Monday, April 18, 2011

Laughing Our Way Out of a Recession

Don't ask me why, but the other day I was thinking about one of my all-time favourite opening sentences in an academic paper:

         "0.    Introduction.   Consider a light bulb."
         (Balkema and de Haan, 1974, p.792.)

It has a certain ring to it, doesn't it? It takes courage to begin a paper in that way. More courage than I have! From there, it was just a small leap to begin reminiscing about memorable light bulb jokes, but I'm not going to go down that track. Actually, I don't have a stock of econometrics jokes, though I recognize that many jokes are very "transportable" across professions. For instance, we could quite easily convert the line, "Once I couldn't even spell 'Engineer' - now I are one!", into something that hits a little closer to home, also beginning with an 'E'. But I digress!

In recent times there's been a lot of press relating to measuring 'happiness' (whatever that is), and to the idea that perhaps we should replace measures such as GDP with some sort of Gross National Happiness Index, at least for certain purposes. I'm not sure what I could possibly add to that discussion directly, but it got me thinking about how our mood is governed in part by the state of the economy, and that perhaps this is reflected in our use of humour to deal with both personal and economic depression.

A lot of cartoons that appear in newspapers and magazines relate, not too surprisingly, to political events and politicians. Political satire has always been popular. It's also the case that a decent number of these cartoons relate specifically to economic matters. Of course, I know that there is often an overlap between economics and politics. None the less, I think we'll all agree that we regularly see cartoons whose primary focus is some aspect of the economy.

Friday, April 15, 2011

Price Indices From Regression Models

We're all familiar with index numbers. We encounter them every day - perhaps in the form of the CPI or the PPI;  or maybe in the form of some index of production. The effective exchange rate is an index that I'll be posting about in the near future. Share price indices are also familiar, although the DJIA has some very peculiar aspects to its construction that deserve separate consideration on another occasion.

The thing about an index number is that it has only relative content. That's to say, if a particular price index, say P, has a (unit-less) value of 110, that number tells us nothing about the price level at all. It's only when we compare two values of the same index - say, the values in 2010 and in 2011 - that the numbers really mean anything. If P = 100 in 2010 and P = 105 in 2011, then the average price of the bundle of goods being measured by P has changed (risen in this case) by 5% - not by $5 or some other value. In other words, over time, or perhaps across regions, an index number measures proportional changes.

When any index number is constructed, a base period and a base value must first be chosen. For example, we might decide to choose a base year of 1996, and a base value of 100. There's absolutely nothing wrong with choosing a base value of, say, 167.5290 in 1996 - it would just be unnecessarily inconvenient. In that case if the index rose to 184.2819 in 1997, this would imply a relative price change of 100*[(184.2819 - 167.5290) / 167.5290] = 10%. Wouldn't it have been easier if we had chosen the base value to be 100, observed a value of 110 in 1997, and then been able to see immediately that this implied a 10% increase in prices over this one-year period?

Of course, it's the fact that an index measures only relative changes over time that enables us to "re-base" (change the base year) an index without losing any information at all. The numbers in the index series just get scaled, multiplicatively, by the same factor, leaving relative values - and the implications for price changes -  unaltered.
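The arithmetic above can be put in a few lines (same made-up index values as in the example):

```python
# An index measures only proportional changes, so re-basing -
# a multiplicative re-scaling - leaves those changes untouched.
index = {1996: 167.5290, 1997: 184.2819}

pct_change = 100 * (index[1997] - index[1996]) / index[1996]
print(round(pct_change, 1))            # 10.0

# Re-base so that 1996 = 100: scale every value by the same factor.
rebased = {yr: 100 * v / index[1996] for yr, v in index.items()}
pct_rebased = 100 * (rebased[1997] - rebased[1996]) / rebased[1996]
print(round(pct_rebased, 1))           # 10.0 - the relative change is unchanged
```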

Monday, April 11, 2011

My Tips Jar

I think it's time to put the "tips jar" out on my office desk again. I used to have one, but I gave up after a certain colleague (?) emptied the coins out and bought himself a new car  - or was it a cup of coffee? I figured, if they have a tips jar on the counter at the coffee shop, why can't I have one too? Seemed like a great idea at the time (standard excuse), but in fact it just didn't work out too well. At least I didn't have to worry about reporting the extra cash to the CRA! 

Friday, April 8, 2011

May I Show You My Collection of p-Values?

Tom Thorn kindly sent me the following link, which is certainly pertinent to this posting:

It's always fun to start things off with a snap quiz - it wakes you up, if nothing else. So here we go - multiple choice, so you have at least a 20% chance of getting the correct answer:

Question:    Which of the following statements about p-values is correct?

1. A p-value of 5% implies that the probability of the null hypothesis being true is (no more than) 5%.
2. A p-value of 0.005 implies much more "statistically significant" results than does a p-value of 0.05.
3. The p-value is the likelihood that the findings are due to chance.
4. A p-value of 1% means that there is a 99% chance that the data were sampled from a population that's consistent with the null hypothesis.
5. None of the above.
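While you mull that over, here's a small simulation (my own construction, not part of the quiz) showing what a p-value actually is: under a true null hypothesis the p-value is uniformly distributed on [0, 1], so by construction about 5% of tests "reject" at the 5% level - which is rather different from the p-value being "the probability that the null is true".

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pvals = np.empty(5000)
for i in range(5000):
    sample = rng.normal(loc=0.0, scale=1.0, size=30)   # H0: mean = 0 is TRUE
    pvals[i] = stats.ttest_1samp(sample, popmean=0.0).pvalue

share = np.mean(pvals < 0.05)
print(f"Share of p-values below 0.05 under a true null: {share:.3f}")
```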

Monday, April 4, 2011

Curious Regressions

"The picture-examining eye is the best finder we have of the wholly unanticipated"
(John Tukey, 1915 - 2000)

Back in 1992 I participated in a panel discussion at the Australasian Meetings of the Econometric Society, in Melbourne. I don't recall what the topic was for the panel discussion, but I do recall that my brief was to talk about some aspects of "best practice" in teaching econometrics. In case you weren't there, if you slept through my talk, or if you simply hadn't yet learned how to spell "econometrics" back in 1992, I thought I'd share some related pearls of wisdom with you.

One of the main points that I made in that panel discussion could be summarized as: "Plot it or Perish". In other words, if you don't graph your data before undertaking any econometric analysis, then you're running the risk of a major catastrophe. There's nothing new in this message, of course - it's been trumpeted by many eminent statisticians in the past. However, I went further and made a scandalous (but serious) suggestion.

My proposal was that no econometrics computer package should allow you to see any regression results until you'd viewed some suitable data-plots. So, if you were using your favourite package, and issued a generic command along the lines: OLS Y C X, then a scatter-plot of Y against X would appear on your computer screen. You'd be given an option to see other plots (such as log(Y) against log(X), and so on). In the case of a multiple regression model, a command of the form: OLS Y C X1 X2 X3, would result in several partial plots of Y against each of the regressors, etc. Then, after a prescribed time delay you would eventually see the message:
"Are you absolutely sure that you want to fit this regression model?"
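For what it's worth, the classic illustration of why this matters is Anscombe's famous quartet: data sets with (essentially) identical OLS output but wildly different scatter-plots. A quick sketch with the first two of the four sets:

```python
import numpy as np

# First two of Anscombe's (1973) four data sets.
x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])
y2 = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])

def ols_line(x, y):
    """Simple OLS slope and intercept for y on x (with a constant)."""
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    a = y.mean() - b * x.mean()
    return a, b

for y in (y1, y2):
    a, b = ols_line(x, y)
    print(f"intercept = {a:.2f}, slope = {b:.2f}")
# Both fits are (approximately) intercept = 3.00, slope = 0.50 -
# but plot them and you'll see a line in one case, a parabola in the other.
```

If the regression output alone were all the package showed you, you'd never know the second "linear" relationship isn't linear at all. Plot it or perish.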