Monday, May 30, 2011

Cointegrated at the Hips

Spring is sprung and wedding season is fast approaching. Indeed, some couples have already taken the plunge this Spring. On 28 April, the day before Will and Kate got hitched (sorry - before the Royal nuptials of Prince William and Catherine Middleton), The Economist magazine's Daily Chart showed the average age at first marriage for males and for females in the United Kingdom, from 1890 to 2010. The piece that accompanied the chart went under the jolly title of "The Royal We", and discussed trends in marriage ages with markers at various key royal wedding dates over the years.

Several features of the chart were pretty much as you'd anticipate:
  • The average age for grooms exceeds that for brides.
  • There were structural breaks around the end of the First and Second World Wars.
  • Since about 1970, the average marriage age has been trending upwards (from about 22 and 24 years of age for brides and grooms in 1970; to about 29 and 31 years of age, respectively, in 2010).
Always on the look-out for some new examples for my classes - especially ones that are a little bit "different" in their content - I thought it would be interesting to explore some similar data for U.S. pairings. The following chart shows the median age at first marriage for U.S. couples marrying for the first time between 1947 and 2010. The data are from the U.S. Census Bureau, and are available in the Excel workbook on the Data page that goes with this blog.
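Given the post's title, here's a minimal Python sketch of the sort of two-step (Engle-Granger) cointegration check one might run on a pair of series like these. To be clear, the data below are simulated, not the Census Bureau series, and the rough 5% critical value of about -3.34 (for two variables) is my assumption for illustration:

```python
# A minimal sketch of the two-step Engle-Granger cointegration test,
# on simulated data. The ~-3.34 critical value quoted below is an
# approximate 5% value for the two-variable case (an assumption here).
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = np.cumsum(rng.normal(size=n))              # a random walk, so I(1)
y = 2.0 * x + rng.normal(scale=0.5, size=n)    # cointegrated with x

# Step 1: estimate the long-run relationship by OLS and save residuals.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta

# Step 2: Dickey-Fuller regression on the residuals (no constant):
# regress the first difference of e on its own lagged level.
de = np.diff(e)
e_lag = e[:-1]
gamma = (e_lag @ de) / (e_lag @ e_lag)
resid = de - gamma * e_lag
se = np.sqrt((resid @ resid) / (len(de) - 1) / (e_lag @ e_lag))
t_stat = gamma / se
print(round(t_stat, 2))   # well below the ~-3.34 critical value here
```

With genuinely cointegrated series, as simulated above, the test statistic comes out far below the critical value, so we'd reject the null of no cointegration. Remember that the standard Dickey-Fuller tables don't apply at step 2; the residual-based critical values are more negative.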

Friday, May 20, 2011

Peace Through Statistics

The latest issue of Amstat News (the monthly newsletter of the American Statistical Association) includes this piece. It's about the amazing international effort, spearheaded by Miodrag Lovrić (Serbia), Jasmin Komić (Bosnia and Herzegovina), and Ksenija Dumičić (Croatia), that has led to the publication of the International Encyclopedia of Statistical Science.

The outreach efforts of these three statisticians have earned them a nomination for the 2011 Nobel Prize for Peace. The outcome of this year's award will be announced on 7 October, 2011.

The Encyclopedia contains contributions from 619 authors, representing 104 countries around the world. There are lots of names there that will be very familiar to econometricians. These include, among others:

T. W. Anderson, K. J. Arrow, B. Baltagi, G. E. P. Box, S. Chib, J. S. Chipman, D. R. Cox, D. A. Dickey, H. K. van Dijk, B. Efron, C. W. J. Granger, J. D. Hamilton, J. L. Heckman, S. Hylleberg, R. J. Hyndman, C. M. Jarque, P. E. Kennedy, J. Kmenta, W. Krämer, R. S. Mariano, C. R. Rao and A. Spanos.


© 2011, David E. Giles




Tuesday, May 17, 2011

The Monkey Run

May I show you my collection of slide rules?


When it comes to doing econometrics, students today just have no idea how lucky they are - compared with old geezers like me, that is. So, just for the record, this is how it went down when I was running regressions as a student, in the pre-P.C. (personal computer and politically correct) era:


Punch-card with a single SHAZAM command:
GENR  LAHERCPI = LOG(AHERCPI)



Punch-card with one row (5 integer fields) of input data

Here, we have one row (one sample observation) from a data matrix for five input variables. For a sample of size n there would have been a stack of n cards like this.

Friday, May 13, 2011

Gripe of the Day

I have a pet peeve that I really must get off my chest. Yes, another one! Rest assured, there are plenty more where this one came from.

Logit and Probit models are the joint work-horses of a lot of microeconometric studies. They've been widely used for a very long time, and these days they are frequently used with what I'd call pretty large data-sets - ones where the sample size is in the thousands, or even hundreds of thousands. 

These models are invariably estimated by Maximum Likelihood (ML), and as we all know, ML estimators have great large-sample (asymptotic) properties. Specifically, they're weakly consistent, asymptotically efficient, and asymptotically normal - under specific conditions. These conditions are what we usually term the 'regularity conditions', and these are simply some mild conditions on the derivatives of the underlying density function for the random data in the model. The normal and logistic distributions satisfy these conditions, which were first introduced in the context of the formal properties of ML estimators by Dugué (1937) and Cramér (1946). So, what could go wrong?
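Since the post leans on ML estimation of these models, here's a minimal sketch of what "estimated by Maximum Likelihood" amounts to for a logit model: Newton-Raphson iterations on the log-likelihood, using the score and the information matrix. The data and parameter values below are simulated for illustration, not from any study mentioned here:

```python
# A minimal sketch of ML estimation of a logit model by Newton-Raphson,
# on simulated data (all names and settings here are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
true_beta = np.array([0.5, 1.0])
p = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
y = (rng.uniform(size=n) < p).astype(float)

beta = np.zeros(2)
for _ in range(25):                      # Newton-Raphson iterations
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    grad = X.T @ (y - mu)                # score vector
    W = mu * (1.0 - mu)
    hess = X.T @ (X * W[:, None])        # (negative Hessian) information
    step = np.linalg.solve(hess, grad)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:     # converged
        break

print(beta)   # close to (0.5, 1.0) with a sample this size
```

The logistic log-likelihood is globally concave, so these iterations converge quickly from almost any starting point - which is part of why the regularity conditions are so easily taken for granted in practice.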

Monday, May 9, 2011

A Trick of the Trade

Simultaneous Equations Models (SEMs), together with the treatment of measurement error, lie at the historical foundation of Econometrics as a discipline.

The idea of SEMs for the economy came from Jan Tinbergen, who estimated a 24-equation system for the Dutch economy in 1936. See Tinbergen (1959, pp.37-84) for an English translation. When the first Nobel Prize in Economic Science was awarded in 1969, Tinbergen shared the inaugural honour with Ragnar Frisch (a Norwegian econometrician) for their pioneering work that led to the development of econometrics as a recognized sub-discipline.

It's not that long ago that courses in econometrics included a solid amount of material relating to SEMs. Consequently, students became well aware of issues surrounding parametric identification. They were also very familiar with a raft of estimators designed to deliver consistent, and perhaps asymptotically efficient, estimates in the context of simultaneous systems. These included single equation estimators - generally Instrumental Variables (IV) estimators such as 2SLS and the k-class family, including the Limited Information Maximum Likelihood (LIML) estimator; and full system estimators such as 3SLS and Full Information Maximum Likelihood (FIML).

These days, the estimation of SEMs seems to get relatively little attention in standard econometrics courses. Most students learn about 2SLS, and many of them appreciate that this is just a specific member of the IV family of estimators. However, few students realize that all of the other estimators - yes, including FIML - are also in the IV family. For example, see Hendry (1976).
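To make the "2SLS is just an IV estimator" point concrete, here's a minimal numpy sketch that computes 2SLS in its generic IV form, b = (X'PzX)⁻¹X'Pz y, where Pz is the projection onto the instruments. Everything below (the simulated DGP, coefficient values, variable names) is illustrative only:

```python
# A minimal sketch of 2SLS written in its generic IV form, on simulated
# data with one endogenous regressor and two instruments.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
z1, z2 = rng.normal(size=n), rng.normal(size=n)        # instruments
u = rng.normal(size=n)                                 # structural error
x = 0.8 * z1 + 0.4 * z2 + 0.5 * u + rng.normal(size=n) # endogenous regressor
y = 1.0 + 2.0 * x + u                                  # structural equation

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z1, z2])

# First stage: project X onto the instrument space (Xhat = Pz X);
# second stage / IV form: b = (Xhat'X)^{-1} Xhat'y.
ZX = np.linalg.solve(Z.T @ Z, Z.T @ X)    # first-stage coefficients
Xhat = Z @ ZX                             # fitted regressors
b_2sls = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)

b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)  # inconsistent here
print(b_2sls, b_ols)
```

With cov(x, u) > 0 in this design, the OLS slope is biased upwards, while the 2SLS slope sits near the true value of 2. Note that nothing in the formula is specific to "two stages" - replace Pz with other weighting matrices and you trace out the rest of the IV family.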

Friday, May 6, 2011

Good News Friday


No, I'm not on a bus tour of Europe! It's actually Friday, so I'll spend part of the day engaged in a very pleasant task - sending out emails to authors whose papers have been accepted for publication in a journal that I co-edit. The journal in question is The Journal of International Trade & Economic Development, and I got involved on the editorial side of things there about 15 years ago - mainly to help with the many empirical papers that we receive as submissions.

We use ScholarOne Manuscripts™ (ManuscriptCentral) to handle submissions, refereeing and correspondence electronically. That's certainly made my life a lot easier, and more controllable.

It seems just yesterday that we used to seal three typed copies of our shiny new manuscript, with a cover letter to the editor, into a big fat envelope, and entrust it to our intrepid mail carriers. Then the waiting would begin. Eventually, without warning, a real letter would turn up one day, and we'd know the fate of our endeavours.

I've been an associate editor of quite a number of economics and statistics journals over the years. In that role, even in this electronic age you generally don't have control of the timing for the dispatch of acceptance and rejection letters. In the days prior to email and electronic submissions, the editors themselves determined when acceptance and rejection letters were sent, but not when they were received.

And, as in many things, timing can be very important. Does any author want their weekend ruined by receiving an electronic rejection note and reports from referees who clearly don't "get it", on a Friday evening? I don't think so. And that's not a great way to start a Monday morning, either.

So, here's a little secret of mine. As an editor, I always try to time the receipt of good editorial news for a Friday or Saturday. If the decision is really negative, I aim for Tuesday through Thursday. I know that a speedy response is very important, so I don't "sit" on decisions. If an author has made an enquiry about progress with their submission, and has mentioned that they have a tenure/promotion coming up, my email is on its way as soon as a decision is reached. But if I possibly can, I try to time the arrival of the news to suit its content.

So, don't forget to check your email in-box later today - you never know!


Thursday, May 5, 2011

Cookbook Econometrics

"First, catch your hare"
(Mrs. Beeton- recipe for jugged hare)


(I understand that it's debatable whether or not Mrs. Beeton actually wrote those precise words, but they're generally attributed to her and it's still pretty good advice.)

Cookbooks certainly have their place, but sometimes they're misunderstood or misused. Indeed, sometimes they're mislaid, and then if you don't understand the rudiments of cooking you're either going to go hungry, or you may create something very unpalatable. The same is true when it comes to certain types of econometrics courses or textbooks.

I'll lay it on the table - I am definitely not a fan of "Cookbook Econometrics".

Here's what I'm referring to.

Monday, May 2, 2011

Killer Exams

Professors have to be so careful in this age of political correctness (PC). Life just isn't as much fun as it used to be. Students have to be given written notice, at the start of a course, of the nature and dates of any assessment. This immediately eliminates the joys of waking up the class with a "snap quiz"! Unless you want to hear from the Dean.

One of the few weapons left in our arsenal is the final exam - assuming we're allowed to have one. Of course, we have to supply "practice exams"; solutions to previous, related, exams; and almost everything except the questions themselves. Actually, I did hear of a course in another discipline where the students were given a set of x questions in advance, and told that a subset, y in number, of them would constitute the final exam. Sorry - I just can't go there.

When I was an undergrad. math. student, in a different educational system, courses went for the whole academic year; there were no term-tests; "assignments" didn't count to the final grade; and then at the end of the year each course had two final exams - Paper A and Paper B. Three hours each - sudden death - just like double-overtime in a Stanley Cup final. Oh yes, those were the days!

Friday, April 29, 2011

Testing for Granger Causality

Several people have asked me for more details about testing for Granger (non-) causality in the context of non-stationary data. This was prompted by my brief description of some testing that I did in my "C to Shining C" posting of 21 March this year. I have an example to go through here that will illustrate the steps that I usually take when testing for causality, and I'll use them to explain some of the pitfalls to avoid. If you're an EViews user, then I can also show you a little trick to help you go about things in an appropriate way with minimal effort.

In my earlier posting, I mentioned that I had followed the Toda and Yamamoto (1995) procedure to test for Granger causality. If you check out this reference, you'll find you really only need to read the excellent abstract to get the message for practitioners. In that sense, it's a rare paper!

It's important to note that there are other approaches that can be taken to make sure that your causality testing is done properly when the time-series you're using are non-stationary (& possibly cointegrated). For instance, see Lütkepohl (2007, Ch. 7).
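Before getting to the details, here's a minimal Python sketch of the core Toda-Yamamoto idea for a single equation: estimate a levels regression with p + d_max lags of each variable, but Wald-test only the first p lags of the other variable (the extra d_max lags are there to fix up the asymptotics, and are left unrestricted). The simulated data, the choices p = 1 and d_max = 1, and all names below are my illustrative assumptions, not the example from the post:

```python
# A minimal single-equation sketch of the Toda-Yamamoto procedure:
# fit p + d_max lags in levels, Wald-test only the first p lags of x.
import math
import numpy as np

rng = np.random.default_rng(7)
n = 400
x = np.cumsum(rng.normal(size=n))       # an I(1) series
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * y[t-1] + 0.5 * x[t-1] + rng.normal()   # x causes y

p, d_max = 1, 1
k = p + d_max                            # lag length actually fitted

# Build the levels regression: y_t on a constant and lags 1..k of y and x.
T = n - k
Y = y[k:]
cols = [np.ones(T)]
for j in range(1, k + 1):
    cols.append(y[k - j : n - j])        # y lag j
    cols.append(x[k - j : n - j])        # x lag j
X = np.column_stack(cols)

b = np.linalg.solve(X.T @ X, X.T @ Y)
resid = Y - X @ b
s2 = resid @ resid / (T - X.shape[1])
V = s2 * np.linalg.inv(X.T @ X)          # estimated covariance of b

# Wald test: zero restrictions on x lags 1..p ONLY (not the extra d_max).
test_idx = [2 * j for j in range(1, p + 1)]   # column positions of x lags
R = np.zeros((p, X.shape[1]))
for r, i in enumerate(test_idx):
    R[r, i] = 1.0
Rb = R @ b
W = Rb @ np.linalg.solve(R @ V @ R.T, Rb)     # ~ chi-square(p) under H0
p_value = math.erfc(math.sqrt(W / 2.0))       # chi-square(1) tail; p = 1 here
print(round(W, 1), round(p_value, 4))
```

Here the null of Granger non-causality from x to y is soundly rejected, as the simulated DGP intends. For p > 1 you'd use a general chi-square survival function for the p-value rather than the erfc shortcut, and in practice the whole thing is done inside a VAR, as discussed below.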

The first thing that has to be emphasised is the following:

Tuesday, April 26, 2011

Drawing Inferences From Very Large Data-Sets

It's not that long ago that one of the biggest complaints to be heard from applied econometricians was that there were never enough data of sufficient quality to enable us to do our job well. I'm thinking back to a time when I lived in a country that had only annual national accounts, with the associated publication delays. Do you remember when monthly data were hard to come by; and when surveys were patchy in quality, or simply not available to academic researchers? As for longitudinal studies - well, they were just something to dream about!

Now, of course that situation has changed dramatically. The availability, quality and timeliness of economic data are all light years away from what a lot of us were brought up on. Just think about our ability, now, to access vast cross-sectional and panel data-sets, electronically, from our laptops via wireless internet.

Obviously things have changed for the better, in terms of us being able to estimate our models and provide accurate policy advice. Right? Well, before we get too complacent, let's think a bit about what this flood of data means for the way in which we interpret our econometric/statistical results. Could all of these data possibly be too much of a good thing?
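One way to see the issue is with a small simulation: with an enormous sample, even an economically trivial effect becomes "statistically significant" at any conventional level. The setup below (a true slope of 0.01, the sample sizes, the variable names) is my illustrative assumption:

```python
# A minimal sketch: a tiny true effect is insignificant at n = 1,000
# but overwhelmingly "significant" at n = 1,000,000.
import math
import numpy as np

rng = np.random.default_rng(123)

def slope_t_stat(n):
    """t-statistic for the slope in a no-intercept OLS regression."""
    x = rng.normal(size=n)
    y = 0.01 * x + rng.normal(size=n)    # true slope is only 0.01
    b = (x @ y) / (x @ x)
    resid = y - b * x
    se = math.sqrt((resid @ resid) / (n - 1) / (x @ x))
    return b / se

t_small = slope_t_stat(1_000)
t_large = slope_t_stat(1_000_000)
print(round(t_small, 2), round(t_large, 2))
```

The t-statistic grows roughly with the square root of n, so at n = 1,000,000 it lands near 10 - far beyond any textbook critical value - even though the effect itself is negligible. Which is exactly why, with very large data-sets, we need to think about economic significance, and about the significance levels we use, rather than mechanically reporting p < 0.05.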