Saturday, December 31, 2011

Congratulations to Donald Fraser!

Canadian statistical icon, Donald A. S. Fraser, has been appointed an Officer of the Order of Canada (O.C.) in the New Year's Honours list that was announced yesterday. Fraser has been Professor in the Department of Statistics at the University of Toronto since 1949, and he's received numerous international awards. His highly influential work has resulted in seven books and over 250 peer-reviewed papers (see his c.v.), and he has supervised 55 Ph.D. students.

Officers of the Order of Canada are appointed for their "lifetime of achievement and merit of a high degree, especially in service to Canada or to humanity at large."

How appropriate!

© 2011, David E. Giles

BIG Data

"Big Data" = data that come in amounts that are too large for current computer hardware and software to deal with. That sounds like fun!

Wednesday, December 28, 2011

When is the OLS estimator BLU?

Or, if you prefer, "When do the OLS and GLS estimators coincide?"

O.K., so you think you know the answer to this one? My guess is that you know a sufficient condition, but probably not a necessary and sufficient condition, for the OLS and GLS estimators of the coefficient vector in a linear regression model to coincide. Let's see if I'm right!
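For reference, here's the classic statement of the necessary and sufficient condition, usually attributed to Kruskal (1968), in my own notation. For the model $y = X\beta + \varepsilon$ with $V(\varepsilon) = \sigma^2 \Omega$ (Ω positive-definite, X of full column rank), the OLS and GLS estimators of β coincide for every sample if and only if

$$
\Omega X = XQ \,, \quad \text{for some nonsingular } (k \times k) \text{ matrix } Q \,;
$$

that is, if and only if the column space of X is spanned by k of the eigenvectors of Ω. Setting Ω = I gives the familiar sufficient condition as a trivial special case.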

Monday, December 26, 2011

Just Crantastic!

If you're an econometrician and you don't use the R software environment for at least some of your work, then you're missing out on all sorts of (free!) opportunities.

Wednesday, December 21, 2011

Information and Entropy Econometrics

The eminent physicist Ed. Jaynes (1957a) wrote:
"Information theory provides a constructive criterion for setting up probability distributions on the basis of partial knowledge, and leads to a type of statistical inference which is called the maximum entropy estimate. It is least biased estimate possible on the given information; i.e., it is maximally noncommittal with regard to missing information."
In other words, when we want to describe noisy data with a statistical model, we should choose the one with maximum entropy, subject to the constraints implied by whatever we actually know.
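To make that concrete, here's the formal problem (a standard statement, not taken from Jaynes's paper): choose the density p to solve

$$
\max_p \; H(p) = -\int p(x)\log p(x)\,dx \quad \text{s.t.} \;\; \int p(x)\,dx = 1\,, \;\; \int T_j(x)\,p(x)\,dx = \tau_j \;\; (j = 1, \dots, m)\,,
$$

where the τ's are the moments we actually know. For example, fixing just the mean and variance yields the normal distribution; fixing the mean of a non-negative variable yields the exponential.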

Friday, December 16, 2011

"An Information Theoretic Approach to Econometrics"

George Judge & Ron Mittelhammer have a new book, hot off the press: An Information Theoretic Approach to Econometrics (CUP, 2012).

Thursday, December 15, 2011

Reported "Accuracy" for Regression Results

In a recent post I posed the question: "How many decimal places (or maybe significant digits) are appropriate when reporting OLS regression results?"

Tuesday, December 13, 2011

Heteros*edasticity

Earlier this year I had a post about the decline in the amount of attention paid to the concept of "multicollinearity" in econometrics texts over the years. It's a fully justifiable decline, in my view.

Sunday, December 11, 2011

Confidence Bands for the H-P Filter: Correction!

Aaaaaghhhh!!

A couple of days ago I posted about constructing confidence bands for the trend that's extracted from a time-series using the Hodrick-Prescott (HP) filter. There was an error in my EViews program code that affected the last graph I showed in that post.

Friday, December 9, 2011

So, Sue Me!

"In a case that’s sending a frightening message to the blogger community, a U.S. District Court judge ruled that a blogger must pay $2.5 million to an investment firm she wrote about — because she isn’t a real journalist."

Thursday, December 8, 2011

Confidence Bands for the Hodrick-Prescott Filter

Signal extraction is a common pastime in empirical economics. When we fit a regression model we're extracting a signal about the dependent variable from the data, and separating it from the "noise". When we use the Hodrick-Prescott (HP) filter to extract the trend from a time-series, we're also engaging in signal extraction. In the case of a regression model we wouldn't dream of reporting estimated coefficients without their standard errors; or predictions without confidence bands. Why is it, then, that the trend that's extracted using the HP filter is always reported without any indication of the associated uncertainty?
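To fix ideas, here's a minimal R sketch of one (naive) way to attach bootstrap bands to an HP trend. This is my illustration, not the EViews code from this post, and it assumes the mFilter package is installed; an i.i.d. bootstrap of the cyclical component ignores its serial correlation, so treat it as a sketch only.

```r
library(mFilter)  # provides hpfilter()

set.seed(123)
y <- cumsum(rnorm(120)) + 0.05 * (1:120)       # artificial trending series

trend <- as.numeric(hpfilter(y, freq = 1600, type = "lambda")$trend)
cycle <- y - trend                              # the "noise" the filter removes

# Naive i.i.d. bootstrap: re-draw the cycle, re-filter, collect the trends
B <- 999
boot.trends <- replicate(B, {
  y.star <- trend + sample(cycle, replace = TRUE)
  as.numeric(hpfilter(y.star, freq = 1600, type = "lambda")$trend)
})
bands <- apply(boot.trends, 1, quantile, probs = c(0.025, 0.975))

matplot(cbind(y, trend, t(bands)), type = "l", lty = c(1, 1, 2, 2),
        col = c("grey", "black", "blue", "blue"), ylab = "y, HP trend, bands")
```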

Wednesday, December 7, 2011

Choconomics

My wife enjoys travelling with me to conferences, so I guess I'll be in Belgium next September for this one!

Tuesday, December 6, 2011

Professor of Official Statistics

A recent post, titled Free Data!, drew comments about some aspects of the availability and cost of official data in Canada. I was therefore intrigued to come across a recent advertisement for the position of Professor of Official Statistics, at an Australian University.

Monday, December 5, 2011

Precision Competition

In a recent post I raised the point about the spurious degree of precision that is often encountered with reported regression results. So, here's a challenge for you - how many decimal places (or maybe significant digits) are appropriate when reporting OLS regression results?
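While you ponder that, here's one common rule of thumb (my suggestion, not a spoiler for the answer): let the standard error dictate the precision, reporting each coefficient to the decimal place implied by the first significant digit of its standard error. A quick R illustration, using a built-in dataset:

```r
fit <- lm(dist ~ speed, data = cars)            # toy regression, built-in data
est <- coef(summary(fit))[, "Estimate"]
se  <- coef(summary(fit))[, "Std. Error"]

# decimal places needed to show one significant digit of each std. error
dp <- pmax(0, 1 - floor(log10(signif(se, 1))))
cbind(estimate = round(est, dp), std.error = round(se, dp))
```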

Sunday, December 4, 2011

Webinar With Robert F. Engle

It's not every day that you can get to see a presentation by a Nobel laureate econometrician for free, and without leaving home.

Saturday, December 3, 2011

Free Data!

Freedom of information is what all econometricians hope for - that is, readily accessible and reliable data that you don't have to pay for.

Monday, November 28, 2011

Terribly Simple, But Simply Terrible

Last Saturday there was a general election in New Zealand, where I've been visiting for the past couple of weeks. In conjunction with this election there was also a referendum on the future of the voting system that they use. For about 17 years now they have used a "Mixed Member Proportional" (MMP) system.

Friday, November 25, 2011

Spurious Precision

I'm currently in New Zealand - hence the smaller number of posts recently - and I came across the following example of a "best before" date on the lid of a jar of marmalade:

Friday, November 18, 2011

Trends in Econometrics

In an earlier post I mentioned the online conference that was organized by Wiley and The Journal of Economic Surveys over the past few days.

Here is the link to David Hendry's keynote lecture, "Trends in Econometrics", together with organized commentaries from Neil Ericsson and Katerina Juselius.

Great stuff! We need more of this.



© 2011, David E. Giles

Friday, November 11, 2011

Close Encounters of the Math Kind

Alert readers of this blog may have noticed (front page) that I have an Erdös Number of 4. That's to say, I've published (several) papers co-authored with someone, who co-authored a paper with someone, who co-authored a paper with the mathematician Paul Erdös.

Sunday, November 6, 2011

A Real Econometrics Seminar

Last Friday we were treated to a particularly good seminar in our Department. Sílvia Gonçalves (here, at Université de Montréal) presented a paper, "Bootstrapping factor-augmented regression models" (joint with Benoit Perron). Yes, an actual Econometrics seminar!

Friday, November 4, 2011

Cointegration, Structural Breaks, and gretl

In a post in May I discussed testing for cointegration in the presence of structural breaks, and provided some EViews code to facilitate this. I then followed that up with another post in June that provided corresponding R code and a set of tables, both produced with Ryan Godwin.

Riccardo (Jack) Lucchetti, co-author of the (free) gretl econometrics package, converted our code into a gretl script, and kindly sent it to me. Passing the script on to everyone is long overdue - sorry about the delay, Jack!

Thursday, November 3, 2011

VECMs, IRFs & gretl

In a comment on my post yesterday, "psummers" kindly pointed out that the free econometrics package, gretl, will also produce confidence intervals for Impulse Response Functions (IRFs) generated by a VECM.

I had an earlier post about gretl, and here is a very brief run-down on using it to produce those VECM-IRF confidence intervals.

Wednesday, November 2, 2011

Impulse Response Functions From VECMs

In the comments and discussion associated with an earlier post on "Testing for Granger Causality" an interesting question arose. If we're using a VAR model for constructing Impulse Response Functions, then typically we'll want to compute and display confidence bands to go with the IRFs, because the latter are simply "point predictions". The theory for this is really easy, and in the case of EViews it's just a trivial selection to get asymptotically valid confidence bands.

But what about IRFs from a VECM - how do we get confidence bands in this case? This is not nearly so simple, because of the presence of the error-correction term(s) in the model. EViews doesn't supply confidence bands with the IRFs in the case of VECMs. What alternatives do we have?
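For anyone who wants to experiment, here's a minimal R sketch (mine, and distinct from the gretl route mentioned in the follow-up post) using the urca and vars packages: estimate the VECM by Johansen's method, convert it to its levels-VAR representation, and let irf() bootstrap the bands.

```r
library(urca)    # ca.jo(): Johansen's VECM estimation
library(vars)    # vec2var(), irf(), and the Canada example data

data(Canada)
vecm <- ca.jo(Canada, type = "trace", ecdet = "const", K = 2)
var.form <- vec2var(vecm, r = 1)   # the VECM re-expressed as a levels VAR

# Bootstrapped 95% bands; increase 'runs' for serious work
ir <- irf(var.form, impulse = "e", response = "prod",
          n.ahead = 20, boot = TRUE, ci = 0.95, runs = 100)
plot(ir)
```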

Monday, October 31, 2011

R 2.14.0 Released

A Halloween treat for R users! Version 2.14.0 was released today. Among other things there are big improvements for parallel processing.

For a quick synopsis of the new "goodies", see the post on the Revolutions blog.
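In particular, 2.14.0 ships a new base package called parallel, which merges functionality from the multicore and snow packages. A two-line taste (my example):

```r
library(parallel)                 # new base package in R 2.14.0

# forked, parallel lapply() (on Windows, use parLapply() with a cluster)
res <- mclapply(1:4, function(i) mean(rnorm(1e5)), mc.cores = 2)
```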



© 2011, David E. Giles

Friday, October 28, 2011

My Hero!

Keen-eyed followers of this blog may have noticed that if you view my complete profile you'll learn that one of my interests is "staying at home". Actually it's a (male side of the) family tradition - though my son, Matt, seems to be bucking the trend!

As accurate as this profile is, I do have a small confession to make. I stole the line about staying at home from Professor Sir Richard Stone - specifically, from his listing in Who's Who when he was still alive.

Thursday, October 27, 2011

Quote of the Day

"Reality is a Dangerous Concept."


 Source: Jerzy Neyman: On Time Series Analysis and Some Related Statistical Problems in Economics. (A Conference With Dr. Neyman in the Auditorium of the Department of Agriculture, 10 April, 1937, 11 a. m., Dr. Charles F. Sarle presiding). In Lectures and Conferences on Mathematical Statistics, Delivered by J. Neyman at the United States Department of Agriculture in April 1937, Graduate School of the United States Department of Agriculture, Washington D.C., p. 112.




© 2011, David E. Giles

Tuesday, October 25, 2011

VAR or VECM When Testing for Granger Causality?

It never ceases to amaze me that my post titled "How Many Weeks are There in a Year?" is at the top of my all-time hits list! Interestingly, the second-placed post is the one I titled "Testing for Granger Causality". Let's call that one the number one serious post. As with many of my posts, I've received quite a lot of direct emails about that piece on Granger causality testing, in addition to the published comments.


One question that has come up a few times relates to the use of a VAR model for the levels of the data as the basis for doing the non-causality testing, even when we believe that the series in question may be cointegrated. Why not use a VECM as the basis for non-causality testing in this case?
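As a concrete reference point, the standard levels-VAR test is nearly a one-liner in R's vars package (a sketch of mine, and note that it doesn't, by itself, resolve the VAR-versus-VECM question posed above):

```r
library(vars)

data(Canada)                                   # example data shipped with 'vars'
p <- VARselect(Canada, lag.max = 8, type = "const")$selection["AIC(n)"]
v <- VAR(Canada, p = p, type = "const")
causality(v, cause = "e")$Granger              # H0: "e" does not Granger-cause the rest
```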

Sunday, October 23, 2011

If it Ain't Broke, Don't Fix It!

For the most part I like engineers. In fact, some of my best friends are engineers. However, there's always the exception that "proves" the rule!

Back in 1978 I was appointed to a chair (full professorship) in what was then the Department of Econometrics & Operations Research at Monash University, in Australia. It's now the Department of Econometrics & Business Statistics. It was, and still is, a terrific department to be associated with. My colleagues and our students were wonderful, and I had nine productive years there.

Friday, October 21, 2011

Sargent and Sims are Econometricians!

It's official! This year's Economics Nobel laureates, Thomas Sargent and Christopher Sims, are Econometricians! The latest issue of the Royal Statistical Society's Newsletter makes that very point, clearly and repeatedly!

So, back off you Empirical Macro. types! Chris Sims certainly teaches grad. courses in Econometric Theory and Time Series, and in his c.v. he describes his areas of research interest as: "econometric theory for dynamic models; macroeconomic theory and policy". Notice how econometric theory comes first? So there!

I admit that the argument is less convincing in Sargent's case - but it depends on how broadly you define "econometrics". The web page at NYU's Stern School certainly declares him to be a macroeconomist. But he's a past-President of the Econometric Society. And both he and Sims are Fellows of that august body.

That's close enough for me! ☺


© 2011, David E. Giles

Thursday, October 20, 2011

A Moniacal Economist

"In 1958, A. W. H. Phillips published in Economica what was to become one of the most widely cited articles ever written in economics. To mark the 50th anniversary of the paper, the New Zealand Association of Economists and the Econometric Society hosted the conference “Markets and Models: Policy Frontiers in the A. W. H. Phillips Tradition” in July 2008."
(Economica, 2011, Vol 78, p.1)

The January issue of the journal Economica this year was devoted to papers from the A. W. H. Phillips 50th Anniversary Symposium, in honour of the New Zealander, A. W. H. ("Bill") Phillips, well-known to economists for his development of the so-called "Phillips Curve". I strongly recommend this issue of the journal.

Wednesday, October 19, 2011

Cook-Book Statistics

In my post yesterday I wrote about a remark made by the well-known statistician, Michael Stephens. Here's another interesting comment from him, referring to his departure from post-World War II England for the U.S.:
"Out of the blue came an offer from Case Institute of Technology (now Case-Western Reserve) and off I went to Cleveland. Case had a huge Univac computer, all whirling tapes and flashing lights, and I decided to learn programming. Next to computing was Statistics, so I thought I would learn some of that too. I took some cook-book classes, and thought I was learning statistics. For the life of me, I can't understand why I didn’t wonder why the ratio of this to that would be called F and looked up on page 376."
(Michael Stephens, in A Conversation With Michael A. Stephens, by Richard Lockhart)

Seems I'm not alone when it comes to cook-book courses (and here)!


© 2011, David E. Giles

Tuesday, October 18, 2011

It's All in the Moments

On Thursday afternoons I'm usually to be found at the seminars organized by the Statistics group in our Department of Mathematics and Statistics. It's almost always the highlight of my week. Last week the Thursday gathering was replaced by a very special half-day event on the Friday: a mini-conference to honour the recent retirement of senior UVic statistician, Bill Reed.

Tuesday, October 11, 2011

On the Importance of Knowing the Assumptions

I've been pretty vocal in the past about the importance of understanding what conditions need to be satisfied before you start using some fancy new econometric or statistical "tool". Specifically, in my post, "Cookbook Econometrics", I grizzled about so-called "econometrics" courses that simply teach you to "do this", "do that", without getting you to understand when these actions may be appropriate.

My bottom line: you need to understand what assumptions lie behind such claims as "this estimator will yield consistent estimates of the parameters", or "this test has good power properties" - preferably before you get too excited about using the estimator or test, and cause too much damage. In other words, it's all very well to understand what problems you face in your empirical work (simultaneity, missing observations, uncertain model specification, etc.), but when you choose some tools to deal with these problems, you need to be confident that your choices will achieve your objectives.

Monday, October 10, 2011

Congratulations, to Thomas Sargent & Christopher Sims

By now, you'll all know that the 2011 Nobel Prize in Economics has been awarded to Thomas Sargent (New York University) and Christopher Sims (Princeton University). To say that this is well deserved and overdue is an understatement. Congratulations!

Sunday, October 9, 2011

The Rise & Fall of Multicollinearity

Boris Kaiser in the Public Economics Working Group of the Department of Economics, University of Berne in Switzerland writes:

"As a frequent reader of your blog, I consider it my honour as well as my duty to point your attention to the following graph:



It shows the relative frequency of appearance of the word in the realm of the literature, contained in Google Books, over the last 50 years. [1960-2011; DG] Clicking on this link here, you can see how I generated the graph."

It seems that we're well on the way to the eradication of this grossly over-rated concept, as predicted in my earlier post, "The Second Longest Word in the Econometrics Dictionary". Thank goodness for that!

I'll explain my relief in a subsequent post. Meantime, "thanks a bunch for doing your duty, Boris"!


© 2011, David E. Giles

Friday, October 7, 2011

Erratum!

Back in May I posted a piece titled, "Gripe of the Day". With that post I provided some EViews code to run homoskedasticity tests for Logit and Probit models. Unfortunately, there was a small error in the code. This has now been fixed, and there is a note to this effect on the Code page for this blog, and in the original post.

The error affected the test results only at the second or third decimal places.

HT to eagle-eyed Sven Steinkamp at Universität Osnabrück for bringing the error to my attention.



© 2011, David E. Giles

Thursday, October 6, 2011

Predicting the 2011 Nobel Laureate in Economic Science

" "This is the Oscars for nerds," says Paul Bracher, a chemist at the California Institute of Technology"

(Daniel Strain, Science, 21 September, 2011)


I swore I wouldn't do it, but in the end I couldn't stop myself! Everyone else is having their say about next Monday's award of the Economics Nobel Prize, so I couldn't sit on the sidelines any longer.

I know its proper name is "The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel", but I'm going to be lax here. The big announcement will come at around 1:00 p.m. CET on Monday 10th October this year. That's around 4 a.m. that day if you're on the West coast, like me, or 7 a.m. in New York. If you're not the lucky one to get the wake-up call, then you can watch the announcement, live, here. I'll be joining you!

So, who will the recipient(s) be this year?

Tuesday, October 4, 2011

Keynes and Econometrics

Regular readers of this blog will know that I think it's important for students of econometrics to know something about the history of the discipline. So, let's pick a big-name economist at random, and see how he fitted into the overall scheme of things econometric.

John Maynard Keynes completed his B.A. with first class honours at the University of Cambridge in 1904. His B.A. in mathematics, that is. Subsequently, he was placed twelfth Wrangler in the Mathematical Tripos of 1905.

Monday, October 3, 2011

Making a Name for Yourself!

So you want to make a name for yourself? One way for an up-and-coming young econometrician to do this would be to come up with a new estimator or test that everyone subsequently associates with your name. For example, the Aitken estimator; the Durbin-Watson test; the Cochrane-Orcutt estimator; the Breusch-Pagan test; White's robust covariance matrix estimator, etc.

This can be a bit risky - your new inferential procedure might not "catch on" as well as you hope it will. Worse yet, someone else might come up with a similar idea around the same time, and steal your glory.  A much safer way to make a name for yourself is to be the first to prove a result that has hitherto had everyone baffled.

Wednesday, September 28, 2011

Estimating Models With "Under-Sized" Samples

"....and there is no new thing under the sun."
Ecclesiastes 1:9 (King James Bible)


The first part of my career was spent at New Zealand's central bank (the Reserve Bank of N.Z.), where I was heavily involved in the construction and use of large-scale macroeconometric models. By the mid 1970's our models involved more than 100 equations (of which about 50 were structural relationships that had to be estimated; and the rest were accounting identities). The basic investigative estimation was undertaken using OLS; and variants such as the Almon estimator for distributed lag models. Boy, this dates me!

Of course, we were well aware that OLS wasn't appropriate for the finished product. These models were simultaneous equations models, so OLS was inconsistent. Obviously, something more appropriate was needed, but which estimator should we use?

Friday, September 23, 2011

Student's t-Test, Normality, and the Bootstrap

Is Student's t-statistic still t-distributed if the data are non-normal? I addressed this question - in fact, an even more general question - in an earlier post titled "Being Normal is Optional!". There, I noted that the null distribution of any test statistic that is scale-invariant will be the same if the data come from the elliptically symmetric family of distributions as it is if they come from the normal distribution.
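A quick simulation makes the point vividly (my illustration, not from the earlier post). Because the t-statistic is scale-invariant, multiplying a normal sample by a single common random scale, which produces spherically symmetric but decidedly non-normal data, leaves its null distribution exactly Student-t:

```r
set.seed(2011)
n <- 10; R <- 20000
tstat <- function(y) sqrt(n) * mean(y) / sd(y)

# normal vector times ONE common random scale => spherical, non-normal data
t.sph <- replicate(R, tstat(rnorm(n) * sqrt(1 / rchisq(1, df = 3))))

qqplot(qt(ppoints(R), df = n - 1), t.sph,
       xlab = "Student-t(9) quantiles", ylab = "simulated t-statistics")
abline(0, 1)   # the points should hug this line
```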

Wednesday, September 21, 2011

Are You in Need of Some Psychic Help?

Occasionally I bid for items on eBay. Sometimes I'm successful. A few years ago I bought an item in this way, and it turned out that the seller lived here in my home town. I arranged to collect my purchase from his house, and in the course of the transaction he passed me his business card.

Apart from his first name, and telephone number, the only other word on the front of the very colourful card was "PSYCHIC". I kid you not!

Tuesday, September 20, 2011

Communications With Economists

There's an interesting looking online conference coming up in November - the 16th to 18th of November, specifically. This free conference is courtesy of the Journal of Economic Surveys, and their publisher, Wiley.

The full title of the conference is "Communications With Economists: Current and Future Trends", and there will be three keynote videos; five papers with invited commentaries; workshops on academic publishing; a "reading room"; and lots of discussion.

Students can register, and of special interest to students and practitioners of econometrics will be the keynote video by Professor Sir David Hendry (University of Oxford), one of the foremost econometricians of our time.

Thanks to Les Oxley and the rest of the team at JOES.

Register here - I have!



© 2011, David E. Giles

Friday, September 16, 2011

R Meets Google

Always on the lookout for innovative and free ways to implement data analysis, I was intrigued by a post on Andrew Gelman's blog yesterday, titled "More Data Tools Worth Using From Google". It referred to a 2009 post, "How to Use A Google Spreadsheet as Data in R" (recently updated here) on the Revolutions blog. The title of the post tells it all.

Definitely worth knowing about! Free spreadsheet; free statistical software. What more could you want?

But wait ...... there's more!

In a comment from Zach on the Gelman blog this morning, I learned about Google Motion Charts With R. As Zach notes, it allows you to take data from R and graph it directly using the Google Motion Charts API. You may find that it's fussy over which browser you use, but that's O.K. It's a recent release, and the R package you need is on the CRAN site, here.

Have fun!


© 2011, David E. Giles

Thursday, September 15, 2011

Micronumerosity

In a much earlier post I took a jab at the excessive attention paid to the concept of "multicollinearity", historically, in econometrics text books.

Art Goldberger (1930-2009) made numerous important contributions to econometrics, and modelling in the social sciences in general. He wrote several great texts, the earliest of which (Goldberger, 1964) was one of the very first to use the matrix notation that we now take as standard for the linear regression model.

In one of his text books, Art also poked fun at the attention given to multicollinearity, and I'm going to share his parody with you here in full. In a couple of places I've had to replace formulae with words. What follows is from Section 23.3 of Goldberger (1991):

Wednesday, September 14, 2011

P-Values of Differences of P-Values of Differences of....

An interesting post on Andrew Gelman's blog last week reminded us that we have to be careful to distinguish between the statistical significance of the difference between two "effects", and the difference between the significances of two separate effects. They're not the same thing, and it's the first of these that's usually relevant.

Let's re-phrase this, and put it in baby econometrics terms.
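Here's a toy numerical version of the trap (numbers invented by me): one estimated effect is "significant", a second is not, and yet the difference between them is nowhere near significant.

```r
b1 <- 0.25; se1 <- 0.10        # z = 2.5, two-sided p ~ 0.012: "significant"
b2 <- 0.10; se2 <- 0.10        # z = 1.0, two-sided p ~ 0.32:  "not significant"

z.diff <- (b1 - b2) / sqrt(se1^2 + se2^2)    # assumes independent estimates
2 * pnorm(-abs(z.diff))                      # p ~ 0.29: difference NOT significant
```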

Monday, September 12, 2011

Econometrics and One-Way Streets

It's a nice sunny day out there, so you decide to get on your bike and pedal down the street a couple of blocks to your favourite ice cream parlour. Great idea! Except that it's a one-way street and you're pedalling against the traffic. Fortunately, on this particular day, most people have already headed for the beach, there are very few cars that have to avoid you, and somehow you make it to your destination in one piece. Whew!

Now, maybe your ears are suffering from some of the abuse you received along the way - maybe not. I guess it would depend on exactly who you encountered during your little trip. In any event, you savour your well-earned ice cream and feel pretty good about yourself, and life in general. You could have travelled the longer route, around the block, to avoid the one-way street, but gee, the end result was the same, so that's all that matters. Right?

Wednesday, September 7, 2011

A Tale of Two Tests

Here's a puzzle for you. It relates to two very standard tests that you usually encounter in a first (proper) course in econometrics. One is the Chow (1960) test for a structural break in the regression model's coefficient vector; and the other is the Goldfeld and Quandt (1965) test for homoskedasticity, against a particularly simple form of heteroskedasticity.

What's the puzzle, exactly?
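For reference, here are the two statistics in my own notation, for a k-regressor model estimated on two sub-samples of sizes n₁ and n₂ (RSS denotes a residual sum of squares; the subscript R the pooled, restricted, regression):

$$
F_{Chow} = \frac{(RSS_R - RSS_1 - RSS_2)/k}{(RSS_1 + RSS_2)/(n_1 + n_2 - 2k)} \sim F_{k,\; n_1 + n_2 - 2k}\,, \qquad
F_{GQ} = \frac{RSS_2/(n_2 - k)}{RSS_1/(n_1 - k)} \sim F_{n_2 - k,\; n_1 - k}\,.
$$

Each null distribution holds under that test's own maintained assumptions. Comparing what each test quietly assumes about the other's null hypothesis is a good place to start hunting for the puzzle.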

Monday, September 5, 2011

Where are You Now?

One of the great thrills of this job is working together with students as they learn about econometrics. I've been most fortunate to have been associated with quite a few students who have had enough interest in the subject to subsequently go on to undertake graduate research with me.

Along the way, I've worked with some great people. They've been talented, dedicated, and a lot of fun to be around. If they learned anything from me, then I'm grateful - because I certainly learned from them, at least ten-fold.

I thought it was time to mention these students and to acknowledge their achievements. So, I've added a new Former Students page to this blog.

My fear is that I may have omitted someone from this new page. My hope is that I'll hear from those of you who want to update me on your career progress and current position.

© 2011, David E. Giles

Thursday, September 1, 2011

Still Searching for the Number of Weeks in a Year

When I put up a post titled "How Many Weeks Are There in a Year" back in April, little did I know how many hits it would get. For a while I was intrigued to see that visitors kept arriving. They still do - every day, without fail.

Then I realized the reason why. It wasn't because the readers of this post were in search of econometric enlightenment. Oh no! There was a much more obvious reason. They genuinely want to know the answer to the question posed in the title of that post!

A quick look at the blog "stats" revealed that there are some popular web search strings that lead these poor souls, no doubt kicking and screaming, to this site.

Wednesday, August 31, 2011

Beware of Econometricians Bearing Spreadsheets

"Let's not kid ourselves: the most widely used piece of
software for statistics is Excel"
(B. D. Ripley, RSS Conference, 2002)

What a sad state of affairs! Sad, but true when you think of all of the number crunching going on in those corporate towers.

With the billions of dollars that are at stake when some of those spreadsheets are being used by the uninitiated, you'd think (and hope) that the calculations are squeaky clean in terms of reliability. Unfortunately, you'd be wrong!

A huge number of reputable studies over the years - ranging from McCullough (1998, 1999) to the special section in Computational Statistics & Data Analysis in 2008 - have pointed out some of the numerical inaccuracies in various releases of some widely used spreadsheets. With reputations at stake, and the potential for litigation, you'd again think (and hope) that by now the purveyors of such software would be on the ball. Not so, it seems!

Tuesday, August 30, 2011

An Overly Confident (Future) Nobel Laureate

For some reason, students often have trouble interpreting confidence intervals correctly. Suppose they're presented with an OLS estimate of 1.1 for a regression coefficient, and an associated 95% confidence interval of [0.9,1.3]. Unfortunately, you sometimes see interpretations along the following lines: 
  • There's a 95% probability that the true value of the regression coefficient lies in the interval [0.9,1.3].
  • This interval includes the true value of the regression coefficient 95% of the time.

So, what's wrong with these statements?
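A small simulation (mine, not from the post) points towards the correct frequentist reading: it's the procedure that has the 95% property across repeated samples; any one realized interval, like [0.9, 1.3], either covers the fixed true coefficient or it doesn't.

```r
set.seed(123)
beta <- 1.1                        # the fixed, true coefficient
covered <- replicate(5000, {
  x  <- rnorm(30)
  y  <- 0.5 + beta * x + rnorm(30)
  ci <- confint(lm(y ~ x))["x", ]  # realized 95% interval for the slope
  ci[1] <= beta && beta <= ci[2]
})
mean(covered)                      # close to 0.95, as advertised
```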

Monday, August 29, 2011

Missing Keys and Econometrics

There couldn't possibly be any connection between conducting econometric analysis and looking for your lost keys, could there? Or, maybe there could!

Jeff Racine (McMaster U.) put a nice little piece up on his web page at the start of this month. It's titled Find Your Keys Yet?, and has the sub-title "Some Thoughts on Parametric Model Misspecification". Jeff rightly points out some of the difficulties associated with the concept of "the true model" in econometrics, and the importance of specification testing in the games we play.

BTW, this ties in with "Darren's" comments on my earlier post, Cookbook Econometrics.

Students of econometrics - please read Jeff's piece. Teachers of econometrics - ditto!



© 2011, David E. Giles

Saturday, August 27, 2011

Levelling the Learning Paying Field

Whenever it comes time to assign a textbook for a course, I get the jitters. It's the price tag that always gets to me! And if it gets to me, then surely it must result in gasps of disbelief from the students (and parents) who are affected by my choices.

Often, I can (and do) make sure that the one text I assign can be used for two back-to-back courses. Hopefully, that helps a bit.

However, the cost of textbooks can still be a sizeable burden. Then, when students go to re-sell their texts the following year, they discover that those pesky publishing houses have churned out new editions! Guess what that does to the re-sale value of last year's purchase?

Playing fields (or paying fields in this case) would be level if the world were flat. Right? Right! Ideally, flat and at a height of zero. Zero dollars! That's exactly what Flat World Knowledge is all about.

Thursday, August 25, 2011

Reproducible Econometric Research

I doubt if anyone would deny the importance of being able to reproduce one's econometric results. More importantly, other researchers should be able to reproduce our results: (a) to verify that we've done what we said we did; (b) to investigate the sensitivity of our results to the various choices we made (e.g., the functional form of our model, the choice of sample period, etc.); and (c) to satisfy themselves that they understand our analysis.

However, if you've ever tried to literally reproduce someone else's econometric results, you'll know that it's not always that easy to do so - even if they supply you with their data-set. You really need to have their code (R, EViews, STATA, Gauss) as well. That's why I include both Data and Code pages with this blog.

Wednesday, August 24, 2011

MoneyScience

MoneyScience - which describes itself as "the community resource for Quantitative Finance, Risk Management and Technology Practitioners, Vendors and Academics" - has recently released version 3 of its site.

There's a great deal of interesting work going on in "Financial Econometrics", and this is one site that provides really good content and excellent networking facilities that will help keep econometricians up to speed with what is going on in the finance community at large.

I'm pleased to be feeding this blog to MoneyScience (here), and you'll notice a new icon near the bottom of the right side-bar: MoneyScience


© 2011, David E. Giles

Innovations in Editing

I've posted in the past (here and here) about some of my experiences as an academic journal Editor. It has its ups and downs, for sure, but ultimately it's a rewarding job.

Preston McAfee (Yahoo! & Caltech) is currently the Editor of Economic Inquiry, where he's introduced some important innovations into the editorial process. He was formerly Co-Editor of American Economic Review. As well as being a highly respected economist, Preston has a wonderful way with words.

His thoughts on journal editing make excellent reading, for seasoned academics and newcomers alike. I particularly recommend Preston's piece in the American Economist last year - here.


Reference

McAfee, R. P. (2010). Edifying editing. American Economist, 55, 1-8.

© 2011, David E. Giles

Thursday, August 18, 2011

Visualizing Random p-Values

Here's a follow-up to yesterday's post on Wolfram's CDF file format. In an earlier post (here) I discussed the fact that p-values are random variables, with their own sampling distribution.

A great way of visualizing a number of the points that I made in that post is to use the CDF file for the Mathematica app. written by Ian McLeod (University of Western Ontario). You can run it/download it from here, using Wolfram's free CDF Player (here).

You'll recall that a p-value is uniformly distributed on [0, 1] if the null hypothesis being tested is true. Using Ian's app., here is an example of what you see for the case where you're testing the null of a zero mean in a normal population, against a 2-sided alternative:


The null hypothesis is TRUE. The sample size is n = 50, and the simulation experiment involves 10,000 replications. You can see that the empirical distribution is approaching the true Uniform sampling distribution.

When the true mean of the distribution is 2.615 (so the null hypothesis is FALSE), the sampling distribution of the (two-sided) p-value looks like this:



If you're not sure why the pictures look like this, you might want to take a look at my earlier post ("May I Show You My Collection of p-Values?").
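If you can't run the Mathematica app., here's a quick R analogue of the same two pictures (my sketch, with my own choice of alternative mean):

```r
set.seed(1)
n <- 50; R <- 10000
pvals <- function(mu) replicate(R, t.test(rnorm(n, mean = mu))$p.value)

hist(pvals(0),   breaks = 20, main = "H0 true: p-values ~ Uniform(0, 1)")
hist(pvals(0.5), breaks = 20, main = "H0 false: p-values pile up near zero")
```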



© 2011, David E. Giles

Wednesday, August 17, 2011

Interactive Statistics - Wolfram's CDF format

Many of you will be familiar with Wolfram Research, the company that delivers Mathematica, among other things. Last month, they launched their new Computable Document Format (CDF) - it's something I'm going to be using a lot in my undergraduate Economic Statistics course.

Here are a few words taken from their press release of July 21:

Monday, August 15, 2011

Themes in Econometrics

There are several things that I recall about the first course in econometrics that I took. I'd already completed a degree in pure math. and mathematical statistics, and along with a number of other students I did a one-year transition program before embarking on a Masters degree in economics. The transition program comprised all of the final-year undergrad. courses offered in economics.

As you'd guess, the learning curve was pretty steep in macro. and micro, but I had a comparative advantage when it came to linear programming and econometrics. So things balanced out - somewhat!

Thursday, August 11, 2011

That Darned Debt!

What with the recent S&P downgrading of the U.S., and the turmoil in the markets, it's difficult to watch the T.V. or read a newspaper without being bombarded with the dreaded "D-word". I was even moved to pitch in myself recently by posting a piece about the issue of units of measurement when comparing debt (a stock) with GDP (a flow).


Yesterday, Lisa Evans had a nice post titled 20 Outlandish and Informative Ways to Illustrate the U.S. National Debt. I think you'd enjoy it!

Lisa runs a site called Masters in Economics. It's devoted to assisting students who are thinking of undertaking a Masters degree in Economics as a stepping stone to a career in the field. Programs at this level are more widely available than you might have thought.

I'm hoping that Lisa will be able to expand her database to include the many programs at this level in Canadian schools.

Meantime, keep an eye on her site and watch for her future posts.


© 2011, David E. Giles

Wednesday, August 10, 2011

Flip a Coin - FIML or 3SLS ?

When it comes to choosing an estimator or a test in our econometric modelling, sometimes there are pros and cons that have to be weighed against each other. Occasionally we're left with the impression that the final decision may as well be based on computational convenience, or even the flip of a coin.

In fact, there's usually some sound basis for selecting one potential estimator or test over an alternative one. Let's take the case where we're estimating a structural simultaneous equations model (SEM). In this case there's a wide range of consistent estimators available to us.

There are the various "single equation" estimators, such as 2SLS or Limited Information Maximum Likelihood (LIML). These have the disadvantage of being asymptotically inefficient, in general, relative to the "full system" estimators. However, they have the advantage of usually being more robust to model mis-specification. Mis-specifying one equation in the model may result in inconsistent estimation of that equation's coefficients, but this generally won't affect the estimation of the other equations.

The two commonly used "full system" estimators are 3SLS and Full Information Maximum Likelihood (FIML). Under standard conditions, these two estimators are asymptotically equivalent when it comes to estimating the structural form of an SEM with normal errors. More specifically, they each have the same asymptotic distribution, so they are both asymptotically efficient.

Tuesday, August 9, 2011

Being Normal is Optional!

One of the cardinal rules of teaching is that you should never provide information that you know you're going to have to renege on in a later course. When you're teaching econometrics, I know that you can't possibly cover all of the details and nuances associated with key results when you present them at an introductory level. One of the tricks, though, is to try and present results in a way that doesn't leave the student with something that subsequently has to be "unlearned", because it's actually wrong.

If you're scratching your head, and wondering who on earth would be so silly as to teach something that has to be "unlearned", let me give you a really good example. You'll have encountered it a dozen times or more, I'm sure. You just have to pick up almost any econometrics textbook, at any level, and you'll come away with a big dose of mis-information regarding one of the standard assumptions that we make about the error term in a regression model. If this comes as news to you, then I'll have made my point!

Wednesday, August 3, 2011

The Article of the Future

The standard format for academic journal articles is pretty much "tried and true": Abstract; Introduction; Methodology; Data; Results; Conclusions. There are variations on this, of course, depending on the discipline in question. When it comes to journals that publish research in econometrics, it's difficult to think of innovations that have taken advantage of developments in technology in the past few years.

O.K., so you can follow your favourite journal on Twitter or Facebook - but when you get to the articles themselves, do they look that much different from, say, ten years ago? Not really.

You'd think that econometrics journals could be a bit more exciting. The content is always a blast, of course, but what about the way it's presented? Apart from moving from paper to pdf files, we haven't really come that far. Yes, in many cases you get hyperlinks to the articles listed in the References section, which is fine and dandy. But is that enough?

When we undertake the research that leads to the articles we use all sorts of data analysis and graphical tools, whichever econometric or statistical software we're wedded to. Yet, these powerful tools are left pretty much on the sideline when it comes to disseminating the information through the traditional peer-reviewed outlets.

Are there any glimmers of hope on the horizon?

In a recent electronic issue of their Editors' Update newsletter, the publishing company, Elsevier, discussed their so-called "Article of the Future" Project. It's worth looking at. In particular, it could get you thinking about how we can raise the bar a little when it comes to publishing new research results in econometrics. For example, see the interactive graphics in the "Fun With F1" article.

A lot of us have some pretty strong views about the pricing policies of academic journals, and about the extent to which the flow of scientific information should be commercialized. I have no affiliation with this particular publisher, but it's refreshing to see what they're up to in this regard. Hopefully there's more of this going on elsewhere.


© 2011, David E. Giles

Friday, July 29, 2011

Galton Centenary

The words "regression" and "correlation" trip off our tongues on a daily basis - if not more frequently. Both of them can be attributed to the British polymath, Sir Francis Galton (1822 - 1911). I've blogged a little bit about Galton previously, in The Origin of Our Species.

To commemorate the centenary of his death on 17 January 1911, statisticians are honouring Galton's impressive contributions this year. Putting aside Galton's promotion of eugenics, there is still much to celebrate. Perhaps the most comprehensive source of information about his work and influence is at http://galton.org/. This site includes, among other things, copies of all of his published work - much of which is difficult to obtain elsewhere these days.

Not surprisingly, the Royal Statistical Society has been paying special attention to Galton this year. Among other things there have been some interesting items in their Significance magazine. I'd especially recommend the pieces by Graham Wheeler, Tom Fanshawe and Julian Champkin. In the last of these, look for the link to a BBC radio talk on Galton, by Steve Jones of the Galton Laboratory at University College London!

Finally, if you're looking for inspiration - and who isn't(!) - Galton's own account of his discovery of correlation and regression (originally termed "reversion") makes interesting reading. Titled "Kinship and Correlation", you can find it here.


© 2011, David E. Giles

Thursday, July 28, 2011

Moving Average Errors


"God made X (the data), man made all the rest (especially ε, the error term)."
Emanuel Parzen



A while back I was asked if I could provide some examples of situations where the errors of a regression model would be expected to follow a moving average process. 

Introductory courses in econometrics always discuss the situation where the errors in a model are correlated, implying that the associated covariance matrix is non-scalar. Specifically, at least some of the off-diagonal elements of this matrix are non-zero. Examples that are usually mentioned include: (a) the errors follow a stationary first-order autoregressive (i.e., AR(1)) process; and (b) the errors follow a first-order moving average (i.e., MA(1)) process. Typically, the discussion then deals with tests for independence against a specific alternative process; and estimators that take account of the non-scalar covariance matrix - e.g., the GLS (Aitken) estimator.
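Concretely, for MA(1) errors (my notation), ε_t = u_t + θu_{t-1} with the u's i.i.d. (0, σ_u²), the implied covariance matrix is banded: only the main diagonal and the first off-diagonals are non-zero, since

$$
\gamma_0 = (1 + \theta^2)\,\sigma_u^2\,, \qquad \gamma_1 = \theta\,\sigma_u^2\,, \qquad \gamma_j = 0 \;\; (j \geq 2)\,.
$$

Contrast this with stationary AR(1) errors, where γ_j = ρ^j γ_0 dies out geometrically but never hits zero exactly.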

It's often easier to motivate AR errors than to think of reasons why MA errors may arise in a regression model in practice. For example, if we're using economic time-series data and the error term reflects omitted effects, then the latter are likely to be trended and/or cyclical. In each case, this gives rise to an autoregressive process. The omission of a seasonal variable will generally imply errors that follow an AR(4) process (with quarterly data); and so on.

However, let's think of some situations where the MA regression errors might be expected to arise.

Monday, July 25, 2011

Maximum Likelihood Estimation is Invariably Good!

In a recent post I talked a bit about some of the (large sample) asymptotic properties of Maximum Likelihood Estimators (MLEs). With some care in its construction, the MLE will be consistent, asymptotically efficient, and asymptotically normal. These are all desirable statistical properties.

Most of you will be well aware that MLEs also enjoy an important, and very convenient, algebraic property - we usually call it "invariance". However, you may not know that this property holds in more general circumstances than those that are usually mentioned in econometrics textbooks. I'll come to that shortly.

In case the concept of invariance is news to you, here's what this property is about. Let's suppose that the underlying joint data density, for our vector of data, y, is parameterized in terms of a vector of parameters, θ. The likelihood function (LF) is just the joint data density, p(y | θ), viewed as if it is a function of the parameters, not the data. That is, the LF is L(θ | y) = p(y | θ). We then find the value of θ that (globally) maximizes L, given the sample of data, y.

That's all fine, but what if our main interest is not in θ itself, but instead we're interested in some function of θ, say φ = f(θ)? For instance, suppose we are estimating a k-regressor linear regression model of the form:

y = Xβ + ε ;  ε ~ N[0, σ²Iₙ].

Here, θ' = (β', σ²). You'll know that in this case the MLE of β is just the OLS estimator of that vector, and the MLE of σ² is the sum of the squared residuals, divided by n (not by n - k). The first of these estimators is minimum variance unbiased, while the second is biased. Both estimators are consistent and "best asymptotically normal".

Now, what if we are interested in estimating the non-linear function, φ = f(θ) = (β₁ + β₂β₃)?
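Invariance says the MLE of φ is just f evaluated at the MLE of θ; no re-maximization is needed. Here's a hedged R sketch (toy data, names invented by me) that does the plug-in and adds a hand-rolled delta-method standard error:

```r
set.seed(1)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
y  <- 1 + 0.5 * x1 + 2 * x2 - x3 + rnorm(n)
fit <- lm(y ~ x1 + x2 + x3)

b <- coef(fit)[c("x1", "x2", "x3")]                      # (b1, b2, b3)
V <- vcov(fit)[c("x1", "x2", "x3"), c("x1", "x2", "x3")]

phi.hat <- b[1] + b[2] * b[3]       # MLE of phi, by invariance
g       <- c(1, b[3], b[2])         # gradient of f(.) at the estimates
se.phi  <- sqrt(t(g) %*% V %*% g)   # delta-method standard error
c(phi.hat, se.phi)
```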

Saturday, July 23, 2011

National Debt & Regression Models - Units of Measurement Matter!

In a recent post, titled "Debt and Delusion", Robert Shiller draws attention to a very important point amid the current bombardment of news about the debt crisis (crises?).

Referring to the situation in Greece, he comments:

 "Here in the US, it might seem like an image of our future, as public debt comes perilously close to 100% of annual GDP and continues to rise. But maybe this image is just a bit too vivid in our imaginations. Could it be that people think that a country becomes insolvent when its debt exceeds 100% of GDP?

That would clearly be nonsense. After all, debt (which is measured in currency units) and GDP (which is measured in currency units per unit of time) yields a ratio in units of pure time. There is nothing special about using a year as that unit. A year is the time that it takes for the earth to orbit the sun, which, except for seasonal industries like agriculture, has no particular economic significance.

We should remember this from high school science: always pay attention to units of measurement. Get the units wrong and you are totally befuddled.

If economists did not habitually annualize quarterly GDP data and multiply quarterly GDP by four, Greece’s debt-to-GDP ratio would be four times higher than it is now. And if they habitually decadalized GDP, multiplying the quarterly GDP numbers by 40 instead of four, Greece’s debt burden would be 15%. From the standpoint of Greece’s ability to pay, such units would be more relevant, since it doesn’t have to pay off its debts fully in one year (unless the crisis makes it impossible to refinance current debt)." .........

Friday, July 22, 2011

On the Importance of Going Global

Since being proposed by Sir Ronald Fisher in a series of papers during the period 1912 to 1934 (Aldrich, 1977), Maximum Likelihood Estimation (MLE) has been one of the "workhorses" of statistical inference, and so it plays a central role in econometrics. It's not the only game in town, of course, especially if you're of the Bayesian persuasion, but even then the likelihood function (in the guise of the joint data density) is a key ingredient in the overall recipe.

MLE provides one of the core "unifying themes" in econometric theory and practice. Many of the particular estimators that we use are just applications of MLE; and many of the tests we use are simply special cases of the core tests that go with MLE - the likelihood ratio, score (Lagrange multiplier), and Wald tests.

The statistical properties that make MLE (and the associated tests) appealing are mostly "asymptotic" in nature. That is, they hold if the sample size becomes infinitely large. There are no guarantees, in general, that MLEs will have "good" properties if the sample size is small. It will depend on the problem in question. So, for example, in some cases MLEs are unbiased, but in others they are not.

More specifically, you'll be aware that (in general) MLEs have the following desirable large-sample properties - they are:
  • (At least) weakly consistent.
  • Asymptotically efficient.
  • Asymptotically normally distributed.
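In symbols, the last of these (stated under the usual regularity conditions, with I₁(θ₀) denoting the per-observation information matrix) reads:

$$
\sqrt{n}\left(\hat{\theta}_n - \theta_0\right) \;\overset{d}{\longrightarrow}\; N\!\left(0,\; I_1(\theta_0)^{-1}\right).
$$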

Just what does "in general" mean here? ..........

Wednesday, July 20, 2011

So Much For My Bucket List!

Forty-two years ago today, on July 20, 1969 (20:17:40 UTC), Neil Armstrong stepped on to the surface of our moon.

I've had an ongoing interest in the space program since the early 1960's. I kind of grew up with it all. Then, in the summer of 1980, while attending the Joint Statistical Meetings in Houston, my friend Keith McLaren and I went on a tour of the Johnson Space Center.

Several things stand out when I think back to that visit.
  • The Apollo 11 capsule was unbelievably small, and the ceramic heat shield was burned almost right through!
  • The Mission Control room was also incredibly small! (The guide said that was everyone's first reaction.)
  • We went inside a "mock-up" of the space shuttle.
  • We walked along a catwalk above a barn of a room that was totally full of more IBM mainframe computers than you can imagine. They were part way through a 12 month long simulation in preparation for the first shuttle flight the following year.
Not surprisingly, then, one item on my bucket list was to see a shuttle launch.

Mapping the Flow of Scientific Knowledge

If you have an interest in the flow of scientific knowledge, especially across different disciplines, then you'll enjoy the Eigenfactor.org site. It provides some terrific graphical analyses of the map ("graph") of the world of scientific citations.

One thing that you'll get an insight into is the position of economics, as a discipline, relative to other sciences.

You can also use the site to get a slightly different "take" on the rankings of economics and econometrics journals, based on factors that aren't taken into account in simple citation counting. I'm referring to the so-called Eigenfactor Score.

Make sure that you check out the tabs labelled "mapping" and "well-formed".

Moritz Stefaner is responsible for the "well-formed" visualization, and he blogs here. You've seen his very creative work before, in connection with the OECD's Better Life Index, which I've discussed previously, here, here, and here.

Enjoy!
© 2011, David E. Giles


Saturday, July 16, 2011

A Bayesian and Non-Bayesian Marriage

There's no doubt that the unholy rift between Bayesian and non-Bayesian statisticians, and econometricians, is a thing of the past. And thank goodness for that, because both parties have so much to offer each other.

It wasn't that long ago that it was a case of "them or us" - you had to take sides. That was pretty much the case when I wrote my Ph.D. dissertation in Bayesian econometrics in the early 1970's. These days, most of us are pretty comfortable about using a blend of principles and techniques that are drawn from both camps. It's all about taking a flexible approach, and recognizing that sometimes you need more than one set of tools in your tool box.

Historically, we had the school I'll loosely call the "frequentists" in the blue corner, and the Bayesians in the red corner. I'm not going to try and list the key players in each group. However, the big name that comes to mind in the blue corner is Sir Ronald A. Fisher, who gave us the concept of the "likelihood function", and maximum likelihood estimation.

In the red corner, George E. P. Box has to be numbered as one of the most influential Bayesian statisticians, though his statistical interests were quite broad. Born in England, he later moved to the U.S.A., served as Director of the Statistical Techniques Research Group at Princeton University, and founded the Department of Statistics at the University of Wisconsin (Madison) in 1960.

Apart from anything else, econometricians will know George Box from Box-Jenkins time-series analysis, and the Box-Cox transformation. In addition, George penned the oldest song, "There's No Theorem Like Bayes' Theorem", in The Bayesian Songbook. See my earlier post on this here.

So, what's the interesting connection between Fisher and Box? Well, among the many professional awards that Box received was the A.S.A.'s R. A. Fisher Lectureship in 1974.

But it gets better. Box married Joan Fisher, the second of Fisher's five daughters.

I like that! 


© 2011, David E. Giles

Wednesday, July 13, 2011

After-Dinner Talks to Die For

We all know that things aren't always what they appear to be. The same is true of people. A well-known experiment from 1970 provides an interesting and entertaining illustration of this.

In the web-only content of the latest issue of Significance magazine (co-published by The American Statistical Association and The Royal Statistical Society), Mikhael Simkin summarizes the story of this experiment. What's new about his piece is that it gives us a link to a video of the fake lecture that formed the basis of the experiment at the University of Southern California School of Medicine.

The video is worth watching, especially if you have even the slightest knowledge of game theory! I won't say more - just see for yourselves.

I used to make myself available, through my university's speakers' bureau, to give talks to outside groups and organizations. Lions Clubs; the Sons of Norway; Chartered Accountants Supporting Whales and Dolphins; the South Nanaimo Spinning, Weaving and Knitting Guild; and so on. The trouble was that when I was invited to talk on (my then interest) "The Underground Economy in Canada", all they really wanted to know about was how they could get involved in the fun, preferably without being caught. (Especially the Spinners and Weavers!)

However, maybe it's time to get on the speakers' circuit again. But not for free. After all, you get what you pay for - right?

I'll bet there's a good market out there for after-dinner talks on topics such as:
  • "What Every Dental Surgeon Should Know About Unit Roots"
  • "Recent Developments in Cointegration Analysis for Recent Immigrants"
  • "Monte Carlo Simulation in the Post-Impressionist Art Movement" 
  • "Saddlepoint Approximations for Horse-Lovers"
  • "Spatial Econometrics for Realtors"
And if you think I'm kidding, I should point out that in my bookshelf I have a copy of a nice book titled Statistics for Ornithologists. It was a gift some years ago from my friend, Ken White, developer of the SHAZAM econometrics package. If that doesn't offer some possibilities, I don't know what does.

So, if you belong to a club/group/society that's looking for that different, memorable, talk - you know where to find me!

© 2011, David E. Giles

Sunday, July 10, 2011

Prosperity, Thriving, & Economic Growth

A while ago I posted a couple of pieces (here and here) relating to the Better Life Index (BLI) that the OECD released in May of this year. Not surprisingly, the BLI caught the attention of a number of bloggers.

Some of the most thoughtful posts on this topic came from the Australian economist, Winton Bates. In the last few days Winton has extended his earlier analysis and comments in a series of posts that look at other, somewhat similar, indices.

These include the Legatum Prosperity Index, which Winton correlates with the BLI here, and relates to GDP growth here. If you check these out you'll find links to his earlier posts on the BLI.

In another interesting post (here), Winton asks "Does economic growth help people to thrive?" He uses Gallup's World Poll data on "thriving", "struggling" and "suffering", and relates it to per capita GDP in different countries. The Gallup data are interesting in their own right, being based on the Cantril Self-Anchoring Striving Scale, which you can learn more about here.

If you have an interest in these various measures of "well-being", and the linkages between them and standard measures of economic output and growth (e.g., GDP), Winton's blog, "Freedom and Flourishing", is definitely worth following.

© 2011, David E. Giles

Saturday, July 9, 2011

Econometrics Without Borders?

We're all familiar with the "Doctors Without Borders" organization, and the valuable international medical work that it performs. Perhaps you didn't know that there's also a group called "Statistics Without Borders"? To quote from their web site, their mission statement is as follows:


"Statistics Without Borders (SWB) is an apolitical organization under the auspices of the American Statistical Association, comprised entirely of volunteers, that provides pro bono statistical consulting and assistance to organizations and government agencies in support of these organizations' not-for-profit efforts to deal with international health issues (broadly defined)."

It's great to see The American Statistical Association, which I've been a member of for 38 years, supporting this type of venture.

Now, a new initiative, provisionally called "Data Without Borders" (DWB), has been established by data scientist, Jake Porway. You can read about it on Porway's web site, of course, and also in a recent post on The Guardian's DataBlog here. Briefly, the aim is to match important data from not-for-profit organizations with experts in the analysis of data. According to a recent item in the Royal Statistical Society's newsletter, RSSeNEWS, there were over 300 expressions of interest, internationally, within the first 24 hours of DWB being announced.

So, don't let anyone ever tell you that being a "quant" who works with data can't be socially meaningful. Even econometricians can make a difference if we want to!



© 2011, David E. Giles

Friday, July 8, 2011

Choosing Your Co-Authors Carefully

The choice of your co-authors in academic work can be very important - you all have to be able to bring something to the table, and hopefully there'll be enough synergies to ensure that the paper (or book) is better than any of you could have achieved individually.

I must say that I've certainly been extremely fortunate with my own past and current co-authorship "liaisons".

There's plenty that could be said about deciding on the order of the authors - but I'll leave that for another post. That being said, there may be some other interesting considerations.

I was recently re-reading Stephen Hawking's A Brief History of Time. In Chapter 6 Hawking has a great story about an important quantum physics paper published in Physical Review in 1948. Two of the authors were George Gamow and his Ph.D. student, Ralph Alpher, at George Washington University. Gamow persuaded the renowned physicist Hans Bethe to join them as a co-author, simply so that the final line-up of authors was Alpher, Bethe and Gamow.

A much more detailed account of the background to the "Alpha-Beta-Gamma" paper is provided by Simon Singh, in his book Big Bang: The Origin of the Universe. The paper, titled simply 'The Origin of Chemical Elements' was (Singh, p.323) "...a milestone in the Big Bang versus eternal universe debate", and "....the first major triumph for the Big Bang model since Hubble had observed and measured the redshifts of galaxies." (Singh, p.319).

Unfortunately, there was a darker side to this story that I was unaware of until Jeremy Austin brought it to my attention in a comment to the original version of this post (see below).

However, it certainly made me think about some possibilities for interesting co-authorships in other disciplines.

For instance, ........ they lived centuries apart, and their contributions to pure mathematics don't really overlap, but wouldn't it be nice if we could time-travel and get (Fields Medal winner) Klaus Roth to team up with Michel Rolle. A kind of mathematical jam session!

Moving closer to home, I see that both Yixiao Sun and Hyungsik Roger Moon have (separately) co-authored papers with Peter Phillips. However, it seems the first two of these econometricians haven't taken the opportunity to co-author a paper together. Pity - I'd enjoy that!

And what about trying to persuade Fallaw Sowell and John Pepper to cook up some interesting econometrics for us? Or perhaps my friend John Knight could collaborate with Rohit Deo? They'd undoubtedly produce a paper that was the epitome of econometric clarity.

So, whatever your last name is, choose your co-authors carefully, and maybe even have a bit of fun in the process! 

© 2011, David E. Giles

Thursday, July 7, 2011

Alexander Aitken

Can you imagine what it would be like trying to learn and teach econometrics without the use of matrix algebra? O.K., I know that some of you are probably thinking, "that would be great!" But give it some serious thought. We'd be extremely limited in what we could do. It would be a nightmare to go beyond the absolute basics.

Only in the 1960's, with the classic texts by Johnston (1963) and Goldberger (1964), did the use of matrix algebra become standard practice in the teaching of econometrics. We used that first edition of Johnston's text in the first undergraduate econometrics course I took. Thank goodness!

Every student of econometrics is indebted to Alexander Craig Aitken (1895 - 1967) for his development of what is now the standard vector/matrix notation for the linear regression model (and its extensions). Econometricians also use the Generalised Least Squares ("Aitken") estimator when this model has a non-standard error covariance matrix.

The seminal Generalised Least Squares contribution, together with the first matrix formulation of the linear regression model, appeared in Aitken's paper, "On Least Squares and Linear Combinations of Observations", Proceedings of the Royal Society of Edinburgh, 1935, vol. 55, pp. 42-48. In this paper we find the well-known extension of the Gauss-Markov Theorem to the case where the regression error vector has a non-scalar covariance matrix - the Aitken estimator is shown to be "Best Linear Unbiased".

Aitken's most influential statistical paper was co-authored with (another New Zealander) Harold Silverstone - "On the Estimation of Statistical Parameters", Proceedings of the Royal Society of Edinburgh, 1942, vol. 61, pp. 186-194. This paper extends earlier ideas by Sir Ronald Fisher to derive (only for the unbiased case) the result that we now usually refer to as the "Cramér-Rao Inequality" for the lower bound on the variance of an estimator. Interestingly, this contribution pre-dates the 1945 work by Rao and Cramér's 1946 paper.

So who was Alexander Craig Aitken?