Tuesday, August 19, 2014

The Bracken Bower Prize

The Bracken Bower Prize is a new initiative that's intended to motivate younger authors to identify and analyse future business trends. It's an important award that could well be of interest to applied econometricians, so here are the details that were sent to me:



Introducing the Bracken Bower Prize

The Financial Times and McKinsey & Company, organisers of the Business Book of the Year Award, want to encourage young authors to tackle emerging business themes. They hope to unearth new talent and encourage writers to research ideas that could fill future business books of the year. A prize of £15,000 will be given for the best book proposal.

The Bracken Bower Prize is named after Brendan Bracken who was chairman of the FT from 1945 to 1958 and Marvin Bower, managing director of McKinsey from 1950 to 1967, who were instrumental in laying the foundations for the present day success of the two institutions. This prize honours their legacy but also opens a new chapter by encouraging young writers and researchers to identify and analyse the business trends of the future.

The inaugural prize will be awarded to the best proposal for a book about the challenges and opportunities of growth. The main theme of the proposed work should be forward-looking. In the spirit of the Business Book of the Year, the proposed book should aim to provide a compelling and enjoyable insight into future trends in business, economics, finance or management. The judges will favour authors who write with knowledge, creativity, originality and style and whose proposed books promise to break new ground, or examine pressing business challenges in original ways.
              
Only writers who are under 35 on November 11, 2014 (the day the prize will be awarded) are eligible. Entrants may already be published authors, but the proposal itself must be original and must not have been previously submitted to a publisher.

The judging panel for 2014 comprises: 
Vindi Banga, partner, Clayton Dubilier & Rice
Lynda Gratton, professor, London Business School
Jorma Ollila, chairman, Royal Dutch Shell and Outokumpu
Dame Gail Rebuck, chair, Penguin Random House, UK

The proposal should be no longer than 5,000 words – an essay or an article that conveys the argument, scope and style of the proposed book – and must include a description of how the finished work would be structured, for example, a list of chapter headings and a short bullet-point description of each chapter. In addition, entrants should submit a biography, emphasising why they are qualified to write a book on this topic. The best proposals will be published on FT.com.

The organisers cannot guarantee publication of any book by the winners or runners-up. The finalists will be invited to the November 11 dinner where the Bracken Bower Prize will be awarded alongside the Business Book of the Year Award, in front of an audience of publishers, agents, authors and business figures. Once the finalists’ entries appear on FT.com, authors will be free to solicit or accept offers from publishers. The closing date for entries is 5pm (BST) on September 30th 2014.




© 2014, David E. Giles

David Mimno on "Data Carpentry"

There's a post on David Mimno's blog today titled, "Data Carpentry".

I like it a lot, because it emphasises just how much effort, time and creativity can be required in order to get one's data in order before we can get on with the fun stuff - estimating models, testing hypotheses, making forecasts, and so on. I know that this was something that I didn't fully appreciate when I was starting my career. And when I did get the message, I found it rather irksome!

However, the message isn't going to change, so we just have to live with it, and accept the realities of working with "real" data.

In his post, David explains why he doesn't like the oft-used term "data cleaning" (which makes us sound like "data janitors"), and why he prefers the term "data carpentry". Certainly, the latter has more constructive overtones.

As he says:
"To me these imply that there is some kind of pure or clean data buried in a thin layer of non-clean data, and that one need only hose the dataset off to reveal the hard porcelain underneath the muck. In reality, the process is more like deciding how to cut into a piece of material, or how much to plane down a surface. It’s not that there’s any real distinction between good and bad, it’s more that some parts are softer or knottier than others. Judgement is critical.
The scale of data work is more like woodworking, as well. Sometimes you may have a whole tree in front of you, and only need a single board. There’s nothing wrong with the rest of it, you just don’t need it right now."
A nice post, and a very nice "take" on a crucial part of the work that we do.
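To make the point concrete, here's a toy, entirely hypothetical illustration (invented data and column names) of the kind of judgement call that "carpentry" involves - deciding how to normalise a price field that mixes units and missing-value codes before any modelling can start:

```python
# A tiny, invented example: raw gasoline prices recorded inconsistently.
# The "judgement call": values below 10 are assumed to be dollars per litre.
import pandas as pd

raw = pd.DataFrame({
    "city": ["Victoria", "Vancouver", "Calgary", "Victoria"],
    "price": ["134.9", "135.2 c/L", "n/a", "1.349"],  # mixed units and codes
})

def to_cents_per_litre(s):
    """Normalise a raw price string to cents per litre."""
    s = s.replace("c/L", "").strip()
    if s in {"n/a", ""}:
        return None                      # treat coded missing values as NA
    v = float(s)
    return v * 100 if v < 10 else v      # dollars/L vs. cents/L judgement

raw["price_cpl"] = raw["price"].map(to_cents_per_litre)
print(raw["price_cpl"].tolist())
```

Nothing here is sophisticated - which is exactly the point. The "softer or knottier" parts of the material force decisions that no hosing-off will make for you.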


© 2014, David E. Giles

Thursday, August 14, 2014

Early Computing

During a recent visit to Harvard U. I came across this monstrosity - er, thing of beauty - in the Science Center - the building where the Dept. of Statistics is housed:



It's an "Aiken-IBM Automatic Sequence Controlled Calculator - I", and dates from 1944. (Obviously, those other people in the picture liked it a lot too - I couldn't get them out of the way! But they do give you a sense of scale.)

From left to right, the main components of the calculator are:
  1. Panel of 60 constants.
  2. Unit of 72 storage counters.
  3. The multiply/divide unit.
  4. Fractional counters.
  5. Interpolators - 1, 2, 3.
  6. Sequence control.
  7. Typewriters (barely visible).
  8. Card feed (out of sight).
  9. Card punch (even more out of sight!).
It's about the size of a stretch limo, and a great piece of computing history.

It reminded me of "The Monkey Run" - ah, those were the days!


© 2014, David E. Giles

Tuesday, August 12, 2014

The A. R. Bergstrom Prize in Econometrics, 2015

A. R. (Rex) Bergstrom can reasonably be regarded as the "father" of modern econometrics in New Zealand. He made seminal contributions to continuous-time econometrics and the finite-sample properties of simultaneous equations estimators. He also taught and inspired a generation of young econometricians.

You can read Peter Phillips' 1988 "ET Interview" with Rex here.

Rex's contributions are recognized by a biennial award, named in his honour. Applications and nominations for the 11th A. R. Bergstrom Prize are now being solicited:




© 2014, David E. Giles

Monday, August 11, 2014

A Trio of Texts

"A Trio of Texts" refers to three free econometrics e-texts made available by Francis Diebold, at U. Penn. Francis blogs at No Hesitations.

The three books in question are:
  • Econometrics (undergraduate level)
  • Forecasting (upper-level undergraduate/graduate level)
  • Time Series Econometrics (graduate level)
Accompanying slides, data, and code are also available, and the material is updated regularly.

The material is of an extremely high quality, and I strongly recommend all three books.



© 2014, David E. Giles

Wednesday, August 6, 2014

Wrapping up at the JSM

Today was my last day at the 2014 JSM in Boston - unfortunately I'm unable to stay on for tomorrow's sessions.

As always, it was a terrific meeting - extremely well organized, and with a huge number of interesting papers from a very diverse range of contributors. There were too many highlights to cover here. Suffice to say that I'll be going home with a lot of material that I'll be wanting to follow up in the coming months.

In addition to catching up with various professional acquaintances, it was great to meet some new people. At the top of my list in this respect were Gareth Thomas and Glenn Sueyoshi, from EViews.

Among the things that we discussed were some of the new features that will be included in EViews 9, when it is released. My lips are sealed, of course, but rest assured that followers of this blog will enjoy these new features a lot!

So, now it's back to reality...


© 2014, David E. Giles

Tuesday, August 5, 2014

My Talk at the JSM

Tomorrow morning (Wednesday 6 August) I'll be presenting at the Joint Statistical Meetings in Boston.

The title for my talk is "Modelling Asymmetries in the Market for Gasoline in Western Canada", and it's based on some research that I have underway on this topic. 

The question that's addressed in this work is: "Are the upward and downward movements in retail gasoline prices, that follow increases and decreases in crude oil prices, symmetric?" There's a common perception that the "flow-through" from oil prices to gasoline prices is faster when prices are rising than when they are falling. I use ARDL models with an explicit allowance for possible asymmetry to test this hypothesis.

I'll have more to say on this when the work is completed, but in the meantime you can see some partial results by downloading the slides for my talk here.


© 2014, David E. Giles

The 7 Pillars of Statistical Wisdom

Yesterday, Stephen Stigler presented the (ASA) President's Invited Address to an overflow and appreciative audience at the 2014 Joint Statistical Meetings in Boston. The title of his talk was, "The Seven Pillars of Statistical Wisdom".

I'd been looking forward to this presentation by our foremost authority on the history of statistics, and it surpassed my (high) expectations.

The address will be published in JASA at some future date, and I urge you to read it when it appears. In the meantime, here are the "seven pillars" - the supporting pillars of statistical science - with some brief comments:

Monday, August 4, 2014

Estimation & Accuracy After Model Selection

This was the title of Brad Efron's invited paper at the 2014 Joint Statistical Meetings in Boston this morning. It was a great presentation, with excellent discussants - Lan Wang, Lawrence Brown, and Soumendra Lahiri.

The paper and discussion are scheduled to appear in the September 2014 issue of JASA.

A lot of what Brad and his discussants had to say related to one of the main points in one of my recent posts. Namely, if you search for a model specification, then this affects all of your subsequent inferences - and usually in a rather complicated way. Typically, even after searching for a preferred model, we tend to "pretend" that we haven't done this, and that the model's form was known from the outset. Naughty! Naughty!

What Brad has done is to address the "pre-test" issue in a rather nice way. You won't be surprised to learn that bootstrapping features heavily in the methodology that he's developed. Using two examples - one non-parametric, and one parametric - he showed how to take account of model selection via Mallows' Cp statistic, and the lasso (respectively), when constructing regression confidence intervals.

One important feature of his analysis involves "smoothing" the results to take account of the discontinuities that are inherent in model selection. Although it wasn't mentioned in Brad's talk, these discontinuities are the source of some of the most important problems associated with pre-testing in general. For example, traditional pre-test estimators of regression coefficients (based, say, on a prior test of linear restrictions on those coefficients) are inadmissible under a range of standard loss functions. This inadmissibility is entirely due to the fact that these pre-test estimators are discontinuous functions of the random sample data.
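To convey the flavour of the smoothing idea - and this is a stylised sketch of my own, not Brad's actual procedure - one can select a model by a Cp-style criterion on each bootstrap resample and then average the selected-model estimates. The average is the "smoothed" (bagged) estimator, and the bootstrap spread gives a standard error that reflects the selection step:

```python
# Stylised bootstrap smoothing of a post-model-selection estimator.
import numpy as np

rng = np.random.default_rng(1)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.1 * x2 + rng.normal(size=n)   # x2 has only a weak effect
X_full = np.column_stack([np.ones(n), x1, x2])

def select_and_fit(y, X_full):
    """Pick the small or full model by a Cp-style criterion;
    return the coefficient vector of the winner."""
    n, p = X_full.shape
    beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
    sigma2 = np.sum((y - X_full @ beta_full) ** 2) / (n - p)  # full-model variance
    best_cp, best_beta = np.inf, None
    for cols in ([0, 1], [0, 1, 2]):                  # drop x2, or keep it
        beta, *_ = np.linalg.lstsq(X_full[:, cols], y, rcond=None)
        rss = np.sum((y - X_full[:, cols] @ beta) ** 2)
        cp = rss + 2.0 * len(cols) * sigma2           # Mallows' Cp, up to a constant
        if cp < best_cp:
            best_cp, best_beta = cp, beta
    return best_beta

B = 200
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)                       # pairs bootstrap resample
    boot[b] = select_and_fit(y[idx], X_full[idx])[1]  # x1 coefficient after selection

smoothed, se = boot.mean(), boot.std(ddof=1)          # bagged estimate and its spread
print(smoothed, se)
```

Because the selection is re-done on every resample, the resulting standard error incorporates the variability induced by the model-selection step - the variability that the usual "pretend we knew the model" intervals ignore.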

All in all it was a great session, with some nice take-away quotes:

  • "The discussants actually discussed my paper."
  • "Simulations are hard to do."
  • "Model averaging is perfectly easy to do, but model selection is not."

I took some comfort from the last two of these comments!


© 2014, David E. Giles

Saturday, August 2, 2014

Correlation and Causation

A hat-tip to Judea Pearl, whose e-newsletter alerted me to this interesting post on the EvaluationHelp blog. It shows the original sixteen diagrams in Sewall Wright's classic 1921 paper on correlation and causality.

Philip and Sewall Wright were responsible for seminal contributions to the basic notions of instrumental variables estimation and parametric identification, though there is still some debate over their relative contributions to these important concepts.


Reference

Wright, S. (1921). Correlation and causation. Part I: Method of path coefficients. Journal of Agricultural Research, 20, 557-585.

© 2014, David E. Giles