Friday, August 30, 2013

Right-Tail Augmented Dickey-Fuller Tests in EViews

A few days ago, I received an email from Itamar Caspi, a regular follower of this blog. Itamar has developed a really nice EViews "Add-In" package that facilitates the application of "Right-Tail Augmented Dickey-Fuller" tests.

His Add-in package, named rtadf, covers four tests:

1. ADF.
2. Rolling ADF.
3. Sup ADF (SADF); see Phillips et al. (2011).
4. Generalized SADF (GSADF); see Phillips et al. (2013).
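For readers curious about the mechanics, here's a minimal Python sketch of the sup ADF idea (this is not Itamar's EViews add-in, and the right-tail critical values, which must be simulated, are omitted): the ADF t-statistic is computed over forward-expanding windows and the supremum is taken, in the spirit of Phillips et al. (2011).

```python
import numpy as np

def adf_tstat(y, lags=1):
    """OLS t-statistic on rho in: dy_t = a + rho * y_{t-1} + sum_j b_j * dy_{t-j} + e_t."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    Y = dy[lags:]
    cols = [np.ones_like(Y), y[lags:-1]]                    # constant and lagged level
    cols += [dy[lags - j:-j] for j in range(1, lags + 1)]   # lagged differences
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    s2 = resid @ resid / (len(Y) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

def sadf(y, r0, lags=1):
    """Supremum of ADF statistics over forward-expanding windows, minimum window r0."""
    return max(adf_tstat(y[:r], lags) for r in range(r0, len(y) + 1))

# A series with a unit root that turns mildly explosive (a "bubble") near the end
rng = np.random.default_rng(42)
n = 200
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    phi = 1.04 if t >= 150 else 1.0
    y[t] = phi * y[t - 1] + e[t]

stat = sadf(y, r0=40)
print("SADF statistic:", stat)  # the right-tail test rejects for large positive values
```

The GSADF extends this by letting the start of the window vary as well as the end, which is what makes it better at catching multiple bubble episodes.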

Thursday, August 29, 2013

New Econometrics Journal

There's a relatively new econometrics journal on the block. The Journal of Econometric Methods published its second issue last month.

The Editors are Jason Abrevaya, Bo Honore, Atsushi Inoue, Jack Porter, and Jeffrey Wooldridge. The Aims and Scope of the journal are described as follows:

"The Journal of Econometric Methods welcomes submissions in theoretical and applied econometrics of direct relevance to empirical economics research. The journal aims to bridge the widening gap between econometric research and empirical practice. We aim to publish papers from top scholars in econometrics, but submissions must (i) consider a topic of broad interest to practitioners and (ii) be written in a style that is targeted at practitioners. Subject to these requirements, the journal will consider submissions in all areas of econometrics.
We will not consider submissions that are application-specific. While econometric methodology should be thoroughly illustrated with empirical data, such methodology should be useful above and beyond the specific application considered."
The articles published to date are of a very high standard, and I'm looking forward to seeing more.

© 2013, David E. Giles

Monday, August 26, 2013

From My Reading List...........

Here are a few of the papers that I've been reading over the past week or so:
  • Amisano, G. and J. Geweke, 2013. Prediction using several macroeconomic models. Working Paper Series No. 1357, European Central Bank.
  • Arnold, B. C. and H. K. T. Ng, 2011. Flexible bivariate beta distributions. Journal of Multivariate Analysis, 102, 1194-1202.
  • Johansen, S. and B. Nielsen, 2013. Outlier detection in a regression using an iterated one-step approximation to the Huber-skip estimator. Econometrics, 1, 53-70.
  • Magnus, J. R. and A. L. Vasnev, 2013. Practical use of sensitivity in econometrics with an illustration to forecast combinations. B. A. Working Paper No. 04/2013, The University of Sydney Business School, University of Sydney.
  • Pendakur, K. and S. Sperlich, 2010. Semiparametric estimation of consumer demand systems in real expenditure. Journal of Applied Econometrics, 25, 420-457.
  • Solon, G., S. J. Haider, and J. Wooldridge, 2013. What are we weighting for? Working Paper No. 18859, National Bureau of Economic Research.

© 2013, David E. Giles

Friday, August 23, 2013

Should I do a Ph.D.?

When it comes to discussions with students, there's one question that's a "hardy perennial". There's no simple answer, whatever your discipline, but Tim Hopper is tackling the question, Should I do a Ph.D.?, with a series of interviews.

The first interview is with John Cook, whose blog, The Endeavour, I like to follow. So, I was especially interested in what John had to say. He comes at the question as a formally trained mathematician with broad "real world" experience, and I found myself nodding appreciatively at most of what he had to say.

Tim provides links to some other related material that addresses the important question that he's raised, and I'm looking forward to seeing his upcoming interviews on this topic.

© 2013, David E. Giles

Thursday, August 22, 2013

Tableau Public

There are lots of nice data resources out there - I've mentioned Quandl, DataZoa, and Knoema in the past.

I've recently been taking a look at Tableau Public, and it's certainly an interesting (and free) data visualization tool.

I liked this example from their Gallery - "Breaking the Law - Canadian Style"

© 2013, David E. Giles

Forecasting From Log-Linear Regressions

I was in (yet another) session with my analyst, "Jane", the other day, and quite unintentionally the conversation turned, once again, to the subject of "semi-log" regression equations.

After my previous discussion with her about this matter, I've tried to stay on the straight and narrow. It's better for my blood pressure, apart from anything else! Anyway, somehow we got back to this topic, and she urged me to get some related issues off my chest. This is therapy, after all!

Right at the outset, let me state quite categorically that lots of people estimate semi-logarithmic regressions for the wrong reasons. I'm not condoning what they do. That's their problem - I have enough of my own! However, if they're going to insist on doing this, then I'm going to insist that they be consistent when it comes to using their estimated model for forecasting purposes!

I mean, we wouldn't use an inconsistent estimator, would we? So why should we put up with an inconsistent modeller?

Let me tell you all about it.........
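The point at issue can be seen in a few lines of Python (simulated data, illustrative numbers only). If ln(y) = β0 + β1x + ε with normal errors, then simply exponentiating the fitted values systematically under-predicts E[y|x]; the familiar consistent fix adds half the estimated error variance before exponentiating.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
x = rng.uniform(0.0, 2.0, n)
sigma = 0.8
log_y = 1.0 + 0.5 * x + rng.normal(0.0, sigma, n)   # the "true" semi-log DGP
y = np.exp(log_y)

# OLS on the log scale
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
resid = log_y - X @ beta
s2 = resid @ resid / (n - 2)                         # estimated error variance

naive = np.exp(X @ beta)                # just exponentiating the fitted values
corrected = np.exp(X @ beta + s2 / 2)   # lognormal correction: E[y|x] = exp(x'b + sigma^2/2)

print("mean(y):          ", y.mean())
print("naive forecast:   ", naive.mean())     # systematically too low
print("corrected forecast:", corrected.mean())
```

The correction assumes normal errors on the log scale; with non-normal errors a smearing-type adjustment would be needed instead, but the message is the same: be consistent about which scale you're forecasting on.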

Tuesday, August 20, 2013

Knoema Data Site Update

Last year I had a short post that mentioned the Knoema data site.

I had a message from Olga today, mentioning that there have been some big developments with that site in recent months. These include their World Data Atlas. It's definitely worth taking a look at this resource.

© 2013, David E. Giles

Friday, August 16, 2013

Comic-Book Econometrics

We often hear the term "Cookbook Econometrics" - usually used in a derogatory sense - and I've posted on this topic in the past. "Comic-Book Econometrics" is something completely different. It relates to a particular econometrics computing package, and a comic that I keep in my office.

No connection? Let's see......

Thursday, August 15, 2013

On the Value of Econometric Training

I received the following piece from Angelo Melino today. I hadn't seen it before, and I just loved it! 

Angelo posts this for the students in his M.A. econometrics course, and I think it deserves to be brought to the attention of all of our students. You just never know!

Tuesday, August 13, 2013

A Must-Read Post for Econometrics Students

Confidence Intervals - and how to interpret them correctly?

Hopefully, the correct interpretation was emphasized, again and again, by your instructor in Intro Stats 100. Even so, it's alarming to see how many grad students (and faculty!) still get it wrong.

The concept of a confidence interval was first introduced by Jerzy Neyman and his co-authors. I've posted about some of the history of this here, and noted that even a future Nobel Laureate didn't "get it" when Neyman presented the concept at a seminar!

With this in mind, a really nice post appeared on the Statistical Research blog today. Its title is, When Discussing Confidence Level With Others, and I strongly recommend it.
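If the words don't stick, a simulation usually does. Here's a small Python sketch (treating the error variance as known, for simplicity) of what "95% confidence" actually means: the parameter is fixed, the interval is random, and in repeated samples about 95% of the intervals cover the true mean.

```python
import numpy as np

# Simulate repeated sampling: mu is fixed; each sample produces a different interval.
rng = np.random.default_rng(7)
mu, sigma, n, reps = 10.0, 2.0, 25, 10_000
z = 1.96  # 95% normal critical value (known error variance, for simplicity)

samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)
half_width = z * sigma / np.sqrt(n)

# Fraction of the 10,000 random intervals that contain the one fixed, true mean:
coverage = np.mean((xbar - half_width <= mu) & (mu <= xbar + half_width))
print("Empirical coverage:", coverage)  # close to 0.95
```

Note what the simulation does not say: it does not say that any single computed interval has a 95% probability of containing the parameter. Once the interval is computed, it either covers or it doesn't.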

© 2013, David E. Giles

Saturday, August 10, 2013

Large and Small Regression Coefficients

Here's a trap that newbies to regression analysis have been known to fall into. It involves comparing the numerical values of the point estimates of regression coefficients, and drawing conclusions that may not actually be justified.

What I have in mind is the following sort of situation. Suppose that Betsy (name changed to protect the innocent) has estimated a regression model that looks like this:

               Y = 0.4 + 3.0X1 - 0.7X2 + 6.0X3 + ..... + residual.

Betsy is really proud of her first OLS regression, and tells her friends that "X3 is two times more important in explaining Y than is X1" (or words to that effect).

Putting to one side such issues as statistical significance (I haven't reported any standard errors for the estimated coefficients), is Betsy entitled to make such a statement - based on the earth-shattering observation that "six equals three times two"?
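A quick Python sketch (with made-up data) shows why not: the size of a coefficient is at the mercy of the units of measurement. Rescale a regressor, and its coefficient rescales inversely, while the fit is completely unchanged.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
x3 = rng.normal(size=n)
y = 0.4 + 3.0 * x1 + 6.0 * x3 + rng.normal(size=n)

def ols(y, *cols):
    X = np.column_stack([np.ones(len(y)), *cols])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

b = ols(y, x1, x3)
b_cents = ols(y, x1, x3 / 100.0)  # same data, X3 now measured in "cents" rather than "dollars"

print("coef on X3 (dollars):", b[2])
print("coef on X3 (cents):  ", b_cents[2])  # exactly 100 times larger; identical fit
```

One common (if imperfect) remedy is to compare standardized "beta" coefficients, i.e., each slope multiplied by sd(Xj)/sd(Y), which removes the units from the comparison.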

NBER-NSF Time-Series Conference

The 2013 NBER-NSF Time Series Conference is being hosted by the Federal Reserve Board, in Washington D.C. next month. You can read about this event here.

Even a cursory look at the conference program will convince you that there are some really interesting looking papers being presented by some top people at this conference.

Check out the program, and you'll see what I mean. I'm going to be contacting several of the authors for access to their papers and talks. 

© 2013, David E. Giles

Friday, August 9, 2013

In Praise of a Good Abstract

When you're writing up your research, it's a good idea to keep in mind that a lot of the potential readers of your exciting new paper are going to be busy people. I'm not talking about journal editors and referees - they're busy too, but they have an obligation to read your paper carefully. 

The rest of us have no such obligation, so you have to convince us that your research results are as interesting and important to us as they are to you.

I read a lot of papers dealing with econometrics and various areas of statistics. I also "pass over" even more papers that come my way via emails, web pages, and the like. 

Sometimes I'm following particular researchers/authors because I know from past experience that their work will be of interest to me. Otherwise, the title might catch my eye, and then I'll go as far as reading the abstract, and maybe the concluding section. Depending on the impression I've gained by that stage, I may or may not read the paper itself.

I think that, in this respect, I'm pretty typical of most of my colleagues. So, that's why the abstract of your paper is crucially important.

Tuesday, August 6, 2013


dataZoa

You may have noticed that in the past few of days some promotional links for dataZoa have appeared in the lower part of the right side-bar of this blog page.

Usually, I don't go in for this sort of thing. However, I decided to make an exception in this case. Just so you know - I was not asked to do this, I'm getting nothing out of this, and I have no financial interest in dataZoa.

I simply wanted to share information about a really, really, nice resource. So, let me tell you just a little about it, and then you can go and explore dataZoa™ for yourselves.

The Stats Chat Blog

Recently, I've begun following the Stats Chat blog. Run by the Department of Statistics at the University of Auckland - the largest statistics department in New Zealand or Australia (and the birthplace of R) - this blog apparently started in April of this year.

Its aim is:
"to foster discussion of data around us, particularly in the media, and build an archive of resources for the general public, journalists and teachers".
Although the posts may appear to have a heavy N.Z. orientation, the topics covered are definitely of interest to a broader audience. I especially recommend this blog to students with an interest in statistics and/or econometrics.

You'll see interesting material, presented by members of a talented statistics group that has a long history of excellence.


I love the Department's (and blog's) by-line:

"Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write." – H.G. Wells.

© 2013, David E. Giles

Monday, August 5, 2013

Great Data Charts Using WebGL

I had an email today from Matt Hergott, who wrote:
"I notice you place an emphasis on charts and graphs. Many analyses could be helped by your suggestion that software offer charts of the data before running a regression. Along these lines, you might want to look at my new website: .
It contains four interactive three-dimensional scenes pertaining to econometrics and finance. The first graph is a simulation that makes it easy to spot outliers in a regression with two explanatory variables. The 3-D interactivity is important because people generally need different perspectives to see where the residuals are located.
I programmed these charts in JavaScript and WebGL. Now that Microsoft has decided to include WebGL in the upcoming Internet Explorer 11, it means that all major desktop browsers will support WebGL in the near future. This could open up a new frontier in the communication of quantitative concepts and results."
Matt's charts are really impressive. I must confess I really didn't know anything about WebGL (my loss) until he brought it to my attention.

It looks as if there are exciting times ahead!

© 2013, David E. Giles

Saturday, August 3, 2013

Unbiased Model Selection Using the Adjusted R-Squared

The coefficient of determination (R2), and its "adjusted" counterpart, really don't impress me much! I often tell students that this statistic is one of the last things I look at when appraising the results of estimating a regression model.

Previously, I've had a few things to say about this measure of goodness-of-fit  (e.g., here and here). In this post I want to say something positive, for once, about "adjusted" R2. Specifically, I'm going to talk about its use as a model-selection criterion.
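The key fact behind that discussion can be checked in a few lines of Python (simulated data, illustrative only): ranking competing models for the same dependent variable by adjusted R2 is exactly the same as ranking them, inversely, by s2, the unbiased estimator of the error variance.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)          # an irrelevant regressor
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

def fit(y, *cols):
    """Return (s2, adjusted R-squared) for OLS of y on a constant and cols."""
    nn = len(y)
    X = np.column_stack([np.ones(nn), *cols])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (nn - X.shape[1])     # unbiased error-variance estimator
    tss = ((y - y.mean()) ** 2).sum()
    adj_r2 = 1.0 - s2 / (tss / (nn - 1))       # adjusted R2 = 1 - s2 / (TSS/(n-1))
    return s2, adj_r2

s2_a, adj_a = fit(y, x1)          # the "true" specification
s2_b, adj_b = fit(y, x1, x2)      # with the irrelevant regressor added

print(s2_a, adj_a)
print(s2_b, adj_b)
# Whichever model has the smaller s2 necessarily has the larger adjusted R2.
```

Because (TSS/(n-1)) is the same for every model with the same dependent variable, maximizing adjusted R2 and minimizing s2 are one and the same rule, which is why the adjusted measure, unlike raw R2, can legitimately serve as a model-selection criterion.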

This Job is Killing Me!

The Indeed website (available for a number of countries) is a well-known clearing house for jobs and job-seekers. Trolling for opportunities there earlier today, I decided to take a peek at trends in job postings relating to econometrics and the funeral business. (At my age, such associations come easily!)

So, based on data for just the U.S., here is what I found. First, I looked at the number of posted jobs (as a percentage of all postings). The trends since the beginning of 2011 are interesting!

Next, I looked at the % growth rates in the job postings since 2005:

Perhaps a cointegration analysis would be in order!

© 2013, David E. Giles

Friday, August 2, 2013

Allocation Models With Autocorrelated Errors

Not too long ago, I had a couple of posts about "allocation models" (here and here). These models are systems of regression equations in which there is a constraint on the data for the dependent variables for the equations. Specifically, at every point in the sample, these variables sum exactly to the value of a linear combination of the regressors. In practice, this linear combination usually is very simple - it's just one of the regressors.

So, for example, suppose that the dependent variables measure the shares of Canada's exports that go to different countries. These shares must sum to one at every observation. If we have an intercept (a series of "ones") in each equation, then we have an allocation model.

In one of the comments on the earlier posts, I was asked about the possibility of autocorrelated errors in the empirical example that I provided. In my response, I noted that if autocorrelation is present, and is allowed for in the estimation of the model, then special care is needed. In particular, any modification to the model, to allow for a specific form of autocorrelation, must satisfy the "adding up" constraints that are fundamental to the allocation model.

Let's see what this involves, in practice.
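As a warm-up, here's a small Python sketch (simulated shares, no autocorrelation correction) of the adding-up property itself: when the same regressors, including an intercept, appear in every equation, equation-by-equation OLS automatically delivers intercepts that sum to one and slopes that sum to zero across equations.

```python
import numpy as np

rng = np.random.default_rng(11)
n, m = 200, 3                          # observations, number of share equations
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])   # the same regressors appear in every equation

# Construct dependent variables (shares) that sum to one at every observation
raw = rng.uniform(0.1, 1.0, size=(n, m))
S = raw / raw.sum(axis=1, keepdims=True)

# Equation-by-equation OLS: one column of B per share equation
B = np.linalg.lstsq(X, S, rcond=None)[0]

print("Intercepts sum to:", B[0].sum())   # 1.0, exactly
print("Slopes sum to:    ", B[1].sum())   # 0.0, exactly
```

This works because OLS is linear in the dependent variable, so summing the fitted equations reproduces the regression of a column of ones on the regressors, which fits perfectly. Any AR(1)-type correction bolted onto the errors must preserve this exact adding-up, which is precisely the "special care" referred to above.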