Sunday, December 2, 2018

December Reading for Econometricians

My suggestions for papers to read during December:


© 2018, David E. Giles

Tuesday, November 27, 2018

More Long-Run Canadian Economic Data

I was delighted to hear recently from former grad. student, Ryan Macdonald, who has worked at Statistics Canada for some years now. Ryan has been kind enough to draw my attention to all sorts of interesting items from time to time (e.g., see my earlier posts, here and here).

I always appreciate hearing from him.

His latest email was prompted by my post, A New Canadian Macroeconomic Database.

Ryan wrote:
"I saw your post on long run data and thought you might be interested in a couple of other long-run datasets for your research.  If I remember correctly you are familiar with the GDP/GNI series, Long-run Real Income EstimatesI also added the long-run Bank of Canada commodity price series that go back to 1870 to it.  There is also a dataset for the provinces with estimates going back to 1950 or 1926 depending on the variable: Long-run Provincial and Territorial Data ."
Thanks for this information, Ryan. This will be very helpful, and I'd be more than happy to publicize any further such developments.

© 2018, David E. Giles

Thursday, November 22, 2018

A New Canadian Macroeconomic Database

Anyone who's undertaken empirical macroeconomic research relating to Canada will know that there are some serious data challenges that have to be surmounted.

In particular, getting access to long-term, continuous, time series isn't as easy as you might expect.

Statistics Canada has been criticized frequently over the years by researchers who find that crucial economic series are suddenly "discontinued", or are re-defined in ways that make it extremely difficult to splice the pieces together into one meaningful time-series.

In recognition of these issues, a number of efforts have been made to provide Canadian economic data in forms that researchers need. These include, for instance, Boivin et al. (2010), Bedock and Stevanovic (2017), and Stephen Gordon's on-going "Project Link".

Thanks to Olivier Fortin-Gagnon, Maxime Leroux, Dalibor Stevanovic, and Stéphane Surprenant we now have an impressive addition to the available long-term Canadian time-series data. Their 2018 working paper, "A Large Canadian Database for Macroeconomic Analysis", discusses their new database and illustrates its usefulness in a variety of ways.

Here's the abstract:
"This paper describes a large-scale Canadian macroeconomic database in monthly frequency. The dataset contains hundreds of Canadian and provincial economic indicators observed from 1981. It is designed to be updated regularly through (the) StatCan database and is publicly available. It relieves users to deal with data changes and methodological revisions. We show five useful features of the dataset for macroeconomic research. First, the factor structure explains a sizeable part of variation in Canadian and provincial aggregate series. Second, the dataset is useful to capture turning points of the Canadian business cycle. Third, the dataset has substantial predictive power when forecasting key macroeconomic indicators. Fourth, the panel can be used to construct measures of macroeconomic uncertainty. Fifth, the dataset can serve for structural analysis through the factor-augmented VAR model."
Note - these are monthly data! And they're freely available. Although the paper doesn't appear to provide the source for accessing the data, Dalibor kindly pointed out to me that there's a download link here, on his webpage. This link will give you the data in spreadsheet form, together with all of the necessary background information.
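Once you've downloaded the spreadsheet, getting the series into a statistical environment is straightforward. Here's a minimal sketch in Python - the file name and sheet layout are my own assumptions, so adjust them to match the actual download:

```python
# A minimal sketch of loading the database, assuming you've saved the
# spreadsheet locally. "LCDMA.xlsx" is a hypothetical file name - use
# whatever the actual download is called, and check which column holds
# the dates.
import pandas as pd

df = pd.read_excel("LCDMA.xlsx", sheet_name=0, index_col=0, parse_dates=True)

print(df.shape)        # hundreds of monthly indicators, observed from 1981
print(df.columns[:5])  # a peek at the first few series names
```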

The only slight concern that I have about this resource - and I don't want to sound ungrateful - is the issue of the updating of the data over time. You'll note from the abstract that the database "...... is designed to be updated regularly through (the) StatCan database....". Given my comments (above) about some of the issues that we've all faced for a very long time when it comes to StatCan data, I know that updating this new database on a regular basis is going to be a bit of a challenge.

Added 8 March 2019: I'm glad to learn that a new update of the database is now available here.

However, let's not let this concern detract from the considerable benefits that we'll all derive from having access to this rich set of Canadian macroeconomic time-series.

Thanks, again, to the authors for constructing this database, and for making it freely available!

References

Bedock, N. & D. Stevanovic, 2017. An empirical study of credit shock transmission in a small open economy. Canadian Journal of Economics, 50, 541-570.

Boivin, J., M. Giannoni, & D. Stevanovic, 2010. Monetary transmission in a small open economy: more data, fewer puzzles. Technical report, Columbia Business School, Columbia University.

Fortin-Gagnon, O., M. Leroux, D. Stevanovic, & S. Surprenant, 2018. A large Canadian database for macroeconomic analysis. CIRANO Working Paper 2018s-25.

Gordon, S., 2018. Project Link - Piecing together Canadian economic history. Département d'économique, Université Laval.

© 2018, David E. Giles

Wednesday, November 14, 2018

More Sandwiches, Anyone?

Consider this my Good Deed for the Day!

A re-tweet from a colleague whom I follow on Twitter brought an important paper to my attention. I thought I'd share it more widely.

The paper is titled, "Small-sample methods for cluster-robust variance estimation and hypothesis testing in fixed effects models", by James Pustejovsky (@jepusto) and Beth Tipton (@stats_tipton). It appears in the Journal of Business & Economic Statistics.

You can tell right away, from its title, that this paper is going to be a must-read for empirical economists. And note the words, "Small-sample" in the title - that sounds interesting.

 Here's a compilation of Beth's six tweets:

Monday, November 5, 2018

Econometrics Reading for November

In between raking leaves and dealing with some early snow, I've put together this list of suggested reading for you:
  • Beckert, W., 2018. A note on specification testing in some structural regression models. Mimeo., Department of Economics, Mathematics and Statistics, Birkbeck College, University of London.
  • Clarke, D., 2018. A convenient omitted bias formula for treatment effect models. Economics Letters, in press.
  • Liu, Y. & Y. Rho, 2018. On the choice of instruments in mixed frequency specification tests. Mimeo., School of Business and Economics, Michigan Technological University.
  • Lütkepohl, H., A. Staszewska-Bystrova, & P. Winker, 2018. Constructing joint confidence bands for impulse response functions of VAR models - A review. Lodz Economic Working Paper 4/2018, Faculty of Economics and Sociology, University of Lodz.
  • Richardson, A., T. van Florenstein Mulder, & T. Vehbi, 2018. Nowcasting New Zealand GDP using machine learning algorithms.
  • Słoczyński, T., 2018. A general weighted average representation of the ordinary and two-stage least squares estimands. Mimeo., Department of Economics, Brandeis University.

© 2018, David E. Giles

Tuesday, October 9, 2018

The Refereeing Process in Economics Journals

The peer-review process is an essential part of academic publishing. We use it in the hope of ensuring the honesty, novelty, importance, and timeliness of published research. The selection of (usually anonymous) referees by a representative of the journal to which a research paper has been submitted for consideration, and the preparation of the reports/reviews by those referees, are key steps in the overall process of the dissemination of research results.

There are several different "models" when it comes to the refereeing, or peer-review process. Some of these have been described and compared recently, and in detail, here. It's also interesting to note that peer-reviewing is actually a relatively recent phenomenon in most academic disciplines.

There's no doubt that a well-crafted referee's report is a blessing - to both the recipient author and the handling Editor/Associate Editor/Editorial Board member who's looking to that report for an informed basis for making an editorial decision.

Unfortunately, such reports are not necessarily the norm in Economics/Econometrics - more on this below!

I know this is so, all too well - not only from the times when, as an author, I've been "on the receiving end" of some decidedly unhelpful reports; but also (and much more importantly) from my experiences on the other side of the fence, as a "handling editor" for quite a number of economics, econometrics, and statistics journals.

Some would say that the academic publishing process is a bit of a crap-shoot. At times, I think that there's some truth to that. However, there's a great deal that both authors and referees can do to make the exercise more rational. 

Wednesday, October 3, 2018

A Shout-Out for The Replication Network

In May 2015 I posted about the newly-formed The Replication Network (TRN). Since then, their team has been extremely busy promoting and fostering their objectives to serve "...... as a channel of communication to (i) update scholars about the state of replications in economics, and (ii) establish a network for the sharing of information and ideas." TRN's "..... goal is to encourage economists and their journals to publish replications."

And they're doing a great job!

As a member of TRN I receive email newsletters from them regularly. I thought I'd share the one that I received this morning, in the hope that it might encourage some of you to become TRN members.

Here it is:

Monday, October 1, 2018

Essential Fall Reading

  • Buono, D., G. Kapetanios, M. Marcellino, G. Mazzi, & F. Papailias, 2018. Big data econometrics - Nowcasting and early estimates. Working Paper No. 82, Baffi Carefin Centre for Applied Research on International Markets, Banking, Finance, and Regulation, Bocconi University.
  • Fair, R. C., 2018. Information content of DSGE forecasts. Mimeo.
  • Lewbel, A., 2018. The identification zoo - Meanings of identification in econometrics. Forthcoming, Journal of Economic Literature.
  • Pretis, F., J. J. Reade, & G. Sucarrat, 2018. Automated general-to-specific (GETS) regression modeling and indicator saturation for outliers and structural breaks. Journal of Statistical Software, 86, 3.
  • Woodruff, R. S., 1971. A simple method for approximating the variance of a complicated estimate. Journal of the American Statistical Association, 66, 411-414.
  • Zhang, R. & N. H. Chan, 2018. Portmanteau-type tests for unit-root and cointegration. Journal of Econometrics, in press.
© 2018, David E. Giles

Thursday, September 20, 2018

Controlling My Heating Bill Using Bayesian Model Averaging

Where we live, in rural Ontario, we're not connected to "natural gas". Our home furnace runs on propane, and a local supplier sends a tanker to refill our propane tanks on a regular basis during the colder months.

Earlier this month we had to make a decision regarding our contract with the propane retailer. Should we opt for a delivery price that can vary, up or down, throughout the coming fall and winter; or should we "lock in" at a fixed delivery price for the period from October to May of next year?

Now, I must confess that my knowledge of the propane industry is slight, to say the least. I decided that a basic analysis of the historical propane price data might provide some insights to assist in making this decision. It also occurred to me, after doing this, that the analysis that I went through might be of interest to readers, as a simple exercise in forecasting using Bayesian model averaging.

Here are the details...........
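In outline, the idea is to fit a few competing time-series models for the price series, weight each model by its (approximate) posterior model probability, and then combine the forecasts. Here's a stylized sketch in Python - the simulated "price" series and the AR(p) candidate set are purely illustrative assumptions on my part, not the actual data or models:

```python
# A stylized sketch of Bayesian model averaging via BIC-based weights.
# The "price" series is simulated and the AR(p) candidate set is an
# illustrative assumption - neither is the actual data or model set.
import numpy as np
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(42)
prices = pd.Series(60 + np.cumsum(rng.normal(0.0, 1.5, 120)))  # stand-in data

fits = [AutoReg(prices, lags=p).fit() for p in (1, 2, 3)]

# Approximate posterior model probabilities from BIC differences.
bic = np.array([f.bic for f in fits])
weights = np.exp(-0.5 * (bic - bic.min()))
weights /= weights.sum()

# One-step-ahead forecast from each model, then the weighted combination.
n = len(prices)
forecasts = np.array([np.asarray(f.predict(start=n, end=n))[0] for f in fits])
print("weights:", weights.round(3))
print("BMA forecast:", round(float(weights @ forecasts), 2))
```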

Sunday, September 2, 2018

September Reading List

This month's list of recommended reading includes an old piece by Milton Friedman that you may find interesting:
  • Broman, K. W. & K. H. Woo, 2017. Data organization in spreadsheets. American Statistician, 72, 2-10.
  • Friedman, M., 1937. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. Journal of the American Statistical Association, 32, 675-701.
  • Goetz, T. & A. Hecq, 2018. Granger causality testing in mixed-frequency VARs with (possibly) cointegrated processes. MPRA Paper No. 87746.
  • Güriş, B., 2018. A new nonlinear unit root test with Fourier function. Communications in Statistics - Simulation and Computation, in press.
  • Honoré, B. E. & L. Hu, 2017. Poor (Wo)man's bootstrap. Econometrica, 85, 1277-1301. (Discussion paper version.)
  • Peng, R. D., 2018. Advanced Statistical Computing. Electronic resource.
© 2018, David E. Giles

Sunday, August 5, 2018

An Archival History of the Econometric Society

For those of you who have an interest in the history of Econometrics as a discipline - that's all of you, right (?) - there's a fascinating collection of material available at The Econometric Society: An Archival History.

As the name suggests, this repository relates to the Econometric Society and the journal Econometrica. It contains all sorts of fascinating facts, correspondence, and the like.

© 2018, David E. Giles

Wednesday, August 1, 2018

Recommended Reading

Here's my reading list for August:

  • Ardia, D., K. Bluteau, & L. F. Hoogerheide, 2018. Methods for computing numerical standard errors: Review and application to value-at-risk estimation. Journal of Time Series Econometrics. Available online.
  • Bauer, D. & A. Maynard, 2012. Persistence-robust surplus-lag Granger causality testing. Journal of Econometrics, 169, 293-300.
  • David, H. A., 2009. A historical note on zero correlation and independence. American Statistician, 63, 185-186.
  • Fisher, T. J. & M. W. Robbins, 2018. A cheap trick to improve the power of a conservative hypothesis test. American Statistician. Available online.
  • Granger, C. W. J., 2012. Useful conclusions from surprising results. Journal of Econometrics, 169, 142-146.
  • Harville, D. A., 2014. The need for more emphasis on prediction: A 'nondenominational' model-based approach (with discussion). American Statistician, 68, 71-92.
© 2018, David E. Giles

Sunday, July 15, 2018

Handbook of Quantile Regression

Quantile regression is a powerful and flexible technique that is widely used by econometricians and other applied statisticians. In modern terms we tend to date it back to the classic paper by Koenker and Bassett (1978).

Recently, I reviewed the Handbook of Quantile Regression. This edited volume comprises a number of important, original, contributions to the quantile regression literature. The various chapters cover a wide range of topics that extend the basic quantile regression set-up.

You can read my review of this book (Giles, 2018), here. I hope that it motivates you to explore this topic further.

References
Giles, D. E., 2018. Review of Handbook of Quantile Regression. Statistical Papers, 59, 849-850. 

Koenker, R., 2005. Quantile Regression. Cambridge University Press, Cambridge.

Koenker, R. and G. W. Bassett, 1978. Regression quantiles. Econometrica, 46, 33-50.

Koenker, R., V. Chernozhukov, X. He, & L. Peng (eds.), 2017. Handbook of Quantile Regression. Chapman & Hall/CRC, Boca Raton, FL.

© 2018, David E. Giles

Saturday, July 14, 2018

What's in a Journal Name?

Back in 2011 I put together a very light-hearted working paper titled, What's in a (Journal) Name? Here's the associated link.

That paper addressed the (obviously) important question: "Is there a correlation between the ranking of an economics journal and the length of the journal's title?"

I analyzed a sample of 159 academic economics journals. Although there was no significant association between journal quality and journal title length for the full sample of data, I did find that there was a significant “bathtub” relationship between these variables when the data were subjected to a rank correlation analysis over sub-samples. 

This led me to conclude (p. 5), among other things:
'This “bathtub” relationship will undoubtedly sound alarm bells in the corridors of publishing houses as they assess proposals for new economics journals. The title, Economics, is no longer available, having been cunningly snapped up in recent years by an open-access, open-assessment e-journal which managed to “cover all of the bases” in one fell swoop. Even more recently the American Economic Association laid claim to the titles Macroeconomics and Microeconomics, albeit with an “AEA” prefix that they may wish to re-consider. The publishers of the journal, SERIEs: Journal of the Spanish Economic Association, which was launched in 2010, will no doubt ponder the merits of dropping the last six words of its title. However, there is hope. The title Econometrica has been spoken for since 1933, but to the best of our knowledge the more worldly journal title Econometrics is still available. Publishers should register their interest forthwith!'
As usual, the latter remark proved to be safe advice on my part! I wonder if my subsequent invitation to join the Editorial Board of Econometrics was some sort of reward?

I'll probably never know!

© 2018, David E. Giles

Friday, July 13, 2018

More on Regression Coefficient Interpretation

I get a lot of direct email requests from people wanting help/guidance/advice of various sorts about some aspect of econometrics or other. I like being able to help when I can, but these requests can lead to some pitfalls - for both of us.

More on that in a moment. Meantime, today I got a question from a Ph.D student, "J", which was essentially the following:

" Suppose I have the following regression model

             log(yi) = α + βXi + εi    ;  i = 1, 2, ...., n.

How do I interpret the (estimated) value of β?"

I think most of you will know that the answer is:

"If X changes by one unit, then y changes by (100*β)%".

If you didn't know this, then some trivial partial differentiation will confirm it. And after all, isn't partial differentiation something that grad. students in ECON should be good at?

Specifically,

      β = [∂log(yi) / ∂Xi] = [∂log(yi) / ∂yi][∂yi / ∂Xi] = [∂yi / ∂Xi] / yi,

which is the proportional change in y for a unit change in X. Multiplying by 100 puts the answer into percentage terms.

So, I responded to "J" accordingly.

So far, so good.

But then I got a response:

"Actually, my model includes an interaction term, and really it looks like this:

    log(yi) = α + βXi + γ [XiΔlog(Zi)] + εi    ;  i = 1, 2, ...., n.

How do I interpret β?"

Whoa! That's not the question that was first asked - and now my previous answer (given in good faith) is totally wrong! 

Let's do some partial differentiation again, with this full model. We still have:

[∂log(yi) / ∂Xi] = [∂log(yi) / ∂yi][∂yi / ∂Xi] = [∂yi / ∂Xi] / yi.

However, this expression now equals [β + γ Δlog(Zi)].

So, a one-unit change in X leads to a percentage change in y that's equal to 100*[β + γ Δlog(Zi)]%.

This percentage change is no longer constant - it varies as Z takes on different sample values. If you wanted to report a single value you could evaluate the expression using the estimates for β and γ, and either the sample average, or sample median, value for Δlog(Z).
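A quick numerical check of this, with made-up values for the estimates and for the sample values of Δlog(Z):

```python
# Numerical check: a one-unit change in X changes y by
# 100*(beta + gamma*dlogZ) percent, which varies with Z.
# All of the numbers below are made up for illustration.
import numpy as np

beta, gamma = 0.05, 0.02                     # hypothetical OLS estimates
dlogZ = np.array([-0.10, 0.00, 0.15, 0.30])  # hypothetical sample values

print(100 * (beta + gamma * dlogZ))             # the effect at each data point
print(100 * (beta + gamma * np.median(dlogZ)))  # a single summary value
```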

This illustrates one of the difficulties that I face sometimes. I try to respond to a question, but I really don't know if the question being asked is the appropriate one; or if it's been taken out of context; or if the information I'm given is complete or not.

If you're a grad. student, then discussing your question in person with your supervisor should be your first step!

© 2018, David E. Giles

Friday, July 6, 2018

Interpreting Dummy Variable Coefficients After Non-Linear Transformations

Dummy variables - ones that take only the values zero and one - are commonly used as regressors in regression models. I've devoted several posts to discussing various aspects of such variables, notably here, but also here, here, and here.

When the regression model in question is linear, in both the variables and the parameters, the interpretation of the coefficient of such a dummy variable is simple. Suppose that the model takes the form:

    yi = α + β Di + Σj γj Xji + εi    ;     E(εi) = 0   ;   i = 1, ...., n.                          (1)

The range of summation in the term on the right-hand side of (1) is from 1 to k, if there are k regressors in addition to the dummy variable, D. (There is no loss of generality in assuming a single dummy regressor in what follows, and no further distributional assumptions about the error term will be needed or used.)

As you'll know, if Di = 0, then the intercept coefficient in (1) is just α; and it shifts to (α + β) if Di = 1. It changes by an amount equal to β, and so does the predicted mean value of y. Conversely, this amount changes by -β if Di changes from 1 to 0. Estimating (1) by OLS will give us an estimate of the effect on y of Di switching from 0 to 1 in value, or vice versa.

But a bit more on estimation issues below!


Another way of interpreting what is going on is to think about the growth rate in the expected value of y that is implied when D changes its value. Setting Di = 0, and then Di = 1, this growth rate is:

   g01i = [(α + β + Σj γj Xji) - (α + Σj γj Xji)] / (α + Σj γj Xji) = [β / (α + Σj γj Xji)],

which you can multiply by 100 to convert it into a percentage rate of growth, if you wish. 

Note that this growth rate depends on the other parameters in the model, and also on the sample values for the other regressors.

Conversely, when D changes in value from 1 to 0, this growth rate is different, namely:

   g10i = - [β / (α + β + Σj γj Xji)]                            (i = 1, ...., n).

In this fully linear model these growth rates offer a somewhat less appealing way of summarizing what is going on than does the amount of change in the expected value of y. The latter doesn't depend on the other parameters of the model, or on the sample values of the regressors.
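A quick numerical illustration of the asymmetry between these two rates, using invented parameter and regressor values:

```python
# The two growth rates are not mirror images of each other.
# The parameter and regressor values are invented for illustration.
alpha, beta = 2.0, 0.5
gX = 1.5  # a stand-in for the term sum_j(gamma_j * X_ji)

g01 = beta / (alpha + gX)           # D goes from 0 to 1: about +14.3%
g10 = -beta / (alpha + beta + gX)   # D goes from 1 to 0: -12.5%
print(100 * g01, 100 * g10)
```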

However, this situation can change very quickly once we move to a regression model that is non-linear, either in the variables or in the parameters (or both). 

That's what I want to focus on in this post. 

Let's consider some interesting examples that involve common transformations of the dependent variable in a regression model. Apart from anything else, such transformations are often undertaken to make the assumption of a normally distributed error term more reasonable.

Monday, July 2, 2018

Some Reading Suggestions for July

Some summertime reading:
  • Chen, T., J. DeJuan, & R. Tian, 2018. Distributions of GDP across versions of the Penn World Tables: A functional data analysis approach. Economics Letters, in press.
  • Clements, K. W., H. Liu, & Y. Tarverdi, 2018. Alcohol consumption, censorship and misjudgment. Applied Economics, online.
  • Jin, H., S. Zhang, J. Zhang, & H. Hao, 2018. Modified tests for change points in variance in the possible presence of mean breaks. Journal of Statistical Computation and Simulation, online.
  • Pata, U. K., 2018. The Feldstein Horioka puzzle in E7 countries: Evidence from panel cointegration and asymmetric causality analysis. Journal of International Trade and Economic Development, online.
  • Sen, A., 2018. A simple unit root testing methodology that does not require knowledge regarding the presence of a break. Communications in Statistics - Simulation and Computation, 47, 871-889.
  • Wright, T., M. Klein, & K. Wieczorek, 2018. A primer on visualizations for comparing populations, including the issue of overlapping confidence intervals. American Statistician, online.

© 2018, David E. Giles

Sunday, July 1, 2018

Dummy Variables in a Semilogarithmic Regression: Exact Distributional Results

For better or worse, semilogarithmic regression models are used a lot in empirical economics. 

It would be nice to think that this is because the researcher found that a logarithmic transformation of the model's dependent variable led to residuals that were more "normally" distributed than without the transformation. Unfortunately, however, it's often just "for convenience". With this transformation, the estimates of the regression coefficients have a simple interpretation, as explained below.

I hate it when the latter situation arises. I've long since lost track of the number of times I've been at a seminar where the speaker has used this "simple interpretation" as an excuse for their choice of a semilogarithmic regression specification. For goodness sake, the choice of the model's functional form should be based on more than "convenience"!

For some of my previous comments about this point, see this post.

Most of you will know that when our semilogarithmic model includes a dummy (zero-one) regressor, we have to be careful about how we interpret that regressor's estimated coefficient. Suppose that we have the following regression model, where D is a dummy variable, and the X's are regressors that are measured "continuously":

   ln(yi) = α + β Di + Σj γj Xji + εi    ;     E(εi) = 0   ;   i = 1, ...., n.                         

Note that there's no loss of generality here in having just one dummy variable in the model.

Then, the interpretation of the regression coefficients is:
  1. A one-unit change in Xj leads to a proportional change of γj (or a percentage change of 100γj) in y.
  2. When the dummy variable changes from D = 0 to D = 1, the proportional change in y is [exp(β) -1]. Conversely, going from D = 1 to D = 0 implies a proportional change in y of  [exp(-β) -1]. Again, multiply by 100 to get a percentage change.
See Halvorsen and Palmquist (1980) for an explanation of the second of these results, and my comments in this earlier post.

Kennedy (1981) and Giles (1982) discuss the issue of estimating this proportional change in the case of the dummy variable. Their results relate to point estimation - with a focus on unbiased estimation of the proportional change, when the model's errors are normally distributed.
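To make the distinction concrete, here's a small sketch of these point estimates, using made-up values for the estimated dummy coefficient and its estimated variance. (Kennedy's approximately-unbiased estimator subtracts half of the coefficient's estimated variance inside the exponential.)

```python
# Point estimates of the proportional effect of the dummy variable, in
# percentage terms, using hypothetical values for the OLS estimate b and
# its estimated variance.
import numpy as np

b, var_b = 0.25, 0.01

naive = 100 * b                                  # the misleading "100*b" reading
exact = 100 * (np.exp(b) - 1.0)                  # Halvorsen & Palmquist (1980)
kennedy = 100 * (np.exp(b - 0.5 * var_b) - 1.0)  # Kennedy (1981)

print(f"naive: {naive:.2f}%  exact: {exact:.2f}%  Kennedy: {kennedy:.2f}%")
```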

But what about interval estimation of this effect? 

Tuesday, June 19, 2018

Shout-Out for Marc Bellemare

If you don't follow Marc Bellemare's blog (shame on you - you should!), then you may not have caught up with his recent posts relating to his series of lectures on "Advanced Econometrics - Causal Inference With Observational Data" at the University of Copenhagen in May of this year.

Marc kept us all on tenterhooks by "releasing" the slides for these lectures progressively - smart move!

Update, 23 July 2018 -

However, all of the eight lectures in the series are now available for downloading:
Thanks for sharing them, Marc.


© 2018, David E. Giles

Wednesday, June 6, 2018

The Series of Unsurprising Results in Economics (SURE)

Andrea Menclova of the University of Canterbury (New Zealand) has recently founded the SURE Journal, whose aims and scope are as follows:

'The Series of Unsurprising Results in Economics (SURE) is an e-journal of high-quality research with “unsurprising” findings. We publish scientifically important and carefully-executed studies with statistically insignificant or otherwise unsurprising results. Studies from all fields of Economics will be considered. SURE is an open-access journal and there are no submission charges. (My emphasis, DG.)

SURE benefits readers by:
  • Mitigating the publication bias and thus complementing other journals in an effort to provide a complete account of the state of affairs;
  • Serving as a repository of potential (and tentative) “dead ends” in Economics research.

SURE benefits writers by:
  • Providing an outlet for interesting, high-quality, but “risky” (in terms of uncertain results) research projects;
  • Decreasing incentives to data-mine, change theories and hypotheses ex post or exclusively focus on provocative topics.'

To find out more or to submit a manuscript, visit: http://surejournal.org.

This is a novel venture that has a lot to offer at a time when research replicability and publication bias are (rightly) receiving so much attention.

I'm delighted to be associated with the new journal as a member of its Editorial Board.

© 2018, David E. Giles

Friday, June 1, 2018

Suggested Reading for June

© 2018, David E. Giles

Thursday, May 31, 2018

The Uniqueness of the Cointegrating Vector

Suppose that we have (only) two non-stationary time-series, X1t and X2t (t = 1, 2, 3, .....). More specifically, suppose that both of these series are integrated of order one (i.e., I(1)). Then there are two possibilities - either X1 and X2 are cointegrated, or they aren't.

You'll recall that if they are cointegrated, then there is a linear combination of X1 and X2 that is stationary. Let's write this linear combination as Zt = (X1t + αX2t). (We can normalize the first "weight" to the value "one" without any loss of generality.) The vector whose elements are 1 and α is the so-called "cointegrating vector".

You may be aware that if such a vector exists, then it is unique.

Recently, I was asked for a simple proof of this uniqueness. Here goes.........
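A sketch of the standard argument, in LaTeX (maintaining the normalization above, so that any cointegrating vector has first element equal to one):

```latex
% Uniqueness of the cointegrating vector for two I(1) series.
Suppose that $(1, \alpha)'$ and $(1, \alpha^{*})'$ are both cointegrating
vectors, so that $Z_t = X_{1t} + \alpha X_{2t}$ and
$Z_t^{*} = X_{1t} + \alpha^{*} X_{2t}$ are each stationary. Then so is the
difference,
\[
    Z_t - Z_t^{*} = (\alpha - \alpha^{*})\, X_{2t} .
\]
But $X_{2t}$ is $I(1)$, so this difference can be stationary only if
$\alpha = \alpha^{*}$. That is, the cointegrating vector is unique.
```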

Thursday, April 26, 2018

Results of the Econometric Game, 2018

In a recent post I mentioned the 2018 "edition" of The Econometric Game, which was held in Amsterdam earlier this month.

In random order, the finalists, after the first two days of competition, were the teams representing:

Aarhus University
Erasmus Universiteit Rotterdam
Harvard University
Lund University
McGill University
Universiteit van Tilburg
Universiteit van Amsterdam
University Carlos III Madrid
University of Bristol
University of Toronto

These teams then competed in a further one-day event.

The team from University Carlos III Madrid emerged the winner; with those from Harvard University and Aarhus University taking second and third places respectively.

The organizers of The Game have provided a gallery of photos, here.

Congratulations to all involved for another impressive event!

© 2018, David E. Giles

Wednesday, April 25, 2018

April Reading

Very belatedly, here is my list of suggested reading for April:
  • Biørn, E., 2017. Identification, instruments, omitted variables, and rudimentary models: Fallacies in the "experimental approach" to econometrics. Memorandum No. 13/2017, Department of Economics, Oslo University.
  • Chambers, M. J., and M. Kyriacou, 2018. Jackknife bias reduction in the presence of a near-unit root. Econometrics, 6, 11.
  • Derryberry, D., K. Aho, J. Edwards, and T. Peterson, 2018. Model selection and regression t-statistics. American Statistician, in press.
  • Mitchell, J., D. Robertson, and S. Wright, 2018. R2 bounds for predictive models: What univariate properties tell us about multivariate predictability. Journal of Business and Economic Statistics, in press. (Free download here.)
  • Parker, T., 2017. Finite-sample distributions of the Wald, likelihood ratio, and Lagrange multiplier test statistics in the classical linear model. Communications in Statistics - Theory and Methods, 46, 5195-5202.
  • Troster, V., 2018. Testing Granger-causality in quantiles. Econometric Reviews, 37, 850-866.

© 2018, David E. Giles

Monday, March 19, 2018

The (Undergraduate) (Econo) Metrics Game

In a comment on my recent post about the long-running Econometrics Game for graduate student teams, "BJH" kindly pointed out the existence of a counterpart for undergraduate econometrics students.

The "Metrics Game" is a two-day competition organised by OEconomica in association with the University of Chicago’s Department of Economics and the Becker Friedman Institute. 

The 2018 competition is the fourth in the series, and gets underway on 7 April at the University of Chicago.

It's great to see competitions of this type being made available for students at all levels of study.

© 2018, David E. Giles

Sunday, March 18, 2018

The Econometric Game, 2018

Readers of this blog will be familiar with The Econometric Game. You'll find my posts about the 2016 and 2017 Games here and here. The first of those posts links to ones about the Games from previous years.

The Econometric Game is a competition between teams of graduate students in econometrics. It's organised by the study association for Actuarial Science, Econometrics & Operational Research (VSAE) of the University of Amsterdam, and it has been a terrific success.

The Econometric Game has been held annually since 1999. This year, 30 teams have been chosen to compete in the Games, which will be held in Amsterdam from 11 to 13 April. The theme for this year's competition is "Econometrics of Happiness".

The winners in both 2016 and 2017 were teams representing Harvard University. Let's see how they perform this year. I'll have some follow-up posts once the Game gets underway next month.

© 2018, David E. Giles

Wednesday, February 21, 2018

March Reading List

  • Annen, K. & S. Kosempel, 2018. Why aid-to-GDP ratios? Discussion Paper 2018-01, Department of Economics and Finance, University of Guelph.
  • Conover, W. J., A. J. Guerrero-Serrano, & V. G. Tercero-Gomez, 2018. An update on 'a comparative study of tests for homogeneity of variance'. Journal of Statistical Computation and Simulation, online.
  • Foroni, C., M. Marcellino, & D. Stevanović, 2018. Mixed frequency models with MA components. Discussion Paper No. 02/2018, Deutsche Bundesbank.
  • Sen, A., 2018. Lagrange multiplier unit root test in the presence of a break in the innovation variance. Communications in Statistics - Theory and Methods, 47, 1580-1596.
  • Stewart, K. G., 2018. Suits' watermelon model: The missing simultaneous equations empirical example. Mimeo., Department of Economics, University of Victoria.
  • Weigt, T. & B. Wilfling, 2018. An approach to increasing forecast-combination accuracy through VAR error modeling. Paper 68/2018, Department of Economics, University of Münster.
© 2018, David E. Giles

Sunday, February 11, 2018

Recommended Reading for February

Here are some reading suggestions:
  • Bruns, S. B., Z. Csereklyei, & D. I. Stern, 2018. A multicointegration model of global climate change. Discussion Paper No. 336, Center for European, Governance and Economic Development Research, University of Goettingen.
  • Catania, L. & S. Grassi, 2017. Modelling crypto-currencies financial time-series. CEIS Tor Vergata, Research Paper Series, Vol. 15, Issue 8, No. 417.
  • Farbmacher, H., R. Guber, & J. Vikström, 2018. Increasing the credibility of the twin birth instrument. Journal of Applied Econometrics, online.
  • Liao, J. G. & A. Berg, 2018. Sharpening Jensen's inequality. American Statistician, online.
  • Reschenhofer, E., 2018. Heteroscedasticity-robust estimation of autocorrelation. Communications in Statistics - Simulation and Computation, online.

© 2018, David E. Giles

Saturday, February 10, 2018

Economic Goodness-of-Fit

What do we mean by a "significant result" in econometrics?

The distinction between "statistical significance" and "economic significance" has received a good deal of attention in the literature. And rightly so.

Think about the estimated coefficients in a regression model, for example. Putting aside the important issue of the choice of a significance level when considering statistical significance, we all know that results that are significant in the latter sense may or may not be 'significant' when their economic impact is considered.

Marc Bellemare provided a great discussion of this in his blog a while back.

Here, I want to draw attention to a somewhat related issue - distinguishing between the statistical and economic overall goodness-of-fit of an economic model.

Thursday, February 8, 2018

ASA Symposium on Statistical Inference - Recorded Sessions

In October of last year, the American Statistical Association held a two-day Symposium on Statistical Inference in Bethesda, MD.

The symposium was sub-titled, Scientific Method for the 21st Century: A World Beyond p < 0.05. That gives you some idea of what it was about.

The ASA has now released video recordings of several of the sessions at the symposium, and you can find them here.

The video sessions include:

"Why Is Eliminating P-Values So Hard? Reflections on Science and Statistics." (Steve Goodman)

"What Have We (Not) Learnt from Millions of Scientific Papers with P-Values?" (John Ioannidis)

"Understanding the Needs for Statistical Evidence of Decision-Makers in Medicine." (Madhu Mazumdar, Keren Osman, & Elizabeth Garrett-Mayer) 

"Statisticians: Sex Symbols, Liars, Both, or Neither?" (Christie Aschwanden, Laura Helmuth, & Aviva Hope Rutkin) 

"The Radical Prescription for Change." (Andrew Gelman, Marcia McNutt, & Xiao-Li Meng)

Closing Session: “Take the Mic”

The videos are stimulating and timely. I hope that you enjoy them.

© 2018, David E. Giles

Saturday, February 3, 2018

Bayesian Econometrics Slides

Over the years, I included material on Bayesian Econometrics in various courses that I taught - especially at the grad. level. I retired from teaching last year, and I thought that some of you might be interested in the slides that I used when I taught a Bayesian Econometrics topic for the last time.

I hope that you find them useful - just click on the numbers below.

1. General Background
2. Constructing Prior Distributions
3. Properties of Bayes Estimators and Tests
4. Bayesian Inference for the Linear Regression Model
5. Bayesian Computation
6. More Bayesian Computation 
7. Acceptance-Rejection Sampling
8. The Metropolis-Hastings Algorithm
9. Model Selection - Theory
10. Model Selection - Applications
11. Consumption Function Case Study
© 2018, David E. Giles

Tuesday, January 2, 2018

Econometrics Reading for the New Year

Another year, and lots of exciting reading!
  • Davidson, R. & V. Zinde-Walsh, 2017. Advances in specification testing. Canadian Journal of Economics, online.
  • Dias, G. F. & G. Kapetanios, 2018. Estimation and forecasting in vector autoregressive moving average models for rich datasets. Journal of Econometrics, 202, 75-91.  
  • González-Estrada, E. & J. A. Villaseñor, 2017. An R package for testing goodness of fit: goft. Journal of Statistical Computation and Simulation, 88, 726-751.
  • Hajria, R. B., S. Khardani, & H. Raïssi, 2017. Testing the lag length of vector autoregressive models: A power comparison between portmanteau and Lagrange multiplier tests. Working Paper 2017-03, Escuela de Negocios y Economía, Pontificia Universidad Católica de Valparaíso.
  • McNown, R., C. Y. Sam, & S. K. Goh, 2018. Bootstrapping the autoregressive distributed lag test for cointegration. Applied Economics, 50, 1509-1521.
  • Pesaran, M. H. & R. P. Smith, 2017. Posterior means and precisions of the coefficients in linear models with highly collinear regressors. Working Paper BCAM 1707, Birkbeck, University of London.
  • Yavuz, F. V. & M. D. Ward, 2017. Fostering undergraduate data science. American Statistician, online. 

© 2018, David E. Giles

Monday, January 1, 2018

Interpolating Statistical Tables

We've all experienced it. You go to use a statistical table - Standard Normal, Student-t, F, Chi-Square - and the line that you need simply isn't there. That's to say, the table isn't detailed enough for our purposes.

One question that always comes up when students are first being introduced to such tables is:
"Do I just interpolate linearly between the nearest entries on either side of the desired value?"
Not that these exact words are used, typically. For instance, a student might ask if they should take the average of the two closest values. How should you respond?
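One helpful response is to show them how much accuracy is actually at stake. Here's a small sketch in Python - the Student-t example, and the use of scipy's quantile function as the "exact" benchmark, are my own choices:

```python
# How bad is linear interpolation in the t-table? Compare it with an exact
# value. Example: the 5% upper-tail critical value for 45 degrees of
# freedom, when the table lists only 40 and 50.
from scipy import stats

lo, hi, target = 40, 50, 45
t_lo = stats.t.ppf(0.95, lo)  # the table entry for 40 d.o.f.
t_hi = stats.t.ppf(0.95, hi)  # the table entry for 50 d.o.f.

linear = t_lo + (t_hi - t_lo) * (target - lo) / (hi - lo)
exact = stats.t.ppf(0.95, target)
print(f"linear: {linear:.5f}  exact: {exact:.5f}  error: {linear - exact:.1e}")
```

In this part of the table the error is negligible. It's in the extreme tails, and at very small degrees of freedom, that simple linear interpolation becomes more questionable.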