Sunday, May 19, 2019

Update on the "Series of Unsurprising Results in Economics"

In June of last year I had a post about a new journal, Series of Unsurprising Results in Economics (SURE).

If you didn't get to read that post, I urge you to do so. 

More importantly, you should definitely take a look at this piece by Kelsey Piper, from a couple of days ago, titled "This economics journal only publishes results that are no big deal - Here’s how that might save science".

Kelsey really understands the rationale for SURE, and the important role that it can play in reducing publication bias and assisting with the replication of results.

You can get a feel for what SURE has to offer by checking out this paper by Nick Huntington-Klein and Andrew Gill, which the journal is publishing.

We'll all be looking forward to more excellent papers like this!

© 2019, David E. Giles

Wednesday, May 1, 2019

May Reading List

Here's a selection of suggested reading for this month:
  • Athey, S. & G. W. Imbens, 2019. Machine learning methods economists should know about. Mimeo.
  • Bhagwat, P. & E. Marchand, 2019. On a proper Bayes but inadmissible estimator. American Statistician, online.
  • Canals, C. & A. Canals, 2019. When is n large enough? Looking for the right sample size to estimate proportions. Journal of Statistical Computation and Simulation, 89, 1887-1898.
  • Cavaliere, G. & A. Rahbek, 2019. A primer on bootstrap testing of hypotheses in time series models: With an application to double autoregressive models. Discussion Paper 19-03, Department of Economics, University of Copenhagen.
  • Chudik, A. & G. Georgiadis, 2019. Estimation of impulse response functions when shocks are observed at a higher frequency than outcome variables. Globalization Institute Working Paper 356, Federal Reserve Bank of Dallas.
  • Reschenhofer, E., 2019. Heteroscedasticity-robust estimation of autocorrelation. Communications in Statistics - Simulation and Computation, 48, 1251-1263.
© 2019, David E. Giles

Monday, April 29, 2019

Recursions for the Moments of Some Continuous Distributions

This post follows on from my recent one, Recursions for the Moments of Some Discrete Distributions. I'm going to assume that you've read the previous post, so this one will be shorter. 

What I'll be discussing here are some useful recursion formulae for computing the moments of a number of continuous distributions that are widely used in econometrics. The coverage won't be exhaustive, by any means. I provide some motivation for looking at formulae such as these in the previous post, so I won't repeat it here. 

When we deal with the Normal distribution, below, we'll make explicit use of Stein's Lemma. Several of the other results are derived (behind the scenes) by using a very similar approach. So, let's begin by stating this Lemma.

Stein's Lemma (Stein, 1973):


"If  X ~ N[θ , σ2], and if g(.) is a differentiable function such that E|g'(X)| is finite, then 

                            E[g(X)(X - θ)] = σ2 E[g'(X)]."

It's worth noting that although this lemma relates to a single Normal random variable, in the bivariate Normal case the lemma generalizes to:


"If  X and Y follow a bivariate Normal distribution, and if g(.) is a differentiable function such that E|g'(Y)| is finite, then 

                            Cov.[g(Y )X] = Cov.(X , Y) E[g'(Y)]."

In this latter form, the lemma is useful in asset pricing models.

There are extensions of Stein's Lemma to a broader class of univariate and multivariate distributions. For example, see Alghalith (undated) and Landsman et al. (2013), and the references in those papers. Generally, if a distribution belongs to an exponential family, then recursions for its moments can be obtained quite easily.
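Incidentally, the Lemma is easy to check numerically. Here's a minimal Monte Carlo sketch; the choice of g(x) = x³ and the parameter values are mine, purely for illustration:

```python
import numpy as np

# Monte Carlo check of Stein's Lemma with g(x) = x**3, so g'(x) = 3x**2.
# theta = 2 and sigma = 1.5 are arbitrary illustrative values.
rng = np.random.default_rng(42)
theta, sigma = 2.0, 1.5
x = rng.normal(theta, sigma, size=1_000_000)

lhs = np.mean(x**3 * (x - theta))      # E[g(X)(X - theta)]
rhs = sigma**2 * np.mean(3 * x**2)     # sigma^2 * E[g'(X)]

print(f"E[g(X)(X - theta)] = {lhs:.4f}")
print(f"sigma^2 E[g'(X)]   = {rhs:.4f}")   # analytically, both equal 42.1875 here
```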

Now, let's get down to business............


Sunday, April 21, 2019

Recursions for the Moments of Some Discrete Distributions

You could say, "Moments maketh the distribution". While that's not quite true, it's pretty darn close.

The moments of a probability distribution provide key information about the underlying random variable's behaviour, and we use these moments for a multitude of purposes. Before proceeding, let's be sure that we're on the same page here.
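To give a quick taste of the sort of recursion I have in mind, take the Poisson case. If X ~ Poisson(λ), the well-known Stein-Chen identity, E[X g(X)] = λ E[g(X + 1)], immediately yields the moment recursion E[X^(k+1)] = λ E[(X + 1)^k]. Here's a small simulation sketch (the value of λ is an arbitrary illustrative choice of mine):

```python
import numpy as np

# Check the Poisson moment recursion E[X^(k+1)] = lambda * E[(X + 1)^k],
# which follows from the Stein-Chen identity E[X g(X)] = lambda * E[g(X + 1)]
# by setting g(x) = x**k.
rng = np.random.default_rng(123)
lam = 3.0
x = rng.poisson(lam, size=1_000_000)

for k in range(1, 4):
    lhs = np.mean(x.astype(float) ** (k + 1))   # E[X^(k+1)], simulated directly
    rhs = lam * np.mean((x + 1.0) ** k)         # lambda * E[(X + 1)^k]
    print(f"k = {k}: {lhs:.3f} vs {rhs:.3f}")   # the two columns should agree
```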

Friday, April 12, 2019

2019 Econometric Game Results

The Econometric Game is over for another year.

The winning team for 2019 was from the University of Melbourne.

The second and third placed teams were from Maastricht University and Aarhus University, respectively.

Congratulations to the winning teams, and to all who competed this year!

© 2019, David E. Giles

Wednesday, April 10, 2019

EViews 11 Now Available

As you'll know already, I'm a big fan of the EViews econometrics package. I always found it to be a terrific, user-friendly resource when teaching economic statistics and econometrics, and I use it extensively in my own research.

Along with a lot of other EViews users, I recently had the opportunity to "test drive" the beta release of the latest version of this package, EViews 11. 

EViews 11 has now been officially released, and it has some great new features. To see what's now available, check it out here. (Click on the links there to see some really helpful videos.)

Nice update. Thanks!

© 2019, David E. Giles

Tuesday, April 9, 2019

SHAZAM!

This past weekend the new movie, Shazam, topped the box-office revenue list with over US$53 million - and this was its first weekend since being released.

Not bad!

Of course, in the Econometrics World, we associate the word, SHAZAM, with Ken White's famous computing package, which has been with us since 1977. 

Ken and I go way back. A few years ago I had a post about the background to the SHAZAM package. In that post I explained what the acronym "SHAZAM" stands for. If you check it out you'll see why it's timely for you to know these important historical facts!

And while you're there, take a look at the links to other tales that illustrate Ken's well-known wry sense of humour.

© 2019, David E. Giles

Monday, April 8, 2019

A Permutation Test Regression Example

In a post last week I talked a bit about Permutation (Randomization) tests, and how they differ from the (classical parametric) testing procedure that we generally use in econometrics. I'm going to assume that you've read that post.

(There may be a snap quiz at some point!)

I promised that I'd provide a regression-based example. After all, the two examples that I went through in that previous post were designed to expose the fundamentals of permutation/randomization testing. They really didn't have much "econometric content".

In what follows I'll use the terms "permutation test" and "randomization test" interchangeably.

What we'll do here is to take a look at a simple regression model and see how we could use a randomization test to see if there is a linear relationship between a regressor variable, x, and the dependent variable, y. Notice that I said a "simple regression" model. That means that there's just the one regressor (apart from an intercept). Multiple regression models raise all sorts of issues for permutation tests, and we'll get to that in due course.

There are several things that we're going to see here:
  1. How to construct a randomization test of the hypothesis that the regression slope coefficient is zero. (A small code sketch follows this list.)
  2. A demonstration that the permutation test is "exact". That is, its significance level is exactly what we assign it to be.
  3. A comparison between a permutation test and the usual t-test for this problem.
  4. A demonstration that the permutation test remains "exact", even when the regression model is mis-specified by fitting it through the origin.
  5. A comparison of the powers of the randomization test and the t-test under this model mis-specification.
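Here's the promised sketch for item 1. Under the null of no linear relationship, the y values are exchangeable with respect to the x values, so shuffling y and recomputing the slope each time generates draws from the test's null distribution. (The data-generating process, sample size, and number of permutations are my own illustrative choices, not from the post.)

```python
import numpy as np

# Randomization test of H0: slope = 0 in a simple regression, by shuffling y.
rng = np.random.default_rng(2019)
n, n_perm = 30, 9_999
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.normal(0, 2, n)    # true slope is 0.5, so H0 is false

def slope(xv, yv):
    # OLS slope for a simple regression with an intercept
    return np.cov(xv, yv, bias=True)[0, 1] / np.var(xv)

b_obs = slope(x, y)
perm_slopes = np.array([slope(x, rng.permutation(y)) for _ in range(n_perm)])

# Two-sided p-value: the proportion of arrangements (including the observed
# one) whose |slope| is at least as extreme as the observed |slope|.
p_value = (1 + np.sum(np.abs(perm_slopes) >= abs(b_obs))) / (1 + n_perm)
print(f"observed slope = {b_obs:.3f}, permutation p-value = {p_value:.4f}")
```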


Wednesday, April 3, 2019

What is a Permutation Test?

Permutation tests, which I'll be discussing in this post, aren't that widely used by econometricians. However, they shouldn't be overlooked.

Let's begin with some background discussion to set the scene. This might seem a bit redundant, but it will help us to see how permutation tests differ from the sort of tests that we usually use in econometrics.

Background Motivation

When you took your first course in economic statistics, or econometrics, no doubt you encountered some of the basic concepts associated with testing hypotheses. I'm sure that the first exposure that you had to this was actually in terms of "classical", Neyman-Pearson testing. 

It probably wasn't described to you in so many words. It would have just been "statistical hypothesis testing". The whole procedure would have been presented, more or less, along the following lines:

Monday, April 1, 2019

Some April Reading for Econometricians

Here are my suggestions for this month:
  • Hyndman, R. J., 2019. A brief history of forecasting competitions. Working Paper 03/19, Department of Econometrics and Business Statistics, Monash University.
  • Kuffner, T. A. & S. G. Walker, 2019. Why are p-values controversial? American Statistician, 73, 1-3.
  • Sargan, J. D., 1958. The estimation of economic relationships using instrumental variables. Econometrica, 26, 393-415. (Read for free online.)  
  • Sokal, A. D., 1996. Transgressing the boundaries: Towards a transformative hermeneutics of quantum gravity. Social Text, 46/47, 217-252.
  • Zeng, G. & E. Zeng, 2019. On the relationship between multicollinearity and separation in logistic regression. Communications in Statistics - Simulation and Computation, published online.
  • Zhang, X., S. Paul, & Y-G. Yang, 2019. Small sample bias correction or bias reduction? Communications in Statistics - Simulation and Computation, published online.
© 2019, David E. Giles

Friday, March 29, 2019

Infographics Parades

When I saw Myko Clelland's tweet this morning, my reaction was "Wow! Just, wow!"

Myko (@DapperHistorian) kindly pointed me to the source of the photo that he tweeted about. It appears on page 343 of Willard Cope Brinton's book, Graphic Methods for Presenting Facts (McGraw-Hill, 1914).

Myko included a brief description in his tweet, but let me elaborate by quoting from pp.342-343 of Brinton's book, and you'll see why I liked the photo so much:
"Educational material shown in parades gives an effective way for reaching vast numbers of people. Fig. 238 illustrates some of the floats used in presenting statistical information in the municipal parade by the employees of the City of New York, May 17, 1913. The progress made in recent years by practically every city department was shown by comparative models, charts, or large printed statements which could be read with ease fro either side of the street. Even though the day of the parade was rainy, great crowds lined the sidewalks. There can be no doubt that many of the thousands who saw the parade came away with the feeling that much is being accomplished to improve the conditions of municipal management. A great amount of work was necessary to prepare the exhibits, but the results gave great reward."
Don't you just love it? A gigantic mobile poster session!

© 2019, David E. Giles

Thursday, March 21, 2019

A World Beyond p < 0.05


This supplementary issue of The American Statistician, Statistical Inference in the 21st Century: A World Beyond p < 0.05, is open-access in its entirety. In addition to an excellent editorial, Moving to a World Beyond "p < 0.05" (by Ronald Wasserstein, Allen Schirm, and Nicole Lazar), it comprises 43 articles on the use, and misuse, of p-values and statistical significance.

I'm sure that you get the idea of what this supplementary issue is largely about.

But look back at its title - Statistical Inference in the 21st Century: A World Beyond p < 0.05. It's not simply full of criticisms. There's a heap of excellent, positive, and constructive material in there.

Highly recommended reading!


© 2019, David E. Giles

Wednesday, March 20, 2019

The 2019 Econometric Game

The annual World Championship of Econometrics, The Econometric Game, is nearly upon us again!

Readers of this blog will be familiar with "The Game" from posts relating to this event in previous years. For example, see here for some 2018 coverage.

This year The Econometric Game will be held from 10 to 12 April. As usual, it is being organized by the study association for Actuarial Science, Econometrics & Operational Research (VSAE) of the University of Amsterdam. 

Teams of graduate students from around the globe will be competing for the top prize on the basis of their analysis of econometric case studies. The top three teams in 2018 were from Universidad Carlos III Madrid, Harvard University, and Aarhus University.

Check out this year's Game, and I'll post more on it next month.

(30 March, 2019 update - This year's theme has now been announced. It's "Climate Econometrics".)

© 2019, David E. Giles

Wednesday, March 13, 2019

Forecasting After an Inverse Hyperbolic Sine Transformation

There are all sorts of good reasons why we sometimes transform the dependent variable (y) in a regression model before we start estimating. One example would be where we want to be able to reasonably assume that the model's error term is normally distributed. (This may be helpful for subsequent finite-sample inference.)

If the model has non-random regressors, and the error term is additive, then a normal error term implies that the dependent variable is also normally distributed. But it may be quite plain to us (even from simple visual observation) that the sample of data for the y variable really can't have been drawn from a normally distributed population. In that case, a functional transformation of y may be in order.

So, suppose that we estimate a model of the form

              f(yᵢ) = β₁ + β₂xᵢ₂ + β₃xᵢ₃ + .... + βₖxᵢₖ + εᵢ ;    εᵢ ~ iid N[0, σ²]                         (1)


where f(.) is usually a 1-1 function, so that f⁻¹(.) is uniquely defined. Examples include f(y) = log(y), where (throughout this post) log(a) will mean the natural logarithm of 'a'; and f(y) = √(y), if we restrict ourselves to the positive square root.

Having estimated the model, we may then want to generate forecasts of y itself, not of f(y). This is where the inverse transformation, f⁻¹(.), comes into play.
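To see the sort of adjustment that's involved, take the inverse hyperbolic sine transformation itself, f(y) = asinh(y) = log(y + √(y² + 1)). Under model (1), if asinh(y) = m + ε with ε ~ N[0, σ²], then E[sinh(m + ε)] = exp(σ²/2)sinh(m), because sinh is just a difference of exponentials and the log-Normal mean formula applies to each piece. So the naive forecast sinh(x'b) needs an exp(s²/2)-type correction. Here's a minimal simulation sketch of that adjustment; the design and all parameter values are my own, purely for illustration:

```python
import numpy as np

# asinh(y) = 1 + 1.5 x + eps, eps ~ N(0, 0.5^2), so the conditional mean of y
# is exp(sigma^2 / 2) * sinh(1 + 1.5 x), not the naive sinh(1 + 1.5 x).
rng = np.random.default_rng(7)
n = 100_000
x = rng.uniform(0, 2, n)
eps = rng.normal(0, 0.5, n)
y = np.sinh(1.0 + 1.5 * x + eps)            # so asinh(y) = 1 + 1.5 x + eps

# OLS of asinh(y) on an intercept and x
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, np.arcsinh(y), rcond=None)
resid = np.arcsinh(y) - X @ b
s2 = resid @ resid / (n - 2)                # unbiased error-variance estimate

x0 = 1.0                                    # point at which to forecast
m = b[0] + b[1] * x0
naive = np.sinh(m)                          # ignores the error term
adjusted = np.exp(s2 / 2) * np.sinh(m)      # bias-corrected forecast

true_mean = np.exp(0.5**2 / 2) * np.sinh(1.0 + 1.5 * x0)
print(f"naive = {naive:.3f}, adjusted = {adjusted:.3f}, true = {true_mean:.3f}")
```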

Saturday, March 9, 2019

Update for A New Canadian Macroeconomic Database

In a post last November I discussed "A New Canadian Macroeconomic Database".

The long-term, monthly, database in question was made available by Olivier Fortin-Gagnon, Maxime Leroux, Dalibor Stevanovic, and Stéphane Surprenant. Their 2018 working paper, "A Large Canadian Database for Macroeconomic Analysis", provides details and some applications of the new data.

Dalibor wrote to me yesterday to say that the database has now been updated. This is great news! Regular updates are crucial for important data repositories such as this one.

The updated database can be accessed at www.stevanovic.uqam.ca/DS_LCMD.html .

© 2019, David E. Giles

Wednesday, March 6, 2019

Forecasting From a Regression with a Square Root Dependent Variable

Back in 2013 I wrote a post that was titled, "Forecasting From Log-Linear Regressions". The basis for that post was the well-known result that if you estimate a linear regression model with the (natural) logarithm of y as the dependent variable, but you're actually interested in forecasting y itself, you don't just report the exponentials of the original forecasts. You need to add an adjustment that takes account of the connection between a Normal random variable and a log-Normal random variable, and the relationship between their means.

Today, I received a query from a blog-reader who asked how the results in that post would change if the dependent variable was the square root of y, but we wanted to forecast y itself. I'm not sure why this particular transformation was of interest, but let's take a look at the question.

In this case we can exploit the relationship between a (standard) Normal distribution and a Chi-Square distribution in order to answer the question.
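Specifically, if √y = x'β + ε, with ε ~ N[0, σ²], then (√y/σ)² is non-central chi-square with one degree of freedom and non-centrality parameter (x'β/σ)². The mean of that distribution gives us E[y | x] = (x'β)² + σ², so the naive forecast (x'b)² understates the conditional mean, and adding the estimated error variance, s², is the natural adjustment. Here's a minimal simulation sketch; the numbers are illustrative only, and the index is kept well above zero so that the positive square root recovers it:

```python
import numpy as np

# sqrt(y) = 3 + 0.8 x + e, e ~ N(0, 1), so E[y | x] = (3 + 0.8 x)^2 + 1.
# The index 3 + 0.8 x stays well above zero, so folding at zero is negligible.
rng = np.random.default_rng(11)
n = 100_000
x = rng.uniform(0, 5, n)
y = (3.0 + 0.8 * x + rng.normal(0, 1.0, n)) ** 2

# OLS of sqrt(y) on an intercept and x
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, np.sqrt(y), rcond=None)
s2 = np.sum((np.sqrt(y) - X @ b) ** 2) / (n - 2)

x0 = 2.5                                    # point at which to forecast
m = b[0] + b[1] * x0
print(f"naive    = {m**2:.3f}")             # ignores the error variance
print(f"adjusted = {m**2 + s2:.3f}")        # naive + s^2
print(f"true     = {(3.0 + 0.8 * x0)**2 + 1.0:.3f}")   # = 26.0 by construction
```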

Friday, March 1, 2019

Some Recommended Econometrics Reading for March

This month I am suggesting some overview/survey papers relating to a variety of important topics in econometrics:
  • Bruns, S. B. & D. I. Stern, 2019. Lag length selection and p-hacking in Granger causality testing: prevalence and performance of meta-regression models. Empirical Economics, 56, 797-830.
  • Casini, A. & P. Perron, 2018. Structural breaks in time series. Forthcoming in Oxford Research Encyclopedia of Economics and Finance. 
  • Hendry, D. F. & K. Juselius, 1999. Explaining cointegration analysis: Part I. Mimeo., Nuffield College, University of Oxford.
  • Hendry, D. F. & K. Juselius, 2000. Explaining cointegration analysis: Part II. Mimeo., Nuffield College, University of Oxford.
  • Horowitz, J., 2018. Bootstrap methods in econometrics. Cemmap Working Paper CWP53/18. 
  • Marmer, V., 2017. Econometrics with weak instruments: Consequences, detection, and solutions. Mimeo., Vancouver School of Economics, University of British Columbia.

© 2019, David E. Giles

Sunday, February 10, 2019

A Terrific New Book on the Linear Model

Recently, it was my distinct pleasure to review a first-class book by David Harville, titled Linear Models and the Relevant Distributions and Matrix Algebra.

(Added 28 February, 2019: You can now read the published review in Statistical Papers, here.)

Here is what I had to say:

Tuesday, February 5, 2019

Misinterpreting Tests, P-Values, Confidence Intervals & Power

There are so many things in statistics (and hence in econometrics) that are easily, and frequently, misinterpreted. Two really obvious examples are p-values and confidence intervals.

I've devoted some space in earlier posts to each of these concepts, and their mis-use. For instance, in the case of p-values, see the posts here and here; and for confidence intervals, see here and here.

Today I was reading a great paper by Greenland et al. (2016) that deals with some common misconceptions and misinterpretations that arise not only with p-values and confidence intervals, but also with statistical tests in general and the "power" of such tests. These comments by the authors in the abstract of their paper set the tone of what's to follow rather nicely:
"A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so - and yet these misinterpretations dominate much of the scientific literature." 
The paper then goes through various common interpretations of the four concepts in question, and systematically demolishes them!

The paper is extremely readable and informative. Every econometrics student, and most applied econometricians, would benefit from taking a look!


Reference

Greenland, S., S. J. Senn, K. R. Rothman, J. B. Carlin, C. Poole, S. N. Goodman, & D. G. Altman, 2016. Statistical tests, p values, confidence intervals, and power: A guide to misinterpretations. European Journal of Epidemiology, 31, 337-350.  

© 2019, David E. Giles

Sunday, February 3, 2019

February Reading

Now that Groundhog Day is behind us, perhaps we can focus on catching up on our reading?
  • Desboulets, L. D. D., 2018. A review on variable selection in regression. Econometrics, 6(4), 45.
  • Efron, B. & C. Morris, 1977. Stein's paradox in statistics. Scientific American, 236(5), 119-127.
  • Khan, W. M. & A. u I. Khan, 2018. Most stringent test of independence for time series. Communications in Statistics - Simulation and Computation, online.
  • Pedroni, P., 2018. Panel cointegration techniques and open challenges. Forthcoming in Panel Data Econometrics, Vol. 1: Theory, Elsevier.
  • Steel, M. F. J., 2018. Model averaging and its use in economics. MPRA Paper No. 90110.
  • Tay, A. S. & K. F. Wallis, 2000. Density forecasting: A survey. Journal of Forecasting, 19, 235-254.
© 2019, David E. Giles

Sunday, January 13, 2019

Machine Learning & Econometrics

What is Machine Learning (ML), and how does it differ from Statistics (and hence, implicitly, from Econometrics)?

Those are big questions, but I think that they're ones that econometricians should be thinking about. And if I were starting out in Econometrics today, I'd take a long, hard look at what's going on in ML.

Here's a very rough answer - it comes from a post by Larry Wasserman on his (now defunct) blog, Normal Deviate:
"The short answer is: None. They are both concerned with the same question: how do we learn from data?
But a more nuanced view reveals that there are differences due to historical and sociological reasons.......... 
If I had to summarize the main difference between the two fields I would say: 
Statistics emphasizes formal statistical inference (confidence intervals, hypothesis tests, optimal estimators) in low dimensional problems. 
Machine Learning emphasizes high dimensional prediction problems. 
But this is a gross over-simplification. Perhaps it is better to list some topics that receive more attention from one field rather than the other. For example: 
Statistics: survival analysis, spatial analysis, multiple testing, minimax theory, deconvolution, semiparametric inference, bootstrapping, time series.
Machine Learning: online learning, semisupervised learning, manifold learning, active learning, boosting. 
But the differences become blurrier all the time........ 
There are also differences in terminology. Here are some examples:
Statistics            Machine Learning
--------------------------------------
Estimation            Learning
Classifier            Hypothesis
Data point            Example/Instance
Regression            Supervised Learning
Classification        Supervised Learning
Covariate             Feature
Response              Label 
Overall, the two fields are blending together more and more and I think this is a good thing."
As I said, this is only a rough answer - and it's by no means a comprehensive one.

For an econometrician's perspective on all of this you can't do better than to take a look at Frank Diebold's blog, No Hesitations. If you follow up on his posts with the label "Machine Learning" - and I suggest that you do - then you'll find 36 of them (at the time of writing).

If (legitimately) free books are your thing, then you'll find some great suggestions for reading more about the Machine Learning / Data Science field(s) on the KDnuggets website - specifically, here in 2017 and here in 2018.

Finally, I was pleased that the recent ASSA Meetings (ASSA2019) included an important contribution by Susan Athey (Stanford), titled "The Impact of Machine Learning on Econometrics and Economics". The title page for Susan's presentation contains three important links to other papers and a webcast.

Have fun!

© 2019, David E. Giles

Friday, January 11, 2019

Shout-out for Mischa Fisher

One of my former grad. students, Mischa Fisher, is currently Chief Economist and Advisor to the Governor of the State of Illinois. In this role he has oversight of a number of State agencies dealing with economics and data science.

This week, he had a really nice post on the Datascience.com blog. It's titled "10 Data Science Pitfalls to Avoid".

Mischa is very knowledgeable, and he writes extremely well. I strongly recommend that you take a look at his piece.

© 2019, David E. Giles

Monday, January 7, 2019

Bradley Efron and the Bootstrap

Econometricians make extensive use of various forms of "The Bootstrap", thanks to Bradley (Brad) Efron's pioneering work.

I've posted about the history of the bootstrap previously - e.g., here, and here.

You probably know by now that Brad was awarded The International Prize in Statistics last November - this was only the second time that this prize has been awarded. It's difficult to think of a more deserving recipient.


If you want to read an excellent account of Brad's work, and how the bootstrap came to be, I recommend the 2003 piece by Susan Holmes, Carl Morris, and Rob Tibshirani.

There are some fascinating snippets in this conversation/interview, including:
Efron: "One of the reasons I came to Stanford was because of its humor magazine. I wrote a humor column at Caltech, and I always wanted to write for a humor magazine. Stanford had a great humor magazine, The Chaparral. The first few months I was there, the editor literally went crazy and had to be hospitalized, and so I became editor. For one issue we did a parody of Playboy and it went a little too far. I was expelled from school, ..... I went away for 6 months and then I came back. That was by far the most famous I’ve ever been." 
 Referring to his seminal paper (Efron, 1979):
Tibshirani: "It was sent to the Annals. What kind of reception did it get?" 
Efron: "Rupert Miller was the editor of the Annals at the time. I submitted what was the Rietz lecture, and it got turned down. The associate editor, who will remain nameless, said it that didn’t have any theorems in it. So, I put some theorems in at the end and put a lot of pressure on Rupert, and he finally published it."
I guess there's still hope for the rest of us!

References

Efron, B., 1979. Bootstrap methods: Another look at the jackknife. Annals of Statistics, 7, 1-26.

Holmes, S., C. Morris, & R. Tibshirani, 2003. Bradley Efron: A conversation with good friends. Statistical Science, 18, 268-281.

© 2019, David E. Giles

Tuesday, January 1, 2019

New Year Reading Suggestions for 2019

With a new year upon us, it's time to keep up with new developments -
  • Basu, D., 2018. Can we determine the direction of omitted variable bias of OLS estimators? Working Paper 2018-16, Department of Economics, University of Massachusetts, Amherst.
  • Jiang, B., Y. Lu, & J. Y. Park, 2018. Testing for stationarity at high frequency. Working Paper 2018-9, Department of Economics, University of Sydney. 
  • Psaradakis, Z. & M. Vavra, 2018. Normality tests for dependent data: Large-sample and bootstrap approaches. Communications in Statistics - Simulation and Computation, online.
  • Spanos, A., 2018. Near-collinearity in linear regression revisited: The numerical vs. the statistical perspective. Communications in Statistics - Theory and Methods, online.
  • Thorsrud, L. A., 2018. Words are the new numbers: A newsy coincident index of the business cycle. Journal of Business & Economic Statistics, online. (Working Paper version.)
  • Zhang, J., 2018. The mean relative entropy: An invariant measure of estimation error. American Statistician, online.
© 2019, David E. Giles