Showing posts with label Robust estimation. Show all posts

Sunday, September 1, 2019

Back to School Reading

Here we are - it's Labo(u)r Day weekend already in North America, and we all know what that means! It's back to school time.

You'll need a reading list, so here are some suggestions:

  • Franses, Ph. H. B. F., 2019. Professional forecasters and January. Econometric Institute Research Papers EI2019-25, Erasmus University Rotterdam.
  • Harvey, A. & R. Ito, 2019. Modeling time series when some observations are zero. Journal of Econometrics, in press.
  • Leamer, E. E., 1978. Specification Searches: Ad Hoc Inference With Nonexperimental Data. Wiley, New York. (This is a legitimate free download.)
  • MacKinnon, J. G., 2019. How cluster-robust inference is changing applied econometrics. Working Paper 1413, Economics Department, Queen's University.
  • Steel, M. F. J., 2019. Model averaging and its use in economics. Mimeo., Department of Statistics, University of Warwick.
  • Stigler, S. M., 1981. Gauss and the invention of least squares. Annals of Statistics, 9, 465-474. 
© 2019, David E. Giles

Friday, May 31, 2019

Reading Suggestions for June

Well, here we are - it's June already.

Here are my reading suggestions:
© 2019, David E. Giles

Wednesday, November 14, 2018

More Sandwiches, Anyone?

Consider this my Good Deed for the Day!

A re-tweet from a colleague whom I follow on Twitter brought an important paper to my attention. I thought I'd share it more widely.

The paper is titled "Small-sample methods for cluster-robust variance estimation and hypothesis testing in fixed effect models", by James Pustejovski (@jepusto) and Beth Tipton (@stats_tipton). It appears in the Journal of Business & Economic Statistics.

You can tell right away, from its title, that this paper is going to be a must-read for empirical economists. And note the words "Small-sample" in the title - that sounds interesting.

Here's a compilation of Beth's six tweets:

Monday, October 13, 2014

Illustrating Asymptotic Behaviour - Part III

This is the third in a sequence of posts about some basic concepts relating to large-sample asymptotics and the linear regression model. The first two posts (here and here) dealt with items 1 and 2 in the following list, and you'll find it helpful to read them before proceeding with this post:
  1. The consistency of the OLS estimator in a situation where it's known to be biased in small samples.
  2. The correct way to think about the asymptotic distribution of the OLS estimator.
  3. A comparison of the OLS estimator and another estimator, in terms of asymptotic efficiency.
Here, we're going to deal with item 3, again via a small Monte Carlo experiment, using EViews.
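The flavour of that experiment can be sketched in a few lines of code. This is not the post's EViews experiment - it's a hypothetical Python illustration in which OLS on the full sample is compared with a deliberately wasteful competitor (OLS applied to only half of the observations). Both estimators are consistent, but the half-sample one has roughly twice the sampling variance at every n, which is exactly the sort of asymptotic-efficiency comparison item 3 is about:

```python
import numpy as np

# Hypothetical sketch (not the post's EViews code): compare the sampling
# variance of two consistent estimators of the slope in y = beta*x + u.
rng = np.random.default_rng(42)
beta = 2.0

def simulate(n, reps=5000):
    full, half = np.empty(reps), np.empty(reps)
    for r in range(reps):
        x = rng.uniform(1.0, 5.0, n)
        y = beta * x + rng.normal(0.0, 1.0, n)
        full[r] = (x @ y) / (x @ x)                  # OLS (through the origin), all n obs.
        m = n // 2
        half[r] = (x[:m] @ y[:m]) / (x[:m] @ x[:m])  # OLS using only half the sample
    return full.var(), half.var()

for n in (25, 100, 400):
    v_full, v_half = simulate(n)
    print(f"n={n:4d}  var(full-sample OLS)={v_full:.5f}  var(half-sample OLS)={v_half:.5f}")
```

Both Monte Carlo variances shrink towards zero as n grows (consistency), but the full-sample estimator's variance is smaller at every sample size - it is relatively more efficient.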

Friday, June 15, 2012

F-tests Based on the HC or HAC Covariance Matrix Estimators

We all do it - we compute "robust" standard errors when estimating a regression model in any context where we suspect that the model's errors may be heteroskedastic and/or autocorrelated.

More correctly, we select the option in our favourite econometrics package so that the (asymptotic) covariance matrix for our estimated coefficients is estimated using either White's heteroskedasticity-consistent (HC) estimator, or the Newey-West heteroskedasticity and autocorrelation-consistent (HAC) estimator.

The square roots of the diagonal elements of the estimated covariance matrix then provide us with the robust standard errors that we want. These standard errors are consistent estimates of the true standard deviations of the estimated coefficients, even if the errors are heteroskedastic (in White's case) or heteroskedastic and/or autocorrelated (in the Newey-West case).

That's fine, as long as we keep in mind that this is just an asymptotic result.

Then we use a robust standard error to construct a "t-test", or the estimated covariance matrix to construct an "F-test" or a Wald test.

And that's when the trouble starts!
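To make the mechanics concrete, here is a minimal numpy sketch of the two "sandwich" estimators described above - White's HC0 and the Newey-West HAC with Bartlett kernel weights - together with a Wald statistic built from one of them. The data, the lag length, and the restriction being tested are all invented for illustration; in practice you would simply tick the option in your package:

```python
import numpy as np

# Simulated data with heteroskedastic errors (purely illustrative).
rng = np.random.default_rng(0)
n, L = 200, 4
x = rng.normal(size=n)
u = rng.normal(size=n) * (1.0 + 0.5 * np.abs(x))   # error variance depends on x
y = 1.0 + 0.5 * x + u
X = np.column_stack([np.ones(n), x])               # intercept + regressor

b = np.linalg.solve(X.T @ X, X.T @ y)              # OLS coefficients
e = y - X @ b                                      # OLS residuals
bread = np.linalg.inv(X.T @ X)                     # the "bread" of the sandwich

# White's HC0: (X'X)^{-1} [X' diag(e^2) X] (X'X)^{-1}.
meat_hc = (X * e[:, None] ** 2).T @ X
V_hc = bread @ meat_hc @ bread

# Newey-West HAC: add autocovariance terms with Bartlett weights w_l = 1 - l/(L+1).
meat_hac = meat_hc.copy()
for l in range(1, L + 1):
    w = 1.0 - l / (L + 1)
    G = (X[l:] * e[l:, None]).T @ (X[:-l] * e[:-l, None])
    meat_hac += w * (G + G.T)
V_hac = bread @ meat_hac @ bread

se_hc = np.sqrt(np.diag(V_hc))                     # White robust standard errors
se_hac = np.sqrt(np.diag(V_hac))                   # Newey-West robust standard errors

# Wald statistic for the single restriction that the slope is zero,
# using the HC covariance; asymptotically chi-square(1) under the null.
wald = b[1] ** 2 / V_hc[1, 1]
```

Note that every step above leans on asymptotics: the consistency of the sandwich estimator, and the chi-square (or F) distribution of the Wald statistic, are large-sample results - which is precisely where the trouble described in this post begins.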