Wednesday, May 28, 2014

June Reading List


Put away that novel! Here's some really fun June reading:
  • Berger, J., 2003. Could Fisher, Jeffreys and Neyman have agreed on testing? Statistical Science, 18, 1-32.
  • Canal, L. and R. Micciolo, 2014. The chi-square controversy. What if Pearson had R? Journal of Statistical Computation and Simulation, 84, 1015-1021.
  • Harvey, D. I., S. J. Leybourne, and A. M. R. Taylor, 2014. On infimum Dickey-Fuller unit root tests allowing for a trend break under the null. Computational Statistics and Data Analysis, 78, 235-242.
  • Karavias, Y. and E. Tzavalis, 2014. Testing for unit roots in short panels allowing for a structural break. Computational Statistics and Data Analysis, 76, 391-407.
  • King, G. and M. E. Roberts, 2014. How robust standard errors expose methodological problems they do not fix, and what to do about it. Mimeo., Harvard University.
  • Kuroki, M. and J. Pearl, 2014. Measurement bias and effect restoration in causal inference. Biometrika, 101, 423-437.
  • Manski, C., 2014. Communicating uncertainty in official economic statistics. Mimeo., Department of Economics, Northwestern University.
  • Martínez-Camblor, P., 2014. On correlated z-values in hypothesis testing. Computational Statistics and Data Analysis, in press.

© 2014, David E. Giles

2 comments:

  1. Dear Professor Giles,
    Your reading list is, as usual, very interesting. One thing I stumbled over is this:
    Referring to the quote from Greene included in your post here (http://davegiles.blogspot.de/2013/05/robust-standard-errors-for-nonlinear.html) and to the paper by King & Roberts, I find it confusing to read in K&R (page 3):
    "Consider a simple and well-known example, in the best case for robust standard errors: The maximum likelihood estimator of the coefficients in an assumed homoskedastic linear-normal regression model can be consistent and unbiased (albeit inefficient) even if the data generation process is actually heteroskedastic."

    As far as I understood Greene, estimation by ML (in contrast to OLS) does not ensure unbiased coefficients under heteroskedasticity. I noticed the word "can", but it still seems rather unlikely to me. Do you know what I have misunderstood?

    Replies
    1. Martin: If the model is "linear and normal" (I assume that means normally distributed errors), the MLE = OLS for the estimation of the coefficients (the betas), though not for the estimation of the error variance.
      In this particular case, the MLE is an unbiased estimator of the coefficients. More generally, the MLE can be biased or unbiased in finite samples, though it is asymptotically unbiased. And this is irrespective of any heteroskedasticity issues.
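      To see this numerically, here is a minimal Monte Carlo sketch in Python (the design, sample size, and heteroskedasticity pattern are illustrative choices of mine, not taken from K&R). Because maximizing the normal log-likelihood in the betas is equivalent to minimizing the residual sum of squares, the OLS fit below is also the MLE of the coefficients; the simulation shows that this estimator remains (mean-)unbiased, though inefficient, when the error variance actually grows with x:

      import numpy as np

      rng = np.random.default_rng(42)
      n, reps = 100, 5000
      beta = np.array([1.0, 2.0])           # true intercept and slope (illustrative)
      x = rng.uniform(0.0, 10.0, size=n)
      X = np.column_stack([np.ones(n), x])  # fixed design matrix with an intercept

      est = np.empty((reps, 2))
      for r in range(reps):
          # Heteroskedastic normal errors: the s.d. grows with x, so a
          # homoskedastic "linear-normal" likelihood is misspecified here.
          e = rng.normal(0.0, 0.5 + 0.3 * x)
          y = X @ beta + e
          # OLS coefficients; under a normal likelihood these are also the
          # MLE of beta (the likelihood is maximized where the residual
          # sum of squares is minimized).
          est[r] = np.linalg.lstsq(X, y, rcond=None)[0]

      print("true beta:        ", beta)
      print("mean of estimates:", est.mean(axis=0))  # close to (1.0, 2.0): unbiased
      print("s.d. of estimates:", est.std(axis=0))   # larger than GLS would achieve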
