Friday, September 23, 2011

Student's t-Test, Normality, and the Bootstrap

Is Student's t-statistic still t-distributed if the data are non-normal? I addressed this question - in fact, an even more general one - in an earlier post titled "Being Normal is Optional!". There, I noted that the null distribution of any scale-invariant test statistic is the same when the data come from the elliptically symmetric family of distributions as it is when they come from the normal distribution.


For econometricians, this has widespread relevance to our day-to-day work. All of our standard test statistics are scale-invariant. If you're not sure about this, just note that if they weren't, we'd get different results depending on whether we measured our data in dollars or in millions of dollars, say! More specifically, the t-test, the F-test, and Wald, LM, and LR tests all have this desirable property. So normality of the data, and normality of the errors in a regression model, are certainly not necessary for the standard textbook results that you're familiar with. Normality is just a sufficient condition.
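If you want to convince yourself of the scale-invariance point, here is a quick sketch in Python (my own illustration, not part of the original argument): the one-sample t-statistic is unchanged when every observation, and the hypothesized mean, are rescaled by the same constant.

import numpy as np
from scipy import stats

# A minimal sketch: the t-statistic is the same whether the data are
# measured in "dollars" or in "millions of dollars".
rng = np.random.default_rng(123)
y = rng.normal(loc=5.0, scale=2.0, size=50)      # pretend these are amounts in dollars

t_dollars, _ = stats.ttest_1samp(y, popmean=5.0)
t_millions, _ = stats.ttest_1samp(y / 1e6, popmean=5.0 / 1e6)  # same data, rescaled

print(t_dollars, t_millions)   # identical, up to floating-point rounding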

In that earlier post I gave some references to papers published in the 1970's that supported this quite general result. However, if we focus on the particular case of the t-test, there's some interesting, and somewhat earlier, work that you should be aware of. Let's take a look.

First, recall that Student's t-distribution arises when we form the ratio of two independent random variables: t = Z / (C/v)^(1/2), where Z follows a standard normal distribution, and C has a chi-square distribution with (say) v degrees of freedom. Then t is t-distributed with the same v degrees of freedom. Of course, this is a very general definition, and it's not specific to any particular situation. An application of this result arises when we go about our standard testing of individual coefficients in the linear regression model, for example.
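As a quick sanity check on that definition, here is a small simulation sketch (again, just an illustration of the ratio above, using Python): draw Z and C independently, form t = Z / (C/v)^(1/2), and compare the simulated quantiles with the exact t(v) quantiles.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
v = 5                                   # degrees of freedom
n_draws = 200_000

Z = rng.standard_normal(n_draws)        # numerator: standard normal
C = rng.chisquare(df=v, size=n_draws)   # chi-square with v degrees of freedom
t_sim = Z / np.sqrt(C / v)              # the ratio that defines Student's t

# Compare a few simulated upper quantiles with the exact t(v) quantiles
probs = [0.90, 0.95, 0.975, 0.99]
print(np.quantile(t_sim, probs))
print(stats.t.ppf(probs, df=v))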

"Student" (W. S. Gosset) and his contemporaries were actually aware of the fact that the Student's t-distribution is somewhat "robust" to departures from the underlying normality - the distribution of the numerator part of the statistic, t. For example, Fisher (1925) showed that a sufficient (but not necessary)condition for the t-statistic to have its usual null distribution is that the sample data have the property of "rotational symmetry".

You can find out what the latter property is by reading Efron's paper. In a nutshell, though, rotational symmetry is rarely encountered with conventional sampling procedures unless the population is normal. So this didn't really help very much in practice. In addition, Fisher's result says nothing about the distribution of the statistic under the alternative, and hence about the power of the t-test. Is the test still uniformly most powerful (UMP) against one-sided alternatives? Probably not.

The early statisticians didn't have any general results to describe the extent to which robustness to non-normality would be assured when standard sampling procedures are used.

This remained an interesting but open question until Bradley Efron (among other things, the father of the "bootstrap") answered it in the late 1960's. There had been lots of other contributions along the way, notably by Hotelling (1961), and these were surveyed by Hatch and Posten (1967). Then Efron (1969) provided us with the following important, and practical, results.

Suppose that the data are sampled independently, and that each observation comes from a distribution that is symmetric about zero. (These distributions need not be the same!) The data then have a property called "orthant symmetry". Notice that normality is a special case of this, as are lots of other sampling situations. Under orthant symmetry, the t-test is remarkably robust to departures from normality, unless this departure is really extreme. 

Under non-normal orthant symmetry, the t-test tends to be a bit "conservative" under the null hypothesis. That's to say, the true significance level is lower than the assumed (declared) level. Efron provides bounds on this reduction in the significance level, and he also investigates the test's power.
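As a rough check on this robustness (a Monte Carlo sketch of my own, not Efron's calculations), one can simulate data that are symmetric about zero but decidedly non-normal, and record how often a nominal 5% two-sided one-sample t-test rejects a true null. In experiments like this the rejection rates typically come out close to, or a little below, the nominal level.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2011)
n, reps, alpha = 20, 10_000, 0.05

def empirical_size(draw):
    """Share of replications in which the t-test rejects a true null."""
    rejections = 0
    for _ in range(reps):
        y = draw()
        _, p = stats.ttest_1samp(y, popmean=0.0)
        rejections += (p < alpha)
    return rejections / reps

print("Normal:      ", empirical_size(lambda: rng.standard_normal(n)))
print("Laplace:     ", empirical_size(lambda: rng.laplace(size=n)))
print("Uniform:     ", empirical_size(lambda: rng.uniform(-1, 1, size=n)))
print("Student t(3):", empirical_size(lambda: rng.standard_t(df=3, size=n)))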

The literature on this problem continues to grow - just check the statistics journals. I'm not suggesting that Efron had the last word, but what he had to say was highly significant. And, as often seems to be the case when it comes to the folklore side of econometrics, we've known all about this for over forty years!

Basically, we have good reason not to be too worried about using the t-test in our regression analysis, even if we can't be sure that the errors are normally distributed. Normality is a sufficient, but not necessary, condition for the test to be reliable. And, in any case, there's a back-up plan. We can always bootstrap the t-test!
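For anyone who wants to see what that back-up plan might look like, here is a bare-bones sketch of one bootstrap scheme for the one-sample t-test (one of several possibilities, and my own illustration rather than a prescription): impose the null by centring the data, resample, and compare the observed t-statistic with its bootstrap distribution.

import numpy as np
from scipy import stats

def bootstrap_t_test(y, mu0, n_boot=9999, seed=0):
    """Two-sided bootstrap p-value for H0: E[y] = mu0, based on the t-statistic."""
    rng = np.random.default_rng(seed)
    n = len(y)
    t_obs, _ = stats.ttest_1samp(y, popmean=mu0)

    y_centred = y - y.mean() + mu0          # impose the null on the resampling scheme
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        yb = rng.choice(y_centred, size=n, replace=True)
        t_boot[b], _ = stats.ttest_1samp(yb, popmean=mu0)

    # Two-sided bootstrap p-value
    p_boot = (1 + np.sum(np.abs(t_boot) >= np.abs(t_obs))) / (n_boot + 1)
    return t_obs, p_boot

# Example with skewed (non-normal) data: test the true mean of an Exponential(1) sample
rng = np.random.default_rng(1)
y = rng.exponential(scale=1.0, size=30)
print(bootstrap_t_test(y, mu0=1.0))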

Thank you, Dr. Efron!


Note: The links to the following references may require that your computer's IP address gives you access to the electronic versions of the publications in question. That's why a written References section is provided.

References

Efron, B. (1969). Student's t-test under symmetry conditions. Journal of the American Statistical Association, 64, 1278-1302.

Fisher, R. A. (1925). Applications of Student's distribution. Metron, 5, 90-104.

Hatch, L. O. and Posten, H. O. (1967). Robustness of the Student procedure: A survey. Research Report No. 24, Department of Statistics, University of Connecticut, Storrs, CT.

Hotelling, H. (1961). The behavior of some standard statistical tests under non-standard conditions. Proceedings of the Fourth Berkeley Symposium, 1, 319-360.


© 2011, David E. Giles
