Friday, July 6, 2012

The Milliken-Graybill Theorem

Let's think about a standard result from regression analysis that we're totally familiar with. Suppose that we have a linear OLS regression model with non-random regressors, and normally distributed errors that are serially independent and homoskedastic. Then, the usual F-test statistic, for testing the validity of a set of linear restrictions on the model's parameters, is exactly F-distributed in finite samples, if the null hypothesis is true.
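Just to pin down the notation (my own choice of symbols, nothing more): suppose we have n observations, k regressors, and q linearly independent restrictions of the form Rβ = r. Then the statistic in question can be written as

F = [(RSS_R - RSS_U) / q] / [RSS_U / (n - k)]
  = (Rb - r)' [R(X'X)^(-1)R']^(-1) (Rb - r) / (q s²) ,

where RSS_R and RSS_U are the restricted and unrestricted residual sums of squares, b is the OLS estimator, and s² = RSS_U / (n - k). Under the null hypothesis, F is distributed exactly as F(q, n - k).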

In fact, the F-test is Uniformly Most Powerful Invariant (UMPI) in this situation. That's why we use it! If the null hypothesis is false, then this test statistic follows a non-central F-distribution.
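More precisely, under the alternative the statistic follows a non-central F distribution with q and (n - k) degrees of freedom, and non-centrality parameter

λ = (Rβ - r)' [R(X'X)^(-1)R']^(-1) (Rβ - r) / σ²

(at least in the more common convention; some authors define the non-centrality as λ/2). The further Rβ is from r, the larger is λ, and the greater is the power of the test.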

It's less well-known that all of these results still hold if the assumed normality of the errors is dropped in favour of an assumption that the errors follow any distribution in the so-called "elliptically symmetric" family of distributions. On this point, see my earlier post here.
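If you'd like to convince yourself of this, here's a small Monte Carlo sketch in Python (the sample size, design matrix, and restrictions below are arbitrary choices of mine, purely for illustration). The errors are multivariate Student-t: a standard normal vector divided by a single chi-square mixing variable, which makes them elliptically symmetric but decidedly non-normal. Because that mixing variable scales the numerator and denominator of F by the same factor, it cancels in the ratio, and the simulated quantiles of the test statistic should match those of the exact F(q, n - k) distribution under the null:

import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)
n, k, q, nu = 50, 4, 2, 5      # observations, coefficients, restrictions, t degrees of freedom

# Fixed design matrix: drawn once, then held constant across replications
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])
beta = np.array([1.0, 0.0, 0.0, 2.0])   # the null below is true for this beta

# H0: beta_2 = beta_3 = 0, written as R beta = r
R = np.zeros((q, k))
R[0, 1] = 1.0
R[1, 2] = 1.0
r = np.zeros(q)

XtX_inv = np.linalg.inv(X.T @ X)

def f_stat(y):
    b = XtX_inv @ X.T @ y
    resid = y - X @ b
    s2 = resid @ resid / (n - k)
    d = R @ b - r
    return (d @ np.linalg.solve(R @ XtX_inv @ R.T, d) / q) / s2

draws = []
for _ in range(20000):
    # Multivariate Student-t errors: a Gaussian vector divided by a single
    # chi-square mixing variable, so the errors are elliptically symmetric
    # but certainly not (independently) normal.
    z = rng.standard_normal(n)
    w = rng.chisquare(nu)
    eps = z / np.sqrt(w / nu)
    draws.append(f_stat(X @ beta + eps))

# The simulated quantiles should line up with the exact F(q, n - k) quantiles.
probs = [0.50, 0.90, 0.95, 0.99]
print("simulated:", np.quantile(draws, probs).round(3))
print("F(q, n-k):", stats.f.ppf(probs, q, n - k).round(3))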

What if I were now to say that some of the regressors are actually random, rather than non-random? Is the F-test statistic still exactly F-distributed (under the null hypothesis)?