Friday, July 6, 2012

The Milliken-Graybill Theorem

Let's think about a standard result from regression analysis that we're totally familiar with. Suppose that we have a linear OLS regression model with non-random regressors, and normally distributed errors that are serially independent and homoskedastic. Then the usual F-test statistic, for testing the validity of a set of linear restrictions on the model's parameters, is exactly F-distributed in finite samples, if the null hypothesis is true.
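
For concreteness: with $q$ linearly independent restrictions, $n$ observations, and $k$ regressors in the unrestricted model, the statistic in question is the familiar

$$F = \frac{(RSS_R - RSS_U)/q}{RSS_U/(n - k)} \sim F_{q,\,n-k} \quad \text{under } H_0,$$

where $RSS_R$ and $RSS_U$ are the restricted and unrestricted residual sums of squares.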

In fact, the F-test is Uniformly Most Powerful Invariant (UMPI) in this situation. That's why we use it! If the null hypothesis is false, then this test statistic follows a non-central F-distribution.

It's less well-known that all of these results still hold if the assumed normality of the errors is dropped in favour of an assumption that the errors follow any distribution in the so-called "elliptically symmetric" family of distributions. On this point, see my earlier post here.

What if I were now to say that some of the regressors are actually random, rather than non-random? Is the F-test statistic still exactly F-distributed (under the null hypothesis)?

I imagine that a lot of you would answer, "no". You'd probably add something along the lines of: "The details of its finite-sample distribution will no doubt depend on the way in which the regressors are random".

The latter statement is indeed correct, but it can still be the case that the statistic is exactly F-distributed!

[I'm sure that you'd all also mention that if there are q linearly independent restrictions being tested, then the statistic qF will have a large-sample (asymptotic) distribution that is Chi-Square with q degrees of freedom, if the restrictions are valid.]
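
(In symbols: $qF \overset{d}{\rightarrow} \chi^2_q$ as $n \rightarrow \infty$, when the restrictions are valid.)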

Now, let's stick with finite-sample results, and let's see how it can possibly be true that the F-statistic is still F-distributed, even in the presence of certain types of random regressors.

It all hinges on the so-called "Milliken-Graybill Theorem". In simple terms, the Milliken-Graybill result says the following. If we augment a regression model with additional regressors that are random only because they are (possibly non-linear) functions of the OLS estimator of the model's original coefficients, then the F-test statistic for the hypothesis that these additional regressors have zero coefficients is exactly F-distributed under the null.

You can check out the formal details in Milliken and Graybill (1970).

So, the theorem provides us with a sufficient set of conditions on the randomness of certain of the regressors, for the usual F-statistic to still be exactly F-distributed, in finite samples.

Interesting, no doubt, but what's the practical use of this result?

One example given by Milliken and Graybill (1970, p.805) is as follows. Suppose we have a non-linear regression model of the form:

    $y_i = \beta_0 + \beta_1 \log(X_{1i}) + \beta_2 \log(X_{2i}) + \beta_3 X_{1i}^{-2\beta_1} + \beta_4 e^{-\beta_2 X_{2i}} + \varepsilon_i \; ; \quad i = 1, \ldots, n$

and we want to test $H_0: \beta_3 = \beta_4 = 0$. If we create the regressors $X_{1i}^{-2b_1}$ and $e^{-b_2 X_{2i}}$, where $b_1$ and $b_2$ are the estimates of $\beta_1$ and $\beta_2$ when the (linear) model:

                      $y_i = \beta_0 + \beta_1 \log(X_{1i}) + \beta_2 \log(X_{2i}) + v_i \; ; \quad i = 1, \ldots, n$

is fitted by OLS; and then estimate the model:

   $y_i = \beta_0 + \beta_1 \log(X_{1i}) + \beta_2 \log(X_{2i}) + \beta_3 X_{1i}^{-2b_1} + \beta_4 e^{-b_2 X_{2i}} + u_i \; ; \quad i = 1, \ldots, n$

then the usual F-statistic for testing $H_0: \beta_3 = \beta_4 = 0$ will still be exactly F-distributed if $H_0$ is true.
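
To see the theorem in action, here's a small Monte Carlo sketch in Python (NumPy/SciPy). Everything in it - the sample size, the parameter values, and the regressor designs - is made up purely for illustration. It implements the two-step procedure above, generating the data under H0, and checks that the empirical size of the nominal 5% F-test really is 5%:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n, reps, alpha = 50, 5000, 0.05

    # Fixed, non-random regressors (held constant across replications)
    X1 = rng.uniform(1.0, 3.0, n)
    X2 = rng.uniform(0.0, 1.0, n)
    X_lin = np.column_stack([np.ones(n), np.log(X1), np.log(X2)])

    def ols_rss(X, y):
        # OLS coefficients and residual sum of squares
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ b
        return b, e @ e

    q = 2                    # restrictions being tested: beta3 = beta4 = 0
    k = X_lin.shape[1] + q   # regressors in the augmented model
    crit = stats.f.ppf(1.0 - alpha, q, n - k)

    rejections = 0
    for _ in range(reps):
        # Generate y under H0, so the added regressors truly have zero coefficients
        y = 1.0 + 0.5 * np.log(X1) - 0.8 * np.log(X2) + rng.standard_normal(n)

        # Step 1: fit the linear model by OLS; b[1], b[2] estimate beta1, beta2
        b, rss_r = ols_rss(X_lin, y)

        # Step 2: constructed regressors - random only through the OLS estimator b
        Z = np.column_stack([X1 ** (-2.0 * b[1]), np.exp(-b[2] * X2)])
        _, rss_u = ols_rss(np.column_stack([X_lin, Z]), y)

        F = ((rss_r - rss_u) / q) / (rss_u / (n - k))
        rejections += F > crit

    print(f"Empirical size of the nominal 5% test: {rejections / reps:.3f}")

If the theorem holds, the printed rejection rate should sit very close to 0.05, despite the randomness of the two constructed regressors.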

Let's think of another very common application of the Milliken-Graybill Theorem in econometrics.

You've probably come across Ramsey's so-called "Regression Specification Error Test" (RESET test). This is one of a suite of regression mis-specification tests proposed by Ramsey (1969), and the one that is still most widely used by econometricians. This is a test of a particular set of linear restrictions relating to random regressors, and the test statistic is exactly F-distributed (under the null of no mis-specification).

Why is it an exact test? Well, again, it's because of the Milliken-Graybill Theorem!

Briefly, here's the way we usually set up and apply the RESET test. (There are other possibilities - e.g., see Thursby and Schmidt, 1977).

Suppose that we have a standard OLS regression model, where the regressors are non-stochastic; and the errors are normally distributed, serially independent, and homoskedastic:

                        $y = X\beta + \varepsilon$.                                                          (1)

However, we are concerned that we may have mis-specified the functional form of the model, or omitted some (unknown) relevant regressors. In this case, E[ε | X] = ζ ≠ 0.

The RESET test is an example of a "variable addition test" - see Pagan (1984), Pagan and Hall (1983), and DeBenedictis and Giles (1998) for more details, and for other examples of such tests.

Specifically, we take the OLS fitted (predicted) values from (1), say y*. We then add powers of y* to (1), and fit the augmented model:

                       $y = X\beta + (y^*)^2\gamma_2 + (y^*)^3\gamma_3 + \cdots + u$.                   (2)

What's the motivation for doing this?

Well, it really goes back to earlier work by Anscombe (1961). We approximate the unknown conditional mean of the errors, ζ, by the unknown function Zθ, and then test if ζ = 0 by testing if θ = 0.          

This unknown function can be approximated (locally) by taking a Taylor series expansion, leading to the powers of y* that enter (2).
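
In outline (this is the standard heuristic, rather than a formal derivation): if the true conditional mean of y is some smooth function $g(X\beta)$, rather than $X\beta$ itself, then an element-wise Taylor expansion gives

$$g(X\beta) \approx X\beta + \theta_2 (X\beta)^{2} + \theta_3 (X\beta)^{3} + \cdots,$$

with the powers taken element by element. Replacing the unobservable $X\beta$ by $y^* = Xb$ then yields exactly the powers of $y^*$ that appear in (2).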

Typically, we include no more than 3 or 4 powers of y* in (2) in practice, because of the strong multicollinearity between these powers. For further justification, see Ramsey and Gilbert (1972) and Thursby (1989). Note that we can't include y* itself as a regressor in (2): because y* = Xb is an exact linear combination of the columns of X, the overall regressor matrix in (2) would then have less than full (column) rank.

Now, y* = Xb, where b is the OLS estimator of β in (1). So, although the powers of y* that enter as regressors in (2) are random, the only source of this randomness is the OLS estimator, b.

The RESET test involves testing the restrictions, γ2 = γ3 = .... = 0. Notice that if this null hypothesis is true, then the only regressors in the model will be the columns of X, and these are non-random. So, by the Milliken-Graybill Theorem, the usual F-statistic for testing the restrictions associated with the RESET test will still be exactly F-distributed (under the null), in finite samples.
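
To make the mechanics completely transparent, here's a bare-bones sketch of the RESET calculation in Python, using (y*)² and (y*)³. The data-generating process is just a made-up stand-in; in practice you'd supply your own y and regressor matrix X:

    import numpy as np
    from scipy import stats

    # Made-up stand-in data; replace with your own y and regressor matrix X
    rng = np.random.default_rng(1)
    n = 100
    x = rng.uniform(0.0, 10.0, n)
    X = np.column_stack([np.ones(n), x])
    y = 2.0 + 0.5 * x + rng.standard_normal(n)

    def ols_rss(X, y):
        # OLS coefficients and residual sum of squares
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ b
        return b, e @ e

    # Fit (1) and form the fitted values y*
    b, rss_r = ols_rss(X, y)
    y_star = X @ b

    # Augment with (y*)^2 and (y*)^3, and fit (2)
    X_aug = np.column_stack([X, y_star ** 2, y_star ** 3])
    _, rss_u = ols_rss(X_aug, y)

    # F-test of gamma2 = gamma3 = 0
    q = 2
    df2 = n - X_aug.shape[1]
    F = ((rss_r - rss_u) / q) / (rss_u / df2)
    print(f"RESET F = {F:.3f}, p-value = {stats.f.sf(F, q, df2):.3f}")

Written out this way, the Milliken-Graybill structure is explicit: the only randomness in the added columns of X_aug enters through b.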

One important point that is often overlooked is that the above form of the RESET F-test is not UMPI. Under the alternative hypothesis, the model (2) includes random regressors. In fact, the power of the RESET test can be quite poor in some situations, and this is something to watch for. More details on this are provided by a number of authors, including Thursby and Schmidt (1977) and DeBenedictis and Giles (1998).

Finally, my colleague, Ken Stewart, provided some extensions of the basic Milliken-Graybill result, and a re-interpretation in terms of Gauss-Newton regressions, in Stewart (2000). This is well worth looking at for some additional insights.

Here's a simple example showing the application of the RESET test in EViews. The associated workfile is in the code section of this blog, and the (artificial) data I've used are in the data section.

The basic OLS regression result is:

[EViews OLS estimation output]
The Jarque-Bera test suggests that the errors are normally distributed:

[EViews Jarque-Bera test output]
In addition, if you apply the usual LM test for serial independence of the errors, this hypothesis can't be rejected. Similarly, the Breusch-Pagan-Godfrey test and White's test suggest that the errors are homoskedastic. So, we can apply the RESET test.

In EViews, this is done, once the above model is estimated, by selecting VIEW / STABILITY DIAGNOSTICS / RAMSEY RESET TEST. You then choose the number of powers of y* that you want to use. Selecting "1" means that only (y*)² will be used, etc. I've selected "3", and the results I get are:

[EViews RESET test output]
The RESET F-statistic has a p-value of just over 20%, so I am not going to reject the null hypothesis that the powers of y* have zero coefficients. This is good news - I cannot reject the hypothesis that the functional form of the model is correctly specified.
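
If you don't have EViews to hand, statsmodels offers a ready-made version of the test in Python; here's a sketch with made-up stand-in data. (A caveat: if I'm reading the statsmodels documentation correctly, its power argument is the highest power included, so power=4 corresponds to selecting "3" in EViews - i.e., (y*)², (y*)³ and (y*)⁴.)

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import linear_reset

    # Made-up stand-in data; replace with your own series
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 10.0, 100)
    y = 2.0 + 0.5 * x + rng.standard_normal(100)

    res = sm.OLS(y, sm.add_constant(x)).fit()

    # power=4 adds (y*)^2, (y*)^3 and (y*)^4; use_f=True reports the F form
    print(linear_reset(res, power=4, use_f=True))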

The take-away message:
Suppose we add some regressors to a model, and these variables are random solely because they are functions of the original OLS estimator. If we construct an F-test of restrictions, such that the random regressors aren't present when the null hypothesis is true, then the F-statistic will still be exactly F-distributed, by virtue of the Milliken-Graybill Theorem.
This is a very powerful result when it comes to the construction of a range of "variable addition" tests for model mis-specification. 


References

Anscombe, F. J., 1961. Examination of residuals. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, 1, 1-36.

DeBenedictis, L. F. and D. E. A. Giles, 1998. Diagnostic testing in econometrics: Variable addition, RESET, and Fourier approximations. In A. Ullah and D. E. A. Giles (eds.), Handbook of Applied Economic Statistics, Marcel Dekker, New York, 383-417. (WP version; figures.)

Milliken, G. A. and F. A. Graybill, 1970. Extensions of the general linear hypothesis model. Journal of the American Statistical Association, 65, 797-807.

Pagan, A. R., 1984. Model evaluation by variable addition. In D. F. Hendry and K. F. Wallis (eds.), Econometrics and Quantitative Economics, Blackwell, Oxford, 103-133.

Pagan, A. R. and A. D. Hall, 1983. Diagnostic tests as residual analysis. Econometric Reviews, 2, 159-218.

Ramsey, J. B., 1969. Tests for specification errors in classical linear least squares regression analysis. Journal of the Royal Statistical Society, Series B, 31, 350-371.

Ramsey, J. B. and R. Gilbert, 1972. A Monte Carlo study of some small sample properties of tests for specification error. Journal of the American Statistical Association, 67, 180-186.

Stewart, K. G., 2000. GNR, MGR, and exact misspecification testing. Econometric Reviews, 19, 233-240. (WP version here.)

Thursby, J. G., 1989. A comparison of several specification error tests for a general alternative. International Economic Review, 30, 217-230.

Thursby, J. G. and P. Schmidt, 1977. Some properties of tests for specification error in a linear regression model. Journal of the American Statistical Association, 72, 635-641.




© 2012, David E. Giles

8 comments:

  1. I'm confused by your statement that the F-test for a set of linear restrictions is UMP. Think about the univariate case. A one-sided t-test is UMP, but a two-sided t-test (i.e. the F-test) is only UMPU (uniformly most powerful unbiased). In the case of a set of linear restrictions, I thought the optimality property of the F-test is very weak (something like most powerful among the set of tests that have a fixed size for all alternatives within a certain ellipsoid...)

    Replies
    1. Angelo: Absolutely right! In general it's very rare to be able to construct a UMP test against 2-sided alternatives. What I should have said is "Uniformly Most Powerful Invariant" (correction now made), and I believe this more restrictive assertion is correct.

  2. Am I correct in understanding that this only applies when the regressors in the "original" model are non-stochastic? If so, this seems like a pretty major caveat (and one that one might argue should have been included in your "take-away message").

    Replies
    1. Yes - that's right. It's there in the message - there should be no random regressors when the null is true (i.e., when the restrictions are imposed).

  3. Sir, tell me how to do the BDS test in EViews.

    Replies
    1. Seriously??? You start up EViews, you click on the "HELP" button, and then you type in "BDS". :-)

  4. Please sir, how important is the Ramsey RESET test for an ARDL model? I ran an ARDL model, and all the other post-estimation results are very good news, including the CUSUM and CUSUM of squares tests, but the Ramsey test shows me that the model is not correctly specified. The majority of the research I reviewed did not conduct the Ramsey test or any test for model specification.

    Replies
    1. The test won't be appropriate in that context as there are lags of the dependent variable among the regressors. Note that in the post I say:
      "Notice that if this null hypothesis is true, then the only regressors in the model will be the columns of X, and these are non-random. So, by the Milliken-Graybill Theorem, the usual F-statistic for testing the restrictions associated with the RESET test will still be exactly F-distributed (under the null), in finite samples." The requirement of non-random regressors is crucial.

