
Wednesday, April 16, 2014

An Exercise With the SURE Model

Here's an exercise that I sometimes set for students if we're studying the Seemingly Unrelated Regression equations (SURE) model. In fact, I used it as part of a question in the final examination that my grad. students sat last week.

Suppose that we have a 2-equation SURE model:

                        y1 = X1β1 + ε1

                        y2 = X2β2 + ε2  ,

where the sample is "balanced" (i.e., we have n observations on all of the variables in both equations), and the errors satisfy the usual assumptions for a SURE model:

                     E[ε] = 0  ;  V(ε) = Σ ⊗ In ,
where                ε = [ε1' , ε2']' .
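
For what follows, recall that if we stack the two equations as y = Xβ + ε, with y = [y1' , y2']' , X = diag(X1 , X2), and β = [β1' , β2']' , then the SURE estimator of β is just the GLS estimator based on Σ:

                        bSURE = [X'(Σ-1 ⊗ In)X]-1 X'(Σ-1 ⊗ In)y ,

while the OLS estimators come from applying least squares to each equation separately.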

Exercise:  Prove that the SURE estimators of β1 and β2 are identical to the OLS estimators of β1 and β2 if the condition, X1(X1'X1)-1X1' = X2(X2'X2)-1X2' , is satisfied.

Viren Srivastava and I gave this as Exercise 2.14 in our 1987 book on the SURE model. However, we didn't give the solution there - so don't think you can cheat in that way!

You can see that the above condition is satisfied if X1 = X2, and the latter condition is one that is mentioned in most econometrics textbooks. However, it's much more stringent than is needed to get the result.

Also, the above condition is necessary, as well as sufficient, for the OLS and SURE estimators to coincide; proving that, however, is another matter.

I'll post the "solution" to the exercise in a few days' time.
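
In the meantime, if you want to convince yourself numerically, here's a rough sketch in Python (the data, the matrix A used to build X2, and the value of Σ below are all arbitrary, made-up choices, not part of the exercise). It constructs X2 = X1A with A nonsingular, so that X1 ≠ X2 but the condition above is satisfied, and then checks that the SURE (GLS) estimator coincides with equation-by-equation OLS.

import numpy as np

# Sketch only: Sigma, A, and the coefficient values are arbitrary choices.
rng = np.random.default_rng(42)
n = 50

X1 = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # n x 3, with an intercept
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])                                # any nonsingular 3 x 3 matrix
X2 = X1 @ A            # X1 != X2, but X1(X1'X1)^(-1)X1' = X2(X2'X2)^(-1)X2'

Sigma = np.array([[1.0, 0.7],
                  [0.7, 2.0]])                                 # error covariance across equations
eps = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
y1 = X1 @ np.array([1.0, -2.0, 0.5]) + eps[:, 0]
y2 = X2 @ np.array([0.3, 1.5, -1.0]) + eps[:, 1]

# Equation-by-equation OLS
b1_ols = np.linalg.solve(X1.T @ X1, X1.T @ y1)
b2_ols = np.linalg.solve(X2.T @ X2, X2.T @ y2)

# SURE = GLS on the stacked system: y = X*beta + eps, with V(eps) = Sigma kron I_n
X = np.block([[X1, np.zeros((n, 3))],
              [np.zeros((n, 3)), X2]])
y = np.concatenate([y1, y2])
W_inv = np.kron(np.linalg.inv(Sigma), np.eye(n))
b_sure = np.linalg.solve(X.T @ W_inv @ X, X.T @ W_inv @ y)

# The two sets of estimates should agree to machine precision
print(np.max(np.abs(b_sure - np.concatenate([b1_ols, b2_ols]))))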


Reference

Srivastava, V. K. and D. E. A. Giles, 1987. Seemingly Unrelated Regression Equations Models: Estimation and Inference. Marcel Dekker, New York.



© 2014, David E. Giles

2 comments:

  1. I have two ways to demonstrate this result:
    A. Kruskal's condition says that GLS=OLS if I can find a matrix G such that WX=XG, where W is the covariance matrix of the disturbances (which I'll denote S@I(n,n), since I can't figure out how to write Sigma or the Kronecker product in this comment). Your condition says X2=X1*C, where C=(X1'X1)^(-1)*X1'*X2 (just post-multiply the left- and right-hand sides of the condition by X2). Note that C is square and nonsingular: the condition means X1 and X2 span the same column space, so k1=k2. Therefore we can write X=diag(X1,X2)=diag(X1,X1)*A=(I(2,2)@X1)*A, where A=diag(I(k1,k1),C) is nonsingular. Then use the properties of Kronecker products to show that (S@I(n,n))*(I(2,2)@X1)=(I(2,2)@X1)*(S@I(k1,k1)), so that WX=X*G with G=inv(A)*(S@I(k1,k1))*A. If you don't want to use Kruskal's theorem, you can follow the reasoning above and add a few lines to show that GLS is just OLS on each equation.

    B. Subtract sigma(1,2)/sigma(1,1) times the first equation from the second. The transformed error u2-(sigma(1,2)/sigma(1,1))*u1 is uncorrelated with u1, so GLS amounts to minimizing a weighted sum of the squared errors from the first equation and the transformed second equation. Given the GLS estimate b1, the necessary and sufficient (n.s.) condition for b2 is that it satisfies
    X2'(y2-X2*b2-(sigma(1,2)/sigma(1,1))(y1-X1*b1))=0.
    By symmetry, we also see that the n.s. condition for b1 given b2 is that it satisfies
    X1'(y1-X1*b1-(sigma(1,2)/sigma(2,2))(y2-X2*b2))=0.

    The OLS estimators satisfy
    X2'(y2-X2*b2ols)=0 and X1'(y1-X1*b1ols)=0

    Your condition says that a vector is orthogonal to span(X1) iff it is orthogonal to span(X2). So we also get
    X1'(y2-X2*b2ols)=0 and X2'(y1-X1*b1ols)=0.

    From this we quickly see that, under your condition, the OLS estimators satisfy the n.s. conditions for the GLS estimators, so they'll be the same.
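
Both steps in the comment above are easy to check numerically. Here's a rough Python sketch (the values of X1, C, and Sigma are invented for illustration): part (A) verifies the Kruskal relation W*X = X*G with the G given above, and part (B) verifies the cross-orthogonality of the OLS residuals.

import numpy as np

rng = np.random.default_rng(1)
n, k = 40, 3
X1 = rng.normal(size=(n, k))
C = rng.normal(size=(k, k))            # a k x k matrix, almost surely nonsingular
X2 = X1 @ C                            # the condition then holds, with X2 = X1*C
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.5]])

# (A) Kruskal: W*X = X*G, with W = Sigma kron I_n, X = diag(X1, X2),
#     A = diag(I_k, C) and G = inv(A) * (Sigma kron I_k) * A
X = np.block([[X1, np.zeros((n, k))],
              [np.zeros((n, k)), X2]])
W = np.kron(Sigma, np.eye(n))
A = np.block([[np.eye(k), np.zeros((k, k))],
              [np.zeros((k, k)), C]])
G = np.linalg.solve(A, np.kron(Sigma, np.eye(k)) @ A)
print(np.max(np.abs(W @ X - X @ G)))                          # ~ machine zero

# (B) Cross-orthogonality of the OLS residuals: anything orthogonal to
#     span(X1) is orthogonal to span(X2), and vice versa
eps = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
y1 = X1 @ rng.normal(size=k) + eps[:, 0]
y2 = X2 @ rng.normal(size=k) + eps[:, 1]
e1 = y1 - X1 @ np.linalg.solve(X1.T @ X1, X1.T @ y1)
e2 = y2 - X2 @ np.linalg.solve(X2.T @ X2, X2.T @ y2)
print(np.max(np.abs(X2.T @ e1)), np.max(np.abs(X1.T @ e2)))   # both ~ machine zero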

