Tuesday, January 17, 2012

Are Those Conditions Necessary, or Just Sufficient?

We all know the difference between conditions that are necessary, and ones that are sufficient, for some result to hold. However, it's not uncommon for us to lose track of which is which when it comes to certain econometric results. I'm going to focus on just one example of this, and in doing so I'll try and clear up a common misconception.

Specifically, we're going to take a look at the OLS estimator of the coefficient vector in a standard linear regression model, and focus on the conditions that are usually mentioned in the context of this estimator being (weakly) consistent.

Here's the model we'll be dealing with:

                y = Xβ + ε     ;     ε ~ [0 , σ²I] .

The model has k regressors, and we have a sample of size n.

I'll assume that rank(X) = k, so the OLS estimator of β is defined, and is

               b = (X 'X)⁻¹ X 'y.

I'm going to assume either that the columns of the X matrix are non-random; or, if they are random, that any correlation between these random regressors and the errors of the model disappears if n is large enough.

That's to say, if the regressors are non-random, then as n  → ∞, [X 'X / n] tends to a finite and non-singular matrix (say) Q. Note that this matrix will be unobservable, as we can't ever see what happens if n is infinitely large. So, what we're assuming is that the full rank assumption about X (and hence X 'X) continues to hold if the sample size is allowed to grow without limit.

If, on the other hand, the regressors are random, I'm going to assume that plim[X 'X / n] = Q. As before, Q is assumed to be a finite, non-singular, but unobservable matrix.

Now, if we think about the dimensions of the (X 'X) matrix, you might be a bit puzzled by this. After all, the (X 'X) matrix is (k x k). These dimensions don't change as n increases. However, if we focus on the X matrix itself, we see that as n grows, so does the number of rows in this matrix. So, the elements that are involved in the calculation of the (X 'X) matrix are indeed changing as n increases, and so this matrix changes in terms of the values of its elements.
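
To see this concretely, here's a minimal numerical sketch (in Python/NumPy, with k = 3 regressors drawn as i.i.d. standard normals purely for illustration). The (X 'X) matrix is (k x k) at every sample size, but its elements keep changing, and [X 'X / n] settles down to a fixed matrix - here, the identity - as n grows:

    import numpy as np

    rng = np.random.default_rng(42)
    k = 3                                  # an illustrative number of regressors

    for n in (100, 10_000, 1_000_000):
        X = rng.standard_normal((n, k))    # i.i.d. standard normal regressors (an assumption of this sketch)
        XtX = X.T @ X                      # (k x k), no matter how large n is
        print(n, XtX.shape)
        print(np.round(XtX / n, 3))        # settles down towards the identity matrix as n grows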

Now I'm going to make a second, crucial, assumption. I'm going to assume, again, that either the regressors are non-random - in which case they can't possibly be correlated with the errors in the regression model (no matter what the sample size is) - or else, if the regressors are random, then they eventually become uncorrelated with the errors if the sample size grows without limit.

That is, I'm going to assume that plim[X 'ε / n] = 0 (a null vector).

Let's now take a look at our OLS estimator, which we can write as:

               b = [X 'X / n]⁻¹[X 'y / n] = [X 'X / n]⁻¹[X '(Xβ + ε) / n]  ;

or,
               b = β + [X 'X / n]⁻¹[X 'ε / n].

Then, under the assumptions that I've made, as the sample size grows without limit,

               plim(b) = β + (Q⁻¹)(0) = β ,

as Q⁻¹ is finite, by assumption. That is, the OLS estimator of the coefficient vector is weakly consistent.

Incidentally, notice that if the regressors are non-random, then plim[X 'X / n] = limit[X 'X / n] = Q, so this result still holds.
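
If you like to see such results numerically, here's a small Monte Carlo sketch (again in Python, and just an illustration): I've picked an arbitrary β, an intercept plus two i.i.d. standard normal regressors, and errors generated independently of those regressors, so both of the assumptions above are satisfied. The OLS estimates should then home in on the true β as n grows:

    import numpy as np

    rng = np.random.default_rng(123)
    beta = np.array([1.0, 0.5, -2.0])            # an arbitrary "true" coefficient vector for the sketch

    for n in (50, 500, 5_000, 50_000):
        X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])  # intercept + two random regressors
        eps = rng.standard_normal(n)             # errors, independent of the regressors
        y = X @ beta + eps
        b = np.linalg.solve(X.T @ X, X.T @ y)    # OLS: b = (X'X)^(-1) X'y
        print(n, np.round(b, 3))                 # should approach beta as n grows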

Now, let's think about a simple, but rather special, regression model - one that has an intercept and just one other regressor - a linear time-trend. That is:

                yₜ = β₁ + β₂ t + εₜ   ;    εₜ ~ i.i.d. [0, σ²]   ;   t = 1, 2, ..., n.

For this model, the (1 , 1) and (2 , 2) elements of [X 'X] are n and [n (n + 1)(2n + 1) / 6], respectively; and the (1 , 2) and (2 , 1) elements are each [n (n + 1) / 2]. If we then consider plim[X 'X / n] (which equals the limit[X 'X / n] in this case), we see that three of the four elements become infinitely large as  n → ∞, and so Q is not a finite matrix. One of the assumptions above isn't satisfied.
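
A quick numerical check of these elements, and of the divergence, is straightforward (a minimal Python sketch; the sample sizes are arbitrary):

    import numpy as np

    for n in (10, 1_000, 100_000):               # arbitrary, increasing sample sizes
        t = np.arange(1.0, n + 1)                # the trend regressor: t = 1, 2, ..., n
        X = np.column_stack([np.ones(n), t])     # intercept column plus the time-trend
        XtX = X.T @ X
        # the closed-form elements quoted above
        assert np.isclose(XtX[0, 0], n)
        assert np.isclose(XtX[0, 1], n * (n + 1) / 2)
        assert np.isclose(XtX[1, 1], n * (n + 1) * (2 * n + 1) / 6)
        # the (1 , 1) element of [X'X / n] stays at 1; the other three grow without limit
        print(n, np.round(XtX / n, 1))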

Because the regressor, t, is non-random, it can't be correlated with the error term, no matter what the sample size is. Given this, and the zero mean for the errors, the OLS estimators of the two coefficients are definitely unbiased. Now, if I can show that the OLS estimators of the coefficients  are mean-square consistent then, by Chebyshev's Inequality, they will also be weakly consistent.
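
(For completeness: for each coefficient, Chebyshev's Inequality gives, for any δ > 0,

               Pr[ |bⱼ - βⱼ| ≥ δ ] ≤ Var(bⱼ) / δ²     ;   j = 1, 2 ,

because bⱼ is unbiased. So, if Var(bⱼ) → 0 as n → ∞, the probability on the left goes to zero, which is exactly weak consistency.)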

Given this estimator's unbiasedness, all I have to show, then, is that the covariance matrix of b = (b₁ , b₂)' tends to a null matrix as n tends to infinity. This covariance matrix has the form σ²[X 'X]⁻¹, as usual.
The determinant of [X 'X] is [n²(n + 1)(n - 1) / 12], and so the elements of the [X 'X]⁻¹ matrix are as follows:
  • (1 , 1) :  [2 (2n + 1)] / [n (n - 1)]
  • (2 , 2) :  12 / [n (n + 1)(n - 1)]
  • (1 , 2) = (2 , 1) :  -6 / [n (n - 1)] .
Clearly, as n → ∞, each of these elements tends to zero, and so the covariance matrix of b becomes a null matrix (as σ² is positive, but finite).
Accordingly, in this time-trend model, the OLS estimator of the coefficient vector is mean-square consistent, and hence it is also weakly consistent (i.e., plim(b) = β ).
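
To put some rough numbers on this, here's a minimal Python sketch (with σ² set to 1, purely for convenience). It inverts [X 'X] for the trend model, confirms the closed-form elements listed above, and shows every element of the covariance matrix of b heading towards zero:

    import numpy as np

    sigma2 = 1.0                                  # error variance; 1.0 is just a convenient choice

    for n in (10, 1_000, 100_000):
        t = np.arange(1.0, n + 1)                 # the trend regressor
        X = np.column_stack([np.ones(n), t])      # intercept column plus the time-trend
        cov_b = sigma2 * np.linalg.inv(X.T @ X)   # cov(b) = sigma^2 (X'X)^(-1)
        # agrees with the closed-form (1 , 1) and (2 , 2) elements listed above
        assert np.isclose(cov_b[0, 0], sigma2 * 2 * (2 * n + 1) / (n * (n - 1)))
        assert np.isclose(cov_b[1, 1], sigma2 * 12 / (n * (n + 1) * (n - 1)))
        print(n, cov_b)                           # every element heads towards zero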

So, what is going on here? The assumption that plim[X 'X / n] = Q, a finite and positive definite matrix, isn't satisfied, but the OLS estimator is still weakly consistent. Obviously, what we've shown is that this particular assumption is sufficient, but not necessary, for consistency to hold. You might ask yourself whether the other assumption, that plim[X 'ε / n] = 0, is necessary or sufficient.

The take-home message:
When looking at the assumptions that are made in establishing results in Econometrics (or elsewhere), be careful to determine which assumptions are sufficient, which ones are necessary, and which ones are both.


© 2012, David E. Giles

2 comments:

  1. Small typo (?) before the take-home message: "...assumption is sufficient, but _not_ necessary, for consistency to hold."

    1. Alfredo: Whoops! Thanks for spotting that - fixed now.
