Friday, May 13, 2011

Gripe of the Day

I have a pet peeve that I really must get off my chest. Yes, another one! Rest assured, there are plenty more where this one came from.

Logit and Probit models are the twin work-horses of a lot of microeconometric studies. They've been widely used for a very long time, and these days they are frequently used with what I'd call pretty large data-sets - ones where the sample size is in the thousands, or even hundreds of thousands. 

These models are invariably estimated by Maximum Likelihood (ML), and as we all know, ML estimators have great large-sample (asymptotic) properties. Specifically, they're weakly consistent, asymptotically efficient, and asymptotically normal - under specific conditions. These conditions are what we usually term the 'regularity conditions', and these are simply some mild conditions on the derivatives of the underlying density function for the random data in the model. The normal and logistic distributions satisfy these conditions, which were first introduced in the context of the formal properties of ML estimators by Dugué (1937) and Cramér (1946). So, what could go wrong?

Often, when applying ML estimation, the likelihood equations (the first-order conditions) that have to be solved to maximize the log-likelihood function are non-linear in the parameters, and the solution may not be unique. That's to say, there may be several local maxima and minima, and we need to make sure that we have found the solution (the ML estimates) that corresponds to the global maximum of the function. Basically, this is important to ensure that we get those nice asymptotic properties. In the case of the Logit and Probit models, everything is O.K. The likelihood equations have a unique solution, corresponding to a (global) maximum of the log-likelihood function itself. 
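In the Logit case this is easy to see, and even to verify numerically: the Hessian of the log-likelihood is -X'WX, where W is a diagonal matrix with entries p.(1 - p.), so it is negative definite whenever X has full column rank, and the log-likelihood is globally concave. Here's a minimal sketch of that fact (Python; the data, seed, and parameter values are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
beta = np.array([0.5, 1.0])                            # an arbitrary parameter point

# Logit Hessian at beta:  H = -X'WX, with W = diag(p_i * (1 - p_i))
p = 1.0 / (1.0 + np.exp(-X @ beta))
H = -(X * (p * (1 - p))[:, None]).T @ X

# All eigenvalues strictly negative => negative definite => global concavity
eigvals = np.linalg.eigvalsh(H)
print(eigvals.max())
```

The same check at any other parameter point gives the same answer, which is why any solution of the Logit likelihood equations is automatically the global maximum.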

We seem to be in the clear - right? Well, unfortunately we still have a problem. With any ML exercise we need to be sure that we are maximizing the correct likelihood. If not, then there may be some problems. I say "may", because sometimes things don't fall apart totally. An example of this arises with the standard models for 'count' data. There, the ML estimator based on the common Poisson specification, with the mean modelled as the exponential of a linear combination of regressors, is still a valid 'Quasi-ML' estimator even if the underlying Poisson assumption is wrong. As long as the specification of the mean, in terms of the regressors, is correct, this (Poisson-based) ML estimator of the parameters associated with the mean will still be weakly consistent.
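This quasi-ML result is easy to illustrate by simulation. In the sketch below (Python; all names and values are illustrative) the counts are deliberately generated from a negative binomial distribution, so the Poisson assumption is wrong, yet maximizing the Poisson log-likelihood still recovers the parameters of the (correctly specified) exponential mean:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(123)
n = 20000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([0.5, 0.8])
mu = np.exp(X @ beta_true)          # correctly specified conditional mean

# Overdispersed counts: negative binomial with the SAME mean, so the
# Poisson distributional assumption is deliberately false
r = 2.0                             # dispersion parameter
p = r / (r + mu)
y = rng.negative_binomial(r, p)

def neg_poisson_loglik(b):
    eta = X @ b
    # the log(y!) term is constant in b, so it can be dropped
    return -(y @ eta - np.exp(eta).sum())

res = minimize(neg_poisson_loglik, x0=np.zeros(2), method="BFGS")
print(res.x)  # close to (0.5, 0.8), despite the wrong distribution
```

With a mis-specified conditional mean, by contrast, this protection disappears.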

However, we're not always this lucky. What if we set up the model correctly, except that we made the wrong assumption about the variance of the underlying distribution? How would this affect the ML estimator for models like Logit and Probit?

Let's think back to something pretty basic. In the linear regression model, certain types of mis-specification have only mild implications for our inferences. For example, although heteroskedasticity renders the usual estimated covariance matrix for the OLS parameter estimator inconsistent, the parameter estimates themselves only lose efficiency - they are still unbiased and consistent. Similarly, although the omission of relevant regressors from the standard linear regression model generally biases the OLS parameter estimator, this bias vanishes if the included and excluded regressors are orthogonal (uncorrelated in the sample).
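The robustness of OLS in the linear case is worth seeing once. In this sketch (Python; seed and values illustrative) the errors are strongly heteroskedastic, yet the OLS point estimates land right on the true parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50000
x = rng.normal(size=n)

# Strongly heteroskedastic errors: the error s.d. depends on x
u = rng.normal(size=n) * np.exp(0.5 * x)
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_hat)  # still close to (1, 2): OLS remains consistent
```

Only the usual standard errors are wrong here; the coefficient estimates themselves are fine. That is precisely what fails in the non-linear models discussed next.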

However, these results change if the model is non-linear in the parameters - a fact that is well known (e.g., see the various references on my professional website - here), but essentially ignored in most empirical studies. More specifically, these results change (for the worse) in the context of such non-linear models as Logit, Probit, Tobit, etc. For example, in the presence of heteroskedasticity, the MLE's of the parameters of these models are inconsistent.

So, worrying about reporting 'robust' standard errors is of second-order importance. There are much more serious matters to be concerned about! If the parameter estimates themselves are inconsistent, it doesn't matter how big your sample is, or what you do to 'patch up' the (meaningless) standard errors, you're in over your head! Similarly, the omission of relevant covariates (even if they are uncorrelated with the included ones) also renders the MLE's of the parameters themselves inconsistent in Logit and Probit models.
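This inconsistency is easy to demonstrate by simulation. In the sketch below (Python; all names, the seed, and parameter values are illustrative) the latent error has multiplicative heteroskedasticity driven by a variable that doesn't even enter the mean, and the standard Probit MLE of the slope ends up nowhere near the true value of 1, even with 50,000 observations:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(11)
n = 50000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Latent model: y* = x + sigma*eps, with multiplicative heteroskedasticity
# driven by z, which is independent of x (error s.d. is 1 or 3)
z = rng.integers(0, 2, size=n)
sigma = np.where(z == 1, 3.0, 1.0)
y = (x + sigma * rng.normal(size=n) > 0).astype(float)

def neg_probit_loglik(b):
    eta = X @ b
    return -norm.logcdf(np.where(y == 1, eta, -eta)).sum()

res = minimize(neg_probit_loglik, np.zeros(2), method="BFGS")
print(res.x[1])  # settles well below the true slope of 1, however big n is
```

Making the sample larger doesn't help: the estimator converges to a pseudo-true value, not to the true parameter. 'Robust' standard errors are then just precise statements about the wrong quantity.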
I don't know how many times I've asked someone in a seminar -
"Did you test for heteroskedasticity in your Logit model?"
only to get the flippant reply -
"It's O.K. - I've reported het.-consistent standard errors, and I have x-thousand observations."
No - it's not O.K. At least not on the basis of this response.

(I'd have a little more sympathy if the speaker said that there was very little numerical difference between the regular standard errors and the het.-consistent standard errors. However, that's not a test - it's just a rough rule-of-thumb that things may not be too bad.)

Lagrange Multiplier (LM) tests for this and other types of mis-specification in Logit, Probit, and related models have long been available (e.g., see Davidson and MacKinnon, 1984, and other references here), but they seem to be largely ignored in an awful lot of empirical research. What's wrong with these applied econometricians?
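These LM tests only require estimating the model under the null (i.e., the standard, no-het. Probit), which is what makes them so convenient. Here's a rough sketch of one such test in its OPG (outer-product-of-the-gradient) form - note that Davidson and MacKinnon themselves recommend artificial-regression variants with better finite-sample properties, so treat this as illustrative only; the data, seed, and variable names are all made up:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, chi2

rng = np.random.default_rng(5)
n = 20000
x = rng.normal(size=n)
z = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Probit data with multiplicative heteroskedasticity (s.d. = exp(0.5 z)),
# so the no-het. null hypothesis is false here
y = (0.3 + x + np.exp(0.5 * z) * rng.normal(size=n) > 0).astype(float)

# Step 1: estimate the ordinary (no-het.) Probit by ML
def nll(b):
    eta = X @ b
    return -norm.logcdf(np.where(y == 1, eta, -eta)).sum()
bhat = minimize(nll, np.zeros(2), method="BFGS").x

# Step 2: score contributions at the restricted estimates; the extra
# column is the derivative in the het. direction, evaluated at gamma = 0
eta = X @ bhat
gres = (y - norm.cdf(eta)) * norm.pdf(eta) / (norm.cdf(eta) * norm.cdf(-eta))
G = gres[:, None] * np.column_stack([X, -eta * z])

# Step 3: OPG form of the LM statistic; chi-squared(1) under the null
iota = np.ones(n)
lm = iota @ G @ np.linalg.solve(G.T @ G, G.T @ iota)
print(lm, chi2.ppf(0.95, 1))  # LM far exceeds the 3.84 critical value
```

With homoskedastic data the statistic would bounce around its chi-squared(1) null distribution instead. A dozen lines of code, against the cost of inconsistent estimates - there really is no excuse.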

It may be that there's a lack of familiarity with the theoretical literature - but heck, this is not hot-off-the-press news! Maybe there isn't easy access to the code needed to implement these specification tests? Let's check that out.

First, the SHAZAM econometrics package. It doesn't have a heteroskedasticity test for Logit/Probit built in, but they supply an add-on 'procedure' for the standard LM tests at the following URL: . So, if you're a SHAZAM user, you have no excuses when it comes to testing! However, if you want to estimate a Logit or Probit model with heteroskedasticity, then you need to write your own code.

Next, STATA. I'm not a STATA user, but here's what I understand (hat tip to Mark and to Lindsay, readers of a previous version of this). STATA does facilitate estimating a Probit (not a Logit) model with multiplicative heteroskedasticity - at least since Version 6 - via the hetprob command. And along with that estimation there's a likelihood ratio test. The test is based, as always, on the restricted (no het.) and unrestricted (het.) versions of the model. As far as I can tell, the simpler LM tests (which require the estimation of only the standard, no-het. Logit or Probit models) don't seem to be there.
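For readers without access to hetprob, the mechanics of this kind of estimation and of the accompanying LR test are simple enough to sketch directly. The following (Python; none of this is STATA's own code, and all names, the seed, and values are illustrative) fits both the restricted and the Harvey-style multiplicative-het. Probit by ML and forms the LR statistic:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, chi2

rng = np.random.default_rng(99)
n = 20000
x = rng.normal(size=n)
z = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Harvey-style multiplicative heteroskedasticity: error s.d. = exp(gamma * z)
beta_true, gamma_true = np.array([0.2, 1.0]), 0.6
y = (X @ beta_true + np.exp(gamma_true * z) * rng.normal(size=n) > 0).astype(float)

def nll(params, het=True):
    b = params[:2]
    g = params[2] if het else 0.0
    eta = (X @ b) * np.exp(-g * z)        # index scaled by the het. term
    return -norm.logcdf(np.where(y == 1, eta, -eta)).sum()

res_r = minimize(lambda p: nll(p, het=False), np.zeros(2), method="BFGS")
res_u = minimize(nll, np.r_[res_r.x, 0.0], method="BFGS")

# LR test of H0: gamma = 0; chi-squared(1) under the null
lr = 2.0 * (res_r.fun - res_u.fun)
print(res_u.x)                 # roughly recovers (0.2, 1.0, 0.6)
print(lr, chi2.ppf(0.95, 1))   # far above the 3.84 critical value here
```

Unlike the plain Probit, the het.-aware MLE is consistent here, because the variance is now modelled correctly - which, of course, requires you to be able to specify its form.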

Finally, EViews. This package doesn't include estimation routines for Logit or Probit models with heteroskedasticity. Neither does it include heteroskedasticity tests for Logit/Probit. However, for the testing you can download, from my website, EViews workfiles and associated program files that perform these tests, and for convenience I've also put these files in the Code page that goes with this blog. Make sure that you read the 'READ-ME' object in each workfile. Also, note that in addition to the heteroskedasticity tests, I also include LM tests for testing the validity of the underlying distribution (Normal in the case of Probit, and Logistic in the case of Logit).

So where does that leave us? Frankly, it would be nice if the major econometrics packages included the various well-known mis-specification tests for Logit, Probit, and related models - not just tests for heteroskedasticity. Why not? They're all simple enough to implement, and they're darned important! While they're at it, it would be nice to see standard routines incorporated to allow us to perform ML estimation of Logit and Probit models with heteroskedasticity.

And then, of course, it's up to us! My gripe is not just with the packages. It's even more to do with all of those people doing applied econometric work who just don't seem to acknowledge that a lot of their results are suspect because they've ignored some basic issues. We can do better than this! 

p.s. (7 October, 2011) The EViews program code referred to above has been corrected today.
HT to eagle-eyed Sven Steinkamp at Universität Osnabrück for bringing the error to my attention.

Note:  The links to the following references will be helpful only if your computer's IP address gives you access to the electronic versions of the publications in question. That's why a written References section is provided.


Cramér, H. (1946). Mathematical Methods of Statistics, Princeton University Press, Princeton, N.J.

Davidson, R. and J. G. MacKinnon (1984). Convenient specification tests for logit and probit models. Journal of Econometrics 25, 241-262.

Dugué, D. (1937). Application des propriétés de la limite au sens du calcul des probabilités à l’étude de diverses questions d’estimation. Journal de l’École Polytechnique, 3, 305-372.

© 2011, David E. Giles


  1. Can you clarify what "Hmmm!!!!" means w.r.t. Stata? The hetprob command looks like a standard (Harvey, Econometrica 1976) way of estimating a probit with multiplicative heteroskedasticity, and comes with an LR heteroskedasticity test.


  2. The probit command in Stata does, incorrectly (as you note), allow for robust standard errors, for reasons that are not at all clear to me. I see lots of papers where the authors use this feature and it also drives me bonkers.

    As noted by Mark, the -hetprob- command uses the correct procedure to estimate a probit model with Harvey's multiplicative heteroskedasticity. That is, it properly corrects both sigma and Pr(y=1), provided you can specify the form of the heteroskedasticity (which, I think, most applied folk forget is even a possibility, since they're so used to just slapping robust standard errors onto their models).

    The output from the -hetprob- contains the results of the LR test of heteroskedasticity, which tests the full model with het against the full model without. I do not believe Davidson and MacKinnon's LM1 or LM2 tests have been implemented into an embedded command (if they have, I have wasted lots of coding time in the past) in Stata though you can create them manually.

  3. Mark: thanks for pointing that out. This was the feedback I was asking for in the original version of this post. I've amended the post, and I hope it now provides a more accurate and balanced assessment of the situation.
    Much appreciated.

  4. Lindsay: Thanks very much for your comments, which also clarified some things for me about STATA. I hope that the amended version of the post is more accurate. Thanks for reading!

  5. Dave,

    Stata's hetprob goes back to version 6, and before that it was a user-written add-in, so it's been around quite a while.

    Do you have any thoughts on the LR test for heteroskedasticity as parameterized by hetprob vs. the simpler LM tests? My intuition is that it's the usual tradeoff (you're more likely to find multiplicative heteroskedasticity if it's there, but less likely to find other kinds) but it would be helpful to know whether this is likely to matter much in most applications.


  6. Mark: Thanks again for the info. Much appreciated.

    Regarding LRT vs. LM - my intuition agrees with yours. That is, the LRT will probably have the better power against multiplicative het. than the LM test, at least in moderate-sized samples; but probably lower power against other forms of het.

    Of course both tests are 'consistent', so their powers will both tend to 100% as 'n' goes to infinity. But only against the alternative hypothesis that they're constructed for. So, it's a question of robustness. How well do the tests perform, relative to each other, if there is heteroskedasticity different from the form being tested against? And in particular, do these powers differ by much in the case of the large samples typically used with Logit and Probit?

    I don't have any information on that, but I am sure as heck going to follow it up. It's an important question that you're raising here.

    Thanks again, Mark.


  7. I've been through William H. Greene, Goldberger, Angrist & Pischke, and Kennedy, and a course in econometrics and mathematical statistics, but I'm not sure what causes the inconsistency in MLE estimates in the face of heteroskedasticity. (Does one of your references show an actual proof, some mathematical detail illustrating this? Or is there a more intuitive explanation I'm completely overlooking?) Understanding the issues regarding the standard errors in OLS seems much more intuitive.

  8. Matt: Thanks for the good comment. First, for a proof of this result, you could look at Adonis Yatchew and Zvi Griliches' paper, "Specification Error in Probit Models", Review of Economics & Statistics, 1985, 67, pp. 134-139. It's short and very readable, and includes other refs.

    As a general comment - whenever a model is mis-specified, you lose the guarantee that MLE will be consistent. It MAY still be consistent, at least for some of the parameters, but it may not. A good example is with count data. There, the Poisson estimator is still consistent under certain types of mis-specification, as long as the latter relates only to the conditional mean of the data.

    In the case of the linear regression model, with Normal errors, the MLE for the (conditional mean) coefficients is the OLS estimator. It's still consistent even if the model is mis-specified in that there is heteroskedasticity. This is no longer the case if the model is non-linear in the parameters. On the other hand, even in the linear case, this type of mis-specification affects the MLE of the covariance matrix of the OLS estimator of the coefficients (whew!). The same is true with autocorrelation. (Think Newey-West.)

    I'll post something more on this, especially regarding intuition.

    Thanks again for raising this.