Friday, June 1, 2012

Another Gripe About the Linear Probability Model

NOTE: This post was revised significantly on 15 February 2019, as a result of correcting an error in my original EViews code. The code file and the EViews workfile that are available elsewhere on separate pages of this blog were also revised. I would like to thank Frederico Belotti for drawing my attention to the earlier coding error.

Oh dear, here we go again. Hard on the heels of this post, as well as an earlier one here, I'm moved to share even more misgivings about the Linear Probability Model (LPM). That's just a fancy name for a linear regression model in which the dependent variable is a binary dummy variable, and the coefficients are estimated by OLS.
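Just so we're all clear about what's being estimated, here's a minimal EViews sketch (it is not the code file mentioned above) that fits an LPM by OLS and a logit model to the same binary dependent variable. The workfile and the series names y, x1, and x2 are purely illustrative.

' A minimal sketch: assumes an open workfile containing a 0/1 dummy
' series y and two illustrative regressors, x1 and x2

' The LPM is just OLS with the binary y as the dependent variable
equation eq_lpm.ls y c x1 x2

' A logit alternative, for comparison (d=l selects the logistic c.d.f.)
equation eq_logit.binary(d=l) y c x1 x2

' Fitted probabilities: the LPM's can stray outside [0,1]; the logit's cannot
eq_lpm.fit p_lpm
eq_logit.fit p_logit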
So you're still thinking of using a Linear Probability Model (LPM) - also known in the business as good old OLS - to estimate a binary dependent variable model?
Well, I'm stunned!
Yes, yes, I've heard all of the "justifications" (excuses) for using the LPM, as opposed to using a Logit or Probit model. Here are a few of them: