I received the following email query a while back:
"It's my understanding that in the event that you have a large sample size (in my case, > 2million obs) many tests for functional form mis-specification will report statistically significant results purely on the basis that the sample size is large. In this situation, how can one reasonably test for misspecification?"
Well, to begin with, that's absolutely correct: if the sample size is very large, then almost any (point) null hypothesis will be rejected at conventional significance levels. For instance, see this earlier post of mine.
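To see the point numerically, here's a small simulated illustration (the data, the effect size of 0.01, and the sample sizes are all invented for the sketch, not taken from the reader's problem). The true mean is 0.01, a substantively negligible departure from the null of zero, yet the t-test's p-value collapses once n gets large:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# True mean is 0.01 -- a tiny, substantively negligible departure from H0: mu = 0.
for n in (100, 10_000, 2_000_000):
    x = rng.normal(loc=0.01, scale=1.0, size=n)
    t, p = stats.ttest_1samp(x, popmean=0.0)
    print(f"n = {n:>9,}: t = {t:7.2f}, p-value = {p:.4g}")
```

At n = 100 the departure is invisible; at n = 2,000,000 the standard error of the mean is about 0.0007, so even a 0.01 "effect" produces an enormous t-statistic and a p-value that is essentially zero.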
Shmueli (2012) also addresses this point from the p-value perspective.
But the question was, what can we do in this situation if we want to test for functional form mis-specification?
Shmueli offers some general suggestions that can be applied to this specific question:
- Present effect sizes.
- Report confidence intervals.
- Use (certain types of) charts.
To this I'd add: consider several alternative functional forms, and use ex post forecast performance and cross-validation to choose a preferred specification for your model.
You don't always have to use conventional hypothesis testing for this purpose.
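For concreteness, here's one way such a cross-validation comparison might look (the data-generating process, the fold count, and the candidate polynomial specifications are all made up for the sketch). Each candidate functional form is scored by its average out-of-sample mean squared error across k folds, and the form with the lowest score is preferred:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Simulated data whose true conditional mean is mildly nonlinear (quadratic).
x = rng.uniform(0, 2, size=n)
y = 1.0 + 0.5 * x + 0.5 * x**2 + rng.normal(scale=0.5, size=n)

def cv_mse(degree, k=5):
    """Average out-of-sample MSE of a degree-`degree` polynomial fit, k-fold CV."""
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coefs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coefs, x[test])
        errs.append(np.mean((y[test] - pred) ** 2))
    return np.mean(errs)

for d in (1, 2, 3):
    print(f"degree {d}: CV MSE = {cv_mse(d):.5f}")
```

Here the quadratic specification should beat the linear one on out-of-sample error, while the cubic buys essentially nothing extra. No hypothesis test, and no p-value, was needed to reach that conclusion.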
Shmueli, G., 2012. Too big to fail: Large samples and the p-value problem. Mimeo., Institute of Service Science, National Tsing Hua University, Taiwan.