Like a lot of others, I follow
Andrew Gelman's blog with great interest, and today I was especially pleased to see
this piece relating to a recent study on the extent to which researchers do or do not interpret confidence intervals correctly.
If you've ever taught an introductory course on statistical inference (from a frequentist, rather than Bayesian, perspective), then I don't need to tell you how difficult it can be for students to really understand what a confidence interval is, and (perhaps more importantly) what it isn't!
It's not only students who have this problem. Statisticians acting as "expert witnesses" in court cases have no end of trouble getting judges to understand the correct interpretation of a confidence interval. And I'm sure we've all seen or heard empirical researchers misinterpret confidence intervals! For a specific example of the latter, involving a subsequent Nobel laureate, see my old post
here!
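To make the frequentist interpretation concrete, here is a minimal simulation sketch of my own (not from the post or the study): the "95%" in a 95% confidence interval is a property of the *procedure*, not of any single interval. If we repeat the sampling many times and construct an interval each time, roughly 95% of those intervals will cover the true parameter.

```python
# Illustration (my own, not the authors'): empirical coverage of a
# nominal 95% confidence interval for a normal mean.
import numpy as np

rng = np.random.default_rng(42)
true_mean, sigma, n, reps = 5.0, 2.0, 30, 10_000

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, sigma, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    # 1.96 is the approximate 97.5th percentile of the standard normal;
    # with n = 30, a t critical value would give a slightly wider interval.
    lo, hi = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    covered += (lo <= true_mean <= hi)

print(f"Empirical coverage: {covered / reps:.3f}")
```

Note what this does *not* say: once a particular interval has been computed from a particular sample, the true mean either is or isn't inside it. The 95% attaches to the long-run behaviour of the construction, which is exactly the distinction the survey respondents struggled with.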
The study that's mentioned by Andrew today was conducted by four psychologists (
Hoekstra et al., 2014) and involved a survey of students and researchers in psychology at three European universities. The participants included 442 bachelor's students, 34 master's students, and 120 researchers (Ph.D. students or faculty members).
Yes, the participants in this survey are psychologists, but we won't hold that against them, and my hunch is that if we changed "psychologist" to "economist" the results wouldn't alter that much!
Before summarizing the findings of this study, let's see what the authors have to say about the correct interpretation of a confidence interval (CI) constructed from a particular sample of data: