During a recent conversation with Bob Reed (U. Canterbury) I recalled an interesting experience that I had at the American Statistical Association Meeting in Houston, in 1980. I was sitting in a session listening to an author presenting a paper about the bias and MSE of certain simultaneous equations estimators. The results were based on a Monte Carlo experiment. However, something just didn't seem right.
I looked at the guy sitting next to me - I didn't know him, but he was also looking puzzled. Then, at the same time, we both said to each other, "But the first two moments of that estimator don't exist!" The next thing out of our mouths was, "Who's going to tell him?"
The guy next to me turned out to be Tom Fomby, and I believe he was the one who politely explained to the speaker that his results were nonsensical.
If (the sampling distribution of) an estimator doesn't have a well-defined mean, then it's nonsensical to talk about that estimator's bias. Equally, if it doesn't have a well-defined variance, then it makes no sense to talk about its MSE. In other words, the Monte Carlo simulation results were trying to measure something that didn't exist!
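To see what "measuring something that doesn't exist" looks like in practice, here's a small illustrative sketch (not the speaker's actual estimator). The ratio of two independent standard normal variates is standard Cauchy, a distribution with no mean or variance, loosely analogous to a just-identified IV-type estimator whose moments fail to exist. For a well-behaved estimator, the Monte Carlo sample means settle down as the sample size grows; for the Cauchy-type quantity, they never do:

```python
import numpy as np

def spread_of_sample_means(draw, n, reps=1000, seed=0):
    """Interquartile range, across `reps` Monte Carlo replications,
    of the sample mean of `n` draws from `draw`."""
    rng = np.random.default_rng(seed)
    means = np.array([draw(rng, n).mean() for _ in range(reps)])
    q75, q25 = np.percentile(means, [75, 25])
    return q75 - q25

# A distribution with all moments (standard normal)...
normal = lambda rng, n: rng.standard_normal(n)
# ...versus a ratio of independent standard normals (standard Cauchy):
# no mean, no variance, so "bias" and "MSE" are undefined.
cauchy = lambda rng, n: rng.standard_normal(n) / rng.standard_normal(n)

for n in (100, 10_000):
    print(f"n={n:>6}: normal IQR={spread_of_sample_means(normal, n):.4f}, "
          f"Cauchy IQR={spread_of_sample_means(cauchy, n):.4f}")
```

The spread of the normal case shrinks like 1/√n, so the simulated "bias" converges to the true (zero) bias. The Cauchy case's spread stays roughly constant no matter how large n gets: the simulation happily spits out numbers, but they estimate nothing.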
So, what was going on here?