Teachers frequently use analogies when explaining new concepts. In fact, most people do. A good analogy can be quite eye-opening.
The other day my wife was in the room while I was on the 'phone explaining to someone why we often like to apply BOTH the ADF test and the KPSS test when we're trying to ascertain whether a particular time-series is stationary or non-stationary. (More specifically, whether it is I(0) or I(1).) The conversation was, not surprisingly, relatively technical in nature.
After the call was over, it occurred to me that my wife (who is an artist, and not an econometrician) might ask me what the heck all that gobbledygook was about. As it happened, she didn't - she had more important things on her mind, no doubt. But it forced me to think about a useful analogy that one might use in this particular instance.
I'm not suggesting that what I came up with is the best possible analogy, but for what it's worth I'll share it with you.
First, though, here's a brief summary of the technical issue(s).
- We want to test if our time-series is I(1) or I(0).
- When we apply the ADF test the null hypothesis is that the series is I(1), while the alternative hypothesis is that it is I(0). On the other hand, the KPSS test is based on a null hypothesis that the series is I(0), and the alternative hypothesis is that it is I(1).
- So, the null and alternative hypotheses are reversed when we move from one test to the other.
- We also know that both of these tests can lack power in many situations - that is, they can fail to reject a false null hypothesis. They're not all that reliable.
- If we wrongly conclude that the series is I(0), when in fact it is I(1), this is pretty serious. We don't want to end up working with a non-stationary time-series unwittingly. For instance, we may end up fitting a "spurious regression".
- On the other hand, if we conclude that the series is I(1), when in fact it is I(0), this may not be quite so serious. For instance, if we were to (unnecessarily) difference the I(0) series it would still be stationary - although over-differenced, and not I(0). That's not so bad in itself, but it may mean that we don't go on to test for possible cointegration between this series and any others that we may be working with.
- So, there's a strong incentive to come to the correct conclusion when we test the data. For that reason, a "second opinion" might be a good idea. If both the ADF and KPSS tests lead us to the same conclusion, we might take some comfort from that.
Now, here's my analogy. Suppose we have a dimly lit room with two windows, and in that room there are some snakes. They may be completely harmless, or they may be deadly. Before entering the room, we'd like to try to figure out what sort they are!
I take a flashlight, shine it in one window, and try to decide what I'm facing. I could go ahead on the basis of what I see through the one window using one flashlight and take a chance. Or, I could go around to the other window, perhaps use a different flashlight, and see what I can conclude from that different perspective.
If I come to the same conclusion about the nature of the snakes, whichever window and flashlight I use, then I may feel much better informed and prepared when I enter the room than if I come to different conclusions from the two viewings.
Wouldn't you want to look through both windows before proceeding?