Econometrics Beat: Dave Giles' Blog (Dave Giles), comments feed, retrieved 2014-12-21.

Dave Giles (2014-12-20):
Yes it would. (I don't use Stata.)
louisph (2014-12-19):
Dear Prof Giles,
Would this also apply to the cluster-robust variance matrix estimator? [I think it is vce(cluster) in Stata.] Or is there another modification that would be required to do an F-test?
Thanks,
Louis

Dave Giles (2014-12-18):
In the example you give, differencing Z would be optional. This is because a first-differenced I(0) series is still stationary, although not I(0).

Ozan Ekin Kurt (2014-12-18):
Thank you for this great example, Mr. Giles. I have a question about the case in which "the data are all stationary". If that is the result, do we proceed by applying OLS to the levels or to the differences of the variables? Or do we difference only the variables that are I(1)? As a concrete example: say we have Y = f(X, Z) and we are sure that Y and X are I(1) but Z is I(0). If the bounds test rejects cointegration and we are supposed to proceed with OLS, do we take the differences of Y and X only? Or of Z as well?
Thank you in advance; this is a very successful blog.
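The point about differencing Z being optional can be illustrated with a small simulation (my own illustrative sketch, not from the blog): first-differencing a series that is already stationary leaves it stationary, but over-differences it, which shows up as a lag-1 autocorrelation near -0.5.

```python
import numpy as np

rng = np.random.default_rng(42)

# An I(0) series: white noise (already stationary).
z = rng.standard_normal(5000)

# First-differencing gives d_t = z_t - z_{t-1}: still stationary,
# but now an over-differenced MA(1) with a unit root in its MA part.
d = np.diff(z)

# For white noise, Var(d) = 2 * Var(z), so the ratio is near 2 ...
print(round(d.var() / z.var(), 2))

# ... and the lag-1 autocorrelation is near -0.5, the classic
# signature of over-differencing.
r1 = np.corrcoef(d[:-1], d[1:])[0, 1]
print(round(r1, 2))
```

So differencing Z does no harm to stationarity; it just changes the dynamics of the transformed series.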
Anonymous (2014-12-16):
Thanks sir!

Dave Giles (2014-12-15):
Because if they are cointegrated then we DON'T have a spurious regression, and OLS is "super-consistent".
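Super-consistency is easy to see in a toy simulation (my own construction; the coefficient 2 and the sample size are arbitrary illustrative choices): when y and x are cointegrated, a levels OLS regression pins down the long-run coefficient very tightly even though both series are I(1).

```python
import numpy as np

rng = np.random.default_rng(123)
n = 2000

# x is a random walk, so x ~ I(1).
x = np.cumsum(rng.standard_normal(n))

# y = 2x + stationary error, so y ~ I(1) but (y - 2x) ~ I(0):
# y and x are cointegrated with cointegrating vector (1, -2).
y = 2.0 * x + rng.standard_normal(n)

# Levels OLS. Despite the unit roots, the slope estimate converges
# to 2 at rate n (super-consistency), not the usual sqrt(n).
slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 3))
```

With a spurious (non-cointegrated) pair, by contrast, the levels slope would not settle down at any fixed value as n grows.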
Anonymous (2014-12-15):
Hello Sir,
Great post!! Thank you very much! I have only one question, about this sentence: "We know that all of the series are integrated of the same order, and they are cointegrated. In this case, we can estimate two types of models: (i) An OLS regression model using the levels of the data. This will provide the long-run equilibrating relationship between the variables." Why may I use the OLS estimator with a set of time-series variables that are I(1) and cointegrated?
常青树 (2014-12-12):
Finally, VAR forecasting. Great. It is a pain to solve a VAR model for forecasting.

Anonymous (2014-12-12):
Although the sarcasm of Anonymous's June 10 comment isn't warranted, I have to agree with the general sentiment: your simulation isn't very fair, since it is prima facie mis-specified for all models except the probit. There is a data-generating model that yields the linear probability model (or, more exactly, a Bernoulli GLM with a linear link). Maybe you should be comparing to that. See http://stats.stackexchange.com/questions/81789/if-%CF%B5-is-uniformly-distributed-then-a-linear-probability-model-is-appropriate-c

Anonymous (2014-12-11):
My compliments to this great blog! A question: the cited paper of Prof. Lütkepohl seems to relax the usual "rules" about VAR vs. VECM specification for non-stationary series, depending on whether they are cointegrated or not. (Refer, e.g., to the blog posts http://davegiles.blogspot.sk/2012/06/integrated-cointegrated-data.html and http://davegiles.blogspot.sk/2012/05/more-about-spurious-regressions.html.) By contrast, Prof. Lütkepohl's paper discusses levels VAR models for a mix of I(0) and I(1) series, even though "cointegrating relations may be present".
I wonder: what exactly is the worst mistake we can make by skipping the VAR vs. VECM issue and deliberately using a levels VAR?
Thank you.
Nadja
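On the comment above about a data-generating process that yields the linear probability model: if the latent error is uniform, P(y=1|x) is literally linear in x, so levels OLS is correctly specified and estimates it consistently. A minimal sketch (the coefficients 0.2 and 0.5 are arbitrary illustrative values, chosen so the probability stays inside [0, 1]):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000

# Choose P(y=1|x) = 0.2 + 0.5*x, which lies in [0, 1] for x in [0, 1].
x = rng.uniform(0.0, 1.0, n)
p = 0.2 + 0.5 * x

# Latent formulation: y = 1{u < 0.2 + 0.5*x} with u ~ Uniform(0, 1),
# i.e. a Bernoulli model with the identity link.
u = rng.uniform(0.0, 1.0, n)
y = (u < p).astype(float)

# OLS of y on x (the linear probability model) recovers the true
# intercept and slope, because the LPM is correctly specified here.
slope, intercept = np.polyfit(x, y, 1)
print(round(intercept, 2), round(slope, 2))
```

Under this DGP it would be the probit, not the LPM, that is mis-specified, which is the commenter's point about fairness.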
Dave Giles (2014-12-09):
Yes, this can certainly happen.

Anonymous (2014-12-09):
Is it possible for the sign of the regression coefficient to change from "+" in OLS to "-" in 2SLS? Or does it imply that there is an issue with the specification?

Dave Giles (2014-12-08):
At the 10% significance level, the response is not significantly different from zero.
Anonymous (2014-12-08):
Dear prof.,
In the graph above, the effect is statistically nil because the lower band is always below 0. Is that correct? Please answer me; I'm having trouble with the interpretation of graphs like that!
Cris.

Dave Giles (2014-12-06):
Thanks!

Daumantas (2014-12-06):
In a related article (Bårdsen, G. & Lütkepohl, H., 2011, "Forecasting levels of log variables in vector autoregressions", International Journal of Forecasting, 27, 1108-1115), the authors claim that "despite its theoretical advantages, the optimal forecast is shown to be inferior to the naive forecast if specification and estimation uncertainty are taken into account. Hence, in practice, using the exponential of the log forecast is preferable to using the optimal forecast." An interesting result! (Log-normality is assumed and the smearing estimate is not considered.)

Dimitriy (2014-12-05):
My comments seem to be getting eaten on Chrome, so I apologize if this turns out to be a duplicate.
I have two questions. It is my understanding that these sorts of transformations improve the mean prediction, but do not ensure that predictions for individual cases are particularly good. If so, why not use a GLM or het-robust Poisson regression?
If you assume group-wise heteroskedasticity, you can smear by group, which relaxes the identically-distributed-errors assumption to a degree.
What do you think of this practice?

Dave Giles (2014-12-05):
Thanks for the comment. Keep in mind that this is just illustrative, not a comprehensive study.

Daumantas (2014-12-05):
Great post! It seems that the subtleties of back-transformation may easily go unnoticed; at least I did not think about them the first time I had to back-transform. Fortunately, there are posts like this that bring the small tricky parts into the daylight. One comment: even though your examples are just examples, I would still prefer somewhat longer "out-of-sample" parts.
Eight or ten observations seem quite few to draw conclusions from, especially when the performances of more than two alternatives are to be compared and ranked.

Anonymous (2014-12-05):
I have the same question, because my dependent variable is the GDP growth rate.
I will be thankful for your answer,
Stephanie

Anonymous (2014-12-04):
Great, very interesting, and thanks for the code!!

Dave Giles (2014-12-04):
Sorry - I'm not a MATLAB user.
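For readers following the back-transformation thread: a minimal sketch of Duan's smearing correction when forecasting the level of y from a model fitted in logs. The data are simulated purely for illustration; with N(0, sigma^2) errors, the smearing factor estimates exp(sigma^2 / 2), which is exactly the naive back-transformation's downward bias.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulate log(y) = 1 + 2x + e, with e ~ N(0, 0.5^2).
x = rng.uniform(0.0, 1.0, n)
log_y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)

# Fit the log-linear model by OLS.
slope, intercept = np.polyfit(x, log_y, 1)
resid = log_y - (intercept + slope * x)

# Naive back-transformation exp(fit) underestimates E[y | x] because
# E[exp(e)] > 1 (Jensen's inequality). Duan's smearing factor is the
# sample mean of the exponentiated residuals.
smear = np.exp(resid).mean()   # here it estimates exp(0.125), about 1.13

x_new = 0.5
naive = np.exp(intercept + slope * x_new)
smeared = smear * naive        # bias-corrected level forecast
print(round(smear, 3))
```

Smearing by group, as suggested above, would simply compute this factor separately within each error-variance group.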
keyu Zhang (2014-12-04):
Hi, Dr Giles,
Sorry to bother you; I'm a big fan from China, and a PhD candidate. I have a question about the VEC model. In MATLAB, the error-correction term is treated as an exogenous variable in the VAR system; however, in EViews or gretl, the ECT is imposed as a restriction on the VAR model, and this gives me totally different answers. Please help me out here.
Cheers~~

Phil Koop (2014-12-03):
For an even more egregious example of treating a mediating variable as a confounder, see http://www.runnersworld.com/health/will-running-too-much-kill-you. Researchers claimed to demonstrate that runners who ran less than 20 miles a week had lower mortality than those who ran more. To reach this conclusion, it was necessary to condition on BMI, blood pressure, and cholesterol. But running influences mortality in large part through its effects on BMI, blood pressure, and cholesterol.
I was disappointed by "researchers know this. There's just nothing they can do about it." How about not conditioning?
The root of the problem, in my view, is the combination of a desire to avoid imposing unnecessary assumptions, which is understandable, with a belief that conditioning is a purely neutral activity that imposes no assumptions, which is heinous. There are many things in life which it is reasonable to want but not to expect to have (hey, I'd like a pony!), and assumption-free statistical inference is one of them. Yes: every time you choose not to condition on a variate, you are implicitly assuming that the variate is not a confounder. But it is equally true that every time you choose to condition on a variate, you are assuming that it IS a confounder, or else unrelated.
Much reported research would be improved if authors had a clearer grasp of this fact. For that matter, I think most papers would be improved if you had to explicitly draw one of those little causal diagrams of your assumptions, along with your p-values.
In that connection, I recall that Judea Pearl has a wonderful diagram of a "Simpson's paradox machine": conditioning successively on {X1}, {X1,X2}, {X1,X2,X3}, ... reverses the conclusion each time. There is nothing wrong with testing more than one causal model, of course. If your conclusions stand up, then you can say "our results are robust to ...", etc., and if they don't, that indicates a very interesting direction for future research. But as the paradox machine makes clear, it is the combination of conditioners that matters. And if you have many potential confounders about which you are uncertain, so that this combination grows uncomfortably large? Why, then, that is a fair indication of the weight to put on your results.

Dave Giles (2014-12-02):
1. Yes. You don't use a VECM if there's no cointegration.
2. I don't understand your second question.
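Phil Koop's point about conditioning reversing conclusions can be made concrete with the classic kidney-stone treatment data (Charig et al., 1986), a standard Simpson's-paradox example: treatment A has the higher success rate within each stone-size group, yet the lower rate overall.

```python
# (successes, trials) for treatments A and B, by stone size.
small = {"A": (81, 87),   "B": (234, 270)}
large = {"A": (192, 263), "B": (55, 80)}

def rate(successes, trials):
    return successes / trials

for group in (small, large):
    # Within each stone-size group, A beats B ...
    assert rate(*group["A"]) > rate(*group["B"])

# ... but aggregating reverses the ranking, because treatment A was
# given mostly to the harder (large-stone) cases: stone size is a
# confounder here, and conditioning on it matters.
tot_a = (small["A"][0] + large["A"][0], small["A"][1] + large["A"][1])
tot_b = (small["B"][0] + large["B"][0], small["B"][1] + large["B"][1])
print(round(rate(*tot_a), 2), round(rate(*tot_b), 2))  # prints 0.78 0.83
```

The running example above is the mirror image: there, BMI and blood pressure are mediators, not confounders, so conditioning on them creates the distortion rather than removing it.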