Comments on Econometrics Beat: Dave Giles' Blog: "Monte Carlo Simulation Basics, III: Regression Model Estimators"

Stéphane (7 December 2016):
Could you provide instances where simulations led to formal proofs regarding the properties of estimators or test statistics?

Dave Giles (16 December 2016):
In the 1970's this helped with establishing formal proofs of the small-sample properties of various simultaneous equations estimators.

Chris (11 March 2017):
Dear Prof. Giles,

Thank you for these posts about using Monte Carlo simulation to illustrate properties of the sampling distribution of OLS parameters. I have a question about the necessity of keeping the observed values of the regressors, x2 and x3, non-random. How necessary is it to keep these regressors "fixed in repeated samples"?

Dave Giles (11 March 2017):
Absolutely essential for this exercise. Unless you want to see the bias introduced into the OLS estimator when you have random regressors.

Chris (11 March 2017):
Thanks for the reply. I just want to clarify one additional question I have. In econometrics, the assumption of non-stochastic regressors is often relaxed, so the vector of regressors may be viewed as either all stochastic or a mixture of stochastic and non-stochastic regressors. I performed a simple simulation with the following data generating process:

y = \beta_0 + \beta_1 x + \epsilon,

where \beta_0 = 1, \beta_1 = 0.5, and x and \epsilon are normally distributed with mean 0 and variance 1. I draw a sample of 500 observations on x and \epsilon, compute y, and estimate the OLS regression coefficients. I then repeat this exercise 1,000 times and collect the results; there is no assumption of x being fixed in repeated samples. On average, the OLS estimate of \beta_1 is still unbiased. May I take this as a demonstration of conditional unbiasedness? One could then argue that integrating over the conditional distribution demonstrates unconditional unbiasedness. Does this line of thought hold? I want to understand how this finding connects with your comment above, which suggests that using stochastic regressors will allow me to see bias introduced into the OLS estimator. Thank you again for your insightful blog and comments.

Dave Giles (11 March 2017):
Chris - if the random regressors are uncorrelated with the errors, then of course there is no bias. Does that make sense?

David

Chris (11 March 2017):
Yes, everything makes perfect sense now. When doing these simulations, I did not fully appreciate the importance of having fixed regressors because I would usually select a large n. For example, I often set n = 500 (sample size) and N = 5,000 (number of replications). Thus, in many cases I would not reject normality of the sampling distribution; however, as you clearly point out (using the dynamic OLS model as an example), with smaller sample sizes the distribution of the OLS estimates is not normal.

When I set n = 10 and N = 5,000, I did one simulation in which I drew x from a normal(0,1) each time I drew the error term, and I was able to reject normality of the 5,000 OLS coefficient estimates very easily; the p-values were extremely small using a test similar to the Jarque-Bera test. I then did another simulation in which I drew x once and kept it fixed across all the replications; I could no longer reject normality. The p-value was close to 0.56.

Thus, I can clearly see the importance of keeping x as a fixed regressor when doing simulations in which one is interested in computing empirical power, for example. Thanks again!

Dave Giles (11 March 2017):
My pleasure - thanks for the helpful discussion.

DG
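The pair of experiments Chris describes (n = 10, N = 5,000, slope estimates collected with the regressor either redrawn each replication or drawn once and held fixed, then tested for normality) can be sketched roughly as follows. This is a minimal illustration, not code from the thread: the seed, the use of NumPy, and scipy.stats.jarque_bera for the normality test are my own choices.

```python
import numpy as np
from scipy.stats import jarque_bera

def simulate(n=10, reps=5000, beta0=1.0, beta1=0.5, fixed_x=True, seed=42):
    """Monte Carlo for y = beta0 + beta1*x + eps; returns the OLS slope
    estimate from each replication."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)          # drawn once; reused when fixed_x
    slopes = np.empty(reps)
    for r in range(reps):
        if not fixed_x:
            x = rng.standard_normal(n)  # redraw the regressor each replication
        eps = rng.standard_normal(n)
        y = beta0 + beta1 * x + eps
        # OLS slope for one regressor plus intercept: cov(x, y) / var(x)
        slopes[r] = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    return slopes

for fixed in (True, False):
    b1 = simulate(fixed_x=fixed)
    res = jarque_bera(b1)
    print(f"fixed_x={fixed}: mean(b1_hat)={b1.mean():.3f}, "
          f"JB p-value={res.pvalue:.3g}")
```

With x redrawn each time, the slope estimates are a scale mixture of normals (conditionally normal given x, with a variance that changes across replications), so with 5,000 draws the Jarque-Bera test rejects normality decisively; with x held fixed the estimates are exactly normal and the test should not reject. In both cases the average estimate sits close to 0.5, consistent with Dave's point that unbiasedness survives as long as the regressors are uncorrelated with the errors.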