Comments on Econometrics Beat: Dave Giles' Blog: "Solution to Regression Problem"

2013-12-27:

Thomas - thanks. I like the geometric explanation.

Dave Giles

2013-12-26:

[Nitpick: you haven't actually shown that the first answer is incorrect, though that's trivial.]

Here's a geometrical/graphical proof:

Draw a scatterplot, scaled so that the x and y standard deviations span the same number of inches. The reverse regression line is the reflection of the forward regression line in the diagonal, so if we turn the page to make y horizontal, the reverse regression line has a lower slope than the forward line; the two lines coincide only when all the points lie on the diagonal.

There should also be a nice proof based on the pairwise-slope construction of OLS. That is, with two points \hat\beta is obviously the slope of the line joining them, and with more than two points \hat\beta is a weighted average of all the pairwise slopes, with weights proportional to the squared differences in x. I like this construction because it gives you \hat\beta as an average difference in y per unit difference in x without mentioning linearity anywhere.
It's due originally to one of the old French mathematicians (I forget which), but it keeps being rediscovered independently.

Since all the pairwise reverse slopes are just the reciprocals of the pairwise forward slopes, it seemed as though this should follow from something like Jensen's inequality, but that goes the wrong way: you do need to look explicitly at how the weights for the pairwise slopes differ in the reverse regression.

Thomas Lumley (http://notstatschat.tumblr.com)
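Both claims in the comment above are easy to check numerically: the OLS slope equals the weighted average of all pairwise slopes (weights proportional to the squared x-differences), and the reverse regression slope is not the reciprocal of the forward slope; their product is r-squared. A minimal sketch in Python (the simulated data and variable names are my own, not from the post):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(42)
x = rng.normal(size=30)
y = 2.0 * x + rng.normal(size=30)

# Forward OLS slope of y on x, and reverse OLS slope of x on y.
cov_xy = np.cov(x, y, bias=True)[0, 1]
beta_fwd = cov_xy / np.var(x)
beta_rev = cov_xy / np.var(y)

# Pairwise construction: beta_fwd is the weighted average of the slopes
# through every pair of points, with weights (x_i - x_j)^2.
# (Pairs with tied x get weight zero, so the division is harmless here
# since continuous draws have no ties.)
pairs = list(combinations(range(len(x)), 2))
slopes = np.array([(y[i] - y[j]) / (x[i] - x[j]) for i, j in pairs])
weights = np.array([(x[i] - x[j]) ** 2 for i, j in pairs])
beta_pairwise = np.sum(weights * slopes) / np.sum(weights)

print(np.isclose(beta_fwd, beta_pairwise))   # True: pairwise identity holds

# Forward and reverse slopes multiply to r^2 <= 1, so the reverse slope
# is smaller than 1/beta_fwd unless the points are exactly collinear.
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(np.isclose(beta_fwd * beta_rev, r2))   # True
```

Note that the pairwise identity is exact algebra, not an approximation: the weighted sum of pairwise cross-products telescopes to n times the sample covariance, and the weights sum to n times the sample variance of x.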