
Thursday, December 26, 2013

Solution to Regression Problem

O.K. - you've had long enough to think about that little regression problem I posed the other day. It's time to put you out of your misery!

Here's the problem again, with a solution.


Problem:
Suppose that we estimate the following regression model by OLS:

                     yᵢ = α + β xᵢ + εᵢ .

The model has a single regressor, x, and the point estimate of β turns out to be 10.0.

Now consider the "reverse regression", based on exactly the same data:

                    xᵢ = a + b yᵢ + uᵢ .

What can we say about the value of the OLS point estimate of b?
  • It will be 0.1.
  • It will be less than or equal to 0.1.
  • It will be greater than or equal to 0.1.
  • It's impossible to tell from the information supplied.
Solution:
Let x'ᵢ and y'ᵢ be the data taken as deviations from their respective sample means. Then, the OLS estimator of β is β* = ∑x'ᵢy'ᵢ / ∑x'ᵢ²; and the OLS estimator of b is b* = ∑x'ᵢy'ᵢ / ∑y'ᵢ².

So we can write,   β*b* = (∑x'ᵢy'ᵢ)² / [∑x'ᵢ² ∑y'ᵢ²].

Immediately, by the Cauchy-Schwarz Inequality, β*b* ≤ 1. Since β* = 10, it follows that b* ≤ 1/10, and so the correct answer is that b* will be less than or equal to 0.1.
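
As a quick sanity check, here is a small numerical sketch in Python (the simulated data, seed, and sample size are just illustrative choices, not part of the problem). The product of the two estimated slopes is the squared sample correlation of x and y, so it can never exceed one:

    import numpy as np

    # Simulate some data for a forward regression of y on x (illustrative values only).
    rng = np.random.default_rng(42)
    n = 100
    x = rng.normal(size=n)
    y = 1.0 + 10.0 * x + rng.normal(scale=5.0, size=n)

    # Work with deviations from the sample means, as in the solution above.
    xd = x - x.mean()
    yd = y - y.mean()

    beta_hat = np.sum(xd * yd) / np.sum(xd ** 2)   # OLS slope from regressing y on x
    b_hat = np.sum(xd * yd) / np.sum(yd ** 2)      # OLS slope from the reverse regression, x on y

    print("beta* =", beta_hat)
    print("b*    =", b_hat)
    print("product =", beta_hat * b_hat)                  # equals the squared sample correlation
    print("r^2     =", np.corrcoef(x, y)[0, 1] ** 2)
    assert beta_hat * b_hat <= 1.0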


© 2013, David E. Giles

2 comments:

  1. [Nitpick: you haven't actually shown the first answer is incorrect, though that's trivial]


    Here's a geometrical/graphical proof:

    Draw a scatterplot, scaled so the x and y standard deviations are the same number of inches. The reverse regression line is the reflection of the forward regression line in the diagonal, so if we turn the page to make y horizontal, the reverse regression line has lower slope than the forward line, except that the lines are the same if all the points lie on the diagonal.


    There should also be a nice proof based on the pairwise-slope construction of OLS. That is, with two points, β̂ is obviously the slope of the line joining them, and with more than two points β̂ is a weighted average of all the pairwise slopes, with weights proportional to the squared difference in x. I like this construction because it gives you β̂ as an average difference in y per unit difference in x without mentioning linearity anywhere. It's due originally to one of the old French maths guys (I forget which), but it keeps being rediscovered independently. (A numerical sketch of the construction appears at the end of this comment.)

    Since all the pairwise reverse regressions are just the reciprocals of the pairwise forward regressions, it seemed as though this should just be something like Jensen's inequality, but that goes the wrong way -- you do need to look explicitly at how the weights for the pairwise slopes differ in the reverse regression.
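
    [A small numerical check of the pairwise-slope construction described above, in Python. The simulated data, seed, and sample size are arbitrary illustrative choices, and ties in x or y are ignored because they occur with probability zero for continuous data.]

        import numpy as np
        from itertools import combinations

        # Illustrative data only; any continuous (x, y) sample will do.
        rng = np.random.default_rng(0)
        n = 40
        x = rng.normal(size=n)
        y = 2.0 + 10.0 * x + rng.normal(scale=5.0, size=n)

        pairs = list(combinations(range(n), 2))

        # Forward regression: a weighted average of the pairwise slopes,
        # with weights proportional to (x_j - x_i)^2.
        fwd_slopes = np.array([(y[j] - y[i]) / (x[j] - x[i]) for i, j in pairs])
        fwd_weights = np.array([(x[j] - x[i]) ** 2 for i, j in pairs])
        beta_pairwise = np.sum(fwd_weights * fwd_slopes) / np.sum(fwd_weights)

        # Reverse regression: the pairwise slopes are the reciprocals of the
        # forward ones, but the weights are now proportional to (y_j - y_i)^2.
        rev_slopes = 1.0 / fwd_slopes
        rev_weights = np.array([(y[j] - y[i]) ** 2 for i, j in pairs])
        b_pairwise = np.sum(rev_weights * rev_slopes) / np.sum(rev_weights)

        # Compare with the usual OLS formulas in deviation form.
        xd, yd = x - x.mean(), y - y.mean()
        beta_ols = np.sum(xd * yd) / np.sum(xd ** 2)
        b_ols = np.sum(xd * yd) / np.sum(yd ** 2)

        print(beta_pairwise, beta_ols)   # these agree
        print(b_pairwise, b_ols)         # and so do these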

    Replies
    1. Thomas - thanks. I like the geometric explanation.

