Monday, March 21, 2011

From C to Shining C

In each episode of that wonderful long-running public radio show, A Prairie Home Companion, Garrison Keillor ends his monologue about the good burghers of the mythical town of Lake Wobegon, and the local Norwegian bachelor farmers, with the words:
"That's all of the news from Lake Wobegon - the town where all the women are strong, all the men are good looking, and all the children are above average."
Well, Garrison, do I have news for you! There really is a Lake Wobegon - and we live there! I'm not entirely sure about the bit to do with the men and women, but apparently the younger folk definitely think that they're all above average - mainly because we've (collectively) told them they are. We have only ourselves to blame.

I'm referring to the horrendous "grade inflation" that's taken place in our secondary and tertiary education systems, at least in North America. I'm sure we're not alone in the world in this respect, and for all I know the problem may be equally bad in the elementary schools. For whatever reason - and I'm not pointing the finger here (Hmmm) - we've let the students con us into letting them con themselves into believing that every one of them is above average. Worse than that, they firmly believe that a perfectly respectable C, or even a B, grade amounts to what we used to call a FAIL: the dreaded F-word that the PCP (Political Correctness Police) tell us not to use in polite company!

"Give me an A+, or give me death!" is the catch-cry in the hallways, and some days the second of those options looks pretty darn tempting. I'm sorry Gene Kranz, but "Failure is an Option" in this context. Indeed it's a fact of life that is better learned sooner rather than later.

Anyway, I hope that everyone feels good about these inflated, largely meaningless, grades - and isn't that how this all came about? We had to make sure that everyone felt good - warm and fuzzy, and protected from reality. We couldn't have little Suzie feeling worthless by sending her home with a B+ for introductory shuffleboard, could we?

I've sat there on the stage in my glorious academicals at more than one convocation ceremony in which virtually all of the graduands from certain disciplines graduate "With Distinction". I'm sure their parents are really proud, but in what sense are their sons and daughters actually "distinctive"? Sorry folks, but they're not! The students themselves have been cheated - especially those who really deserved to shine - and so have their potential employers. That A+ in intermediate macramé, or in advanced knitting, is not going to do you much good in life - at least not until you end up in the retirement home, and by then it's probably a bit late in the day to really matter. More to the point, that A+ in econometrics will pale somewhat if the lowest grade in the class is an A. Trust me on that!

The grade inflation that I'm referring to is neither new nor unexpected. The anecdotal evidence is rife; but what about some hard numbers? Not surprisingly, this is not a topic that some of those in the system want to discuss too openly. For example, Stanford University stopped making grade information public in the mid-1990s - gosh, I'm not sure why. As far as Canadian universities are concerned, some of the best evidence is actually a little dated, and it relates only to institutions in Ontario. Anglin & Meng (2000) provide some interesting figures relating to first-year grades in seven Ontario universities over the period 1973 to 1994. They find that significant grade inflation took place over that period, with substantial variation across disciplines. (Remember my comment about this in the last paragraph?) Economics, together with Chemistry and Mathematics, emerged as the tough guys, while English, French and Music were the softies (at least in the mid-1990s).

Among other things, Anglin & Meng also discuss the relationship between grade inflation and students' assessments of instructors. Apparently, in some circles "An A for a lay" (if it was ever a reality) has long since been reversed to become "An A for teaching performance for an A on the mid-term exam". There is actually quite a large literature on this phenomenon, and the recent contribution by Lin (2011) provides both empirical evidence and a whole raft of references to other studies.

Now, just in case you think I'm over-reacting to a purely local problem, I suggest that you check out Rojstaczer & Healy (2010) and the Gradeinflation website and see what has apparently been happening elsewhere - including the Yales, the Princetons, the Dartmouths and the Harvards of this world. It's quite enlightening - and really scary. I won't reproduce the excellent and telling charts on that site, but I'd suggest that you check out the links to other related articles, and the data-sets that are provided at the bottom of that web page. I've used some of the latter in the following econometric analysis. Meanwhile, by way of illustrating the magnitude of the problem I'm talking about, over the period 1990 to 2007 the average undergraduate GPA at the University of Georgia (a public school) rose from 2.64 to 3.20 on a 4-point scale. Privately funded Elon University also apparently succeeded in attracting brighter and brighter students - their average GPA increased from 2.67 to 3.12 over the same period. Interesting - and the same story emerges if you look further back in time, for school after school.
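If you want to check the arithmetic behind claims like these, the implied annual compound growth rate is a one-liner. Here's a quick sketch in Python (my choice, not part of the EViews materials), using the University of Georgia figures just quoted:

```python
# Implied annual compound growth rate of average GPA: g = (end/start)**(1/years) - 1
gpa_1990, gpa_2007 = 2.64, 3.20            # University of Georgia, from the text above
g = (gpa_2007 / gpa_1990) ** (1 / 17) - 1  # 17 years, 1990 to 2007
print(f"{100 * g:.2f}% p.a.")              # roughly 1.14% per annum
```

That may not sound like much, but compounded over a generation it moves a whole grade distribution.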

Here are two small empirical exercises to get you thinking about this some more. The first addresses (in a very limited way) the question: "Has grade inflation been worse in private universities than in public ones?" Remember - we're dealing with U.S. data here. Also, you'll see that I've used only a very small sample of the available data. This will undoubtedly affect my results, but it will give you a basis for extending my analysis to the full data-set, if you are so inclined. The second exercise involves some Granger non-causality testing involving two campuses of the University of Hawaii - Manoa and Hilo. Let's make this clear - I have nothing against U of H. It's just that this particular geographical choice automatically "controls" for a whole host of factors that we'd otherwise need to try and measure, and the 2 campuses are natural competitors. Here the objective is to see if we can determine "Who is Leading Whom" in the race to the bottom (of academic standards). Again, I love Hawaii!! The same caveat applies - I've used only a limited amount of the available data. As usual, my data and an EViews workfile are available on the Data and Code pages of this blog so that you can replicate and extend what I've done, if you want to.


Figure 1 shows the data for the undergraduate GPAs for 6 U.S. universities over time. Solid lines relate to public universities, while dotted lines are for the 3 private schools. Note that the length of the available sample varies by school, and so do the annual compound rates of growth in the average GPA. These six growth rates range from 0.13% p.a. for Harvey Mudd College, to 1.01% p.a. for East Carolina University.

To address the first of the above questions, I've fitted a 6-equation Seemingly Unrelated Regressions (SUR) model. Each equation includes an intercept and a linear time-trend. According to the augmented Dickey-Fuller tests that I've performed, each of the series has a unit root. There aren't enough observations to apply Johansen's tests for cointegration. However, using the less powerful Engle-Granger two-step procedure, I find that the six GPA time-series are cointegrated. Of course, I don't know how many cointegrating relationships there are, so I'm going to model the data in their levels. In other words, I'm estimating the long-run equilibrating relationships, not the short-term dynamics. Using a SUR model improves the precision of my estimates, and also enables me to test cross-equation (across-schools) restrictions on the parameters. Specifically, I can easily test if the slope of the trend in GPA scores for one school is the same as that for another school, and so on. Let's see what emerges.
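For non-EViews users, here's a minimal sketch of the same unit-root and cointegration tests in Python (statsmodels); the file name and column layout are hypothetical placeholders for the Gradeinflation series:

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint

# Hypothetical layout: one column of average GPA per school, annual observations.
gpa = pd.read_csv("gpa_six_schools.csv", index_col=0).dropna()

# Augmented Dickey-Fuller test, school by school (H0: the series has a unit root).
# "ct" includes a constant and trend, which suits these steadily drifting series.
for school in gpa.columns:
    stat, pval, *_ = adfuller(gpa[school], regression="ct")
    print(f"{school}: ADF = {stat:.3f}, p = {pval:.3f}")

# Engle-Granger two-step test (H0: no cointegration): regress the first series
# on the other five and ADF-test the residuals.
stat, pval, _ = coint(gpa.iloc[:, 0], gpa.iloc[:, 1:], trend="ct")
print(f"Engle-Granger: t = {stat:.3f}, p = {pval:.3f}")
```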

If you look at the results in the EViews file that I've supplied, you can easily confirm the unit root and cointegration testing. You can also see the estimated SUR model - there is not much point in trying to display it in full here. However, the first important thing you'll observe in the estimated model is that the trend terms are positive and extremely significant (p-value = 0.0000) in every equation. The grade inflation that is visually apparent in Figure 1 is statistically significant. The next important things are three Wald tests that I've conducted:
  1. H0: The Trend Coefficients are the Same for the 3 Public Universities vs. HA: H0 is False.
  2. H0: The Trend Coefficients are the Same for the 3 Private Universities vs. HA: H0 is False.
  3. H0: The Trend Coefficients are the Same for the Private & Public Universities vs. HA: H0 is False.
The results that I got were:
  1. W1 = 183.557; d.o.f. = 2; p-value = 0.000. Conclusion: Reject H0.
  2. W2 = 50.112; d.o.f. = 2; p-value = 0.000. Conclusion: Reject H0.
  3. W3 = 251.429; d.o.f. = 5; p-value = 0.000. Conclusion: Reject H0.
So, there is some reasonable evidence, at least among these six universities, that there has been a significant difference between grade inflation in private as opposed to public schools. The same holds for different schools in the same group.
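If you'd like to replicate this kind of system estimation without EViews, here's a minimal sketch of SUR (one-step FGLS) with a cross-equation Wald test, in Python/numpy. To keep it self-contained I've used simulated data as a stand-in for the real GPA series, and assumed a balanced sample for simplicity; none of this is taken from my EViews workfile:

```python
import numpy as np
from scipy import stats
from scipy.linalg import block_diag

def sur_fgls(Y, trend):
    """One-step FGLS for a SUR system in which each equation regresses one
    school's GPA on an intercept and a common linear trend (balanced sample
    assumed). Returns the stacked coefficients (a_1, b_1, ..., a_m, b_m)
    and their estimated covariance matrix."""
    T, m = Y.shape
    X1 = np.column_stack([np.ones(T), trend])   # identical regressors per equation
    X = block_diag(*[X1] * m)                   # (mT x 2m) stacked design matrix
    y = Y.T.reshape(-1)                         # stack the system equation by equation

    # Step 1: equation-by-equation OLS residuals give an estimate of Sigma.
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    E = (y - X @ b_ols).reshape(m, T)
    Sigma = E @ E.T / T

    # Step 2: feasible GLS on the stacked system (Omega = Sigma kron I_T).
    Om_inv = np.kron(np.linalg.inv(Sigma), np.eye(T))
    V = np.linalg.inv(X.T @ Om_inv @ X)
    return V @ X.T @ Om_inv @ y, V

def wald(beta, V, R):
    """Wald test of H0: R @ beta = 0. Returns (W, p-value), W ~ chi2(rows of R)."""
    Rb = R @ beta
    W = Rb @ np.linalg.solve(R @ V @ R.T, Rb)
    return W, stats.chi2.sf(W, R.shape[0])

# Illustration with simulated data standing in for the six GPA series.
rng = np.random.default_rng(0)
T, m = 22, 6
trend = np.arange(T, dtype=float)
Y = 2.6 + 0.02 * trend[:, None] + 0.05 * rng.standard_normal((T, m))

beta, V = sur_fgls(Y, trend)       # the slope for equation i sits at index 2*i + 1
R = np.zeros((2, 2 * m))           # H0: equal trend slopes for equations 1-3
R[0, [1, 3]] = [1.0, -1.0]         #     b1 - b2 = 0
R[1, [1, 5]] = [1.0, -1.0]         #     b1 - b3 = 0
W, p = wald(beta, V, R)
print(f"W = {W:.3f}, d.o.f. = 2, p = {p:.4f}")
```

One design note: with identical regressors in every equation, the FGLS point estimates coincide with equation-by-equation OLS, but the joint covariance matrix is what lets us test restrictions across schools.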

I should point out that if you look at the full set of data available on the Gradeinflation website, one of the broad conclusions reached there is that "...GPAs are higher at private schools. Average rates of contemporary grade inflation are (slightly) higher as well."

The second question I posed above was to do with cause and effect between the grade inflation on the two Hawaiian campuses, Manoa and Hilo, that we can see in Figure 2:


I've fitted a 2-equation Vector Autoregressive (VAR) model as the basis for testing for Granger (non-)causality. The maximum lag length of one year was determined using the Schwarz information criterion. The estimated VAR model is not especially interesting in itself, but you can check the details in the accompanying EViews file if you wish. As you will see, the estimated model "passes" all of the usual tests of the residuals, etc. Concentrating on the causality tests, though, this is what we are interested in:
  1. H0: Manoa GPA does NOT Granger-cause Hilo GPA ; HA: Manoa GPA DOES Granger-cause Hilo GPA.
  2. H0: Hilo GPA does NOT Granger-cause Manoa GPA ; HA: Hilo GPA DOES Granger-cause Manoa GPA.
Both of these GPA time-series have a unit root and, according to the Engle-Granger test, they are cointegrated. This means that there must be Granger causality in one direction or the other (or both) between these two variables. The fact that the data are I(1) also means that I have to be very careful when I test for Granger non-causality. I've followed the Toda and Yamamoto (1995) procedure, adding one extra lag of both variables into each equation of the VAR model, but not including these extra terms in the Wald test for causality. Here are the results of my tests:
  1. W1 = 2.751; d.o.f. = 1; p-value = 0.097. Conclusion: Manoa DOES Granger-cause Hilo.
  2. W2 = 0.098; d.o.f. = 1; p-value = 0.755. Conclusion: Hilo DOES NOT Granger-cause Manoa.
According to these results, at least at the 10% significance level, the steady grade inflation observed at U of H (Manoa) over the period 1986 to 2007 caused the eagle-eyed grade-watchers at the Hilo campus to jump on the gravy train too. Of course, there are lots of other interesting questions that could be addressed using the Gradeinflation data and some simple econometrics. I'll leave it to you to play around.
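For anyone wanting to replicate the Toda-Yamamoto step outside EViews, here's a minimal sketch in Python (statsmodels). With p = 1 chosen by the Schwarz criterion, and one extra lag because the series are I(1), each equation gets two lags of each variable, but only the first lag of the potentially causing variable enters the Wald test. The file and column names are hypothetical placeholders:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hawaii_gpa.csv")    # hypothetical columns: year, manoa, hilo

# Build p + d_max = 1 + 1 = 2 lags of each series; the second lag is the extra
# Toda-Yamamoto lag, included in estimation but excluded from the test.
for col in ("manoa", "hilo"):
    for lag in (1, 2):
        df[f"{col}_l{lag}"] = df[col].shift(lag)
df = df.dropna()

# H0: Manoa does NOT Granger-cause Hilo (test manoa_l1 only, not manoa_l2).
eq_hilo = smf.ols("hilo ~ hilo_l1 + hilo_l2 + manoa_l1 + manoa_l2", df).fit()
print(eq_hilo.wald_test("manoa_l1 = 0", use_f=False))

# H0: Hilo does NOT Granger-cause Manoa.
eq_manoa = smf.ols("manoa ~ manoa_l1 + manoa_l2 + hilo_l1 + hilo_l2", df).fit()
print(eq_manoa.wald_test("hilo_l1 = 0", use_f=False))
```

Leaving the second lag out of the restriction is the whole point of the Toda-Yamamoto augmentation: it keeps the Wald statistic asymptotically chi-square even though the data are I(1).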

So what do I conclude from all of this?
  • Grade inflation is rampant in our high schools and universities, and it's not just a Canadian issue.
  • Grade inflation has already devalued the grades and degrees that our students are receiving.
  • Grade inflation is grossly unfair to those who have genuinely earned their A+'s.
  • The different rates of grade inflation across disciplines unfairly penalize students in "tough" disciplines (such as Economics) when it comes to inter-disciplinary "competitions" - for graduate funding from the national funding bodies, for example. 
As proud Canadians, I suggest we should demand that average grades in our high schools and universities go "From C to shining C". Well, that may be a little draconian, but I'm sure you get my point.

So, speak up - don't be shy. If it helps, reach for the Powdermilk biscuits (in the blue pack), as advertised each week on A Prairie Home Companion - "...giving shy persons the strength to get up and do what has to be done."


Note: The links to the following references will be helpful only if your computer's IP address gives you access to the electronic versions of the publications in question. That's why a written References section is provided.

References

Anglin, P. and R. Meng (2000). Evidence on grades and grade inflation at Ontario's universities. Canadian Public Policy, 26, 361-368.

Lin, T-C. (2011). Economic effects of grades on course evaluations. Applied Economics Letters, in press.

Rojstaczer, S. and C. Healy (2010). Grading in American colleges and universities. Teachers College Record, http://www.tcrecord.org/ ID number: 15928, 4 March.

Toda, H. Y. and T. Yamamoto (1995). Statistical inference in vector autoregressions with possibly integrated processes. Journal of Econometrics, 66, 225-250.


© 2011, David E. Giles

4 comments:

  1. The University of Manitoba used to release statistics on grades in the IS books (http://umanitoba.ca/admin/oia/publications/index.html). Ironically, they began omitting these statistics in 2006-2007.

    I have data covering 1985-2006 which show a clear rise in overall GPA at UofM over that time, with average undergraduate GPA rising from 2.7 to 2.9. If grades are rising, but have a maximum - the A+ - the distribution of grades necessarily compresses at the top of the scale, making it increasingly difficult to compare students/graduates with varying skill levels. Looking at the data by grade (Note: UofM does not assign minus grades), in most faculties, including science and engineering, we see a clear increase in A+ and A grades and a corresponding decrease in B and C+ grades. D grades remain quite steady over this period. The one faculty that bucks the trend a bit, though not completely, is nursing.

  2. Great example, Lindsay. Thanks for that. And a good point about grade compression at the top end.

  3. Can we consider the possibility that grades have a different meaning than they did in 1940, and are no longer designed to rank students against each other but rather are designed to grade mastery of a given subject (at the level expected in a certain class)? In that case, "grade inflation" at least at the university level could simply reflect students learning more, or even that university student bodies are more talented to begin with.

    At the very least, the data does not rule this out.

  4. Adam - you're right in what you say - the data do not rule that out. Regrettably, my own experience over the past 35 years or so suggests otherwise - but that's just a personal anecdote. Thanks for the interest.

