University of Arkansas Office for Education Policy

Does “Quality Counts” count the right stuff?

In The View from the OEP on January 26, 2012 at 2:03 pm

Sneak preview: the answer is NO!

The OEP releases today our annual summary of Arkansas's performance in the Quality Counts ranking published each year by Education Week.  In keeping with our theme that more information is better than less, we are glad that this esteemed organization publishes these data each year. The problem we have is with the overall rating, which is based on a convoluted (and nonsensical) combination of the individual indicators.  Graduate student Stuart Buck wrote a thoughtful blog post criticizing the ranking system here, and Stuart and I together penned a Letter to the Editor, published in the February 4, 2009 issue of Education Week, outlining some of the system's flaws.

To be clear, the data presented by Ed Week in Quality Counts 2012 ARE helpful; check out our OEP Policy Brief (published today) in which we highlight the many interesting facts and figures from the 2012 edition of Quality Counts.  However, we are very skeptical about the usefulness of the QC overall ranking, and are concerned that this flawed figure is the one most often reported and discussed by media and policymakers.  It’s not even that we hate single indicators (we don’t!); it is simply the flawed nature of THIS single indicator with which we take issue.

Here is the text from the letter to the editor that Stuart and I wrote in 2009; those criticisms, unfortunately, remain relevant today.

No ‘Quality’ Method for Rating States’ Performance

To the Editor:

We applaud Education Week for collecting education statistics about all 50 states. The latest of your annual Quality Counts reports (Jan. 8, 2009) is indeed an invaluable starting point. It goes a step too far, however, when it pools together disparate measures to arrive at each state’s overall score. This may not be problematic for education scholars, but policymakers might (and do) inaccurately treat a state’s overall rating as meaningful.

In fact, Quality Counts averages so much incommensurable data that we are reminded of the old joke of a beggar sitting on the streets of New York, with a sign reading, “Wars, 2; Legs Lost, 1; Wives Who Left Me, 2; Children, 3; Lost Jobs, 2. TOTAL: 10.”

First, the “school finance” measure rewards states for spending more money, whether or not that leads to actual results. The problem is most obvious when the “spending” measure is averaged with the measure for “K-12 achievement.” In theory, a high-spending state with low achievement—perhaps combining extravagance and incompetence—could get an overall score equal to that of a state with low spending and high achievement. But, all else being equal, the latter state obviously has a more efficient education system.
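The arithmetic behind this objection is easy to see in a toy calculation (the scores and the equal weighting below are invented for illustration; Quality Counts' actual scoring is more elaborate, but the averaging problem is the same):

```python
# Toy illustration of averaging incommensurable measures.
# Hypothetical 0-100 category scores; equal weights assumed for simplicity.

def overall(scores):
    """Average category scores into a single 'overall' rating."""
    return sum(scores.values()) / len(scores)

# A high-spending, low-achievement state...
state_a = {"school_finance": 90, "k12_achievement": 50}
# ...receives exactly the same overall rating as a
# low-spending, high-achievement state.
state_b = {"school_finance": 50, "k12_achievement": 90}

print(overall(state_a))  # 70.0
print(overall(state_b))  # 70.0
```

The two states are indistinguishable in the overall rating, even though the second is plainly getting more achievement for less money.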

Second, the Chance-for-Success Index gives states higher grades for having fewer disadvantaged students. Unsurprisingly, Massachusetts does quite well on this measure, while Mississippi is near the bottom. This is certainly useful information, but it makes no sense that a state’s score here is averaged together with “K-12 achievement” to produce an overall score. (We were so baffled that we took the trouble of checking the scores of several states, just to be sure that this was what had been done.)

Thus, Quality Counts downgrades a state that produces A-level achievement for impoverished students, while upgrading another state simply for being blessed with privileged students. Imagine two states, equal on all measures but three. The first produces high achievement for poor kids using little money, while the second produces low achievement for rich kids using lots of the public’s resources. It is clear which system is doing better.

Unfortunately, Quality Counts misses the point.

Stuart Buck
Research Associate
Gary Ritter
Endowed Chair in Education Policy
University of Arkansas