Latest News and Comment from Education

Tuesday, October 1, 2013

Shanker Blog » Underlying Issues In The DC Test Score Controversy

Underlying Issues In The DC Test Score Controversy

Posted on October 1, 2013


In the Washington Post, Emma Brown reports on a behind-the-scenes decision about how to score last year’s new, more difficult tests in the District of Columbia Public Schools (DCPS) and the District’s charter schools.
To make a long story short, the choice faced by the Office of the State Superintendent of Education, or OSSE, which oversees testing in the District, was about how to convert test scores into proficiency rates. The first option, put simply, was to convert them such that the proficiency bar was more “aligned” with the Common Core, thus resulting in lower aggregate proficiency rates in math, compared with last year’s (in other states, such as Kentucky and New York, rates declined markedly). The second option was to score the tests while “holding constant” the difficulty of the questions, in order to facilitate comparisons of aggregate rates with those from previous years.
OSSE chose the latter option (according to some, in a manner that was insufficiently transparent). The end result was a modest increase in proficiency rates (which DC officials absurdly called “historic”).
I don’t have a particularly strong opinion about how OSSE should have proceeded. I can see it going either way. Setting cut scores is as much a political and human judgment call as anything else.
The controversy surrounding this decision, however, is both ironic and instructive. My understanding is that the actual scale scores are comparable between years (albeit imperfectly), even with the changed tests. The issue, again, is where to set the minimum score above which students are called advanced, proficient, etc., which in turn determines the rates for each of these designations (with proficiency rates getting the most attention). Unfortunately, OSSE doesn’t report anything except the rates. This severely limits the value of their annual testing results as reported to the public.
In other words, much of this controversy is about choosing between two cut score configurations, both of which present the data in an incomplete manner. Changes in the actual average scores, though still cross-sectional and therefore not “progress” measures, arguably provide a better sense of the change in performance of the typical student.
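The dependence of proficiency rates on cut score placement can be made concrete with a small sketch. This uses made-up scale scores and hypothetical cut points, not DC’s actual data or OSSE’s scoring rules; the point is only that the same pair of score distributions can show a flat rate or a rising rate depending entirely on where the bar is set, while the change in mean scores is unaffected by that choice.

```python
# Hypothetical illustration: how a cut score turns scale scores into a
# proficiency rate, and why the rate depends on where the cut is placed.

def proficiency_rate(scores, cut):
    """Share of students scoring at or above the proficiency cut."""
    return sum(s >= cut for s in scores) / len(scores)

# Made-up scale scores for ten students in two test years.
last_year = [38, 42, 47, 51, 55, 59, 63, 68, 72, 80]
this_year = [40, 44, 48, 52, 56, 60, 64, 69, 73, 81]

for cut in (50, 60):  # two hypothetical cut-score choices
    print(f"cut={cut}: last year {proficiency_rate(last_year, cut):.0%}, "
          f"this year {proficiency_rate(this_year, cut):.0%}")

mean_change = sum(this_year) / len(this_year) - sum(last_year) / len(last_year)
print(f"change in mean scale score: {mean_change:+.1f}")
```

With a cut of 50 the rate is 70% both years (no change); with a cut of 60 it rises from 40% to 50%; the mean rises by 1.2 points either way. Reporting only the rate under one chosen cut, as the post notes OSSE does, discards the rest of this picture.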