Monday, March 31, 2014

Shanker Blog » When Growth Isn't Really Growth, Part Two




When Growth Isn’t Really Growth, Part Two

Posted on March 31, 2014
Last year, we published a post that included a very simple graphical illustration of what changes in cross-sectional proficiency rates or scores actually tell us about schools’ test-based effectiveness (basically nothing).
In reality, year-to-year changes in cross-sectional average rates or scores may reflect “real” improvement, at least to some degree, but, especially when measured at the school or grade level, they tend to be mostly error/imprecision (e.g., changes in the composition of the samples taking the test, measurement error, and serious issues with converting scores to rates using cutpoints). This is why changes in scores often conflict with more rigorous indicators that employ longitudinal data.
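To make one of these sources of imprecision concrete, here is a minimal simulation sketch (not from the original post) of how sampling variation alone can produce apparent "growth" or "decline." The cohort size, score distribution, and proficiency cutpoint are all illustrative assumptions; the point is simply that each year's tested group is a different sample, so the cross-sectional average and the proficiency rate bounce around even when nothing about the school has changed.

```python
import numpy as np

# Illustrative assumptions, not real testing data.
rng = np.random.default_rng(0)

TRUE_MEAN = 500      # the school's unchanging "true" average score
TRUE_SD = 50         # spread of student scores within a cohort
COHORT_SIZE = 80     # students tested per year (roughly one grade)
CUTPOINT = 510       # score needed to be labeled "proficient"
YEARS = 5

for year in range(1, YEARS + 1):
    # Each year a different sample of students takes the test,
    # even though the underlying population has not changed at all.
    scores = rng.normal(TRUE_MEAN, TRUE_SD, COHORT_SIZE)
    avg = scores.mean()
    rate = (scores >= CUTPOINT).mean() * 100
    print(f"Year {year}: average score = {avg:.1f}, proficiency rate = {rate:.0f}%")
```

Running this produces year-to-year swings in both measures, with the rate typically swinging more than the average because the cutpoint turns small score differences near the threshold into changes in proficiency status.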
In the aforementioned post, however, I wanted to show what the changes meant even if most of these issues disappeared magically. In this one, I would like to extend this very simple illustration, as doing so will hopefully help shed a bit more light on the common (though mistaken) assumption that effective schools or policies should generate perpetual rate/score increases.
Let’s first quickly review the illustration presented in the previous post, which is pasted below.
Here we have a hypothetical progression of test scores in a highly effective middle school. In this school, we have applied our magical assumptions, which basically eliminate the normal sources of volatility/imprecision that plague these cross-sectional changes in real life:
  1. We are using actual scores instead of proficiency rates (read this great paper about problems with the latter);
  2. Every single incoming cohort of sixth graders is the exact same size and performs at exactly the same level