Useless Testing "Gap" Analyses and the Newspapers That Love Them
There is so much wrong with this "analysis" by The Bergen Record of test scores and other outcomes for the 31 "Abbott" districts -- the (mostly) disadvantaged New Jersey school districts that received additional aid for years (although inconsistently) under a series of court rulings. Among other problems:
- As Julia Sass Rubin points out, the authors don't include the charter schools, which are a large portion of the student enrollments in many districts.
- They don't account for changes over time in student characteristics.
- They assume the proficiency rates consistently measure "proficiency" across time, a huge assumption that, to my knowledge, has never been assessed by the state.
- They assume the Abbotts got "extra aid," even though, for years, the amount the districts received was not what the state's own law says is needed to provide an adequate education (and that amount isn't close to enough to expect equalized school outcomes anyway).
I could go on, but I want to make a larger point about so-called "gap" analyses, and why any attempt, like The Record's, to judge an education policy's efficacy based on "gaps" is fundamentally flawed.
And I'm going to keep this so simple it can be drawn out in Sharpie (click to enlarge):
Let's say you have two schools: one in a wealthier area, one in a less-wealthy place. The advantaged school starts off with a higher proficiency rate than the disadvantaged school. As time goes on, the disadvantaged school improves -- but so does the advantaged school.
After several years, both schools are performing better than they did previously. But the gap between them hasn't closed.
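The arithmetic behind this picture can be sketched with made-up numbers (the rates and years below are purely hypothetical, for illustration only):

```python
# Hypothetical proficiency rates (percent) illustrating how a
# persistent "gap" can coexist with real improvement in both schools.
years = [2015, 2016, 2017, 2018, 2019]
advantaged = [80, 82, 84, 86, 88]      # starts higher, gains 2 points/year
disadvantaged = [50, 52, 54, 56, 58]   # starts lower, gains 2 points/year

for y, a, d in zip(years, advantaged, disadvantaged):
    print(f"{y}: advantaged={a}%, disadvantaged={d}%, gap={a - d} points")

# Both schools gained 8 points over the period, yet the gap is
# 30 points every single year -- a "gap" analysis would score this
# as a policy failure despite genuine gains in both schools.
```

The point is that the gap is a difference of two moving numbers: if both rise in step, the gap never budges, no matter how much either school improves.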