TEACHER VALUE-ADDED SCORES: PUBLISH AND PERISH
On the heels of the Los Angeles Times’ August decision to publish a database of teachers’ value-added scores, New York City newspapers are poised to do the same, with a hearing scheduled for late November.
Here’s a proposition: Those who support the use of value-added models (VAM) for any purpose should be lobbying against the release of teachers’ names and value-added scores.
The reason? Publishing the names directly compromises the accuracy of an already-compromised measure. Those who blindly advocate for publication – often saying things like “what’s the harm?” – betray their lack of knowledge about the importance of the models’ core assumptions, and the implications they carry for the accuracy of results. Indeed, the widespread publication of these databases may even threaten VAM’s future utility in public education.
Let me explain. Value-added models, which are statistical techniques for isolating the effect of individual teachers on gains in their students’ test scores, rely on a set of core assumptions. Some of these can be tested; others cannot. One of the most important assumptions – and one that has recently gotten a lot of public attention – is that students are randomly assigned to teachers.
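To see what these models actually do, consider a bare-bones sketch of the simplest variety: regress students’ current test scores on their prior scores plus an indicator for each teacher, and read the teacher coefficients as the value-added estimates. The simulation below is purely illustrative; the students, effect sizes, and parameter values are all invented, and the VAMs districts actually use are far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 3 teachers, 60 students each, with known "true" effects.
n_per = 60
true_effects = np.array([0.0, 0.3, -0.2])
teacher = np.repeat(np.arange(3), n_per)

prior = rng.normal(0, 1, teacher.size)               # prior-year scores
noise = rng.normal(0, 0.5, teacher.size)
score = 0.8 * prior + true_effects[teacher] + noise  # current-year scores

# Covariate-adjustment VAM: regress current score on prior score
# plus teacher indicators (teacher 0 is the reference category).
dummies = (teacher[:, None] == np.arange(1, 3)).astype(float)
X = np.column_stack([np.ones(teacher.size), prior, dummies])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

print("estimated effects (relative to teacher 0):", beta[2:])
print("true effects      (relative to teacher 0):", true_effects[1:] - true_effects[0])
```

Because these simulated students land in classrooms independently of anything that affects their scores, the estimates line up with the true effects.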
If students and teachers were in fact randomly assigned, then with enough years of data, value-added models could, in principle, produce unbiased estimates of each teacher’s effect on test score growth.
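But suppose assignment is not random. Here is the same illustrative model fit to simulated students who are sorted into classrooms by an unobserved trait, call it motivation, that the prior-score control cannot capture. Every teacher is given an identical true effect of zero, yet the model hands out sizable “value-added” scores anyway. Again, every number here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: three teachers with IDENTICAL true effects (all zero), but
# students are sorted to teachers by an unobserved trait ("motivation")
# that raises test score gains and is not captured by the prior score.
n_per = 200
motivation = rng.normal(0, 1, 3 * n_per)
teacher = np.argsort(np.argsort(motivation)) // n_per  # rank-based sorting

prior = rng.normal(0, 1, 3 * n_per)
score = 0.8 * prior + 0.5 * motivation + rng.normal(0, 0.5, 3 * n_per)

# The same covariate-adjustment VAM as above.
dummies = (teacher[:, None] == np.arange(1, 3)).astype(float)
X = np.column_stack([np.ones(teacher.size), prior, dummies])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# Every teacher's true effect is zero, yet the model attributes the
# motivation gap to the teachers who received the sorted students.
print("estimated 'effects' relative to teacher 0:", beta[2:])
```

Sorting in real schools is subtler than this rank-ordering, but the logic is the same: whatever drives non-random assignment and also affects test score gains gets attributed to teachers.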