Thursday, November 14, 2013

Shanker Blog » The Wrong Way To Publish Teacher Prep Value-Added Scores

The Wrong Way To Publish Teacher Prep Value-Added Scores

Posted on November 14, 2013

As discussed in a prior post, the research on applying value-added to teacher prep programs is still very much in its infancy. Even just a couple more years of data would go a long way toward at least partially addressing the many open questions in this area (including, by the way, the evidence suggesting that differences between programs may not be meaningfully large).
Nevertheless, a few states have decided to plow ahead and begin publishing value-added estimates for their teacher preparation programs. Tennessee, which seems to enjoy being first — its Race to the Top program is, a little ridiculously, called "First to the Top" — was ahead of the pack. It has once again published ratings for the few dozen teacher preparation programs that operate within the state. As mentioned in my prior post, if states are going to do this (and, as I said, my personal opinion is that it would be best to wait), it is absolutely essential that the data be presented along with thorough explanations of how to interpret and use them.
Tennessee fails to meet this standard.
For example, one of the big issues is separating selection (who applies and gets accepted to programs) from actual program effects (how well the candidates are trained once they get there). That is, a given program’s graduates may have relatively high value-added scores, but that doesn’t necessarily mean that the program they attended was the reason for the high scores. It may be that certain programs, by virtue of their location (or,