Tuesday, February 28, 2012

Big Apple’s Rotten Ratings « InterACT

If you’ve been following the news in education or in New York City recently, you’ve no doubt heard about the city releasing its “value-added” calculations of test data, putatively showing – with high margins of error – the effectiveness of teachers. I’ve made it something of a crusade to fight against the validity of this approach to teacher evaluation, and I won’t rehash the whole set of arguments here and now (though I did provide links to some of my prior posts below). If you want to look more deeply into the current events in New York City, you can find a good list of responses provided by – who else? – Larry Ferlazzo.

So far, I think the best image from the whole fiasco comes from math teacher Gary Rubinstein, who ran the numbers himself in a number of different ways. His first analysis works on the premise that a teacher should not become dramatically better or worse from one year to the next. He compared the data for 13,000 teachers over two consecutive years and found this – a virtually random distribution:

[Graph: value-added teacher ratings correlated over two consecutive years – image by Gary Rubinstein]
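The check Rubinstein ran is easy to reproduce in spirit: correlate each teacher’s rating in year one with the same teacher’s rating in year two. If the measure captures something stable about teachers, the correlation should be strongly positive; if the ratings are mostly noise, it will sit near zero. A minimal sketch – using simulated random percentiles, not the actual NYC dataset:

```python
import random

# Simulated example only: these "scores" are random draws, mimicking the
# near-random scatter Rubinstein reported. This is NOT the real NYC data.
random.seed(0)
n = 13_000  # roughly the number of teachers compared across two years
year1 = [random.uniform(0, 100) for _ in range(n)]  # percentile, year 1
year2 = [random.uniform(0, 100) for _ in range(n)]  # percentile, year 2

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    m = len(xs)
    mean_x, mean_y = sum(xs) / m, sum(ys) / m
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(year1, year2)
print(f"year-over-year correlation: r = {r:.3f}")  # near 0 for random data
```

A stable, meaningful rating would show up here as r well above zero; a cloud like the one in Rubinstein’s plot corresponds to r near zero, which is exactly what you get from two independent random draws.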

Read more of Gary’s analysis – Part 1 and Part 2.

When it comes to close examination of issues involving research and statistics, another source I read frequently