Friday, July 24, 2015

Evaluating teachers: Precise but irrelevant metrics? - The Hechinger Report

Evaluating teachers: Precise but irrelevant metrics?

I've told this joke before:
Two hot-air balloonists get lost, and they’re floating aimlessly. They spot someone down below and call out, “Hello!”
The person on the ground replies, “Hello!”
“Where are we?” one calls down.
Up comes the reply: “You’re in a balloon!”
They continue to drift, when one of the balloonists says to the other, “Who was that?”
The other responds, “That was obviously an economist.”
“An economist? How can you tell?” the first asks.
“Because what he said was precise, but irrelevant.”
Unfair to economists? Of course! But surely in keeping with the mongoose-cobra relationship that characterizes sociologists and economists. (And some of my best friends, etc., etc.) A case in point:
Earlier this week, FiveThirtyEight, founded by data whiz Nate Silver, posted a feature on the application of value-added models to the evaluation of K-12 teachers. Quantitative editor Andrew Flowers argued that a key part of the debate is over, and that recent studies have converged on the finding that value-added measures accurately predict students' future test scores. The article cites all of the usual suspects: Raj Chetty, John Friedman, Jonah Rockoff, Jesse Rothstein, Tom Kane, and Doug Staiger. Thoughtful and creative economists one and all, armed with an arsenal of quantitative methods and administrative data to which to apply them.
The debate has hinged on the fact that students are usually not randomly assigned to teachers, and thus one can never be sure that differences among teachers in their students' test scores are due to the influence of the teacher, rather than to unmeasured differences in the attributes of students or of a classroom.