Arne-Ology & the Bad Incentives of Evaluating Teacher Prep with Student Outcome Data

Posted on April 25, 2014


As I understand it, USDOE is going to go ahead with the push to have teacher preparation programs rated in part on the student growth outcomes of children taught by individuals receiving credentials from those programs. Now, the layers of problems with this method are many, and I've addressed them previously, both here and in professional presentations.
  1. This post summarizes my earlier concerns about how the concept fails both statistically and practically.
  2. This post explains what happens at the ridiculous extremes of this approach (a warped, endogenous cycle of reformy awesomeness).
  3. These slides present a more research-based, and somewhat less snarky, critique.
Now, back to the snark.
This post builds on my most recent post in which I challenged the naive assertion that current teacher ratings really tell us where the good teachers are. Specifically, I pointed out that in Massachusetts, if we accept the teacher ratings at face value, then we must accept that good teachers are a) less likely to teach in middle schools, b) less likely to teach in high poverty schools and c) more likely to teach in schools that have more girls than boys.
Now extend these findings to the policy of rating teacher preparation programs by the ratings their teachers receive: if these ratings are strongly biased by school context, then it would make sense for Massachusetts teacher preparation institutions to try to get their teachers placed in low-poverty elementary schools that have fewer boys.
Given that New Jersey growth percentile data reveal even more egregious patterns of bias, I now…