Latest News and Comment from Education

Thursday, April 6, 2017

New (Unvetted) Research about Washington DC’s Teacher Evaluation Reforms | VAMboozled!

In November of 2013, I published a blog post about a “working paper” released by the National Bureau of Economic Research (NBER) and written by Thomas Dee, Professor of Economics and Educational Policy at Stanford, and James Wyckoff, Professor of Economics and Educational Policy at the University of Virginia. In the study, titled “Incentives, Selection, and Teacher Performance: Evidence from IMPACT,” Dee and Wyckoff (2013) analyzed the controversial IMPACT educator evaluation system put into place in the District of Columbia Public Schools (DCPS) under then-Chancellor Michelle Rhee. They presented what they termed “novel evidence” that the “uniquely high-powered incentives” linked to “teacher performance” under IMPACT improved the performance of high-performing teachers, and that dismissal threats increased the voluntary attrition of low-performing teachers and improved the performance of the students taught by the teachers who replaced them.
I critiqued this study in full (see both short and long versions of this critique here), and ultimately asserted that the study had “fatal flaws” that compromised the (exaggerated) claims Dee and Wyckoff (2013) advanced. These flaws included, but were not limited to, the fact that only 17% of the teachers in this study (i.e., teachers of reading and mathematics in grades 4 through 8) were actually evaluated under the value-added component of the IMPACT system. Put inversely, 83% of the teachers included in this study about teachers’ “value-added” had no student test scores available to determine whether they were indeed of “added value.” Those 83% were instead assessed on their overall levels of effectiveness, or their subsequent increases/decreases in effectiveness, using only the subjective observational and other self-report data included within the IMPACT system. Hence, while the authors’ findings were presented as hard fact, given this 17% figure and the sample limitations it reflects, their (exaggerated) conclusions did not generalize across teachers, despite what they claimed.
In short, the extent to which Dee and Wyckoff (2013) oversimplified very complex data, and with it a very complex context and policy situation, and then exaggerated questionable findings, was at issue, and should have been reconciled prior to the study’s release. I should add that this study was published in 2015 in the (economics-oriented and not educational-policy-specific) Journal of Policy Analysis and Management.