Shanker Blog: Teacher Evaluations and Turnover in Houston
We are now entering a period in which we are likely to see many studies released on the impact of new teacher evaluations. This remarkably rapid policy shift, perhaps the centerpiece of the Obama Administration’s education efforts, was sold largely on evidence of the importance of teacher quality.
The basic argument was that teacher effectiveness is perhaps the most important factor under schools’ control, and that the best way to improve that effectiveness was to identify and remove ineffective teachers via new teacher evaluations. Without question, there was a logic to this approach, but dismissing low-performing teachers, or compelling their exits, does not occur in a vacuum. Even if a given policy causes more low performers to exit, the effects of this shift can be attenuated by turnover among higher performers, not to mention other important factors, such as the quality of applicants (Adnot et al. 2016).
A new NBER working paper by Julie Berry Cullen, Cory Koedel, and Eric Parsons addresses this dynamic directly by looking at the impact on turnover of a new evaluation system in Houston, Texas. It is an important piece of early evidence on one new evaluation system, but the results also speak more broadly to how these systems work.
Houston is a particularly interesting policy context. Teachers there do not have tenure, and most work under one-year contracts. Moreover, Houston’s evaluation system, unlike many of its counterparts elsewhere, places far more control in the hands of principals. The impact of this policy, then, is in many respects that of providing principals with more information about their teachers’ performance, and the ability to act on it (see also Rockoff et al. 2012).
Cullen et al. focus on the relationship between teacher turnover and performance before and after the implementation of the new system in Houston (called the Effective Teachers Initiative, or ETI). Put differently, the focus is on the change in the composition of teachers who exit, pre- and post-ETI.
Prior to ETI, there was a negative relationship between teacher effectiveness and exits – i.e., less effective teachers were more likely to exit than their more effective colleagues. Effectiveness here is defined in terms of validated measures of teachers’ ability to raise students’ test scores, in part because these value-added scores, unlike the other components of the system, are available both before and after the new evaluations were implemented.
The big finding of Cullen et al. is that the relationship was stronger after the onset of the new evaluation system, with the estimated effects concentrated among low-performing teachers in schools serving low-performing students, who were more likely to exit the district than they were before ETI.
On the one hand, this suggests that the new evaluations worked as intended. Under a system in which principals were armed with better information about their teachers’ performance (full evaluation results instead of single-year value-added scores), teachers who were less effective in raising test scores were more likely to exit the district (or be dismissed) post-ETI than they were prior to ETI, particularly in schools serving lower-performing students. On the other hand, exits of all kinds increased under the new evaluations, including among teachers who were rated as average and high performers. The extent to which this spike is attributable to the new evaluation system per se is unclear, but it served to “dilute” the impact on student achievement of the increase in exits among low performers. There is also some indication that higher-rated teachers were more likely to switch out of schools serving low-performing students after ETI (versus before the policy), which would also attenuate the impact of the policy.
The upshot is that Houston’s new teacher evaluation system did seem to boost differential attrition productively, but the magnitude of this increase, combined with the countervailing forces noted above, was insufficient to have a meaningful effect on student achievement.