Thursday, April 9, 2015

D.C.'s lessons for New York on teacher evaluations

When Governor Andrew Cuomo proposed a new teacher evaluation system in January that would rely heavily on the judgment of outside consultants, rank-and-file teachers and principals across the city exploded in outrage.
Similar consultants have already evaluated teachers in a handful of other places across the country, including Toledo, Ohio; Montgomery County, Maryland; and perhaps most notably, Washington, D.C. And experience elsewhere suggests that having outside educators observe teachers can be successful in the short term.
But whether the use of outside evaluators improves teaching in the long run remains an open question.
Research supported by the Bill & Melinda Gates Foundation, which has advocated for more rigorous teacher evaluations, suggests that using two evaluators tends to yield more accurate assessments than using one. But such a system is costly to implement and requires careful planning, as well as buy-in from teachers, to work well.
The discussion in New York about using outside evaluators to help grade teachers has been divisive. The teachers’ and principals’ unions weren’t consulted before the plan was announced, and Cuomo initially tied increases in school funding to the plan’s passage. Now, the Legislature may hand some responsibility for a new evaluation system to the Board of Regents, with a deadline of June 30.
The Washington, D.C. system, with its reliance on outside observers to evaluate teachers, is probably the most similar to Cuomo’s proposal, but differs in key ways—most notably in the weight placed on the outsiders’ opinions.
“This type of evaluation system can help drive and sustain improvements in performance, but they have to be well communicated to teachers,” said Thomas Dee, an education researcher at Stanford University and co-author of a study that found the D.C. system to be an effective means of encouraging low-performing teachers to improve their practice.
There are lessons for New York in the Washington experience, both positive and negative.
FORMER D.C. SCHOOLS CHANCELLOR MICHELLE RHEE began implementing the city’s new teacher evaluation system, called IMPACT, more than five years ago. The old system consisted of annual teacher observations done by principals. By contrast, IMPACT relies on observational scores both from principals and from “master educators”—highly rated former teachers who work full-time for the district—as well as on student test-score growth, which increasingly is being used to evaluate teachers nationwide.
In 2008, the Washington school district invited hundreds of teachers to participate in focus groups designed to engage teachers in the new system’s design. Teachers wanted evaluators who had specific content expertise, something principals couldn’t always provide. The master educators’ program was born of this request, according to district officials.
To become master educators, teachers must survive a six-part application process. Summer training includes four rounds of video testing: candidates complete mock evaluations of lessons that have already been scored and are graded on their accuracy. They must pass three of the four tests before performing real evaluations, and they take three additional follow-up tests throughout the year.
Forty master educators work full time to evaluate nearly 4,000 teachers, managing caseloads of about 100 teachers each semester. The district spends $6.2 million per year to fund the program, according to Maggie Thomas, assistant director of the master educator program for the D.C. public schools.
Teachers are judged on broad categories covering how well they explain content to students, how organized and time-efficient their lessons are, and how they reach students of differing abilities.
The IMPACT system rates teachers on a scale from “ineffective” to “highly effective.” “Ineffective” teachers are dismissed immediately and teachers rated “minimally effective” are given one year to improve before being considered for dismissal. As teachers move up the district’s professional-designation ladder from “teacher” to “expert teacher,” they are observed with less frequency. Annual IMPACT scores determine whether a teacher moves up the ladder.
But when the IMPACT plan was rolled out, some teachers, and the local teachers' union, saw it as overly punitive—more focused on firing teachers than helping them improve. In 2010, IMPACT’s first year, nearly 2 percent of teachers were fired. Then, in 2011, another 5 percent of teachers, 206 men and women, were fired for poor performance. Union officials say hundreds more have been fired since then. 
The results of IMPACT are themselves a matter of dispute. The study by Stanford’s Dee, with co-author James Wyckoff from the University of Virginia, found that the system has had a positive effect on teachers with both low and high ratings. The potential for significant bonuses and permanent raises has encouraged good teachers to get better. And the real threat of dismissal has pushed struggling teachers to leave voluntarily or to seek to improve, often with help from the master educators assigned to them. Either outcome is good for students, said Thomas, who works on the D.C. master educators program.
Another study of the IMPACT system, published by Education Sector, an education think tank run by the American Institutes for Research, reported that the teachers interviewed “almost universally liked the people who evaluated them, finding them for the most part helpful, empathetic and smart.”
But other education experts are more skeptical.
Linda Darling-Hammond of Stanford University criticized IMPACT’s heavy reliance on test-score growth, which can be an unreliable way to measure teacher effectiveness. And while test scores in the district have improved since IMPACT began, a recent study by the National Urban League found that Washington produces the nation’s largest reading-proficiency gaps between black, Hispanic and white fourth-graders.
Most of the complaints about IMPACT have in fact focused on the use of test scores, rather than on the master educators. (In 2013, calculation errors resulted in erroneous evaluation scores for 44 teachers, including one who was mistakenly fired.)
But the outside evaluators have also been a source of frustration for some teachers.
Laura Fuchs, a social studies teacher, said in a statement to Chancellor Kaya Henderson in 2011 that short visits from master educators don’t show the full picture of teachers’ efforts in the classroom, and she criticized how post-observation conferences are handled.