The War On Error
The debate over the use of value-added models (VAM) in teacher evaluations has reached an impasse of sorts. Opponents of VAM use contend that the estimates are too imprecise to be used in evaluations; supporters argue that current systems are inadequate and that, while all measures entail error, this does not preclude using the estimates.
This back-and-forth may be missing the mark, and it is not particularly useful in the states and districts that are already moving ahead. The more salient issue, in my view, is less the amount of error than how that error is handled when the estimates are used (along with other measures) in evaluation systems.
Teachers certainly understand that some level of imprecision is inherent in any evaluation method—indeed, many will tell you about colleagues who shouldn’t be in the classroom, but receive good evaluation ratings from principals year after year. Proponents of VAM often point to this tendency of current evaluation systems to give “false positive” ratings as a reason to push forward quickly. But moving so carelessly that we disregard the error in current VAM estimates—and possible methods to reduce its negative impacts—is no different than ignoring false positives in existing systems.