A Big Open Question: Do Value-Added Estimates Match Up With Teachers’ Opinions Of Their Colleagues?
A recent article about the implementation of new teacher evaluations in Tennessee details some of the complicated issues with which state officials, teachers and administrators are dealing in adapting to the new system. One of these issues is somewhat technical: whether the various components of the evaluations – most notably principal observations and test-based productivity measures (e.g., value-added) – tend to “match up.” That is, whether teachers who score high on one measure tend to do similarly well on the other (see here for more on this issue).
In discussing this type of validation exercise, the article notes:
If they don’t match up, the system’s usefulness and reliability could come into question, and it