Memo to Barack Obama: The folks who educate teachers would like to share some lessons on positive accountability with you
Why “value-added metrics” are really just the opposite
As the parent of children who attended public schools, and as an educator who has been a teacher of children, a school leader, and now a teacher-educator and director of a teacher-education program, I welcome the Obama administration’s efforts to ensure that educator preparation programs prepare their graduates to do their absolute best for the children entrusted to their care.
How they do this, however, can be helpful or harmful, depending on the kind of information they use to hold programs accountable and on what is done as a result of collecting that information. Examples of this kind of helpful data that can be used for accountability purposes include:
- Surveys of graduates and their employers about how well-prepared the graduates are for the many different aspects of teaching – this allows faculty to reflect on their strengths and weaknesses and adjust their programs accordingly.
- Tracking where education-school graduates go and how long they remain in the field of teaching – this could offer insight into how well prepared graduates are for the field, as research to date indicates that more poorly prepared teachers drop out more quickly.
- Statistics about how many teacher education candidates pass performance assessments (used for certification or program completion) that demonstrate how well they can actually teach – this information opens our eyes to new directions for instruction and needs in the classroom. Several such performance assessments have recently been developed and are being used in many states to license beginning teachers – much like the bar exam in law and the medical licensing exam for physicians. Just as passing rates on these tests are reported for professional schools in law and medicine, they could be reported for schools of education, as well.
Harmful data, however, are data that don’t reflect the actual work of teachers and/or programs and that are used punitively rather than for improvement. One example of an accountability practice that is not only unhelpful but actively harmful is the Obama administration’s proposal to withhold TEACH grants from students at particular universities on the basis of the test scores of students taught by those universities’ graduates.
The idea of evaluating teacher preparation programs using test scores of students taught by the graduates of those programs, referred to as “value-added measures” (VAM), is fraught with problems, not only for evaluating programs but also for evaluating individual teachers. These so-called value-added metrics have been found to be both highly unstable – shifting dramatically from year to year, based in large part on whom the teachers teach – and biased against particular groups of teachers, like those who teach new English learners, special education students, and even gifted and talented students who have already hit the ceiling on the grade-level tests (and therefore cannot show growth on those tests). The National Research Council and several research organizations have raised these concerns about using value-added measures for high-stakes decisions.