Special Issue of “Educational Researcher” Examines Value-Added Measures (Paper #1 of 9)
A few months ago, the flagship journal of the American Educational Research Association (AERA) – the peer-reviewed journal titled Educational Researcher (ER) – published a “Special Issue” including nine articles examining value-added measures (VAMs) (i.e., one introduction (reviewed below), four feature articles, one essay, and three commentaries). I will review each of these pieces separately over the next few weeks, although if any of you want an advance preview, do click here, as AERA made each of these articles free and accessible.
In this “Special Issue” editors Douglas Harris – Associate Professor of Economics at Tulane University – and Carolyn Herrington – Professor of Educational Leadership and Policy at Florida State University – solicited “[a]rticles from leading scholars cover[ing] a range of topics, from challenges in the design and implementation of teacher evaluation systems, to the emerging use of teacher observation information by principals as an alternative to VAM data in making teacher staffing decisions.” They challenged authors “to participate in the important conversation about value-added by providing rigorous evidence, noting that successful policy implementation and design are the product of evaluation and adaption” (assuming “successful policy implementation and design” exist, but I digress).
More specifically, in the co-editors’ Introduction to the Special Issue, Harris and Herrington note that in this special issue they “pose dozens of unanswered questions [see below], not only about the net effects of these policies on measurable student outcomes, but about the numerous, often indirect ways in which [unintended] and less easily observed effects might arise.” This section, in my opinion, offers the most “added value.”
Here are some of their key assertions:
- “[T]eachers and principals trust classroom observations more than value added.”
- “Teachers—especially the better ones—want to know what exactly they are doing well and doing poorly. In this respect, value-added measures are unhelpful.”
- “[D]istrust in value-added measures may be partly due to [or confounded with] frustration with high-stakes testing generally.”
- “Support for value added also appears stronger among administrators than teachers…But principals are still somewhat skeptical.”
- “[T]he [pre-VAM] data collection process may unintentionally reduce the validity and credibility of value-added measures.”
- “[I]t seems likely that support for value added among educators will decrease as the stakes increase.”
- “[V]alue-added measures suffer from much higher missing data rates than classroom observation[s].”
- “[T]he timing of value-added measures—that they arrive only once a year and during the middle of the school year when it is hard to adjust teaching assignments—is a real concern among teachers and principals alike.”
- “[W]e cannot lose sight of the ample evidence against the traditional model [i.e., based on snapshot measures examined once per year as was done for decades past, or pre-VAM].” This does not make VAMs “better,” but most researchers would agree with this statement.
Conversely, here are some points or assertions that should give pause:
- “The issue is not whether value-added measures are valid but whether they can be used in a way that improves teaching and learning.” I would strongly argue that