Explained Variation Is Not A Measure of Importance
Back in early April the American Statistical Association put out a “Statement on Using Value-Added Models for Educational Assessment”.
Last month, Raj Chetty, John Friedman, and Jonah Rockoff issued a response, in part because so many commentators seemed to misunderstand the ASA statement and in part because the ASA seemed not to have incorporated some of Chetty et al.’s most recent research.
Diane Ravitch’s unimpressed follow-up involves a few all-too-common misconceptions:
What do Chetty, Friedman, and Rockoff say about the ASA statement? Do they modify their conclusions? No. Did it weaken their arguments in favor of VAM? Apparently not. They agree with all of the ASA cautions but remain stubbornly attached to their original conclusion that one “high-value added (top 5%) rather than an average teacher for a single grade raises a student’s lifetime earnings by more than $50,000.” How is that teacher identified? By the ability to raise test scores. So, again, we are offered the speculation that one tippy-top fourth-grade teacher boosts a student’s lifetime earnings, even though the ASA says that teachers account for “about 1% to 14% of the variability in test scores…”
The argument is that if teachers account for only a small fraction of the variation in student test scores, then teacher quality is probably not a useful lever for improving education outcomes.
This is wrong for at least three reasons.
First, to know whether 1%-14% is a lot of variation to account for, we have to compare teachers to something else. It’s not entirely clear from her post, but Ravitch seems to want to compare teachers to all other factors put together.
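A small simulation makes the underlying point concrete. The numbers below are my own illustration, not figures from the ASA statement or from Chetty et al.: I assume teacher quality varies much less across classrooms than all other factors combined, so that teachers explain only a modest share of score variance, yet a single standard deviation of teacher quality still moves scores substantially.

```python
import random
import statistics

# Toy model (hypothetical parameters, chosen for illustration only):
# scores depend on teacher quality (SD = 1) and on everything else
# (family, peers, measurement noise; SD = 5, i.e. far more variable).
random.seed(0)
n = 200_000

teacher = [random.gauss(0, 1) for _ in range(n)]  # teacher quality
other = [random.gauss(0, 5) for _ in range(n)]    # all other factors
beta = 2.0  # assumed effect of one SD of teacher quality on scores

score = [beta * t + o for t, o in zip(teacher, other)]

# Share of score variance attributable to teachers:
# beta^2 * 1 / (beta^2 * 1 + 25) = 4/29, roughly 14% -- inside the
# ASA's quoted 1%-14% range, by construction.
share = statistics.variance([beta * t for t in teacher]) / statistics.variance(score)
print(f"variance share explained by teachers: {share:.1%}")

# Yet assignment to a top-5% teacher (about +1.64 SD of quality)
# still raises the expected score by beta * 1.64 points.
gain = beta * 1.64
print(f"expected gain from a top-5% teacher: {gain:.2f} points")
```

So a factor can sit at the top of the ASA's 1%-14% range and still, when deliberately moved, produce a meaningful change in outcomes: explained variation describes how much a factor happens to vary in the data, not how much it matters when you change it.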