Latest News and Comment from Education

Monday, May 15, 2017

Don't Grade Teachers With a Bad Algorithm - Bloomberg

The Value-Added Model has done more to confuse and oppress than to motivate.

For more than a decade, a glitchy and unaccountable algorithm has been making life difficult for America's teachers. The good news is that its reign of terror might finally be drawing to a close.
I first became acquainted with the Value-Added Model in 2011, when a friend of mine, a high school principal in Brooklyn, told me that a complex mathematical system was being used to assess her teachers -- and to help decide such important matters as tenure. I offered to explain the formula to her if she could get it. She said she had tried, but had been told “it’s math, you wouldn’t understand it.”
This was the first sign that something very weird was going on, and that somebody was avoiding scrutiny by invoking the authority and trustworthiness of mathematics. Not cool. The results have actually been terrible, and may be partly to blame for a national teacher shortage.
The VAM -- actually a family of algorithms -- purports to determine how much “value” an individual teacher adds to a classroom. It relies on standardized test scores and holds teachers accountable for what’s called student growth: the difference between how well students actually performed on a test and how well a predictive model “expected” them to do.
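
To make that mechanics concrete, here is a minimal sketch in Python of the residual-based logic the paragraph describes. Everything in it -- the function name, the one-variable regression, the made-up scores -- is an illustrative assumption, not the proprietary formula any state actually used; real VAMs fold in many more covariates and adjustments.

    import numpy as np

    def value_added_scores(prior_scores, current_scores, teacher_ids):
        """Toy value-added model (illustrative only).

        Predicts each student's current test score from the prior year's
        score with a least-squares line, then averages the prediction
        errors ("student growth") by teacher.
        """
        prior = np.asarray(prior_scores, dtype=float)
        current = np.asarray(current_scores, dtype=float)
        teachers = np.asarray(teacher_ids)

        # The predictive model: expected = slope * prior + intercept.
        slope, intercept = np.polyfit(prior, current, deg=1)
        expected = slope * prior + intercept

        # "Growth" is how far each student landed above or below expectation.
        growth = current - expected

        # A teacher's score is the mean growth of his or her students.
        return {t: float(growth[teachers == t].mean()) for t in np.unique(teachers)}

    # Made-up numbers for two hypothetical teachers, A and B.
    print(value_added_scores(
        prior_scores=[60, 72, 85, 55, 90, 78],
        current_scores=[65, 70, 88, 52, 95, 75],
        teacher_ids=["A", "A", "A", "B", "B", "B"],
    ))

Note that under this design a teacher is scored on the average residual of a classroom-sized sample -- the very feature that makes the scores noisy, as described below.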
Derived in the 1980s from agricultural crop models, VAM got a big boost from the education reform movements of presidents Bush and Obama. Bush’s No Child Left Behind Act called for federal standards, and Obama’s Race to the Top program offered states a share of roughly $4.35 billion in federal funds in exchange for instituting formal teacher assessments. Many states went for VAM, sometimes with bonuses and firings attached to the results.
Fundamental problems immediately arose. Inconsistency was the most notable, statistically speaking: The same person teaching the same course in the same way to similar students could get wildly different scores from year to year. Teachers sometimes received scores for classes they hadn’t taught, or lost their jobs due to mistakes in code. Some cheated to raise their students' test scores, creating false baselines that could lead to the firing of subsequent teachers (assuming they didn’t cheat, too).
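
That year-to-year volatility isn’t mysterious: a score built from the average residual of one classroom is mostly noise. The toy simulation below -- with an assumed constant “true” teacher effect and assumed test-noise figures, chosen only for illustration -- shows how the same teacher’s score can swing from positive to negative with nothing changing but luck.

    import numpy as np

    rng = np.random.default_rng(seed=0)
    true_effect = 2.0   # assumed constant "true" teacher contribution, in points
    class_size = 25     # a typical classroom
    noise_sd = 15.0     # assumed student-level test noise, in points

    # Five years of the same teacher, same skill, similar students: each
    # year's score is the mean of class_size noisy "growth" values.
    for year in range(1, 6):
        growth = true_effect + rng.normal(0.0, noise_sd, size=class_size)
        print(f"Year {year}: score = {growth.mean():+.1f}")

Under these assumptions the standard error of a score is noise_sd / sqrt(class_size) = 3 points, so swings of six points or more around the true effect of 2 are routine -- enough to flip the same teacher from “good” to “bad.”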
Perhaps most galling was the sheer lack of accountability. The code was proprietary, which meant administrators didn't really understand the scores, and appealing the model's conclusions was next to impossible. Although economists studied such things as the effects of high-scoring teachers on students' longer-term income, nobody paid ...
 Big Education Ape: Rest In Peace EVAAS Developer William L. Sanders | VAMboozled! http://bit.ly/2qom9Lj