Latest News and Comment from Education

Saturday, June 20, 2020

CURMUDGUCATION: No, Software Still Can't Grade Student Essays

No, Software Still Can't Grade Student Essays


One of the great white whales of computer-managed education and testing is the dream of robo-scoring, software that can grade a piece of writing as easily and efficiently as software can score multiple choice questions. Robo-grading would be swift, cheap, and consistent. The only problem after all these years is that it still can’t be done.
Still, ed tech companies keep claiming that they have finally cracked the code. One of the people at the forefront of debunking these claims is Les Perelman. Perelman was, among other things, the Director of Writing Across the Curriculum at MIT before he retired in 2012. He has long been a critic of standardized writing tests; he has demonstrated that he can predict an essay's score just by looking at it from across the room (spoiler alert: it's all about the length of the essay). In 2007, he gamed the SAT essay portion with an essay about how "American president Franklin Delenor Roosevelt advocated for civil unity despite the communist threat of success."
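
To make the length observation concrete, here is a deliberately crude sketch of what a "score the essay without reading it" heuristic could look like. It is purely illustrative: the function name predict_score_from_length and the word-count cutoffs are invented for this example and are not taken from Perelman's work, the SAT rubric, or any actual scoring product.

# Hypothetical sketch only: a toy "scorer" in the spirit of Perelman's
# observation that length alone is a strong predictor of a standardized-test
# essay score. The cutoffs below are invented for illustration; they are not
# drawn from Perelman's work, the SAT rubric, or any real scoring product.

def predict_score_from_length(essay: str) -> int:
    """Return a 1-6 'score' based only on word count, ignoring what is said."""
    word_count = len(essay.split())
    # Arbitrary bands: the longer the essay, the higher the "score".
    if word_count < 100:
        return 1
    if word_count < 200:
        return 2
    if word_count < 300:
        return 3
    if word_count < 400:
        return 4
    if word_count < 500:
        return 5
    return 6

if __name__ == "__main__":
    filler = "verily " * 450                    # 450 words of repeated filler
    print(predict_score_from_length(filler))    # prints 5, despite saying nothing

The point of the sketch is the flaw it shares, in spirit, with real robo-graders: nothing in it ever has to understand the essay.
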
He’s been a particularly staunch critic of robo-grading, debunking studies and defending the very nature of writing itself. In 2017, at the invitation of Australia’s teachers union, Perelman highlighted the problems with a plan to robo-grade the country’s already-faulty national writing exam. This has annoyed some proponents of robo-grading (one writer whose study Perelman had debunked declared, “I’ll never read anything Les Perelman ever writes”). But perhaps nothing Perelman has done has more thoroughly embarrassed the robo-graders than his creation of BABEL.
All robo-grading software starts out with one fundamental limitation—computers cannot read or…

CONTINUE READING: CURMUDGUCATION: No, Software Still Can't Grade Student Essays