The quest continues--how best to market the notion of having student essays scored by software instead of actual humans. It's a big, bold dream, a dream of a world in which test manufacturers don't have to hire pesky, expensive meat widgets and the fuzzy, unscientific world of writing can be reduced to hard numbers--numbers that we know are objective and true because, hey, they came from a computer. The problem, as I have noted many times elsewhere, is that after all these years, the software doesn't actually work.
But the dream doesn't die. Here's a paper from Mark D. Shermis (University of Houston--Clear Lake) and Susan Lottridge (AIR) presented at a National Council on Measurement in Education meeting in Toronto, courtesy of AIR Assessment (one more company that deals in robo-scoring). The paper is two years old, but it's worth a look because it shows the reasoning (or lack thereof) used by the folks who just can't let go of the robograding dream.
"Communicating to the Public About Machine Scoring: What Works, What Doesn't" is all about managing the PR when you implement roboscoring. Let's take a look.
Warming Up
First, let's lay out the objections that people raise, categorized by a 2003 paper as humanistic, defensive, and construct objections.
The humanistic objection stipulates that writing is a unique human skill and cannot be evaluated by a machine.