AI Pokes Another Hole In Standardized Testing
The stories were supposed to capture a new step forward in artificial intelligence. "A Breakthrough for A.I. Technology: Passing an 8th-Grade Science Test," said the New York Times. "AI Aristo takes science test, emerges multiple-choice superstar," said TechXPlore. Both stories were about Aristo (the name suggests a child version of Aristotle), a project of Paul Allen's Allen Institute for Artificial Intelligence, where the in-house headline read, "How to tutor AI from an 'F' to an 'A.'"
The occasion for all this excitement is Aristo's conquest of a big standardized test: it answered a convincing 80% of the questions correctly on the 12th-grade science test and 90% on the 8th-grade test. Four years ago, none of the programs that attempted the feat succeeded at all.
We see these occasional steps forward greeted with a certain amount of hyperbole (last year the New York Post announced that computers were "beating humans" at reading comprehension, and the BBC once announced that an AI "had the IQ of a four-year-old child"), but the field still has a very long way to go. And as it tries to get there, it tells us something about the educational tasks we set for humans.
Wired perhaps best captured the issue in a story headlined "AI Can Pass Standardized Tests—But It Would Fail Preschool." AIs still can't answer open-ended questions, and Aristo was designed strictly for multiple choice, and only within certain parameters. Aristo has problems with