Monday, April 9, 2012

Digital | How machine-based tutoring could disrupt human tutors

Back in January, my friend Bror Saxberg, chief learning officer of Kaplan, published an eye-popping blog post about a meta-analysis that Kurt VanLehn published recently, covering nearly 100 well-constructed studies of computers used to tutor learners.
A couple of headlines from the meta-analysis are worth spotlighting here.
First, the work raises some questions about Benjamin Bloom’s analysis from a couple of decades ago, which suggested that well-designed human tutoring could deliver a whopping 2 standard deviations’ worth of improvement in learning performance. VanLehn’s paper suggests that the effect size is closer to 0.79 standard deviations than to 2, which is still nothing to scoff at.
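
For readers who want a concrete sense of what those numbers mean, here is a minimal sketch, in Python with made-up scores purely for illustration, of how a standardized effect size such as Cohen's d is computed by comparing a tutored group with an untutored group:

# Illustrative only: compute a standardized effect size (Cohen's d)
# from hypothetical post-test scores for a tutored and an untutored group.
from statistics import mean, stdev

tutored   = [82, 88, 91, 79, 85, 90]   # hypothetical scores
untutored = [74, 70, 81, 68, 77, 72]   # hypothetical scores

# Pooled standard deviation of the two groups
n1, n2 = len(tutored), len(untutored)
s1, s2 = stdev(tutored), stdev(untutored)
pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5

# Effect size: the difference in means, expressed in standard deviations.
# An effect size of 0.79 means the tutored group scores about 0.8 standard
# deviations above the untutored group.
d = (mean(tutored) - mean(untutored)) / pooled_sd
print(f"Cohen's d = {d:.2f}")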
Second, as Saxberg details, VanLehn does some important work in splitting up the tutoring research by “grain size” into answer-based tutors, step-based tutors, substep-based tutors, and human tutoring, as well as by the type of student behavior, which ranges from passive to active to constructive and, finally, interactive.