EdX Now Has Software to Grade Your Essays

Welcome to the future.

EdX, the nonprofit online education platform founded by Harvard and MIT, has introduced a computer system that grades students’ essays and short answers on exams, reports The New York Times. The EdX tool asks a human grader to evaluate 100 essays, after which it trains itself to instantly grade any number of others.
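What the Times describes is, in essence, a standard supervised-learning setup: a small batch of human-scored essays serves as training data for a model that then scores new submissions on its own. As a rough sketch of the idea only (this is not EdX's actual system; the TF-IDF features, ridge regression, and function names here are illustrative assumptions), such a grader might look like this:

```python
# Hypothetical sketch of a "train on ~100 graded essays, then grade the rest" scorer.
# Not EdX's implementation; the feature choices and model are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline


def train_grader(essays, human_scores):
    """Fit a scoring model on a small set of human-graded essays."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
        Ridge(alpha=1.0),                     # simple regularized regression onto scores
    )
    model.fit(essays, human_scores)
    return model


def grade(model, new_essays):
    """Return predicted scores, instantly, for any number of new essays."""
    return model.predict(new_essays)


# Usage (with your own data):
# model = train_grader(training_essays, training_scores)  # ~100 human-graded items
# scores = grade(model, incoming_essays)                   # instant feedback at scale
```

The point of the sketch is the workflow, not the particulars: a modest amount of human judgment up front buys automatic, immediate scoring afterward.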

Automated essay grading isn’t new, and there are plenty of people putting forward the critiques you might expect, several of them quoted in the Times piece. What’s interesting are the two ways that EdX and its defenders are framing automated essay grading to make it sound not just like a necessity when you’re teaching an online course to tens of thousands of people, though it is that, but an improvement over human grading.

The first, put forward by EdX President Anant Agarwal in the Times, is that automated grading gives instant feedback, so that students can write and rewrite their exams without having to wait for a teacher to evaluate each draft.

“There is a huge value in learning with instant feedback,” Agarwal tells the paper. “Students are telling us they learn much better with instant feedback.”

Kevin Drum at Mother Jones, who’s also bullish on automated grading, puts this another way:

Anyone who teaches writing will tell you about the value of having students write often and with quick feedback. Every day if possible. The problem is that, practically speaking, it’s not usually possible. So if an automated system can handle short student essays and provide decent—not great, but decent—feedback immediately, that has huge potential.

The second defense, also put forward by Drum, is that in complaining about automated essay grading, we’re making “perfect” the enemy of “good.”

There’s no question that a good reader, given sufficient time, will do a far better job of grading and feedback than any machine. That may change someday, but it’s certainly true today.

But the vast majority of grading isn’t done by top notch readers given plenty of time. It’s done by harried, mediocre readers. Can machines do as well or better than they do? Probably.

That’s one opinion, and of course, others will suggest that we should look for ways to give everyone access to a process that provides nuanced, careful feedback to all kinds of students, rather than accepting a mediocre compromise. But in the absence of a good way to provide that to the thousands worldwide who participate in various online learning enterprises, continuing to improve our artificial intelligence solutions might be the only way.