For a new technology to make it in the market it must hurdle three big barriers. One: Are potential customers aware of it? Two: Does the technology work? Three: Do potential customers actually believe that the technology works?
Many times, the last is the hardest to overcome.
Just consider the case of poor, and poorly described, artificial intelligence applications. For a long time, the de facto definition of AI was essentially, “That stuff computers can’t do yet.”
Yet technologies stuck in this particular uncanny valley do occasionally manage to get free--if the conditions are right. And that may be positive news for edtech’s current believability-challenged poster child: automated essay scoring.
Rock ‘em Sock’em Robots
Automated (or “machine”) essay scoring isn’t new. A decade ago, I worked with a Boulder, Colorado-based company, just as it was being acquired by Pearson, that had already gone far beyond the crude word-and-sentence-length counting approach of other so-called “intelligent” essay graders. I became comfortable, if not entirely conversant, with the concepts of natural language processing, n-dimensional word spaces, and contextual linguistic relationships.
In the ten years since, research into making automated essay scoring more accurate has exploded. The William and Flora Hewlett Foundation sponsored a competition that pitted nine scoring engines against each other. Another online public competition put roughly 150 teams to the test with their technological creations. The computer scientists at edX, best known for a platform for Massive Open Online Courses (MOOCs), are developing their own open-source automated system called Enhanced AI Scoring Engine (EASE).
Most, if not all, of these systems require training. Much as you might train your voice recognition software to better understand you, essay scoring software needs to see dozens or hundreds of human-scored essays in order to learn, for a particular “prompt” or question, what distinguishes a good essay from a bad one. The various competitions and outside analyses have found that, once trained, the best systems today score about as accurately as humans do.
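To make the training idea concrete, here’s a deliberately toy sketch in Python (not how any commercial engine actually works; real systems use far richer linguistic features). It “trains” by storing human-scored essays as word-count vectors, then scores a new essay by averaging the scores of its most similar neighbors:

```python
from collections import Counter
import math

def vectorize(text):
    """Crude bag-of-words vector; real engines use much richer NLP features."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(graded_essays):
    """'Training' here is just storing vectorized, human-scored essays."""
    return [(vectorize(text), score) for text, score in graded_essays]

def predict(model, essay, k=3):
    """Score a new essay as the average score of its k most similar neighbors."""
    essay_vec = vectorize(essay)
    ranked = sorted(model, key=lambda m: cosine(m[0], essay_vec), reverse=True)
    top = ranked[:k]
    return sum(score for _, score in top) / len(top)
```

The point of the sketch is the workflow, not the model: the engine never sees a rubric, only examples of what human graders rewarded for that prompt, which is why each new prompt needs its own training set.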
I’ve got your, I mean its, back
That doesn’t mean automated essay scoring should fully replace humans. But a well-oiled machine grader can work alongside people, each backing up the other: humans, to rescue an incredibly creative essay that defies straightforward evaluation or to provide deep feedback; computers, to flag a suddenly tired, inconsistent human scorer and to provide basic feedback.
(Personally, I might have preferred the latter for a school creative writing course in which my teacher rejected my science-fiction short because it wasn’t in a style of which he approved--a story I turned around and sold to a science-fiction magazine.)
Automated essay scoring is finally gaining traction. West Virginia has been using CTB/McGraw-Hill’s engine, Utah has applied Measurement Incorporated’s technology since 2010, and Florida plans to engage American Institutes for Research’s AutoScore for its new statewide writing assessment. PARCC reportedly is considering Pearson’s engine for its Common Core assessments. Typically, the automated system is a “second reader” alongside a human scorer; if the two disagree, the essay gets kicked upstairs to another human to review.
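That “second reader” workflow reduces to a simple rule. The sketch below is illustrative only; actual agreement thresholds and tie-breaking policies vary by testing program:

```python
def resolve_score(human_score, machine_score, tolerance=1):
    """Second-reader check: if the human and machine scores agree within
    a tolerance, the score stands; otherwise the essay is flagged for
    another human to adjudicate.

    Returns (final_score, needs_adjudication).
    """
    if abs(human_score - machine_score) <= tolerance:
        # Policy varies by program: some keep the human score,
        # others average the two readers.
        return human_score, False
    return None, True
```

The design matters more than the code: the machine never overrules a person; disagreement simply buys the essay a second pair of human eyes.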
And aside from tests, automated essay evaluation engines are increasingly used to encourage student writing practice under teacher supervision, in which teachers can turn certain feedback features on or off.
Yet the lone loud voice of Les Perelman still gets more attention than the advances made in the technology. Most recently, students at MIT (where Perelman is a former director of undergraduate writing) helped create the media-bait-friendly BABEL (Basic Automatic B.S. Essay Language) generator to “fool” an automated essay grader with verbose nonsense sentences. What the triumphant headlines ignored, but Perelman himself was honest enough to acknowledge, is that in this case the only automated scoring tool being fooled was Vantage Learning’s IntelliMetric, the single commercial engine Perelman admitted he was able to access.
It’s like judging all cheese quality on a block of Velveeta, because that’s all the store carries.
Perelman’s continued one-note (or one-engine) criticisms aside, there’s always understandable distrust of technologies that have the potential to replace people--even if, when used properly, they can backstop them and add strength to strength. This unease seems especially strong in the realm of learning, in which we’re not educating drones. We’re educating cuter and smaller iterations of ourselves.
Nudging tech out of the valley
How does this tech, or any edtech, surmount this believability barrier?
Well--first and most important--it has to work. As a former colleague once noted to me, nothing will kill a bad product faster than good marketing. Overpromising is a bad idea.
Then there has to be a nudge.
In higher education, online remote proctoring faced comparable challenges. There was a widespread belief that nothing could be better than having a human in a physical room to monitor dozens or hundreds of students taking tests--despite plenty of rational evidence that in-person proctors don’t like to confront students, that it’s far too easy to cheat in a crowd, and that each online proctor watches far fewer students at a time.
Put test-takers in front of a camera? How is that better than an in-person presence? But Douglas Winneg of Software Secure said resistance to the concept changed about a year ago. It began to fade, he said, “not so much [due to] a specific event--rather a feeling of momentum.”
William Dorman, CEO of Kryterion, said that as colleges added more online courses, the turning point in online proctoring may have been pressure from accrediting bodies that wanted security to go beyond passwords. That apparently raised remote proctoring’s profile overall, helped highlight differences in proctoring technology approaches, and led to a very human motivator: peer pressure. “As some entities started to use [online] proctoring,” Dorman noted, “others felt they needed to join in.”
It’s possible that’s where edtech is with automated essay scoring. But there are critical caveats: the use must be appropriate (to allow for more writing practice and feedback in instructional situations, and to support human scorers in assessment situations); the application must be cautious and equitable (no dystopian future, please, in which poor schools get robo-graders instead of, rather than in addition to, human teachers); and only the best, not the cheapest, automated engines should be used.
Ultimately, humans should have control and the final say over any evaluation that has consequences. And any engine should be used only for the purpose for which it was designed and trained. But that is a separate matter from believing they’re as good as someone rushing to grade a pile of writing assignments in the current environment. I’d rather have deep, personal feedback from an expert instructor every time. But if that’s not guaranteed, I’ll take multiple sources of feedback, human or not.
If application is tightly defined, then maybe we can leap the belief barrier. And give a technology both rational and emotional acceptance.
Until, of course, the next cool edtech AI comes along.
Frank Catalano is an independent industry consultant, author and veteran analyst of digital education and consumer technologies. He's a regular columnist for EdSurge and the tech news site GeekWire, and tweets @FrankCatalano. In a previous life, he wrote short fiction for Analog, the Magazine of Fantasy and Science Fiction and other purveyors of utopias and dystopias.
No doubt all our readers in the education field are well aware of the explosion of iPads and tablets in the classroom and their ability to make learning easier and more interactive. But we suspect at least some of you are still reluctant to turn the new tech loose on grading, an area where you could be needlessly wasting hours assessing students with an antiquated system. We know change can be daunting, but we promise that within this list of apps teachers love, you’ll find something you love, too.
If you’re a teacher who’s been hanging on to a hard-copy gradebook, this app is your invitation to see what all the fuss over grading apps is about. For $10 the app comes packed with features like automatic grade calculation, status report notification emails for students or parents, attendance reports on PDF, and more.
Teacher’s Assistant Pro: Track Student Behavior:
For elementary teachers, this app is a great option for recording behavior infractions and easily contacting parents and administrators with all the details if need be. Tardiness, forgetting books, being disruptive--all this and more will never go unrecorded or unpunished again.
Ah, the dreaded essay. We’re not teachers, but we have to assume the joy you get out of torturing kids with essay assignments has to be somewhat tempered by having to grade them. iAnnotate takes the pain out of it, letting you ink, highlight, underline, stamp, make notes, and more on a PDF version of your kids’ essays via your iPad.
For an app specifically designed for grading essays, try … Essay Grader. The standout feature is the wide variety of stock comments it comes loaded with, including praise, grammar and style critiques, and organization and documentation notes. Or you can import a customized database of your own favorite phrases, so you can pick one and go.
Not every assignment is as easily graded as making a check or X mark on each number. Tasks like oral presentations have to be graded on the fly, and that’s where this app shines. Use sliders to add or subtract points during a speech on things like delivery and tone, then let the app add the scores. It even lets you record video for playback later if you want to review the performance before assigning a grade.
Like A+ Grade Calculator for Android (see below), Groovy Grader is a simple, no-charge app for inputting the number of quiz or test questions and getting back a chart of scores based on the number missed. The iPhone version can handle 150 questions and the iPad 300, but both get the helpful ability to round off numbers or display them with one or two decimal places.
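Under the hood, apps like this just precompute a score chart, much like the paper slide-rule graders they replace. A minimal sketch of the idea (the function name and rounding behavior here are my own, not the app’s):

```python
def grade_chart(num_questions, decimals=1):
    """Map each possible number of missed questions to a percentage score,
    like a paper EZ-grader slide chart."""
    return {
        missed: round(100 * (num_questions - missed) / num_questions, decimals)
        for missed in range(num_questions + 1)
    }
```

For a 20-question quiz, `grade_chart(20)[3]` gives 85.0, the score for missing three questions.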
Grades are just a part of this app that’s like a social network for teachers and students. If it would save you time to have an easy way to communicate with students about their grades, send them assignments, and hear back from them on what they need help on, this free app is worth a look.
We have to dock some points for the high cost ($31), but if you’re serious about a grading app this is one to consider. It will give you suggestions for mid-term and final grades, know based on your calendar what you’re teaching when and adapt accordingly, and of course keep copious grade and attendance records.
Anything you used to do with your grades on a spreadsheet program — compiling averages, producing class reports for the principal, using weighted formulas to determine grades — you can now do quickly and easily on your iDevice, be it an iPhone or iPad.
It’s got a downright iClunky title, but iTeacherBook is a scheduling, attendance tracker, assignment allocator, and grade recorder and reporter all rolled into one. For $5 and compatibility with both iPhone and iPad, you can’t go wrong.
Teacher Aide Pro:
Winner of 2011’s Best App Ever award in the teacher category, Teacher Aide Pro can handle 90 students per class and makes communicating with students a cinch via text, mass emailing, and CSV compatibility. This version runs $8 but the lite version is free.
The self-proclaimed “smart app for busy teachers” (redundant, are we right?), Teacher’s Pet has a solid if somewhat quirky array of features, like the ability to record a student’s attitude with just the right emoticon. But with a clean interface, calendar integration, and add-ons like student photo uploading for easy recognition, this app’s well worth the $1.99.
The developer claims a Boston high school math teacher saves 80 minutes a week in grading time thanks to this free app. That alone is reason enough to take a flyer on it. Socrative Learner requires that each student have the tech to run the app, but it turns multiple choice, true/false, and “quick quiz” answers digital for instantaneous grading.
On multi-page exams, many teachers write the number of points deducted at the bottom of each page, then have to go back through at the end to add it all up. Streamline that process with Grade Ticker, which lets you see what you’ve deducted as you go and adds it all up for you at the end.
There may be a bit of a learning curve before you get the hang of this app, but once you do you’ll appreciate its customizability and intuitiveness. Break grades down into homework, classwork, test, participation, or other divisions, track attendance, and even get reminders of students’ birthdays.
A+ Grade Calculator:
We’re sure you know that shaving just seconds off the grade time per test adds up to hours by the end of the school year, hours of your life you’ll never get back. Protect the time you have left with this app that lets you input the number of questions and see percentage and letter grades.
The developer obviously didn’t sink too much time into naming this bad boy, or into creating this hilariously brief user guide. No matter. Here’s what you need to know: you can use it to create grade point systems, it works, and it’s free.
It’s not strictly a grading app, but if you’re going to be saving a lot of graded papers and tests it will be nice to be able to access them from anywhere. Also available on iTunes, Dropbox for Android is a free service that lets you upload 2 GB worth of data for retrieval from any device with the app.
For a standalone attendance tracker, this app is a clean solution. Present, late, and absent students can be seen at a glance with color-coded labels for each. And if you make the list a Google Spreadsheet at the start of the semester, at the end of the year just check it through Google Docs and Attendance will have calculated all tardies and absences for you automatically.
It requires a free blog with UK site PrimaryBlogger, but teachers ‘cross the pond are loving Classdroid. It lets them take a picture of a student’s work, grade it, and upload it to the web for the students and their parents to view. It may not save time in the actual process of grading, but it could prevent many of your time-sucking parent-teacher conferences by improving kids’ grades.