Could AI Replace Student Testing?

Artificial intelligence-based systems could provide constant feedback to students and teachers.

To call standardized testing a contentious issue would be an understatement. It's more like political trench warfare, in which one group of parents laments the loss of the organic, student-centered approaches of yore while another freaks out about math problems they think are too politically correct. Critics often attack standardized testing for fostering assembly-line education and "teaching to the test" while ignoring more abstract learning outcomes like creativity and critical thinking. Standardized testing is also expensive and time-consuming.

On the other hand, we should expect some sort of accountability in education, right? Schools are expensive, and, as new industries demand more educated workers, the stakes are higher than ever when it comes to the global economy and class mobility. Developed economies no longer have the safety net of middle-class manufacturing jobs. Whatever Trump says, that's permanent.

In a commentary published this week in Nature Human Behaviour, Rose Luckin, an education researcher at University College London, argues that we now have a realistic alternative to standardized testing "at our fingertips." The technology exists to build realistic, AI-based education assessments in which students can be evaluated individually and at deep, fine-grained scales. Luckin says AI has the capability of opening up the "black box" of learning.

"Clever AI has penetrated general use to become so useful that it is not labelled as AI anymore," she writes. "We trust it with our personal, medical and financial data without a thought, so why not trust it with the assessment of our children's knowledge and understanding?"

Seeing inside the black box of learning will require AI systems to understand three things, according to Luckin. The first is the curriculum being taught: the subject area and learning activities that students participate in. It needs details about the steps students must undertake to complete those activities. And, finally, it needs to know what actually counts as success within those activities and the underlying steps. 
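The three kinds of information Luckin describes could be represented as a simple data structure. A hypothetical sketch in Python follows; all names and the fractions example are illustrative assumptions, not details from Luckin's commentary or AIAssess.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the three information types Luckin names are
# (1) the curriculum and its activities, (2) the steps within each activity,
# and (3) what counts as success at each step.

@dataclass
class Step:
    description: str
    success_criterion: str  # what counts as completing this step correctly

@dataclass
class Activity:
    name: str
    steps: list[Step] = field(default_factory=list)

@dataclass
class Curriculum:
    subject: str
    activities: list[Activity] = field(default_factory=list)

# A made-up example curriculum fragment
fractions = Curriculum(
    subject="Fractions",
    activities=[
        Activity(
            name="Compare two fractions",
            steps=[
                Step("Find a common denominator", "Denominators match"),
                Step("Compare the numerators", "Larger fraction identified"),
            ],
        )
    ],
)
```

An AI assessment system would consume structures like these alongside a stream of student actions, checking each action against the relevant step's success criterion.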

"AI techniques, such as computer modeling and machine learning, are applied to this information and the AI assessment system forms an evaluation of the student's knowledge of the subject area being studied," Luckin explains.

Luckin goes on to describe one possible intelligent assessment system, dubbed AIAssess, developed at the UCL Knowledge Lab. It offers students a progression of activities that serve to both assess and develop conceptual knowledge: as a student completes more tasks, the tasks become more difficult. Markers assessed by the system include "student's knowledge of the subject matter, as well as their metacognitive awareness, knowledge of their own ability and learning needs, which is a key skill possessed by effective students and a good predictor of future performance."

AIAssess has two main components. One is essentially a built-in corpus of knowledge that can be used to check student answers. It's not a bank of answers to questions, but an information store from which correct answers can be derived. As such, it can be used to evaluate student work in ways beyond simply scoring something as right or wrong. A student can get credit for their work, in other words.

The second component consists of models describing individual students. It's a continuously updated assessment of not just a student's understanding of a subject, but of their potential for understanding it and their "metacognitive awareness" of their own knowledge and understanding. This is where we start being able to gauge a student's learning potential, which is, after all, the whole essence of AI: prediction.
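A minimal sketch of what a "continuously updated" student model could look like: the standard Bayesian knowledge-tracing update below is my illustration of the general idea, not Luckin's actual model, and every parameter value is made up.

```python
# Toy student model: one mastery estimate per skill, updated after each task.
# This is a generic Bayesian knowledge-tracing-style update, used here only
# to illustrate continuous assessment; it is not the AIAssess algorithm.

class StudentModel:
    def __init__(self, p_known=0.2, p_learn=0.15, p_slip=0.1, p_guess=0.25):
        self.p_known = p_known  # estimated probability the skill is mastered
        self.p_learn = p_learn  # chance of learning the skill during a task
        self.p_slip = p_slip    # chance a student who knows it answers wrong
        self.p_guess = p_guess  # chance a student who doesn't answers right

    def observe(self, correct: bool) -> float:
        """Update the mastery estimate after one task (Bayes rule, then a learning step)."""
        if correct:
            evidence = self.p_known * (1 - self.p_slip)
            total = evidence + (1 - self.p_known) * self.p_guess
        else:
            evidence = self.p_known * self.p_slip
            total = evidence + (1 - self.p_known) * (1 - self.p_guess)
        posterior = evidence / total
        # The student may also have learned the skill while doing the task.
        self.p_known = posterior + (1 - posterior) * self.p_learn
        return self.p_known

model = StudentModel()
for answer in [True, True, False, True]:
    estimate = model.observe(answer)  # estimate rises and falls with evidence
```

Each answer nudges the estimate up or down, so the model's view of the student is always current rather than a snapshot from a single exam day.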

Based on expenditures for other large-scale AI programs, such as President Obama's $4 billion autonomous vehicle initiative, Luckin puts a very rough $600 million annual price tag on a system like AIAssess in the United States or United Kingdom. That's slightly less than the UK's current annual testing costs.

And, yes, this is a bit creepy. Luckin isn't oblivious: "The ethical questions around AI in general are equally, if not more, acute when it comes to education. For example, the sharing of data introduces a host of challenges, from individual privacy to proprietary intellectual property concerns. If we are to build scaled AI assessment systems that will be welcomed by students, teachers and parents, it will be essential to work with educators and system developers to specify data standards that prioritize both the sharing of data and the ethics underlying data use."

How you sell something like this in the current political climate is another matter. Surely evaluating the metacognitive awareness of students is too "politically correct" for one side before we even start talking about—gasp—spending money on education. Meanwhile, continuous testing and fine-grained student data mining sounds like something partisans might actually agree is dystopian. Maybe there's some way of starting small, wherein existing student data is used for building learning models and individualizing curricula. Teaching students as individuals—which is what Luckin's system ultimately reduces to—seems like a goal that we can all agree on.
