
What Makes a Job Vulnerable to AI Automation?

Economics researchers lay out a new set of criteria.

When it comes to the would-be AI jobocalypse, the Kool-Aid flows mightily. For those who drinketh of it, it is certain that in the not-too-distant future artificial intelligence and/or robots will steal the vast majority of jobs currently held by human beings. In the United States, where the social safety net is all but nonexistent, the outcome of such a technological leap forward would be societal collapse, barring dramatic progressive economic restructuring.

On the other hand, this might not be true at all. Maybe in real life there are a great many jobs that we just don’t want machines to do, such as those in healthcare, the fastest-growing job sector by a wide margin, or even jobs that machines fundamentally can’t do. This second category is the focus of a policy paper published this week in Science by Erik Brynjolfsson of MIT’s Sloan School of Management and Tom Mitchell of Carnegie Mellon. Generally, they find that while, no, it’s not really the “end of work,” things are nonetheless about to get weird.

“Although it is clear that ML [machine learning] is a ‘general purpose technology,’ like the steam engine and electricity, which spawns a plethora of additional innovations and capabilities, there is no widely shared agreement on the tasks where ML systems excel, and thus little agreement on the specific expected impacts on the workforce and on the economy more broadly,” Mitchell and Brynjolfsson write. “Although parts of many jobs may be ‘suitable for ML’ (SML), other tasks within these same jobs do not fit the criteria for ML well; hence, effects on employment are more complex than the simple replacement and substitution story emphasized by some.”

The paper outlines eight general features that make a task SML. I won’t list them all here, but a few bear emphasizing. First, machine learning requires well-defined problems where input data can reliably be mapped to output predictions. In medical diagnostics, for example, medical records go in and diagnoses come out. That’s a clear mapping. Pictures of dogs go in, and predictions of dog breeds come out. On the other hand, we might actually be able to predict dog breeds from pictures of dog owners, but in that case a clear mapping wouldn’t exist, because the causality behind the prediction would be buried somewhere inside the ML model.
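
To make the “well-defined mapping” idea concrete, here’s a minimal sketch in Python. The function names and types are hypothetical, invented for illustration; the paper describes the criterion, not any particular API.

```python
# A "well-defined mapping," in ML terms: a fixed function from one kind
# of input to one kind of output. These stubs are hypothetical; a trained
# model would supply the bodies.
from typing import Sequence

def diagnose(patient_record: Sequence[float]) -> str:
    """Medical records in, diagnosis out: a clear input-to-output mapping."""
    ...

def predict_breed(dog_photo_pixels: Sequence[float]) -> str:
    """Dog picture in, breed prediction out: also a clear mapping."""
    ...

# By contrast, predicting a breed from a photo of the *owner* might work
# statistically, but whatever connects owner to breed stays buried inside
# the model; the mapping itself is no longer clear or inspectable.
```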

ML models also require lots of data. They have to learn from something. To predict a medical diagnosis, a machine learning algorithm requires a load of training data consisting of patient records that have been labeled by humans with correct diagnoses. Only then can the algorithm look at new, unlabeled data and make accurate predictions.
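
As a rough illustration of that labeled-data requirement, the sketch below trains a classifier with scikit-learn. The library’s bundled digits dataset stands in for labeled patient records; nothing here comes from the paper itself.

```python
# Supervised learning in miniature: the model can only predict labels
# because humans labeled thousands of examples first.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # inputs plus human-provided labels

# Hold some labeled data back to play the role of "new, unseen" cases.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)  # learn the input-to-label mapping

# Now the model can label data it has never seen before.
print(model.predict(X_test[:5]))  # the model's predictions
print(y_test[:5])                 # the human labels, for comparison
```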

A few points are less obvious. For example, machine learning models require relatively simple causal chains to make predictions. Like, if we have some input observations and we want to predict some output, the input pretty much has to relate directly to the output rather than through a long chain of intermediate cause-and-effect relationships. Also: machine learning doesn’t work in cases where wrong predictions are unacceptable. In ML, when we get models that are more than 90 percent accurate or so, we start to consider them successful, which means we’ve decided that being wrong 10 percent of the time is acceptable. If we’re, say, using computer vision to pilot oil tankers, then even an error rate of a fraction of a percentage point is unacceptable.
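
The error-tolerance point lends itself to a small sketch as well. The tolerance figures below are illustrative assumptions, not numbers from the paper.

```python
# Whether a given error rate is acceptable depends entirely on the stakes.
# Both tolerance figures below are made-up, illustrative values.

def is_deployable(error_rate: float, tolerance: float) -> bool:
    """An ML system is only usable where its error rate fits the application."""
    return error_rate <= tolerance

model_error = 0.08  # a model that is "more than 90 percent accurate or so"

# Fine for a low-stakes task like labeling dog photos...
print(is_deployable(model_error, tolerance=0.10))    # True
# ...nowhere near good enough to pilot an oil tanker.
print(is_deployable(model_error, tolerance=0.001))   # False
```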

There are other factors that are a bit less quantitative. There’s human-ness, for one. Emotional intelligence and empathy aren’t really SML. “The more unstructured task of interacting with other doctors, and the potentially emotionally fraught task of communicating with and comforting patients, are much less suitable for ML approaches, at least as they exist today,” Brynjolfsson and Mitchell note.

What seems most likely is that SML tasks are not whole jobs or professions, but components within those professions. Machine learning will continue to advance, but rather than stealing all the jobs (though it will surely consume many of them), it will become a normal component of a great many jobs. Just because an algorithm can predict a cancer diagnosis doesn’t mean it will become your new doctor. More likely, that algorithm will become a tool wielded by your still-human doctor.