A New Program Judges If You’re a Criminal From Your Facial Features

The machine learning experiment boasts seemingly incredible accuracy, but is being criticised for encoding human biases and for its potential to label innocent people as guilty.

Like a more crooked version of the Voight-Kampff test from Blade Runner, a new machine learning paper from a pair of Chinese researchers has delved into the controversial task of letting a computer decide on your innocence. Can a computer know if you're a criminal just from your face?

In their paper 'Automated Inference on Criminality using Face Images', published on the arXiv pre-print server, Xiaolin Wu and Xi Zhang from China's Shanghai Jiao Tong University investigate whether a computer can tell if a person is a convicted criminal just by analysing his or her facial features. The two say their tests were successful, and that they even found a new law governing "the normality for faces of non-criminals."

They described as "irresistible" the idea of using algorithms that match or exceed human performance in face recognition to infer criminality. But as a number of Twitter users and commenters on Hacker News have pointed out, if human biases are baked into the data behind artificial intelligence and machine learning algorithms, the computer will act on those biases. The researchers maintain, though, that the data sets were controlled for race, gender, age, and facial expressions.

"Imagine this with drones, every CCTV camera in every city, the eyes of self driving cars, everywhere there's a camera…" (Tim Maughan, November 18, 2016)

The images used in the research were standard ID photographs of Chinese males between the ages of 18 and 55, with no facial hair, scars, or other markings. Wu and Zhang stress that the ID photos used were not police mugshots, and that out of 730 criminals, 235 committed violent crimes "including murder, rape, assault, kidnap, and robbery."

The two state that they purposely removed "any subtle human factors" from the assessment process. As long as data sets are finely controlled, could human bias be completely eradicated? Wu told Motherboard that human bias didn't come into it. "In fact, we got our first batch of results a year ago. We went through very rigorous checking of our data sets, and also ran many tests searching for counterexamples but failed to find any," said Wu.

Here's how it worked: Wu and Zhang fed facial images of 1,856 people, half of them convicted criminals, into a machine learning pipeline, and then observed whether any of their four classifiers (each using a different method of analysing facial features) could infer criminality.
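
To give a sense of the mechanics, here is a minimal sketch of that kind of supervised binary face classifier, written with scikit-learn. The folder layout, image size, and choice of a linear support vector machine are illustrative assumptions, not details drawn from Wu and Zhang's code.

```python
# Minimal sketch of a supervised face-classification setup of the kind the
# paper describes. Paths, image size, and the linear SVM are assumptions.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC


def load_faces(folder, label, size=(64, 64)):
    """Load grayscale ID-style photos from a folder and flatten them."""
    images, labels = [], []
    for path in Path(folder).glob("*.jpg"):
        img = Image.open(path).convert("L").resize(size)
        images.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
        labels.append(label)
    return images, labels


# Hypothetical directory layout: one folder per class.
X_crim, y_crim = load_faces("faces/criminal", 1)
X_non, y_non = load_faces("faces/non_criminal", 0)
X = np.array(X_crim + X_non)
y = np.array(y_crim + y_non)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = LinearSVC().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```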

They found that all four of their classifiers were largely successful, and that the faces of criminals and those not convicted of crimes differ in key ways that are perceptible to a computer program. Moreover, "the variation among criminal faces is significantly greater than that of the non-criminal faces," Wu and Zhang write.

"Also, we find some discriminating structural features for predicting criminality, such as lip curvature,"

"All four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic," the researchers write. "Also, we find some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle." The best classifier, known as the Convolutional Neural Network, achieved 89.51 percent accuracy in the tests.

"By extensive experiments and vigorous cross validations," the researchers conclude, "we have demonstrated that via supervised machine learning, data-driven face classifiers are able to make reliable inference on criminality."

While Wu and Zhang admit in their paper that they are "not qualified to discuss or to debate on societal stereotypes," the problem is that machine learning is adept at picking up on human biases in data sets and acting on them, as multiple recent incidents have shown. The pair admit they're on shaky ground. "We have been accused on Internet of being irresponsible socially," Wu said.

"This paper is the exact reason why we need to think about ethics in AI." (Stephen Mayhew, November 17, 2016)

In the paper they go on to quote the philosopher Aristotle ("It is possible to infer character from features"), but surely that kind of judgement has to be left to human psychologists, not machines? One major concern going forward is false positives, that is, identifying innocent people as guilty, especially if this program is ever used in any real-world criminal justice setting. The researchers said the algorithms did throw up some false positives (identifying non-criminals as criminals) and false negatives (identifying criminals as non-criminals), and that both increased when the faces were randomly labeled for control tests.
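
The mechanics of those two checks, counting false positives and false negatives and re-fitting on shuffled labels as a control, can be sketched like this. The data below is synthetic stand-in noise, used purely to illustrate the procedure, not anything derived from the paper.

```python
# Sketch: false positives/negatives from a confusion matrix, plus a
# random-label control. Data is synthetic noise for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1856, 128))      # stand-in "face features"
y = np.repeat([0, 1], 928)            # half labeled 0, half labeled 1

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print(f"false positives: {fp}, false negatives: {fn}")

# Control: shuffle the training labels and refit; a model trained on noise
# should score near 50 percent on the held-out set.
control = LogisticRegression(max_iter=1000).fit(X_train, rng.permutation(y_train))
print("accuracy with random labels:", control.score(X_test, y_test))
```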

Online critics have lambasted the paper. "I thought this was a joke when I read the abstract, but it appears to be a genuine paper," said a user on Hacker News. "I agree it's an entirely valid area of study…but to do it you need experts in criminology, physiology and machine learning, not just a couple of people who can follow the Keras instructions for how to use a neural net for classification."

Others questioned the validity of the paper, noting that one of the researchers is listed as having a Gmail account. "First of all, I don't think this is satire. I'll admit that the use of a gmail account by a researcher at a Chinese uni is facially suspicious," wrote another Hacker News reader.

Wu had an answer for this, however. "Some questioned why I used gmail address as a faculty member in China. In fact, I am also a professor at McMaster University, Canada," he told Motherboard.
