
AI Will Soon Identify Protesters With Their Faces Partly Concealed

A new paper has troubling implications.

Protesters regularly wear disguises like bandanas and sunglasses to avoid being identified, whether by law enforcement or by internet sleuths. Their efforts may be no match for artificial intelligence, however.

A new paper to be presented at the IEEE International Conference on Computer Vision Workshops (ICCVW) introduces a deep-learning algorithm (deep learning is a subset of machine learning used to detect and model patterns in large heaps of data) that can identify an individual even when part of their face is obscured. The system correctly identified a person concealed by a scarf 67 percent of the time when they were photographed against a "complex" background, which better resembles real-world conditions.


The deep-learning algorithm works in a novel way. The researchers, from Cambridge University, India's National Institute of Technology, and the Indian Institute of Science, first outlined 14 key areas of the face, then trained a deep-learning model to identify them. The algorithm connects the points into a "star-net structure" and uses the angles between the points to identify a face. It can still measure those angles even when part of a person's mug is hidden by disguises including caps, scarves, and glasses.

Image: University of Cambridge / National Institute of Technology / Indian Institute of Science
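The article doesn't spell out which 14 points the paper uses or exactly how the star-net wires them together, so here's a minimal, hypothetical NumPy sketch of the underlying idea: treat the centroid of the detected keypoints as the hub of a star and use the angles between the spokes as a face signature. The function names and the centroid-hub scheme are illustrative assumptions, not the researchers' actual method.

```python
import numpy as np

def angle_signature(keypoints):
    """Turn 2D facial keypoints into a vector of angles.

    Hypothetical stand-in for the paper's "star-net structure":
    every point is connected to the centroid (the hub), and the
    gaps between neighboring spokes form the signature. In the
    real system, a deep network first detects the 14 keypoints;
    this sketch assumes they are already given as (x, y) pairs.
    """
    pts = np.asarray(keypoints, dtype=float)      # shape (14, 2)
    spokes = pts - pts.mean(axis=0)               # vectors from the hub
    theta = np.sort(np.arctan2(spokes[:, 1], spokes[:, 0]))
    # Angles between consecutive spokes, wrapping around the circle
    return np.diff(theta, append=theta[0] + 2 * np.pi)

def same_face(sig_a, sig_b, tol=0.05):
    """Crude matcher: signatures agree within a tolerance in radians."""
    return np.allclose(sig_a, sig_b, atol=tol)
```

The appeal of an angle-based signature, at least in principle, is that the keypoints a scarf or cap leaves visible still produce usable angles, which is roughly why a system like the paper's degrades rather than fails outright when part of the face is covered.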

The research has troubling implications for protesters and other dissidents, who often work to make sure they aren't ID'd at protests and other demonstrations by covering their faces with scarves or wearing sunglasses. "To be honest, when I was trying to come up with this method, I was just trying to focus on criminals," Amarjot Singh, one of the researchers behind the paper and a Ph.D. student at Cambridge University, told me on a phone call.

Singh said he isn't sure how to prevent the technology from being used by authoritarian regimes in the future. "I actually don't have a good answer for how that can be stopped," he said. "It has to be regulated somehow … it should only be used for people who want to use it for good stuff." How to guarantee algorithms like the one Singh developed don't get into nefarious hands is an ongoing problem.

Zeynep Tufekci, a professor at the University of North Carolina, Chapel Hill, and a writer at The New York Times, took to Twitter to discuss the troubling implications of the algorithm described in the paper: "too many worry about what AI—as if some independent entity—will do to us. Too few people worry what *power* will do *with* AI," she wrote in a tweet.


Don't fret yet, though. While the algorithm described in the paper is fairly impressive, it's definitely not reliable enough to be used by law enforcement or anyone else. The researchers have, however, handed future academics an important gift: one of the problems with training machine learning models is that there simply aren't enough quality databases out there to train them on, and this paper provides researchers in the field with two new databases for training algorithms on similar tasks, each containing 2,000 images.

"This is a minor paper; narrow, conditional results. But it's the direction & this will be done with nation-state data—not by grad students," Tufecki wrote in a followup tweet.

The system described in the paper isn't capable of identifying people in every type of disguise. Singh pointed out to me that the rigid Guy Fawkes masks often donned by members of the hacking collective Anonymous would defeat the algorithm, for example. He hopes, though, to one day be able to ID people even when they're wearing rigid masks. "We are trying to find ways to explore that problem," he told me over the phone. It's worth noting that experimental algorithms can already identify people with 99 percent accuracy based on how they walk.

Singh and his colleagues have big plans for the algorithm. They want to make the tech work locally in real time, so that law enforcement could use a camera to identify someone wearing a scarf or other disguise without feeding data to a remote server over a Wi-Fi connection. The team also wants to train the algorithm on a larger dataset of people, especially people of different races.

This isn't the first research of its kind. An earlier paper, published in 2014, puts forth an automated algorithm that can recognize faces even when they're obscured (though the new paper uses a different technique and a larger dataset). Another paper, from way back in 2008, analyzes the effect disguises, like different hairstyles, have on facial recognition algorithms.

Luckily, plenty of research has also been done on how to confuse or evade facial recognition tech. Last year, for example, a team at Carnegie Mellon University crafted glasses that could trip up a facial recognition algorithm.

In the future, we might see a competition between AI experts (possibly employed by governments) trying to create ever more powerful facial recognition algorithms, and other researchers trying to figure out how to evade them.