There Is No Tech Solution to Deepfakes

Funding technological solutions to algorithmically generated fake videos only puts a bandage on the deeper issues of consent and media literacy.

Every day, Google Alerts sends me an email rounding up all the recent articles that mention the keyword "deepfake." The stories oscillate between suggesting deepfakes could trigger war and covering Hollywood’s latest quirky use of face-swapping technology. It’s a media whiplash that fits right in with the rest of 2018, but this coverage frequently misses what we should actually fear most: A culture where people are fooled en masse into believing something that isn’t real, reinforced by a video of something that never happened.

In the nine months since Motherboard found a guy going by the username “deepfakes” posting face-swapped, algorithmically generated porn on Reddit, the rest of the world rushed straight for the literal nuclear option: if nerds on the internet can create fake videos of Gal Gadot having sex, then they can also create fake videos of Barack Obama, Donald Trump, and Kim Jong Un that will somehow start an international incident that leads to nuclear war. The political implications of fake videos are so potentially dangerous that the US government is funding research to automatically detect them.

In April, the US Defense Advanced Research Projects Agency’s (DARPA) Media Forensics program awarded nonprofit research group SRI International three contracts to find ways to automatically detect digital video manipulations. Researchers at the University at Albany also received DARPA funding to study deepfakes, and found that analyzing blinking in videos could be one way to distinguish a deepfake from an unaltered video.

The worry that deepfakes could one day cause a nuclear war is a tantalizing worst-case scenario, but it skips right past current and pressing issues of consent, media literacy, bodily autonomy, and ownership of one’s own digital self. Those issues are not far-fetched or theoretical. They are exacerbated by deepfakes today. Will someone make a fake video of President Donald Trump declaring war against North Korea and get us all killed? Maybe. But the end of humanity is the most extreme possible outcome, and it’s getting more attention than issues around respecting women’s bodies, or asking why the people creating deepfakes felt entitled to use their images without permission in the first place.

Read more: Deepfakes Were Created As a Way to Own Women's Bodies—We Can't Forget That

Until we grapple with these deeply entrenched societal issues, DARPA's technical solutions are bandages at best, and there's no guarantee that they will work anyway.

To make a believable deepfake, you need a dataset of hundreds or thousands of photos of the face you’re trying to overlay onto the video. The detection method proposed by researchers at the University at Albany assumes that these photos, or "training datasets," probably don’t include enough images of the person blinking. The end result is a fake video that might look convincing, but in which the subject doesn’t blink naturally.

But even those researchers concede that this isn’t a totally reliable way to detect deepfakes. Siwei Lyu, a professor at the State University of New York at Albany, told MIT Technology Review that a quality deepfake could get around the eye-blinking detection tool by including images of the person blinking in the training dataset.
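For a rough sense of what a blink-based check looks like in practice, here is a minimal sketch in Python. It assumes OpenCV, dlib, and dlib’s standard 68-point facial landmark model, and it uses the common “eye aspect ratio” heuristic to count blinks per minute; the threshold and the baseline blink rate are illustrative assumptions, not the Albany researchers’ published method.

```python
# Minimal blink-rate sketch, assuming dlib's 68-point landmark model
# (shape_predictor_68_face_landmarks.dat) is available locally.
# Uses the "eye aspect ratio" heuristic -- a common proxy for blinking,
# not necessarily the method described by the Albany team.

import cv2
import dlib
from scipy.spatial import distance

EAR_THRESHOLD = 0.2              # eye treated as closed below this ratio (assumed value)
LEFT_EYE = list(range(42, 48))   # landmark indices for the left eye
RIGHT_EYE = list(range(36, 42))  # landmark indices for the right eye

def eye_aspect_ratio(points):
    """Ratio of eye height to width; drops sharply when the eye closes."""
    vertical = distance.euclidean(points[1], points[5]) + distance.euclidean(points[2], points[4])
    horizontal = distance.euclidean(points[0], points[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(video_path, predictor_path="shape_predictor_68_face_landmarks.dat"):
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor(predictor_path)
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0

    blinks, eye_closed, frames = 0, False, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            landmarks = predictor(gray, face)
            points = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(68)]
            ear = (eye_aspect_ratio([points[i] for i in LEFT_EYE]) +
                   eye_aspect_ratio([points[i] for i in RIGHT_EYE])) / 2.0
            if ear < EAR_THRESHOLD and not eye_closed:
                blinks += 1          # eye just closed: count one blink
                eye_closed = True
            elif ear >= EAR_THRESHOLD:
                eye_closed = False
    capture.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# A real face blinks roughly 15-20 times per minute; a suspiciously low rate
# is one (easily defeated) signal that a video may be synthetic.
```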

Lyu told MIT Tech Review that his team has an even more effective technique for detecting deepfakes than blink analysis, but declined to say what it is. “I’d rather hold off at least for a little bit,” Lyu said. “We have a little advantage over the forgers right now, and we want to keep that advantage.”

Read more: Targets of Fake Porn Are at the Mercy of Big Platforms

This exemplifies the broader problem with trying to find a technical solution to the deepfake problem: as soon as someone figures out a way to automatically detect a deepfake, someone else will find a way around it. Platforms are learning that combating fake porn on their sites isn’t as easy as blocking a keyword or banning a forum. Image host Gfycat, for example, thought it could use automated tools to detect algorithmically generated videos on its platform and kick them off, but months after it announced this effort, we still found plenty of deepfakes hosted there.

The algorithms themselves will, by design, stay locked in a cat-and-mouse game of outdoing each other. When one detection method pops up—like the blink analysis—the other side will learn from it and match it. We’ve seen this happen with bots that are continually getting better at solving CAPTCHAs, forcing bot-detection systems to make the CAPTCHAs more difficult to solve, which the bots then learn to beat, and so on, infinitely.

This doesn’t mean that we should throw our hands up and stop trying to find tech solutions to complex problems like deepfakes. It means that we need to recognize the limitations of those solutions, and to keep educating people about technology and media: when to trust what they see, and when to be skeptical.

Florida Senator Marco Rubio got it right when he talked about deepfakes at a Heritage Foundation forum last month: “I think the likely outcome would be that [news outlets] run the video, with a quotation at the end saying, by the way, we contacted senator so-and-so and they denied that that was them,” he said, describing a hypothetical scenario in which a deepfake video spreads as a news tip to journalists. “But the vast majority of the people watching that image on television are going to believe it.”

Fake news isn’t new, and malicious AI isn’t new, but the combination of the two, plus widespread, destabilized trust in media, is only going to erode our sense of reality even further.

This isn’t paranoia. We saw a small glimpse of this with the spoof video that Conservative Review network CRTV made of Alexandria Ocasio-Cortez about a month after she won the Democratic congressional nomination in New York. CRTV cut together a video of Ocasio-Cortez giving an interview to make it seem like she bombed it. This wasn’t a deepfake by any means—it was rudimentary video editing. Still, more than one million people viewed it and some people fell for it. If you already thought poorly of Ocasio-Cortez, the video could reinforce your beliefs.

If people are gullible enough to believe in conspiracy theories—so much so that they show up at Trump rallies with signs and shirts supporting QAnon—we don't need AI to fool anyone into believing anything.

The first headline we published for a deepfakes story, back in December, said: “AI-Assisted Fake Porn Is Here and We’re All Fucked.” We stand by that. We are still deeply fucked. Not because a deepfake is going to lead to nuclear war, but because we have so many problems we need to solve before we worry about advanced detection of AI-generated video.

We need to figure out how platforms will moderate users who spread malicious uses of AI, and revenge porn in general. We have to solve the problems around consent, and the connection between our bodily selves and our online selves. We need to face the fact that debunking a video as fake, even if it’s proven fake by DARPA, won’t change someone’s mind if they’re seeing what they already believe. If you want to see a video of Obama saying racist things into a camera, that’s what you’ll see—regardless of whether he blinks.

The Department of Defense can’t save us. Technology won’t save us. Being more critical thinkers might save us, but that’s a system that’s a lot harder to debug than an AI algorithm.