
Internal Emails Show FBI Freaking Out About Deepfakes

“Do we have the ability to effectively detect this?” asks one email. “No.”
FBI laptop. Image: Nes

At least some officials in the FBI were seemingly caught off guard by the emergence of deepfakes in 2018, with one acknowledging that the Bureau did not have the capability to detect the altered images at the time, according to a series of internal FBI emails obtained by Motherboard.

The emails show that even as deepfakes first emerged as a vehicle for non-consensual pornography, government officials were already concerned about other ways the technology could impact their work, including surveillance and criminal investigations.


“Do we have the ability to effectively detect this?” one FBI official from the Bureau’s Operational Technology Division (OTD), which handles advanced technical issues such as hacking tools, wrote in a July 2018 email. The official was responding to a Washington Post newsletter titled “Doctored videos could send fake news crisis into overdrive, lawmakers warn.”

“No,” came the response from another OTD official. They then pointed to MediFor, DARPA’s media forensics program, a research effort that includes countering deepfakes.

Do you have concrete examples of deepfakes being abused? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, or email joseph.cox@vice.com.

“That is our best current USG [U.S. government] research effort to address this problem,” the OTD official added in their email.

Other emails show FBI officials considering what other implications deepfakes might have.

“Made me think that if they are doing this for trivial crap, then what is being done to surveillance video or other facial recognition images by others with better tools,” one FBI official wrote in January 2018.

William McKinsey, section chief of the FBI’s information technology section, replied, “I googled face swapping and learned a lot.”


“Pls follow [redacted by FBI] closely. It could put us out of business,” he added. In a later email he wrote, “This could require urgent action [on] our part if it is real.”

On September 4, 2018, an official from the FBI’s Cyber Division wrote, “we are meeting with a company on Friday that claims to have solutions. We’ll see…”

The deepfake detection ecosystem has evolved considerably since 2018. Intel, for instance, has recently been running ads for its own detection capability on the Bloomberg Businessweek podcast, claiming 96 percent accuracy. Those claims, like many in the deepfake detection industry, are difficult to verify.

In June, the FBI warned that scammers were increasingly using deepfaked, non-consensual pornography of specific people to extort money from them.

“Malicious actors use content manipulation technologies and services to exploit photos and videos—typically captured from an individual's social media account, open internet, or requested from the victim—into sexually-themed images that appear true-to-life in likeness to a victim, then circulate them on social media, public forums, or pornographic websites,” the FBI’s announcement read. “The photos are then sent directly to the victims by malicious actors for sextortion or harassment, or until it was self-discovered on the internet. Once circulated, victims can face significant challenges in preventing the continual sharing of the manipulated content or removal from the internet.”
