
DHS Can Neither Confirm Nor Deny It Has Records on Deepfakes

As DARPA researchers work on identifying manipulated videos, and lawmakers call for an intelligence community report, the DHS is staying tight-lipped on deepfakes.

Governments are worried about deepfakes: algorithmically generated fake videos, sometimes highly realistic, that could portray a world leader in a compromising scenario that never happened.

Last week, lawmakers demanded that the intelligence community create a report on how hostile nations may use deepfakes. But some parts of the government are already keen to remain tight-lipped on the topic: a section of the Department of Homeland Security has neither confirmed nor denied that it has emails, reports, or other documents concerning deepfakes, in response to a Freedom of Information Act (FOIA) request from Motherboard.

In its response, the Office of Intelligence and Analysis (I&A) said it was denying the request under an exemption that protects “intelligence sources and methods from unauthorized disclosure.” I&A also neither confirmed nor denied the existence of the requested records, which included presentations and talking points from meetings related to deepfakes, because “disclosure of the information you requested would reveal law enforcement techniques or procedures and the circumstances under which those procedures ore [sic] techniques were used.” I&A cited several other exemptions as well.


Motherboard first reported on deepfakes in December, when AI and porn enthusiasts were creating face-swapped fake sex tapes using an accessible, open-source algorithm made by a user named u/deepfakes. They would process hundreds or thousands of images of a celebrity through machine learning tools, then map the celebrity’s face onto pornographic videos. The phenomenon quickly exploded with an easier-to-use app version. Reddit, Twitter, Pornhub, and other platforms banned the practice following widespread media coverage and AI experts sounding the alarm that these creations were unethical and potentially dangerous.
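To make the technique concrete: the approach generally attributed to these early face-swap tools is a single shared encoder paired with one decoder per identity. The sketch below illustrates that design only; layer sizes, image resolution, and training details are illustrative assumptions, not the original u/deepfakes code.

```python
# Minimal sketch of the shared-encoder, two-decoder autoencoder idea behind
# early face-swap tools. All dimensions and hyperparameters are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a latent vector shared by both identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for ONE identity from the shared latent space."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One encoder, two decoders: decoder_a trains only on faces of person A,
# decoder_b only on person B, both through the same encoder. At inference
# time, encoding a frame of B and decoding with decoder_a renders A's face
# with B's pose and expression -- the swap.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=5e-5,
)

def train_step(faces_a, faces_b):
    """faces_a/faces_b: batches of aligned face crops, shape (N, 3, 64, 64)."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()
```

Because each decoder only ever sees one identity, the shared encoder is pushed to capture identity-agnostic features like pose, lighting, and expression, which is what makes crossing the decoders at inference time produce a convincing swap given enough training images.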

But that doesn’t mean deepfakes have gone away. In April, the US Defense Advanced Research Projects Agency (DARPA) awarded contracts under its Media Forensics program to research groups working on ways to detect deepfake-style video manipulation. Matthew Turek, head of the Media Forensics program, recently told MIT Technology Review that its researchers have discovered “subtle cues” in videos and images that can reveal alterations. A team led by Siwei Lyu, a professor at the State University of New York at Albany, found that deepfake subjects rarely blink, although the robustness of these methods is still in question.
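The Albany team’s published detector was a learned model; as a much simpler illustration of the underlying signal, the sketch below estimates a blink rate from the classic eye-aspect-ratio heuristic over dlib facial landmarks. The landmark model file, threshold, and expected blink rate are illustrative assumptions, not the researchers’ method.

```python
# Minimal sketch of blink-rate screening in the spirit of the Albany finding,
# using the eye-aspect-ratio (EAR) heuristic rather than a learned model.
import cv2
import dlib
from scipy.spatial.distance import euclidean

detector = dlib.get_frontal_face_detector()
# 68-point landmark model, downloaded separately from dlib.net (assumed local path).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

LEFT_EYE = range(42, 48)   # dlib 68-point indices for the left eye
RIGHT_EYE = range(36, 42)  # ...and the right eye

def eye_aspect_ratio(pts):
    """EAR: eye height over width; drops sharply when the eye closes."""
    v1 = euclidean(pts[1], pts[5])
    v2 = euclidean(pts[2], pts[4])
    h = euclidean(pts[0], pts[3])
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(video_path, ear_threshold=0.2):
    """Count dips of EAR below threshold; people typically blink ~15-20 times a minute."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, eye_closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            ear = (eye_aspect_ratio([pts[i] for i in LEFT_EYE]) +
                   eye_aspect_ratio([pts[i] for i in RIGHT_EYE])) / 2.0
            if ear < ear_threshold and not eye_closed:
                blinks, eye_closed = blinks + 1, True
            elif ear >= ear_threshold:
                eye_closed = False
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0
```

A suspiciously low blink rate would only ever be one weak signal among many, which is consistent with the researchers’ own caveat that the robustness of such cues remains in question: a forger who trains on images of the subject with closed eyes can erase this artifact entirely.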