Gfycat's AI Solution for Fighting Deepfakes Isn't Working


Chasing AI-created fake videos with AI moderators doesn’t work, as Gfycat may be finding out.

Some of the most popular celebrity deepfakes—fake pornographic videos of celebrities and average people generated with a machine learning algorithm—are still alive and well where Motherboard first discovered them back in December: image hosting platform Gfycat.

One of the earliest deepfakes, of Gal Gadot, was deleted or re-uploaded under a different name, but up until Monday, other high-profile celebrity deepfakes remained at the same URLs, including ones of Scarlett Johansson, Maisie Williams, and Taylor Swift.


In January, deepfakes started disappearing from Gfycat, the preferred uploading spot for many deepfake makers on Reddit and elsewhere. A spokesperson for Gfycat told me in an email at the time that the company found pornographic deepfakes objectionable and was actively deleting them from the site: "Our terms of service allow us to remove content that we find objectionable. We are actively removing this content."

A month later, Gfycat claimed it had an AI-assisted solution for combating fake video. Gfycat would use Project Angora, a pre-deepfakes technology that searches the web for higher-resolution versions of whatever gif you’re trying to upload, in tandem with Project Maru, which recognizes individual faces in gifs and automatically tags the people in them.

This, Gfycat claimed at the time, would be an effective method of combating deepfakes, since they usually contain celebrity faces that the software could detect given the abundance of reference material already available on the internet.

Popular deepfakes, like the ones that are still hosted on Gfycat, should supposedly be low-hanging fruit for Project Maru and Project Angora.
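Gfycat hasn't published the internals of either project, but the face-tagging step Maru performs resembles a standard face-recognition pipeline: detect faces in a frame, compute an embedding for each, and compare those embeddings against a gallery of known celebrity faces. Here is a minimal sketch of that general idea using the open-source face_recognition library; the file names, threshold, and gallery setup are illustrative assumptions, not Gfycat's actual implementation.

```python
# Illustrative sketch only: Gfycat has not disclosed how Project Maru works.
# This shows the generic approach of matching faces in uploaded frames
# against reference photos of known celebrities.
import face_recognition

# Build a tiny "gallery" of reference embeddings (file name is hypothetical).
reference_image = face_recognition.load_image_file("celebrity_reference.jpg")
reference_encoding = face_recognition.face_encodings(reference_image)[0]

def frame_contains_known_face(frame_path, known_encodings, tolerance=0.6):
    """Return True if any face in the frame matches a known reference face."""
    frame = face_recognition.load_image_file(frame_path)
    for encoding in face_recognition.face_encodings(frame):
        # face_distance returns the Euclidean distance between embeddings;
        # smaller means more similar. 0.6 is the library's default cutoff.
        distances = face_recognition.face_distance(known_encodings, encoding)
        if min(distances, default=1.0) <= tolerance:
            return True
    return False

# A moderation pipeline would run this over sampled frames of each upload
# and route any match to further checks, such as a reverse image search.
if frame_contains_known_face("uploaded_frame_00.png", [reference_encoding]):
    print("Known face detected; flag upload for review")
```

Note that this covers only the face-matching half of what Gfycat described; the Angora-style reverse search for higher-resolution source footage would be a separate step.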

When I asked Gfycat why these deepfakes were still up, a spokesperson for Gfycat told me that since these videos were uploaded before the company started banning deepfakes, they somehow missed them. “We rely on user-reporting for the few instances when deepfakes content was uploaded in the months and years before the launch of our deepfakes AI technology,” the spokesperson told me in an email.


“Since we have over 40 million+ pieces of content on our site that were created by users prior to our launch of deepfakes AI, it would be very costly (hundreds of thousands of dollars) for us to run our current AI technology on all that content,” they said. “We’re still a startup :-)”

Gfycat asked me for the links to these deepfakes, and after I provided them, the company removed the videos. I then re-uploaded a new copy of one of the same deepfake videos with no problem, and several hours later it was still up.

Gfycat isn’t the only platform that’s declared war on fake AI porn and failed to follow through. There are still tons of deepfake videos easily accessible on Pornhub, despite the platform proclaiming in February that it would ban any deepfakes it catches. “We do not tolerate any nonconsensual content on the site and we remove all said content as soon as we are made aware of it," a Pornhub spokesperson told me in an email in February. "Nonconsensual content directly violates our TOS [terms of service] and consists of content such as revenge porn, deepfakes or anything published without a person’s consent or permission.”

Developing new moderation software is often described as a cat-and-mouse game because of how these algorithms work: internet platforms devise a new way to automatically detect objectionable content, and users find a new way to get around that fix. It’s possible that Gfycat’s AI moderator just needs more time to improve, but half a year in, it is far from perfect.

Since January, we’ve seen researchers try to track down deepfakes using tiny fluctuations in skin color, huge databases that try to detect forgeries, and eye-blinking rates. I wouldn't be surprised if all of these techniques are eventually foiled by new ways of making and hiding deepfakes. It's just the cat-and-mouse nature of policing anything in technology.