Gfycat says it’s figured out a way to train an artificial intelligence to spot fraudulent videos. The technology builds on a number of tools Gfycat already uses to index the GIFs on its platform.
[..]
Gfycat’s AI approach leverages two tools it already developed, both (of course) named after felines: Project Angora and Project Maru. When a user uploads a low-quality GIF of, say, Taylor Swift to Gfycat, Project Angora can search the web for a higher-res version to replace it with. In other words, it can find the same clip of Swift singing “Shake It Off” and upload a nicer version.
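Gfycat hasn’t published how Angora finds the same clip at a higher resolution, but perceptual hashing is a common technique for this kind of matching: reduce each frame to a compact fingerprint that survives compression and resizing, then compare fingerprints by Hamming distance. The sketch below is illustrative only, with toy 2×2 “frames” standing in for real images:

```python
# Hypothetical sketch of perceptual (average) hashing, one way a system
# like Project Angora could match a low-res upload to a high-res copy.
# Gfycat has not disclosed its method; all values here are toy data.

def average_hash(pixels):
    """One bit per pixel: set when the pixel is brighter than the
    frame's mean brightness. Robust to mild compression noise."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes (0 = same scene)."""
    return sum(a != b for a, b in zip(h1, h2))

# A low-res upload and a high-res original of the same frame,
# plus an unrelated frame for contrast:
low_res   = [[10, 200], [205, 15]]
high_res  = [[12, 198], [210, 12]]   # same scene, slightly shifted values
unrelated = [[200, 10], [15, 205]]   # different scene

h_low = average_hash(low_res)
assert hamming(h_low, average_hash(high_res)) == 0   # match found
assert hamming(h_low, average_hash(unrelated)) > 0   # no match
```

In practice a system like this would hash many frames per clip and look up fingerprints in an index, but the core comparison works the same way.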

Now let’s say you don’t tag your clip “Taylor Swift.” Not a problem. Project Maru can purportedly differentiate between individual faces and will automatically tag the GIF with Swift’s name. This makes sense from Gfycat’s perspective—it wants to index the millions of clips users upload to the platform monthly.
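Face-recognition systems of this kind typically map each detected face to a numeric embedding and tag the clip with the closest known identity, but only when the similarity clears a confidence threshold. Gfycat hasn’t described Maru’s internals; the following is a minimal sketch under that standard design, with made-up three-dimensional embeddings:

```python
# Hypothetical sketch of threshold-based face tagging, the standard
# pattern a system like Project Maru could use. The embeddings and
# threshold below are invented for illustration.
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy gallery of known celebrity embeddings:
KNOWN = {
    "Taylor Swift": [0.9, 0.1, 0.3],
    "Nicolas Cage": [0.1, 0.8, 0.5],
}

def tag_face(embedding, threshold=0.95):
    """Return the best-matching name, or None when no match is confident."""
    name, score = max(((n, cosine(embedding, e)) for n, e in KNOWN.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else None

assert tag_face([0.88, 0.12, 0.31]) == "Taylor Swift"  # near Swift's embedding
assert tag_face([0.5, 0.5, 0.5]) is None               # no confident match
```

The threshold is the key design choice: set it high and the system only tags faces it is very sure about, which matters in the next section.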

Here’s where deepfakes come in. Created by amateurs, most deepfakes aren’t entirely believable. If you look closely, the frames don’t quite match up; in the below clip, Donald Trump’s face doesn’t completely cover Angela Merkel’s throughout. Your brain does some of the work, filling in the gaps where the technology failed to turn one person’s face into another.

Project Maru is not nearly as forgiving as the human brain. When Gfycat’s engineers ran deepfakes through their AI tool, it would register that a clip resembled, say, Nicolas Cage, but not enough to issue a positive match, because the face wasn’t rendered perfectly in every frame. Using Maru is one way that Gfycat can spot a deepfake—it smells a rat when a GIF only partially resembles a celebrity.
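The article doesn’t give Gfycat’s exact decision rule, but the described behavior — “resembles a celebrity, yet never fully matches” — can be sketched as a check over per-frame similarity scores. The band boundaries below are invented for illustration:

```python
# Hypothetical sketch of the partial-resemblance heuristic described
# above: a clip is suspicious when most frames land in an in-between
# band (looks like the celebrity, but never a confident match).
# The 0.95 and 0.6 cutoffs are assumptions, not Gfycat's values.

def looks_like_deepfake(frame_scores, match=0.95, resemble=0.6):
    """frame_scores: per-frame similarity to the best-guess celebrity."""
    strong = sum(s >= match for s in frame_scores)
    partial = sum(resemble <= s < match for s in frame_scores)
    # Genuine footage: nearly every frame is a strong match.
    # Face-swaps: many frames resemble the face but never fully match.
    return partial > strong

assert not looks_like_deepfake([0.98, 0.97, 0.99, 0.96])  # genuine clip
assert looks_like_deepfake([0.97, 0.70, 0.65, 0.80])      # swap artifacts
```

A rule like this exploits exactly the weakness the article describes: amateur deepfakes fail frame by frame, even when a human eye smooths the result over.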

Source: Gfycat Uses Artificial Intelligence to Fight Deepfakes Porn | WIRED