Let's Face It

Published: 2021-03-26


If you are not familiar with it, deepfake technology is essentially a filter in which a person in an existing image or video is replaced with someone else's likeness. The general concept of the deepfake is not new; however, modern deepfake footage leverages techniques from machine learning and artificial intelligence (AI) to generate visual and audio content with a high potential to deceive. If you're not impressed yet, think back to Tron: Legacy, which came out in 2010. Face-replacement technology like this is what created the young Jeff Bridges. That's right - Jeff Bridges was able to play the young version of himself; it wasn't makeup, nor was it a look-alike. The machine learning methods used to create deepfakes are based on deep learning and involve training generative neural network architectures, which pretty much boils down to using math to model and predict facial movements. Because deepfakes are so often used for nefarious purposes (revenge porn, fake news videos, fraud, defamation), both industry and government have worked to detect and limit the use of deepfake videos and images.
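The best-known recipe for these face swaps is an autoencoder with one shared encoder and one decoder per identity: the encoder learns a generic "face code," each decoder learns to rebuild one specific person, and swapping means encoding person A's face and decoding it with person B's decoder. Below is a minimal sketch of that idea in PyTorch; the layer sizes, 64x64 input, and latent dimension are arbitrary choices for illustration, not any studio's actual pipeline.

```python
# Sketch of the shared-encoder / per-identity-decoder deepfake architecture.
# All sizes are illustrative; real systems are much larger and add alignment,
# masking, and adversarial losses.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # latent "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training reconstructs each person through the SHARED encoder, e.g.
#   loss_a = mse(decoder_a(encoder(faces_a)), faces_a)   and likewise for B.
# The swap happens at inference time:
face_a = torch.rand(1, 3, 64, 64)             # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))          # A's pose/expression, B's face
```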

Until recently, there was no good way to detect a video or image doctored by deepfake technology. Last year, a tool was released to help journalists distinguish real images from fake ones. Jigsaw, a tech company owned by Google's parent, unveiled a free tool that researchers said could help journalists spot doctored photographs regardless of how the images were generated. Jigsaw said it was testing the tool, called Assembler, with more than a dozen news and fact-checking organizations around the world. The tool is meant to verify the authenticity of images or show where they may have been altered. Reporters can feed images into Assembler, which has seven "detectors," each one built to spot a specific type of photo-manipulation technique.
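Jigsaw has not published Assembler's internals, so the snippet below is only an illustration of the pattern described above: several independent detectors, each scoring one manipulation type, run over the same image with their scores reported side by side. The detector names and functions here are hypothetical stand-ins.

```python
# Illustrative multi-detector pattern (NOT Assembler's actual code).
from typing import Callable, Dict
from PIL import Image

def copy_move_detector(img: Image.Image) -> float:
    """Hypothetical: score for regions cloned within the same image."""
    return 0.0  # a real implementation would go here

def splice_detector(img: Image.Image) -> float:
    """Hypothetical: score for content pasted in from another image."""
    return 0.0

DETECTORS: Dict[str, Callable[[Image.Image], float]] = {
    "copy_move": copy_move_detector,
    "splice": splice_detector,
    # ...Assembler reportedly runs seven such detectors in total
}

def assess(path: str) -> Dict[str, float]:
    """Return each detector's manipulation score (0 = clean, 1 = doctored)."""
    img = Image.open(path).convert("RGB")
    return {name: detect(img) for name, detect in DETECTORS.items()}
```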

Detection has only gotten better since then: earlier this year, computer scientists at the University at Buffalo released a new method of identifying altered images. Tested on portrait-style photos, the tool was 94% effective at detecting deepfake images. It exposes fakes by analyzing the corneas, which have a mirror-like surface that produces reflective patterns when illuminated (think about when you make eye contact with someone and can see your reflection in their eyes). In a photo of a real face, the reflections in the subject's two eyes will be similar because both eyes are seeing the same thing. Deepfake images are synthesized, so they typically fail to capture these reflections accurately and often exhibit inconsistencies, such as reflections with different geometric shapes or mismatched locations.
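The Buffalo team's full pipeline is more involved, but the core comparison can be sketched in a few lines of Python with OpenCV, assuming the two eye regions have already been located (say, by a facial landmark detector). The brightness threshold and the IoU cutoff below are illustrative placeholders, not the paper's tuned values.

```python
# Sketch of the corneal-highlight consistency check: binarize the bright
# specular highlight in each cornea crop, then compare the two masks.
# Real faces should score near 1.0; synthesized faces often score lower.
import cv2
import numpy as np

def highlight_mask(eye_bgr: np.ndarray, bright_thresh: int = 200) -> np.ndarray:
    """Binarize the bright specular highlight within one cornea crop."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, bright_thresh, 1, cv2.THRESH_BINARY)
    return mask

def reflection_consistency(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """IoU of the two highlight masks: ~1.0 for real faces, lower for fakes."""
    size = (32, 32)  # normalize both crops so the masks are comparable
    m1 = cv2.resize(highlight_mask(left_eye), size, interpolation=cv2.INTER_NEAREST)
    m2 = cv2.resize(highlight_mask(right_eye), size, interpolation=cv2.INTER_NEAREST)
    inter = np.logical_and(m1, m2).sum()
    union = np.logical_or(m1, m2).sum()
    return float(inter / union) if union else 1.0  # no highlights: inconclusive

# Flag the image if the reflections disagree too much (cutoff is a guess):
# if reflection_consistency(left_crop, right_crop) < 0.5:
#     print("possible deepfake")
```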

Between these two tools, it seems we are approaching a reliable way to keep deepfakes from being passed off as real news. Soon we won't have to worry so much about "recently discovered problematic tweets" or "embarrassing pictures that could end your career" if the fakes can be uncovered as fakes. Sounds good, right?

Like this article? Share it with a friend!