
What is a deepfake? Everything you need to know about the AI-powered fake media

Dave Johnson   

  • Deepfakes use deep learning artificial intelligence to replace the likeness of one person with another in video and other digital media.
  • There are concerns that deepfake technology can be used to create fake news and misleading, counterfeit videos.
  • Here is a primer on deepfakes – what they are, how they work, and how they can be detected.

Computers have been getting increasingly better at simulating reality. Modern cinema, for example, relies heavily on computer-generated sets, scenery, and characters in place of the practical locations and props that were once common, and most of the time these scenes are largely indistinguishable from reality.

Recently, deepfake technology has been making headlines. The latest iteration in computer imagery, deepfakes are created when artificial intelligence (AI) is programmed to replace one person's likeness with another in recorded video.

What is a deepfake and how does it work?

The term "deepfake" comes from the underlying technology "deep learning," which is a form of AI. Deep learning algorithms, which teach themselves how to solve problems when given large sets of data, are used to swap faces in video and digital content to make realistic-looking fake media.

There are several methods for creating deepfakes, but the most common relies on deep neural networks built around autoencoders that apply a face-swapping technique. You first need a target video to use as the basis of the deepfake, and then a collection of video clips of the person you want to insert into the target.

The videos can be completely unrelated; the target might be a clip from a Hollywood movie, for example, and the videos of the person you want to insert in the film might be random clips downloaded from YouTube.

The autoencoder is a deep learning AI program tasked with studying the video clips to understand what the person looks like from a variety of angles and environmental conditions, and then mapping that person onto the individual in the target video by finding common features.
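
To make that concrete, here is a minimal sketch in PyTorch of the classic shared-encoder, two-decoder arrangement used for face swapping. The layer sizes, training loop, and variable names are illustrative assumptions, not any specific tool's implementation.

```python
# A minimal sketch of the shared-encoder / two-decoder autoencoder idea behind
# classic face-swap deepfakes. Sizes and training details are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(          # 64x64 RGB face crop -> latent code
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(          # latent code -> reconstructed face
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()   # learns to redraw person A's face
decoder_b = Decoder()   # learns to redraw person B's face

# Training: each decoder reconstructs its own person through the SHARED
# encoder, so the latent code captures pose, expression, and lighting
# features common to both identities.
loss_fn = nn.MSELoss()
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person B
loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
        + loss_fn(decoder_b(encoder(faces_b)), faces_b))
opt.zero_grad(); loss.backward(); opt.step()

# The swap: encode person B's face but decode it with person A's decoder,
# so person A's likeness appears with person B's pose and expression.
swapped = decoder_a(encoder(faces_b))
```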

Another type of machine learning is often added to the mix: Generative Adversarial Networks (GANs), which detect and correct flaws in the deepfake over multiple rounds, making the result harder for deepfake detectors to spot.

GANs are also a popular method for creating deepfakes in their own right. They study large amounts of data to "learn" how to generate new examples that mimic the real thing, often with uncannily accurate results.
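
The adversarial contest at the heart of a GAN can be shown in a few lines of PyTorch. This is a bare-bones sketch under assumed network sizes, not a production model: a generator produces fakes, a discriminator scores them, and each training round sharpens both.

```python
# A minimal sketch of the GAN setup: a generator makes fakes, a discriminator
# learns to flag them, and iterating the two steps improves the fakes.
import torch
import torch.nn as nn

generator = nn.Sequential(       # random noise -> fake image (flattened 64x64 RGB)
    nn.Linear(100, 512), nn.ReLU(),
    nn.Linear(512, 3 * 64 * 64), nn.Tanh(),
)
discriminator = nn.Sequential(   # image -> probability it is real
    nn.Linear(3 * 64 * 64, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.rand(16, 3 * 64 * 64)   # stand-in for real training images
noise = torch.randn(16, 100)

# Discriminator step: learn to score real images 1 and generated images 0.
fake = generator(noise).detach()
d_loss = (loss_fn(discriminator(real), torch.ones(16, 1))
          + loss_fn(discriminator(fake), torch.zeros(16, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: adjust the fakes so the discriminator scores them as real.
# Repeating both steps is the "multiple rounds" of refinement described above.
g_loss = loss_fn(discriminator(generator(noise)), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```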

Several apps and software packages make generating deepfakes easy even for beginners, such as the Chinese app Zao, DeepFace Lab, FaceApp (a photo-editing app with built-in AI techniques), Face Swap, and the since-removed DeepNude, a particularly dangerous app that generated fake nude images of women.

A large amount of deepfake software can be found on GitHub, an open-source software development community. Some of these apps are used for pure entertainment purposes - which is why deepfake creation isn't outlawed - while others are far more likely to be used maliciously.

Many experts believe that, in the future, deepfakes will become far more sophisticated as technology further develops and might introduce more serious threats to the public, relating to election interference, political tension, and additional criminal activity.

How are deepfakes used?

While the ability to automatically swap faces to create credible, realistic-looking synthetic video has some interesting benign applications (such as in cinema and gaming), this is obviously a dangerous technology with some troubling applications. One of the first real-world applications for deepfakes was, in fact, to create synthetic pornography.

In 2017, a Reddit user named "deepfakes" created a forum for porn that featured face-swapped actors. Since that time, deepfake porn (particularly revenge porn) has repeatedly made the news, severely damaging the reputation of celebrities and prominent figures. According to a Deeptrace report, pornography made up 96% of deepfake videos found online in 2019.

Deepfake video has also been used in politics. In 2018, for example, a Belgian political party released a video of Donald Trump giving a speech calling on Belgium to withdraw from the Paris climate agreement. Trump never gave that speech, however - it was a deepfake. That was not the first use of a deepfake to create misleading videos, and tech-savvy political experts are bracing for a future wave of fake news that features convincingly realistic deepfakes.

Of course, not all deepfake video poses an existential threat to democracy. There's no shortage of deepfakes being used for humor and satire, such as clips that answer questions like: what would Nicolas Cage look like if he'd appeared in "Raiders of the Lost Ark"?

Are deepfakes only videos?

Deepfakes are not limited to just videos. Deepfake audio is a fast-growing field that has an enormous number of applications.

Realistic audio deepfakes can now be made using deep learning algorithms with just a few hours (or in some cases, minutes) of audio of the person whose voice is being cloned. Once a model of a voice is made, that person can be made to say anything - as when fake audio of a CEO was used to commit fraud last year.
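
To see how low the barrier has become, here is a sketch of zero-shot voice cloning using the open-source Coqui TTS library and its YourTTS model. The model name, arguments, and file names are assumptions based on the library's published API and may differ between versions; check its documentation before relying on them.

```python
# A minimal sketch of voice cloning with the open-source Coqui TTS library
# (pip install TTS). Model name and arguments are assumptions that may
# change between library versions.
from TTS.api import TTS

# YourTTS is a multilingual model that can mimic a voice from a short sample.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")

# Clone the voice in reference.wav (hypothetical file) and make it "say"
# arbitrary text the speaker never recorded.
tts.tts_to_file(
    text="This sentence was never actually spoken by the target speaker.",
    speaker_wav="reference.wav",   # a short clip of the voice being cloned
    language="en",
    file_path="cloned_output.wav",
)
```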

Deepfake audio has medical applications in the form of voice replacement, as well as in computer game design - now programmers can let in-game characters say anything in real time rather than relying on a limited set of scripts recorded before the game was published.

How to detect a deepfake

As deepfakes become more common, society collectively will most likely need to adapt to spotting deepfake videos in the same way online users are now attuned to detecting other kinds of fake news.

Often, as is the case in cybersecurity, more deepfake technology has to be developed in order to detect deepfakes and prevent them from spreading - which can in turn fuel a vicious cycle and potentially create more harm.

There are a handful of indicators that give away deepfakes:

  • Current deepfakes have trouble realistically animating faces, and the result is video in which the subject never blinks, or blinks far too often or unnaturally (see the blink-counting sketch after this list). However, after researchers at the University at Albany published a study detecting the blinking abnormality, new deepfakes were released that no longer had this problem.
  • Look for problems with skin or hair, or faces that seem to be blurrier than the environment in which they're positioned. The focus might look unnaturally soft.
  • Does the lighting look unnatural? Often, deepfake algorithms will retain the lighting of the clips that were used as models for the fake video, which is a poor match for the lighting in the target video.
  • The audio might not appear to match the person, especially if the video was faked but the original audio was not as carefully manipulated.
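
The blinking cue in the first bullet can even be checked programmatically. The sketch below counts blinks using the eye aspect ratio (EAR) computed from dlib's 68-point facial landmarks; the 0.2 threshold is a common heuristic rather than a universal constant, the landmark model file must be downloaded separately, and the input file name is hypothetical.

```python
# A minimal sketch of one published detection idea: track the eye aspect
# ratio (EAR) per frame to check whether the subject blinks at a natural rate.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops toward 0 as the eye closes.
    a = np.linalg.norm(pts[1] - pts[5])
    b = np.linalg.norm(pts[2] - pts[4])
    c = np.linalg.norm(pts[0] - pts[3])
    return (a + b) / (2.0 * c)

cap = cv2.VideoCapture("suspect_video.mp4")   # hypothetical input file
blinks, closed, frames = 0, False, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        lm = predictor(gray, face)
        # Landmarks 36-41 outline the left eye, 42-47 the right eye.
        left = np.array([(lm.part(i).x, lm.part(i).y) for i in range(36, 42)], dtype=float)
        right = np.array([(lm.part(i).x, lm.part(i).y) for i in range(42, 48)], dtype=float)
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < 0.2 and not closed:      # eye just closed
            closed = True
        elif ear >= 0.2 and closed:       # eye reopened: count one blink
            closed = False
            blinks += 1
cap.release()
print(f"{blinks} blinks over {frames} frames")  # near-zero blinking is suspicious
```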

Combatting deepfakes with technology

While deepfakes will only get more realistic with time as techniques improve, we're not entirely defenseless when it comes to combating them. A number of companies, several of them startups, are developing methods for spotting deepfakes.

Sensity, for example, has developed a detection platform akin to antivirus for deepfakes: it alerts users via email when they're watching something that bears the telltale fingerprints of AI-generated synthetic media. Sensity uses the same deep learning processes that are used to create fake videos.

Operation Minerva takes a more straightforward approach to detecting deepfakes. This company's algorithm compares potential deepfakes to known video that has already been "digitally fingerprinted." For example, it can detect examples of revenge porn by recognizing that the deepfake video is simply a modified version of an existing video that Operation Minerva has already catalogued.
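
The general fingerprinting idea can be illustrated with perceptual hashing, which survives small edits like a swapped face. This sketch uses the imagehash library to compare sampled frames of a suspect clip against a catalogued original; the 10-bit distance threshold, 30-frame sampling rate, and file names are illustrative assumptions, not Operation Minerva's actual method.

```python
# A minimal sketch of video fingerprinting via perceptual hashes: a modified
# (face-swapped) copy of known footage still matches its catalogued original.
# pip install opencv-python imagehash pillow
import cv2
import imagehash
from PIL import Image

def fingerprint(path, every_n=30):
    """Return perceptual hashes for every Nth frame of a video."""
    cap, hashes, i = cv2.VideoCapture(path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        i += 1
    cap.release()
    return hashes

original = fingerprint("catalogued_original.mp4")   # hypothetical file names
suspect = fingerprint("possible_deepfake.mp4")

# Perceptual hashes change only slightly under small edits, so frames of a
# face-swapped copy land within a few bits of the original's fingerprint.
matches = sum(1 for a, b in zip(original, suspect) if a - b <= 10)  # Hamming distance
print(f"{matches}/{min(len(original), len(suspect))} sampled frames match")
```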

And last year, Facebook hosted the Deepfake Detection Challenge, an open, collaborative initiative to encourage the creation of new technologies for detecting deepfakes and other kinds of manipulated media. The competition featured prizes ranging up to $500,000.
