
As social media platforms brace for the incoming wave of deepfakes, Google's former 'fraud czar' predicts the biggest danger is that deepfakes will eventually become boring

Jan 30, 2020, 17:30 IST
  • Facebook, Twitter, and TikTok recently announced plans to tackle the spread of "deepfake" videos on their platforms as the new form of misinformation gains momentum.
  • For now, deepfakes still look artificial and creating one requires as much effort as "Hollywood-level art," F5 Networks AI chief Shuman Ghosemajumder told Business Insider.
  • But "perfectly realistic" deepfakes aren't far off, and as our brains get used to them, that "purely false information" will simply become our new reality, he said.
  • The vast amount of fake news that already exists foreshadows what could happen as deepfakes become easier to create at scale.

In recent weeks, Facebook, Twitter, and TikTok have all found themselves scrambling to tackle a new form of disinformation: deepfake videos.

Deepfakes are videos manipulated using artificial intelligence to make it look like someone is saying something they never actually said, and as they become more realistic and easier to create, everyone from social media companies to lawmakers is trying to sort out the possible implications.

Shuman Ghosemajumder, the new global head of artificial intelligence for F5 Networks and Google's former "click-fraud czar," is one of the people working to understand what companies are up against. As CTO of Shape Security (recently acquired by F5), Ghosemajumder dealt with fraud such as fake accounts and credential stuffing, and he developed a framework for thinking about the future of deepfakes, which he said are a "societal concern."

Ghosemajumder believes that deepfakes will evolve and spread in three distinct stages, which he says society needs to understand as it prepares to address the challenge posed by the new technology.

The first stage, which he says we've already reached, is where one person can produce one piece of convincing fake content. Take, for example, a viral video released in 2018 by BuzzFeed that appeared to show former US President Barack Obama cursing and calling President Trump names but was actually voiced by director and actor Jordan Peele and manipulated using deepfake software.

Making a video like that "would be very time consuming, it'd be like creating Hollywood-level art," Ghosemajumder said.

However, the machine learning tools behind deepfakes are getting easier to use. As that happens, Ghosemajumder warned, we're headed toward stage two, where one million people are each able to create one convincing deepfake.

Doctored videos of Facebook CEO Mark Zuckerberg, "Game of Thrones" actor Kit Harington, and "Wonder Woman" actress Gal Gadot have all popped up in the last year - a sign that one-off deepfakes are increasingly common.

But the real danger, according to Ghosemajumder, comes at stage three, when those tools become accurate and efficient enough to enable one million people to produce one million deepfakes.

"Now all of a sudden, you can put that in the hands of a million relatively talented creators and they'll create things that look perfectly realistic," he said.

We've already reached this stage with text- and image-based disinformation, such as content produced by the Internet Research Agency, the Russian "troll farm" that sought to influence the 2016 US presidential election.

We're not quite at that level of sophistication when it comes to deepfake videos, Ghosemajumder said, but new research is quickly getting us there. Some experts have even predicted that "perfectly realistic" deepfakes are less than a year away.

The real danger comes when the novelty wears off

"The problem is once the technology gets advanced enough that it's perfect, that you actually can't tell the difference anymore, it doesn't look that remarkable," Ghosemajumder said.

He gave the example of riding in a self-driving car for the first time. At first, the experience is "unnerving enough" that it sets off alarm bells for the passenger, causing them to worry about what could go wrong. Then, after being in the car for a while, "it's just boring," he said.

As convincing deepfakes become the norm, Ghosemajumder is concerned our brains will similarly stop ringing those alarm bells.

"There's nothing unusual to see there, your mind has gotten accustomed to it and it's just the new reality at that point," Ghosemajumder said, adding that "now it's just purely false information."

Social media companies continue to wrestle with the best approach to deepfakes, unsure whether and how to police them, or whether they might actually be an entertaining feature for users.

A Chinese app called Zao that took off last September lets people superimpose their faces onto celebrities like Leonardo DiCaprio, and while the results are noticeably artificial, they hint at the impact deepfakes could have once the tools to create them become more widely accessible. TikTok and parent company ByteDance have also reportedly experimented with a deepfake feature.

At the same time, lawmakers have hammered Facebook, Twitter, and other social media companies, which they feel haven't taken a strong enough stance on preventing misinformation - of all types - from spreading on their platforms.
