Deepfake porn is a huge problem — here are some of the tools that could help stop it

Feb 18, 2024, 17:04 IST
Business Insider
Unstable Diffusion has become the face of AI image generation. Stefani Reynolds/AFP/Getty Images
  • The number of AI-generated deepfakes is exploding as tools become more sophisticated and widely available.
  • Most deepfakes are porn and are being targeted at everyone from children to celebrities.

The risks of artificial intelligence can seem overwhelming — for every benefit the technology provides, there's an adverse use.

One major problem created by AI is deepfakes: videos, images, or audio generated by AI that can be designed to mimic a victim saying or doing something that never happened.

Some deepfakes superimpose a likeness onto real video footage, while others are completely computer-generated.

A 2019 study called The State of Deepfakes, by the research company Deeptrace, found that 96% of deepfake videos were pornographic.

Henry Ajder, a deepfakes researcher who coauthored that study, told Business Insider that while the figure may have changed, the problem remains just as acute.

As AI tools such as DALL-E, Stable Diffusion, and Midjourney become more widely accessible, it's becoming easier for people with very little technical knowledge to create deepfakes.

"The overall percentage is now lower, but the overall volume of deepfake content which is pornographic has exploded," Ajder said. "We're talking about millions of victims around the world."

While the content is fake, the humiliation, sense of trauma, and intimidation for victims are very real.

A British teenager killed herself in 2021 after deepfake pornographic images of her were created and shared by other students from her school in a Snapchat group, BBC News reported.

Deepfake porn of pop superstar Taylor Swift has raised awareness of the issue. Axelle/Bauer-Griffin/Getty Images

Last month, deepfake porn of Taylor Swift started circulating online, prompting Elon Musk's X to temporarily block searches for the pop superstar.

The situation might seem grim, but there are tools and methods available that can help protect against AI manipulation of your identity.

Deepfake detection

Digital watermarks, where content is clearly labeled as AI-generated, have been endorsed by the Biden administration as one solution.

The labels aim both to raise public awareness and to make it easier for platforms to find and remove damaging fake content.

Google and Meta have both announced plans to start labeling material created or modified by AI with a "digital credential" to make the origins of content clearer.

And OpenAI, the creator of ChatGPT and the image generator DALL-E, plans to include both a visual watermark and hidden metadata that reveals the history of an image, in line with the Coalition for Content Provenance and Authenticity (C2PA) standards.
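
In practice, such provenance data travels inside the file itself rather than in the visible pixels. Below is a minimal, illustrative Python sketch of checking an image for embedded metadata of that kind, assuming only the Pillow library and a hypothetical file name; real C2PA verification relies on dedicated SDKs that validate cryptographically signed manifests, which this sketch does not attempt.

```python
# Illustrative sketch only: look at metadata fields where provenance
# information can live. Real C2PA verification uses a dedicated SDK to
# validate cryptographically signed manifests; none of that happens here.
from PIL import Image

def inspect_provenance(path: str) -> None:
    img = Image.open(path)

    # EXIF tag 0x0131 ("Software") often records the tool that wrote the file.
    software = img.getexif().get(0x0131)
    if software:
        print(f"Software tag: {software}")

    # Some encoders store XMP packets (one place provenance claims can sit)
    # in Pillow's format-specific info dictionary.
    for key in ("xmp", "XML:com.adobe.xmp"):
        if key in img.info:
            print(f"Embedded metadata block found under '{key}'")

inspect_provenance("example.jpg")  # hypothetical file name
```

Metadata like this can be stripped when an image is re-encoded or screenshotted, which is one reason visible watermarks and detection tools are being pursued in parallel.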

AI-generated image of a young woman. OpenAI / Business Insider

There are also dedicated platforms designed to check the origins of online material. Sensity, the company (formerly named Deeptrace) behind the 2019 study on deepfakes, has developed a detection service that alerts users via email when they're watching content bearing telltale AI-generated fingerprints.
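
Sensity's exact method is proprietary, but one family of detection techniques described in the research literature looks for statistical fingerprints that generators leave behind, such as unusual amounts of high-frequency energy in an image's spectrum. The Python sketch below illustrates that general idea under stated assumptions (numpy, Pillow, a hypothetical file name); it is not Sensity's approach, and a single score like this would be far too noisy to use on its own.

```python
# Toy example of a "statistical fingerprint" check: measure how much of an
# image's spectral energy sits outside the low-frequency band. Some AI
# generators leave unusual high-frequency patterns; real detectors combine
# many such signals with trained classifiers.
import numpy as np
from PIL import Image

def high_frequency_ratio(path: str) -> float:
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))

    # Mask out a central disc of low frequencies, then measure the share
    # of total energy left in the high frequencies.
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = min(h, w) // 4
    low = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    return float(spectrum[~low].sum() / spectrum.sum())

print(high_frequency_ratio("suspect.jpg"))  # hypothetical file name
```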

But detection has limits: most deepfakes still carry no watermark, and even an obviously fake image can leave its subject feeling victimized.

'Poison pills'

Defensive tools that protect images from manipulation are seen as a stronger solution, though they're still in the early stages of development.

These tools give users the option to process their images with an imperceptible signal that, when run through any AI-powered system, creates an unusable, blurry mess.

For example, Nightshade, a tool created by researchers at the University of Chicago, subtly alters an image's pixels in a way that corrupts AI models trained on it while leaving the image looking as intended to human eyes.

"You can think of Nightshade as adding a small poison pill inside an artwork in such a way that it's literally trying to confuse the training model on what is actually in the image," Ben Zhao, one of the researchers, told NPR.

The World of AI·magination exhibition in New York, created using generative AI. Anadolu / Getty

While designed to protect artists' IP, the technology can work on any photograph.

"That's a really nice frontline defense for people to feel like, okay, I'm safe uploading photos from my friend's birthday that weekend," Ajder said.

Regulation can make a difference

At least 10 states have already implemented a patchwork of legal protections for victims of deepfakes, according to The Associated Press.

But recent high-profile cases have upped the pressure on lawmakers to deter and punish the malicious use of AI and deepfakes.

The Federal Communications Commission has banned AI-generated robocalls after hoax calls, made with an AI-generated voice that sounded like Joe Biden, targeted voters ahead of the New Hampshire primary.

And in January, a bipartisan group of senators introduced a federal bill known as the DEFIANCE Act that would allow victims to sue those who create and distribute sexual deepfakes of them, making it a civil rather than criminal issue.

A bill introduced last May by Rep. Joe Morelle to criminalize the sharing of deepfakes has not progressed.

But much of the new legislation is meeting resistance from free speech proponents. The reason, Ajder says, is that some see the private creation of deepfakes as akin to a fantasy someone might have in their head about a crush: if no one ever knows the pornographic content exists, has anyone really been harmed?

Criminal deterrents

This has had an effect on legislation in the UK, where the Online Safety Act has made it illegal to distribute deepfake porn — but not to create it.

"My counterargument is that if you're creating this content, you're bringing into existence something that could be shared in a way that a fantasy can't really be shared," Ajder says.

In his view, criminalizing deepfake porn is still an important deterrent, even if such cases are hard to prosecute. Some people are using AI tools to create content for their own private consumption, Ajder says, but ensuring they realize this is criminal behavior is important to deter those who might merely be curious.

Governments can also put pressure on search engine providers, AI-tool developers, and the social media platforms where content is distributed.

In India, a deepfake porn scandal involving Bollywood actresses spurred the government to fast-track legislation and pressure big tech companies to prevent AI-generated content from spreading online.

"We should not kid ourselves that it is possible to fully remove the problem," admits Ajder. "The best we can do, in my view, is introduce as much friction as possible so that you have to be incredibly intentional about trying to create this content."
