Facebook is making deepfake videos with paid actors to help researchers better detect fake footage
- Facebook announced the Deepfake Detection Challenge on Thursday.
- The social networking company said it will contribute $10 million to fund research and prizes to help detect and combat deepfake videos, which use AI to create footage that makes it appear a person said something they didn't, or appeared in a video they were never in.
- Facebook said it will use paid actors to create a library of facial characteristics and traits, which will help the industry detect deepfake videos.
Facebook is using paid actors to create a library of "deepfake" videos, part of an industry-wide effort to improve tools for fighting the rising threat of doctored videos made with artificial intelligence technology.
The effort is part of a broader Deepfake Detection Challenge that Facebook announced on Thursday. The initiative is a partnership with Microsoft and several academic institutions, Facebook said.
"The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer," Facebook Chief Technology Officer Mike Schroepfer said in a blog post on Thursday.
Schroepfer said that Facebook will contribute more than $10 million to fund the effort, which will include research and prizes.
Deepfake videos, which use AI technology to make it appear that a person said or did something they never actually did, are a growing problem that has set off alarm bells at a time when misinformation has quickly taken root on social media.
Today's deepfake videos look incredibly realistic but can usually be detected thanks to telltale signs such as inconsistent shadows and double eyebrows, Facebook said. But as the technology progresses, Facebook said it will be critical to have an established set of tools to detect fakes. For that reason, Facebook said it will create a library of deepfakes that researchers can use to build new detection tools.
"It's important to have data that is freely available for the community to use, with clearly consenting participants, and few restrictions on usage," Schroepfer said in the blog post. "That's why Facebook is commissioning a realistic data set that will use paid actors, with the required consent obtained, to contribute to the challenge. No Facebook user data will be used in this data set."
The other participants in the initiative include Microsoft, the Partnership on AI, and academics from Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the University at Albany, SUNY.
Facebook has come under criticism for being slow to take down deepfake videos on its social network. In June, Facebook CEO Mark Zuckerberg said that one of the challenges is defining what constitutes a deepfake compared to other forms of misinformation. A celebrity or politician could claim that a news report, edited to show only a limited clip of a speech, constitutes misinformation, for instance.
"This is a topic that can be very easily politicized," said Zuckerberg. "People who don't like the way that something was cut often will argue that it did not reflect the true intent, or it was misinformation."
You can read Facebook's blog post about the Deepfake Detection Challenge here.