- Separating fact from fiction has never been so hard for voters.
- As billions head to the polls this year, AI deepfakes pose a real threat to credible information.
We keep being told that AI is going to make life easier. Yet voters heading to the polls in 2024 have reason to be doubtful.
This year, roughly half the world's population will find out just how problematic AI can be for the democratic process as they prepare to vote in elections in the US, as well as in Britain, India, and Mexico.
Voters already face the difficult job of working out candidate and party policies. Now AI threatens to make that process a whole lot harder.
"Even if we shut down AI development, the information landscape post-2023 will never be the same as it was before," Ethan Mollick, an associate professor at Wharton, wrote this month.
Recent advances in generative AI, spurred by the success of OpenAI's ChatGPT, mean the technology now poses a much bigger threat to elections.
Voters in New Hampshire found that out the hard way when they started receiving calls ahead of an unofficial Democratic primary in which a voice resembling Joe Biden's seemed to tell them not to vote.
The deepfake robocalls, first reported by NBC News on Monday, opened with the classic Biden catchphrase "what a bunch of malarkey" before urging recipients to stay away from Tuesday's ballot.
The Biden robocall is particularly pernicious because of how difficult it can be to distinguish a genuine voice from an AI-generated fake.
Peer-reviewed research published in August in the journal PLOS ONE found that people failed to detect artificially generated speech more than a quarter of the time.
"The difficulty of detecting speech deepfakes confirms their potential for misuse," researchers noted at the time.
AI-generated deepfakes have been causing other problems too. In the UK, research by Fenimore Harper Communications found more than 100 deepfake video ads impersonating Prime Minister Rishi Sunak on Facebook.
According to the research, 143 paid ads were uncovered between December 8 and January 8, and they "may have reached over 400,000" people. Funding for the ads appeared to come from 23 countries, including "Turkey, Malaysia, the Philippines, and the United States."
"It appears to be the first widespread paid promotion of a deep-faked video of a UK political figure," Fenimore Harper's report said. Meta did not immediately respond to Business Insider's request for comment.
Though it's not clear exactly who is behind the deepfakes in the US and the UK, the recent proliferation of AI means almost anyone with internet access and an AI tool can wreak havoc.
Mollick noted in his newsletter how he created a deepfake video in a few minutes by sending a 30-second video of himself and 30 seconds of audio of his voice to the AI startup HeyGen.
"I had an avatar that I could make say anything, in any language. It used some of my motions — like adjusting the microphone — from the source video, but created a clone of my voice and altered my mouth movements, blinking and everything else," he wrote.
Guardrails
AI companies are making some efforts to address the problems. Earlier this month, OpenAI unveiled its plans to prevent the misuse of AI ahead of this year's elections.
The plans include putting guardrails on tools such as the text-to-image model DALL-E to prevent it from generating images of real people, as well as banning the use of tools such as ChatGPT for "political campaigning and lobbying."
"Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process," OpenAI said in a blog.
Other organizations are striving to combat the spread of AI-generated fakery. Lisa Quest, UK and Ireland lead for the management consulting firm Oliver Wyman, told my colleague Spriha Srivastava in Davos about the work the firm's social impact team does alongside charitable organizations in "the online safety realm" to restrict the spread of misinformation.
They face an uphill battle, to say the least — as do voters trying to work out what they can and cannot trust.