Deepfakes of trusted and popular doctors are being used to illegally sell health products online
Jul 18, 2024, 13:53 IST
In a twist that sounds straight out of a sci-fi thriller, some of the UK's most beloved TV doctors are now finding their faces hijacked by AI to hawk dubious products online. According to a startling report by The BMJ, these digital doppelgangers are being used to peddle everything from miracle cures for high blood pressure and diabetes to hemp gummies.
The phenomenon, known as deepfaking, uses artificial intelligence to map a real person’s likeness onto another video. The results can be uncannily realistic — so much so that one recent study found that up to half of viewers couldn't tell deepfakes from authentic videos.
Some of the medical influencers targeted have amassed millions of followers, among them Hilary Jones, Michael Mosley and Rangan Chatterjee. Though not named in the report, podiatrist and influencer Dana Brems, known as FootDocDana on Instagram, recently expressed similar concern after deepfakes surfaced showing her recommending an “ear-cleaning” product.
John Cormack, a retired doctor from Essex, partnered with The BMJ to uncover the extent of this digital deception. “The bottom line is, it's much cheaper to spend your cash on making videos than it is on doing research and coming up with new products and getting them to market in the conventional way,” Cormack explains.
The proliferation of fake content featuring familiar faces is an inevitable side effect of our current AI revolution, says Henry Ajder, a deepfake technology expert. “The rapid democratisation of accessible AI tools for voice cloning and avatar generation has transformed the fraud and impersonation landscape.”
The issue has reached such proportions that even the targeted doctors are fighting back. Hilary Jones, for instance, employs a social media specialist to search for and take down deepfake videos misrepresenting his views. “Even if you do, they just pop up the next day under a different name,” Jones laments.
Meta, the company behind Facebook and Instagram where many of these videos have been found, has promised to investigate. "We don't permit content that intentionally deceives or seeks to defraud others, and we're constantly working to improve detection and enforcement," a Meta spokesperson told The BMJ.
Deepfakes prey on people's emotions, notes journalist Chris Stokel-Walker. When a trusted figure endorses a product, viewers are more likely to believe in its efficacy. This emotional manipulation is precisely what makes deepfakes so insidious.
Spotting deepfakes has become increasingly difficult as the technology improves. And the recent flood of non-consensual deepfake videos suggests the tactic is paying off commercially, despite being illegal.
For those who find their likenesses being used without consent, there seems to be little recourse. However, Stokel-Walker offers some advice: scrutinise the content for telltale signs of fakery, leave a comment questioning its authenticity, use the platform's reporting tools, and report the account responsible for sharing the post.
As AI continues to blur the lines between reality and digital deception, it's crucial for users to remain vigilant. The faces we trust most could be the very ones leading us astray — at least, digitally speaking.
The full findings of the investigation are published in The BMJ.