- Misinformation about the Israel-Gaza conflict is spreading on X.
- The rampant spread is "a direct result of Musk's policies," a misinformation expert told Insider.
Fake news about the Israel-Gaza conflict is spreading like wildfire on X.
Misleading content, video game footage, old videos, and even footage of Algerian football celebrations are being passed off as real scenes from the violent conflict unfolding between Israel and Hamas.
At X, formerly Twitter, it appears to be all hands on deck to tackle the spread of misinformation. The platform's CEO, Linda Yaccarino, even pulled out of a speaking engagement, citing the developing conflict and a need to "remain fully focused on X platform safety."
However, much of the blame for the chaos can be directed at the platform's owner and CTO, Elon Musk, a misinformation specialist says.
"I would be fairly confident to say that a lot of what we're seeing right now is a direct result of the policy changes that were implemented by Musk," Sander van der Linden, the director of the Cambridge Social Decision-Making Laboratory and a professor at the University of Cambridge, told Insider.
Van der Linden said Musk's early changes to the platform, including gutting the trust and safety teams, changing the verification process, and introducing monetized content, are not only allowing, but also motivating, the spread of misinformation on the platform.
"The fact that people with verified accounts can monetize their content means they have the wrong incentive," he said. "They're incentivized to spread content that's going to get engagement, clicks, and ultimately make them money. Whatever polarizing images they can find, whether it's real or fake, is going to elicit clicks."
Van der Linden pointed to accounts posting content completely out of context, including videos of soldiers paragliding into Egypt and old footage from the video game Arma 3.
He added that Musk's changes to the verification process also allowed people with verified accounts to easily impersonate previously trustworthy accounts, such as those belonging to media organizations.
"That's causing a lot of confusion," van der Linden said. "If an account is blue, it means it's verified, which confers a certain degree of trust. Even if you keep saying don't trust verified accounts, it's still difficult for people to navigate that landscape."
Musk has also fired a large portion of the teams that worked on online safety and moderation.
In a Threads post that showed a widely shared video of a football celebration misrepresented as the Israel-Gaza conflict, Stanford professor Alex Stamos said: "Sadly, the destruction of the teams Twitter put in place to fight organized manipulation makes it harder for individuals to speak to a global audience as their message gets buried by troll farms, state propaganda organs, and grifters."
Van der Linden said Musk's cuts had left moderating teams chasing content rather than controlling it.
"They're always going to be running behind the facts trying to put out the fires whilst having very little control over the narrative on Twitter," he said.
"They're only taking down the most radical content, for example, blocking Hamas accounts and things like that, but they don't really have the resources to deal with this," van der Linden added.
Musk has also been adding fuel to the fire, recommending two accounts known for spreading false information.
Van der Linden acknowledged there was "a general problem with platforms rolling back efforts to intervene when it comes to misinformation" but said it was "uniquely bad" on X.
"Meta, in comparison, still has more effective resources and is doing more also preemptively than X at the moment," he said.
Van der Linden said although misinformation was also present on Twitter before Musk's time, the platform used to have better resources to tackle it. "But, of course, that's his vision," he added. "Unmoderated speech."
Representatives for X did not immediately respond to Insider's request for comment, made outside normal working hours.