Skyrocketing slurs, racism, and antisemitic content on Twitter may encourage domestic terrorists, report warns: 'Violence is inevitable'
- Tweets containing slurs and antisemitic and racist content have skyrocketed since Elon Musk's takeover.
- Federal officials have warned that Twitter posts will translate to real-world acts of violence.
Despite Elon Musk's insistence to the contrary, researchers have found antisemitic and racist slurs have skyrocketed on Twitter since his takeover — with experts warning the hateful rhetoric will translate to real-world acts of violence.
In the weeks since Musk completed his $44 billion acquisition of the social media platform, researchers from the Center for Countering Digital Hate, the Anti-Defamation League, and other groups have found that posts containing slurs and white nationalist, racist, antisemitic, or far-right content have risen, while Twitter removes less content and takes longer to review it.
The Center for Countering Digital Hate found that, in the week immediately following Musk's takeover, use of the N-word tripled compared with the 2022 average, anti-LGBTQ slurs were up between 39% and 53%, and antisemitic slurs rose 22%. The New York Times reported that antisemitic posts referring to Jews or Judaism soared more than 61% in the first two weeks, while accounts supporting ISIS came roaring back.
"The idea that there is a difference between online chatter and real-word harm is disabused by a decade of research," Juliette Kayyem, a national security expert and former assistant DHS secretary, told The Washington Post, adding that Musk's current approach to open content on Twitter "re-socializes the hate and rids society of the shaming that ought to occur regarding antisemitism," she said.
Musk has maintained that hate speech impressions — the number of times content is viewed — are down overall, posting a graph on Friday that demonstrated the downward trend and saying "hate speech impressions are <0.1% of what's seen on Twitter."
The billionaire owner of Twitter, a self-proclaimed "free speech absolutist" who has railed against widespread content moderation and reinstated the accounts of several users originally suspended for hate speech, has not revealed the internal metrics used to determine which posts are classified as hate speech, or the total number of hate speech impressions from before or after his takeover.
Yoel Roth, Twitter's former head of trust and safety, last month acknowledged the "surge" in hateful posts in a tweet, but also said the company was successfully taking steps to reduce their reach.
Musk and representatives for Twitter did not immediately respond to Insider's requests for comment.
The rise of antisemitic content is a particular cause for concern among experts, given the high-profile amplification of anti-Jewish posts by celebrities such as Kanye West, who was kicked off the platform again this month after violating its rules against incitement to violence.
"This type of escalation and hate and dehumanization, the hatred of the Jewish population — it's a really directed target. Violence is inevitable," The Washington Post reported Denver Riggleman, a former Air Force intelligence officer and Republican member of Congress, said.
The Department of Homeland Security released a statement late last month on current domestic terrorism threats to the United States, highlighting the "enduring threat to faith-based communities, including the Jewish community" and referencing an incident last month in which an 18-year-old was arrested after making online threats against a synagogue.
Researchers have consistently found that online rhetoric feeds real-world behavior, with hate speech online being linked to increases in violence toward minorities, "including mass shootings, lynchings, and ethnic cleansing," according to the nonpartisan think tank Council on Foreign Relations.
"Whether it's the individual or the group dynamics, they are feeding off this crap and this hate — that is the reason why content moderation was created in the first place," The Washington Post reported Kayyem said. "Content moderation wasn't invented because they wanted everyone to be nice, it was created because of the realization that these kinds of attitudes, if allowed to foster in society, lead to violent conduct."