YouTube, which has more than 1 billion users, has been criticized for not moving swiftly to take down homophobic content and videos denying the veracity of mass shootings at schools. The company has updated its hate speech policies and currently employs 10,000 human moderators.
YouTube has repeatedly faced issues with regulating the content that appears on its site. After the February 2018 mass shooting at Marjory Stoneman Douglas High School in Parkland, Florida, videos appeared on YouTube calling one of the survivors a crisis actor. The account that posted those videos, InfoWars, had also uploaded videos calling the 2012 mass shooting at Sandy Hook Elementary School a hoax.
The most recent example came in June, when Vox journalist Carlos Maza spoke out about conservative media star Steven Crowder, who had repeatedly mocked Maza's sexual orientation and hurled ethnic slurs at him in multiple YouTube videos.
Five days after Maza took to Twitter to highlight Crowder's attacks, YouTube announced that Crowder had not violated any of its policies. One day later, YouTube reversed course and demonetized Crowder's videos.
YouTube then updated its hate speech policy, "prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion." The policy update was criticized as weak, and its timing as suspect.
Sources: Wired, The Guardian, Business Insider
Facebook has been criticized for failing to adequately moderate hate speech and calls for genocide against the Rohingya Muslim minority in Myanmar. And while the company employs human content moderators, that approach comes with its own set of issues.
In August 2018, a Reuters investigation found that Facebook had failed to adequately moderate hate speech and calls for genocide against the Rohingya. According to Human Rights Watch, nearly 700,000 Rohingya refugees have fled Myanmar's Rakhine State since August 2017 because of a military-led ethnic cleansing campaign.
ProPublica has called Facebook's enforcement of hate speech rules "uneven," explaining that "its content reviewers often make different calls on items with similar content, and don't always abide by the company's complex guidelines."
Facebook's use of human content moderators came under scrutiny in June when The Verge reported that Keith Utley — a 42-year-old employee at a Facebook moderation office operated by Cognizant in Tampa, Florida — died of a heart attack on the job a year prior.
The Verge also reported on poor working conditions at the site, where content moderators were tasked with watching hours of graphic footage posted to Facebook in an effort to keep the platform clear of anything that violates its terms of service.
Five days after The Verge's piece, Fast Company reported that Facebook was expanding its tools for content moderators, with the goal of buffering the negative psychological effects of viewing disturbing posts, and that these tools were in the works before The Verge's story was published.
Sources: ProPublica, Reuters, Fast Company, Business Insider, The Verge
Twitter has made policy changes to help curb abuse, but it still has a long-standing white supremacist problem.
Twitter says it's been making progress when it comes to ridding its site of spam and abuse. The platform — which has 300 million monthly users, according to Recode — said in an April blog post that rather than relying on users to flag abuse, it has implemented technology that automatically flags offensive content.
Twitter said that "by using technology, 38% of abusive content that's enforced is surfaced proactively for human review instead of relying on reports from people using Twitter."
Yet, Twitter just can't seem to figure out how to block white supremacists. Vice reported that a Twitter employee said that "Twitter hasn't taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians."
"The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material," Vice reported.
Sources: Vice, Twitter, Recode
Got a tip on content moderation and product regulation at major tech companies? Reach this article's writer, Rebecca Aydin, via email at raydin@businessinsider.com