- Facebook will now ban content that includes "implicit hate speech" like blackface or anti-Semitic stereotypes, the company announced Tuesday.
- Facebook has faced mounting pressure from civil rights groups to ramp up its enforcement of anti-hate speech policies, culminating in an advertiser boycott last month over hate speech on the platform.
- The company said Tuesday that it has been ramping up the artificial intelligence it uses to detect hate speech — 95% of the hate speech it removed between April and June was detected by AI.
- Facebook will also undergo a quarterly third-party audit of its hate speech moderation starting in 2021.
Facebook is tweaking its community standards to ban "implicit hate speech" on its platforms, and will soon take down content that violates the policy, such as blackface and anti-Semitic stereotypes.
Facebook has faced mounting scrutiny from civil rights groups in recent months over concerns about the spread of hate speech and misinformation on its platform. Under pressure from activists, dozens of advertisers joined a boycott of Facebook's ad platform in the past two months, likely costing Facebook millions in revenue.
The new policy, developed after consultation with outside experts, is meant to remove offensive content that previously skirted Facebook's ban on hate speech, vice president of content policy Monika Bickert told reporters Tuesday.
"This type of content has always gone against the spirit of our hate speech policy, but it can be really difficult to take concepts especially those that are commonly expressed in imagery and define them in a way that allows our content reviewers based around the world to consistently and fairly identify violations," Bickert said.
She added that the policy would not affect certain content with news value, like posts displaying a politician's use of blackface.
The company said Tuesday that it has ramped up the artificial intelligence it uses to detect hate speech — 95% of the hate speech it removed between April and June was detected by AI, up from 89% in the first quarter of 2020, according to its latest transparency report.
Facebook has also increased the volume of posts it removes — it took down 22.5 million posts for violating its community standards between April and June, up from 9.6 million in the first quarter of 2020.
Guy Rosen, Facebook's vice president of integrity, said that to address concerns about its content moderation, Facebook will undergo a quarterly third-party audit of its hate speech moderation starting in 2021.
You can read Facebook's full transparency report for the second quarter of 2020 here.