
Instagram reveals it took down millions of pieces of harmful content last quarter, including child abuse imagery and terrorist propaganda

Rob Price   


Facebook CEO Mark Zuckerberg

  • Facebook has published its biannual Transparency Report, which discloses how much problematic content it took action against over the last six months.
  • For the first time, Facebook is also publishing data relating to its photo-sharing app Instagram.
  • Over the last three months, Instagram took down millions of pieces of content relating to child abuse, terrorist propaganda, drug sales, and self-harm.
  • The data highlights the sheer scale of the content moderation challenge facing Facebook and other social networks.

Instagram took down millions of pieces of harmful or dangerous content last quarter, including hundreds of thousands of posts promoting terrorism and child exploitation imagery.

On Wednesday, Facebook published its biannual Transparency Report, which discloses metrics about how it polices itself, and for the first time the report included data on Instagram, its photo-sharing app. The data gives an unprecedented glimpse into the sheer volume of problematic and illegal content Instagram is battling to keep off its platform.

In the third quarter of 2019, Instagram took action against 753,700 pieces of content relating to child nudity or the sexual exploitation of children, and 133,300 pieces of content promoting terrorist propaganda. It also took action against 1.5 million pieces of content relating to the sale or trade of drugs, and 58,600 relating to firearms.

Instagram has been heavily criticized for its role in hosting posts that promote self-harm, and over the last six months it took action against more than 1.6 million pieces of content containing depictions of suicide or self-harm.

Instagram and Facebook are not unique in facing this wave of troublesome content: All major social networks and communication platforms, from Twitter to Snapchat, inevitably play host to problematic or illegal content. These companies hire legions of content moderators to scrub their platforms of undesirable material (the treatment of these workers has become a controversial issue in its own right), and are increasingly touting artificial intelligence as a way to police themselves more proactively.

In Q3 2019, Facebook says its systems detected 79.1% of suicide/self-injury content on Instagram before it was reported by users, along with 94.6% of child nudity/child exploitation imagery and 92.2% of terrorist propaganda.

Facebook also released significantly more data about its core social network. In Q3 2019, Facebook took action against 7 million pieces of content over hate speech, 3.2 million over bullying/harassment, 11.6 million over child nudity/exploitation, 5.2 million over terrorist propaganda, 25.2 million over graphic violence, and 2.5 million over suicide/self-injury, among other categories.

Do you work at Facebook? Contact this reporter via encrypted messaging app Signal at +1 (650) 636-6268 using a non-work phone, email at rprice@businessinsider.com, Telegram or WeChat at robaeprice, or Twitter DM at @robaeprice. (PR pitches by email only, please.)
