Meta expanding child safety measures as scrutiny mounts
Dec 2, 2023, 11:11 IST
Amid increased scrutiny over the alleged proliferation of child sexual abuse content on its platforms, Meta has said it is expanding and updating its child safety features aimed at protecting kids.
The company said that in addition to developing technology to tackle this abuse, it hires specialists dedicated to online child safety and shares information with industry peers and law enforcement.
"Predators don't limit their attempts to harm children to online spaces, so it's vital that we work together to stop predators and prevent child exploitation," the company said in a statement late on Friday.
Meta said it takes recent allegations about the effectiveness of its work very seriously, and "created a task force to review existing policies, examine technology and enforcement systems we have in place, and make changes that strengthen our protections for young people, ban predators, and remove the networks they use to connect with one another".
The task force took immediate steps to strengthen its protections, and child safety teams continue to work on additional measures, the company added.
The Wall Street Journal has detailed how Instagram and Facebook show inappropriate and sexualised child-related content to users. In June, it reported that Instagram's recommendation algorithm connects a network of accounts buying and selling child sexual abuse material (CSAM), guiding them to one another.
A follow-up investigation published on Friday showed how the problem extends to Facebook Groups, where there's an ecosystem of pedophile accounts and groups, some with as many as 800,000 members.
Meta said that on Instagram, potentially suspicious adults will be prevented from following one another, will not be recommended to each other in places like Explore and Reels, and will not be shown comments from one another on public posts, among other things.
"On Facebook, we're using this technology to better find and address certain Groups, Pages and Profiles," said the company.
Additionally, the company said Groups whose membership overlaps with other Groups that were removed for violating its child safety policies will not be shown in Search.
"We're sending Instagram accounts that exhibit potentially suspicious behaviour to our content reviewers and we'll automatically disable these accounts if they exhibit enough of the 60+ signals we monitor," said the company. After launching a new automated enforcement effort in September, "we saw five times as many automated deletions of Instagram Lives that contained adult nudity and sexual activity."
"We actioned over 4 million Reels per month, across Facebook and Instagram globally, for violating our policies," said Meta.
"We're sending Instagram accounts that exhibit potentially suspicious behaviour to our content reviewers and we'll automatically disable these accounts if they exhibit enough of the 60+ signals we monitor," said the company.After launching a new automated enforcement effort in September, "we saw five times as many automated deletions of Instagram Lives that contained adult nudity and sexual activity."
"We actioned over 4 million Reels per month, across Facebook and Instagram globally, for violating our policies," said Meta.