YouTube's human moderators couldn't stem the deluge of Christchurch massacre videos, so YouTube benched them
- YouTube said videos of the mass shooting at two mosques in Christchurch, New Zealand, were being uploaded at an "unprecedented" speed last Friday.
- YouTube's Chief Product Officer Neal Mohan told The Washington Post that his team pulled human moderators out of the review process, instead letting AI block videos unilaterally.
- Even then, not everything was caught: users got past the software by adding watermarks, cutting clips together, or in some cases even animating the people in the video.
- Mohan said YouTube's processes could be improved. "Frankly, I would have liked to get a handle on this earlier," he told the Post.
YouTube has provided insight into how it dealt with the "unprecedented" deluge of videos of the mass shooting in Christchurch, New Zealand, last Friday.
In an interview with The Washington Post, YouTube's Chief Product Officer Neal Mohan explained how the company played whack-a-mole to block copies of the video, taken from a Facebook live stream, which were being uploaded at a rate of one per second.
To deal with the volume of videos, Mohan and other executives decided to override YouTube's usual content moderation process, taking human moderators out of the loop entirely. Instead, AI software was left to identify the most violent parts of the footage and block videos on its own.
"We made the call to basically err on the side of machine intelligence, as opposed to waiting for human review," said Mohan, adding that a "trade-off" was that some videos unconnected to the shooting got taken down by the system. He also made the decision to disable YouTube's "recent upload" search tool.
Although the AI software made instantaneous decisions on whether to block a video, it could also be tricked. Many of those uploading copies of the footage edited it to get past YouTube's safeguards, adding watermarks, cutting clips together, or in some cases even animating the people in the video.
These edits let many videos slip through YouTube's "hashing" systems, which detect when footage has been duplicated elsewhere and are most often used to enforce copyright.
"Like any piece of machine learning software, our matching technology continues to get better, but frankly, it's a work in progress," said Mohan.
YouTube declined to say exactly how many videos of the shooting it removed or blocked, but said the number was in the tens of thousands. In a statement posted on Twitter on Monday, YouTube added that it had terminated hundreds of accounts "created to promote or glorify the shooter."
"Frankly, I would have liked to get a handle on this earlier," said Mohan. "Every time a tragedy like this happens we learn something new, and in this case it was the unprecedented volume [of videos]."
He added: "This was a tragedy that was almost designed for the purpose of going viral ... We've made progress, but that doesn't mean we don't have a lot of work ahead of us, and this incident has shown that, especially in the case of more viral videos like this one, there's more work to be done."
On Sunday, Facebook said it removed 1.5 million versions of the video within the first 24 hours. The situation is ongoing and researchers are still finding versions of the video on Facebook and Instagram, according to Bloomberg's Sarah Frier.