
Mark Zuckerberg says AI won't be able to reliably detect hate speech for 'five to 10' years

Apr 11, 2018, 01:22 IST

  • Facebook CEO Mark Zuckerberg said that while it's relying increasingly on artificial intelligence to police content on its site, AI doesn't work well for identifying hate speech.
  • AI won't be ready to reliably distinguish hate speech from legitimate expression for another five to 10 years, he said.
  • Zuckerberg's comments came during his testimony Tuesday at a Senate hearing focusing on the Cambridge Analytica scandal.


Facebook is increasingly relying on artificial intelligence to identify content posted to its service that violates its policies, but CEO Mark Zuckerberg said there's one type of content AI struggles with: hate speech.

Indeed, it will take another five to 10 years for AI to be ready to police hate speech and be able to reliably distinguish it from legitimate political expression, Zuckerberg told senators during his testimony at a congressional hearing Tuesday. Although Facebook has worked on AI that could identify hate speech, the error rates are just too high, he said.

"We're not there yet," he said.

Hate speech is a problem for AI because it involves a great deal of nuance, he said. And because Facebook operates in numerous countries around the world, its AI needs to understand those nuances in multiple languages.


"You have to understand what's a slur and whether something is hateful," he said.

In his testimony, Zuckerberg noted that Facebook originally relied on its users to identify objectionable content. In the wake of the 2016 election and reports that Russian-linked actors hijacked Facebook's service to spread fake news and other propaganda, the company has been stepping up efforts to police content on the service. Facebook expects to have some 20,000 people working on security and reviewing content by the end of this year, Zuckerberg said.
