- Twitter will now prompt users to review whether tweets are "harmful or offensive" before posting.
- Twitter says the feature can distinguish between offensive content, sarcasm, and "friendly banter."
- In a test, 34% of people prompted revised their reply or decided not to reply at all, Twitter said.
Twitter on Wednesday rolled out a new feature that prompts users to check whether their tweets are "potentially harmful or offensive" before posting them.
The company said the prompts will pop up on English-language Twitter accounts on Apple and Android devices from Wednesday.
"People come to Twitter to talk about what's happening, and sometimes conversations about things we care about can get intense and people say things in the moment they might regret later," Twitter said in a blog post.
The feature uses artificial intelligence (AI) to detect potentially harmful language in a reply before it is posted.
Twitter said in the blog post that 34% of people who received a prompt went on to revise their initial reply or decided not to post it at all. Users in the test wrote 11% fewer offensive replies after being prompted for the first time, Twitter said.
Users were also less likely to receive offensive and harmful replies back, Twitter added.
In early testing, Twitter's systems struggled to tell the difference between offensive content and jokes between friends. But Twitter said that the feature can now distinguish between "sarcasm and friendly banter" and takes into account "the nature of the relationship between the author and replier."
The news comes after English soccer teams, players, and other athletes staged a four-day boycott of social media to protest racist abuse aimed at Black players online.
The new content prompts aren't the only feature Twitter has been trialing in an attempt to promote a more amicable environment on the platform. Last year, the company began warning users that they should read an article before posting it to "help promote informed discussion." This feature is still being tested.