Twitter looks at flagging misinformation with bright red and orange badges
- Twitter is experimenting with the idea of placing brightly colored labels beneath misinformation tweeted by public figures, the company confirmed.
- On Thursday, NBC News published screenshots from a leaked demo of Twitter's colored labels, one of which showed a tweet by 2020 presidential candidate Bernie Sanders labeled "harmfully misleading."
- According to NBC, the basic idea is that the Twitter community will rate how likely or unlikely a tweet is to be harmfully misleading, and their feedback will inform how that tweet is labeled (if it's labeled at all).
- A Twitter spokeswoman said the company is "exploring a number of ways" to fight fake news on its platform.
Twitter is experimenting with placing brightly colored labels beneath misinformation tweeted by public figures.
On Thursday, NBC News published screenshots from a leaked demo of Twitter's colored label experiment, one of which showed a tweet by 2020 presidential candidate Bernie Sanders labeled "harmfully misleading."
The screenshots also show a tweet about the coronavirus outbreak and a tweet about whistleblowers, both from verified accounts.
Twitter confirmed to Business Insider that it was looking at new ways to address misinformation.
The demo incorporated both orange and red badges, with red badges presumably indicating especially damaging misinformation.
According to Thursday's NBC report, the basic idea is that Twitter's community will rate how likely or unlikely a tweet is to be harmfully misleading, and their feedback will inform how that tweet is labeled (if it's labeled at all).
A Twitter spokeswoman said the company is "exploring a number of ways" to fight fake news on its platform.
She added: "This is a design mockup for one option that would involve community feedback. Misinformation is a critical issue and we will be testing many different ways to address it."
Since the labels are only a design mockup, it's possible the feature will never launch.
Earlier this month, the social media site announced a ban on so-called "deepfakes" and other manipulated content that could cause serious harm.
In the run-up to the 2020 presidential election, the emergence of deepfake technology has caused alarm in political circles for its potential to spread misinformation.
In the context of politics, the tech can be used to create (often startlingly lifelike) videos that falsely show politicians making offensive statements, behaving embarrassingly, or saying things they didn't actually say.
One of the most high-profile manipulated videos to date falsely depicted House Speaker Nancy Pelosi slurring her words. While it wasn't technically a deepfake, the clip racked up millions of views and sparked online speculation that she'd been drunk.
An Axios investigation from June 2019 found that presidential candidates' campaign teams had done little to prepare for deepfake videos, which could be used to undermine their campaigns.