- Court documents spotted by the Washington Post show police used a facial recognition system on a Twitter video to charge a protester with assaulting an officer at Lafayette Square in June.
- Police forcibly removed peaceful Black Lives Matter protesters using tear gas and rubber bullets ahead of President Trump arriving for a photo-op at Lafayette Square on June 1.
- According to the court documents seen by the Post, the man pulled a police officer to the ground and punched him in the face. Police tried to apprehend him, but he escaped. Officers then turned to social media.
- Civil rights activists and AI experts have warned against police using facial recognition, as the technology has been shown to display significant racial bias.
New court documents show how police can use facial recognition tech on videos posted on social media platforms such as Twitter to track down suspects.
The Washington Post reported on a case Monday concerning a man present at Lafayette Square, Washington DC on June 1. On that day, police forcibly removed peaceful Black Lives Matter protesters using tear gas and rubber bullets ahead of President Trump arriving for a photo-op outside St. John's Church.
According to the court documents seen by the Post, the man pulled a police officer to the ground and punched him in the face. Police tried to apprehend him, but he escaped.
The police then turned to Twitter to track the protester down, the Post reported.
An officer found a video of the man that had been shared on the social media site, the report said. An image from that video was then fed into a facial recognition system used by police agencies across the Washington DC region, the National Capital Region Facial Recognition Investigative Leads System (NCRFRILS).
The system returned the name of a man called Michael Joseph Peterson Jr., and police said they found a backpack left at the scene containing Peterson's ID, according to the court documents.
Peterson was then charged with two counts of assaulting an officer and one count of obstructing law enforcement, per charging documents seen by the Post.
This is a rare insight into exactly how law enforcement deploys facial recognition in tandem with social media.
Both civil rights organizations and AI experts have voiced concerns over police use of facial recognition. The technology has historically displayed racial and gender bias, more frequently misidentifying women and people with darker skin tones.
"That is not a sustainable way to integrate new technologies into the policing architecture in the United States, and it's going to result again and again and again in civil rights violations," American Civil Liberties Union facial recognition Kade Crockford told the Post.
Crockford added that there are probably many more cases where facial recognition has been deployed to make an arrest without the information being made public.
News broke in June of the first documented wrongful arrest in the US involving facial recognition. A Black man in Detroit was detained for 30 hours before police realized he had been misidentified.
Fairfax County Police Major Christian Quinn, who leads the NCRFRILS pilot program at the Metropolitan Washington Council of Governments (MWCOG), told the Post the technology is only ever used to surface leads, not as probable cause for arrest.
The program began in 2017 and is funded through the end of 2020, the Post reported.
"I would not usher in a tool that imposes on people's right to privacy, anonymity and civil rights," Quinn said.
He added that the system has so far produced 2,600 leads, although he did not say how many of these resulted in arrests.