Facial-recognition software fails to correctly identify people '96% of the time,' Detroit police chief says
- In Detroit, facial-recognition software used in police investigations fails "96% of the time," Detroit Police Chief James Craig said.
- Craig said as much during a public meeting on Monday, Vice reported.
- "If we were just to use the technology by itself, to identify someone, I would say 96% of the time it would misidentify," Craig said.
- Police around the US use facial-recognition software, though several major cities have banned its use.
Facial-recognition software used by police to identify people is wildly inaccurate, according to Detroit Police Chief James Craig.
"If we were just to use the technology by itself, to identify someone, I would say 96% of the time it would misidentify," Craig said in a public meeting on Monday, Vice reported.
"If we would use the software only, we would not solve the case 95 to 97% of the time. That's if we relied totally on the software, which would be against our current policy."
The New York Times reported last week on what may have been the first known case of a man being wrongfully arrested, in Detroit, after being misidentified by facial-recognition software.
Robert Julian-Borchak Williams was detained by Detroit police for 30 hours.
Williams told police the person on the CCTV footage did not look like him, The Times reported. A detective responded, "I guess the computer got it wrong."
The city of Detroit uses software developed by a company named DataWorks Plus, which said that facial-recognition tech isn't intended as the sole way of identifying people.
The system doesn't "bring back a single candidate," DataWorks Plus general manager Todd Pastorini told Vice. "It's hundreds. They are weighted just like a fingerprint system based on the probe."
Across the US, police departments are using facial-recognition software developed by a variety of companies.
Major tech players — including Amazon, IBM, and Microsoft — have their own versions of facial recognition for sale, although all three companies said they were reevaluating police use of the technology amid nationwide anti-police-brutality protests.
An under-the-radar tech startup called Clearview AI has taken a different approach: Its client list spans more than 2,200 law-enforcement departments, government agencies, and companies in 27 countries.
Critics of facial-recognition technology have argued for years that the technology is inaccurate and dangerous.
"Facial recognition is a horrifying, inaccurate tool that fuels racial profiling + mass surveillance," Rep. Alexandria Ocasio-Cortez said on June 10. "It regularly falsely ID's Black + Brown people as criminal. It shouldn't be anywhere near law enforcement."
A federal study published in late 2019 found "empirical evidence" of racial bias in facial-recognition software.
Across the board, the study found that facial-recognition software produced "false positives" — inaccurate matches — far more often when the person was Asian or Black than when the person was white.
"The team saw higher rates of false positives for Asian and African American faces relative to images of Caucasians," the report from the National Institute of Standards and Technology read.
How much more often?
"The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm," the report said.
That's especially problematic in the case of the Detroit Police Department, which has used facial-recognition software 70 times so far in 2020. In 68 of those cases, the software was used on a Black person.