- Researchers at Harrisburg University announced Tuesday that they had built facial recognition software that could predict whether someone is likely to become a criminal based on a picture of their face, sparking immediate backlash across the internet.
- The researchers — including two professors and a Ph.D. student who is a former NYPD officer — claimed that the software had an 80% success rate and "no racial bias."
- They said the software could "extract minute features in an image that are highly predictive of criminality," but they have not published research to back up that claim.
- By Thursday morning, Harrisburg University had pulled the web page announcing the new software. The university told Business Insider that "the faculty are updating the paper to address concerns raised."
A team of researchers made waves this week with a bold, as-yet-unsubstantiated claim: They built software, they said, that can predict whether someone is a criminal based on a picture of their face.
In a now-deleted press release, Harrisburg University announced that the technology is "capable of predicting whether someone is likely going to be a criminal." The release said the software was built by professors Nathaniel Ashby and Roozbeh Sadeghian alongside NYPD veteran Jonathan Korn, a Ph.D. student.
In the 24 hours following its publication, the release was met with swift backlash from academics, data analysts, and civil liberties advocates, who said the Harrisburg University researchers' claims were unrealistic and irresponsible.
Critics also highlighted the supposed technology's potential for bias, especially given that existing facial recognition software has been found to misidentify people of color up to 100 times more often than white people.
Carl Bergstrom, a University of Washington biology professor who's writing a book on misinformation and data, called the Harrisburg project "racist bullsh--." Input's Mehreen Kasana described it as "21st Century phrenology." Journalist and data scientist Dan Nguyen slammed it as "stupid" and "scary, since some cops will believe it."
By Thursday morning, the press release had been removed, but it can still be read via internet archives.
A Harrisburg University spokesperson told Business Insider that "the faculty are updating the paper to address concerns raised." Ashby, Sadeghian, and Korn did not immediately respond to a request for comment.
It's hard to know exactly what the Harrisburg researchers built, given that the original release didn't include any data or original research documents to substantiate their claims. The findings — titled "Deep Neural Network Model to Predict Criminality Using Image Processing" — were set to be published in an upcoming Springer Nature research book, according to the press release.
The researchers initially claimed that their technology "can extract minute features in an image that are highly predictive of criminality."
"Crime is one of the most prominent issues in modern society. Even with the current advancements in policing, criminal activities continue to plague communities," Korn said in the release. "The development of machines that are capable of performing cognitive tasks, such as identifying the criminality of person from their facial image, will enable a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime from occurring in their designated areas."
The now-deleted release said the researchers' next step is to seek out "strategic partners to advance this mission."