
NYC police have spent millions on a tech company that claims it can use AI to monitor social media and predict future criminals

Sep 12, 2023, 01:58 IST
Insider
The New York Police Department has spent more than $8 million on Voyager Labs products. Eduardo Munoz/Reuters
  • The NYPD signed a contract for more than $8 million with Voyager Labs in 2018.
  • The company says it uses AI to monitor online behavior.

The New York City Police Department has spent millions buying products from a tech company that claims it can use social media to track and even predict crimes, a new report reveals.

The Surveillance Technology Oversight Project, a nonprofit dedicated to combating mass surveillance and protecting privacy, has released redacted versions of NYPD contracts with Voyager Labs that show the department signed a contract for more than $8 million with the company in 2018.

Voyager Labs is a tech company that says it produces "AI-based investigation solutions." It sells products across various industries, including law enforcement, the US public sector, and corporate security, according to its website.

While law enforcement's use of social media analytics is nothing new, Voyager Labs says its products are capable of more than surveillance, The Guardian reported. The company has claimed its products can also predict future crime, according to an investigation from the Brennan Center for Justice, a law and public policy institute.

"Voyager Discover takes Voyager Analytics' abilities a step further, analyzing not only who is most influential but also who is most invested in a given stance: emotionally, ideologically, and personally," says a Voyager Labs sales pitch to the Los Angeles Police Department, obtained by the Brennan Center for Justice. "This ability moves the discussion from those who are most engaged online to those most engaged in their hearts."


Voyager Labs has also claimed its AI can assign risk scores to social media users regarding their "ties to or affinity for Islamic fundamentalism or extremism," according to the Brennan Center for Justice report.

Another of the company's products, VoyagerCheck, "provides an automated indication of individuals who may pose a risk," according to the Voyager Labs website.

William Colston, vice president for global marketing at Voyager Labs, told Insider the company uses only publicly available data and that its software "is not intended to be a substitute for rigorous human oversight and analysis."

"We categorically reject any notion that our software is designed to infringe upon civil liberties or freedom of speech, or that it is biased in any way," Colston wrote to Insider.

Will Owen, communications director for the Surveillance Technology Oversight Project, called the use of these products "invasive" and "alarming" in a press release.


An NYPD spokesperson told Insider the department uses the software to monitor suspects for a variety of crimes — like gun violence, terrorism, and human trafficking — but clarified that it does not currently use the predictive tools that Voyager Labs offers.

"The Department uses these types of technologies to aid in active investigations and does not use features that would be described as predictive of future criminality," the spokesperson said.

Meta, the parent company of Facebook and Instagram, sued Voyager Labs in January, claiming that the tech company created thousands of fake accounts to scrape data from more than 600,000 users, The Guardian reported.

Voyager Labs has since filed a motion to dismiss the lawsuit and is awaiting a court decision, according to Colston.

"If Meta is truly committed to protecting its users, and acting in the public interest, then the use of analytical software by those trying to stop malicious actors should be embraced and encouraged," Colston wrote.


This story was updated on September 11 to include comment from William Colston received after publication.
