
Microsoft took an ethical stand on facial recognition just days after being blasted for a sinister AI project in China

Apr 17, 2019, 16:18 IST


  • Microsoft announced on Tuesday that it rebuffed a request from a US police agency to install its facial recognition software on officers' cars and body cameras.
  • Microsoft President Brad Smith cited human rights concerns, saying that AI bias could mean a disproportionate number of women and ethnic minorities would end up being held for questioning.
  • While Microsoft is taking an ethical stand on AI, less than a week ago it was accused of being complicit in helping China develop facial analysis AI, which could be used to oppress its Uighur Muslim population.
  • Visit BusinessInsider.com for more stories.

Microsoft President Brad Smith announced on Tuesday that the company refused a request from a US police department to install its facial recognition software, citing human rights concerns, Reuters reports.

Speaking at a Stanford University conference on ethical AI, Smith said Microsoft had received the request from a California law enforcement agency to install the technology in officers' cars and body cameras.

"Anytime they pulled anyone over, they wanted to run a face scan," Smith said, adding the officer would check the person's face against a database.

Read more: Artificial intelligence experts from Facebook, Google, and Microsoft called on Amazon not to sell its facial recognition software to police


He said the company concluded that the inherent bias in facial recognition - which is largely trained on white male faces - meant it would be less accurate at identifying women and people from ethnic minorities, so they would end up being held for questioning more frequently.

Smith called for tighter regulation on facial recognition and AI in general, warning that data-hungry companies could end up in a "race to the bottom." His comments come as pressure is building on Amazon to stop selling its facial recognition "Rekognition" software to law enforcement. Amazon shareholders are due to hold a vote on the issue on May 22.

However, Smith said Microsoft had provided the software to a US prison. Smith also told Business Insider in February that an all-out ban on selling facial recognition software to government agencies would be "cruel in its humanitarian effect."

Less than a week ago, the company's reputation took a bruising when it was accused of being complicit in helping a Chinese military-run university develop AI facial analysis, which critics said China could then use to oppress its citizens - specifically the country's Uighur Muslim minority.

Sen. Marco Rubio of Florida, one of the US government's most vocal China critics, described Microsoft's partnership with the Chinese military as "deeply disturbing" and "an act that makes them complicit" in China's human rights abuses.


Microsoft did not immediately respond to Business Insider's request for comment.

