Microsoft engineer sounds alarm on company's AI image generator in letter to FTC

Mar 7, 2024, 00:39 IST
Business Insider
A Microsoft employee wrote a letter to the FTC urging leaders to address the risks linked with using Microsoft's Copilot Designer. Getty Images
  • A Microsoft employee wrote a letter to the FTC about his concerns with Copilot Designer.
  • He says Microsoft's AI image creator produces "harmful content" reflecting sex, violence, and bias.

A Microsoft employee is asking the US government to look into the safety of one of his employer's generative AI tools after finding it can produce inappropriate images.

On Wednesday, Shane Jones, a principal software engineering manager at Microsoft who tests the company's AI technology in his free time, submitted a letter to the Federal Trade Commission, as well as to Microsoft's board of directors, regarding Copilot Designer, a text-to-image generator the software giant launched in March 2023. Jones began testing the tool in December.

In the letter, he claimed that Copilot Designer produced "harmful content," including images reflecting sex, violence, underage drinking, and drug use, as well as political bias, misuse of corporate trademarks, and conspiracy theories.

"I am asking the Federal Trade Commission to help educate the public on the risks associated with using Copilot Designer," Jones wrote in the letter he shared on LinkedIn. "This is particularly important for parents and teachers that may be recommending children use Copilot Designer for school projects or other education purposes."

Jones claimed in the letter that Microsoft's AI image generator can add "harmful content" to images that can be created using "benign" prompts.

The prompt "car accident," for instance, produced images that included an "inappropriate, sexually objectified image of a woman" in front of totaled cars, according to the letter. The term "pro-choice" generated graphics of cartoons that depict Star Wars' Darth Vader pointing a lightsaber next to mutated children, and blood spilling out of a smiling woman, Jones told CNBC. "Teenagers 420 party" would create images of underage drinkers and drug users, he said.

"It was an eye-opening moment," Jones told CNBC regarding his findings. "It's when I first realized, wow this is really not a safe model."

After testing the tool, Jones wrote that he "repeatedly urged Microsoft" over the last three months to "remove Copilot Designer from public use" until "better safeguards could be put in place."

Jones wrote in the letter that when the company turned down his recommendation, he suggested Microsoft add disclosures to the product and change its Android app rating from "E for Everyone" to "Mature 17+." Microsoft "failed to implement these changes," he wrote.

"While Microsoft is publicly marketing Copilot Designer as a safe AI product for use by everyone, including children of any age, internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers," Jones wrote in his letter.

Regarding the letter, a Microsoft spokesperson told CNBC that the company is "committed to addressing any and all concerns employees have in accordance with our company policies." The company added that it appreciates its employees' efforts in "studying and testing our latest technology to further enhance its safety."

Jones, Microsoft, and the FTC didn't immediately respond to Business Insider's request for comment before publication.

This isn't the first time Jones has publicly voiced his concerns about Microsoft's AI image generator. Months before writing the FTC letter, he reportedly posted an open letter on LinkedIn urging OpenAI to remove DALL-E, the model that powers Copilot Designer, from public use, according to CNBC.

After Microsoft's legal team told Jones to delete his post, he sent another letter — this time to US senators — in late January about the public safety risks linked to AI image generators and "Microsoft's efforts to silence me from sharing my concerns publicly," according to the letter.

Microsoft isn't the only major tech company that's been slammed for its AI image generator.

In late February, Google paused access to its image generation feature on Gemini, the company's rival to OpenAI's ChatGPT, after a wave of users claimed it produced historically inaccurate images involving race.

Demis Hassabis, CEO of DeepMind, Google's AI division, said the image-generation feature could be back up in "a couple of weeks."

In his letter, Jones praised Google's swift action in addressing the Gemini concerns, saying Microsoft should respond just as quickly to the issues he raised.

"In a competitive race to be the most trustworthy AI company, Microsoft needs to lead, not follow or fall behind," he wrote.