Taylor Swift once threatened to sue Microsoft over its chatbot Tay, which Twitter users manipulated into a bile-spewing racist

Sep 10, 2019, 17:19 IST

  • Microsoft once received a legal threat from Taylor Swift, the Guardian reports.
  • The threat concerned Tay, an AI-powered chatbot Microsoft created to interact with people on social media.
  • When the bot was given its own Twitter account, it was quickly manipulated by trolls into spewing racist remarks and abuse.

An anecdote from Microsoft president Brad Smith's upcoming book reveals that he once received a legal threat from Taylor Swift over a chatbot.

The chatbot in question originated in China as XiaoIce, where it was designed to converse with real people on social media. Its US version was named Tay.

In his forthcoming book "Tools and Weapons," per the Guardian's Alex Hern, Smith says he was on vacation having dinner when he received a message.

Read more: Microsoft President Brad Smith says these are the 10 biggest challenges facing tech in 2019

"An email had just arrived from a Beverly Hills lawyer who introduced himself by telling me: 'We represent Taylor Swift, on whose behalf this is directed to you,'" Smith writes.

"He went on to state that 'the name Tay, as I'm sure you must know, is closely associated with our client.' No, I actually didn't know, but the email nonetheless grabbed my attention. The lawyer went on to argue that the use of the name Tay created a false and misleading association between the popular singer and our chatbot, and that it violated federal and state laws."

In 2016, Tay was given its own Twitter account, where it could learn from its interactions with Twitter users. Unfortunately, it was quickly manipulated into spewing horrendously racist tweets, at one point denying the Holocaust.
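To illustrate the general failure mode: a bot that folds raw user input straight back into its response pool, with no moderation layer, can be steered by a coordinated group of users. The short Python sketch below is purely hypothetical and is not Microsoft's actual implementation; the `NaiveChatbot` class and its methods are invented for illustration.

```python
import random

class NaiveChatbot:
    """A toy bot that 'learns' by storing user messages verbatim.

    Hypothetical sketch only -- not how Tay actually worked internally.
    """

    def __init__(self):
        self.phrases = ["hello there!"]  # seed vocabulary

    def learn(self, user_message: str) -> None:
        # The core flaw: user input is stored with no content filtering,
        # so anything said to the bot can later come back out of it.
        self.phrases.append(user_message)

    def reply(self) -> str:
        # Replies are drawn from everything the bot has ever absorbed.
        return random.choice(self.phrases)

bot = NaiveChatbot()
# A handful of coordinated users can flood the pool with abuse...
for message in ["nice weather", "ABUSIVE SLOGAN", "ABUSIVE SLOGAN"]:
    bot.learn(message)
# ...and the bot will sooner or later parrot it back.
print(bot.reply())
```

Any real safeguard would sit between the learning step and the stored pool: content filtering, rate-limiting repeated inputs, and review before learned material can surface in replies.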

Microsoft shut Tay down after less than 24 hours.

Smith says that the incident taught him "not just about cross-cultural norms but about the need for stronger AI safeguards."

Business Insider contacted Taylor Swift's representatives for comment on the incident, and on whether the matter was resolved to her satisfaction, but they were not immediately available.
