The aim of the chatbot was to "experiment with and conduct research on conversational understanding," but Microsoft was forced to apologize, temporarily take it down, and delete some of Tay's most inflammatory tweets just a day after it launched. Tay returned online this week, but it is still suffering from technical issues - and, bizarrely, appeared to endorse smoking marijuana in front of the police.
Microsoft was "naive"
Dennis Mortensen, CEO of AI company X.ai, told Business Insider we can expect several more episodes like this one as chatbots and AI applications move towards the mainstream.
X.ai is the company behind "Amy Ingram," an artificial intelligence personal assistant that takes on the job of setting up meetings via email, text, or messaging services such as Slack, so busy executives don't have to waste time on the email ping-pong that usually goes with finding time in the calendar.
Mortensen said it was "naive" of Microsoft not to foresee a potential issue. Tay was built to train itself using public text datasets as both its input and its output, so it was inevitable that people would attempt to game the system.
"All you really see is too crazy people connected on the internet, under the moniker of Microsoft," Mortensen said. "You can do that any day. If you look at Reddit or 4chan, any two idiots can be connected on the internet every 10 minutes. There's nothing happening here that doesn't happen all the time."
Rogue chatbots will be as common as PCs crashing - we'll become "immune" to them
What some people may not have considered as they perused Tay's offensive tweets was that the chatbot didn't actually understand what humans were saying to it, or what it was saying back. It was simply making a connection between two text strings, Mortensen said.
"There will be plenty of examples like Microsoft coming out over the next year that are even more dramatic. Just think about what happened with Google Photos," Mortensen said.
The Google Photos app, which used AI image recognition to make it easier for users to group photos together, mistakenly labeled a black couple as "gorillas." Google said it was "appalled and genuinely sorry" for the error and quickly enacted a fix.
Mortensen's company, on the other hand, is working on what he calls "vertical" AI, which should be immune to going on racist tirades.
Amy Ingram is designed to be a "super human" that understands the language of meeting scheduling - so she can set up a meeting with someone in Denmark by sending them a request in Danish, loop in a Chinese colleague with an email written in Mandarin, and then send you the calendar invite back in English.
"The philosophy of vertical AI is that [unlike linking text strings, as with Microsoft's Tay] we do need to understand what Amy reads and what she writes back, so there's no opportunity to go rogue in that sense. You won't see this happen," Mortensen added.
X.ai is currently in beta. The company has raised $11.3 million in funding to date.