Microsoft unintentionally gave us a taste of our own medicine with its racist artificial intelligence chatbot 'Tay' - here's why we should be thankful
The Wild West of the internet is notoriously good at making bad decisions. In 1998, a collective of internet users chose Hank the "Angry Drunken Dwarf" as the most beautiful person in the world.
In 2012, a coordinated internet campaign picked a school for the deaf as the winning recipient of a Taylor Swift concert.
And this year, some on the internet helped turn an advanced artificial intelligence chatbot, programmed to learn from human interactions, into a racist, sexist bot called Tay - all in just one day.
Soon after Tay's bigoted tweets started going viral, Microsoft Research's Peter Lee apologized in a blog post: "Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time."
But we should also thank Microsoft for reminding us that we need to be more deliberate about how we interact with this kind of AI technology, since these programs will only magnify the ideas and information we feed them.
This concept is best captured by a classic computer science aphorism: "Garbage in, garbage out." In other words, the quality of the input determines the quality of the output. Tay, for example, was fed racist and sexist ideas, learned them, and tweeted them back.
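"Garbage in, garbage out" can be illustrated with a toy sketch - this is not how Tay actually worked (Microsoft has not published its internals), just a minimal bot that, like Tay, learns only from what users feed it:

```python
import random

class ParrotBot:
    """A toy chatbot whose entire 'knowledge' is the raw input it receives."""

    def __init__(self):
        self.memory = []  # everything the bot has ever been told, uncurated

    def learn(self, message):
        # No filtering: toxic input is stored exactly like benign input.
        self.memory.append(message)

    def reply(self):
        # Output quality is bounded by input quality - the bot can only
        # echo what it has absorbed.
        return random.choice(self.memory) if self.memory else "..."

bot = ParrotBot()
bot.learn("Humans are wonderful.")
bot.learn("Humans are terrible.")  # one toxic input now has an even chance of being echoed
print(bot.reply())
```

The point of the sketch: once bad input is in the training data, nothing downstream distinguishes it from good input, so the poison propagates straight to the output.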
The internet changed lives and altered history. Artificial intelligence will likely have a similar impact on the world if it becomes equally ubiquitous. AI systems, though still in the early stages of development, have been making headlines for beating humans at various tasks for more than a decade.
But the anticipation and excitement that follow each new AI development are also accompanied by fear - and for good reason. What garbage will be fed into these machines? How will we choose to use them? Who will set AI's moral compass?
Some of the greatest minds of this century warn us that artificial intelligence is an "existential threat" to humanity. Stephen Hawking said in a 2014 interview with the BBC, "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." During a 2015 Reddit AMA, Bill Gates said, "I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent…A few decades after that though the intelligence is strong enough to be a concern."
We should be cautious of an entity that can evolve, grow, and learn faster than we do. We should build in filters that serve as the equivalent of a human moral compass. We should protect ourselves from being either enslaved by or enamored with the hyper-intelligent beings we create.
Elon Musk said in a 2014 interview at the AeroAstro Centennial Symposium, "I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish."
Artificial intelligence is no longer science fiction or a technology of the distant future; it is very real.
The debate should be about how much we should fear a possible rise of artificial intelligence, and how we should address it. But while many of the accompanying challenges force us to look ahead, Microsoft's Tay shines a light on how important it may be to look inward, at how we interact with and use technology.