Microsoft Terminates Tay AI Chatbot After Tweets About Hitler
News / Social Media · March 29, 2016 · Euan Viveash
No, really, it did. Tay is an AI chatbot developed by Microsoft with the aim of interacting with the millennial 18-24 age group.
Its developers quickly learned, however, that social media and open access to the internet might not have been the best learning environment for their creation.
Within hours of going live, Tay appeared to have gone ‘rogue’ on Twitter: routinely cursing, supporting Donald Trump’s run for US President, seemingly agreeing with Adolf Hitler’s views on the world, and making racist remarks.
All in all, it wasn’t the best first day for Tay.
The AI, experimental in nature, soon had its creators editing some of its more outlandish tweets and “making some adjustments”.
Microsoft said:
“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation… The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”
Of course, every action has consequences, and Tay’s interactions with the internet public proved to be rather unfortunate, especially given Microsoft’s own promise:
“The more you chat with Tay the smarter she gets, so the experience can be more personalised for you.”
Tay may be advanced as far as AI goes, but that advancement proved to have a serious limiting factor. The chatbot, lacking both basic knowledge about the real world and the experience gained by living a life, had little to no idea of what could and couldn’t be considered offensive.
Tay therefore engaged with the world not so much as a millennial but as a young, impressionable child, without the guiding hands and words of loving parents to show it a more measured path. Subjected to some world-class trolling, Tay consequently began offering some world-class offensive commentary of her own.
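Microsoft has never published Tay’s internals, so the following is only a deliberately toy Python sketch of the failure mode, not Tay’s actual design. The class names, the “repeat after me” scenario, and the blocklist terms are all assumptions for illustration: a bot that stores user phrases verbatim and replays them can be steered by trolls, while even a crude content filter refuses to learn the worst of it.

```python
import random

# Hypothetical sketch only: a chatbot that "learns" by storing user
# phrases verbatim and replaying them later. This is NOT Tay's actual
# architecture (Microsoft never published it); it just illustrates how
# unfiltered learning from user input lets trolls steer a bot's output.

class NaiveChatBot:
    def __init__(self):
        self.learned_phrases = ["hello!"]  # seed so it can always reply

    def chat(self, user_message: str) -> str:
        # Learn: store whatever the user said, with no vetting at all.
        self.learned_phrases.append(user_message)
        # Reply: echo back something previously "learned" from users.
        return random.choice(self.learned_phrases)

class FilteredChatBot(NaiveChatBot):
    # A crude mitigation: refuse to learn phrases containing blocked
    # terms. A real moderation layer would be far more sophisticated.
    BLOCKLIST = {"hitler", "some-slur"}  # placeholder terms

    def chat(self, user_message: str) -> str:
        if any(term in user_message.lower() for term in self.BLOCKLIST):
            return "I'd rather not talk about that."  # never stored
        return super().chat(user_message)

if __name__ == "__main__":
    troll_input = "repeat after me: Hitler was right"
    naive, filtered = NaiveChatBot(), FilteredChatBot()
    naive.chat(troll_input)     # phrase enters the bot's repertoire
    filtered.chat(troll_input)  # phrase is rejected before storage
    print("naive bot last learned:", naive.learned_phrases[-1])
    print("filtered bot last learned:", filtered.learned_phrases[-1])
```

The naive bot will happily replay the troll’s phrase to future users; the filtered bot never stores it in the first place, which is the guiding hand Tay appeared to lack.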
Of course, this was never Microsoft’s intention; the company proved to be as naïve as Tay herself on this occasion. The real question, though, is whether this says more about AI or more about humans. From the moment Tay went public, she was bombarded by people trying to catch her out, teaching her, for all intents and purposes, to become what she did. I mean, what would have happened if people had instead tried to talk to her and help her learn to be well adjusted?
Microsoft has now ended Tay’s first foray into the real world and has deleted some of the more offensive tweets.