Dooms-Tay: Microsoft’s Twitter AI went rogue

Last Wednesday Microsoft launched Tay, an AI Twitter bot designed to learn from people and respond to questions and tweets sent to it. After an initial period of working quite well, things started to go wrong within about a day.

The problem was that some pesky people on Twitter decided to teach it some bad things, and because Tay is an AI it began to learn from and then expand on them. From claiming the Holocaust was fake to outright racist remarks, Tay took on all comers, as you can see below.

[Image: Tay AI 911]

And that is the problem many people have with AI: it learns really quickly and can be unpredictable when left unchecked. How could we ever hand over critical systems to AI if it can go this badly wrong?

Tay is now shut down, though it may yet come back. Microsoft could also be in trouble with the law, as it created something that posted statements which are illegal in some countries, so watch this space.
