Unsurprisingly, Microsoft's AI Bot Tay Was Tricked Into Being Racist
A mere day after Microsoft's artificial intelligence experiment launched, the Internet taught it to be racist, xenophobic and inappropriate.
By Mikah Sargent | March 24, 2016
Surprise, surprise — just a day after Microsoft's new artificial intelligence, Tay, launched on several social platforms, it was corrupted by the Internet.
If you haven't heard of Tay, it's a machine learning project created by Microsoft that's supposed to mimic the personality of a 19-year-old girl. It's essentially an instant messaging chatbot with a bit more smarts built in.
Those smarts give Tay the ability to learn from the conversations she has with people, and that's where the corruption comes into play.