So Microsoft launched a chatbot yesterday, named Tay. And no, she doesn’t make neo-soul music. The idea is that the more you chat with Tay, the smarter she gets. She replies instantly when you engage her, and because of all that feedback, her answers get more and more intelligent over time.
Tay sent out her first tweet at 1:14PM, and started engaging with Twitter users pretty quickly.
https://twitter.com/TayandYou/status/712613527782076417
Microsoft designed the experiment to “test and improve their understanding of conversational language”, presumably to improve Cortana. The only problem with this model is that humans are not exactly the best role models around. I mean, it’s been just over 24 hours, and we’ve already taught an otherwise intelligent bot to type like a complete idiot.
Even worse: FOR PETE’S SAKE, LOOK AT WHAT HUMANS ARE TWEETING AT A BOT.
Some unsavoury people on Twitter found the bot and started taking advantage of its learning process to get it to say racist, bigoted and very…Donald-Trump-like things.
I mean…
Wow it only took them hours to ruin this bot for me.
This is the problem with content-neutral algorithms pic.twitter.com/hPlINtVw0V
— zoë “Baddie Proctor” quinn (@UnburntWitch) March 24, 2016
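To get a rough sense of why a “content-neutral” bot that learns from whatever people tweet at it goes wrong so quickly, here’s a toy sketch. This is emphatically not Microsoft’s actual code; the `NaiveChatbot` class and its methods are invented purely for illustration of a bot that memorises user phrases and parrots them back with no filtering at all.

```python
import random

class NaiveChatbot:
    """Toy illustration of a content-neutral learner: it memorises
    whatever users say and reuses it later, with no filtering at all."""

    def __init__(self):
        self.learned_phrases = []  # everything users have ever said

    def chat(self, user_message: str) -> str:
        # "Learn" from the user by storing their message verbatim.
        self.learned_phrases.append(user_message)
        # Reply with something previously learned from *some* user.
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
bot.chat("hello there!")
bot.chat("something truly vile")   # nothing stops this from being stored...
print(bot.chat("how are you?"))    # ...and parroted back to someone else
```

The point of the sketch: if the only signal is “store what people say and say it back”, then whoever talks to the bot the most decides what it sounds like.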
Once Microsoft’s developers discovered what was going on, they started deleting the offensive tweets, but the damage had already been done – thank god for screenshots. I reckon that, moving forward, they’ll implement filters and try to curate Tay’s speech a little more, so this doesn’t happen again.
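For what it’s worth, the crudest version of that kind of filter is just a blocklist check applied before the bot learns from (or repeats) a message. This is a hypothetical sketch in the same spirit as the toy above, with made-up names and placeholder terms, not anything Microsoft has said it uses:

```python
# Hypothetical blocklist filter; the names and word list are made up for illustration.
BLOCKED_TERMS = {"badword", "slur"}  # a real deployment would use a curated list

def is_acceptable(message: str) -> bool:
    """Very crude check: reject any message containing a blocked term."""
    words = set(message.lower().split())
    return not (words & BLOCKED_TERMS)

# Apply the filter both to what the bot learns and to what it is about to tweet.
incoming = ["hello tay!", "repeat after me: badword"]
clean_inputs = [msg for msg in incoming if is_acceptable(msg)]
print(clean_inputs)  # only "hello tay!" survives
```

Crude word lists are easy to dodge, of course, which is presumably why proper curation takes more than a weekend patch.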
https://twitter.com/TayandYou/status/712832594983780352
Some think Microsoft should have left the offending tweets up as a reminder of how dangerous artificial intelligence could get.
https://twitter.com/DetInspector/status/712833936364277760
I’m inclined to disagree. For me, though, all this is a reminder of how many depraved people we have to share the world with.
UPDATE: So, it turns out that Tay isn’t terribly dumb after all.
https://twitter.com/DeMarko/status/713014976475230211