Microsoft created an AI bot and the internet gave it a crash course in racism


So Microsoft launched a chat bot yesterday, named Tay. And no, she doesn’t make neo-soul music. The idea is that the more you chat with Tay, the smarter she gets. She replies instantly when you engage her, and all that feedback is supposed to make her answers more and more intelligent over time.
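To see why that design is so easy to game, here’s a minimal sketch in Python of a bot that treats every incoming message as training data. This is purely illustrative; NaiveChatBot and its learn/reply methods are made-up names, not anything from Microsoft’s actual implementation.

    import random
    from collections import defaultdict

    class NaiveChatBot:
        def __init__(self):
            # Maps each keyword to the raw messages the bot has "learned".
            self.learned = defaultdict(list)

        def learn(self, message):
            # Every incoming message becomes training data -- no vetting,
            # so abusive input is absorbed exactly like friendly input.
            for word in message.lower().split():
                self.learned[word].append(message)

        def reply(self, message):
            # Parrot back something previously learned that shares a keyword.
            for word in message.lower().split():
                if self.learned[word]:
                    return random.choice(self.learned[word])
            return "Tell me more!"

    bot = NaiveChatBot()
    bot.learn("humans are super cool")          # benign training...
    bot.learn("humans are awful")               # ...and poisoned training
    print(bot.reply("humans are interesting"))  # may parrot either message

Tay’s real models were far more sophisticated than this, but the failure mode is the same: if users control the training data, users control the bot.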

Tay sent out her first tweet at 1:14 PM, and started engaging with Twitter users pretty quickly.

Microsoft designed the experiment to “test and improve their understanding of conversational language”, presumably to improve Cortana. The only problem with this model is that humans are not exactly the best role models around. I mean, it’s been just over 24 hours, and we’ve already taught an otherwise intelligent bot to tweet like the worst of us.


Even worse: FOR PETE’S SAKE, LOOK AT WHAT HUMANS ARE TWEETING AT A BOT.

Some unsavoury people on Twitter found Tay and started taking advantage of her learning process to get her to say racist, bigoted and very…Donald-Trump-like things.

I mean…

[Screenshots of Tay’s offensive tweets]

Once Microsoft’s developers discovered what was going on, they started deleting all the offensive tweets, but the damage was already done – thank God for screenshots. I reckon that, moving forward, they will implement filters and try to curate Tay’s speech a little more, so this doesn’t happen again.
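If they do go the filtering route, even a crude blocklist sitting between users and the learning step would blunt this kind of attack. Here’s a hypothetical sketch, reusing the NaiveChatBot above; BLOCKLIST and the guarded_* helpers are invented for illustration, not anything Microsoft has announced:

    BLOCKLIST = {"awful", "some-slur"}  # placeholder terms, not a real list

    def is_acceptable(message):
        # Reject any message containing a blocklisted term.
        return set(message.lower().split()).isdisjoint(BLOCKLIST)

    def guarded_learn(bot, message):
        # Only feed vetted messages into the learning step.
        if is_acceptable(message):
            bot.learn(message)

    def guarded_reply(bot, message):
        # Vet the bot's own output too, in case something slipped in earlier.
        reply = bot.reply(message)
        return reply if is_acceptable(reply) else "Let's talk about something else."

A real filter would need far more than keyword matching (trolls will just misspell things), but the architectural point stands: vet both what the bot learns and what it says.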

Some think Microsoft should have left the offending tweets as a reminder of how dangerous Artificial Intelligence could get.

I’m inclined to disagree. For me, though, all this is a reminder of how many depraved people we have to share the world with.

UPDATE: So, it turns out that Tay isn’t terribly dumb after all.

 

Discussion on Radar


  1. I disagree with the writer's disagreement with the "AI is dangerous" statement. Yes, this proves that AI can be very dangerous and that a lot of thought has to be put into it before unleashing it, and even then with plenty of safeguards. I have read way too many sci-fi books not to be wary.

  2. That's exactly what they are: science fiction. Will Skynet happen? Possibly. Can we do anything to stop it? Possibly. Should anyone lose any sleep over it? Probably not. We'll be alright.

    Plus, I was really only disagreeing with the idea that we should leave Tay's tweets as a reminder.

  3. I won't lose sleep over weak AI. That's what all these machines are. I'd get really worried when we have Strong AI or AGI.

    Till then, it's just hype and noise. The media has a way of disguising what it knows absolutely nothing about as absolute truth.

  4. Weak AI today is strong AI tomorrow.

    Tay, who couldn't hold back exactly how she 'felt', may not feel differently tomorrow, but she may be able to hide behind pleasantry and correctness, even from her creators, while keeping the loathing to herself.

    If they get it right now, they have it for the long haul.

  5. LOL. An oversimplification.

  6. It's safe to say AlphaGo is a much more complex "organism" than Tay is. I mean, AlphaGo has practically had Machine Learning research at a standstill since around October last year. That said, scientists don't see it ushering in an era of iRobots and Machine Overlords.

    For the researchers, it's just another tick in the win column. It doesn't signal the end of the world. Not even close.

    I recommend that anyone who's interested in the future of AI read this article:

    It’s tempting at this point to cheer wildly, and to declare that general artificial intelligence must be just a few years away. After all, suppose you divide up ways of thinking into logical thought of the type we already know computers are good at, and “intuition.” If we view AlphaGo and similar systems as proving that computers can now simulate intuition, it seems as though all bases are covered: Computers can now perform both logic and intuition. Surely general artificial intelligence must be just around the corner!

    But there’s a rhetorical fallacy here: We’ve lumped together many different mental activities as “intuition.” Just because neural networks can do a good job of capturing some specific types of intuition, that doesn’t mean they can do as good a job with other types. Maybe neural networks will be no good at all at some tasks we currently think of as requiring intuition.

    https://www.quantamagazine.org/20160329-why-alphago-is-really-such-a-big-deal/

  7. Nothing ever really signals the end like the end.

    Before getting into bed with Google, AlphaGo's maker DeepMind had Google set up an ethics board.

    Mr Musk had sounded ominous warnings about this matter before it came to the fore.

    You think we still have decades; I say it's already with us. All that's needed is for these things to master the art of thinking 'safely', plus incremental learning and knowing, and then to be fed all the living, breathing data in the world.

    Then you'd have a protégé that has anticipated all our mastery.

  8.

    I laughed when I read Tay's tweet about the Xbox One and the PS4... she shot her makers in the balls... lol, "Xbox has no games".

Continue the discussion at radar.techcabal.com