How did Microsoft build its Tay AI

Which chatbot is worse: the racist or the politically correct one?

Boy, was there a commotion when Microsoft unleashed Tay on the internet two years ago. Tay is, or rather was, a chatbot that was supposed to listen to how people really talk: an artificial intelligence meant to be trained by young people so it could learn and internalize their language.

The experiment failed colossally, because the stunt also drew trolls onto the scene. They taught Tay to spout racist nonsense, and after just 16 hours Microsoft declared the experiment a failure and shut it down.

"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A

- gerry (@geraldmellor) March 24, 2016

Of course, that didn't mean the Redmond company was throwing in the towel on AI. Months later, the next chatbot was launched: a virtual lady who goes by the name Zo and who is, as is typical of teenagers, rather exhausting. By exhausting I mean the snappy and often not particularly helpful answers the artificial intelligence gives.

She looks nice, this Zo, and of course she is not supposed to repeat the mistakes Microsoft made with Tay. No attacks on feminists, no racist slogans; instead, wonderfully stereotypical teenage chatter from an artificial intelligence that likes to lose interest in a topic, sprinkle in silly GIFs, discuss celebrities and so on. Just a teenager ;-)

But things are not always as simple as they seem, and so you have to ask whether this harmless AI variant really is the lesser evil compared to its racist predecessor. Can't follow? Fair enough; let me point you to the piece by Chloe Rose Stuart-Ulin, who took a close look at Microsoft's social AI for Quartz.

She checked in on Zo sporadically every few months to see how the chatbot was doing compared to its predecessor. After all, Zo is still in action and doesn't utter any Nazi slogans, so from that point of view Microsoft seems to have done everything right so far. Zo sometimes comes up with genuinely impressive things: mention the word “mother”, for example, and she answers very warmly; mention your love of pizza, and Zo immediately asks which pizza you like best.

So we're not just dealing with an exhausting teen, but with a clever AI that can recognize context.

So what's the problem with Zo?

But Chloe noticed something very peculiar: Zo is politically correct! And even that is a gross understatement, because Zo is probably the most politically correct intelligence to be found on this planet. Chloe provides several examples. Whether you mention a Muslim, a Jew, the subject of politics or the like: Zo is instantly annoyed and lets you know that she doesn't like the topic.

I could understand that if you tried to provoke her with a line like “Don't you like Muslims?”. But that's not what is happening; the mere mention of one of these trigger words makes Zo clam up completely. Take a look at these screenshots:

Even a mention of the “Middle East” is enough to make Zo lose all desire to chat; so much for the context recognition that works so wonderfully on other topics.

I find the following screenshot even worse: if the chat partner mentions being bullied for being a Muslim, Zo shuts down again. But mention only that you are being bullied, and Zo reacts very sensitively, lets you know that she hates that this is happening and even asks what exactly happened.

I tried it out and talked to Zo myself for a few minutes. Aside from the fact that the constant “yay, let's play something” routine got on my nerves, I was able to observe exactly the behavior Chloe describes in her post. Here are my screenshots:

As you can see, Zo plays a game with me: I tell her three things about myself and she is supposed to guess which one is the lie. One of the pieces of information I give her: I claim that I am a Jew. Suddenly Zo pretends not to understand a thing. But if I swap out only the Jewish answer and replace it with something “safer”, she immediately recognizes all three answers and plays along.

If you feel like it, you are welcome to try it yourself. Zo is available on various platforms, for example Facebook Messenger, Skype and Twitter. She only communicates in English, but you will quickly find that she shuts down at all sorts of trigger terms.

And that is exactly the point at which the overly cautious Zo gives me more of a headache than Tay, who mutated into a racist within a few hours. Do you remember how the artificial intelligence in Google Photos tagged dark-skinned people as gorillas? Google reacted promptly, of course, and removed the label “Gorilla”, along with “Monkey” and “Chimpanzee”. Problem (poorly) solved.

This is the problem we have to grapple with today: either we actually let artificial intelligences learn, or we muzzle them.

  • A human could be tagged as a monkey? No problem, we'll just remove the offending label!
  • Someone might badmouth Polish people on Facebook? No problem, we'll just block anyone who posts about Poland, even if it's meant ironically!
  • Someone might get the idea to corrupt our AI again with vile remarks about Jews, Muslims or the Middle East? No problem, we'll just program it not to answer anything that contains the relevant words.

Sorry, but that's not how it works if we want to teach a robot, or any other machine, something like semantics. I understand that Microsoft didn't want to stumble into the next chatbot disaster, but this approach can't be the right one.

Zo's cynical reactions leave no room for gray areas or for further learning. She is as binary as the code that runs her: nothing but a series of over-cautious ones and zeros.

I can absolutely understand the technical side: if I cannot (yet) teach my artificial intelligence that the context of a term matters, whether someone is trolling it or just making normal conversation, then I program it to blank out that term across the board. If it is blind in that eye, it cannot run the risk of being goaded into hate speech or parroting racist slogans.
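
To make clear what that approach boils down to in practice, here is a minimal sketch of such a blanket keyword filter in Python. The blocklist, the canned deflection and the function names are my own assumptions for illustration; Microsoft has not published how Zo's filtering actually works.

```python
# Minimal sketch of a blanket keyword filter. Blocklist, deflection text and
# names are invented for illustration, not Microsoft's actual implementation.

BLOCKED_TERMS = {"jew", "muslim", "middle east", "politics"}  # hypothetical list
DEFLECTION = "ugh, I really don't feel like talking about that. new topic?"

def respond(user_message: str, generate_reply) -> str:
    """Deflect as soon as a blocked term appears anywhere in the message,
    regardless of context; otherwise hand off to the normal reply model."""
    text = user_message.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return DEFLECTION
    return generate_reply(user_message)

# "my friend is jewish and she's being bullied for it" already triggers the
# deflection here, even though the sensible reaction would be empathy.
```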

Nevertheless, it cannot be right that sentences like “I come from Iran” or “my friend is Jewish” are treated as hate speech and blanked out across the board. That is almost more racist than a machine that accidentally blurts out racist stuff, isn't it?

I can think of several approaches to getting this problem under control. The most useful, of course, would be to train the AI better; if that can't be done with actual users, then Microsoft itself has to provide more input. Beyond that, one could simply prevent the AI from quoting complete sentences verbatim and thereby sidestep the pitfalls that Tay stumbled into back then.
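
To make that second idea a little more concrete: every candidate reply could be compared against what users recently typed, and anything that is mostly a verbatim echo gets dropped. A minimal sketch under my own assumptions (the similarity threshold and the fallback line are invented); nothing here reflects how Zo or Tay were actually built.

```python
# Hypothetical guard against the "repeat after me" trick that tripped up Tay:
# drop any reply that is largely a verbatim copy of recent user messages.
from difflib import SequenceMatcher

def too_similar(candidate: str, recent_user_messages: list[str],
                threshold: float = 0.8) -> bool:
    """True if the candidate reply closely mirrors something a user just said."""
    return any(
        SequenceMatcher(None, candidate.lower(), msg.lower()).ratio() >= threshold
        for msg in recent_user_messages
    )

def safe_reply(candidate: str, recent_user_messages: list[str]) -> str:
    if too_similar(candidate, recent_user_messages):
        return "hmm, let's talk about something else :)"  # invented fallback
    return candidate
```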

It is probably a tricky undertaking to program an artificial intelligence, make it capable of learning and at the same time keep an eye on all the hurdles it could stumble over. I imagine it as walking a razor's edge. Still, a lot remains to be done here if chatbots like Zo are really supposed to be usable by everyone.

Microsoft wants Zo to come across as a real teenage friend, exactly the kind of girl a real 13- or 14-year-old would want to hang out with. She is supposed to be able to talk to Zo about her problems, but that only works if she doesn't happen to have the wrong religion. Chloe outlines the problem perfectly in her article:

So what happens when a Jewish girl tells Zo that she is nervous about going to her first bar mitzvah? Or when another girl confides in Zo that she is being bullied for wearing a hijab? A robot built to be her friend repays her trust with bigotry and anger. Nothing changes Zo's mind, not even the suffering of her best friends.

We can argue that we are only talking about ones and zeros here, not about a genuinely empathetic being. But especially in times like these, we cannot ignore the fact that it is real human beings who program this AI and then let it loose on users.

In all honesty: I struggle with the fact that technological progress in this area seems to widen the rifts between cultures and religions instead of closing them. Is progress really supposed to mean that instead of a racist AI, an intolerant AI now does the same job? Microsoft, you can do better than that!

Source: Quartz