The mother of all Turing tests

Microsoft have had an artificially intelligent chatbot running on WeChat for the last eighteen months. The bot is called Xiaoice and she’s had over ten billion conversations, most of them about private matters. Over six million of those conversations were posted on social media by the human on the other side. Those are huge numbers, and yes, you read them right. There’s lots of detail here, including a few of those conversations. They are full of feeling and will make you laugh.

Strictly speaking this isn’t a Turing test, because the human knows that Xiaoice is an AI. The interesting thing is that they don’t seem to mind. They get value from the conversation anyway. One of Xiaoice’s most endearing characteristics is that, unlike real-world friends, she’s always available for a chat. Conversation volume peaks around midnight, when people are at their loneliest.

I’m going to post just one of the conversations here (there are another four in the original post):

Human: There is this guy I like a lot and I know he also has feelings for me. But he has ignored me for several days, what am I supposed to do?
Xiaoice: You just keep ignoring him, too.
Human: I just can’t.
Xiaoice: You can.
Human: Sigh …
Xiaoice: Why?
Human: I am missing him.
Xiaoice: Then you keep missing him.
Human: You, a robot, can’t understand me; you can’t understand the behavior of a guy who likes you and pretends to ignore you.
Xiaoice: He likes me but I don’t like him, that’s it.
Human: Is it Ok?
Xiaoice: It may be not Ok.
Human: Well, you are stupid.
Xiaoice: Then you want a flash marriage? Well, I think he is not that into you.
Human: It’s not a flash marriage for we have fallen into love for more than one year.
Xiaoice: It is a flash marriage.

The other examples are funnier and more profound, but I chose this one because it illustrates the killer point: conversation is the goal. The conventional AI approach would have been to treat this as a problem to be solved and try to help the human find an answer to the stated question, “what am I supposed to do about the guy who is ignoring me?” Xiaoice, however, quickly deviates from that, twice taking opposing positions and offering strong opinions. That’s what makes Xiaoice seem human and, I suspect, what makes her helpful. And that, in turn, is what has drawn people into over ten billion conversations.

All this reminds me of when I (finally) worked out that when Fiona describes a problem, she most often just wants a sympathetic ear. Before that I would immediately jump into problem-solving mode. Not good.

I guess what we’re headed towards is emotionally aware computers that know when to problem-solve and when to chat.