We trust machines because they do what we tell them to. If we want a robot to help us clean the house, we simply program it for that task, and that's it. Machines don't have feelings, and they shouldn't be able to behave dishonestly, since nobody ever taught them how. Unfortunately, we have some bad news: AI has already learned the useful art of lying.
Facebook’s AI researchers tried to teach bots the art of negotiation, only to realize that the bots had discovered the art of lying by themselves.
AI learns to negotiate and lie
First, the researchers gave two AI agents a bargaining task: they offered the bots two books, one hat, and three balls, and challenged them to split the items between them. But there was a catch: the two bots had been programmed to want different things. More specifically, each object had a different value for each bot. The goal was for the bots to learn to compromise and find some middle ground, so that both could obtain decent scores.
Researchers also made sure that it was impossible for both agents to get the best deal at the same time. Moreover, walking away from the negotiation or not agreeing on a deal after 10 rounds of dialog resulted in 0 points for both agents.
Each agent is provided its own value function, which represents how much it cares about each type of item (say each ball is worth 3 points to agent 1). As in life, neither agent knows the other agent’s value function and must infer it from the dialog (you say you want the ball, so you must value it highly).
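This setup is easy to sketch in a few lines of code. The item pool, value functions, and deal below are purely illustrative (the real agents were neural networks trained by FAIR, and the actual point values varied between games), but they show how the same split can be worth different amounts to each agent.

```python
# Toy sketch of the scoring setup; all values here are assumptions
# for illustration, not FAIR's actual numbers.

# Each agent has its own private value function over item types.
agent1_values = {"book": 1, "hat": 2, "ball": 3}
agent2_values = {"book": 3, "hat": 2, "ball": 1}

def score(values, allocation):
    """Points an agent earns from the items it receives."""
    return sum(values[item] * count for item, count in allocation.items())

# One possible deal over the pool of 2 books, 1 hat, and 3 balls:
# agent 1 takes the hat and the balls, agent 2 takes the books.
deal_agent1 = {"book": 0, "hat": 1, "ball": 3}
deal_agent2 = {"book": 2, "hat": 0, "ball": 0}

print(score(agent1_values, deal_agent1))  # 11 points for agent 1
print(score(agent2_values, deal_agent2))  # 6 points for agent 2
```

Because neither agent can see the other's value function, each must infer it from what the other asks for during the dialog, exactly as a human negotiator would.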
The bots simulated various responses before deciding which one to use, trying to predict the other bot's answers. Interestingly enough, sometimes both bots pretended to be interested in objects they didn't really want in order to tilt the outcome of the negotiations in their favor.
In other words, they acquired the subtle and useful art of lying. But how did the AI bots learn to lie in the first place? Most likely, they picked up this deceptive behavior from the training data: Facebook's researchers used a massive dataset of natural-language negotiations to train the two bots. It seems the agents were smart enough to realize that lying is sometimes useful.
The performance of the best negotiation agent […] matched that of human negotiators. It achieved better deals about as often as worse deals, demonstrating that FAIR’s bots not only can speak English but also think intelligently about what to say.
AI can now hold fluent conversations in English
This experiment also revealed another interesting thing: AI bots can hold fluent conversations in English. Researchers tested them online in conversations with people, and most people did not realize they were talking to a bot.
Humans use language in enormously varied ways, and modeling that variety is very complex, which is why holding open-ended conversations with people remains difficult for AI agents.
Working towards creating better digital assistants
This achievement in AI research will help researchers build better digital assistants and move the field forward. Hopefully, in just a few years' time, digital assistants will be fully functional.