
Intelligent robots can walk the walk – but if they can't talk the talk, we can't get along

AI must master the gift of the gab, and humans have to play nice too

The AI hype has triggered a moral panic as people entertain the idea that super-intelligent machines may one day dominate Earth.

Nobody agrees on how many jobs will go, if any, and reports predicting the percentage of jobs that automation will affect vary wildly. It also doesn’t help that breakthroughs in research are always framed in ways that show how computers have surpassed humans at a specific task – from poker and Go to facial recognition.

If AI software and robots are to play bigger roles in society – driving cars, making financial investments, or diagnosing diseases – then it’s in our interests to learn how we – humans and droids – can develop mutually cooperative relationships. If we can't get along, nothing's going to happen, basically.

It’s an area that doesn’t get enough attention, an international team of computer scientists and psychologists argued in Cooperating with Machines, a paper available on arXiv.

That could be because designing algorithms for human cooperation is difficult. People rely on “intuition, cultural norms, emotions and signals, and common sense” – abstract properties that are tricky to encode into truly intelligent software.

The study

The team of researchers attempted to develop a new learning system, one that allows bots to cooperate with people, and they tested it in a series of strategic games.

First, they singled out S++ – a general-purpose algorithm – as the top performer among 25 candidates in games such as the Prisoner’s Dilemma and Chicken, in which the participants can work against each other or cooperate to succeed.
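
By way of illustration, the Prisoner’s Dilemma comes down to a simple payoff table: betraying a cooperative partner pays best in a single round, yet mutual cooperation beats mutual defection over repeated play. The Python sketch below uses textbook payoff values purely for illustration; they are not figures from the study.

```python
# Illustrative payoff matrix for one round of the Prisoner's Dilemma.
# These are textbook example values, not the payoffs used in the paper.
# Each entry maps (my_move, their_move) to (my_payoff, their_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def score(my_move: str, their_move: str) -> int:
    """Return my payoff for one round, given both players' moves."""
    return PAYOFFS[(my_move, their_move)][0]

# Defecting against a cooperator pays best in a single round, but mutual
# cooperation beats mutual defection, which is why long-term collaboration
# is the interesting case in repeated play.
assert score("defect", "cooperate") > score("cooperate", "cooperate")
assert score("cooperate", "cooperate") > score("defect", "defect")
```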

When S++ was up against humans, it didn’t always play nice. It lacked the ability to form long-term collaborative relationships to solve the challenge at hand. The team noted that “cheap talk” was key to getting humans to coordinate with each other, and that the software simply wasn't using this to join in.

Cheap talk is where an expert passes information to an uninformed decision maker – for example, an ecologist advising a politician who is considering a vote on pesticide use. The communication does not directly affect the outcome, and providing the info is free – or cheap. To us humans, this seems absolutely normal and unremarkable. It's something computers can struggle with, as their code and protocols exchange exact data rather than the advice, biases, and other babbling that humans expect to share and interpret.

“We talk to each other a lot and send each other subtle signals that show what we are about to do,” Jacob Crandall, first author of the paper and an associate professor at Brigham Young University in Utah, told The Register.

What if machines had a chance to engage in some cheap talk too? Would they be more cooperative?

Cheap talk not trash talk

Crandall and his colleagues decided to investigate this by adding another step to S++. The algorithm was allowed to choose from a series of pre-programmed responses before making a move in a game, giving it a chance to communicate with humans. They called their new algorithm S# (S-sharp).

Some messages explicitly encouraged cooperation, and the bot could tell the other player what action it was going to take, such as “Let’s always play x strategy” or “I accept your last proposal.” Others were an attempt to bully the other player into submission: “Do as I say, or I’ll punish you” and “We can both do better than this.” It could even react to dishonesty with remarks such as “You betrayed me,” “I don’t trust you,” and “That’s not fair.” It could also decide not to send a message at all.

S# works by first computing a set of “experts,” the different possible strategies the bot can follow in the game. The second step selects which of these strategies to play. The other player then sends the bot a message saying what their strategy will be.

After receiving that message, the bot sends one back and updates its internal strategy before executing an action within the game. The cycle continues until all moves are played and the game is over.
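
For the curious, here is a rough sketch of that loop in Python. The strategy names, the cheap-talk phrases, and the overall structure are our own illustration of the flow described above, not code taken from the paper.

```python
import random

# Hypothetical sketch of the S#-style game loop described above.
# Strategy names, messages, and structure are illustrative only;
# they are not the paper's implementation.

CHEAP_TALK = [
    "Let's always cooperate",
    "I accept your last proposal",
    "Do as I say, or I'll punish you",
    None,  # the bot can also choose to stay silent
]

def compute_experts():
    """Step 1: enumerate candidate strategies ('experts') for the game."""
    return {
        "always_cooperate": lambda last_move: "cooperate",
        "tit_for_tat": lambda last_move: last_move or "cooperate",
        "always_defect": lambda last_move: "defect",
    }

def play(rounds=5):
    experts = compute_experts()
    chosen = "tit_for_tat"                   # step 2: pick a strategy to follow
    opponent_last_move = None
    for _ in range(rounds):
        proposal = "Let's always cooperate"  # stand-in for the other player's message
        reply = random.choice(CHEAP_TALK)    # answer with, or without, cheap talk
        if reply:
            print("bot says:", reply)
        # Revise the chosen strategy in light of the opponent's stated plan.
        if proposal == "Let's always cooperate":
            chosen = "always_cooperate"
        move = experts[chosen](opponent_last_move)
        print("bot plays:", move)
        opponent_last_move = "cooperate"     # stand-in for the opponent's actual move

play()
```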

A total of 220 participants played 472 games with S#. Rates of mutual cooperation were compared across human-human, human-S#, and S#-S# pairings. Adding cheap talk doubled the proportion of mutually cooperative interactions in both the human-S# and human-human games.

But closer inspection shows it isn’t simply communication that solves the problem, Crandall explained. The interactions between machine and human have to be grounded in honesty and loyalty. When human players were honest, the bot was more likely to be collaborative.

S# is not programmed to lie, but if it is lied to, trust between machine and human breaks down. It stops responding to messages and begins to blank the other player, and the chances of cooperation greatly diminish.

That's why S#-versus-S# games had the highest rates of cooperation. It shows that human-machine relationships are not much different from human-human relationships. Simply programming a machine to be honest isn’t good enough; cooperation relies on good behavior from both sides.

It’s important to keep that in mind when designing algorithms for the future, Professor Crandall said.

“Machines are an extension of us. We use them to perform things for us like driving, or operating other machines. As we go forward in the future, to get better results, we are going to have to cooperate.”

The challenge is also in creating AI that can teach and learn from interacting with people, Crandall added. "If AI can’t do this, it won’t have much value.” ®
