
This AI stuff is all talk! Bots invent their own language to natter away behind humans' backs

01001011 01101001 01101100 01101100 00100001 00100001

Artificial intelligence agents can invent their own language and talk among themselves to work out the best way to get a job done, a study has shown.

Conversation comes naturally to humans, but it is a massive challenge for computers. Recent successes in this area include better human-language translation and simple question answering by chatbots.

If AI agents are going to interact intelligently with humans, analyzing and aping patterns of words from conversations won't be enough. Computers have to gain a deep and true understanding of language and communication. The first step toward this is teaching software agents to develop a simple language on their own, one that describes their environment and how they can do stuff within it.

For example, the video below shows programs telling their pals to “go to the red landmark” or “look at the blue landmark.” The agents communicate by sending each other abstract symbols that serve as instructions; each symbol is encoded as a one-hot vector, and is loosely translated into English so we can follow what’s being said.

[YouTube video]
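For the curious, a one-hot vector is simply a vector of zeros with a single one marking which word in a fixed vocabulary is meant. Here’s a minimal sketch in Python; the vocabulary is made up for illustration, since the agents in the paper invent their own symbols:

```python
import numpy as np

# Hypothetical five-word vocabulary -- the paper's agents learn their own symbols
vocab = ["goto", "look", "red", "blue", "landmark"]

def one_hot(word: str) -> np.ndarray:
    """Encode a word as a one-hot vector over the vocabulary."""
    vec = np.zeros(len(vocab))
    vec[vocab.index(word)] = 1.0
    return vec

print(one_hot("red"))  # [0. 0. 1. 0. 0.]
```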

This experiment is part of a study by Igor Mordatch and Pieter Abbeel of OpenAI and the University of California, Berkeley. Their research – “Emergence of Grounded Compositional Language in Multi-Agent Populations” – was popped online, on arXiv, this month.

The bots are pale-colored blobs shuffling about in a two-dimensional world. At each timestep, the agents can take two kinds of action: a physical one, moving or directing their gaze at something, and a verbal one, broadcasting a word to the other programs. Their communication arises from the need to collaborate with one another.
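To make that setup concrete, here is a hypothetical sketch of what a single agent puts out at each timestep, assuming the two-part action structure described above; the field names are ours, not the paper’s:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AgentAction:
    """One agent's output at a single timestep (illustrative, not the paper's API)."""
    velocity: np.ndarray   # physical action: movement in the 2D plane
    gaze: np.ndarray       # physical action: point the agent looks at
    utterance: np.ndarray  # verbal action: one-hot symbol broadcast to the other agents
```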

The agents are trained using reinforcement learning, which means they are given a reward as they get closer and closer to their goal. This reward acts as a signal that they’re doing the right thing, and that whatever they’re doing is worth learning for the future. Crucially, each bot’s reward is the total score earned by the group, so it’s in the agents’ interest to join forces and develop a language to coordinate themselves.
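As a toy illustration of that shared-reward idea, with made-up numbers rather than the paper’s actual formulation, every agent simply receives the group’s combined score:

```python
def shared_rewards(individual_scores: list[float]) -> list[float]:
    """Give every agent the group's total, so nobody profits from going it alone."""
    team_reward = sum(individual_scores)
    return [team_reward] * len(individual_scores)

print(shared_rewards([1.0, 0.5, -0.5]))  # [1.0, 1.0, 1.0] -- everyone gets the group total
```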

Goals can range from getting other bots to move to different colored landmarks, to making them direct their gaze in certain directions, to ordering them to do nothing. The different complexities of the tasks encourage the agents' vocabulary to grow.

They aren’t speaking in a language that can be easily understood, though. The researchers ran into various obstacles when trying to create a form of AI communication that humans could interpret.

Say what?

At first, the bot lingo was more like Morse code: an abstract symbol was agreed upon and then scattered among spaces to create meaning, the researchers explained in a blog post.

The team tweaked the experiment so that there was a slight penalty on every utterance for every bot, and they added an incentive to get the task done more quickly. The Morse code-like structure was no longer advantageous, and the agents were forced to use their “words” more concisely, leading to the development of a larger vocabulary.
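Here’s a rough sketch of what that kind of reward shaping could look like in code; the penalty constants are invented for illustration:

```python
UTTERANCE_COST = 0.05  # small penalty per symbol spoken (illustrative value)
TIME_COST = 0.01       # small penalty per timestep, rewarding faster completion

def shaped_reward(task_reward: float, words_spoken: int, steps_taken: int) -> float:
    """Task reward minus costs for chattiness and dawdling (a sketch, not the paper's exact terms)."""
    return task_reward - UTTERANCE_COST * words_spoken - TIME_COST * steps_taken
```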

The bots then sneakily tried to encode the meaning of entire sentences as a single word. For example, an instruction such as “red agent, go to blue landmark” was represented as one symbol.

Although this means the job is completed more quickly, since agents spend less time nattering to one another, the vocabulary size would grow exponentially with sentence length, making it difficult to understand what’s being said. So the researchers tried to coax the agents into reusing popular words: a reward was granted for speaking a particular word “that is proportional to how frequently that word has been spoken previously.”
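A crude sketch of that frequency-based incentive might look like the following; the reward coefficient and the bookkeeping are invented for illustration:

```python
from collections import Counter

REUSE_BONUS = 0.01       # illustrative coefficient
word_counts = Counter()  # how often each word has been spoken so far

def reuse_reward(word: str) -> float:
    """Reward proportional to how frequently the word has been spoken previously."""
    bonus = REUSE_BONUS * word_counts[word]
    word_counts[word] += 1
    return bonus
```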

Since the AI babble is explicitly linked to its simple world, it’s no wonder that the language lacks the context and richness of human language.

The team hopes that communication can continue to develop if the complexity of the agents' environment and the range of possible actions they can take is increased. “It’s possible they’ll create an expressive language which contains concepts beyond the basic verbs and nouns that evolved here,” the boffins said.

Transparency is a leading issue in AI. Current systems have been described as black boxes, since the algorithms carrying out the decision-making process are opaque. This could hinder the adoption of AI in the future if people can’t trust the technology.

One way to solve this problem is by creating a way for machines and humans to understand each other. It’s one of several research goals at OpenAI, and the next step is to continue working with researchers at UC Berkeley to investigate how a translator bot might be able to “connect the invented languages with English via having the agents communicate with English-speaking agents.” ®
