At first, the bot lingo was more like Morse code: an abstract symbol was agreed upon and then scattered among spaces to create meaning, the researchers explained in a blog post.
The team tweaked the experiment so that there was a slight penalty on every utterance for every bot, and they added an incentive to get the task done more quickly. The Morse code-like structure was no longer advantageous, and the agents were forced to use their “words” more concisely, leading to the development of a larger vocabulary.
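The shape of that tweak can be sketched in a few lines. Everything here is an illustrative stand-in (the function name, the penalty size, the step budget are all made up, not taken from the researchers' setup), but it shows the incentive structure: talking costs a little, finishing early pays a lot.

```python
# Hypothetical sketch of the reward shaping described above.
# All names and constants are illustrative assumptions, not the
# researchers' actual values.

def shaped_reward(task_done: bool, step: int, spoke: bool,
                  max_steps: int = 50,
                  utterance_penalty: float = 0.05) -> float:
    reward = 0.0
    if spoke:
        reward -= utterance_penalty        # every utterance costs a little
    if task_done:
        reward += float(max_steps - step)  # finishing earlier pays more
    return reward

# An agent that chatters at every step nets less than one that speaks
# only when an utterance actually speeds the task up.
print(shaped_reward(task_done=True, step=10, spoke=False))  # 40.0
print(shaped_reward(task_done=True, step=10, spoke=True))   # 39.95
```

Under a scheme like this, the Morse-code habit of stretching one symbol across many timesteps becomes doubly expensive: each repetition is penalised, and the dithering delays the completion bonus.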
The bots then sneakily tried to encode the meaning of entire sentences as a single word. For example, an instruction such as “red agent, go to blue landmark” was represented as one symbol.
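A toy illustration of that one-symbol-per-sentence trick, assuming a made-up instruction grammar (the colours and verbs here are stand-ins, not the experiment's actual vocabulary):

```python
from itertools import product

# Toy model of the one-symbol-per-sentence encoding described above.
# The grammar (colors, verbs) is an illustrative assumption.
colors = ["red", "green", "blue"]
verbs = ["go to", "look at"]

sentences = [f"{c1} agent, {v} {c2} landmark"
             for c1, (v, c2) in product(colors, product(verbs, colors))]

# One fresh symbol per whole sentence: the codebook is as large as the
# number of distinct sentences, so it grows multiplicatively with every
# slot added to the grammar.
codebook = {s: i for i, s in enumerate(sentences)}
print(codebook["red agent, go to blue landmark"])
print(len(codebook))  # 18 symbols even for this tiny grammar
```

Even three colours and two verbs already need 18 symbols; add more slots or more fillers and the codebook balloons, which is exactly the scaling problem the researchers ran into next.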
Although this means the job is completed more quickly, since agents spend less time nattering to one another, the vocabulary size would grow exponentially with sentence length, making it difficult to understand what's being said. So the researchers tried to coax the agents into reusing popular words: a reward was granted for speaking a word in proportion to how frequently that word had been spoken previously.
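That reuse incentive can be sketched as a running bonus. This is a minimal sketch under stated assumptions: the class name, the bonus scale, and the linear bonus formula are all illustrative, not the researchers' actual implementation.

```python
from collections import Counter

# Hedged sketch of the word-reuse incentive described above: speaking a
# word earns a bonus proportional to how often that word has been spoken
# before. The scale factor is an illustrative assumption.

class ReuseBonus:
    def __init__(self, scale: float = 0.01):
        self.counts = Counter()  # how often each word has been spoken
        self.scale = scale

    def reward(self, word: str) -> float:
        bonus = self.scale * self.counts[word]  # popular words pay more
        self.counts[word] += 1                  # record this utterance
        return bonus

b = ReuseBonus()
print(b.reward("goto"))  # 0.0  -- never spoken before
print(b.reward("goto"))  # 0.01
print(b.reward("goto"))  # 0.02
print(b.reward("red"))   # 0.0  -- a brand-new word starts from nothing
```

The effect is a rich-get-richer dynamic: once a word catches on, every agent has a growing incentive to keep using it rather than coin a fresh symbol, nudging the population toward a shared, compact vocabulary.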
Since the AI babble is explicitly linked to its simple world, it’s no wonder that the language lacks the context and richness of human language.