In June 2017, researchers at Facebook entertained the idea of treating negotiation like a game. Using reinforcement learning, they trained a group of chatbots to negotiate with one another, with the simple task of discussing how to split a set of objects between them. Interestingly enough, the bots started off on the right foot, then began drifting into a language of their own that only they understood. Their language wasn't random nonsense, either; it followed a degree of grammatical structure. A word repeated x times indicated there were x of that item. And since everyone has seen their fair share of movies about robots taking over the world, the researchers quietly shut the experiment down, and in later months took down their research from it as well.
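To make the repetition rule concrete, here is a toy sketch of that kind of encoding: repeating a token n times stands for n units of the item. The function names and logic are our own illustration, not the bots' actual protocol.

```python
def encode(inventory):
    """Encode item counts by repeating each item's token that many times."""
    return " ".join(token for token, count in inventory.items() for _ in range(count))

def decode(message):
    """Recover item counts by tallying token repetitions."""
    counts = {}
    for token in message.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

msg = encode({"ball": 3, "hat": 1})
print(msg)          # ball ball ball hat
print(decode(msg))  # {'ball': 3, 'hat': 1}
```

Crude as it looks, the scheme is unambiguous for both sides, which is all the bots were optimizing for.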
Did Haste Make Waste?
Do you think they pulled the plug too soon? We think it may have been worth keeping the experiment around a little longer, not so much to see how far the bots' language could go, but to determine whether the chatbots could actually thrive by communicating with one another. A virtual social experiment of sorts.
Well before the Internet of Things became a thing, software applications were already communicating with one another, just in a different way. Take interactive maps, for example. They can detect the direction you're walking and tag locations and dates, all while communicating with other pieces of software that can send SMS messages or call a cab service when needed. This all happens through web services that require multiple software components to communicate with one another across different devices and networks. The ability of different components to communicate across channels quickly changed the fundamentals of both software and hardware engineering.
Although chatbots have been around for decades, they are still relatively new in their current iteration (personal shopping assistants, customer service agents, etc.). Even so, many of them are capable of handling a wide range of domains with open-ended input, and they know how to delegate tasks and inquiries to web services. If they could also communicate with each other, bots could shop around with other bots to find the best service plans based on needs, the best mobile plans based on price, or the best destinations to visit based on location and other predefined criteria.
None of this has anything to do with natural language processing; it has everything to do with creating a standard language between bots, used solely for queries and interactions. Web service experiences would be elevated as a result.
There are already services in development working towards this goal. According to an early 2017 Wired article:
All this happens through what’s called reinforcement learning, the same fundamental technique that underpinned AlphaGo, the machine from Google’s DeepMind AI lab that cracked the ancient game of Go. Basically, the bots navigate their world through extreme trial and error, carefully keeping track of what works and what doesn’t as they reach for a reward, like arriving at a landmark. If a particular action helps them achieve that reward, they know to keep doing it. In this same way, they learn to build their own language. Telling each other where to go helps them all get places more quickly.
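The trial-and-error loop the quote describes can be sketched with tabular Q-learning, one of the simplest reinforcement learning methods. Here an agent on a five-position track learns which moves reach a "landmark" at the far end. The environment, reward values, and hyperparameters are invented for illustration; this is the fundamental technique, not DeepMind's or Facebook's actual system.

```python
import random

random.seed(0)
GOAL, N_STATES, ACTIONS = 4, 5, (-1, +1)  # actions: step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # learned action values
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != GOAL:
        # Mostly exploit what worked before; occasionally explore at random.
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else -0.1  # reward for reaching the landmark
        # Keep track of what works: nudge the value estimate toward the outcome.
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)  # the greedy policy: step right (+1) from every state
```

If a particular action helps reach the reward, its Q-value grows and the agent keeps doing it; the emergent-language experiments apply the same update rule, just with utterances as the actions.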
Digital assistants such as Microsoft Cortana, Google Assistant, bots on Facebook Messenger, and Amazon Alexa are everywhere, but they don't quite know how to communicate with other bots yet, at least not at a high level. This is why their parent companies are investing in deep learning tools and services to further improve their state-of-the-art bot technology.
Can anything go wrong if smart systems and devices start to communicate with one another more efficiently? Absolutely, but we won't really know the specifics until we get there. For now, it's safe to say that when issues do arise, such a system will be very complicated to debug. As stated by Gizmodo:
Hopefully humans will also be smart enough not to plug experimental machine learning programs into something very dangerous, like an army of laser-toting androids or a nuclear reactor. But if someone does and a disaster ensues, it would be the result of human negligence and stupidity, not because the robots had a philosophical revelation about how bad humans are.
This is surely a fascinating space worth keeping a close eye on, as developments are taking place at a rapid pace.