Advances in artificial intelligence (AI) are making virtual and robotic assistants increasingly capable of performing complex tasks. For these “smart” machines to be considered safe and trustworthy collaborators with human partners, however, they must be able to quickly assess a given situation and apply human social norms. Such norms are intuitively obvious to most people, largely the result of growing up in a society where subtle and not-so-subtle cues about how to behave appropriately in a group setting or respond to interpersonal situations are provided from childhood. But teaching those rules to robots is a novel challenge.
To address that challenge, DARPA-funded researchers recently completed a project that aimed to provide a theoretical and formal framework for what norms and normative networks are; to study experimentally how norms are represented and activated in the human mind; and to examine how norms can be learned and might emerge from novel interactive algorithms. The team created a cognitive-computational model of human norms in a representation that can be coded into machines, and developed a machine-learning algorithm that allows machines to learn norms in unfamiliar situations by drawing on human data.
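The article does not spell out the team’s actual representation, but a minimal sketch makes the idea concrete, assuming norms are stored as context-conditioned rules whose strengths are learned from observed human judgments. Everything below (the Norm and NormNetwork classes and the simple strength-update rule) is a hypothetical illustration, not the project’s real model.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    """A context-conditioned behavioral rule with a learned strength."""
    context: str           # situation where the norm applies, e.g. "library"
    behavior: str          # expected behavior, e.g. "silence ringing phone"
    deontic: str           # "obligatory", "forbidden", or "permitted"
    strength: float = 0.5  # degree of endorsement, learned from human data

class NormNetwork:
    """Toy store of norms keyed by context, standing in for the
    cognitive-computational model the article describes."""

    def __init__(self):
        self.norms = {}  # maps a context string to a list of Norm objects

    def add(self, norm):
        self.norms.setdefault(norm.context, []).append(norm)

    def activate(self, context):
        """Observing a context 'activates' its norms, strongest first."""
        return sorted(self.norms.get(context, []),
                      key=lambda n: n.strength, reverse=True)

    def update_from_observation(self, context, behavior, deontic, lr=0.1):
        """Nudge a matching norm toward full endorsement after each observed
        human judgment; create the norm if the situation is unfamiliar."""
        for n in self.norms.get(context, []):
            if n.behavior == behavior and n.deontic == deontic:
                n.strength += lr * (1.0 - n.strength)
                return
        self.add(Norm(context, behavior, deontic, strength=lr))
```

In a scheme like this, repeated observations of how people judge a behavior in a context would gradually strengthen the corresponding norm, while novel situations seed new, weakly held norms that can be reinforced later.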
The work represents important progress towards the development of AI systems that can “intuit” how to behave in certain situations in much the way people do.
DARPA wants robots and AI to know what is and is not proper behavior
“The goal of this research effort was to understand and formalize human normative systems and how they guide human behavior, so that we can set guidelines for how to design next-generation AI machines that are able to help and interact effectively with humans,” said Reza Ghanadan, DARPA program manager.
As an example in which humans intuitively apply social norms of behavior, consider a cell phone ringing in a quiet library. A person receiving that call would quickly silence the distracting phone and whisper into it before stepping outside to continue the call in a normal voice. Today, an AI phone-answering system would not automatically respond with that kind of social sensitivity.
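Continuing the hypothetical sketch above, the library scenario could be expressed as a handful of norms that the context activates, which an ordinary phone-answering policy would never consult:

```python
net = NormNetwork()
net.add(Norm("library", "silence ringing phone", "obligatory", 0.9))
net.add(Norm("library", "speak only in a whisper", "obligatory", 0.85))
net.add(Norm("library", "step outside to continue a call", "obligatory", 0.8))

# A norm-aware assistant checks the active context before acting.
for norm in net.activate("library"):
    print(f"{norm.deontic}: {norm.behavior} (strength {norm.strength})")
```

The point of the sketch is that the appropriate behavior is not a property of the phone call itself but of the surrounding context, which is why the same call warrants a whisper in a library and a normal voice outside.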
Ultimately, for a robot to become social or perhaps even ethical, it will need a capacity to learn, represent, activate, and apply a large number of norms that people in a given society expect one another to obey, Ghanadan said. That will prove far more complicated than teaching AI systems the rules for simpler tasks such as tagging pictures, detecting spam, or guiding people through their tax returns. But by providing a framework for developing and testing such complex algorithms, the new research could accelerate the day when machines emulate the best of human behavior.
“If we’re going to get along as closely with future robots, driverless cars, and virtual digital assistants in our phones and homes as we envision doing today, then those assistants are going to have to obey the same norms we do,” Ghanadan said.