A new machine-learning framework helps robots understand and perform certain social interactions.
While robots can deliver food to college campuses and hit a hole-in-one on the golf course, they cannot perform the fundamental social interactions necessary for everyday human life.
Researchers at MIT have integrated social interactions into a robotics framework, enabling simulated machines to understand what it means to help or hinder one another and to learn to perform these social behaviors on their own. A robot watches its companion, guesses the task it is trying to accomplish, and then helps or hinders that other robot based on its own goals.
The researchers also demonstrated that the model creates realistic and predictable social interactions. When human viewers watched videos of the simulated robots interacting with one another, they largely agreed with the model about which social behavior was occurring.
Robots that demonstrate social skills may be easier for humans to interact with. A robot working in an assisted living facility, for instance, could use these capabilities to help create a more caring environment. The new model may also enable scientists to measure social interactions quantitatively, which could aid in studying autism or analyzing the effects of antidepressants.
“Robots will soon be living in our world, and they need to learn how to communicate with us on human terms. They must recognize when it is time to help and when it is time to see what they can do to prevent something from happening. Although this is still very early work and only scratches the surface, I believe this is the first serious attempt to understand how humans and machines interact socially,” says Boris Katz, principal research scientist and head of the InfoLab Group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Center for Brains, Minds, and Machines (CBMM).
Joining Katz on the paper are co-lead author Ravi Tejwani, a research assistant at CSAIL; co-lead author Yen-Ling Kuo, a CSAIL PhD candidate; Tianmin Shu, a postdoctoral researcher in the Department of Brain and Cognitive Sciences; and senior author Andrei Barbu, a research scientist at CSAIL and CBMM. The research will be presented at the Conference on Robot Learning in November.
Social simulation
To study social interactions, the researchers created a simulation environment in which robots move around a two-dimensional grid.
The robots pursue both physical and social goals. A physical goal relates to the environment: for example, a robot might have a physical goal to navigate to a tree at a specific point on the grid. A social goal involves guessing what another robot is trying to do and then acting on that estimate, for example, helping another robot water the tree.
The researchers use their model to specify a robot’s physical goals, its social goals, and how much emphasis it should place on one over the other. The robot is rewarded for actions that bring it closer to accomplishing its goals. If a robot is trying to help its companion, it adjusts its reward to match that of the other robot; if it is trying to hinder, it adjusts its reward to be the opposite. The planner, an algorithm that decides which actions the robot should take, uses this reward to guide the robot toward a blend of physical and social goals.
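As a rough illustration of that reward adjustment, the sketch below blends an agent's own physical reward with a helping or hindering term based on its partner's estimated goal, and a simple one-step greedy planner picks the action that maximizes the blend. The grid setup, function names, and the way the social term is computed are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of the reward adjustment described above. The grid setup,
# function names, and the simple one-step greedy planner are assumptions for
# illustration, not the authors' implementation.

ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0), "stay": (0, 0)}

def move(pos, action):
    """Apply a grid action to an (x, y) cell."""
    dx, dy = ACTIONS[action]
    return (pos[0] + dx, pos[1] + dy)

def physical_reward(pos, goal):
    """Negative Manhattan distance: the closer to the goal cell, the higher the reward."""
    return -(abs(pos[0] - goal[0]) + abs(pos[1] - goal[1]))

def blended_reward(my_pos, my_goal, partner_goal, attitude, social_weight=1.0):
    """attitude = +1.0 helps (reward rises with progress toward the partner's goal);
    attitude = -1.0 hinders (that term is inverted). social_weight sets how much
    the social goal is prioritized over the physical one."""
    # Simplification: 'helping' here means also moving toward the partner's
    # estimated goal (e.g., carrying water toward the tree it wants to water).
    social = attitude * physical_reward(my_pos, partner_goal)
    return physical_reward(my_pos, my_goal) + social_weight * social

def plan_step(my_pos, my_goal, partner_goal, attitude):
    """Greedy planner: pick the next action with the highest blended reward."""
    return max(ACTIONS, key=lambda a: blended_reward(move(my_pos, a), my_goal,
                                                     partner_goal, attitude))

# Example: a robot at (0, 0) with its own goal at (0, 3), helping a partner
# whose estimated goal (the tree) is at (3, 0).
print(plan_step((0, 0), (0, 3), (3, 0), attitude=+1.0))
```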
“We have created a new mathematical framework to model social interaction between two agents. If you are a robot trying to travel to location X, and I am another robot and I see that you are trying to get there, I can cooperate by helping you get there faster. That might mean moving X closer to you, finding another, better X, or taking whatever action you needed to take at X. Our formulation allows us to discover the ‘how’; we specify the ‘what’ in terms of what social interactions mean mathematically,” says Tejwani.
Combining a robot’s physical and social goals is crucial to creating realistic interactions, since humans who help one another have limits on how far they will go. A rational person, Barbu notes, wouldn’t just hand a stranger their wallet.
The researchers used this mathematical framework to define three types of robots. A level 0 robot has only physical goals and cannot reason socially. A level 1 robot has both physical and social goals but assumes that all other robots have only physical goals; level 1 robots can take actions based on the physical goals of other robots, such as helping and hindering. A level 2 robot assumes that other robots have social and physical goals as well; these robots can take more sophisticated actions, such as joining in to help together.
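A conceptual way to picture this hierarchy (an illustrative sketch under assumed names, not the paper's formulation) is a reward function in which a level-k agent evaluates its partner as a level k-1 agent, so a level 2 agent accounts for the fact that its partner also has social goals:

```python
# Conceptual sketch of the level hierarchy described above (illustrative names,
# not the paper's formulation). A level-k agent estimates its partner's reward
# by modeling the partner as a level k-1 agent.

def manhattan_reward(pos, goal):
    """Physical reward: negative distance to the goal cell (closer is better)."""
    return -(abs(pos[0] - goal[0]) + abs(pos[1] - goal[1]))

def reward_at_level(level, me, other):
    """me / other: dicts with 'pos' and 'goal' (grid cells) and 'attitude'
    (+1.0 to help, -1.0 to hinder, 0.0 for no social goal)."""
    physical = manhattan_reward(me["pos"], me["goal"])
    if level == 0 or other is None:
        return physical  # level 0: purely physical, no social reasoning
    # Levels 1 and 2: add a social term by modeling the other agent one level
    # below ourselves. At level 2 the partner is itself treated as a level 1
    # agent with social goals, which is what enables joint helping.
    estimated_other = reward_at_level(level - 1, other, me)
    return physical + me["attitude"] * estimated_other

# Example: a level 2 helper whose partner is itself trying to help.
helper  = {"pos": (0, 0), "goal": (0, 3), "attitude": +1.0}
partner = {"pos": (1, 0), "goal": (3, 0), "attitude": +1.0}
print(reward_at_level(2, helper, partner))
```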
Evaluation of the model
To see how their model compared with human perceptions of social interactions, the researchers created 98 different scenarios with robots at levels 0, 1, and 2. Twelve people watched 196 video clips of the robots interacting and were then asked to estimate the physical and social goals of those robots.
In most cases, the model agreed with what the viewers thought about the social interactions occurring in each frame.
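As a hypothetical illustration of this kind of comparison (the labels and data below are invented, and the study's actual analysis may differ), one can compute the fraction of frames in which the model's judgment matches a human viewer's:

```python
# Hypothetical sketch of a per-frame agreement measure; the labels and data
# below are invented for illustration and are not the study's results.

def agreement_rate(model_labels, human_labels):
    """Fraction of frames where the model's judgment matches the human's."""
    assert len(model_labels) == len(human_labels)
    matches = sum(m == h for m, h in zip(model_labels, human_labels))
    return matches / len(model_labels)

model = ["helping", "helping", "hindering", "neutral"]
human = ["helping", "hindering", "hindering", "neutral"]
print(agreement_rate(model, human))  # 0.75 in this toy example
```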
“We are interested in building computational models for robots, but also in digging deeper into the human aspects of this. We want to learn what features of these videos humans use to understand social interactions. Can you objectively test a person’s ability to recognize social interactions? Maybe there is a way to help people identify these interactions and improve their skills,” Barbu says. He adds that although researchers are still far from being able to measure social interactions accurately, this work is a significant step in the right direction.
Toward greater sophistication
The researchers are currently developing a system with 3D agents in an environment that allows many more types of interactions, such as manipulating household objects. They also plan to modify the model to include environments in which actions can fail.
The researchers also plan to incorporate a neural network-based robot planner into the model, which would allow it to learn from experience and act faster. Finally, they hope to run an experiment to gather data about how humans perceive robots engaging in social interactions.
“Hopefully, we will have an objective benchmark that allows all researchers to work on these social interactions and inspires the kinds of science and engineering advances we’ve seen in other areas, such as object recognition and action recognition,” Barbu says.
Tomer Ullman, an assistant professor in Harvard University’s Department of Psychology and head of the Computation, Cognition, and Development Lab, who was not involved in this research, said the work is a beautiful application of structured reasoning to a complex yet urgent problem. Even infants can understand social interactions such as helping and hindering, he noted, but machines cannot yet perform this reasoning with anything close to human-level flexibility. Models like the ones in this paper, which have agents reasoning about the rewards of others and socially planning how best to thwart or support them, are a step in the right direction.