Robots learn what it means to be human

Advancements in robotics engineering are producing robots that are more capable of emulating human behaviors and interactions.

Robotic interactions (photo credit: Thinkstock/Imagebank)
After years of existing only in fiction, social robots are finally being designed that can more closely emulate how people express themselves, interact and learn – all while performing jobs like teaching social behavior to children with autism or helping stroke patients with their physical rehabilitation exercises.
Recently, The Kavli Foundation brought together three pioneers in Human-Robot Interactions to discuss these advancements, as well as the technological hurdles still ahead. Their consensus: while many challenges remain, the biggest is getting robots to match the needs and expectations of the human mind. “How we interact with embodied machines is different than how we interact with a computer, cell phone or other intelligent devices,” says Professor Maja Matarić of the University of Southern California. “We need to understand those differences so we can leverage what is important.”
A director of USC’s Center for Robotics and Embedded Systems, Matarić has developed social robots for use in a variety of therapeutic roles. According to Matarić, one of the keys to a successfully designed social robot is considering not only how it communicates verbally, but also how it communicates physically, through facial expressions and body language. Also important: embedding the right personality. “We found that when we matched the personality of the robot to that of the user, people performed their rehab exercises longer and reported enjoying them more.”
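To make the personality-matching idea concrete, here is a minimal Python sketch of how a rehab-coaching robot might mirror a user's personality when choosing what to say. The trait scale, threshold and phrasings are illustrative assumptions, not details from Matarić's study.

```python
# Hypothetical sketch: a coaching robot mirrors the user's personality.
# The extraversion scale, 0.5 threshold and phrases are all assumptions.

from dataclasses import dataclass

@dataclass
class UserProfile:
    extraversion: float  # 0.0 (strongly introverted) .. 1.0 (strongly extroverted)

def coaching_prompt(user: UserProfile, reps_done: int, reps_target: int) -> str:
    """Pick an encouragement style that matches the user's own personality."""
    remaining = reps_target - reps_done
    if user.extraversion >= 0.5:
        # Extroverted users may respond better to energetic, challenge-based talk.
        return f"Come on, push through! Only {remaining} reps to go!"
    # Introverted users may respond better to calmer, nurturing encouragement.
    return f"You're doing well. Just {remaining} more at your own pace."

print(coaching_prompt(UserProfile(extraversion=0.8), reps_done=7, reps_target=10))
print(coaching_prompt(UserProfile(extraversion=0.2), reps_done=7, reps_target=10))
```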
Another key is matching a robot’s appearance to our perception of its abilities. Ayse Saygin is an assistant professor at the University of California, San Diego and a faculty member of the Kavli Institute for Brain and Mind. Last year, Saygin and her colleagues set out to discover whether what they call the “action perception system” in the human brain is tuned more to human appearance or human motion. Using brain scans, they found that when people observed a highly humanlike robot whose motion remained machinelike, the brain detected the mismatch between appearance and movement and didn’t respond as well. “Making robots more humanlike might seem intuitively like that’s the way to go, but we find it doesn’t work unless the humanlike appearance is equally matched with humanlike actions.”
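One way to picture Saygin's finding is as a mismatch signal: the more a robot's appearance and motion disagree on a humanlikeness scale, the stronger the conflict. The toy Python sketch below is my illustration of that intuition, not the study's actual analysis; the numeric scores and agent labels are assumptions.

```python
# Toy illustration (not the study's model): treat the action perception
# system as expecting motion to match appearance. The mismatch signal is
# largest when a very humanlike body moves mechanically.

def mismatch_signal(appearance: float, motion: float) -> float:
    """Both inputs on an assumed 0..1 humanlikeness scale."""
    return abs(appearance - motion)

agents = {
    "human":   (1.0, 1.0),  # humanlike appearance, humanlike motion
    "android": (0.9, 0.2),  # humanlike appearance, machinelike motion
    "robot":   (0.1, 0.2),  # machinelike appearance, machinelike motion
}

for name, (appearance, motion) in agents.items():
    print(f"{name:8s} mismatch = {mismatch_signal(appearance, motion):.2f}")
```

On these made-up scores, the android produces by far the largest mismatch, echoing the observation that humanlike appearance backfires unless the actions keep up.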
A social robot also needs the ability to learn socially. Andrea Thomaz is an assistant professor at the Georgia Institute of Technology and director of its Social Intelligent Machines Laboratory. At her lab, they have built a robot designed to learn from humans the way a person would – through speech, observation, demonstration and social interaction. “In my lab, we see human social intelligence as being comprised of four key components – the ability to learn from other people, the ability to collaborate with other people, the ability to apply emotional intelligence, and the ability to perceive and respond to another person’s intentions. We try to build this social intelligence in our robots.”
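The first of those components, learning from other people, is often approached in robotics as learning from demonstration. The Python sketch below is a minimal, generic version of that idea – a nearest-neighbor imitation policy – and is not a description of Thomaz's actual system; the states, actions and task are hypothetical.

```python
# Minimal learning-from-demonstration sketch (generic illustration, not the
# Social Intelligent Machines Lab's system): the robot records state/action
# pairs while a person demonstrates, then imitates by recalling the action
# whose recorded state is nearest to the current one.

from typing import List, Tuple

State = Tuple[float, float]  # e.g. (object_x, object_y), purely illustrative

def nearest_action(demos: List[Tuple[State, str]], state: State) -> str:
    """Imitate: return the demonstrated action for the most similar state."""
    def dist(s: State) -> float:
        return (s[0] - state[0]) ** 2 + (s[1] - state[1]) ** 2
    _, best_action = min(demos, key=lambda d: dist(d[0]))
    return best_action

# A person demonstrates: grasp objects on the left, point at objects on the right.
demonstrations = [((0.1, 0.5), "grasp"), ((0.2, 0.4), "grasp"),
                  ((0.8, 0.5), "point"), ((0.9, 0.6), "point")]

print(nearest_action(demonstrations, (0.15, 0.45)))  # -> "grasp"
print(nearest_action(demonstrations, (0.85, 0.55)))  # -> "point"
```

Real systems of this kind go well beyond nearest-neighbor lookup, but the core loop – observe a demonstration, store it, generalize to new situations – is the same social-learning ability Thomaz describes.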
This article was first published at www.newswise.com