Human-robot interactions in the NYT
It seems that the NYT somehow covered the Human-Robot Interaction conference.
If robots can act in lots of ways, how do people want them to act? We certainly don't want our robots to kill us, but do we like them happy or sad, bubbly or cranky? "The short answer is no one really knows what kind of emotions people want in robots," said Maja Mataric, a computer science professor at the University of Southern California. (...) There are signs that in some cases, at least, a cranky or sad robot might be more effective than a happy or neutral one. At Carnegie Mellon University, Rachel Gockley, a graduate student, found that in certain circumstances people spent more time interacting with a robotic receptionist — a disembodied face on a monitor — when the face looked and sounded unhappy. And at Stanford, Clifford Nass, a professor of communication, found that in a simulation, drivers in a bad mood had far fewer accidents when they were listening to a subdued voice making comments about the drive. (...) "People respond to robots in precisely the same way they respond to people," Dr. Nass said. A robot must have human emotions, said Christoph Bartneck of the Eindhoven University of Technology in the Netherlands. That raises problems for developers, however, since emotions have to be modeled for the robot's computer. "And we don't really understand human emotions well enough to formalize them well," he said.
Above all, I like this excerpt:
"If robots are to interact with us," said Matthias Scheutz, director of the artificial intelligence laboratory at Notre Dame, "then the robot should be such that people can find its behavior predictable." That is, people should be able to understand how and why the robot acts.
Why do I blog this? I like it because it puts the emphasis on the importance of mutual modeling in social behavior; mutual modeling refers to the inferences an individual makes (attributions) about others in terms of their intents or their cognitive and emotional states. The quote above hence points to the need to improve the mutual modeling process between humans and robots. Another intriguing issue is that people start projecting onto, or anthropomorphizing, the robotic artifact, as they do with pets. I am interested in this because the blogject concept might lead to similar situations, in which people will have to assign certain meanings to the blogject's agency.