Gestural behavior in virtual reality and physical space
Now that our online personas overlap with our presence in the physical world, many questions about the connections between the two worlds remain unanswered. This is the research issue addressed by the Virtual Human Interaction Laboratory at Stanford University. DevSource has a good overview of it (via the Presence mailing list), starting with the questionable motto: "How does the world change when you have five arms?".
The article reports that researchers have learned that, when we build digital versions of one another, people tend to behave the same in virtual reality (VR) as they do in physical space, at least on a gestural level. His team has studied online communities and avatar-based games, analyzing patterns of interaction and comparing how they relate to the social world. With avatars, he says, the norms of conversation and nonverbal behavior are modeled on how people behave in physical space. But there is one interesting exception: "In games, taller and more beautiful avatars actually perform better."
Why do I blog this? Since I am interested in the relationships between spatial features and behavior, this is relevant; see for instance what Philip Jeffrey wrote about how proxemics is still pertinent in virtual space: Jeffrey, P. and Mark, G. (1998). Constructing Social Spaces in Virtual Environments: A Study of Navigation and Interaction. In: Höök, K.; Munro, A.; Benyon, D. (eds.): Workshop on Personalised and Social Navigation in Information Space, March 16-17, 1998, Stockholm. SICS Technical Report T98:02, Stockholm: Swedish Institute of Computer Science (SICS), pp. 24-38.
But there is more:
Bailenson [the lab director] offers one bit of practical advice for software developers who build "social" user interfaces. Anytime you have a UI that guides a person, especially one with a human face, developers tend to make the agent look more realistic than it behaves. And that, he says, causes problems with user expectations.