I just attended a talk by Alexander Repenning about the use of mobile computing and simulation for "social learning". One of the projects he mentioned is quite relevant to my research. It's described in his paper Mobility Agents: Guiding and Tracking Public Transportation Users:
The Mobility Agents system provides multimodal prompts to a traveler on handheld devices helping with the recognition of the “right” bus, for instance. At the same time, it communicates to a caregiver the location of the traveler and trip status.
What is interesting to me is the location-awareness interface:
For advanced travelers, we included a tool called the Urban Radar to find bus stops by pointing out the relative position of nearby recognizable landmarks such as restaurants. Tourists exploring an urban environment can use the Urban Radar to find interesting spots. The Urban Radar uses the current location of the traveler and a specified interest, e.g., the interest in food, to find nearby locations. The radius of the search sweep can be constant but can also be switched to automatic mode. (...) Caregiver Interface: (...) The ideal application would allow a much more peripheral sense of observation that requires only a small amount of screen space and provides a concise representation of the location and/or situation of the traveler. In addition, travelers as well as their caregivers wanted to have control over who could access their data. (...) in the form of an IM client
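The Urban Radar logic described above (current location + a specified interest, with a constant or automatic search radius) can be sketched roughly as follows. This is only my own illustration of the idea, not the authors' implementation; the function names, the landmark data structure, and the choice to implement "automatic mode" as "grow the sweep until a few landmarks are found" are all my assumptions.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in metres
    r = 6371000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def urban_radar(position, landmarks, interest, radius=None, min_hits=3):
    """Return landmarks matching `interest`, nearest first.

    radius is in metres; radius=None stands in for the paper's
    "automatic mode", here interpreted (my assumption) as widening
    the sweep until at least `min_hits` matches are inside it.
    """
    lat, lon = position
    matches = sorted(
        (lm for lm in landmarks if lm["category"] == interest),
        key=lambda lm: distance_m(lat, lon, lm["lat"], lm["lon"]),
    )
    if radius is None:
        return matches[:min_hits]
    return [lm for lm in matches
            if distance_m(lat, lon, lm["lat"], lm["lon"]) <= radius]

# Hypothetical example: a traveler interested in food
landmarks = [
    {"name": "Cafe A", "category": "food", "lat": 46.5200, "lon": 6.6320},
    {"name": "Museum", "category": "culture", "lat": 46.5210, "lon": 6.6330},
    {"name": "Diner B", "category": "food", "lat": 46.5300, "lon": 6.6400},
]
nearby = urban_radar((46.5205, 6.6320), landmarks, "food", radius=500)
```

With a fixed 500 m sweep only the closest café is returned; dropping the radius argument switches to the automatic mode and simply yields the nearest matches regardless of distance.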
Why do I blog this? I am less interested in the project itself than in the location-awareness interfaces, which I describe in one of the chapters of my dissertation. Both the traveler's and the caregiver's interfaces are interesting and fit the classification I've done (close to Jaiku, which shares similar ideas with this Urban Radar).