Location-based wristwatch in Second Life

I'm slightly underwater lately, and I missed this news about a location-tracker in Second Life:

SLStats comes in the form of a wristwatch, available in Hill Valley Square [in SL] in the Huin sim. Once you register with the service in-world, the watch "watches" where you go, tracking your location as you move around the world, as well as which other avatars you come into contact with. The information is used on the SLStats site to rank most popular regions (among SLStats users, of course), and to track how much time you've spent in-world, which you can view at a link like this one, which tracks Glitchy: http://slstats.com/users/view/Glitchy+Gumshoe.

Why do I blog this? Yet another location-awareness tool that I should cite in my dissertation on this topic.

Mechanisms for gathering a team in multi-user games

In this old paper by Ben Calica found on Gamasutra, there is a good description of the existing ideas for gathering teams in a multi-player game. Calica describes four ways to gather a team:

  1. Next on the Bus - This strategy is basically first come, first served. Games are filled by people in order of appearance in "line".
  2. Pick-Me - The far more common approach. Unfortunately it brings back unfortunate echoes of schoolyard horrors everywhere, with a few people waiting to be picked for what feels like their entire lives.
  3. Wander and Gather - Some of the persistent environment games have introduced this concept. That is, just start playing, and if you run into someone you like, play along with him or her.
  4. Once more into the Breach, Dear Friends - This is most common in the Doom-like games. Just walk into a room filled with gun-toting bastards and shoot anything that moves. If you die, you come right back in to play again.

Why do I blog this? Group formation, and how individuals manage to gather with other people they do not know in a virtual environment, is of interest to me (in terms of CSCW research and design). A chat last week with a friend reminded me of this article, which I used in 2002 for a study about awareness tools in first-person shooters. What is interesting here is (1) to see how these mechanisms were conceived at that time, (2) the fact that they have not evolved that much, (3) the emphasis on FPS (which seem to be less trendy right now), (4) that there are now better tools to support this process (such as Xfire), and (5) that the notion of statistics about players was less frequent. Of course, my remarks here come from papers other than this Gamasutra article.

Moreover, this can also be of interest for projects other than video games: what about a similar layer in Web 2.0 applications?

Technology and Shabbat

Technology and Jewish Life by Manfred Gerstenfeld and Avraham Wyler interestingly describes how the development of new technologies has brought with it many challenges and decisions regarding several aspects of Jewish life, such as Shabbat observance. I have always been intrigued by how technologies or systems can cause challenges, and how these problems can be circumvented by deep user-centered design, for example:

Many hotels have entrance doors controlled by an electronic eye and doors to rooms that can only be opened by electronic keys. Some Israeli hotels have two locks on their doors, one electronic and one regular, the latter for use on Shabbat by the Orthodox. For security reasons, hotels worldwide are increasingly making access to their stairways difficult, and alarms are often set up against entry so that they have come to be used almost exclusively as emergency exits. (...) Modern technology has made it possible for observant Jews in Israel to live in high-rise buildings whose higher floors have formerly been inaccessible to them on Shabbat, as they do not use regular elevators. Many hotels and high-rise buildings with Orthodox inhabitants have a special preset elevator that is halakhically permitted for use on Shabbat.

Also of interest is the discussion about what can and cannot be accepted:

Some products address extreme or unique situations. One halakhic technology institute constructed a telephone that enabled an Israeli ambassador to use the phone on Shabbat. In the Israeli army a special pen is used by observant soldiers on Shabbat, whose ink-mark fades away after a certain period of time. Therefore their use is not considered a form of the writing that is forbidden on Shabbat. These pens are also used in hospitals.

Why do I blog this? The design of technologies that one can possibly use on Shabbat, and the discussion of what should not be used, is very interesting IMO in terms of user-centered design and as a critical reflection on the articulation between humans and technology. Besides, it's also a pertinent confrontation with a different way of thinking about technologies and their characteristics.

Lessons from a Google Earth game

This Gamasutra article written by a team from Intel, entitled "Mars Sucks - Can Games Fly on Google Earth?", explores whether Google Earth could be used as the foundation of a video game (beyond current applications such as "Find Skull Island" and "EarthContest"). Their prototype is simple:

Martian robotic spacecraft are invading Earth and sucking up humans for experiments! We were able to capture one Martian spacecraft, which we need you to pilot in an attempt to blast other Martians out of our atmosphere. The Martians are being sent messages that direct them to their next target. Your mission is to decipher the messages, and blast these Martians before they can suck people off the planet. Stay tuned for intercepted Martian messages! (...) We decided to overlay an image of a Martian craft cockpit over the Google Earth window and let the standard Google Earth controls handle moving around the globe. In the cockpit, players see a sequence of clues about the location of each Martian invader.

The article describes the architecture of such a project in more technical detail. What is interesting are their conclusions:

We learned that very simple games and casual games are possible now on Google Earth. We also learned that Google Earth is not yet ready to be the foundation of a serious action game. (...) As we write this, rumors are that Google is planning to release an application programming interface (API) for Google Earth, and we hope that will indeed happen soon. That step would really unleash the potential for building games and other applications over Google Earth. With the API release, we are hoping to find it’s much easier to display text on the screen and handle mouse events.

Why do I blog this? What I find important here is the flexibility that can hopefully exist with such platforms: something that could be tinkered with, modified, and that would eventually allow the creation of innovative mash-ups.

Social Objects

Ulla-Maaria Mutanen's new project is called "Social Objects" and aims at building and testing "simple service concepts for labeling, bookmarking and communicating around design, art and craft objects":

The purpose is to bring together four kinds of groups:

  1. technology developers, who are interested in testing their products and applications in concrete settings like museums and design exhibitions
  2. designers, manufacturers, artists, and crafters who want to generate online conversations around their work
  3. museums and exhibition organizers who are interested in finding new ways to engage with their audience
  4. university researchers who are interested in the social practices that connect the online and the physical

Why do I blog this? Given her current work with ThingLink, this new project seems quite compelling. I don't know much about it, but the idea of extending the social layer around artifacts is of particular interest IMO. Something that would help track the history of interactions an object has had (with its owner, other people or the environment) is valuable, and the narrative that could be generated out of it could be intriguing.

Qualitative video game studies: categorization and questions

In Game analysis: Developing a methodological toolkit for the qualitative study of games (a paper published in Game Studies, 6(1), December 2006), Mia Consalvo and Nathan Dutton describe a method for the critical analysis of video games as "texts". Their point is to go beyond the question "is simply playing a game, similar to watching a film, the proper method?". They propose four types of targets that could be considered: Object Inventory, Interface Study, Interaction Map, and Gameplay Logs. What I appreciated is the list of questions they set out for each of these four areas:

Object Inventory

  • Whether objects are single or multi use
  • The interaction options for objects: do they have one use (and what is it)?
  • Do objects have multiple uses (and what are they)?
  • Do those uses change over time?
  • The object's cost
  • A general description of the object.

Interface Study

What is important about the interface, from the researcher's point of view, is the information and choices that are offered to the player, as well as the information and choices that are withheld. Examining the interface (and going beyond elegance of design or ease of use) lets researchers determine how free players are to experiment with options within a game. Alternately, it can help us see what information is privileged.

Interaction Map

  • Are interactions limited (is there only one or two responses offered to answer a question)? Do interactions change over time (as Sims get to know one another, and like one another, are more choices for interaction offered)?
  • What is the range of interaction?
  • Are NPCs present, and what dialogue options are offered to them? Can they be interacted with? How? How variable are their interactions?

Gameplay Logs

  • How does the game allow players to save their progress? Are there restrictions to the activity? How and why?
  • Is "saving" as a mechanism integrated somehow into the game world to provide coherence, or is some more obtrusive method offered?
  • Are there situations where avatars can "break the rules" of the game? How and why?
  • Are there situations that appear that the producers probably did not intend? What are they and how do they work?
  • Does the game make references to other media forms or other games? How do these intertextual references function?
  • How are avatars presented? How do they look? Walk? Sound? Move? Are these variables changeable? Are they stereotypical?
  • Does the game fit a certain genre? Does it defy its stated genre? How and why?

Why do I blog this? There is indeed a lack of methodological frameworks for video game research. Though this corresponds to research questions different from the ones I am addressing, the probes and categorization described in this paper are valuable.

Wardriving with cabs

According to the O'Reilly Radar, there's a plan from Ericsson to find cellphone coverage holes in the New York City area by deploying modem-sized sensors in cabs that will report back signal strength and clarity. I liked this part of the interview:

Ericsson chose cabs because they are always on the road and they cover most of the city. They've used other methods in the past. "Our favorite vehicle is the taxicab because of the randomness in its circulation," said Niklas Kylvag, Ericsson's manager of fleet services. But, he added, "We have used trains, trucks, buses, delivery vehicles, limousines, pretty much anything that is moving and has electricity in it. I have myself done testing in the Swiss Alps with this on my back at a ski resort."

Why do I blog this? It's interesting IMO to see how the discovery of seams in technological infrastructures is now rooted in possible end-users' behaviors.

Street life in Lausanne

Spotted in Lausanne, in front of the railway station: an interesting stairway (next to the McDonald's) where teens usually hang out.

The first picture shows a tagged floppy disk stuck on a concrete wall. The second one is an interesting set of street annotations: "Jesus comes back" on a tiny paper clip, a URL tagged on the walls, remnants of posters, a badly-drawn penis...

Why do I blog this? I was just taking some pictures for a potential project about urban gaming and traces left in space.

From proactive computing to proactive people in Ubicomp

Rogers, Y. (2006) Moving on from Weiser's vision of calm computing: engaging UbiComp experiences. In: P. Dourish and A. Friday (Eds.) Ubicomp 2006 Proceedings, LNCS 4206, pp. 404-421, Springer-Verlag. In this paper, the author starts from Mark Weiser's classical description of ubicomp as a potential era of "calm computing" and explains how research in that domain has not matched these expectations. The most important stance of Yvonne Rogers lies in the idea that "An alternative agenda is outlined that focuses on engaging rather than calming people", so that academics can have a new research agenda. Some excerpts:

There is an enormous gap between the dream of comfortable, informed and effortless living and the accomplishments of UbiComp research. As pointed out by Greenfield [20] “we simply don’t do ‘smart’ very well yet” because it involves solving very hard artificial intelligence problems that in many ways are more challenging than creating an artificial human. (...) To this end, I propose one such alternative agenda which focuses on designing UbiComp technologies for engaging user experiences. It argues for a significant shift from proactive computing to proactive people; where UbiComp technologies are designed not to do things for people but to engage them more actively in what they currently do.

What is very pertinent is to see her motivation:

My reason for proposing this is based on the success of researchers who have started to take this approach. In particular, a number of user studies, exploring how UbiComp technologies are being appropriated, are revealing how the ‘excitement of interaction’ can be brought back in innovative ways.

And of course the value of the article also lies in the research directions she suggests (which are more or less phenomena we can already observe in research publications and projects): the development of small-scale toolkits and sandboxes (that offer the means by which to facilitate creative authoring, designing, learning, thinking and playing), the practice of scientific inquiry and research, and the potential for using UbiComp technologies to engage people as part of self-monitoring and behavioral change programs.

Why do I blog this? I have always been skeptical about the notion of "calm computing" and this article is interesting for that matter. I also found this stance and the vocabulary she uses interesting (for example "A New Agenda for UbiComp: Engaging User Experiences"; this "user experience" term is not that frequent among academics).

Additionally, her comparison between the failure of strong AI and Weiser's vision of ubicomp makes sense.

ATM as a gaming interface

Yesterday evening, a quick search on the web about using ATM interfaces as a game platform led me to run across the following news: Ogaki Kyoritsu Bank is introducing fruit-machine-style games of chance which run while the ATM processes its more mundane transactions:

Since Japan's economy turned sour a decade ago, its once-complacent banks have had to work harder to attract custom. And cash machines have been relatively slow to catch on, not least because most banks still insist on charging for withdrawals. In order to persuade clients to use their machines, Japanese banks have introduced a range of inventive selling-points.

Why do I blog this? It was hard to find anything more interesting than that; I was expecting some crazy hackers to have tinkered with this sort of interface to create a hardcore gaming experience. But the only good connection between ATMs and games is that some folks designed an ATM card that gives access to virtual earnings.

Interest-based life logging

Blum, M., Pentland, A. & Tröster, G. (2006), InSense: Interest-Based Life Logging, IEEE Multimedia, 13 (4), pp. 40-48. The paper describes a wearable data collection device called InSense, based on Vannevar Bush's Memex principles, which allows users to continually collect their interactions and store them as a multimedia diary. It basically takes into account the sensor readings from a camera, microphone, and accelerometers. The point is to "classify the user's activities" and "automatically collect multimedia clips when the user is in an 'interesting' situation".

What is interesting is the types of categories they picked to develop their context-aware framework: they chose location, speech, posture, and activities to represent many diverse aspects of a user's context. They also have subcategories (for instance, for location: office, home, outdoors, indoors, restaurant, car, street, shop).

The experience sampling approach works like this:

Subjects wear the system for several hours without interacting with it. Audio and acceleration signals are recorded continuously. The camera takes pictures once a minute and WiFi access points are logged to establish location. After the recording session, the user employs an offline annotation tool, which presents one image at a time, the corresponding sound clip, and a list of labels from which to choose.

What is also curious is their description of the algorithm that calculates the current level of interest of an event based on the context classification.

Why do I blog this? I am less interested in the purpose of the system itself (sharing material) than in the data extracted from context readings and how this could be used to tell a story (or to build up a narrative). Of course, given my interest in games, I see this device as intriguing and potentially relevant to map the first-life experience onto virtual-world counterparts; it could go beyond current pedometers that track dogs.
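The paper's actual algorithm isn't reproduced here, but the general idea of triggering a recording when an interest level computed from context labels crosses a threshold can be sketched roughly like this (the weights, labels and threshold below are entirely made up for illustration):

```python
# A toy illustration (not the authors' actual algorithm): each classified
# context label gets a weight, and a multimedia clip is saved when the
# combined score crosses a threshold.

WEIGHTS = {
    # location           speech                posture/activity
    "restaurant": 0.8,   "conversation": 0.9,  "walking": 0.4,
    "office": 0.2,       "silence": 0.1,       "sitting": 0.1,
    "outdoors": 0.6,     "laughter": 1.0,      "running": 0.7,
}

THRESHOLD = 1.5

def interest_score(context_labels):
    """Sum the weights of the classified context labels (0.3 for unknowns)."""
    return sum(WEIGHTS.get(label, 0.3) for label in context_labels)

def should_record(context_labels):
    """Decide whether the current situation is 'interesting' enough to log."""
    return interest_score(context_labels) >= THRESHOLD

# Chatting and laughing in a restaurant is "interesting"...
print(should_record(["restaurant", "laughter", "walking"]))  # True
# ...sitting silently at the office is not.
print(should_record(["office", "silence", "sitting"]))       # False
```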

Network Architecture Lab

The Network Architecture Lab (Columbia University Graduate School of Architecture, Planning, and Preservation) directed by Kazys Varnelis:

Specifically, the Network Architecture Lab investigates the impact of computation and communications on architecture and urbanism. What opportunities do programming, telematics, and new media offer architecture? How does the network city affect the building? Who is the subject and what is the object in a world of networked things and spaces? How do transformations in communications reflect and affect the broader socioeconomic milieu? The NetLab seeks to both document this emergent condition and to produce new sites of practice and innovative working methods for architecture in the twenty-first century. Using new media technologies, the lab aims to develop new interfaces to both physical and virtual space.

More on this on BLDGBLOG.

Why do I blog this? It's been a while since I spotted Kazys' new lab, and I am curious to see how these works collide and to read more about their research.

The World Wide Lab: future of sciences as envisioned by Latour

Sorting my office at the lab, I ran across an old issue of Wired in which there was an article by Bruno Latour that I enjoyed reading: The World Wide Lab. In this article, Latour basically advocates for a paradigm change in research.

Science was what was made inside the walls where white coats were at work. Outside the laboratory's borders began the realm of mere experience - not experiment. (...) Today, all this is changing (...) First, the laboratory has extended its walls to the whole planet. Instruments are everywhere. Houses, factories, and hospitals have become lab outposts. [and the example that Latour takes is about GPS: locative-enabled science] (...) Second, you no longer need a white coat or a PhD to research specific questions. Take the Association Francaise contre les Myopathies, a French patient advocacy group that focuses on ignored genetic diseases. The AFM has not waited for the results of molecular biology to trickle down to patients in wheelchairs. It has hired researchers, pushed for controversial procedures like genetic therapy, and built an entire industry, producing at once a new social identity and a new research agenda. (...) Third, there is the question of scale. The size and complexity of scientific phenomena under scrutiny has grown to the point that scaling them down to fit in a laboratory is becoming increasingly difficult. (...) As a result, contemporary scientific controversies are emerging in what have been called hybrid forums.

Why do I blog this? I am quite interested in this way of describing the future of research (though it does not mean that the same research questions will be addressed). Besides, I really like this idea of the world as a lab (closer to my own practice, it reminds me of the Living Lab initiative).

Google Earth + SketchUp = non-avatar-based metaverse?

Seen last month in CNN Money, this article describes how, through the combination of satellite maps and 3-D software (the 3D modeling program SketchUp), Google Earth is turning into a virtual online playground. Some excerpts I found interesting below. It starts like the Second Life craziness:

You can already download user-generated layers that sit on top of Google's 3-D Earth and show you, for example, the location of celebrity houses or hiking trails or famous landmarks. One dating service has even started showing people looking for partners as a Google Earth layer. Real estate companies have started showing off virtual versions of their buildings (for sale in the real world) on Google Earth. SketchUp allows them to build entire models of their apartments, right down to the microwave oven.

And the more interesting stuff is coming along:

The result could be that we'll soon populate a virtual version of planet Earth instead of the made-from-scratch metaverses like online games or Second Life. The main element Google Earth is missing today is avatars (...) "I would expect to see someone using Google Earth as a virtual social space by the end of the year," says Jerry Paffendorf, research director of the Acceleration Studies Foundation

Then the article starts describing how the Web can become a 3D metaverse-like environment, with blabla and stuff that I am still dubious about.

Why do I blog this? Even though I am not very enthusiastic about the whole article, there is some relevant stuff here. Of course, stories such as "Consumers could fly into the virtual New York, go shopping in a virtual Times Square, get past the velvet rope at a virtual Studio 54 and chat with an avatar dressed as Andy Warhol" always get my hackles up. The journalist seems to stretch his conclusions a bit. IMO what is interesting with Google Earth and SketchUp is the creativity they allow, not the claim that they will be the basis for the future of the Web. That said, I additionally think it's very interesting to have a non-avatar-based virtual environment; it's indeed a model on which interesting things could be done (though I feel like some avatar will pop up at some point).

Location-based annotation

Spotted this morning in Geneva:

It's written "salot" with two arrows pointing at the windows (with means in fact "salaud", there is a bad typo, an english transaltion would be "asswipe").

What's the equivalent of this with a mobile social software for location tagging?

Onlife, Nintendo Wii and traces of interaction

For several weeks now I have been hooked on Onlife, a very simple application that tracks and helps you visualize traces of your interactions with Mac applications.

Onlife is an application for Mac OS X that observes your every interaction with apps such as Safari, Mail and iChat and then creates a personal shoebox of all the web pages you visit, emails you read, documents you write and much more. Onlife then indexes the contents of your shoebox, makes it searchable and displays all the interactions between you and your favorite apps over time.

For instance, yesterday's patterns are quite clear:

Why do I blog this? The notion of "traces of interaction" is very trendy lately; I see it popping up everywhere: about blogjects, in educational technologies (how to use past interactions to give feedback to users and help them learn? Why not use AI techniques such as case-based reasoning to meet this end?)... This is also an approach favored by Nintendo with the "Wii play history": the Wii automatically records details of what game was played. Users are then able to see a record of how long they played which games.

Now, some might be wondering: what would be the potential usage of such applications? To me, Onlife is interesting to see my work patterns (my web browser is a very important tool that I use in conjunction with my text editor) and eventually adjust my behavior (time to shut down my IM client?). But what else? A problem here might be that these applications are too limited to make sense, as a lot of the stuff that we do is not logged... and eventually a tremendous problem here is... privacy...
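As an illustration of the kind of aggregation such trace-of-interaction tools perform, here is a minimal sketch (the log format and numbers are hypothetical, not Onlife's or Nintendo's actual data model) that turns a log of application-switch events into per-application usage totals, Wii-play-history style:

```python
# Turn a log of (timestamp, application) switch events into per-app
# usage totals in minutes. Each entry marks the moment an app came to
# the foreground; None marks the end of the session.

from collections import defaultdict
from datetime import datetime

log = [
    ("2007-01-15 09:00", "Safari"),
    ("2007-01-15 09:25", "TextEdit"),
    ("2007-01-15 10:10", "Safari"),
    ("2007-01-15 10:40", "iChat"),
    ("2007-01-15 11:00", None),  # session end
]

def usage_per_app(events):
    totals = defaultdict(float)
    fmt = "%Y-%m-%d %H:%M"
    # Each usage interval runs from one event to the next one.
    for (start, app), (end, _) in zip(events, events[1:]):
        if app is not None:
            delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
            totals[app] += delta.total_seconds() / 60
    return dict(totals)

print(usage_per_app(log))
# {'Safari': 55.0, 'TextEdit': 45.0, 'iChat': 20.0}
```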

Criticisms towards electronic toys

This week, the WSJ has a critical paper about electronic toys that I found interesting. It starts by reporting the enthusiasm surrounding those devices: the "fusion of technology and personality" of robots, the "Vtech V. Smile Baby Infant Development System claims to go beyond passive developmental videos"... and then criticizes the underlying arguments behind them, questioning their "educational" potential:

two recent studies suggest that the oft-touted educational benefits of such toys are illusory, and child development experts caution that kiddie electronics, even those bought purely for fun, can have negative side effects such as inhibiting creativity and promoting short attention spans. (...) A two-year, government-funded study by researchers at the University of Stirling in Scotland found that electronic toys marketed for their supposed educational benefits, such as the LeapFrog LeapPad, an interactive learning activity toy, and the Vtech V provided no obvious benefits to children. "In terms of basic literacy and number skills I don't think they are more efficient than the more traditional approaches," researcher Lydia Plowman told the Guardian. Although no Luddite (Ms. Plowman makes the rather perverse recommendation that parents give children their old cellphones so that they can learn to "model" adult behavior with technology) (...) At a Boston University conference on language development in November, researchers from Temple University's Infant Laboratory and the Erikson Institute in Chicago described the results of their research on electronic books. The Fisher-Price toy company, which contributed funding for the study, was not pleased. "Parents who are talking about the content [of stories] with their child while reading traditional books are encouraging early literacy," says researcher Julia Parish-Morris, "whereas parents and children reading electronic books together are having a severely truncated experience." Electronic books encouraged a "slightly coercive parent-child interaction," the study found, and were not as effective in promoting early literacy skills as traditional books.

I also liked this comment:

"A lot of these toys direct the play activity of our children by talking to them, singing to them, asking them to press buttons and levers," notes Kathy Hirsch-Pasek, co-director of the Temple University Infant Lab, in a recent research summary. "I look for a toy that doesn't command the child, but lets the child command it."

Why do I blog this? Well, those criticisms are harsh, and they certainly reflect one part of the reality. They are interesting though, and should not be dismissed. However, I am sure these toys have some relevance, for example in how they can encourage new types of behavior, like new forms of "sociality" based on them: for instance, the presence of a robot leads to a discussion among kids or the family about its behavior (see "The Second Self: Computers and the Human Spirit -- Twentieth Anniversary Edition" (Sherry Turkle) for that matter). Moreover, the possibility to hack/program some of those toys can be of interest too.

Criteria to classify location-awareness

After reviewing lots of interfaces that enable location-awareness in both the physical and the virtual world, I identified criteria to describe them. There is no real formal classification of the diversity of MLA tools. Nevertheless, according to Jones et al. (2004) in their conceptual framework of location-based and social applications, three characteristics are prominent: the focus of the service (people or place), the content of the awareness (absolute location, relative location or proximity), and the time-span (present versus past, which has been referred to as synchronous versus asynchronous).

These characteristics offer a starting point for developing our own classification of MLA tools. Based on Jones' framework, I identified five criteria, as represented in the figure below.

  • The mode of capture of users' location, which can be self-disclosed (user initiative) or automatically captured with different degrees of accuracy. For example, the user can be asked to send his or her own location so that it can be displayed on the contacts' lists.
  • The type of information that is stored by the system, which falls into two aspects: the position and the referential. Position can either be discrete, such as place names, or continuous, with coordinates in a 2D or 3D space. This corresponds to the space/place distinction we discussed earlier. Of course, there is a need for a referential, which can be the physical environment, a virtual world or a shared document.
  • The mode of retrieval: users can access information about others' location in space upon request or by receiving it automatically (if the application is open). If the retrieval is based on the user's initiative, it can have two focuses: the user can look for information about people ("Display my friends' location") or look at who is located in a specific place ("Who is in that room?"). This is what Jones et al. (2004) described as a people or place focus.
  • The scope of retrieval: whether it is geographic (representation of the proximity or the whole space), social (displaying everyone or only specific contacts such as friends) or bound to a specific period of time. This last characteristic corresponds to the difference between synchronous (information about real-time position in space) and asynchronous MLA (information about past positions in space).
  • The format of delivery, which can be described with two sub-characteristics. On one hand, the location referential can be absolute (a place or location coordinates) or relative (indication that a friend is close to you, for instance). On the other hand, the final format of display can be verbal (name of a place), symbolic (shown as a symbol), or geographic (depicted on a map metaphor).

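To make these five criteria more concrete, here is a rough sketch (in Python; all names and the example tool are mine, not from Jones et al.) of how an MLA tool could be described along these dimensions:

```python
# Modeling the five classification criteria as a small data structure,
# then describing a hypothetical buddy-finder along those dimensions.

from dataclasses import dataclass
from enum import Enum

class Capture(Enum):
    SELF_DISCLOSED = "self-disclosed"  # user sends his/her own location
    AUTOMATIC = "automatic"            # sensed, with varying accuracy

class Position(Enum):
    DISCRETE = "place names"           # e.g. "the cafeteria"
    CONTINUOUS = "coordinates"         # 2D/3D coordinates

class Retrieval(Enum):
    ON_REQUEST = "upon request"
    AUTOMATIC = "pushed while the application is open"

@dataclass
class MLATool:
    name: str
    capture: Capture      # criterion 1: mode of capture
    position: Position    # criterion 2: type of information stored...
    referential: str      # ...physical environment, virtual world, document
    retrieval: Retrieval  # criterion 3: mode of retrieval
    focus: str            # "people" or "place" (Jones et al., 2004)
    scope: str            # criterion 4: geographic / social / temporal
    delivery: str         # criterion 5: verbal, symbolic or geographic
    synchronous: bool     # real-time vs. past positions

buddy_finder = MLATool(
    name="buddy-finder",
    capture=Capture.AUTOMATIC,
    position=Position.DISCRETE,
    referential="physical environment",
    retrieval=Retrieval.ON_REQUEST,
    focus="people",          # "Display my friends' location"
    scope="social",          # only specific contacts
    delivery="geographic",   # shown on a map
    synchronous=True,
)
print(buddy_finder.capture.value)  # automatic
```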
Any comment/criticism on that is welcome.

Websearching as a gratification cycle

Some new elaboration on the concept of passively multiplayer game has led Justin Hall to sketch this interesting cycle:

As a reminder, he now defines passively multiplayer game as:

Passively Multiplayer is a system for turning user data into ongoing play. Using computer and mobile phone surveillance, [avatars are built from] a user and their unique history. These resulting avatars can be viewed online, and they interact with other avatars online. Examples of data: web sites visited, email addresses, chat handles, contents of email or messaging, contents of word processed documents, digital images, digital video, video game moves.

Why do I blog this? I find this cycle very interesting because it puts things in a different perspective: it frames web searching/surfing as an activity with rewards (which I find quite pertinent and true). It's quite similar to how one reaches a state of flow in gaming. In this sketch, the only dimension I miss is the social one; let's add a social rating/reputation system on top of that (or see the other sketch).
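As a toy illustration of this "turning user data into ongoing play" idea (my own sketch, not Justin Hall's actual system), one could imagine deriving avatar attributes from a log of visited web sites:

```python
# Derive a playful avatar from surveillance data: each visited domain
# feeds a skill counter, and the dominant skill becomes the avatar's
# "class". The domain-to-skill mapping is entirely hypothetical.

from collections import Counter

SKILLS = {
    "gamasutra.com": "game design",
    "wikipedia.org": "lore",
    "slashdot.org": "tech",
}

def avatar_from_history(visited_domains):
    skills = Counter()
    for domain in visited_domains:
        skills[SKILLS.get(domain, "wandering")] += 1
    # The most-visited category becomes the avatar's class and level.
    avatar_class, level = skills.most_common(1)[0]
    return {"class": avatar_class, "level": level, "skills": dict(skills)}

history = ["gamasutra.com", "gamasutra.com", "wikipedia.org"]
print(avatar_from_history(history))
# {'class': 'game design', 'level': 2, 'skills': {'game design': 2, 'lore': 1}}
```

Adding the social dimension mentioned above would then just be a matter of comparing or rating these avatars against each other online.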