PC and mobile phone personalization

Blom, J. and Monk, A.F. (2003): Theory of Personalization of Appearance: Why Users Personalize Their PCs and Mobile Phones, Human-Computer Interaction, Vol. 18, No. 3, Pages 193-228.

Abstract: Three linked qualitative studies were performed to investigate why people choose to personalize the appearance of their PCs and mobile phones and what effects personalization has on their subsequent perception of those devices. The 1st study involved 35 frequent Internet users in a 2-stage procedure. In the 1st phase they were taught to personalize a commercial Web portal and then a recommendation system, both of which they used in the subsequent few days. In the 2nd phase they were allocated to 1 of 7 discussion groups to talk about their experiences with these 2 applications. Transcripts of the discussion groups were coded using grounded theory analysis techniques to derive a theory of personalization of appearance that identifies (a) user-dependent, system-dependent, and contextual dispositions; and (b) cognitive, social, and emotional effects. The 2nd study concentrated on mobile phones and a different user group. Three groups of Finnish high school students discussed the personalization of their mobile phones. Transcripts of these discussions were coded using the categories derived from the 1st study and some small refinements were made to the theory in the light of what was said. Some additional categories were added; otherwise, the theory was supported. In addition, 3 independent coders, naive to the theory, analyzed the transcripts of 1 discussion group each. A high degree of agreement with the investigators' coding was demonstrated. In the 3rd study, a heterogeneous sample of 8 people who used the Internet for leisure purposes were visited in their homes. The degree to which they had personalized their PCs was found to be well predicted by the dispositions in the theory. Design implications of the theory are discussed.

GRRR I cannot get the pdf (registration required)

Autotelematic Spider Bots

John Marshall sent me some information about this marvelous project: Rinaldo and Howard's Autotelematic Spider Bots: spider-like sculptures which interact with the public in real-time, moving around the gallery to find food sources and projecting images of what they can see onto the gallery walls.

Why do I blog this? First because I like Rinaldo's work, and also because those wandering robots seem interesting in terms of artificial-life thinking.

Tracking and displaying the paths of visitors

Via Computing for Emergent Architecture: You Are Here 2004 (led by Eric Siegel) is an interesting application that tracks and displays the paths of visitors traveling through a large public space.

The system displays the aggregate paths of the last two hundred visitors along with blobs representing the people currently being tracked. When viewers approach the work, they can display the live video image with the paths of currently tracked visitors superimposed. (...) The technology of this system is rooted in surveillance systems that are rapidly being put into place in all of our public spaces: airports, shopping malls, grocery stores and our streets and parks. The motivation for such public systems ranges from security and law enforcement to marketing and advertising. The system of this artwork is wholly anonymous – no data is collected and the only use of the information is by the museum visitors to track themselves and their friends. However, in many real-world applications of such technology, the identities of those being tracked are also registered. You Are Here provides a visceral understanding of surveillance systems' capabilities and a sensual, visual representation of information that is normally only accessible as dry statistics.

This benevolent application of tracking is also meant to show the interconnectedness of viewers with other visitors to the space by giving them a sense of the aggregate presence of people over time.
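The anonymity-by-design described above — keep only the paths of the last two hundred visitors, with no identities attached — can be sketched in a few lines. This is a toy model; the class and method names are mine, not from the installation:

```python
from collections import deque

class PathDisplay:
    """Toy model of the You Are Here aggregation: retain only the
    paths of the most recent visitors, with no identity attached."""

    def __init__(self, max_paths=200):
        self.paths = deque(maxlen=max_paths)  # oldest paths drop off automatically
        self.live = {}                        # anonymous track id -> list of (x, y) points

    def update(self, track_id, position):
        """Advance an anonymous track by one observed position."""
        self.live.setdefault(track_id, []).append(position)

    def finish(self, track_id):
        """Visitor left the space: archive the path into the aggregate."""
        self.paths.append(self.live.pop(track_id))

display = PathDisplay(max_paths=200)
for i in range(250):              # 250 visitors walk through the space
    display.update(i, (0, 0))
    display.update(i, (i, 1))
    display.finish(i)
print(len(display.paths))         # → 200
```

A bounded deque gives the "last 200" behavior for free: archiving path 201 silently drops path 1, so nothing older than the window is ever retained.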

Why do I blog this? It's an interesting art piece that addresses the issue of spatial data — food for thought for our replay tool project.

Collaborative WiFi-drinking interface

Lover's Cups, a MIT Medialab project by Jackie Lee and Hyemin Chung:

Lover's Cups explore the idea of sharing feelings of drinking between two people in different places by using cups as communication interfaces of drinking. Two cups are wireless connected to each other with sip sensors and LED illumination. The Lover's cups will glow when your lover is drinking. When both of you are drinking at the same time, both of the Lover's Cups glow and celebrate this virtual kiss.

The idea is to show how computer interfaces can enhance common activities and use them as a communication method between people: the act of drinking is used as an input for remote communication with the support of computer interfaces.
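As a thought experiment, the glow behavior the project describes reduces to a tiny shared-state sketch. The class and method names here are invented, not from the MIT Medialab code:

```python
class Cup:
    """Minimal sketch of the Lover's Cups logic: each cup has a sip
    sensor and LEDs, and is paired with a partner cup."""

    def __init__(self):
        self.partner = None
        self.sipping = False
        self.glowing = False

    def pair(self, other):
        self.partner, other.partner = other, self

    def set_sipping(self, sipping):
        """Sip-sensor reading changed on this cup."""
        self.sipping = sipping
        # your lover's cup glows when you drink
        self.partner.glowing = sipping
        # both drinking at the same time: both cups glow (the "virtual kiss")
        if sipping and self.partner.sipping:
            self.glowing = True

a, b = Cup(), Cup()
a.pair(b)
a.set_sipping(True)                  # only a drinks
print(b.glowing, a.glowing)          # → True False
b.set_sipping(True)                  # now both drink at once
print(a.glowing and b.glowing)       # → True
```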

Why do I blog this? well sometimes awareness tools are utterly crazy!

More about it: the authors wrote a paper for CHI, check the pdf.

Dodge and destroy Calder's kinetic mobiles in an Atari space shooter

Makers of Pac-Mondrian developed a new game called Calderoids in which players have to dodge and destroy Alexander Calder's kinetic mobiles in the triangular ship of Atari's space shooter Asteroids.

Calderoids combines the relativistic theories of Alexander Calder's kinetic sculptures with the virtual dimensions of Atari's arcade classic Asteroids. (...) After creating Pac-Mondrian, we were on a mission to create a videogame art mashup for Atari’s greatest selling arcade hit, the space shooter Asteroids. The first artist suggested whose work lent itself to the form of the game was Joan Miro, whose pen and ink ‘Constellation’ series resembled a field of asteroids. Ian Hooper declared Calder’s mobiles filled a far better formal fit, given their fanciful free flight. Creating the first body of sculptures that moved, Calder called his early sculptures ‘Constellations’ after Miro, and presaged their videogame destruction in 'Vertical Constellation with Bomb'. Although Mondrian’s squares provided the initial inspiration, the biomorphic forms in Calder's mobiles were directly influenced by his friend and sometime collaborator Joan Miro. Ian Hooper’s conception of Calderoids mirrors Calder’s own aesthetic merging of Mondrian & Miro in the mobiles. After consuming the brightly coloured squares of Pac-Mondrian, and contemplating Miro’s constellations, the motion and form of Calder’s mobiles led directly to shooting stars in Calderoids.

Available position in Oslo about tangible computing

Timo told me that there is an available PhD position at the Oslo School of Architecture and Design for his Touch Project:

A PhD in Touch

Radio Frequency IDentification (RFID) is a wireless technology that is currently finding applications in the replacement of barcodes in supply chains and logistics. This cheap and potentially ubiquitous technology is likely to influence the interactions we have with many products and services. The Touch project therefore looks at user-centred applications of the technology. A PhD is now available as part of the project.

Touch is interested in developing user-centred applications and services: assessing ways in which the technology might be used in everyday life in useful, fun and non-invasive ways. The growing integration of RFID readers in mobile phones enables simple interactions between phones and physical objects with a ‘swipe’ or ‘touch’. In Japan there are around 10 million people paying for tickets and other services with ‘wallet phones’ and near field communication. These applications in ticketing and retail are the first areas to emerge as mass-market uses.

An initial exploratory period will develop specific research questions and application areas. Touch will look closely at social practices around mobile use and RFID. How does the increasing digitalisation of physical objects affect identity, culture, play, and issues of social transformation. Are there areas of everyday physical activity that would benefit from network intervention? Are there networked, online activities that could be supported by interactions with the physical?

The project will develop a number of practical investigations of the relationship between the digital and the physical. In particular looking at shifts in advertising or marketing, retail activity, public and civic services, gaming or play, and issues around personal, social and communicative uses. Through the design of digital and physical artefacts, applications and prototypes, the project will build a body of knowledge around near field interactions.

The PhD will work on specific themes within the project. This will require self-initiated research, as well as collaborative development with other designers, an anthropologist, software developers, the mobile industry and user groups. Applicants should have a design background and be able to demonstrate knowledge of social, tangible or mobile interaction design. Applicants are encouraged to submit a diversity of themes and approaches within these areas.

The fellowship is provided by Institute of Design, AHO, Oslo, Norway, and has a duration of 3 years, starting date early to mid 2006. The yearly salary amounts to NOK 292.000.

Deadline for applications: Postmarked no later than 22 March 2006.

Applications should be sent to: Attn: Timo Arnall / Interaction Design, Oslo School of Architecture and Design, Maridalsveien 29, 0175 Oslo, Norway

Questions or submissions via email to timo.arnall [at] aho.no

Knowing some bits and pieces about the project that Timo explained to me, this seems to be a tremendous opportunity!

Virilio on designing accidents before the substance

Another great quote from Paul Virilio's book "L'accident originel":

"...imaginons une prospective de l'accident. En effet, puisque ce dernier est innové dans l'instant de la découverte scientifique ou technique, peut-être pourrions-nous, à l'inverse, inventer directement "l'accident" afin de déterminer par la suite, la nature de la fameuse "substance" du produit ou de l'appareil implicitement découverts, évitant ainsi le développement de certaines catastrophes prétenduments accidentelles" Paul Virilio, p114

my translation:

"...let's imagine a forecasting of the accident. Indeed, since the accident is invented at the very moment of the scientific or technical discovery, perhaps we could, conversely, directly invent the "accident" in order to then determine the nature of the famous "substance" of the product or device implicitly discovered, thus avoiding the development of certain supposedly accidental catastrophes"

Why do I blog this? I like the idea of designing the troubles before thinking about the artefacts, a kind of reverse-engineering technique to foster idea creations...

PSP and GPS: two tracks

There are two ways of thinking in terms of location-based games/services on the Sony PSP. The first track is to wait for the proper GPS adapter Sony is working on, scheduled to be launched before the end of 2006 (as written in the US PlayStation Magazine). It might be a USB adapter already presented at E3 in 2004 (thanks Sylvain!):

(picture via gamongirls)

The second track is of course to look at the underground world: gpsp is a hack that turns the PSP into a GPS navigation system, developed by 'Art':

Hi Guys, this is the first version of a program for the PSP that provides a practical GPS Graphic User Interface (GUI). The GPSP software for Sony PSP runs under LUAplayer 0.11 or later. LUAplayer is free. If you haven't downloaded it, you will need to get it running on your 1.50 firmware PSP in order to try this out for yourself.

Well, as it exists at the moment, it will allow you to view data from a GPS Mouse on the screen of your PSP running the GPSP program. This occurs in real time, however there is a delay from when the data is received by the microcontroller circuit, and retransmitted at the slower rate to the PSP. The pic circuit is also filtering information from the NMEA sentences transmitted by the GPS mouse, and discarding any information that GPSP doesn't use so that minimal bytes are retransmitted by the pic circuit to achieve (or try) the fastest transmission that GPSP will interpret.
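The filtering step described above — the microcontroller discarding NMEA sentences that GPSP doesn't use, so minimal bytes are retransmitted — is easy to picture. Here is a hedged sketch; which sentence types GPSP actually interprets is an assumption on my part (position fixes are typically carried in $GPGGA and $GPRMC):

```python
def filter_nmea(lines, wanted=("$GPRMC", "$GPGGA")):
    """Pass through only the NMEA sentence types the display actually
    uses, so as few bytes as possible go over the slow downstream link.
    The `wanted` set is an assumption, not taken from the GPSP source."""
    return [ln for ln in lines if ln.split(",", 1)[0] in wanted]

# A short simulated stream from a GPS mouse (standard NMEA 0183 sentences).
stream = [
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47",
    "$GPGSV,2,1,08,01,40,083,46,02,17,308,41,12,07,344,39,14,22,228,45*75",
    "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A",
]
print(filter_nmea(stream))  # keeps the GGA and RMC sentences, drops the GSV one
```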

Why do I blog this? If the first track is released, this would be a good step towards a mass-market console with location-based capabilities (I'm not talking about cell phones here).

Research taxonomy by Jarvinen

"How to select an appropriate research method in ergonomic studies?" by Jarvinen is a very insightful paper describing research methods that could be valuable in my work on HCI/CSCW. The paper provides a taxonomy:

the research approaches is first divided into two classes, one or both are then divided again into two subclasses etc. (...) Two classes are based on whether the research question refers to what is a (part of) reality or does it stress on utility of an innovation, usually an artefact (something made by human beings). (...) To analyze our research question we can apply our taxonomy (Figure 1) to the question above and find that the question concerns a research work, i.e. a part of reality. In the further more detailed analysis of our research question we find that term 'appropriate' refers to utility, and hence we can decide that our research question concerns either innovation building or innovation evaluation. We do not yet have anything to evaluate, but we must build it, in this case we must build some method, procedure or algorithm to select an appropriate research method.
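The branching the excerpt describes can be sketched as a toy decision function. This is a hedged sketch of the first splits only, and the field names are invented, not Jarvinen's:

```python
def classify(question):
    """Walk the first splits of the taxonomy from two yes/no answers:
    does the question stress utility of an innovation, and does the
    artefact to evaluate already exist?"""
    if question["stresses_utility"]:
        # utility of an artefact: build it, or evaluate an existing one?
        return ("innovation evaluation" if question["artefact_exists"]
                else "innovation building")
    return "study of reality"

# The paper's own example: 'appropriate' refers to utility, and there is
# nothing to evaluate yet, so a selection method must first be built.
q = {"stresses_utility": True, "artefact_exists": False}
print(classify(q))  # → innovation building
```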

Why do I blog this? because I feel like this expresses a nice framework about the research projects I am carrying out.

Internal/external memory

While reading a studies-observations board topic about "what's the most "everyware" thing available today?", I thought about the importance of USB keys. But what interests me here is less the pervasiveness (or non-ubiquity) of this object than the fact that lots of people carry a bag of external knowledge with them. What is even more amazing is WHERE it's often carried: on a necklace.

It's funny seeing people carrying "their external memory" on a necklace; there is an intriguing connection between this fashionable trend and the fact that this external prosthesis is close to the mouth (where we somehow express information through language):

I say "so-called" because the notion that memory sits in people's brains is somehow passé, given the situatedness of cognition (as well as some phenomenological theories).

Why do I blog this? well... I thought the connection was funny enough to be raised.

Human-robot interactions in the NYT

It seems that the NYT somehow covered the human-robot interaction conference.

If robots can act in lots of ways, how do people want them to act? We certainly don't want our robots to kill us, but do we like them happy or sad, bubbly or cranky? "The short answer is no one really knows what kind of emotions people want in robots," said Maja Mataric, a computer science professor at the University of Southern California. (...) There are signs that in some cases, at least, a cranky or sad robot might be more effective than a happy or neutral one. At Carnegie Mellon University, Rachel Gockley, a graduate student, found that in certain circumstances people spent more time interacting with a robotic receptionist — a disembodied face on a monitor — when the face looked and sounded unhappy. And at Stanford, Clifford Nass, a professor of communication, found that in a simulation, drivers in a bad mood had far fewer accidents when they were listening to a subdued voice making comments about the drive. (...) "People respond to robots in precisely the same way they respond to people," Dr. Nass said. A robot must have human emotions, said Christoph Bartneck of the Eindhoven University of Technology in the Netherlands. That raises problems for developers, however, since emotions have to be modeled for the robot's computer. "And we don't really understand human emotions well enough to formalize them well," he said.

Above all, I like this excerpt:

"If robots are to interact with us," said Matthias Scheutz, director of the artificial intelligence laboratory at Notre Dame, "then the robot should be such so that people can make its behavior predictive." That is, people should be able to understand how and why the robot acts.

Why do I blog this? I like it because it puts the emphasis on the importance of mutual modeling in social behavior; mutual modeling refers to the inferences an individual makes (attribution) about others in terms of their intents or their cognitive and emotional states. The quote above hence raises the fact that there is a need to improve the mutual modeling process between humans and robots. Another intriguing issue is that people start projecting onto or anthropomorphizing the robotic artifact, as they do with pets. I am interested in this because the blogject concept might lead to similar situations in which people will have to assign certain meanings to the blogject's agency.

NADA: code in flash or java to control analog devices

Shown by Mike Kuniavsky at eTech yesterday: NADA:

According to a transcript of his talk:

NADA is a suite that lets designers code in Flash or Java to control analog devices. Demoing NADA:

  • NADA component in Flash
  • Draws a circle, adds ActionScript to it to respond to an old volume knob of a TV hooked up to his computer; turns the knob, the circle's transparency changes. 30 secs of coding, one line of AS. Cool.
  • Has an airplane force sensor configured to zoom the image, a light sensor controlling transparency, a potentiometer to rotate the image. Like 50 lines of code.

A free version of NADA is available at http://sketchtools.com; tutorials are also available for those who have no Flash experience (for design students), with examples in Flash and Java.
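The demo's sensor-to-property mappings (knob to transparency, force sensor to zoom, potentiometer to rotation) boil down to linear rescaling of a raw reading into a display range. Here is a hedged sketch in Python rather than NADA's own Flash/Java bindings, with invented names and sensor ranges:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a raw sensor reading into a display-property range."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

class Circle:
    """Stand-in for the on-screen circle from the demo."""
    def __init__(self):
        self.alpha, self.zoom, self.rotation = 1.0, 1.0, 0.0

circle = Circle()
# old TV volume knob reads 0..255 -> transparency 0.0..1.0 (ranges assumed)
circle.alpha = scale(128, 0, 255, 0.0, 1.0)
# force sensor reads 0..1023 -> zoom factor 0.5..3.0 (ranges assumed)
circle.zoom = scale(512, 0, 1023, 0.5, 3.0)
print(round(circle.alpha, 2), round(circle.zoom, 2))  # → 0.5 1.75
```

The "30 seconds of coding" claim makes sense under this model: wiring a sensor to a property is one such mapping line.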

Why do I blog this? Having talked with some (game) designers who complain about the lack of tools for creating physical objects/prototypes, this tool seems to be a good starting point. It appears to be an interesting ubiquitous computing design tool.

Shoes interface which enables users to interact with real world objects

Tap World: Shoes interface for real world interaction (developed here):

"Tap World" is a pair of shoes interface which enables users to interact with real world objects. (...) "Smart Tap Shoes", which enables us to manipulate various real world apparatuses by using shoes when user's hands are occupied with another task. "Smart Tap Shoes" is composed of several sensors, Laptop PC, and infrared ray transmitters. (...) Smart Tap Shoes has tap switches behind the shoes. So user can control the objects only tap a floor with Smart Tap Shoes. User also can operate by turning shoe around the heel by rotate sensors in the heel. If he wants to turn up/down TV's volume, it would be the way of easy to use. When user did the action, Smart Tap Shoes transmitted infrared ray to control the objects. In this picture, he switched on the light by tap a floor.

Why do I blog this? this project is quite old but curious enough to land here; we'll have to pay attention to where our shoes are (by googling them) before tuning them to control our set-top boxes!

Weird toy to magnify insects and listen to their sounds

Via geisha asobi, this weird toy: Big Bad Booming Bugs:

Collect some insects and place them inside the unique sound chamber. A powerful 3X magnifier enlarges your performers so you can see every detail. Put on the headphones and listen as a microphone under the special sound stage picks up and amplifies every move and noise your bug makes!

Includes a handy capture-and-carry bug scooper.

$17.99, what a creepy thing.

Meeting with PhD advisor (March 2006)

I presented the model of mobile collaboration I defined (derived from the CatchBob results as well as coordination theories) to my PhD advisor. It actually addresses the exchange of various kinds of interfaces to foster coordination among a mobile group of players. There are actually two tracks to validate the model described previously:

  • A tool that would foster the exchange of coordination keys, to better support collaboration. In the form of a structured interface, this tool will suggest the exchange of certain kinds of strategy messages (instead of automating them, which failed as we saw in the first experiment). The analysis of interface usage will allow us to validate or refine the model by checking when specific keys are exchanged over time.
  • A formal description (in the form of a grammar) of coordination elements that would help the analysis of mobile collaboration. Used along with the replay tool, this grammar will help characterize visually how users collaborated with regard to particular processes: coordination-key exchange, division of labor, duration of subtasks… The validation of the model will consist in using this grammar with the replay tool to see the differences between groups who collaborated badly and those who collaborated efficiently. We already know who the “good” and “bad” collaborators are (based on task performance and various indexes); we will see whether the grammar fits into that picture. In the end, this grammar is meant to allow a better comprehension of collaborative processes in mobile teams.
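To make the second track concrete: at its simplest, such a grammar could map each logged message type to a category symbol, so a group's session becomes a sequence that can be inspected or compared across groups. This is purely illustrative; the event names and categories below are invented, not the model's actual ones:

```python
# Hypothetical grammar: logged message type -> coordination category.
GRAMMAR = {
    "i_go_north":    "STRATEGY",  # coordination-key exchange
    "you_take_left": "DIVISION",  # division of labor
    "where_are_you": "POSITION",  # position query
    "found_object":  "TASK",      # task progress
}

def encode(session):
    """Rewrite a list of logged message types into a category sequence,
    falling back to OTHER for anything outside the grammar."""
    return [GRAMMAR.get(msg, "OTHER") for msg in session]

good_group = ["i_go_north", "you_take_left", "found_object"]
print(encode(good_group))  # → ['STRATEGY', 'DIVISION', 'TASK']
```

Comparing such sequences (e.g. when STRATEGY symbols appear in the session) is the kind of visual/analytical contrast between good and bad collaborators the replay tool would support.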

The research process, visually speaking:

Wearable Computing (location-aware) for Aircraft Maintenance

Via Tom Nicolai's weblog (which is actually a "wearlog"), this Wearable Computing for Aircraft Maintenance: a concept combining wearable computing and knowledge management with the goal of shortening the maintenance process in the aircraft industry. It's a kind of location-aware, wearable information system meant to facilitate access to the different sources of information a technician needs during a maintenance task:

The core of the wearable computer is a PDA. The device can be used like a usual PDA in the handheld mode or it can be stored in a holder for wearable operation. In the holder, the PDA connects to an HMD and automatically adapts its user interface to the changed modalities. By a wrist worn input device the user controls the wearable. The input device also contains a RFID scanner. The scanner will be used to identify areas in the aircraft. Subsequently the computer can display information and logbook entries associated to that area. The aircraft itself will be equipped with RFID tags and a server to store the part descriptions with references to the RFID tags. Data storage and knowledge management is not possible on the PDA directly. Thus, the PDA is designed as a client to a notebook computer carried in the toolbox of the technician.

Paul Virilio and accidents

Just finished reading Paul Virilio's book "L'accident originel" in the train this morning. It was amazingly interesting; here are some excerpts from an interview with the author about this book:

Accidents have always fascinated me. It is the intellectual scapegoat of the technological; accident is diagnostic of technology. To invent the train is to invent derailment; to invent the ship is to invent the shipwreck. The ship that sinks says much more to me about technology than the ship that floats! Today the question of the accident arises with new technologies, like the image of the stock market crash on Wall Street. Program trading: here there is the image of the general accident, no longer the particular accident like the derailment or the shipwreck. In old technologies, the accident is "local"; with information technologies it is "global." We do not yet understand very well this negative innovation. We have not understood the power of the virtual accident. We are faced with a new type of accident for which the only reference is the analogy to the stock market crash, but this is not sufficient.

The whole book deals with this idea of accidents ("ce qui arrive" / "what happens"), dromology, the relation to space, speed and media. It comes from an exhibit he worked on at the Fondation Cartier in Paris, advocating for a future "Museum of the Accident". As he asks: "Is the reconstituted accident a foreshadowing of the Museum of the Accident?"

I also like his point about how technology reshapes spatial praxis, as well as the notion of familiarity I addressed yesterday:

I think that the infosphere - the sphere of information - is going to impose itself on the geosphere. We are going to be living in a reduced world. The capacity of interactivity is going to reduce the world, real space to nearly nothing. Therefore, in the near future, people will have a feeling of being enclosed in a small, confined, environment. In fact, there is already a speed pollution which reduces the world to nothing. Just as Foucault spoke of this feeling among the imprisoned, I believe that there will be for future generations a feeling of confinement in the world, of incarceration which will certainly be at the limit of tolerability, by virtue of the speed of information. If I were to give a last image, interactivity is to real space what radioactivity is to the atmosphere.

Why do I blog this? because I like what Virilio expresses and how he does it.

Places, familiarity and proximity

The French website Espace temps features an interesting article about the shift in place inhabitance, by Mathis Stock. There is a shift between two ways of living in a place; there are actually two models, "mono-topique" and "multi-topique" (in French).

Les points rouges indiquent les lieux familiers, les points jaunes les lieux non familiers : ce modèle graphique signifie que contrairement à d’autres sociétés ou d’autres époques, les lieux proches ne sont plus nécessairement ceux qui sont les mieux connus et les plus familiers. On voit notamment dans ce modèle que les lieux familiers peuvent être situés à des distances plus grandes que le rayon marquant la limite de l’espace de proximité. La variable discriminante pour déterminer la familiarité avec les lieux n’est plus la distance, mais la fréquence. Le second cercle symbolise l’accroissement de l’accessibilité à partir d’un lieu — supposé classiquement en forme de cercles concentriques — mais qui ne rend pas compte des accessibilités différentielles, c’est-à-dire des accessibilités localement meilleures.

This is nicely expressed by one of the graphics in this paper (on the left, the old "monotopique" model; on the right, the new "polytopique" model):

Red points depict familiar places, yellow ones non-familiar places: this model shows that, contrary to other societies or other eras, nearby places are no longer necessarily the best-known and most familiar ones. Familiar places can be located farther away than the radius marking the limit of the proximity space. The discriminating variable for determining familiarity with places is no longer distance, but frequency. The second circle symbolizes the increase in accessibility from a place — classically assumed to take the form of concentric circles — but does not account for differential accessibilities, that is, locally better accessibilities.
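A toy example makes the distance-versus-frequency point concrete (the places and numbers are invented for illustration):

```python
# Rank the same set of places two ways: by distance (the old model's
# implicit assumption) and by visit frequency (Stock's discriminating
# variable for familiarity).
places = [
    {"name": "corner shop",   "distance_km": 0.2, "visits_per_year": 4},
    {"name": "parents' town", "distance_km": 600, "visits_per_year": 12},
    {"name": "office",        "distance_km": 8,   "visits_per_year": 220},
]

by_distance  = sorted(places, key=lambda p: p["distance_km"])
by_frequency = sorted(places, key=lambda p: -p["visits_per_year"])

print(by_distance[0]["name"])   # → corner shop (nearest...)
print(by_frequency[0]["name"])  # → office (...but not the most familiar)
```

The two rankings disagree, which is exactly the "polytopique" claim: proximity no longer predicts familiarity.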

Why do I blog this? I like this idea of characterizing familiarity not by distance but by the frequency of visits to places. Besides, this is just a glimpse of the paper; it's full of good references and ideas about the concept of mobility.

In favor of cooperation in games

Chris Bateman has an interesting post in his blog about the fact that cooperation is often overlooked in the video game industry. This is a fact I fully acknowledge, because part of my work is devoted to the study of sociocognitive processes involved in cooperation/collaboration (related to technological artifacts) and another part is about doing "user experience" R&D for video-game companies. Bateman's feeling is really what I felt when talking to some game designers. He tries to promote new game design concepts that would take this dimension into account:

The most basic form of this kind of play is the team game. A typical team game is based around each player having the same capabilities; in essence, the game provides multiple avatars, one for each player... Gauntlet (...) Mostly, we see two player team games which we could term partner games - like the co-op mode in Halo, the two player rampages in San Andreas. In a partner game, it is usually possible for the players to play independently, co-operating only when a particularly difficult problem blocks their path. Some partner games take this further, usually by exploiting some measure of asymmetry to define separate roles. (...) support play. The main player does all the work, but the second player has the potential to contribute support to the main play. (...) tutor play. It is often the case when a player comes to a game for the first time that they will be taught to play by a second player.

Why do I blog this? I definitely agree with what he describes, and it's very interesting to see how a practitioner reaches the same conclusions (in terms of cooperation types) as coordination/collaboration theorists. Variables like partner asymmetry/roles, tutor roles, and players' contributions are very often cited in this literature.

Social proximity with bluetooth

T. Nicolai, N. Behrens, and E. Yoneki "Wireless Rope: An Experiment in Social Proximity Sensing with Bluetooth". IEEE International Conference on Pervasive Computing and Communications (PerCom) – Demo, Pisa, Italy, March 2006. The article describes an application called "Wireless Rope", an application on Java enabled phones, which collects information of surrounding devices by Bluetooth. The authors study large scale Bluetooth scanning for proximity detection with consumer devices and its effects on group dynamics during the conference.

Like a real rope tying together mountaineers, the Wireless Rope gives the urban group immediate feedback (tactile or audio) when a member gets lost or approaches. Thus everybody can fully engage in the interaction with the environment, and cognitive resources for keeping track of the group are freed. The program also displays the current status of the rope (Fig. 1). At the same time, collected information kept in the devices are gathered at a central station via special tracking stations. Registered users can look at the connection map created by gathered information from phones via the web (Fig. 2).
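The "member gets lost or approaches" feedback amounts to a set difference between successive Bluetooth scans of the group members' addresses. A hedged sketch (the scans are simulated here; on a phone they would come from periodic Bluetooth inquiries):

```python
def rope_events(previous, current):
    """Compare two successive scans of visible device addresses and
    return (lost, approached): who dropped out of range, who came near."""
    return previous - current, current - previous

# Simulated results of two consecutive Bluetooth inquiries
# (device names are invented placeholders for hardware addresses).
group_scan_1 = {"alice_phone", "bob_phone", "carol_phone"}
group_scan_2 = {"alice_phone", "carol_phone", "dave_phone"}

lost, approached = rope_events(group_scan_1, group_scan_2)
print(sorted(lost), sorted(approached))  # → ['bob_phone'] ['dave_phone']
```

In the real application, each event would trigger the tactile or audio feedback the authors describe, instead of a print.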

Why do I blog this? I'd be happy to see the results from the experiments used with this tool:

We plan to evaluate the logged information afterwards to analyse the connection patterns, group formation and evolution, and social patterns including an evaluation of the usefulness of Bluetooth for this kind of proximity detection. The result from this experiment may provide the aid which highlights relations between objects, people, situations within the given space, a scientific conference environment. This could be extended to map urban inhabitants. Our future fabric of digital and wireless computing will influence, disrupt, expand and be integrated into the social patterns within our public urban landscape.

Especially with regard to certain questions: how does this application fit into people's practices? How do users react to awareness of others?