The tongue becomes a surrogate eye

More about tongue-based interfaces. This is a bit old but I ran across it yesterday: using the tongue as a "surrogate eye" (News from 2001).

Researchers at the University of Wisconsin Madison are developing this tongue-stimulating system, which translates images detected by a camera into a pattern of electric pulses that trigger touch receptors. The scientists say that volunteers testing the prototype soon lose awareness of on-the-tongue sensations. They then perceive the stimulation as shapes and features in space. (...) The Wisconsin researchers say that the whole apparatus could shrink dramatically, becoming both hidden and easily portable. The camera would vanish into an eyeglass frame. From there, it would wirelessly transmit visual data to a dental retainer in the mouth that would house the signal-translating electronics. The retainer would also hold the electrode against the tongue.

(Picture K. Kamm/U. Wis.-Milwaukee)

Why do I blog this? Though this is designed for blind people, there are some intriguing possibilities in terms of human-computer input!

The fusion of research and development

The Economist gives a good overview of corporate research in an article entitled "The rise and fall of corporate R&D: Out of the dusty labs". The author highlights the fact that tech firms and big corporate R&D laboratories are shifting their attention and resources from research to development. Some excerpts:

Now the big corporate laboratories are either gone or a shadow of what they were. Companies tinker with today's products rather than pay researchers to think big thoughts. (...) “The lesson learnt is that you don't isolate researchers,” says Eric Schmidt, the boss of Google. The “smart people on the hill” method no longer works, he adds. Instead, researchers have become intellectual mercenaries for product teams: they are there to solve immediate needs. (...) At its Zurich Research Laboratory [IBM] around 300 scientists representing over 20 nationalities concentrate on areas such as microelectronics, nanotechnology and computer security. Only a few years ago researchers were judged on the basis of patents and papers, but today they roll up their shirtsleeves and work alongside the company's consultants (...) This reflects IBM's transition into “services science”.

There is a lot more to draw from the article, especially more examples from Intel, Yahoo, Google and other tech companies. An intriguing issue is also the fact that academia now struggles to find funds and is therefore forced into projects of just one or two years, even shorter than industry horizons, whereas "corporate research can look farther ahead, do bigger things and risk more money for a big payout".

Another aspect that I found curious is the idea of failure: "Failure is an essential part of the process. “The way you say this is: ‘Please fail very quickly—so that you can try again’,” says Mr Schmidt".

Why do I blog this? Pure interest in the evolution of R&D.

Archizoom's "No stop city"

The "No stop city" by Archizoom Associati (the Italian radical architecture group) is one of the visionary architecture projects that Kazys Varnelis describes as "being useful when they don't rely on a proximate future but rather suspend the question of their nearness, thereby being both already present and objects of contemplation". Kazys defines this project as follows.

"Archizoom elaborated on this in their 1969 No-Stop-City, an extrapolation of the postmetropolitan urban condition – that was simultaneously utopian and dystopian. (...) Modeled on the supermarket, the factory, and the horizontal plans of Büro Landschaft, No-Stop-City was envisioned as a "well-equipped residential parking lot" composed of "large floors, micro-climatized and artificially lighted interiors." Without an exterior, these "potentially limitless urban structures" would be "made uniform through climate control and made optimal by information links." Rather than serving to identify a place, No-Stop-City would be a neutral field in which the creation of identity through consumption could be unfettered."

Why do I blog this? I find this vision of the city intriguing; it reminds me of what we discussed during the LIFT07 workshop on the topic. This potential future sees the city as an endpoint of economic changes and their networked organizations (made possible by technologies such as phones and the internet). As Kazys puts it, after Branzi: "No longer viable as a place, the city would become a condition, existing not as a physical entity but as programming". Of course, this is a vision that makes us ask some important "why" questions about the future.

Understanding the cultural dimensions of cities (for urban computing)

Williams, A. and Dourish, P. (2006). Reimagining the City: The Cultural Dimensions of Urban Computing. IEEE Computer, 39(9), 38-43. The paper aims at changing the way cities are perceived in "urban computing": it is essentially an argument that cities should not be seen as a generic concept made of infrastructures and the people living in them. Rather, the authors advocate viewing them as products of history and culture. In a sense, they describe how infrastructures, city-dwellers and their practices are entwined, in order to answer the question "What cultural dimensions frame research in technologies for city life?". Doing so, they bring forward three "urban themes":

"Friends and strangers: (...) Others see them as embodiments of communitas, social togetherness, belonging, and mutual support. [Lovegetty, Dodgeball.com] (...) pervasive computing technologies are commonly depicted as being capable of transforming strangers into friends who are available for social (frequently heteronormative) interaction. (...) Paulos and Goodman’s device, Jabberwocky, detects the people its user encounters in travels throughout the city, lighting up when it detects someone the user has encountered before. While not designed as a friend finder, it nonetheless renders spaces intelligible in terms of occupancy and patterns of hidden and potential familiarity. (...) Mobility: (...) we share urban spaces with people who, due to disability, economic status, immigration status, employment, race, caste, and other reasons, find themselves unable to move about easily or, conversely, have mobility forced upon them. (...) Legibility: (...) cities as informative environments that inhabitants can understand and interpret."

So, once integrated, what does that bring to the table? Williams and Dourish interestingly give three directions:

"see spatial distance, regional familiarity, and personal contact not simply as instrumental aspects of cityscapes to be “overcome” by new technologies, but also as contexts within which new technologies must operate.

Second, we should adopt a broader view of the city’s occupants, their activities, and the conditions in which they conduct those activities (...) While urban computing has focused primarily on the city’s image as a setting and container of action, we argue instead for viewing the city that we experience every day as a product of historically and culturally situated practices and flows."

Why do I blog this? The paper definitely echoes my interest in space as a way to afford social and cognitive interactions. By highlighting the importance of cultural dimensions, the paper is IMHO a pertinent read about how to better think about the city as a complex system in which context matters. Hence the problem of treating urban computing as a generic design problem whose outcomes can be transferred or sold anywhere. This definitely helps in criticizing normative designs such as the so-called intelligent house or the smart public-transport information systems that companies want to throw on the market.

We like complexity?

Speaking with Fabien about my Geoware presentation, one of the issues I raised is that some mobile social software has an intrinsic complexity that makes it unusable. For example, this crazy project by Honda makes me utterly skeptical. I don't know whether it's an East Asian thing, but there seems to be a tendency towards complexity here (and yes, I know Honda is Japanese). This eventually led me to a paper by Don Norman that describes how cluttered Asian interfaces are perceived as powerful applications. Some excerpts:

"I recently toured a department store in South Korea. (...) I found the traditional “white goods” most interesting: Refrigerators and washing machines. The store obviously had the Korean companies LG and Samsung, but also GE, Braun, and Philips. The Korean products seemed more complex than the non-Korean ones, even though the specifications and prices were essentially identical. “Why?” I asked my two guides, both of whom were usability professionals. “Because Koreans like things to look complex,” they responded. It is a symbol: it shows their status.

But while at the store, I marveled at the advance complexities of all appliances, especially ones that once upon a time were quite simple: for example, toasters, refrigerators, and coffee makers, all of which had multiple control dials, multiple LCD displays, and a complexity that defied description."

So what's Norman's lesson?

"Why is this? Why do we deliberately build things that confuse the people who use them? Answer: Because the people want the features. Because simplicity is a myth whose time has past, if it ever existed."

And, as he explains, we do not have to go to Korea or Iran to find this tendency; we can find it everywhere. Why do I blog this? What is interesting is that Norman is a "less is more" person, so he cannot really be challenged on that topic (though some readers took the piss and harshly complained):

"I am not advocating bad design. I am simply pointing out a fact of life: purchasers, on the whole, prefer more powerful devices to less powerful ones. They equate the apparent simplicity of the controls with lack of power: complexity with power."

Space, cognition, interaction 3: person and artifacts relationships

This is the third blogpost of a series about the topic “Space, cognition, interaction” that I address in my dissertation. Step 3 is about the relationship between persons and artifacts (see step 1 and step 2). Another topic the literature about spatiality addresses is the relationship between people and artifacts located in the vicinity of the participants in a social interaction. Indeed, when a speaker talks about an object to his hearer, they are involved in a collaborative process termed referential communication (Krauss and Weinheimer, 1966). As a matter of fact, the practice of pointing, looking, touching or gesturing to indicate a nearby object mentioned in conversation is essential to human conversation; it is called deictic reference. This spatial knowledge can be used for mutual spatial orientation. Schober (1993) points out that it is easier to build mutual orientations toward a physical space (versus a shared conceptual perspective) because the addressee’s point of view is more easily identified in the physical world. There has been very little research focusing on referential communication in virtual space. Computer approaches like “What You See Is What I See” have been designed to support this process, but studies show that such tools are not as powerful as deictic hand gestures (Newlands et al., 2002). The authors found fewer deictic acts in computer-mediated interaction; a possible reason for this is the lack of adequate tools. Researchers have attested, for example, that it is actually more difficult to see where avatars are pointing in a 3D virtual environment compared to the real world (Fraser et al., 2000). Consequently, if we think about the role of mutual location-awareness (MLA), knowing the location of others can allow one to make sense of deictic acts and promote referential communication. By projecting oneself to the known partner’s location, one can infer meaning from the deictic references.

Moreover, how the spatial environment is used in abstract cognition is a fundamental issue addressed in cognitive psychology (Kirsh and Maglio, 1994; Kirsh, 1995). These authors explain to what extent the space between objects and people is used as a resource in problem solving. According to them, actions like pointing, writing things down, manipulating artifacts or arranging the positions and orientations of nearby objects are examples of how people encode the state of a process or simplify perception. Studies in virtual environments have shown similar results concerning the use of tools in space (Biocca et al., 2001). Biocca explores how people organize virtual tools in an augmented environment. Users had to repair a piece of equipment in a virtual environment. The way they used virtual tools showed patterns of simplifying perception and object manipulation (for instance, placing reference material like a clipboard well within the visual field, on their right). MLA should then be seen as another set of resources to augment cognitive processes such as memorization or problem solving.

What is also interesting with regard to human activity is the notion of social navigation (Dourish and Chalmers, 1994), which refers to situations in which a user’s navigation through an information space is guided and structured by the activities of others within that space. Social navigation can be defined as “navigation towards a cluster of people or navigation because other people have looked at something” (Munro et al., 1999, p. 3). This refers to the notion of a “social space” inferred from the traces left in the environment (virtual or physical) by people’s activity. As a matter of fact, we all leave signals in social space that can be decoded by others as traces of a previous use: fingerprints, crowds, footsteps, graffiti, annotations and so on. From these cues, other persons can infer powerful things: others were here, this was popular, here is where I can find something, and so forth. This process takes place in both virtual and physical settings through recommender/voting systems or collaborative filtering. The best-known example of such filtering is Amazon’s recommendation system, which gives us pointers to books that may interest us based on others’ previous purchases.

References:

Biocca, F., Tang, A., Lamas, D., Gregg, J., Gai, P., & Brady, R. (2001): How do users organize virtual tools around their body in immersive virtual and augmented reality environments? Technical Report: Media Interface and Network Design Laboratories, East Lansing, MI.

Dourish, P. & Chalmers, M., (1994): Running Out of Space: Models of Information Navigation. In Proceedings of (HCI'94): Human Computer Interaction, Glasgow. New York: ACM Press.

Fraser, M., Glover, T., Vaghi, I., Benford, S., Greenhalgh, C., Hindmarsh, J., & Heath, C. (2000). Revealing the Realities of Collaborative Virtual Reality. In Proceedings of Collaborative Virtual Environments (CVE 2000), San Francisco, CA. New York: ACM, pp. 29-37.

Kirsh, D., & Maglio, P. (1994). On distinguishing epistemic from pragmatic action. Cognitive Science, 18, 513-549.

Kirsh, D. (1995). The Intelligent Use of Space. Artificial Intelligence, 73(1-2), 31-68.

Krauss, R. M., & Weinheimer, S. (1966). Concurrent feedback, confirmation, and the encoding of referents in verbal communication. Journal of Personality and Social Psychology, 4(3), 343-346.

Munro, A.J., Höök, K., & Benyon, D. (1999). Footprints in the Snow. In A. Munro, K. Höök and D. Benyon (Eds.) Social Navigation of Information Space (pp.1-14). London: Springer.

Newlands, A., Anderson, A., Thomson, A., & Dickson N. (2002). Using Speech Related Gestures to Aid Referential Communication in Face-to-face and Computer-Supported Collaborative Work. In Proceedings of the First congress of the International Society for Gesture Studies, University of Texas at Austin, June 5 - 8, 2002.

Schober, M. F. (1993). Spatial perspective-taking in conversation. Cognition, 47, 1-24.

A dog reaction to an AIBO, an AIBO with fur, a remote-controlled car and a puppy

Kubinyi, E., Miklósi, Á., Kaplan, F., Gácsi, M., Topál, J., & Csányi, V. (2004). Social behaviour of dogs encountering AIBO, an animal-like robot in a neutral and in a feeding situation. Behavioural Processes, 65(3), 231-239. The paper is an intriguing account of applying robotics in animal behavior tests. The goal of these ethologists is to determine whether Sony's dog-like robot can be used to study animal interactions.

"Twenty-four adult and sixteen 4–5 months old pet dogs were tested in two situations where subjects encountered one of four different test-partners: (1) a remote controlled car; (2) an AIBO robot; (3) AIBO with a puppy-scented furry cover; and (4) a 2-month-old puppy. In the neutral situation the dog could interact freely with one of the partners for 1 min in a closed arena in the presence of its owner. In the feeding situation the encounters were started while the dog was eating food."

The results show that age and context influence the social behaviour of dogs. Moreover, the furry AIBO seemed to evoke a higher number of responses in comparison to the car. Other aspects that I found of interest, as described by the authors:

A social partner is not only the carrier of species-specific characters to evoke behaviour on the part of the subject but also actively reacts to the actions of the other. In order to mimic interactive situations, the robot has to be able to detect and react to, at least, some elements of the environment that it shares with the tested animal (...) AIBO did not turn out to be a ‘real’ social partner for the dogs in all respects, but the change of its appearance, the improvement of its movements and speed could make this possible. (...) A further interesting question is whether puppies with experience restricted only to the robot (AIBO “raised” dog-litters) would consider the robot as a social partner.

Why do I blog this? Working on a Near Future Laboratory project with Julian, I am gathering some material about pet-technology interactions. In this article, I was interested less in the idea of using a robot for behavioral tests (ethology is not my concern) than in what this sort of study reveals about interactions between pets and technologies.

Zoolander phone

According to Wikipedia:

Zoolander Phone is a term often used to describe any extremely small and new mobile phone. The term is a reference to the film "Zoolander", in which the title character's (played by Ben Stiller) humorously miniature cell phone is a joke on the continually smaller phones released by phone manufacturers.

(picture taken on eatliver, but this is not the real Zoolander phone, only a look-alike)

Why do I blog this? I was only looking for a picture of an extremely small cell phone to illustrate how tough mobile UIs are.

Prevent people from XXXX

When the affordances of an object prevent people from acting. In this case, this garbage can in Geneva does not allow people to throw away big objects (bombs? private trash?). In a sense, this is about delegating a certain function to non-humans.

Be educated by objects.

Update: look at the two other examples below, both from France: on the left, a simple piece of cardboard; on the right, a translucent trash bag. Both are interesting examples of forcing a transparent behavior. There are also rigid transparent plastic garbage cans (no picture though); all of these appeared in the VIGIPIRATE frenzy of the 90s. This led to lots of curious behavior that I don't have time to address here (fewer trash cans = garbage piling up around the ones people could find...). Not to mention the different ways of covering or closing public trash cans to prevent them from being filled (!) with bombs.

Areas of play

In "The space to play", Matt Jones (Nokia Design Multimedia) interestingly describes his group's work process when exploring the theme of "play". First, it starts with spotting signals that "play" is a driving force ("Through weak signals found by our trends research group we had a hunch that "play" as a force in the world was becoming stronger, so we got the go-ahead for a research and design conception project"). Then, they gathered a multidisciplinary team ("myself, a technical consultant, Janne Jalkanen and a business consultant, Minh Tran. Our ranks were swelled by academics, independent experts, researchers and designers throughout the span of the project.").

The team worked with user experience experts to refine the driving forces behind "play" ("One of the main components was research carried out with behavioral trend experts, Sense Worldwide. In this collaboration we identified areas of the ever-present driver of play in global culture."), which led them to define four relevant areas: 1) the playful engagement people have with technology by hacking/modifying/tinkering with things; 2) the reprogramming of space through technologies (turning a metro station into a gig venue or a railway station into a pillow-fight arena); 3) carefully-designed spaces that engage people in new experiences (serendipitous meetings); 4) re-imagining the urban experience.

The next step was to work on how this can fuel interaction design and mobile application development/mobile device design ("What would it mean to create truly playful space in our systems, services and devices?"). Matt, for that matter, describes how they wanted to go beyond user-centered design by taking into account concepts such as Csikszentmihalyi's "flow" or examples like Parkour and Elektroplankton.

Why do I blog this? Though very classic, it's interesting to see how this work process is described and implemented. My only concern is that I would be happy to know more about how this is turned into applications and products ;) But this is relevant:

“What does this have to do with interaction design or mobile devices? Well, as I’ve said, in play we explore, try new things and push our limits more than in any other state. The practice of experience design often tries to prescribe set paths for the end-user of the device, rather than allow the frustrations of a free exploration of the system. What would it mean to create truly playful space in our systems, services and devices? To create digital weather projects, not just thrilling but constrained slides?”

Early instances of 1st/2nd life connections and intersections

Like V-migo, Teku Teku Angel allows users to take advantage of their movements in the physical environment to grow a virtual pet (in the form of an angel) on the Nintendo DS. It's a pedometer that measures daily steps and turns them into a means of making a heavenly creature evolve.
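As a toy illustration of this mechanic (the actual Teku Teku Angel rules are unknown to me; the stage names and step thresholds below are invented), a step-to-growth mapping might look like:

```python
# Hypothetical sketch: turning accumulated step counts into a virtual
# pet's growth stage, in the spirit of Teku Teku Angel. Thresholds and
# stage names are invented for illustration only.

STAGES = [
    (0, "egg"),
    (10_000, "hatchling"),
    (50_000, "cherub"),
    (150_000, "angel"),
]

def stage_for_steps(total_steps: int) -> str:
    """Return the highest stage whose step threshold has been reached."""
    current = STAGES[0][1]
    for threshold, name in STAGES:
        if total_steps >= threshold:
            current = name
    return current

# A week of walking accumulates toward the next stage:
daily_steps = [4200, 8100, 6500, 9300, 7800, 12000, 5600]
total = sum(daily_steps)       # 53500
print(stage_for_steps(total))  # "cherub"
```

The interesting design question is precisely the one the post raises: the mapping here only consumes physical activity, whereas Otoizm only consumes digital activity.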

As described on Gizmodo, Otoizm is an impressive product presented at the Toy Forum 2006 in Tokyo. Basically, connected to a music player, this device embeds a tamagotchi-like character that grows according to the genre of music you listen to, memorizes phrases and composes tunes. It also has multi-user capabilities: when connected, the characters dance with each other.

So why is this important? Simply, it's another kind of application that engages users in a first life/second life experience. Unlike Teku Teku Angel, which is based on a physical experience, Otoizm rather works at the intersection of digital experiences: the intricate relationship between the virtual pet and the music, in the form of digital bits. What is missing is an application that would take advantage of both.

Hasbro and innovation

An article in yesterday's edition of the WSJ covers game company Hasbro and their innovation practices (by Carol Hymowitz). Some excerpts:

To spur innovation, Hasbro managers keep in touch with a global network of game inventors, do online surveys of customers and observe thousands of children and adults playing games developed in a new lab called GameWorks at the division's headquarters. They also talk with prospective customers about their lives and how they want to spend leisure time (...) "People don't have time to play a game for three hours, so we're asking ourselves how we can leverage brands so they can be played in smaller time frames," says Jill Hambley, a vice president of marketing. (...) Hasbro is also gunning for technology-savvy customers. Sales of videogames outpace board games by more than six to one, so Hasbro makes versions of its board games that can be played on laptops, cellphones or in video format.

Why do I blog this? No big breakthrough here, but it's interesting to understand how they work and innovate, and of course the results are not surprising: small time chunks devoted to gaming (same as the video game industry), use of tech to create new experiences.

Space, cognition, interaction 2: Person to person relationship in space

This is the second blogpost of a series about the topic “Space, cognition, interaction” that I address in my dissertation. Step 2 is about the person-to-person relationship in space (see step 1). A large amount of research about how spatiality shapes one’s behavior has focused on co-present settings, since this is the most recurrent situation of our lives. The best-known example of how space structures social interaction is proxemics: the distance between people is indeed a marker that expresses the kind of interaction that occurs, and reveals the social relationships between the interactants (Hall, 1966). Depending on the distance, Hall proposed four kinds of spheres (intimate, personal, social and public) that each afford different types of interactions. His point was also to show how these interactions are culturally dependent and how distance constrains the types of interactions that are likely to occur. The perception of the “others” in space thus communicates, to participants as well as to observers, the nature of the relationships between the interactants and their activity. Studies of 3D worlds show that proxemics are maintained in virtual environments (Jeffrey and Mark, 1998; Krikorian et al., 2000). These authors found that, even in virtual worlds, a certain social distance is kept between participants’ avatars. They noticed how spatial invasions produced anxiety-arousing behavior (like verbal responses, discomfort and overt signs of stress) with attempts to re-establish a preferred physical distance similar to the distance observed in the physical world.

(Picture courtesy of the Library of Congress, Prints and Photographs Division, FSA-OWI Collection, taken as an example of how people maintain differing degrees of distance depending on the social setting and their cultural backgrounds)
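As a side note, Hall's four spheres can be sketched as a simple distance classifier. The boundary values below are the approximate distances Hall reports for North Americans in The Hidden Dimension; as the paragraph above stresses, the zones are culturally dependent, so treat these numbers as one example rather than a universal rule:

```python
# Hall's four proxemic zones as a distance classifier (metres).
# Boundaries are approximate North American values from Hall (1966);
# other cultures draw these lines differently.

def proxemic_zone(distance_m: float) -> str:
    if distance_m < 0.46:    # up to ~18 inches
        return "intimate"
    elif distance_m < 1.2:   # ~18 inches to ~4 feet
        return "personal"
    elif distance_m < 3.7:   # ~4 feet to ~12 feet
        return "social"
    else:                    # beyond ~12 feet
        return "public"

print(proxemic_zone(0.3))  # "intimate"
print(proxemic_zone(2.0))  # "social"
```

This is essentially what the virtual-world studies cited above measure: whether avatars keep their pairwise distances out of the "intimate" band, and how users react when that band is invaded.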

Proximity has also been shown to improve various processes like conversation initiation. Communication is easier in physical settings than in mediated contexts. The physical environment increases the frequency of meetings, the likelihood of chance encounters and therefore community membership and group awareness, thanks to informal conversations triggered by repeated encounters (Kraut et al., 2002). Furthermore, the distance between people has an important influence on friendship formation, persuasion and perceived expertise (Latané, 1981). Latané shows that people are more likely to deceive, be less persuaded by, and initially cooperate less with someone they believe to be distant.

References:

Hall, E.T. (1966). The Hidden Dimension: Man’s Use of Space in Public and Private. Garden City, N.Y.: Doubleday.

Jeffrey, P., & Mark, G. (1998). Constructing Social Spaces in Virtual Environments: A Study of Navigation and Interaction. In K. Höök, A. Munro, D. Benyon (Eds.) Personalized and Social Navigation in Information Space, March 16-17, 1998, Stockholm (SICS Technical Report T98:02). Stockholm: Swedish Institute of Computer Science (SICS), pp. 24-38.

Krikorian, D.H., Lee, J.S., Chock, T.M., & Harms, C. (2000). Isn't That Spatial? Distance and Communication in a 2-D Virtual Environment. Journal of Computer-Mediated Communication, 5(4).

Kraut, R. E., Fussell, S. R., Brennan, S. E., & Siegel, J. (2002). Understanding effects of proximity on collaboration: Implications for technologies to support remote collaborative work. In P. Hinds & S. Kiesler (Eds.) Distributed Work (pp.137-162), Cambridge: MA: MIT Press.

Latané, B. (1981). The psychology of social impact. American Psychologist, 36(4), 343-356.

Flavonoid

Speaking about 1st Life and 2nd Life connections, the Flavonoid project by Near-Future Laboratory colleague Julian Bleecker is of great interest. To put it shortly, it's a mechanism for translating embodied, kinesthetic activity into 2nd Life actions.

A homebrew, Internet-enabled kinesthetic sensor, conceptually similar to a traditional pedometer, is being designed as a networked object that bridges the geophysical worlds (1st Life) and online digitally networked worlds (2nd Life). By providing data feeds about the kinesthetic activities of the person wearing Flavonoid, various embodiments representing that data can be created in 2nd Life, such as the appearance of online avatars, or that avatar’s wealth or capabilities.

So how does it work?

The Flavonoid Kinesthometer, a wearable networkable device, can transfer data as a networked object, providing simple data feeds of one’s movement over long periods of time. This data provides a channel of RSS information used as a baseline of information that can be translated to 2nd Life representations. (...) Flavonoid is envisioned as a platform, using standard, open feed technologies, for a variety of embodiments. The initial embodiment being a dynamic site “badge” — a small snippet of HTML that can be embedded on virtually any site, such as one’s blog or social networking home page.

The Flavonoid project proposal gives a more thorough description of what is aimed at here.
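The proposal does not spell out the feed format, but the pipeline it describes (kinesthetic readings published as a data feed, then translated into 2nd Life representations) could be sketched roughly as follows. To be clear, the entry fields, the scaling factor and the "vitality" attribute are all my invention, not Flavonoid's actual design:

```python
# Rough, hypothetical sketch of the Flavonoid pipeline: a feed of
# movement readings from a wearable sensor is translated into an
# avatar attribute. The feed structure and the mapping are invented
# for illustration; Flavonoid's real formats are not described here.

from dataclasses import dataclass

@dataclass
class MovementEntry:
    timestamp: str       # e.g. an ISO 8601 date, as a feed item would carry
    activity_count: int  # raw kinesthetic sensor ticks for that period

def avatar_vitality(feed: list, cap: int = 100) -> int:
    """Translate a day's worth of feed entries into a 0-100 'vitality' score."""
    total = sum(e.activity_count for e in feed)
    return min(cap, total // 50)  # arbitrary scaling factor

# One day of (invented) feed entries:
feed = [
    MovementEntry("2007-02-05T09:00:00Z", 1200),
    MovementEntry("2007-02-05T13:00:00Z", 800),
    MovementEntry("2007-02-05T18:00:00Z", 1500),
]
print(avatar_vitality(feed))  # 70
```

The same score could just as well drive the dynamic site "badge" mentioned in the quote: the badge is simply another renderer consuming the same feed.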

Why do I blog this? Because this project takes the "Internet of Things" in a more interesting direction than what we've seen so far. By creating a framework for linking digital environments and the material world ("the leakage of digital networks into the physical world turns that world into a framework for a hybrid 1st Life/2nd Life"), it redefines the notion of embodiment in both environments.

This is an issue that interests me both for thinking about the future of ubiquitous applications and as a user experience researcher. From a psychological point of view, there are intriguing questions to address here, especially regarding the overlap of spatial environments, their perception, and how interaction in each of them influences the others.

Building a discourse about design and foresight

Currently completing my PhD (the thesis defense is next week) has given me the occasion to look back and think about what interests me. My original background is cognitive science (with a strong emphasis on psychology, psycholinguistics and what the French call ergonomie) and the PhD will be in computer science/human-computer interaction. In most of my work, I have been confronted with multidisciplinarity/interdisciplinarity (even in my undergraduate studies). It took me a while to understand that my interest lay less in pure cognitive science research (for example, the investigation of processes such as intersubjectivity and its relation to technologies) than in the potential effects of technologies on human behavior and cognitive processes. In a sense this is a more applied goal, and it led me to take into account diverse theories and methods. Of course, this is challenging, since mixing oil and water is often troublesome in academia. Given that my research object is embedded in space (technology goes out of the box with ubicomp) and social (technology is deployed in multi-user applications), there was indeed a need to expand beyond pure cogsci methods and include methods and theories from other disciplines. The most important issues regarding my work in that respect were the never-ending qualitative versus quantitative confrontation (I stand in between, using a combination of both depending on the purpose) AND the situated versus mentalist approach (to put it shortly: is cognition about representations in the mind, or is it situated in context?). So, this was a kind of struggle in my PhD research.

However, things do not end here. Working in parallel with my PhD as a consultant/user experience researcher for some companies (IT, video games), I had to keep up with demands/expectations that are often much more applied... and bound to how this research would affect NPD/design or foresight (the sort of projects I work on). Hence, there was a need to have a discourse about these two issues: design and foresight. Although I was interested in both, it was not that easy to understand how research results/methods can be turned into material for designers or foresight scenarios. So, three years of talking with designers and developers and organizing design/foresight workshops and conferences helped a bit, but I am still not clear about it (I mean, I don't even know how to draw something on paper).

Recently, I tried to clear up my mind about this, and the crux issue here is the constant shifting between research and design (or foresight; sorry for putting both in the same bag, but it applies to both). The balance between research, which can be reductionist (a very focused problem studied, limits to generalization, time-consuming), and design, which needs a global perspective, is fundamental. The other day, I had a fruitful discussion with a friend working on consumer insight projects for a big company. Since this friend also comes from a cognitive science background, I was interested in his thoughts on how he shifted from psychology to innovation management/design of near-future products/strategy.

I asked him about "turning points", moments that changed his perspective. He mentioned two highlights. The first one was the paradigm shift in cognitive science in the late 80s, when the notion of distributed cognition (Dcog) appeared. Dcog basically posited that cognition is a systemic phenomenon that concerns individuals, objects and the environment, and not only the individual's brain with its mental representations. To him, this is an important shift because once we accept the idea that cognition/problem solving/decisions are not an individual process, it's easier to bring social, cultural and organizational issues to the table.

The second highlight he described came from his time working for a user experience company that conducted international studies: he figured out that the added value lay not only in those studies themselves but also in the cumulative knowledge that could be drawn out of them: the trends that emerged, the intrinsic motivations people had for using certain technologies, the moments innovations appeared. This helped him change the way he apprehended the evolution of innovations and made him question whether they really follow long S-curves.

material to design the future

Why do I blog this? random thoughts on a rainy Sunday afternoon about what I am doing. This is not very structured, but I am still trying to organize my thoughts about UX/design/foresight and how I handle them. I guess this is a complex problem that can be addressed by talking with people working on design/foresight/innovation. What impresses me is observing how individuals' histories help to understand how the elements they encountered shaped their perspectives.

The picture simply exemplifies the idea that conducting design/foresight projects requires a constant change of focus between micro and macro perspectives. This reflects the sort of concern I am interested in: taking into account both very focused perspectives (user interface, user experience, cognitive processes) and broader issues (socio-cultural elements, organizational constraints...).

Share your life

I already blogged about onlife, the program now called Slife that tracks and helps you visualize traces of your interactions with Mac applications. There is now a "social component" called Slifeshare:

A Slifeshare is an online space where you share your digital life activities such as browsing the web and listening to music with your friends, family or anyone you care about. It is a whole new way of staying in touch, finding out which sites, videos and music are popular with your friends, meeting new people and discovering great new stuff online. Take it for a spin, it's free, easy to set-up and quite fun.

The "how page" is quite complete and might scare to death anyone puzzled by how technologies lead us toward a transparent society (à la Rousseau). Look at the webpage that is created with the Slife information:

Why do I blog this? Slife was already an interesting application in terms of how the history of interaction is shown to the user. This social feature adds another component: using Jyri's terminology (watch his video, great insights), it takes people's interaction with various applications as a "social object". This means that the designers assume that sociability will grow out of the interaction patterns (in a similar way to how the sociability of Flickr is based on sharing pictures).

Boundary Functions

Boundary Functions is a project by Scott Sona Snibbe:

"If you participate in this work, you will see a line as a boundary between you and others, which is usually supposed to be invisible, to identify your territory. The boundary changes according to the position of each individual on the floor, but the rule is that the person at the center must always be the closest to the boundary. This line-producing program relates to the "Voronoi Diagram" and "Dirichlet Boundary Conditions", which are used to analyze natural phenomena with mathematical rules: patterns of ethnic settlement, animal dominance, or plant competition in anthropology or geography, the arrangement of atoms in a crystal structure in chemistry, the influence of gravity on stars or star clusters in astronomy, and so on. The boundary that surrounds participants does not exist on their own but changes in a subtle way like conflicts between the individual and society."
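The Voronoi rule the quote refers to is easy to sketch: each participant "owns" the region of the floor closer to them than to anyone else, and the projected lines are the points where two nearest-neighbor distances tie. A minimal nearest-seed classifier (my own illustration, not Snibbe's actual code):

```python
import math

def nearest_participant(point, participants):
    """Return the index of the participant whose Voronoi cell contains `point`.

    A point belongs to the cell of its nearest seed; the boundary drawn on
    the floor is the set of points equidistant from two nearest seeds.
    """
    return min(
        range(len(participants)),
        key=lambda i: math.dist(point, participants[i]),
    )

# Three people standing on the floor (x, y positions in meters)
people = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

print(nearest_participant((0.5, 0.2), people))  # 0: closest to the first person
print(nearest_participant((3.8, 0.1), people))  # 1: closest to the second person
```

As people move, the seeds change and the cells are recomputed on every frame, which is why the boundary "changes according to the position of each individual on the floor".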

Why do I blog this? I think it's a nice project that exemplifies the spatial aspects of interactive technology.

Space, cognition, interaction 1: space/place

This is the first blogpost of a series concerning my thoughts on the topic "Space, cognition, interaction" that I address in my dissertation. This issue has been tackled by various disciplines ranging from environmental psychology to sociology, architecture and, when technology is involved, human-computer interaction. This blogpost series summarizes some important notions and results arising from these fields. In each post I try to describe why this matters for the object of my research: the location awareness of others.

Step 1 is about the differentiation between "space" and "place". A recurrent discussion concerning spatiality targets the differences between the concepts of “space” and “place”. Harrison and Dourish (1996) indeed advocated talking about place rather than space. They claim that even though we are located in space, people act in places. This difference opposes space, defined as a range of x and y coordinates or latitude/longitude, to the naming of places such as “home” or “café”. By building up a history of experiences, space becomes a “place” with a significance and utility; a place affords a certain type of activity because it provides the cues that frame participants’ behavior. For instance, a virtual room labeled as “bar” or “office” will trigger different interactions. In a sense, it is the group’s understanding of how the space should be used that transforms it into a place. Space is turned into place by including the social meanings of action, the cultural norms, and the group’s cultural understanding of the objects and the participants located in a given space. However, as Dourish recently claimed, this distinction is currently of particular interest since technologies pervade the spatial environment (Dourish, 2006). This inevitably leads to the intersection of multiple spatialities, or the overlay of different “virtual places” in one space.
Thus, location awareness of others also relates to how people make sense of a specific location: depending on the way others' locations are described, different inferences can be drawn. For example, knowing that a friend is at the “library” (place) frames the possible inferences about what the friend might be doing there.

Additionally, partitioning activities is another social function supported by spatiality (Harrison and Dourish, 1996). For example, in a hospital, corridors are meant to be walked in to go to waiting rooms, where people wait before meeting doctors, who operate in operating rooms. Research concerning virtual places also claims that a virtual room can define a particular domain of interaction (Benford et al., 1993). Chat rooms, for example, are used to support different tasks in collaborative learning: a room for teleconferences and a room for class meetings (Haynes, 1998). Different tasks correspond to virtual locations: a room for meetings related to a project, office rooms for brainstorming, public spaces for shopping, and so on. Fitzpatrick et al. (1996) found that structuring the workspace into different areas makes it possible to switch between tasks, augments group awareness and provides a sense of place to the users, as in the physical world. Since work partitioning can be supported by space, knowing others’ whereabouts is an efficient way to make inferences about the division of labor in a group. Once we know that a person is in a particular place, we can infer that he or she is doing something (as we saw in the space/place distinction) and how this may contribute to the joint activity.

References: Benford, S.D., Bullock, A.N., Cook, N.L., Harvey, P., Ingram, R.J., & Lee, O. (1993). From Rooms to Cyberspace: Models of Interaction in Large Virtual Computer Spaces. Interacting With Computers, 5(2), 217-237.

Dourish, P. (2006). Re-Space-ing Place: Place and Space Ten Years On. In Proceedings of CSCW’2006: ACM Conference on Computer-Supported Cooperative Work (pp.299-308), Banff, Alberta.

Fitzpatrick, G., Kaplan, S. M., & Mansfield, T. (1996). Physical Spaces, Virtual Places and Social Worlds: A Study of Work in the Virtual. In Q. Jones, and C. Halverson (Eds.), Proceedings of CSCW'96: ACM Conference on Computer Supported Cooperative Work (pp. 334-343), Boston, MA.

Harrison, S., & Dourish, P. (1996). Re-Place-ing Space: The Roles of Place and Space in Collaborative Systems. In Q. Jones, and C. Halverson (Eds.), Proceedings of CSCW'96: ACM Conference on Computer Supported Cooperative Work (pp. 67-76), Cambridge, MA: ACM Press.

Haynes, C. (1998). Help! There’s a MOO in This Class. In C. Haynes, and J.R. Holmevik (Eds.), High Wired: On the Design, Use, and Theory of Educational MOOs (pp. 161-176). Ann Arbor: The University of Michigan Press.

The uselessness principle

"Free creatures: The role of uselessness in the design of artificial pets" by Frédéric Kaplan is a very relevant short paper, which postulates that the success of existing artificial pets relies on the fact that they are useless.

Frédéric starts by explaining that the difference between an artificial pet and a robotic application is that nobody takes it seriously when an AIBO falls; it's rather entertaining.

Paradoxically, these creatures are not designed to respect Asimov’s second law of robotics : ‘A robot must obey a human beings’ orders’. They are designed to have autonomous goals, to simulate autonomous feelings. (...) One way of showing that the pet is a free creature is to allow it to refuse the order of its owner. In our daily use of language, we tend to attribute intentions to devices that are not doing their job well.

What is very interesting in the paper is that the author states that giving the robot this apparent autonomy is a necessary (but not sufficient) feature for the development of a relationship with its owner(s).

Then comes the uselessness principle:

The creature should always act as if driven by its own goals. However, an additionnal dynamics should ensure that the behavior of the pet is interesting for its owner. It is not because an artificial creature does not perform a useful task that it can not be evaluated. Evaluation should be done on the basis of the subjective interest of the users with the pet. This can be measured in a very precise way using the time that the user is actually spending with the pet. (...) be designed as free ‘not functional’ creatures.
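The evaluation criterion Kaplan proposes, the time the user actually spends with the pet, reduces to simple arithmetic over interaction logs. A toy sketch (the session format and function name are my own illustration, not from the paper):

```python
from datetime import datetime

# Hypothetical interaction log: (session_start, session_end) pairs
sessions = [
    (datetime(2007, 1, 6, 9, 0), datetime(2007, 1, 6, 9, 25)),
    (datetime(2007, 1, 6, 18, 10), datetime(2007, 1, 6, 18, 40)),
]

def total_engagement_minutes(sessions):
    """Sum the time the owner actually spent interacting with the pet."""
    return sum((end - start).total_seconds() / 60 for start, end in sessions)

print(total_engagement_minutes(sessions))  # 55.0 minutes across two sessions
```

The point is that a "useless" creature can still be evaluated rigorously: not by task performance, but by how long people keep choosing to interact with it.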

Why do I blog this? First, because I am more and more digging into human-robot interaction research, since I sense an interesting convergence between robotics and pervasive computing (one that may eventually lead to a new category of objects à la Nabaztag). Second, because I am cobbling together some notes for different projects for the Near Future Laboratory (pets, geoware).