Filtering by Category: Research

About technology non-use

An inspiring article in ACM interactions called "On the Importance and Implications of Studying Technology Non-use" by Eric Baumer, Jenna Burrell, Morgan Ames, Jed Brubaker and Paul Dourish. It revolves around the idea that "the dominant discourse in HCI still focuses primarily on technology users" and that "non-use and other forms of technological relationships" are both common and relevant to analyze. The topic that caught my attention is the typology of non-use:

"Non-use could be understood as the absence of action and, as such, may not be amenable to study through methods traditionally used to study participants’ actions. [...] In contrast, Jonathan Lukens’s study of visual artists who avoid using tools such as Photoshop for specific portions of their work demonstrates how non-use can require as much, if not more, conscious, deliberate, effortful action as technology use does. In this way, while non-use is often understood as the absence of a phenomenon or practice, something else likely exists in place of use, and it is that something we should be studying. [...] Lindsay Ems’s research highlights that even individuals or groups famous for non-use, such as the Amish, do not avoid information and communication technologies entirely, but rather selectively take them up, mediated by cultural norms and religious values. [...] non-use could be understood not as an identity, where a given individual is either a user or a non-user, but rather as a continually negotiated practice. For example, Alex Leavitt’s work studying situational non-use of Google Glass points to the moment-to-moment negotiations, often around privacy, between the Glass wearer and others about when and how the technology should (and should not) be used."

See also the position papers from the workshop that led to this paper.

Why do I blog this? Because this kind of blind spot might be interesting to focus on in a design ethnography class.

Using historical research in HCI/Ubicomp

"Historical Analysis: Using the Past to Design the Future" by Wyche, Sengers and Grinter is an article about how the discipline of history can contribute to research on human-computer interaction and ubiquitous computing. The authors take the example of a specific context, domestic environments, to show that history can go beyond inspiring "new form factors and styles such as retro" by providing "strategies that, like anthropology, unpack the culture of the home and, like art-inspired design, defamiliarize the home".

The process is described as follows:

"First, we analysed historical texts to identify major themes in the development of technologies (often automation) for the activities under investigation, in our case housework. Second, we gained a broader understanding of the existing technological design space through the search of patents. Third, we developed a personal sense of the changing nature of housework through examination of primary sources from popular culture. Finally, as part of broader fieldwork we gathered oral histories from older people, using a designed, material artefact that reflected the popular history of housework to stimulate memories and reflections."

And here's how they saw a contribution:

"It was effective in helping us understand the subtle changes that have resulted with the introduction of new domestic technologies and in opening new space for design. Although the historical texts already revealed themes pertinent to ubicomp design (i.e. labor-saving debate and technology’s gendered character), by drawing on popular texts, patents, and interviews with elders as well, we learned things that could not easily be gleaned from texts alone. (...) With current interest in restoring felt experience as central to design, we believe that historical analysis is an important source for becoming aware of sensual aspects of experiences that have become lost but could be addressed in new forms of technology design. (...) In addition to revealing how felt qualities are altered with the introduction of new technologies, another benefit of our historically grounded approach is its potential to inspire radically novel design concepts. A collection of speculative design proposals resulted from our process [see 30 and 36 for details]. Like ethnography, history forces designers to become more aware of their preconceptions about a topic."

Why do I blog this? Working on the history of game controllers, I am currently putting together a list of references about the role of history and historically informed approaches in (interaction) design research. This paper gives some interesting pointers on the topic.

"Journals of Negative Results"

A recent trend in academic science consists of publishing "negative results". This is based on the idea that scientific articles published in traditional journals frequently provide insufficient evidence regarding negative data. More specifically, the point is to give a voice to negative results, experimental failures or results with low statistical significance. Some examples: Journal of Interesting Negative Results in Natural Language Processing and Machine Learning. As described on its website:

"The journal will bring to the fore research in Natural Language Processing and Machine Learning that uncovers interesting negative results. (...) Insofar as both our research areas focus on theories "proven" via empirical methods, we are sure to encounter ideas that fail at the experimental stage for unexpected, and often interesting, reasons. Much can be learned by analysing why some ideas, while intuitive and plausible, do not work. The importance of counter-examples for disproving conjectures is already well known. Negative results may point to interesting and important open problems. Knowing directions that lead to dead-ends in research can help others avoid replicating paths that take them nowhere. This might accelerate progress or even break through walls!"

Journal of Negative Results in Biomedicine. As described on its website:

"Journal of Negative Results in BioMedicine is an open access, peer-reviewed, online journal that promotes a discussion of unexpected, controversial, provocative and/or negative results in the context of current tenets.

The journal invites scientists and physicians to submit work that illustrates how commonly used methods and techniques are unsuitable for studying a particular phenomenon. Journal of Negative Results in BioMedicine strongly promotes and invites the publication of clinical trials that fall short of demonstrating an improvement over current treatments. The aim of the journal is to provide scientists and physicians with responsible and balanced information in order to improve experimental designs and clinical decisions."

Journal of Pharmaceutical Negative Results. As described on its website:

"Journal of Pharmaceutical Negative Results is a peer reviewed journal developed to publish original, innovative and novel research articles resulting in negative results. This peer-reviewed scientific journal publishes theoretical and empirical papers that reports the negative findings and research failures in pharmaceutical field."

Journal of Negative Results in Ecology and Evolutionary Biology. As described on its website:

"The primary intention of Journal of Negative Results is to provide an online-medium for the publication of peer-reviewed, sound scientific work in ecology and evolutionary biology that may otherwise remain unknown. In recent years, the trend has been to publish only studies with 'significant' results and to ignore studies that seem uneventful. This may lead to a biased, perhaps untrue, representation of what exists in nature. By counter-balancing such selective reporting, JNR aims to expand the capacity for formulating generalizations. The work to be published in JNR will include studies that 1) test novel or established hypotheses/theories that yield negative or dissenting results, or 2) replicate work published previously (in either cognate or different systems). Short notes on studies in which the data are biologically interesting but lack statistical power are also welcome."

Why do I blog this? Writing the conclusion of my book about technological failures led me to discuss the importance of documentation. I highlighted (in bold) the variety of purposes, which sometimes differ from one journal to another.

Besides, the titles of the papers are utterly fascinating. See for yourself: "Failure of calcium gluconate internal gelation for prolonging drug release from alginate-chitosan-based ocular insert of atenolol", "Influence of some hydrophilic polymers on dissolution characteristics of furosemide through solid dispersion: An unsatisfied attempt for immediate release formulation", "Some commonly observed statistical errors in clinical trials published in Indian Medical Journals".

Users' involvement in location obfuscation with LBS

Exploring End User Preferences for Location Obfuscation, Location-Based Services, and the Value of Location is an interesting paper written by Bernheim Brush, John Krumm, and James Scott from Microsoft Research. The paper presents the results of a field study about people’s concerns about the collection and sharing of long-term location traces. To do so, they interviewed 32 people from 12 households as part of a 2-month GPS logging study. The researchers also investigated how the same people reacted to location "obfuscation methods":

  • "Deleting: Delete data near your home(s): Using a non-regular polygon, delete all data within a certain distance of your home and other specific locations you select. This would help prevent someone from discovering where you live.
  • Randomizing: Randomly move each of your GPS points by a limited amount. The conditions below ask about progressively more randomization. This would make it harder for someone else to determine your exact location.
  • Discretizing: Instead of giving your exact location, give only a square that contains your location. Your exact location could not be determined, only that you were somewhere in the square. This would make it difficult for someone to determine your exact location.
  • Subsampling: Delete some of your data so there is gap in time between the points. Anyone who can see your data would only know about your location at certain times.
  • Mixing: Instead of giving your exact location, give an area that includes the locations of other people. This means your location would be confused with some number of other people."
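As an aside, the first four strategies are simple enough to sketch in code. Here is a minimal illustration in Python (hypothetical function names, not from the paper; the actual study used a non-regular polygon around the home, whereas the sketch below uses a crude circle, and Mixing is omitted since it requires other users' traces):

```python
import math
import random

# Each GPS point is a (lat, lon, timestamp_in_seconds) tuple.

def randomize(points, max_offset_deg):
    """Randomizing: move each point by a random offset of up to max_offset_deg."""
    out = []
    for lat, lon, t in points:
        angle = random.uniform(0, 2 * math.pi)
        r = random.uniform(0, max_offset_deg)
        out.append((lat + r * math.sin(angle), lon + r * math.cos(angle), t))
    return out

def discretize(points, cell_deg):
    """Discretizing: report only the square cell containing each point."""
    return [(math.floor(lat / cell_deg) * cell_deg,
             math.floor(lon / cell_deg) * cell_deg, t)
            for lat, lon, t in points]

def subsample(points, min_gap_s):
    """Subsampling: keep a point only if min_gap_s has elapsed since the last kept one."""
    out, last_t = [], None
    for lat, lon, t in sorted(points, key=lambda p: p[2]):
        if last_t is None or t - last_t >= min_gap_s:
            out.append((lat, lon, t))
            last_t = t
    return out

def delete_near(points, centers, radius_deg):
    """Deleting: drop points within radius_deg of any sensitive location."""
    def close(lat, lon):
        return any(math.hypot(lat - clat, lon - clon) < radius_deg
                   for clat, clon in centers)
    return [(lat, lon, t) for lat, lon, t in points if not close(lat, lon)]
```

Each function trades off privacy against the utility of the trace in a different way, which is precisely what the study asked participants to reason about.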

Results indicate that:

"Participants preferred different location obfuscation strategies: Mixing data to provide k-anonymity (15/32), Deleting data near the home (8/32), and Randomizing (7/32). However, their explanations of their choices were consistent with their personal privacy concerns (protecting their home location, obscuring their identity, and not having their precise location/schedule known). When deciding with whom to share with, many participants (20/32) always shared with the same recipient (e.g. public anonymous or academic/corporate) if they shared at all. However, participants showed a lack of awareness of the privacy interrelationships in their location traces, often differing within a household as to whether to share and at what level."

Why do I blog this? Gathering material about location-based services, digital traces and privacy for a potential research project proposal. What is interesting in this study is simply that the findings show that end-user involvement in the obfuscation of their own location data can be a promising avenue. From a research point of view, it would be interesting to investigate and design various sorts of interfaces to allow this to happen in original/relevant/curious ways.

"Ethno-mining": combining qualitative and quantitative data in user research

Jan Blom told me yesterday about this approach called "ethno-mining", a mixed-methods approach drawing on techniques from ethnography and data mining. It comes from Intel and you can find a description of it called "Ethno-Mining: Integrating Numbers and Words from the Ground Up" by R. Aipperspach, T. Rattenbury, A. Woodruff, K. Anderson, J. Canny, P. Aoki. The idea is to benefit from the integration of results coming from both ethnographic and data mining processes to interpret data, inspire design [23] or facilitate finding patterns in social behavior. Some excerpts I found relevant in this paper:

"in practice, either qualitative or quantitative analysis is typically used in service of the other. (...) However, ethno-mining is unique in its integration of ethnographic and data mining techniques. This integration is carried out in iterative loops between the formation of interpretations of the data and the development of processes for validating those interpretations. (...) There are two key characteristics of the iterative loops in ethno-mining. First, they can be separated into three categories based on the amount of a priori knowledge used to find and validate interpretations of the data. Second, the results of the iterative loops are frequently, although not exclusively, represented in visualizations. Visualizations have two basic affordances: they can represent both quantitative and qualitative analyses, and they exploit the visual system to support more comprehensive data analysis, particularly pattern finding and outlier detection. (...) our method seeks to expose and explicitly address the selection biases in both qualitative and quantitative research methods by checking them against one another. Ethno-mining extends its scrutiny of these biases beyond simply comparing the biases embedded in standard qualitative and quantitative techniques. It does so by tightly integrating the techniques in loops, generating mutually informed analysis techniques with complementary sets of biases."

Why do I blog this? A great article that covers methodological aspects we discussed internally. The combination of quantitative and qualitative techniques to collect data (and make sense of it) is definitely something that we try to apply (both in Fabien's PhD research and mine). The paper offers a relevant framework and a discussion of cases.

Making intentions explicit through social media

An interesting diagram by John Battelle found in the June issue of Wired UK (and pointed out to me by Rémy).

Called "How the check-in extends the great database of human intentions", this categorization describes how user intents are made explicit by various platforms. It shows how social media make them explicit.

Besides, the "check-in" item corresponds to what I described in my PhD dissertation as "self-declaration of location".

Why do I blog this? This is connected to issues I tackled in my research. The difficult thing would be to add a fourth line describing the meaning of each of these signals: what inferences can be drawn from my whereabouts or my interests. This is of course addressed in human-computer interaction, and the combination of each column can also be important.

EPFL IC research day about invisible computing

[Local news] The School of Computer and Communication Sciences at EPFL (my alma mater) has a research day on June 4 about Invisible Computing: Novel Interfaces to Digital Environments, organized by two ex-bosses (Pierre Dillenbourg and Jeffrey Huang). There will be three interesting talks on this topic:

"Programmable Reality by Ivan Poupyrev (Sony CSL, interaction Lab, Tokyo): What would happen when we will be able to computationally control physical matter?

The Myth of Touch by Chia Shen (Harvard University): Are multi-touch tabletop interfaces as intuitive as they seem in YouTube demonstrations? Do we perceive better by touching? How do users really perform on a touch surface?

Labours of Love in the Digital Home of the Future by Richard Harper (Microsoft Research, Cambridge, UK): What will homes of the future be? Will they offer all sorts of automation that will let the occupants be lazy and indolent? Will this make for contentment? We think that automation will have a place in the home of the future, but our concern is not with providing the individual with machines that take over their every labour: we think contentment at home will also be delivered through allowing people to invest in labours of love. These can take many forms and can be supported in various new ways."

Why do I blog this? I am especially interested in the last speaker as I follow his work about ethnography and design, as well as his interest in automation. On a different note, it's intriguing to see this faculty finally having an event about human-computer interaction.

Awareness, visibility and Twitter

Looking at the webosphere, it's funny how certain topics appear to be new and shiny when they have existed for quite a while. The notion of "ambient awareness" is one of these terms that you hear here and there as if it were brand new. It generally refers to the possibility of staying tuned to what your contacts/social network are doing, will do or think. A social radar of some sort, enabled by microblogging platforms such as Twitter or Jaiku (or Facebook status). What's intriguing is that the whole discourse about these services neglects the large array of research about "awareness". In the last twenty years, authors such as Paul Dourish, Saul Greenberg or Thomas Erickson have produced a lot of material, studies, guidelines, theories and recommendations about this. In this blogpost, I wanted to get back to this issue since it's important to describe what has been done in the past, before looking at the Twitter example.

A brief recap of the research about Awareness

It all started in a research domain called "Computer-Supported Cooperative Work", the branch of Human-Computer Interaction which looks at how technologies can support and enrich collaboration practices. Now this field is more and more concerned with contexts other than work (such as education or games), so we can perhaps use the term "social computing" to make it broader. The last twenty years of research in this field acknowledge the relationship between collaboration "efficiency" and the visibility of group members’ activities across time and space; namely, enabling what has been called awareness by the research community (see for example Dourish and Bellotti, 1992; Dourish and Bly, 1992; Gutwin et al., 1995; Erickson and Kellogg, 2000).

Historically, the notion of awareness has been drawn from two domains. On one hand, it emerged from field studies of collaborative work in co-present work settings (Heath and Luff, 1992; Heath and Hindmarsh, 2000), which focus on how workers systematically coordinate their activities by relying on changes in the local context as well as on their partners’ visible contributions. In this context, awareness is seen as the ‘mutual visibility’ of each other’s actions, conveyed by the continuous broadcast of information generated during the course of action. Of course, as Heath and Luff (1992) point out, this mutual visibility/observability of actions relies on the active practice of team members who make their own actions ‘visible/observable’ to the others. On the other hand, the notion of awareness appeared in computer science, as a concept relevant to the design of collaborative technologies (Dourish and Bellotti, 1992; Gutwin et al., 1995; Erickson and Kellogg, 2000; Gutwin and Greenberg, 2002). As opposed to the richness of a co-present situation, geographically dispersed collaboration engages participants in joint activities with a low visibility of each partner’s contribution to the main goal of the group. This is why ‘awareness interfaces’ or ‘awareness tools’ have been designed to convey more visibility, showing group members’ representations and actions.

As stated by Schmidt (2002), the term awareness is highly equivocal in the sense that it is used in a lot of different ways and is often qualified by many adjectives like ‘general awareness’ (Gaver, 1991), ‘workspace awareness’ (Gutwin and Greenberg, 2002) or ‘informal’ or ‘passive’ awareness (Dourish and Bellotti, 1992). Definitions indeed range from knowing who is present in the environment to the visibility of others’ actions (Heath and Luff, 1992). These limits, however, did not prevent the CSCW community from using the ‘awareness’ concept as the starting point for many original and innovative collaborative technologies. Here, I will not enter into the debate about setting a proper definition but instead focus only on the awareness of people’s location in a shared environment, be it physical or virtual.

Even though awareness is a broad and blurry concept in the epistemological sense, there are some recurrent definitions set by scholars. Among all the terms that are used in conjunction with this notion of awareness, the one that has received the most attention is certainly “workspace awareness”, which Gutwin and Greenberg (2002) describe as “the up-to-the-moment understanding of another person’s interaction with the shared workspace”. More precisely, according to these authors, awareness refers to the perception of changes that occur in the shared environment. These authors also highlight that awareness is part of an activity, such as completing a task or working on something. The main objective of awareness is not only to perceive information but also to recognize the contextual elements required to carry out a joint activity. This is what Dourish and Bellotti express by saying that awareness corresponds to “an understanding of the activities of others, which provides a context for your own activity” (Dourish and Bellotti, 1992, p. 107). These definitions emphasize the idea that awareness is meant to enrich the context of collaboration; they also implicitly state that maintaining awareness is not the purpose of an activity but instead a basis for completing the task.

Types of Awareness and usage

Starting from the previously described definitions of awareness, Gutwin and Greenberg (2002) differentiated the core components of awareness according to simple questions such as “Who, What, Where, When”. According to these authors, awareness can be described in terms of the period of time it covers, conveying information about the present state of the environment (“synchronous awareness”) and/or about past occurrences of events (“asynchronous awareness”), which corresponds to the “When” question.

So, to some extent:

  1. the "who" question corresponds to notification tools that let you know who can be contacted on your IM client.
  2. the "where" question corresponds to location-based services, as they allow users to be aware of their contacts' whereabouts.
  3. and so on.

Knowing what others are doing or where they are located can be useful for various reasons: simplifying communication, helping people coordinate, supporting inferences regarding partners’ intentions, knowing if a partner/colleague/friend needs help, or just getting a vague feeling of presence (belonging to a community of friends, family...).

Awareness and Twitter

Time for more recent applications. In her paper "The Translucence of Twitter", presented at EPIC 2008, Ingrid Erickson examines Twitter in conjunction with the aforementioned theories about awareness and visibility. The paper is a field study of the use of Twitter which concludes that "there are certain obvious ways that Twitter showcases people’s thoughts and behaviors, but less obvious ways in which interlocutors signal their awareness of being noticed." The author compares the notion of awareness as described in the work of Erickson and Kellogg (on "social translucence") with Twitter usage to show the discrepancies and alternate means of establishing awareness:

"Indirect Awareness: awareness can be evoked via Twitter, just not always in a direct manner. (...) Twitter here is a visible trigger for a host of possible awareness-oriented response mechanisms, from the completion of a work task to a physical meet up to a phone call. (...) receives a phone call because of a Twitter post he makes, this act raises his awareness that his messages are not falling on deaf ears. In turn, he is less inclined to falsify or make irresponsible posts in subsequent communications. (...) Awareness by Incident: Microblogging during a critical incident, such as inclement weather, appears to bring together individuals across community levels (i.e., perceived close and extended) out of a common need for timely information exchange. (...) Within this critical incident, Twitter became a real-time forum to make reports from respective outposts both to signal well being and to check in with others, despite varying levels of intimacy."

Why do I blog this? I am trying to sort out some ideas about microblogging platforms and theories about awareness. Of course, one of the underlying themes I am interested in refers to mutual location-awareness and how tools such as Twitter and Jaiku engage people in new ways to discuss spatial issues.

Historical analysis as a design tool

In "Historical Analysis: Using the Past to Design the Future", Wyche and her colleagues show how history can be valuable for ubiquitous computing research; namely, that it can be employed to provide insights and methodologies in the same vein as anthropology or philosophy. They point out in what respects historical analysis is relevant:

  • sheds new light on recurring cultural themes embedded in domestic technology, and by extension, ‘smart homes.’ Questioning these themes has the potential to lead designers to rethink assumptions about domestic technology use. For example, rather than using “ease of use” as a guiding principle, elders described difficult, yet enjoyable aspects of housework that technology removed
  • exploring the past helps us understand who we are today and where we are going. For ubiquitous computing, historical awareness can deepen designers’ understanding of the context they are designing for.
  • history can spur designers’ imaginations by revealing the contingency of the present situation, rendering it less obvious and inevitable
  • using history to defamiliarize the present supports designers in envisioning future domestic life less constrained by present-day cultural assumptions embedded in technology
  • Like ethnography, history forces designers to become more aware of their preconceptions about a topic. Because of its ability to defamiliarize the present, history can be a powerful resource for inspiring innovative computational devices and systems.

They apply this approach to domestic technology use with some interesting techniques such as scrapbooking or the use of personal histories of technology use (asking people to remember the first time they used a certain technology).

Talk in Paris tomorrow about pervasive art

Will be in Paris tomorrow at École Nationale Supérieure des Arts Décoratifs (ENSAD) to give a talk as part of a conference called Mobilisable. I'll be in the "pervasive art" session around 7pm along with Samuel Bianchini, Lalya Gaye and Usman Haque. Mobilisable is a conference about research and artistic creation with mobile and locative media. It is "a project of the research program « Forms of Mobility ». It is directed by the École nationale supérieure des arts décoratifs and the Université Paris 8 in cooperation with the École nationale supérieure d’architecture de Toulouse, the Haute école d’art et de design-Genève and the Tokyo University of the Arts."

The whole thing is free but will certainly be in French. Here's the description of the session:

"Environment, performance, « in situ », body art, … far from restricting itself to the object, contemporary art has long since conquered all sorts of spaces and situations. As mobile, diffuse and interconnected devices are increasingly operated, this development is still growing. Contemporary art seems to merge with the new informational ubiquitous paradigms of the « everyware ». The communicating objects are leaving their object status behind, as they reduce in size and invade more and more diffuse situations in our environment, by grafting onto ordinary objects that can also be worn (« wearable technology ») or even by being implanted in our bodies. If the first ones mentioned are not necessarily mobile, the second ones are, since they accompany us in each and every movement. And all of these are able to communicate and to create networks ad hoc, which are continually reconfigured. In this way, our public, professional and private spheres will soon be invaded. Which artistic form can be given to these devices and to their applications ? Between « ambient intelligence » and society of access and/or of control, how can the artist take a position regarding these new forms of hybrid networks ? How can he put in place, infiltrate or even exfiltrate these mobile and diffuse technologies ?"

Will talk about why, as a researcher in the ubicomp area, I am interested in what artists are doing, and how it helps the field to move forward.

Design research topography

The first issue of Design Research Quarterly features the above map that represents the "topography of design research" (made by Liz Sanders).

She basically mapped the different approaches to design research based on two dimensions:

  • Their impetus: tools and methods coming from design practice versus those coming from the research perspective. To date, as she points out, most of the methods seem to be coming from the research field.
  • The mindset of those who practice and teach design research: the expert versus the participatory mindset, which actually corresponds to the level of engagement designers have with people (informants, users, etc.).

The visualization of the different methods and approaches makes it possible to identify different clusters of activity (human factors/ergonomics, applied ethnography, usability testing). You can also read an update of this article in the last issue of ACM interactions.

Why do I blog this? An interesting mapping to be used in my course about UX research. Coming from the user-centered design cluster (ergonomics mostly, but now doing applied ethnography), it took me a lot of time to get the global picture shown in this representation. It also took me some time to understand the underlying issues and struggles between the different perspectives, as most of them are grounded in different visions of what human beings are and why they do what they do.

What shows up and what doesn't show up

Reading this article in The Economist about some problems encountered by scientific research, I stumbled across this intriguing paragraph:

"There also seems to be a bias towards publishing positive results. For instance, a study earlier this year found that among the studies submitted to America’s Food and Drug Administration about the effectiveness of antidepressants, almost all of those with positive results were published, whereas very few of those with negative results were. But negative results are potentially just as informative as positive results, if not as exciting."

Why do I blog this? That's surely a problem I encountered when writing up my PhD work. It's funny to notice this bias, especially when you know the value of negative results. There is clearly a tendency to prefer selling "positive" experimental results over negative ones... which are in general less well described.

Being interested in UX research, I also draw a parallel between this example and the sort of things we are looking for in user research: most of the work is about usage (what people do with/without technology), but it's also important to consider what people don't do. There are surely some relevant questions to ask concerning what shows up and what doesn't show up in field research.

HCI and grounded versus speculative reasoning

There's an insightful discussion going on at "interaction culture" (Jeffrey Bardzell's blog) about grounded versus speculative reasoning in HCI. It basically revolves around how HCI, though crying out for new approaches, is still based on the normative notions of science, and therefore has trouble accepting other forms of knowledge production, namely more speculative forms coming from philosophy (but I would also add, to some extent, more design-based discourse). What generally happens in peer review is the following, as described by Bardzell:

"Part of social science’s rigor is in “grounding.” There are two acceptable ways (well, more than two but I’m focusing on two here) to ground reasoning in social sciences: one is through the careful collection, analysis, and interpretation of data. One eye opener for me as a humanist entering HCI years ago was (to me, at that time) obsessive care with which claims were made. It seemed to me then that social scientists would only make the tiniest, safest, most conservative claims; they shied away from the bold and interesting ones that really push understanding. Now I understand why that is the case: when you are making truth claims about reality, unless you have that care, there can be serious consequences as a result of speculation not only to knowledge of a state of affairs, but also action taken based on the assumption that that knowledge is true (policymaking, design, and other human interventions intended to change our world for the better). The other acceptable way to ground reasoning is by appealing to some other authority who has already done such an analysis. In this special and limited context, appeals to authority in the social sciences are, if not logically airtight, at least able to provide the epistemological foundation required for the work of the field. (...) Philosophy and more generally the humanities, in contrast, are not as strongly oriented toward truth claims about the world as it actually is. (...) So the most important question of a philosophical paper about principles in HCI is not whether the argument is grounded (an ontological concern), but rather whether the paper helps us think more productively about our field (an epistemological concern)."

To which I generally agree, based on the comments I received on papers I submitted to journals or conferences. This sort of issue was one of the reasons I turned to design research.

Interestingly, Adam Greenfield commented on the blog post, which is noteworthy as "Everyware" is a highly relevant piece even if, by science standards, it falls outside academic work in HCI. An excerpt from Adam's comment:

"I’m under no illusion that my work is informed by any particular intellectual rigor, let alone anything that would pass academic muster, but by the same token I obviously feel it represents some contribution to the field. Prior to publication, my expectation was very much that my book on ubicomp would be ignored by HCI-at-large, which is to say not discussed and certainly not cited. I was very pleasantly surprised that this has not been the case, which seems to me to constitute proof from existence that the field (at least as instantiated by certain institutions and powerful individuals, and at certain times and places) is able to welcome input external to almost all of its mechanisms for assessing rigor/”groundedness.”

As far as I’m concerned, that presents a felicitous picture: one of a discipline with considerable reserves of intellectual confidence and maturity."

Why do I blog this? This is an important discussion about the evolution of a field such as HCI/interaction design. Although I generally agree with Bardzell, I hope that examples such as Adam's work can pave the way for the integration of more speculative work in the field.

Simondon on technical and cultural objects

In his "On the Mode of Existence of Technical Objects", French philosopher Gilbert Simondon interestingly addresses the flawed distinction between culture and technique:

"Culture has become a system of defense designed to safeguard man from technics. This is the result of the assumption that technical objects contain no human reality. We would like to show that culture fails to take into account that in technical reality there is a human reality, and that, if it is fully to play its role, culture must come to terms with technical entities as part of its body of knowledge and values. (...) The opposition established between the cultural and the technical and between man and machine is wrong and has no foundation. (...) It uses a mask of facile humanism to blind us to a reality that is full of human striving and rich in natural forces. This reality is the world of technical objects, the mediators between man and nature."

And he goes on to raise an interesting point: art pieces and more aesthetic objects are not criticized in the same way. A painting is part of culture but a robot isn't:

"Culture is unbalanced because, while it grants recognition to certain objects, for example aesthetic things, and gives them their due place in the world of meanings, it banishes other objects, particularly technical things, into the unstructured world of things that have no meaning but do have a use, a utilitarian function. (...) Our culture thus entertains two contradictory attitudes to technical objects. On the one hand, it treats them as pure and simple assemblies of material that are quite without true meaning and that only provide utility. On the other hand, it assumes that these objects are also robots and that they harbor intentions hostile to man, or that they represent for man a constant threat of aggression or insurrection."

Why do I blog this? Simondon is always refreshing, and his writings (not very common in English) have pervaded sociology and philosophy nowadays (Bruno Latour, Bernard Stiegler) as well as theories such as ANT. What I find relevant here is the importance of locating technique (i.e. technologies) where it belongs and not distinguishing it from other human creations.

From AI to ubicomp

"Interactionist AI and the promise of ubicomp, or, how to put your box in the world without putting the world in your box" by Leahu, Sengers and Mateas makes an interesting analogy between ubiquitous computing and the situation encountered by Artificial Intelligence in the 1980s. They show how the current debate in ubiquitous computing, regarding how a computational system can both make sense of the environment AND respond to it in a sensible way, belongs to the same class of problems AI had to face in the past:

"In particular, ubicomp is currently facing a series of challenges in scaling up from prototypes that work in restricted environments to solutions that reliably, robustly work in the full complexity of human environments. These challenges echo problems AI researchers tackled as the field sought to move beyond ‘blocks-world’ solutions to build real-time systems that could work in dynamic, complex environments."

Part of the paper is about this analogy (in terms of the difficulties encountered by both fields); another part proposes interactionist AI (e.g. autonomous agents) as a potential way to scale ubicomp prototypes to real-world deployment. Why do I blog this? For people interested in the debate about the capture of context, there are some interesting points here about how to reframe classic ubicomp issues, as well as answers to some concerns raised by Bell and Dourish.

Tech Report about designing multi-user location-aware applications

A recent EPFL Technical report I wrote with Fabien Girardin and Pierre Dillenbourg: A Descriptive Framework to Design for Mutual Location-Awareness in Ubiquitous Computing.

"The following paper provides developers, designers and researchers of location-aware applications with a descriptive framework of applications that convey Mutual Location-Awareness. These applications rely on ubiquitous computing systems to inform people on the whereabouts of significant others. The framework describes this as a 3 steps process made of a capturing, retrieval and delivery phase. For each of these phases, it presents the implications for the users in terms of interpretations of the information. Such framework is intended to both set the design space and research questions to be answered in the field of social location-aware applications."

The paper actually gives an overview of the main issues regarding location-based services, and more specifically multi-user location-aware applications/mobile social software.
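As a rough illustration of the three phases described in the abstract, here is how a capture/retrieval/delivery pipeline could be sketched in code. All names, types and filtering rules below are my own assumptions for illustration, not taken from the report.

```python
from dataclasses import dataclass

@dataclass
class LocationFix:
    user: str
    place: str        # symbolic description of the whereabouts
    accuracy_m: float # positional uncertainty of the fix, in meters

def capture(user: str, place: str, accuracy_m: float) -> LocationFix:
    """Capturing phase: a positioning system produces a (possibly noisy) fix."""
    return LocationFix(user, place, accuracy_m)

def retrieve(fixes: list[LocationFix], requester: str) -> list[LocationFix]:
    """Retrieval phase: select which fixes a given 'significant other' may access
    (here, a trivial rule: everyone's fixes except the requester's own)."""
    return [f for f in fixes if f.user != requester]

def deliver(fixes: list[LocationFix]) -> list[str]:
    """Delivery phase: turn raw fixes into human-interpretable messages."""
    return [f"{f.user} is near {f.place} (+/- {f.accuracy_m:.0f} m)" for f in fixes]

fixes = [capture("fabien", "Plaça de Catalunya", 50),
         capture("nicolas", "EPFL", 20)]
print(deliver(retrieve(fixes, "nicolas")))
```

The point of splitting the pipeline this way is that each phase carries its own interpretation issues for users (what the sensor actually captured, who gets to see it, and how the message is phrased), which is precisely what the framework's three phases separate.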

Extreme case of location-based services: parole offenders

In Accountabilities of Presence: Reframing Location-Based Systems, Troshynski, Lee and Dourish address the extreme case of paroled offenders tracked by GPS and describe lessons that can be drawn from this unconventional realm of location-based systems. Here is how the system works:

"Location information is continuously reported to a monitoring center through a direct link to a localized cellular telephone network. (...) The GPS system allows correctional officials to define geographic areas from which released and supervised offenders are prohibited, a condition of their parole (...) The GPS monitoring devices are able to trigger alarms or warning notices upon approach of any such previously defined prohibited zones."
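A minimal sketch of how such zone checks could work (my own illustration, not from the paper): each prohibited zone is modeled as a circle around a point, a GPS fix triggers an alarm inside a zone, and a warning when merely approaching one.

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def check_fix(lat, lon, zones, warning_margin_m=200):
    """Return 'alarm', 'warning' or 'ok' for a GPS fix against prohibited zones.
    Each zone is a (lat, lon, radius_m) circle; hypothetical parameters."""
    for zlat, zlon, radius_m in zones:
        d = haversine_m(lat, lon, zlat, zlon)
        if d <= radius_m:
            return "alarm"
        if d <= radius_m + warning_margin_m:
            return "warning"
    return "ok"
```

Note that everything the paper discusses about asymmetry applies here: it is this representation of space (circles in the system), not the parolee's lived experience of it, against which compliance is measured.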

Some excerpts about this that I found relevant to my research:

"the use of GPS tracking technologies are intended to maintain a series of spatial prohibitions for this population, to limit their mobility and enforce a series of proscriptions that are part of the conditions of their parole (...) In a dispute between MapQuest’s view and the evidence of the odometer, it is MapQuest that will generally “win.” (...) it is the representation of the space provided to the technological system that matters, because, however inaccurate it may be, it is the system against which measurements are made. (...) This study illuminates the relationship between technology and the legibility of space, that is, the way in which spatial organization manifests itself for people who occupy and navigate it. (...) The participants in our study are primarily concerned with understanding how their movement appear to their Parole Officers. The question of course is how that understanding is developed. How does one learn how one is seen by another through the system? How does one learn, for example, how to account for the vagaries of GPS positioning or the problems of “drop-out”? (...) The offender tracking system is inherently asymmetric, at least in its current configuration, so that offenders are unable to see how their movements can be read as potentially appropriate or problematic except as a consequence of infractions, at which point the mediating technology may become a point of discussion. (...) The issue is not where one might be, and when; it is to whom one might be accountable for one’s presence, to whom, under what circumstances, and how one might be called to account. (...) accountabilities to different social groups are heterogeneous—the settings in which action is undertaken are rich and complex. (...) the heterogeneous nature of accountabilities does not presuppose any particular structure of everyday space but rather situates accountability within the context of the practices from which spatial organization emerges (...) the heterogeneous nature of accountabilities necessitates an orientation towards spatiality as an ongoing form of participation in social and cultural life."

Why do I blog this? The study of less common cases of LBS is interesting as it leads to different issues and effectively helps reframe the perspective on their design and usage. I mostly insisted on the spatial consequences, but the discussion of the temporal implications is important too (charging time of the GPS unit, dynamic reconfiguration of the places where the parolee can or can't go...), as is the discussion of the GPS system as a device affixed to the body.

Paul Dourish on reflective HCI

Been reading this paper from Paul Dourish tonight in the train: "Seeing Like an Interface" (a paper he presented at OzCHI 2007). The author concludes on "the burgeoning interest in a reflective approach to HCI" that would be concerned with the "critical dimensions of design". He basically describes technologies such as computers as "an effective site" at which to engage critically with cultural values and assumptions. What does that mean for the everyday researcher/practitioner? Here are some hints described in the paper:

"Reflective HCI suggests an approach to interaction design in which cultural assumptions and values play as important a role as traditional usability metrics both as measures of success and as elements of the design process. (...) The discipline of HCI has evolved considerably over several decades, but so too have computer systems themselves. What I want to draw attention to here is not simply the fact that computers have become faster, smaller, and more powerful as technological artifacts, but that they have emerged as cultural objects in a radically different way than they did before. They are elements of the landscape of daily life in many different forms. Digital devices are embedded in our cultural and social imagination in very different ways than they were when HCI was emerging. To the extent that our discipline thinks not simply about user interface design but about interactions between humans and computers, these transformations suggest that we need to look more broadly for theoretical perspectives that help us understand how computation manifests itself as a cultural object"

Why do I blog this? Surely some elements to be connected with what Anthony Dunne described in Hertzian Tales about "critical design" (although the two visions are not the same). On a more general level, I find it interesting to see when different disciplines (such as human-computer interaction or design) come up with close concepts.

What is then interesting for the practitioner (for example when I am working with game designers on gestural interface usage for the Nintendo Wii) is to see how these ideas can be turned into (pragmatic) actions. In other words, what would "reflective HCI" bring to the table when I am surrounded by level designers, scenario planners, the production manager and the lead coder? Well, it's certainly different from showing graphics about usability issues (that bloody tester missed the door 14 times on that level!), but it does bring questions, insights and discussions that sometimes allow us to reconsider problems and results from tests/observations/ethnographic accounts of playtests.

A user study of Dodgeball

The last issue of the Journal of Computer-Mediated Communication is devoted to social networks. Among all the papers on this topic, there is one that is close to my own research about location-based applications and services: Mobile Social Networks and Social Practice: A Case Study of Dodgeball by Lee Humphreys. It interestingly investigates the nature of interactions that develop around a mobile social network site such as Dodgeball, and how these interactions might change the way users think about and experience urban public spaces.

What is interesting about taking Dodgeball as a location-based application is that it's based on self-disclosure of one's whereabouts. Looking at the findings from this year-long qualitative field study is very informative. As the conclusion summarizes:

"The messages exchanged through Dodgeball did help my informants to coordinate face-to-face meetings among groups of friends. In addition to this functional purpose, Dodgeball messages also served a performative function by allowing informants to associate their identity with the branding of a particular venue. Sometimes a Dodgeball message could be interpreted as a member demonstrating a kind of social elitism. At other times, sending check-in messages with one's location to Dodgeball was a means of social and spatial cataloguing. In this way, Dodgeball can serve as a social diary or map. (...) Some of the social connections and congregations facilitated by Dodgeball are similar to those found in third places, but Dodgeball congregations are itinerant spaces for urban sociality. In contrast to place-based acquaintanceships, third spaces allow for habitual, dynamic, and technologically-enabled face-to-face interaction among loosely tied groups of friends. (...) A related implication of Dodgeball use was social molecularization. By communicating about locations in the city, my informants could cognitively map urban public space. In addition, Dodgeball users can move through the city differently, based on the social-location information available to them. If they know friends are at a bar, they can go join them. In fact, the more friends who check in to a bar, the greater the pull to meet up with them. In this way, Dodgeball use contributes to a collective experience and movement of social groups through urban public space."

Why do I blog this? Being interested in the role and affordances of location-awareness in my research, this study is important as it unveils some usages of that information. It complements some of the other affordances described in HCI (I am currently writing two journal papers about this).

Furthermore, since I am interested in how such features may affect urban environments and cities, the last result is quite interesting. It's actually very close to other writings about micro-coordination. See for example “Nobody sits at home and waits for the telephone to ring:” Micro and hyper-coordination through the use of the mobile telephone by Rich Ling and Birgitte Yttri.

Humphreys, L. (2007). Mobile social networks and social practice: A case study of Dodgeball. Journal of Computer-Mediated Communication, 13(1), article 17.

The Mobile City conference

The Mobile City is a conference on locative and mobile media and the city that seems to be interesting:

"Locative and mobile media can be seen as the interface between the digital domain and the city, bringing the digital world into the physical world, and at the same time uploading and sharing real world experiences back to the digital world.

  1. From a theoretical point of view, what are useful concepts to talk about the blurring/merging of physical and digital spaces?
  2. From a critical perspective, what does the emergence of locative and mobile media mean for urban culture, citizenship, and identities?
  3. From a pragmatic point of view, what does all this mean for the work of urban professionals (architects,designers, planners), media designers, and academics?"

Why do I blog this? The program has not been announced yet, but the topics seem very interesting.