Will Wright about trends in video games

POPsci features a very long and insightful interview with Will Wright (designer of The Sims, now working on his next project, Spore). IMO, the article is important because it describes the current trends in the gaming industry. Let's look at some of them below with quotes. The first trend is certainly the interest in user-generated content. Wright wants to turn players into "Pokemon designers, Neopet designers, or Pixar designers":

I think Second Life is interesting because they have given the players such huge control over the environment (...) In Spore, the tools are more and more powerful than they were in The Sims, so the next step is, now, how do we take those things and use them to build a narrative (...) Every time the player makes something in the game – creature, building, vehicle, planet, whatever, it gets sent to our servers automatically, a compressed representation of it. As other players are playing the game we need to populate their game with other creatures around them in the evolution game, other cities around them in the civilization game, other planets and races and aliens in the space game, and those are actually coming from our server and were created by other players. So there's an infinite variety of NPCs that I can encounter in the game that are continually being made by the other players as they play. (...) We're going to have different feedback mechanisms. One of the things we're going to be doing continually is rating the most popular content, so when you make a creature you're going to be able to go to what we call the metaverse report and get a sense of what is your creature's popularity ranking relative to other people's creatures.

And he recognizes that an economy emerging out of this is inevitable: as in Second Life, it will develop, move to eBay or other platforms, and might lead to "some sort of reward".
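The content-pooling mechanism Wright describes (creations uploaded to a server, pulled down to populate other players' worlds, ranked in a "metaverse report") can be sketched in a few lines. This is purely an illustration of the principle; the class and names are hypothetical, not Spore's actual implementation:

```python
import random

class ContentPool:
    """Toy model of a server-side pool of player-made creatures."""

    def __init__(self):
        self.items = []  # each item: {"name", "author", "rating"}

    def upload(self, name, author):
        # In Spore, a compressed representation is sent automatically.
        self.items.append({"name": name, "author": author, "rating": 0})

    def populate(self, player, n=3):
        # Fill a player's world with creations made by *other* players.
        others = [i for i in self.items if i["author"] != player]
        return random.sample(others, min(n, len(others)))

    def metaverse_report(self):
        # Popularity ranking, most popular first.
        return sorted(self.items, key=lambda i: i["rating"], reverse=True)

pool = ContentPool()
pool.upload("Tripod", "alice")
pool.upload("Blobfish", "bob")
pool.items[0]["rating"] = 5            # pretend other players liked it
npcs = pool.populate("alice")          # only bob's creatures come back
print([i["name"] for i in npcs])       # → ['Blobfish']
print(pool.metaverse_report()[0]["name"])  # → Tripod
```

The interesting design choice is in `populate`: the "infinite variety of NPCs" comes for free once every player is also, implicitly, a content producer for everyone else.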

Second, gaming fosters an "augmented sociality" that is based on the content and is achieved not in the game itself but through other channels:

the asynchronous socializing through content, which we're already seeing in The Sims web community. Huge communities form with very well-known people based on the content they've made, other people taking that content and telling cool stories with it.

Third, the educational model of using games is now less about directly teaching content/facts and more about teaching processes. This has been a long discussion in psychology and the educational sciences, but there are still people trying to design games to make kids learn irregular verbs or Napoleon's battles. Actually, the thing is that video games are less suited to declarative learning (content) and better suited to procedural learning and problem solving. And it's good to see a game designer such as Will Wright agreeing with that:

I think in a deep way yeah [answering the question "Do you see Spore, or the rest of your games for that matter, as being educational?"] – that's kind of why I do them. But not in a curriculum-based, 'I'm going to teach you facts' kind of way. I think more in terms of deep lessons of things like problem-solving, or just creativity – creativity is a fundamental of education that's not really taught so much. But giving people tools.

And finally, concerning the future of gaming, Wright addresses the articulation between interactions in the physical environment and digital interactions. In a sense, the question can be rephrased as: how do we take data generated from real-world interactions and put them back into the game to enrich the playful experience?

One thing that really excites me, that we're doing just a little bit of in Spore... I described how the computer is kind of looking at what you do and what you buy, and developing this model of the player. I think that's going to be a fundamental differentiating factor between games and all other forms of media. The games can inherently observe you and build a more and more accurate model of the player on each individual machine, and then do a huge amount of things with that – actually customize the game, its difficulty, the content that it's pulling down, the goal structures, the stories that are being played out relative to every player.
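This player-model idea (the game observes what you do and adjusts itself accordingly) boils down to a simple feedback loop. A minimal sketch, with made-up thresholds and no claim to match Spore's actual mechanics:

```python
class PlayerModel:
    """Toy model of a game observing the player and adapting difficulty."""

    def __init__(self, difficulty=0.5):
        self.difficulty = difficulty  # 0 = trivial, 1 = brutal
        self.history = []

    def observe(self, won: bool):
        # Record each outcome the game 'sees'.
        self.history.append(won)
        # Adapt on a sliding window: steady wins push difficulty up,
        # steady losses push it down.
        recent = self.history[-5:]
        win_rate = sum(recent) / len(recent)
        if win_rate > 0.6:
            self.difficulty = min(1.0, self.difficulty + 0.1)
        elif win_rate < 0.4:
            self.difficulty = max(0.0, self.difficulty - 0.1)

model = PlayerModel()
for outcome in [True, True, True, True]:  # a player on a winning streak
    model.observe(outcome)
print(round(model.difficulty, 1))  # → 0.9
```

The same observed model could just as well steer which content gets pulled down or which story beats get played out, as Wright suggests.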

Why do I blog this? This is a quite good overview of current game trends (and I left aside some other issues). Besides, it's pretty refreshing to hear them from a game designer and not from observers/researchers who try to shake up the game industry.

Podotactility: feeling texture with your feet

Podotactility This is what the French call a "podotactile", namely a textured strip which runs along the edge of a metro/tram station platform or even a sidewalk, and which one can feel with the feet. It's meant to warn people (blind or not) that there is a limit/boundary between a space one is free to walk in and another area that can be dangerous. So the texture affords the limit (Bruno Latour would say that this "non-human" artifact is a way to delegate a function to an object).

This leads to another kind of "touch" feeling: in a sense "podotactility" is about feeling with the feet.

So why is this interesting? I quite like this example because it shows how textures are important and can have affordances (especially in physical space). Would it be possible to use podotactility in innovative ways, beyond signaling dangers to people? Yes, of course, but what will happen if it has several affordances? A possible solution would be to use different granularities. Will people then learn these new codes (lots of space between dots = low danger, closely spaced dots = high danger)? Certainly food for thought for near-field interactions.

And of course, in terms of digital equivalents, there are some projects that propose rugosity in mouse interactions/force feedback that can be perceived in a similar way (felt by the hand, though).

Accepted paper about the CatchBob! project

Another accepted paper for the Common Models and Patterns for Pervasive Computing (CMPPC) workshop at Pervasive 2007. I co-authored this with Fabien Girardin (Barcelona) and Mike Blackstock (Vancouver). It's called "Issues from Deploying and Maintaining a Pervasive Game on Multiple Sites" and basically describes how the deployment of the CatchBob! pervasive game has been carried out in two different settings (in Lausanne and Vancouver).

Abstract: In this paper we present the lessons learned from the deployment of a collaborative pervasive game on two different sites. We emphasize the practical aspects of getting a pervasive system deployed without any special infrastructure. Based on our experience, we describe the issues providers and administrators must take into consideration to deploy and maintain pervasive environments. In this perspective, we highlight that ubiquitous technologies must be consciously attended to, as they are unevenly distributed and unevenly available.

The roles of theory in interaction design


"Acting with Technology: Activity Theory and Interaction Design (Acting with Technology)" (Victor Kaptelinin, Bonnie A. Nardi)

Reading Kaptelinin and Nardi's book, I was interested in the chapter entitled "Do we need theory in interaction design?" because it describes why developing and using theory is needed.

The authors essentially summarize the evolution of theories in the field of human-computer interaction (HCI), starting from the "cognitive years" up to what they call the "postcognitive" paradigm that appeared following Lucy Suchman's book Plans and Situated Actions. HCI indeed started as a coupling of cognitive psychology and computer science models that envisioned human cognition as an information-processing system. With Suchman's work (and the use of the ethnomethodology paradigm), new lines of research were favored, with the inclusion of social/organizational factors, CSCW, and the importance of context/artifacts in cognition. However, the problem with the ethnomethodological approach was that while it succeeded in bringing detailed/rich/precise depictions of practices and interactions, it led to no generalizable accounts (the essence of a theory).

As a matter of fact, a theory is helpful for four reasons:

1. Theory forms community through shared concepts
2. Theory also helps us make strategic choices about how to proceed
3. To move forward, to know where to invest our energies (...) otherwise we will always be going back to square one of detailed renderings of particular cases. As interesting as the cases might be, we have no way of assessing whether they are typical, whether they are important exceptions to which we should pay particular attention, or if they are corner cases we do not have time for at the moment
4. Theoretical frameworks will facilitate productive cooperation between social scientists and software designers. Not only can such approaches help formulate generalizations related to the social aspects of the use of technology and make them more accessible to designers, they can support reflection on how to bring social scientists and software designers closer together.

The criteria needed for such a theory are that it should: (a) be rich enough to capture the most important aspects of the actual use of technology (which is not met by classic cognitive psychology, since it does not account for some important phenomena), and (b) be descriptive and generalizable enough to be a practical tool for interaction design. A possible way to meet these criteria is to take theories that model phenomena as complex systems. At this point, I would have been interested in more development of the second criterion ("be descriptive and generalizable enough to be a practical tool for interaction design") because it's often the case that designers complain about this. And still, I have to admit that I have a hard time figuring out how a theory (or even a guideline) can meet this criterion.

The authors then propose that Activity Theory is the perfect candidate for that matter, and the rest of the book describes to what extent this holds true. A final chapter, however, discusses other "postcognitive theories": Distributed Cognition, Actor-Network Theory and Phenomenology.

Why do I blog this? Because those questions are crux issues in my research work. Coming from a cognitive science background, it took me a while to understand how inadequate cognitive psychology or experimental psychology were for addressing human-computer interaction problems. That led me to take other paths (such as more bottom-up approaches like ethnography), but I tried not to forget what the cognitive sciences could bring to the table.

And maybe the problem here is that of the granularity of theories. There are sub-domains in the cognitive sciences that can be of interest to HCI. For example, psycholinguistics offers interesting insights about how people interact with each other, how each other's intents are mutually inferred (I quote this example because that's what I addressed in my PhD research). Thus, of course the information processing model is somewhat passé, but cognitive science is a HUGE field that has sub-areas of interest.

How to write gestures and movements

The coming of gestural interactions to mass-market products such as the Wii brings lots of questions about how to design movements, how to express them and how to discuss their relevance. This question is of particular importance in the video game industry, and there is currently lots of discussion about how to create gestural grammars/vocabularies. I've attended seminars where people tried to describe movements (both the physical movements and their translation into the virtual counterpart), and no satisfactory solution has emerged. Reading a newspaper, I stumbled across this exhibit called "Les écritures du mouvements" (i.e. the writings of movement) in Paris that presents the different notation systems used in dance, and it seems strikingly pertinent for explaining movements. As described on the website about the show, each notation system attests to a peculiar way of perceiving movements, which also depends on the historical, scientific and cultural context of the society in which the system occurs. These systems are used as mnemonic aids, but also as a way to train people or even to create. Historically, there have been lots of different systems, such as the ones represented below (left: by Bagouet, right: by Zorn):

The most common today are Laban's system and Benesh's system. Below is an example of Laban:

Of course, there are tools that allow the use of these notations: see for example the Benesh Notation Editor or Credo.

Why do I blog this? This sort of notation system seems interesting and pertinent for describing gestural interactions. I might have to dig into this more deeply. Will we see superb game design documents with pages showing this sort of depiction?

Paper for CSCL 2007

Our paper "Partner Modeling Is Mutual" (Sangin, M., Nova, N., Molinari, G. and Dillenbourg, P.) has been accepted for the CSCL 2007 conference (Computer Supported Collaborative Learning). The paper, which one may categorize as belonging to cognitive science research, basically describes our empirical research about how the modeling of partners' intentions is a mutual process. This research stems from a project we carried out at the lab for the Swiss National Science Foundation.

Abstract: Collaborative learning has been hypothesized to be related to the cognitive effort engaged by co-learners to build a shared understanding. The process of constructing this shared understanding requires each team member to build some kind of representation of the behavior, beliefs, knowledge or intentions of other group members. This contribution reports interesting findings regarding the process of modeling each other. In two empirical studies, we measured the accuracy of the mutual model, i.e. the difference between what A believes B knows, has done or intends to do and what B actually knows, has done or intends to do. In both studies, we found a significant correlation between the accuracy of A's model of B and the accuracy of B's model of A. This leads us to think that the process of modeling one's partners does not simply reflect individual attitudes or skills but emerges as a property of group interactions. We describe on-going studies that explore these preliminary results.

Ubiquitous computing and foresight

The Bell & Dourish paper I blogged about last week is still sparking some interesting discussions (interestingly, not only among ubicomp researchers but also architects). What is interesting to me is how this discussion about focusing on the ubicomp of today, and less on the proximal future, connects with the discussions I had with Bill after the LIFT07 foresight workshop. The "here today" versus "could be tomorrow" argument is indeed one of the underlying questions of foresight versus design practice. In the Bell and Dourish article, the authors critique these earlier visions of a proximal future not to complain about past visions, nor to understand why we haven't gotten there, but rather because it allows them to question an important assumption made by ubicomp researchers: the coming of a so-called seamless world with no bugs and perfect connectivity (which does not hold true, as Fabien described at LIFT07).

So the point here is the importance of the "why" question, the crux issue that the LIFT07 workshop addressed; critical foresight is about asking why something worked, why someone would want the future you propose, or why the path proposed is possible. In the context of this ubicomp paper, some additional questions about the future of ubiquitous computing can be asked. What would we want: a short-term vision of the next incremental ubicomp 'project', or a new strong vision (as Weiser's calm computing was)? But what might be needed for this strong vision is a clear and lucid description of the why, one that eventually leads to a point people could aim at.

So there could be an interesting exercise here: criticizing the intelligent fridge, CAVEs, intelligent assistants or other ubicomp dreams that failed. That could be a good agenda for a possible workshop at some point.

Designing to take care of the messes

A good read in ACM Ubiquity: What if the experts are wrong by Denise Caruso. It's about how societies prepare themselves to be wrong when creating innovations that can have important consequences for the world. Some excerpts:

"long-term stewardship" of man-made hazards; that is, how a society prepares to take care of the messes it has made that it can't get rid of, generations into the future. (...) To think that other people might suffer as a result of their actions is not part of the expert's world, or it gets pushed away in the drive to deploy the technology," said La Porte. "But what are the consequences if it turns out that all the things they believed in are wrong? That's really hard. And most technical people can't talk about this. What they do is theology to them, not science.


Why do I blog this? Even though this article addresses technologies such as nuclear power and DNA manipulation, the author has a good point about designing new elements/artifacts (given the messiness of the world). And it leads to two questions: is it about designing to avoid future messes, or designing in a way that this inherent mess can be taken care of?

(The picture is a shot I took last weekend: remnants from a restaurant being refurbished.)

Music production through haptic interface

Amebeats is a project by Melissa Quintanilha that allows "people to mix sounds by manipulating physical objects instead of twisting knobs or clicking on a music production software".

As Melissa states it:

The amoeba shaped board has little boxes in its center that when moved to the arms, activate different sounds. My interest in music and design merged to create a haptic interface (based on touch) that allows people to use gesture to mix sounds with their hands. My inspiration for this robotic installation came from going to parties and seeing DJs create the music on their tables, but no one knowing what they do to make the sounds. Generating music using gesture allows for a much more expressive way of creation.

Why do I blog this? Yet another interesting device to be added to the list of interactive tables.

Trashed mailboxes, direct digital equivalent

snail mail We certainly have no problem finding the digital equivalent of this. That's usually how digital mailboxes look nowadays, though less colorful.

If you look carefully (and if you read French), behind the mailbox added on top of the others, there is a written message that says "No Pub!" (= "No ads!"), some kind of last-ditch attempt to avoid being flooded that has desperately failed (hence the addition of another mailbox!).

Kevin Slavin on big games and location-based applications

(Via Fab), this Where2.0 2005 talk by Kevin Slavin (Area Code) is full of great insights about urban gaming ("big games") and users' apprehension of location-based technologies. There are actually three aspects that I found relevant to my research (excerpts are very basic transcriptions of the podcast).

First, Slavin explained how places were space + story:

places need stories to look real. Big games: to make the most real and most fake stories; they are large-scale multiplayer real-world games, things that transform the space around us into a game space; basically a layer of fiction added on the spatial environment; games with computers in them rather than the other way around

Second, from the user experience point of view, it's interesting to see how they evaluate when one of their games is successful:

we also measured success because people started to cheat (when people screw things up, that proves you're on the right track). The ways we're going to misuse technologies are perhaps the most valuable ways that we use them

And third, Mr. Slavin has a very relevant take on location (in the context of location-aware applications such as most of the big games):

location is not just GIS data: whether we're indoors/outdoors, whether the phone can hear that you're on a busy street or not... and build games that draw on that

it may not have been about location; maybe what's more valuable is dislocation: the most valuable experiences may have to do with disinformation. It might be more interesting/valuable for people to get lost than to know where they're going, to forget where they are. Maybe the goal here is not to emulate the PSP but rather to know what's different from a PSP and do that. And instead of doing reportage, let's make it up; there's something else there. It's much more about misrepresentation and accuracy: we're working on an often-wrong version of "here"

I fully agree with this approach, which kind of resonates with the discourse I am building in my PhD dissertation: location is definitely more than what is implied by a dot on a map or x/y coordinates. Where Slavin advocates expanding the notion of location (for example: getting lost or forgetting where one is), my work is more about the distinction between automated location-awareness and explicit disclosure by the users. In both cases, these elements question the overemphasis lots of people put on location-based applications (especially buddy-tracking or place-tagging). Why do I blog this? I am currently in the process of finding the right angle for my talk at Geoware ("The user experience of location-awareness"). This is definitely food for thought for upcoming writings/talks about how to go beyond current location-based applications.

The user experience of elevators

It seems that the elevator hacking trick could have been a rumor. Looking for articles about the user experience of lifts/elevators, I ran across this piece in the New Yorker:

Richard Gladitz, a service manager at Century Elevator, an elevator-maintenance company in Long Island City, concurred. “It really shouldn’t operate like that, unless there’s something wrong with it,” he said. “People will think that someone did something to make it pass by, but it might have something to do with the dispatcher, various elevator-bank issues, something of that nature.” (...) “There’s so many misconceptions about elevators.” Could it be that engineers had designed elevators to have this door/floor feature but, for the common good, didn’t want civilians to know about it? Might there be an elevator conspiracy?

Maybe a weird solution for this would be a random lift button (by Chris Speed):

The Random Lift Button project was conceived as an opportunity to exemplify further the role of space at the mercy of time. Certainly in large commercial buildings lifts are implemented to squash space and enable people to move more quickly from one work activity to the next. (...) The random lift button would place us directly in the centre of a non-linear moment, its outcomes uncertain and unpredictable. A sensation that would be both rewarding and entropic. Random Lift Buttons are currently installed in two lifts in Portland Square at the University of Plymouth, UK.

Why do I blog this? Elevators are one of these technological artifacts that keep puzzling people (like doors, but even worse). Since there is a large variety of elevator user interfaces, there are often anecdotes about them. What is curious is that it's possible to design curious experiences even in artifacts that look boring (I haven't even mentioned a friend's project that aimed at adding an empty floor on top of an elevator so that people can just breathe the atmosphere and then get back to where they wanted to go). What do these two stories above tell us?

Code and architecture

"When code matters" by Ingeborg M Rocker is an article in Architectural Design that deals with the role of computation in the discourse and praxis of architecture. It gives a well summarized overview of historical computational models and concepts and then interestingly discuss their role in architecture.

While previously architects were obsessed with the reduction of complexity through algorithms, today they are invested in exploring complexities based on the generative power of algorithms and computation.(...) Most architects now use computers and interactive software programs as exploratory tools. All their work is informed by, and thus dependent on the software they are using, which inscribes its logic, perhaps even unnoticed, onto their everyday routines (...) The computer is no longer used as a tool for representation, but as a medium to conduct computations. Architecture emerges as a trace of algorithmic operations. Surprisingly enough, algorithms – deterministic in their form and abstract in their operations – challenge both design conventions and, perhaps even more surprisingly, some of our basic intuitions.

Why do I blog this? Curiosity towards architectural practices, and - of course - how technology reshapes how people do what they do.

Infrastructure for calm computing?

Source of power Simply put, this is the sort of infrastructure that gives birth to ubiquitous computing; at some point people have to give some power to the devices that allow them to access the information superhighways or activate their second lives. And the power is brought to the networked cities of the globe through these kinds of lines.

Maybe this is what calm computing really is. You hike in the mountains, sit under one of those big power lines and listen to the vibes.

LIFT07 workshop "Re-designing the city of the future"

Some notes about the foresight methodologies discussed at the LIFT07 workshop "Re-designing the city of the future" that I co-organized with Bill Cockayne last week. The purpose of the workshop was to gather a heterogeneous crowd of people to discuss topics regarding the city of the future. The point in preparing this workshop was also to deal with new methodologies, to better structure foresight ideas (for instance, to go beyond the design scenarios developed in the past series of blogject workshops). This is why I teamed up with Bill, who gave an insightful presentation of the critical foresight tools that I describe hereafter.

As opposed to design (i.e. build/invent/create), foresight is about critically exploring assumptions, building models and developing questions about the long-term future. One of the prerequisites of the workshop was to read various papers that exemplified different visions of the future: Fast, Huge and Out of Control, Metropolis (1999), 'Future Cities' (Time, 1929) and 'January 3000 A.D.' (Harper's New Weekly Bazaar, 1856). The reading of those papers was meant to spark some discussion about critical foresight: Did any of the authors guess correctly? If not, why were some guesses so bad? What were the changes? Were these changes social or technical? Was it a driver or a reaction? Global or local?

Then Bill introduced the first "tool" in the form of a petal graph. This is basically a diagram on which he mapped each critical aspect of change that the group listed. The goal is then to find the commonalities of all these aspects: what goes in the center of the flower.

The petal graph is indeed a good tool for realizing how the future is a complex problem. On one hand, it's uncertain (not measurable). On the other hand, it's ambiguous and we do not even know what to measure. However, this does not mean that we cannot make assumptions: "You can't predict the future, but you can invent it", as the motto says. The point here is not to do futurism but to look at data and use analytical reasoning to discern what might exist and what we could build. Thus, the value does not lie in predictions but in the underlying discussions, the "why" of predictions: we focus on the questions generated, not the answers. This said, the crux issue in foresight is to be critical about what others say about "the future". This is why we looked at different material, be it press articles, journal papers or Walt Disney's EPCOT Center video. In a sense, the main goal is to explore, deconstruct, and critique the futures envisioned by others as a methodology of understanding, using a multidisciplinary approach. The following step was to use three tools for foresight thinking: S-curves, x/y axes and white/hot spots.

The S-shaped curve is the canonical representation of how an invention evolves over time, from the idea to mass-market commercialization (the plateau), with the technologies/instances that occurred in between (and caused the rise of the curve). This tool enables discussion about the social/technical changes that allowed this progression. Lots of questions can be asked using this curve: Why do so few futures seem to follow the path? This helps contextualize what's coming next.
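For the record, the canonical S-curve is usually modeled with a logistic function. A quick sketch; the midpoint and steepness parameters are illustrative, not tied to any particular technology:

```python
import math

def s_curve(t, midpoint=10.0, steepness=0.8):
    """Logistic adoption curve: slow start, rapid growth, plateau."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Adoption level (0..1) of a hypothetical invention over 20 years
for year in (0, 5, 10, 15, 20):
    print(year, round(s_curve(year), 2))
```

Asking why an invention's curve rose when it did (which year the midpoint fell on, and what made it steep or shallow) is exactly the kind of discussion the tool is meant to spark.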

Then we picked two dimensions/topics that can interact and represented them on Cartesian axes.

The choice of these axes is important, since it is meant to generate "questions". Once the axes are defined, this is a tool to discuss stories/concepts/inventions and position them in the quadrants according to the four dimensions that have been set. This reveals white spots, which can be considered opportunities (or they don't exist for a certain reason that should be discussed), and hot spots, with a high density of existing examples.
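This white/hot spot reading can be mimicked with a simple density count over quadrants. The axes and example positions below are entirely made up for illustration:

```python
from collections import Counter

def quadrant(x, y):
    """Classify an example on two hypothetical axes:
    x = individual..collective, y = physical..digital, each in [-1, 1]."""
    return ("collective" if x > 0 else "individual",
            "digital" if y > 0 else "physical")

# Hypothetical examples positioned on the two axes
examples = {
    "MMORPG": (0.8, 0.9),
    "board game night": (0.5, -0.8),
    "solo puzzle app": (-0.7, 0.6),
    "urban big game": (0.9, -0.4),
}

density = Counter(quadrant(x, y) for x, y in examples.values())
all_quadrants = {(a, b) for a in ("individual", "collective")
                 for b in ("physical", "digital")}
white_spots = all_quadrants - set(density)   # empty quadrants = opportunities
hot_spots = [q for q, n in density.items() if n > 1]
print(white_spots)  # → {('individual', 'physical')}
```

Here the empty individual/physical quadrant is the "white spot" to interrogate: is it an opportunity, or does it stay empty for a reason worth discussing?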

Based on white/hot spots, and depending on the time range, one can then unfold the history backward, as represented in the picture, to answer the question: how did we get to this spot, and when were the changes? Doing so requires thinking about early indicators of change: whether those changes are already in view, what types of events to look for, and where those events would be likely to occur.

Once this was done, Bill introduced tools for "Foresight Thinking for Designing": observe, analyze and prototype. Observing is a matter of thinking about people today and at a future time: assuming that people will change, what would be the reasons/motivations/drivers, and when thinking about change, what are the early indicators (triggers or incipient changes)? The analysis part is mostly about questions: ideas are fine, but questions are more important and assumptions critical. Finally, the prototyping part concerns the models but also the underlying assumptions, questions, and changes. The best models generate questions around the areas of highest change. And the last step is to communicate, which can take various forms: stories (short stories, speculative fiction, science fiction, counterfactuals), scenarios/personas, movies, maps (cross-impact, trends, S-curves) or even tangible artifacts.

We then constituted five groups, who had to use the previous tools to develop a future for the "city of the future" and present it to the others at the end of the workshop. If I have time I'll post about the workshop results, but to me the most important thing was the discussion it fostered (especially among the groups).

Yet another kosher phone

Steve Portigal pointed me to this jpost article about a kosher telephone "that minimizes Shabbat desecration" for soldiers in the Israeli army. So first, look at the problem from the user's point of view:

"Until now, every telephone call [on Shabbat] that was not a matter of life and death or close to it raised questions and deliberations among religious soldiers regarding halachic permissibility. Now the calls can be made without any qualms,"

And then, solutions:

Dialing and other electronic operations on the "Shabbat phone" are performed in an indirect way so that the person using the phone is not directly closing electrical circuits. Instead, an electronic eye scans the phone buttons every two seconds. If a button has been pressed, the eye activates the phone's dialing system. This indirect way of activation is called a grama. (...) the Shabbat phone was just one of several devices that helps minimize Shabbat desecration. "The IDF is already using electric gate and door openers based on grama technology," he said. "And pilot versions of proximity sensors, magnetic cards and electronic eyes have been created." (...) Another gadget that is now widely used in the IDF is a self-erasing pen. Writing is one of hundreds of activities prohibited on Shabbat. However, writing in ink that does not remain legible is a less severe transgression that is permitted when necessary, even if there is no danger to life.
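The grama principle described above (the press itself closes no circuit; a periodic scanner notices it and acts) is basically a polling loop. A toy sketch, purely illustrative of the logic and not the actual device:

```python
class GramaPhone:
    """Toy model of indirect activation: an 'electronic eye' polls the
    keypad instead of button presses closing circuits directly."""

    def __init__(self):
        self.pressed = []  # buttons physically depressed by the user
        self.dialed = []

    def press(self, digit):
        # The press itself triggers nothing: no circuit is closed.
        self.pressed.append(digit)

    def scan(self):
        # The 'electronic eye', run every ~2 seconds by a timer in the
        # real device, notices depressed buttons and dials them itself.
        while self.pressed:
            self.dialed.append(self.pressed.pop(0))

phone = GramaPhone()
phone.press("0")
phone.press("3")
assert phone.dialed == []     # nothing happens at press time
phone.scan()                  # the periodic scan does the work
print("".join(phone.dialed))  # → 03
```

The whole design hinges on that gap between `press` and `scan`: the user's action only leaves a state for the machine to discover later.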

Why do I blog this? Because design is about constraints, and it's very intriguing to see how technological artifacts can be designed with those constraints in mind. I am always amazed by the workarounds for constraints that we don't experience in our own usage of the same technology. Yet, as in Jan Chipchase's talk at LIFT07 about illiterate users, this is about delegation. In this case, the delegation is done to the machine and not to another human. This topic is also close to what Bruno Latour describes about how we humans delegate morals to objects (see his safety belt example).

The ubiquitous computing of today

Finally, after LIFT I managed to have more time for reading good papers such as "Yesterday's tomorrows: notes on ubiquitous computing's dominant vision" by Genevieve Bell and Paul Dourish (Personal and Ubiquitous Computing, 2006). The paper deeply discusses Mark Weiser's vision of ubiquitous computing, especially with regard to how it was envisioned ten years ago versus the current discourse about it. Ultimately, they criticize the persistence of Weiser's vision (and wording!). To do so, they describe two cases of alternative ubicomp already in place: Singapore (an example of collective uses of computational devices and sensors) and South Korea (infrastructural ubiquity, public/private partnerships).

Their discussion revolves around two issues. On one hand, the ubicomp literature keeps placing its achievements out of reach by framing them in a "proximal future" rather than looking at what is happening around the corner. Such a proximal future would eventually (for lots of ubicomp researchers, but also journalists and writers) lead to a "seamlessly interconnected world". The authors then raise the possibility that this may never happen ("the proximate future is a future infinitely postponed") or, more interestingly, that ubiquitous computing has already come to pass, but in a different form.

On the other hand, ubicomp research is very often about the implementation of applications/services, assuming that the inherent problems would vanish (think about privacy!).

Therefore, what they suggest to the research community is to stop talking about the "ubiquitous computing of tomorrow" and instead address the "ubiquitous computing of the present": "Having now entered the twenty-first century that means that what we should perhaps attend to is ''the computer of now.''" In doing so, they advocate getting out of the lab and looking "at ubiquitous computing as it is currently developing rather than it might be imagined to look in the future". And of course, they then point to an alternate vision that Fabien discussed last week at LIFT07:

the real world of ubiquitous computing, then, is that we will always be assembling heterogeneous technologies to achieve individual and collective effects. (...) Our suggestion that ubiquitous computing is already here, in the form of densely available computational and communication resources, is sometimes met with an objection that these technologies remain less than ubiquitous in the sense that Weiser suggested. (...) But postulating a seamless infrastructure is a strategy whereby the messy present can be ignored, although infrastructure is always unevenly distributed, always messy. An indefinitely postponed ubicomp future is one that need never take account of this complexity.

So what's the agenda? Drawing on William Gibson's famous quote about the future being already here, just not evenly distributed, they argue that:

If ubiquitous computing is already here, then we need to pay considerably more attention to just what it is being used to do and its effects. (...) by surprising appropriations of technology for purposes never imagined by their inventors and often radically opposed to them; by widely different social, cultural and legislative interpretations of the goals of technology; by flex, slop, and play. We do not take this to be a depressing conclusion. Instead, we take the fact that we already live in a world of ubiquitous computing to be a rather wonderful thing. The challenge, now, is to understand it.

Why do I blog this? Best paper for weeks. This particularly resonates with the way I think about ubicomp... meaning that no, the intelligent fridge some dreamed of ten years ago is not the "fin de l'Histoire" (end of History). I really like how Bell and Dourish argue that ubicomp is better exemplified by Cairo, with its freshly deployed WiFi network set to connect all the local mosques and create a single city-wide call to prayer, than by a buddy-finder locator.

Moreover, the authors express their surprise at the fact that researchers are still positing much the same vision as years ago. This reminds me of the ever-decreasing time-frame futurists tried to predict: the year 2000 was really the end point, and predictions were always targeted at that period. Now that we're in the (so-called?) 21st century, it's as if there could be no other future.

Anyway, that's a call to go "into the field" and see what's happening and what the effects of these technologies are.

The architecture of research facilities

The latest issue of Metropolis features several articles about "The Architecture of Research" that address the extent to which architecture can inspire science practice. There is a lot to draw on there, but have a look at the one called "The DNA of Science Labs". It argues that scientific research labs now receive more and more attention from architects:

Both Rubin and McGhee, who has spent the last 20 years studying lab design and refining his theory of space planning, constantly refer to the most successful research centers from the past century (...) tracing relationships between the physical structures and their enormous scientific and technological achievements (...) connectedness emerged as one of the project’s overriding themes (...) “The best thing you can do is to have a single corridor, because that’s the one place where you always run into people.” (...) Another major theme for Janelia Farm’s space planning was flexibility, which emerged partly as a negative observation about the flaws of existing research facilities. The rapidly changing nature of scientific equipment and the need to adapt quickly to different research projects, as well as to adjust to individual preferences, meant that the labs should be capable of being transformed without the wasted time and expense of a total retrofit.

The article about labs in skyscrapers is a good read too.

Glass-walled labs provide a visual connection between the benches and offices, as well as between colleagues as they pass through the long wavy corridor. They also let in natural light and views of the world outside. Picture by Jeff Goldberg/Esto

Why do I blog this? This connects to my interest in how the spatial environment shapes social/cognitive processes, and conversely how environments can be designed to improve collaborative behavior.

Interaction design research

Reading "Interaction Design: Foundations, Experiments" by Lars Hallnäs and Johan Redström, I was quite fascinated by the chapter about methods for "interaction design research". Maybe it's because my research work is more and more linked to design. Some excerpts that I find relevant:

Is this science? Certainly not in the sense of natural science or in the sense of social science. It is simply not “knowledge production” (...) The idea of verifiable knowledge about the design process, validated models and working methods etc. is simply wrong here. It is a different situation, we find ourselves so to speak on the opposite side; in some sense it is research through defining in contrast to research through analytical studies. It is like the difference between studying how people open a certain door and experimenting yourself with different ways of opening that particular door. In both cases we could say that it is research in answering to a question about what it means to open the given door. In the first case it is important that your studies rely on sound methodology, as you presumably want to derive some general knowledge from your work. In the second case the situation is different. A good method for opening the door is what you want to find through your experiments. The aim is not to derive general knowledge about door opening practice, but to define, to suggest, a particular way of opening that door.

This said, the expected results also take a particular shape:

‘Results’ does not come in form of knowledge about things at hand, but in the form of suggestions for change of a present state, suggestions for a change in how things are done. ‘Results’ will here always refer to methods of practice in some sense; methods are in research focus. Suggestions of change will always refer to ‘new’ ways of doing things, it can be a matter of very specific methods, general guidelines, new programs for practice, new material to work with etc.

Why do I blog this? Simply because it helps me draw the distinction between what designers want and what academic researchers do. Besides, it reminds me of Jan Chipchase's presentation at EPFL, in which he made the point that his work was not to produce facts but rather "informed opinions" that are employed as material for designing solutions.

It's interesting to see how the word "research" is definitely a boundary object that takes on various meanings depending on the community of practice that employs it.