Processing power versus Soul

Last Saturday, I had my weekly share of city flânerie (with a digicam) in Lyon, France, and stumbled across some weird stuff. Among other things, there was this poster offering a brain for sale.

It says:

My brain is too intelligent for me. Auction! Brain for sale: good setting; 15/3 full; accessories not included; slightly overheating. Price: 3 francs 6 sous

Why do I blog this? A few centuries ago, people wanted to sell their soul to the devil. Now that the epoch is more geared towards efficiency and the computer-as-a-metaphor-for-the-brain, it's processing power in the form of a brain that some folks ironically want to sell. Each epoch has its own emphasis on body parts/functions...

User's perceptions of visual and arphid tags

User Perceptions on Mobile Interaction with Visual and RFID Tags by Sara Belt (University of Oulu, Finland), Dan Greenblatt, Jonna Häkkilä (Nokia Multimedia, Finland) and Kaj Mäkelä (Nokia Research Center, Finland) was presented at the workshop "Mobile Interaction with the Real World (MIRW 2006)" at Mobile HCI 2006 in Espoo, Finland.

It describes a study of user perceptions of mobile interaction with visual and RFID tags, which seems to be an overlooked topic in HCI. The methodology is straightforward:

The study consisted of interviews, which were carried out in the city center of Oulu, Finland, in June 2006... held on a pedestrian mall next to a busy shopping area at the city center. Participants were chosen from those present on the street, to achieve a balance of male and female, with ages ranging from teenager to middle aged (50+) (...) During the interview, each participant was shown two posters, one employing an RFID tag and one a visual tag. Participants were first asked about their familiarity with a particular tag technology, and then given a brief easy-to-understand explanation of how the tag works (though they were not told how to interact with it). The participant was asked what kind of information they would expect to receive from the tag, and then given a properly-equipped mobile phone and asked to demonstrate how they would interact with the tag. The study included 26 participants (11 female, 15 male). All study participants happened to own a mobile phone.

The paper summarizes the results; I was particularly interested by some of them:

it was found that the used tag technologies were generally unknown to the participants (...) RFID tags were known from security tags on clothing or compact discs. Despite of visual recognition of the tag, they were not aware of their usage in the current context. In general the participants were receptive and enthusiastic towards the presented information acquisition methods and came up with suggestions for novel applications. (...) It was apparent from the interviews that the participants had developed a diverse range of mental models governing what kind of information the tags could store, and how that information could be transferred to their mobile phone. For the visual tag, once it was established that it was just ink printed on the surface of the paper, most users deduced that you need to use the camera to access the information. Some users suggested that they actually needed to take a picture of the visual tag, while others just pointed the camera at the tag and waited for it to register automatically.

RFID tag: Given its decreased visibility (i.e. hidden behind the paper), and more advanced technology, it makes sense that the appropriate interaction technique with the RFID tags proved slightly more elusive for participants. When asked how they would interact with the RFID tag, responses included utilizing text messaging (...) the visual tags because they were cheaper to use and caused less waste. (...) One issue that people were unclear on is the distinction between the content of the tag and how the phone will actually utilize that content. When asked what information the tag may contain, many users correctly guessed it would contain band-related information, and some suggested specifically that the tag may contain an mp3 file. One user even asked how much data the RFID tag can hold (...) although the tag itself does not need to be visible.

Why do I blog this? this kind of study is very pertinent in that it shows how potential users perceive this technology (and how they can be puzzled by new usages such as this poster thing). When technology is situated and pervasive, the assumptions about how things work become more and more complex and diverse. Misconceptions are always interesting to observe. This makes me think about "naive/folk psychology": "the set of background assumptions, socially-conditioned prejudices and convictions that are implicit in our everyday descriptions of others' behavior and in our ascriptions of their mental states" (Wikipedia's definition). Designing pervasive computing applications may benefit from having a look at such naive psychology and at how people attribute meaning/behavior/functionalities to these new technologies.

Jan Chipchase interview

Convivio has a very smart interview of Jan Chipchase (conducted by Fabio Sergio). Some excerpts I found interesting and pertinent regarding my work in human-centered design:

One of my assumptions during interviews as well as more ad-hoc conversations is that everyone has something interesting to say you just need to figure out what it is. More often than not the listener enters a conversation assuming the opposite, doesn’t take the time to properly hear what’s actually being said, or quite simply the listener doesn’t have the skills or cultural context to appreciate the subtleties of what is communicated. Everyone can reflect on their life experiences but that most people don’t choose to, and only a few choose to do so in a public forum. The issue is not whether we are ‘always on’, but what we are always on to. What is it that is noticed? How much time is spent in absorbing, or in reflection, or in applying what is learned? (...) One of the assumptions of contextual design processes is that two weeks, two days or even two hours spent in the context of whatever or whomever we are researching is better than none. (...) The perception of those “immersed experiences” also plays a role when it comes to communicating the research results. Its one thing to say that you conducted qualitative research in a 3rd tier city in northern China, it’s another to show the richness of that context through a video of an interview conducted in a two room family apartment. (...) Generally I prefer to go in the field with a specific interest area, for example Mobile TV or illiterate contact management, clear topics that can be researched and delivered. During the project-planning phase I try to ensure methodologies that allow us to collect data on related issues and I always leave enough time to scout new topics. The role of research is to explore the boundaries of what’s out there. It’s typical for some research to continue existing trajectories whilst others are at more of a tangent to current practices. (...) 
If there is frustration in the way research is interpreted then much of the blame falls on the researcher: not taking the time to understand the design needs of the research team; an inability to clearly communicate ideas; and not making the effort to re-package research results to meet arising needs. (...) In deciding what methods to use we always start with the participants and their need to be comfortable with the research process. We want to collect data from pretty much every context where the phone is used, from when people get up to when they go to bed, and techniques such as wallet mapping can expose very sensitive data.

Why do I blog this? lots of insightful tips there for people like me who are doing user experience research. There are also good practical points (default accommodation is often a multi-national hotel chain?) ;)

Death switches: pretending you are not dead has become an art form

In the latest issue of Nature, David Eagleman wrote a good piece about "death switches". According to Wikipedia, a death switch is:

"an automated program by which a computer regularly probes a subscriber. The subscriber is required to make a response -- consisting of logging with a secret password -- to prove that she is still alive. When the subscriber fails to make a response for a certain amount of time, the program assumes she is dead and emails out pre-scripted messages to her pre-defined recipients."
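The scheme described above is easy to sketch in code. Here is a minimal dead-man's-switch toy in Python; all names, the password and the 30-day interval are hypothetical, just to illustrate the check-in/probe logic:

```python
from datetime import datetime, timedelta

CHECK_IN_INTERVAL = timedelta(days=30)  # silence longer than this counts as death

class DeathSwitch:
    def __init__(self, recipients, farewell):
        self.recipients = recipients        # pre-defined recipients
        self.farewell = farewell            # pre-scripted message
        self.last_check_in = datetime.now()

    def check_in(self, password, secret="s3cret"):
        # the subscriber proves she is still alive with a secret password
        if password == secret:
            self.last_check_in = datetime.now()
            return True
        return False

    def probe(self, now=None):
        # the program regularly probes; once the subscriber has been silent
        # for too long, it releases the pre-scripted messages
        now = now or datetime.now()
        if now - self.last_check_in > CHECK_IN_INTERVAL:
            return [(r, self.farewell) for r in self.recipients]
        return []

switch = DeathSwitch(["alice@example.org"], "It appears I'm dead now...")
assert switch.probe() == []  # just checked in, still presumed alive
print(switch.probe(datetime.now() + timedelta(days=31)))  # messages released
```

The later variants Eagleman describes (scheduled birthday greetings, auto-responders that pretend the owner is alive) would simply swap the fixed farewell payload for date-triggered or reply-generating logic.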

Eagleman has several good points about that: he describes some curious practices related to death switches:

It soon became appreciated that death switches provided a good opportunity to say goodbye electronically. Instead of sending out passwords, people began programming their computers to send e-mails to their friends announcing their own death. "It appears I'm dead now," the e-mails began. "I'll take this as an opportunity to tell you things I've always wanted to express..."

Soon enough, people realized they could program messages to be delivered on dates in the future: "Happy 87th birthday. It's been 22 years since my death. I hope your life is proceeding the way you want it to."

With time, people began to push death switches further. Instead of confessing their death in the e-mails, they pretended they were not dead. Using auto-responder algorithms that cleverly analysed incoming messages, a death switch could generate apologetic excuses to turn down invitations, to send congratulations on a life event, and to claim to be looking forward to a chance to see them again sometime soon.

And his point is that there is hence an existing afterlife: So an afterlife does not exist for us, per se, but instead an afterlife exists for that which exists between us. When an alien civilization eventually bumps into Earth, it will immediately be able to understand what humans were about, because what will remain is the network of relationships: who loved whom, who competed, who cheated, who laughed together about road trips and holiday dinners. Each person's ties to bosses, brothers, lovers are written in the electronic communiqués. The death switches simulate the society so completely that the entire social network is reconstructable. The planet's memories survive in zeros and ones.

Why do I blog this? this is a nice example of how tech services can be tinkered with in intriguing ways. As the author says, it's "a good-spirited revolution against the grave's silence" :)

Location-based applications, failures and a second wave of applications to be expected

Discussing location-based applications with some friends lately, I tried to sort out my ideas about them. Anne, for instance, asked what I meant by the fact that LBS failed (which I mentioned in my interview of Regine). My take on this could be exemplified by this project that Fabien sent me: a system that supposedly combines GPS, weather info and a social networking system on Honda cars:

Honda car drivers in Japan will be able to receive in real time (updates every 10mn) the EXACT weather info at their present location or at their destination, thru the InterNavi Premium Club (InterNavi Weather). If you don't think weather conditions at your current location are useful on a GPS, you may find it interesting to know the roads or districts that are flooded, or cut off by the snow. The system can also tell you that. An exclamation mark on the map tells you there is a problem in a particular area.

Honda also offers a real SNS (Social Networking Service) which allows InterNavi Premium Club subscribers to provide some information about a precise location. For example, if you've had a bad experience in a restaurant (the food made you sick), you can mark the place on your GPS and let the other users know the tacos at the local Taco Bell gave you the runs.

Obviously such an application combines different bricks such as GPS positioning, weather information feeds and social network capabilities. In terms of location-based services, it also employs primitive elements like place-based annotations (the omnipresent rate-the-restaurant example) and receiving location-based information (weather...). This leaves me kind of speechless in terms of the potential of LBS. I mean, ok, navigation and related information are the most successful LBS services. But that's just an individual service; when it comes to multi-user LBS applications, the large majority of systems that have been designed failed: there was no big acceptance by the users/markets.

Of course there were nice prototypes like place-based annotation systems (with diverse instantiations such as GeoNotes, Yellow Arrow, Urban Tapestries, Tejp... mobile or not, textual or not), buddy-finder applications (Dodgeball...) and cool games (Uncle Roy..., Mogi Mogi...). Of course there were big buy-outs, like Dodgeball being acquired by Google.

But so far, we haven't seen any big success over time. So on one side it's a failure; but on the other side, I noticed in workshops and focus groups with people not from the field (and hence potential users) that these ideas of place-based annotations, buddy finders (or even shoe-googling) are now very common and seen as "great/awesome/expected" projects everybody would like to have and use. And this even though studies (from academia or companies) showed the contrary. So on the marketing side, these LBS ideas seem to be quite successful: those applications are well anchored in people's minds.

Consequently, there would be a story to write about "how LBS failed as a technology-driven product but succeeded in disseminating such applications in people's minds".

Now, on a more positive note, it occurs to me that some more interesting ideas are starting to appear and a "second wave" of LBS is to be expected. For instance, Jaiku is more compelling to me because it's less disruptive when you look at the user's activity: the information (about others' presence) is available and that's it, like moods/taglines in IM systems. From the user's point of view, it's very different from what we have had so far, and what the designers promote is more an idea of "rich presence" than a "yo cool, I can now see which of my friends are around"... Why do I blog this? I just finished writing my dissertation chapter about mutual-location-awareness applications and how they are used. This made me think about some critical elements about them.

Détournement at its best

Russian website fishki offers very intriguing examples of "détournement" (i.e. tinkering/hacks/DIY bricolage) that I found irresistible. Some instances:

Why do I blog this? some material to keep handy up my sleeve, just in case Michel de Certeau's concepts come up in the conversation (the creativity of people).

User-centered design and vision-driven design

In a UXmatters article called "Designing Breakthrough Products: Going Where No User Has Gone Before", George Olsen explains how user-centered design (UCD) is of interest to new-product projects but often fails when designing breakthrough products. Some excerpts about this topic I found relevant:

When it comes to matters of aesthetics and fashion, UCD techniques offer little assistance. They can’t tell you how people will respond to products they’ve never seen before, products people have difficulty imagining [examples quoted: the internet, many examples of Web sites] (...) These were cases where the power of the designers’ vision created the demand, showing vision-driven design is sometimes the right approach. In such cases, the role of UCD is to help better the odds that a particular idea will resonate with a product’s target market and screen out those ideas that won’t. (...) UCD techniques have focused more on how to approach projects for which the problem space is fairly well understood—both by UX designers and by users. UCD techniques are best at helping us determine how to solve such problems—which is not to downplay the challenges of those sorts of projects. However, the situation is different for breakthrough products, where potential users often have difficulty imagining a solution to a problem. UCD techniques have some role to play, but often these sorts of projects require UX designers to make decisions more on the basis of their conceptual modeling skills and design experience than on direct user feedback.

And he continues by giving some meaningful advice on how to use UCD:

Show users something that’s technologically cool, and they’re likely to say they’d use it, even if they really wouldn’t. So, rather than asking users whether they’d use a product, it’s far better to ask them how they’d use it in their everyday lives. (...) When users can’t provide what seem to be realistic answers when asked how they might use a product, that’s a serious red flag. (...) During usability tests, it’s useful to ask people to describe your product “as if they were telling a friend about it”—not only to see whether they understand the product concept, but also to learn about bridging concepts that you can use to get people interested in using the product, even if they don’t fully grasp its potential. (...) as you’re gathering user feedback about a digital product, be sensitive to ways in which people might misunderstand or “misuse” what you’re building (...) While user-centered design normally warns us against designing for early adopters and power users, these are exactly the people whose needs you want to meet when developing a breakthrough product

Why do I blog this? because it gives some pertinent ideas that I could use in various projects I have. Besides, the debate between UCD and vision-driven design is a recurrent discussion I have with colleagues, and I am trying to figure out where I should use UCD and where not...

CatchBob visualization using Proce55ing

I started playing with Processing today, an easy-to-use open-source programming language and environment targeted at people who want to program images. It seems to offer a simple platform that can be used to visualize my CatchBob logfiles. The logfiles store all the players' interactions, along with some annotations the researcher (hmm, myself) made about what information they exchanged. As I explained previously, I'd like to visualize the exchange of "coordination devices" among players: the mutually recognized information that enables the teammates to choose the right actions to perform so that the common goal might be reached.

Together with Fabrice Hong, I did some prototypes using the replay tool he designed, but I also wanted to give another tool a try, and Processing seemed to be the perfect candidate for more appealing visualizations. My first attempt is quite simple and depicts how 3 players exchanged messages during the 3 phases of the game: squares represent messages, and links between squares show occurrences of dialogues.
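As a sketch of the data-wrangling half of that visualization (the actual drawing would happen in Processing), here is how such logs could be parsed into "squares" (messages) and "links" (consecutive messages between the same pair of players). The XML format below is hypothetical, since my real annotation schema isn't shown here:

```python
import xml.etree.ElementTree as ET

# Hypothetical log format, for illustration only
SAMPLE_LOG = """
<game>
  <message phase="1" from="p1" to="p2">where are you?</message>
  <message phase="1" from="p2" to="p1">near the cafeteria</message>
  <message phase="2" from="p3" to="p1">found Bob!</message>
</game>
"""

def parse_messages(xml_text):
    # each <message> element becomes one "square" of the visualization
    root = ET.fromstring(xml_text)
    return [(m.get("phase"), m.get("from"), m.get("to"), m.text)
            for m in root.iter("message")]

def dialogue_links(messages):
    # a "link" connects consecutive messages exchanged by the same pair of players
    links = []
    for a, b in zip(messages, messages[1:]):
        if {a[1], a[2]} == {b[1], b[2]}:
            links.append((a, b))
    return links

msgs = parse_messages(SAMPLE_LOG)
print(f"{len(msgs)} squares, {len(dialogue_links(msgs))} dialogue links")
```

From there, Processing would just have to place one square per tuple and draw a line per link.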

Why do I blog this? I am trying out tools; let's see if they're easy to use. Processing seems to have easy XML import (my annotated logs are in XML).

Various vectors

Various links that may or may not make sense in the near future:

  • halloweenmonsterlist is a comprehensive list of DIY hacks/makes for Halloween. There are some very smart motion-detector projects and of course BBQ boneyards
  • Bitchun society is a web platform that aims at applying Cory Doctorow's Whuffie notion of social capital. Whuffie is the personal capital you have with your friends and neighbors: you can give and receive Whuffie (a sort of social software) and use a whuffie tracker. The website does not describe the implications of such a platform, which would only show its effects if there is a critical mass of users.
  • Using brain signals to play video games appears to be more and more common. Some scientists managed to make a kid play Space Invaders by recording brain-surface signals through electrocorticographic (ECoG) activity detection. The good thing is that it is "non-invasive" (meaning that you don't need crazy electrodes inside the brain).

Why do I blog this? those are just hints/signals that I ran across during lunchtime. What's the connection between them?

Drop Spots

Thanks Vlad for pointing me to Drop Spots:

A dropspot is a kind of alternative mailbox. It’s a hiding place in a public space, where people can leave things for exchange. Anything. It’s a weird and wonderful way to add personal character to the streets that we live in. Stash something fun and see what you get back.

To find a Drop Spot in your neighborhood, visit the Drop Spots map. Select a Drop Spot map marker near you, make note of its location and visual description and head out the door to find it! Once you locate the spot and discover your mystery gift, make sure to leave one in its place to keep the exchange going.

Why do I blog this? yet another interesting potlatch-like approach to sharing in today's environment. Similar to bookcrossing, but with the added notion of exchange. I also appreciate the idea of an "alternative mailbox", which is somehow a portion of territory where people leave traces. I have to admit that I am more interested in this sort of innovation than in the yet-another-place-based-annotation-systems (virtual post-its) that seem to pop up everywhere. This is exactly what Georges Amar explained at the CINUM2006 presentation last week: he described how the pedibus (not a technological innovation but a practice: a walking school bus) is one of the most interesting innovations he has run across lately.

Combining foresight and ethnographical insights

Embed: Mapping the Future of Work and Play: A Case for “Embedding” Non-Ethnographers in the Field is a paper by Andrew Greenman and Scott Smith which has been presented at EPIC 2006. The paper describes a very curious idea of combining an “ethnographic walking tour” with futures and foresight methods. The point of this is to improve and validate foresight exercises with direct observation.

we wish to explore the possibilities of how ethnographers might create spaces designed to encourage business decision makers to witness the sensemaking that is produced during ethnography. (...) Walking the city became an opportunity to experience the situated learning explorations ethnographers often make. The act of walking was critical for physically embodying participants in a milieu, rather than showing them a video or interpreting textual accounts. The rationale was to engage in contemplating what de Certeau termed the “ensemble of possibilities”, from which, individuals evolve “ways of operating”, as they navigate the constraints and opportunities of urban places (1984). Walking was presented as an opportunity to explore the city as an “archive” of culture (Donald, 1999, p7).

Here is what the process looked like:

Embed was the name given to a half-day walking tour, DVD and map devised to complement a two-day futures workshop in London. The event was held in June 2005 and focused on the future of work and play in Europe. Day one consisted of a workshop introduction to Futurist research. Participants were encouraged to conduct scenario planning. This involved synthesizing major trends and transitions which the Futurists expect will impact on work and play over the next 20 years in Europe. On the second day participants were invited to witness three "zones of change" in London to further explore, validate, or amend the views developed on the first day. The driving forces included the following: immigration, technology development, cultural values, economic policies and an aging population.

Why do I blog this? I found this idea of combining an ethnographic approach with futurist consulting methods interesting. The paper is also worth a look to see how they organized it and what came out of it. Also, it is worth checking the PDF of the expedition "map" (5MB).

SYS/*016.JEX*02/1SE6FX/360°

Discussing the issue of augmented playgrounds with some folks lately, I remembered one of the best pieces from the Lyon Art Biennale in 2001: "SYS/*016.JEX*02/1SE6FX/360°", a project by Mathieu Briand. It is basically a participatory interactive 360° environment made of a big trampoline on which participants hop around; they are then scanned by 75 input points, and this data is displayed on panoramic screens which encircle them at lagging speeds.

The adding together of images with a common viewpoint creates a movement that can confuse our mind. We think that it is a camera that is turning since we have to move in order to look at an object from every angle. In this situation we are everywhere and the object is able to move.

Joseph Nechvatal describes his thoughts about it:

Briand takes participatory principles found in virtual environments (VEs – or that which is better known as VR (virtual reality)) and externalises them. For example, his clearly mature participatory interactive 360° environment called "SYS/*016.JEX*02/1SE6FX/360°" manifests the principle of what I have been calling the ‘viractual’ brilliantly. The viractual is the space of connection between the computed virtual and the uncomputed corporeal world which here merge. This space can be further inscribed as the viractual span of liminality, which according to the anthropologist Arnold van Gennep (based on his anthropological studies of social rites of passage) is the condition of being on a threshold between spaces. This term (concept) of the viractual (and viractuality) is the significant connivence/complicity experienced in the show - a connivence/complicity helpful in defining the third fused inter-spatiality in which we increasingly live today as forged from the meeting of the virtual and the actual - a concept close to what the military call "augmented reality".

Why do I blog this? this example of tangible computing at a higher level is curious, especially if we think in terms of how people perceive one's activity on the panoramic display. I was also unaware of this "viractuality" concept.

Next Nabaztag version: nabaztag/tag

This sort-of businessy presentation of Nabaztag is very interesting because the founder shows the new version: Nabaztag/tag. Among its new capabilities, it can obey voice orders ("it has a belly button and everyone knows that rabbits hear through their belly buttons") like "weather in NY?", and its voice capabilities have been improved since it can read streams from any source (podcasts, web radio...). My favorite part is when the rabbit smells stuff like carrots and says "I am a wifi rabbit, for god's sake, I cannot eat carrots".

As described by Network World:

Version 2 will be announced, which includes speech recognition functions, to allow users to use the rabbit as an input device, or even as a push-to-talk or VoIP phone. "Everything that you can do with an audio input device you'll be able to do with V2," Haladjian says. In addition, the V2 will be able to stream audio from the Internet through the device, which allows for things like listening to podcasts or Internet radio streams. The company says V2 will launch in November and will likely cost more than the current Nabaztag, which sells for about $150.

Why do I blog this? even though it's just a small step, this new version has slight improvements (I'd like to try the voice recognition). They also said that other devices produced by Violet will be released that can communicate with the rabbit: "this is leading the way for the Internet of Things", as Rafi Haladjian says.

Flying saucer in Oslo

In good resonance with the UFO-like architecture in Geneva (see here), here is the Oslo version of the flying saucer.

Why do I blog this? left over in a curious part of the city, this rusty unflying saucer is a very nice object from a future yet to come, yet to be envisioned, but that some folks there do not want to forget. I quite like it, and the gloomy atmosphere around it adds a lot to my first impression of steampunk sci-fi.

NordiCHI Workshop highlights (day 1)

A kind of super-quick synthesis of the main highlights from our workshop at NordiCHI, "Near Field Interactions: the user-centered internet of things". This is not the final report and it only reflects what I found relevant with regard to my research practices.

Timo started by introducing the aim of the workshop. His point was to start with some examples of the industry view of the Internet of Things (arphid-like supply-chain management...) and to state that we would like to address the other side of the coin: from the user's viewpoint, what would this look like? He then listed some possible examples of such an approach: blogjects, spimes, everyware or spychips. Actually, near-field interactions (now allowed by NFC technology) could be a way to meet this end: they bring new ways to interact with technology, raising important questions about near-fieldness and touch. Timo then quoted applications such as Thinglink (Ulla-Maaria Mutanen), NFC presence (Janne Jalkanen), hovering, cookies (Katherine Albrecht), "pick, drag and drop" or spyware (people avoiding being tracked by putting copper lines in the pockets of their jeans). With these characteristics (near-fieldness and touch) and those technologies, the interface semantics might change, leading to new ways to bridge first and second life (physical/virtual environments). These tropes could create new affordances, and the workshop was meant to explore that.

After each other's introductions, we had a 5-minute madness session (everybody presented his or her work in 5 minutes). This was followed by talks from some presenters; we actually picked 5 people with different research angles so that we could address various perspectives. All presenters deserve to be quoted for their work, but I will only describe 3-4 highlights:

Chris Heathcote described how NFC is about what is here (near-field interaction) and wondered about what happens when you're not here: how would I access "my things far away"? What would be the "actions on my things far away"? He presented some examples like Smart2Go, which allows one to get a helicopter view starting from one's location to see what is around. The same goes with time: NFC is about the present, but how can I access past interactions? Another concern he had was that "a touch is a touch", so it's discrete; but can we record touches so that we make something out of them? (To which Timo added that he read how the Nintendo Wii will log every user interaction in its internal calendar.) Actually, Chris listed the design decisions they made at Nokia when describing the NFC standard specifications.

Ben Cerveny explained how people won't understand these new affordances easily, or only if they are drawn to them step by step. That's what he explored in his recent work, by learning how people interact with objects and presenting clues of how interaction might take place so that people know how to interact with them.

Ulla-Maaria talked about how to create "social affordances" for material objects (according to Gibson, affordances are the material properties of an object that indicate the possibilities of interaction with it). So how could affordances be socially constructed and shared? She is interested in doing this not in a pre-determined way, but rather in a user-generated manner. Her point was that it would be pertinent to endow objects with a new property: the personal relation to the object. For instance, it could be about tagging an object "I made it/I own it/I like it/I want it...", so that this accumulates and is hence organized around shared motives. Matt Biddulph then exemplified how a middleware for such a system would work.

Also with an interest in bottom-up approaches, Alexandra Deschamps-Sonsino focused on how to design for sustainability: how to do more with less. To her, personalization could be a solution for users to engage with high-tech devices instead of trashing them two months after buying them. She showed how the "positive history" a user has with an object could be of interest: "I want to keep that thing because I made it and it reflects my positive history with it". The design trope here: beyond product obsolescence, you keep a hold on the object; use positive history to create precious objects. Some argued that we may not want to allow back-ups in that context (so that an object is really dead once its data are lost). Alexandra also mentioned the concept of "agathonic design": designing objects so that they improve over time.

Why do I blog this? These are just quick issues that were raised on the first day of the workshop. Other things were mentioned and we will describe them later in the write-up.

A blogging purse

Cyril pointed me to this quite unusual blogject (though calling it "unusual" is misleading, since there is no prototypical blogject representation): a 'blogging purse': "It looks like it just uploads images. The details are a bit on the weak side, but some of the stuff looks neat. The purse contains a camera, basic stamp, pedometer and Nokia phone".

Here is the blog it creates: a contextual uploader, actually.

Remote-control gardening

Via, look at this Aiterrarium: Remote-control gardening:

On October 11, Matsushita Electric Works, Ltd. announced plans to begin selling an indoor gardening system whose lighting, temperature and water supply can be remotely monitored and controlled via the Internet. The system, called Aiterrarium, is slated for release on December 20 and will initially target research facilities for universities and businesses.

Why do I blog this? I am wondering why it couldn't be the other way around: sensors on a cell phone (or any object that can be mobile, "visiting" diverse environments) remotely controlling elements of the plant's environment (for instance water distribution with different levels of sodium, different light exposure, noises... or even radiowaves and touch sensors), so that the plant's development is a by-product of your own movement through space... matching your own experience (the light you had access to, the radiowaves you encountered, the food you ate) or not... I mean, it's a matter of turning your cell phone into a blogject input and your plant into a blogject output.

Research as material for design

I was struck this morning by two parts of blog posts written by Anne Galloway. First, on the Touch project's blog, which Anne now collaborates on:

I’m a social researcher working at the intersections of technology, space and culture. (...) When Timo and I first started talking about the project, I was working through some ideas about the relationship between design and social science, and more specifically, about how social and cultural research could serve as materials for design. When I was offered the opportunity to put some of this thinking into practice, I simply couldn’t refuse!

Second, on her own blog, about a talk she will soon give:

my objective wasn't to suggest abstract guidelines for the development of new technologies, but rather to articulate and explore specific arenas on-the-ground in which intervention and action are possible and productive

Why do I blog this? Since I am interested in how research can inform/enrich/help/... design, I appreciated how Anne describes what she wants to do in those projects. That's also what my work aims at, even though my perspective is less about cultural anthropology and more related to psychology (situated, social and cognitive). The words she uses ("material", "articulate and explore specific arenas on-the-ground in which intervention and action are possible and productive") are compelling to me.

Barcode Jesus

Scott Blake is an artist who plays with barcodes; maybe one of his best pieces is this Jesus portrait made out of barcodes. In Scott's words:

This is the Bar Code Jesus that I created using my first refined bar code halftone program. The bar code images used look like regular bar codes, but they go beyond the normal density allowed by the bar code technology. I created a bar code signature in the lower left corner, using the bar code from a Pepsi 2-Liter.
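Blake's halftone program is not public, but the general technique he describes can be sketched: map each pixel's darkness to the stripe density of a small barcode-like cell. A minimal, purely illustrative version (assuming a tiny grayscale grid as input, all names hypothetical):

```python
# Purely illustrative sketch of a "barcode halftone" (Blake's actual program
# is different and not public): each grayscale value becomes a barcode-like
# cell whose number of stripes tracks the pixel's darkness.

def barcode_halftone(gray_rows, cell_width=8):
    """gray_rows: rows of 0-255 values (0 = black). Returns ASCII art."""
    lines = []
    for row in gray_rows:
        cells = []
        for value in row:
            darkness = 1 - value / 255                  # 1.0 = fully dark
            bars = round(darkness * (cell_width // 2))  # stripes in this cell
            # alternate stripe/gap positions, barcode style
            cell = "".join("|" if i % 2 == 0 and i // 2 < bars else " "
                           for i in range(cell_width))
            cells.append(cell)
        lines.append("".join(cells))
    return "\n".join(lines)

# darker pixels get denser stripes, white stays blank
print(barcode_halftone([[0, 128, 255]]))
```

Like any halftone, the trick is that local stripe density carries the tonal information while each cell still reads, up close, as a barcode.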

Why do I blog this? Using barcodes as patterns, à la Roy Lichtenstein's dots, to create new structures is a curious move. With all those folks trying to find the face of whoever in whatever, it's striking to see artists taking it the other way around: employing non-self-revealing pieces like barcodes to create the face of Jesus. What's next? This is about using everyday artifacts to create higher-level representations.