Tangible/Intangible

tangible@home

Quick talk at the MIT Media Lab this afternoon, during Hiroshi Ishii's "tangible interfaces" course. It's called "Tangible@home". The presentation is a very brief overview of the work I am pursuing in terms of UX research. After a quick description of the devices I am interested in and the methods I use (mostly ethnography-inspired), I describe five research issues or usage patterns.

Thanks J*B for the invitation.

Game controller evolution and game design

(Game controller)

In "The Evolution of Game Controllers and Control Schemes and their Effect on their games", Alastair H. Cummings traces the history of video-game controllers in an interesting way. A good read in conjunction with my earlier post about this very topic. What is relevant in that paper is the second part of the issue: how the evolution of game controller schemes is reflected in gameplay, and what the mutual relationship between the two is. See for example:

"The first controllers were made of whatever was available to the scientists in their electronics labs and the games were equally simple. Highly simplified versions of sporting activities such as table tennis, shooting galleries and space shooters. With the creation of the gamepad games became more complicated. Games didn’t have to be simple concepts, although the gameplay was still limited by the computing power of the era. 2D platform games took players on long journeys with them in control of simple movement of their characters. With 3D came the analogue stick, providing players with a way to guide their characters around their new 3D environment. The latest consoles let players perform the actions that they want their characters to perform and they can become part of the game more than ever before. (...) Finally there is the purely functional purpose of the PC control schemes. Whilst reflecting little on the actual actions taken in the game, the simple control schemes can become second nature to players, to give them a feeling of immersion on par with the best novelty controller. Despite this it can be seen that there has been minimal development on new types of games on the PC, these control schemes work, and so these games are the only ones that will be played."

Why do I blog this? interesting material for a current project about tangible interfaces. There would be something to write about the evolution of game controllers, the forces that shaped them, and how they influenced game design as a whole. This paper only begins to deal with this issue, and I'd find it intriguing to know how the schemes were chosen and discussed by a broad range of actors (developers, game designers, etc.) in the design process per se. From my experience, I realized how much power developers in game studios had over Wii controller scheme decisions, simply because some game designers were not really able to understand how the device worked. Things used to be different with old-school pads.

Tesla on wireless electricity

(Electricity)

Being involved in a project about the Internet of Things and electricity consumption led me back to some stunning texts by Nikola Tesla, written at the end of the 19th century.

For instance, in "On Electricity",

"I wish much to tell you on this occasion—I may say I actually burn for desire of telling you—what electricity really is (...) But we shall not satisfy ourselves simply with improving steam and explosive engines or inventing new batteries; we have something much better to work for, a greater task to fulfill. We have to evolve means for obtaining energy from stores which are forever inexhaustible, to perfect methods which do not imply consumption and waste of any material whatever. (...) In fact, progress in this field has given me fresh hope that I shall see the fulfillment of one of my fondest dreams; namely, the transmission of power from station to station without the employment of any connecting wire. Still, whatever method of transmission be ultimately adopted, nearness to the source of power will remain an important advantage."

Also more to draw from World System of Wireless Transmission of Energy:

"The transmission of power without wires is not a theory or a mere possibility, as it appears to most people, but a fact demonstrated by me in experiments which have extended for years. Nor did the idea present itself to me all of a sudden, but was the result of a very slow and gradual development and a logical consequence of my investigations which were earnestly undertaken in 1893 when I gave the world the first outline of my system of broadcasting wireless energy for all purposes. (...) The transmitters have to be greatly improved and the receivers simplified and in the distribution of wireless energy for all purposes the precedent established by the telegraph, telephone and power companies must be followed, for while the means are different the service is of the same character. Technical invention is akin to architecture and the experts must in time come to the same conclusions I have reached long ago. Sooner or later my power system will have to be adopted in its entirety and so far as I am concerned it is as good as done. I"

Why do I blog this? of course Tesla's exuberant (and ultra-positivist) claims sound kind of weird today (although you can find similar ones stated by lots of people), but what I find intriguing here is how his long-chased goal is still a research goal today. Some great lessons about the relationship between time and innovation. If you read his texts, you can notice how the end of the 19th century was described and seen as an accelerating moment in time, where innovations were sparking here and there "like never before".

Wii-like consoles

Digging through material for a project about gestural interfaces in France lately, I stumbled across a sudden (and curious) surge of Wii-like platforms; see for example these three devices:

First, the technigame, a very rough game console which lets you play bowling, soccer or tennis with a stick that has "infraroufe" connectivity (the typo is funny). The games seem to be entirely ripped off from the Sega Master System, reshuffled with manga-style characters in a very weird way. The name itself is also stunning.

Then you have this other "technigame" version, sold at the low-cost shop "La foirfouille" for 39.99 euros. It looks like a Wii reshaped by people who misunderstood Karim Rashid's blobject concept.

Perhaps, the "Kiu" by Videojet is a tad more personality, with its own globular shape. The console only offers 5 built-in games.

Why do I blog this? It's always intriguing to look at product copies, as they are generally curious attempts to re-appropriate ideas in a new way. From a more abstract POV, it also shows what certain people think the Zeitgeist is. The value proposition here is clearly the price, ranging from 40 to 90 euros, cheaper than Nintendo's platform. However, the only thing these devices appear to bring to the user, apart from the nasty Wii-ripped shape, is gestural interaction (as if that were the Wii's only innovation). In addition, the way these devices are advertised, riding on the family-tech momentum of the Wii, is also revealing.

That said, I haven't tested these consoles (yet).

User experience of potentiometer in gaming

(DS controller)

An interesting add-on for the Nintendo DS is this lovely potentiometer by Taito, somewhat reminiscent of old paddle controllers. Using a geared potentiometer actuation mechanism, the user experience is quite basic with brick games such as Arkanoid. Rotating that sole button is intriguing and quite smooth. Of course some folks nailed it down more thoroughly and managed to control Mario Kart DS with it. Surely something to think about tangentially to this.
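To make that directness concrete, here is a minimal sketch, assuming a hypothetical read_dial() function and a made-up raw range (my own illustration, not Taito's code): the dial reading maps straight onto the paddle's horizontal position, one value per frame.

```python
# Minimal sketch of paddle control with a rotary dial (hypothetical values).
SCREEN_WIDTH = 256       # Nintendo DS screen width in pixels
PADDLE_WIDTH = 32
DIAL_MAX = 4095          # made-up raw range of the dial

def dial_to_paddle_x(raw: int) -> int:
    """Map the raw dial value onto the paddle's left-edge x position."""
    usable = SCREEN_WIDTH - PADDLE_WIDTH
    return int((raw / DIAL_MAX) * usable)

# Each frame the game would do: paddle_x = dial_to_paddle_x(read_dial())
print(dial_to_paddle_x(2048))  # roughly the middle of the screen
```

The absence of any intermediate acceleration or cursor logic is arguably what makes the rotation feel so smooth.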

Spontaneous kid play activities with cell-phones

Yet another interesting reference for a project about children and mobile gaming devices (ranging from the Nintendo DS to cell phones): In the hands of children: exploring the use of mobile phone functionality in casual play settings by a Swedish team of researchers, Petra Jarkievich, My Frankhammar, and Ylva Fernaeus (from the Mobile HCI 2008 conference). The paper reports the results of a field study of Swedish kids (10-12 y.o.) and their use of mobile phones in indoor and outdoor settings. The authors mention that they were interested in unsupervised social play and "spontaneous play activities", taking kids as a particular target group for mobile devices. The locus of their study was therefore specific: situations where children were able to play fairly undisrupted for a longer period of time, and in explicitly social settings. This is why they chose play centres located in parks. In terms of methodology, it's a mix of observation and interviews with kids (focus groups) about cell phone usage over the course of six weeks.

A quick overview of the results (although reading the whole findings section is very important to get a sense of what happens):

"The first general observation concerns the dual nature of the phones; simultaneously being serious and important communication tools for parents, as among the children being treated and valued primarily as resources to act locally in the group (...) Sharing media content was one of the key activities that we observed and seemed to play a central role in these respects, where individual ownership of the media content was assessed and valued largely based on its social context. (...) Our second general observation has to do with the skills that the children displayed at using the different features of the technology, and how these were constantly appropriated in a variety of ways. Existing physical play activities were sometimes altered and expanded to suit the technical resources, and the discovery of new functionality also inspired entirely new play scenarios. The children thereby also made use of functions in the phones to do things that these functions were clearly not intended for. We also observed several ways to overcome, and even make use of, the technical limitations of the devices. This suggests that children at this age put much value into the freedom of creating their own play scenarios, as a way to make meaningful use of the technologies at hand. (...) Our last more general observation is related to the long-established worry that computing technology may make children less physically stimulated, often favouring passive forms of learning, and how it has tended to force children’s play environments to move indoors."

And the following "implication for design" is also intriguing:

"some of the most meaningful and interesting technical functions were those that allowed users to invent and develop their own activities. We see no reason to suspect that this would not be a much appreciated feature also among adult users, at least in certain settings"

Why do I blog this? accumulating material about kids and mobile devices for a client project about mobile gaming. I am preparing a field study on that topic and am trying to get both methodological insights and results from other researchers. Reading the findings is also worth it, as it shows how mobile phone usage is articulated with kids' games such as 'cops and robbers'.

Internet of Things+PicNic

If by any chance you go to PicNic next week in Amsterdam, be sure to check out this nice special event called "Internet of Things: Toys for hackers or real business opportunities", put together by Vlad Trifa:

"The purpose of this session is to raise awareness that a new ecology of tiny interconnected objects - the Internet of Things - is quickly and silently pervading even the most intimate corners of our lives. Still, many companies are reluctant to invest in this field, as these devices are perceived as unreliable toys that are not mature enough to be turned into real products. As a counterpart of the Mediamatic Hacker’s Camp - where the focus is on brainstorming and fast prototyping of new gadgets and ideas - this special event will focus on what happens when such an idea gets turned into a commercial product. To encourage research in this field, six world-class experts in this field accompanied with a bunch of interactive demos will present how they have transformed some toys for hackers into readily available products used both in research and industrial applications."

With good people such as David Orban, Mike Kuniavsky, Rafi Haladjian and others.

"Making" WiFi

Yet another one (presence of WiFi in Geneva, revealed on a tree)

My interest in the invisibility of the digital (on par with its pervasiveness in the physical) led me to Katrina Jungnickel's research project called Making Wifi. Her work basically explores the role and importance of visual representations and practices in the making of a new digital technology: Wireless Fidelity (WiFi):

"Drawing on an ethnography (participant observation and interviews) of the largest not-for-profit volunteer community WiFi network in Australia, she examines how members design, make, tinker, break, fix and share a wireless network that spans across the city of Adelaide. To do this she foregrounds the visual representations members make in everyday situated practice and examine what types of work they do. She shows how members regularly encounter trees, thieves, animals, neighbours, legal restrictions, technical complications, a myriad of materials and the weather in the daily practice of making WiFi. However, rather than filtering out and tidying up mundane mess, members build it into their visual practices. They make WiFi because of uncertainty and ambiguity, not in spite of it."

More specifically, she is interested in DIY as well as the role of mess as a conduit to new forms of expression and innovation:

"One thing I’m currently exploring is the paradox implicit in DIY WiFi. If WiFi is an invisible, fragile, temperamental and complicated technology that predicates meticulous precision, advanced technical skills and abstract diagrammatic schema then what constitutes DIY or "homebrew" high tech? How do members negotiate the intersection of wireless technology and tinkering? What is the role of hands-on knowledge in making, understanding and innovating and how, if at all, does a hands-on engagement influence their relationship to the technology and the network as a whole?"

More on her research blog.

Why do I blog this? it seems a relevant exploration of intriguing DIY practices as well as of situated practice regarding technological development, always good to read/investigate to understand the complexity of technology and how it is hybridized with other elements (be they legal restrictions or the presence of animals). It would be nice to read the whole PhD or the papers. Personally, it's in this sort of research that I like reading thick descriptions of design and usage, the kind of material I like skimming as case studies of "messy" innovation.

The E on touch interface

(Playing with a touch-screen)

Although I don't share the optimism of this article about touch interfaces (in the insightful Technology Quarterly in The Economist), there are some good elements discussed there. I recommend reading it in conjunction with Bill Buxton's perspectives on that very topic.

The article in the E gives an overview of touch interfaces (tables, mobiles, etc.), showing how they have been around for quite a while, along with interesting quick descriptions of the available technologies. As with other technologies, I am less interested in the interface per se than in how it evolved over time. See for example this description of the limiting factors:

"If touch screens have been around for so long, why did they not take off sooner? The answer is that they did take off, at least in some markets, such as point-of-sale equipment, public kiosks, and so on. In these situations, touch screens have many advantages over other input methods. That they do not allow rapid typing does not matter; it is more important that they are hard-wearing, weatherproof and simple to use. (...) But breaking into the consumer market was a different matter entirely. Some personal digital assistants, or PDAs, such as the Palm Pilot, had touch screens. But they had little appeal beyond a dedicated band of early adopters, and the PDA market has since been overshadowed by the rise of advanced mobile phones that offer similar functions, combined with communications. Furthermore, early PDAs did not make elegant use of the touch-screen interface, says Dr Buxton. “When there was a touch interaction, it wasn’t beautiful,” he says. (...) That is why the iPhone matters: its use of the touch screen is seamless, intuitive and visually appealing. (...) Another factor that has held back touch screens is a lack of support for the technology in operating systems. This is a particular problem for multi-touch interfaces. "

Furthermore, the article also deals with a topic I am researching (mostly with the Nintendo Wii and DS): that of a gestural language for tangible interfaces:

"Microsoft is also developing gestures, and Apple has already introduced several of its own (...) The danger is that a plethora of different standards will emerge, and that particular gestures will mean different things to different devices. Ultimately, however, some common rules will probably emerge, as happened with mouse-based interfaces.

The double click does not translate terribly well to touch screens, however. This has led some researchers to look for alternatives."

Why do I blog this? some interesting elements here about the evolution of technologies, especially showing how long such an interface (almost 20-25 years old) takes to find its niche.

Anticipatory or representational visions of ubiquitous computing

Catching up with accumulated RSS feeds, I read with great pleasure the slides from Sam Kinsley's presentation at the RGS-IBG annual international conference.

Kinsley interestingly addresses the vision of ubiquitous computing and how it is employed in the domain of corporate R&D. He takes the example of HP's Cooltown project and the "stories" that were crafted to define the project and its vision. Of course there were some issues with the large quantity of material produced in the Cooltown project. Some excerpts I enjoyed from Kinsley's notes:

"After CEO prominence came, some HP managers went to this producer to create a ‘vision’ video for CoolTown. From a corporate ‘vision’ perspective: the video was a very compact articulation of a lot of things CoolTown as a research project was trying to say about the type of world being created by these types of technologies. From the technology research scientist standpoint - there were things about the video they liked, but many things that made them cringe and say 'we didn't say it would work like that'. As some of the researchers saw it, the producer wasn't very ‘tech savvy’.

The video became an interesting double-edged sword. It had a particular effect on how CoolTown was received. It wasn't accurate to technological development the ensued but represented a ‘vision’. The researchers felt that the overly emotive and simplistic corporate vision elided some of the interesting and important things they were trying to achieve to make the world better. (...) whilst visions are not necessarily realised, nor likely to be, they are productive of particular types of relation between researchers, business managers, clients and various places and things. (...) Vision texts and videos are, in most cases, certainly not glimpses of a future. Rather, they are representational constructs born of anticipatory impetus. "

Why do I blog this? I often find it interesting when this sort of gap is revealed, as it shows the importance of culture and imaginary expectations in technological developments. The notion of "visions" as less teleological than representational is also important here, as it shows that reality is more complex than what is presented in the pop press/PR communication.

"Networked cities" session at LIFT Asia 2008

(Special fav session at LIFT Asia 2008 this morning since this topic is linked to my own research; my quick notes.) Adam Greenfield's talk "The Long Here, the Big Now... and other tales of the networked city" was the follow-up to his "The City is Here for You to Use". Adam's approach here was "not technical talk but affective": about what it feels like to live in networked cities, and less about the technologies that would support it. The central idea of ubicomp: a world in which all the objects and surfaces of everyday life are able to sense, process, receive, display, store, transmit and take physical action upon information. It is very common in Korea, where it's called "ubiquitous" or just "u-", as in u-Cheonggyecheong or New Songdo. However, this approach often starts from technology and not from human desire.

Adam is more interested in what it really feels like to live your life in such a place, and in how we can get a truer understanding of how people will experience the ubiquitous city. He claims that we can begin to get an idea by looking at the ways people use their mobile devices and other contemporary digital artifacts. Hence his job as Design Director at Nokia.

For example: a woman talking on a mobile phone while walking around a mall in Singapore, no longer responding to the architecture around her but taking a sort of "schizogeographic" walk (as formulated by Mark Shepard). There is hence "no sovereignty of the physical". Same with people in the Tokyo or Seoul metro: on the phone, they're here physically but their commitment is in the virtual.

(Oakland Crimespotting by Stamen Design)

Adam thinks that what primarily conditions choice and action in the city is no longer physical but resides in the invisible and intangible overlay of networked information that enfolds it. The potentials of this are the following:

• The Long Here (named in reference to Brian Eno and Stewart Brand's "Long Now"): layering a persistent and retrievable history of the things that are done and witnessed there over any place on Earth that can be specified with machine-readable coordinates. Examples of such layering are the Oakland Crimespotting map or the practice of geotagging pictures on Flickr.
• The Big Now: making the total real-time option space of the city a present and tangible reality locally AND, globally, enhancing and deepening our sense of the world's massive parallelism. For instance, with Twitter one can get a sense of what happens locally in parallel, and also globally; you see the world as a parallel ongoing experiment. A more complex example is to use Twitter not only for people but also for objects; see for instance Tom Armitage's "Making Bridges Talk" (Tower Bridge twitters when it is opening and closing, captured through sensors and posted to Twitter). At the MIT SENSEable City Lab, there is also a project called "Talk Exchange" which depicts the connections between countries based on phone calls.
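As a toy illustration of the "objects on Twitter" idea, here is a minimal sketch (my own, not Tom Armitage's code) of a bridge controller that posts a status update whenever its state changes; read_sensor and post_update are hypothetical stand-ins for the actual sensor read-out and microblogging API.

```python
import time

def post_update(text: str) -> None:
    # Placeholder: a real deployment would call a microblogging API here.
    print(f"[status update] {text}")

def monitor_bridge(read_sensor, poll_seconds: float = 5.0) -> None:
    """Poll a boolean sensor ("is the bascule raised?") and post on transitions."""
    last_state = None
    while True:
        raised = read_sensor()
        if raised != last_state:
            post_update("I am opening for river traffic." if raised
                        else "I am closing; road traffic may resume.")
            last_state = raised
        time.sleep(poll_seconds)

# Usage (with a stubbed sensor): monitor_bridge(lambda: False)
```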

Of course, there are less happy consequences: these technologies can be used to exclude, which is what Adam calls the "Soft Wall": networked mechanisms intended to actively deny, delay or degrade the free use of space. Defensible space is definitely part of it, and Adam points to Steven Flusty's categories to describe how spaces become "stealthy, slippery, crusty, prickly, jittery and foggy". The result is simply differential permissioning without effective recourse: some people have the right to access certain places and others don't. When a networked device does that, you have less recourse than when it's a human with whom you can argue, talk, fight, etc. Effective recourse is something we take for granted that may disappear.
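To make "differential permissioning without effective recourse" more concrete, here is a minimal sketch of my own (not a system Adam describes), assuming a hypothetical fetch_policy() backend call: a networked gate checks a credential against a remote policy and either opens or silently refuses, with nobody on site to argue with.

```python
def fetch_policy(zone_id: str) -> set[str]:
    # Placeholder: would normally query a remote access-control service.
    return {"resident-042", "maintenance-007"}

def gate_decision(zone_id: str, credential: str) -> bool:
    """Open the gate only if the credential is in the zone's allow-list."""
    allowed = credential in fetch_policy(zone_id)
    # Note what is missing here: no explanation, no appeal, no negotiation.
    return allowed

print(gate_decision("lobby-east", "visitor-123"))  # False, and that's the end of it
```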

We'll see profound new patterns of interactions in the city:

  1. Information about cities and the patterns of their use, visualized in new ways. This information can also be made available on mobile devices locally, on demand, and in a way that can be acted upon.
  2. A transition from passive facades (such as huge urban displays) to addressable, scriptable and queryable surfaces. See for example the Galleria West by UNStudio and Arup Engineering, or Pervasive Times Square (by Matt Worsnick and Evan Allen), which show what it may look like.
  3. A signature interaction style: information processing dissolving into behavior (simple behaviors, no external token of the transaction left).

The takeaway of this presentation is that the networked city will respond to the behavior of its residents and other users, in something like real time, underwriting the transition from browse urbanism to search urbanism. And Adam's final word is that the networked city's future is up to us, that is to say designers, consumers, and citizens.

Jef Huang: "Interactive Cities" then built on Adam's presentation by showing projects. To him, a fundamental design question is "How to fuse digital technologies into our cities to foster better communities?". Jef wants to focus on how digital technology can augment physical architecture to do so. The premise is that the basic technology is really mature or reached a certain stage of maturity: mobile technology, facade tech, LEDs, etc. What is lacking is the was these technologies have been applied in the city. For instance, if you take a walk in any major city, the most obvious appearance of ubiquitous tech are surveillance cameras and media facades (that bombard citizen with ads). You can compare them to physical spam but there's not spam filter, you can either go around it, close your eyes or wear sunglasses. You can compare the situation to the first times of the Web.

When designing networked cities, the point is to push the city down the same path: more empowering and more social platforms. Jef then showed some projects along those lines: Listening Walls (Carpenter Center, Cambridge, USA), the now famous Swisshouse physical/virtual wall project, and Beijing Newscocoons (National Arts Museum of China, Beijing), which gives digital information, such as news or blogposts, a sense of physicality through inflatable cocoons. Jef also showed a project he did for Madrid's Olympic bid for 2012: real-time/real-scale urban traffic nodes. Another intriguing project is "Seesaw connectivity", which allows people to learn a new language in airports through a shared seesaw (one part in one airport and the other in another).

The bottom line of Jef's talk is that fusing digital technologies into our cities to foster better communities should go beyond media façades and surveillance cams, allow empowerment (from passive consumer to co-creator), and enable social, interactive, tactile dimensions. Of course, it leads to some issues such as the status of the architecture (public? private?) and sustainability questions.

The final presentation, by Soo-In Yang, called "Living City", is about buildings having the capability to talk to one another. Sensors are now disappearing into the woodwork and all kinds of data are transferred instantly and wirelessly: buildings will communicate information about their local conditions to a network of other buildings. His project is an ecology of facades where individual buildings collect data, share it with others in their "social network" and sometimes take "collective action".

What he showed is a prototype facade that breathes in response to pollution, what he called "a full-scale building skin designed to open and close its gills in response to air quality". The platform allows buildings to communicate with cities, with organizations, and with individuals about any topic related to the data collected by sensors. He explained how this project enabled them to explore "air as public space and building facades as public space".

Yang's work is very interesting as they design proofs of concept: they don't want to rely only on virtual renderings and abstract ideas, so they installed different sensors on buildings in NYC. They could then collect and share the data from each wireless sensor network, allowing the participating buildings (the Empire State Building and the Van Alen Institute building) to talk to each other and take action in response. In a sense, they use the "city as a research lab".
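Here is a minimal sketch of the underlying idea as I understand it (my own illustration, not the Living City code), with hypothetical sensor/network layers and a made-up pollution threshold: each facade node samples local air quality, listens to its peer buildings, and decides whether to keep its "gills" open.

```python
from statistics import mean

POLLUTION_THRESHOLD = 150  # made-up air-quality index above which gills close

class FacadeNode:
    """One building facade in the 'social network' of buildings."""
    def __init__(self, name: str):
        self.name = name
        self.peer_readings = {}  # latest reading received from each peer building

    def receive(self, peer_name: str, reading: float) -> None:
        self.peer_readings[peer_name] = reading

    def step(self, local_reading: float) -> bool:
        """Return True if the gills should stay open this cycle."""
        readings = [local_reading] + list(self.peer_readings.values())
        network_average = mean(readings)
        # "Collective action": close the gills if either the local air or the
        # network-wide average looks bad.
        return local_reading < POLLUTION_THRESHOLD and network_average < POLLUTION_THRESHOLD

tower_a = FacadeNode("Tower A")
tower_a.receive("Tower B", 260)         # a peer reports bad air
print(tower_a.step(local_reading=90))   # False: the network average closes the gills
```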

From "force" to "touch"

Some ads for a "Haptics UI" for mobile are in full spin in South Korea (Gimpo airport above and the COEX center in Seoul below). The semantics of that word may be mysterious; it comes from a Greek word meaning "contact" or "touch", but it's definitely interesting to see it applied here.

Haptic user interface

In the 90s, the term was often employed for the future of input/output interfaces in virtual reality, especially with a focus on force feedback. Now the emphasis is a bit more subtle and seems to be definitely on "touch": somehow the word made it by losing the strength characteristic and turning into something more Weiser-ian: a calm "touch" computing paradigm.

Theories of embodiment

(A gestural interface tested in South Korea last year)

How Bodies Matter: Five Themes for Interaction Design by Scott Klemmer, Björn Hartmann and Leila Takayama (DIS 2006) gives a relevant overview of different themes of interest for interaction designers focused on tangible/gestural interactions. It covers a broad range of topics concerning how our body is fundamental in our experience with the world.

Drawing on theories of embodiment in philosophy, psychology and sociology, they came up with 5 themes:

"The first, thinking through doing, describes how thought (mind) and action (body) are deeply integrated and how they co-produce learning and reasoning. The second, performance, describes the rich actions our bodies are capable of, and how physical action can be both faster and more nuanced than symbolic cognition. The first two themes primarily address individual corporeality; the next two are primarily concerned with the social affordances. Visibility describes the role of artifacts in collaboration and cooperation. Risk explores how the uncertainty and risk of physical co-presence shapes interpersonal and human-computer interactions. The final theme, thickness of practice, suggests that because the pursuit of digital verisimilitude is more difficult than it might seem, embodied interaction is a more prudent path."

What does that mean for tangible computing? See what the authors say:

" we should not just strive to approach the affordances of tangibility in our interfaces and interactions, but to go beyond what mere form offers. As Dourish notes, “Tangible computing is of interest precisely because it is not purely physical. It is a physical realization of a symbolic reality”. For a combination of virtual representations and physical artifacts to be successful and truly go beyond what each individual medium can offer, we need a thorough understanding what each can offer to us"

A left-handed Wii player (picture taken from one of my home ethnography studies)

A current research project about the user experience of the Nintendo Wiimote led me to investigate that last theme concerning the "pursuit of digital verisimilitude". Some excerpts from the paper about it:

"It may seem a platitude, but it is worth repeating that, “if technology is to provide an advantage, the correspondence to the real world must break down at some point” (Grudin). Interaction design is simultaneously drawn in two directions. (...) This section argues that interfaces that are the real world can obviate many of the difficulties of attempting to model all of the salient characteristics of a work process as practiced. This argument builds on Weiser’s exhortation to design for “embodied virtuality” rather than virtual reality. Designing interactions that are the real world instead of ones that simulate or replicate it hedges against simulacra that have neglected an important practice."

Although I fully agree, "interactions that are the real world" are not so easy to design, depending on the technology one has: the hand movement captured when playing Wii tennis is only a basic representation of the complex hand movement involved in playing real tennis. Therefore, as I observe in different field studies, even if the interaction per se is relevant for Wii players, there are often mismatches between the expected events on the screen (based on the gestures the players felt they made) and what really happens in the game. So what I mean here is that "digital verisimilitude" is also hard in tangible computing, as capturing movement is definitely tricky: think about human physiology, the fact that movement is dynamic (and capture may imply statefulness), the role of context, etc.
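Here is a minimal sketch of why the capture flattens the gesture, assuming a hypothetical 3-axis accelerometer read-out and a made-up threshold (my own illustration, not Nintendo's recognition code): any sufficiently energetic motion clears the threshold, so a full forehand and a lazy wrist flick produce the same in-game "swing".

```python
import math

SWING_THRESHOLD_G = 2.0  # made-up acceleration magnitude (in g) that counts as a swing

def magnitude(ax: float, ay: float, az: float) -> float:
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_swing(samples) -> bool:
    """samples: iterable of (ax, ay, az) tuples over a short time window."""
    peak = max(magnitude(*s) for s in samples)
    return peak > SWING_THRESHOLD_G

# Two very different real-world gestures, one identical in-game event:
full_forehand = [(0.1, 0.2, 1.0), (1.5, 2.8, 0.4), (0.3, 0.1, 1.0)]
wrist_flick   = [(0.0, 0.1, 1.0), (2.2, 0.3, 1.1), (0.1, 0.0, 1.0)]
print(detect_swing(full_forehand), detect_swing(wrist_flick))  # True True
```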

Attach a knob to your display

(via) An intriguing assemblage of a touch/gestural interface and a classic laptop screen: SenseSurface allows you to use real physical controls added onto the display, along the top row:

Here's their description:

"SenseSurface can be used with most laptops with a USB input. The sensing knobs have a custom designed movement sensor to determine position within a range of 180 degrees with a 10 bit digital output, linearity typically 1%. The magnetic knobs can be removed and repositioned immediately by picking them up and moving to a different part of screen. A unique sensing x/y matrix is attached to the rear of the laptop screen to detect the control's position. The distance of the sensor from the screen can also be detected. The rotary controls are low friction and there are no screen finger prints as with normal touch surfaces. Linear sliders and switches can also be used on the lcd surface. For audio use, a logarithmic response can be programmed. The system is multitouch and scaleable , the number of controls on the screen is limited by the size of the screen. The screen can be at any angle."

Why do I blog this? I find intriguing the notion of a gestural interface through knobs as an add-on to a normal input/output device.

Ubiquitous computing vision flaws

Thinking about ubiquitous computing and the so-called "internet of things" lately, I have started to recognize the underlying process and how it is engineered. It's as if the starting point was the "social", which is then cut into different chunks and "places" (home, work, etc.), and then further differentiated into "objects" or "things" that engineers try to "augment" or "make intelligent": smart fridges, augmented maps, intelligent cars, house 2.0 and so on. It's as if the process was always like this, following both an incremental innovation path AND the assumption that objects should stay the same, with an added smartness permitted by different sorts of Gods (AI, connection with 3D virtual worlds, networked capabilities). Janne Jalkanen has a good post which also deals with these issues, called "Ubicomp, and why it's broken". He basically describes three reasons why he thinks ubiquitous computing is flawed; some excerpts:

  1. "People want to feel smarter, and in control. When you are overwhelmed with choice, you feel stupid. When you have five options, you can weigh them in your mind, and make a choice which you feel happy about - you feel both smart and in control. Apple gets this - the reason why iPhone is so cool is because it makes you feel powerful and in control as an user: you understand the options (no geekery involved), you can use it with ease, and you get to go wherever you want. Granted, your array of choice is limited, but that only exists so that you can feel smarter.
  2. The second big reason why the ubicomp vision is broken is cost. Building infrastructure costs money. Maintaining infrastructure costs money. Making your environment smarter means that it needs to have maintenance. Yes, it can be smart and call a repairmain to come by - but as long as it's not a legal citizen, it can't pay for the repairs. Is it really ubiquitous, if it works only in very selected patches of the world where people can afford it? (...) However, consider your personal electronics - like the mobile phone. You get a new one every two years (...) Personally, I think the iPhones and Androids and Limos and Nokias of the world have a lot more claim to the ubiquitous computing vision than the internet-of-things folks. They're already connected, and they're everywhere.
  3. The third thing that I find broken in the whole thing is how the human factor has been cut from the equation. Yes, it is said to transform our lives, but I've yet to hear one good reason what exactly would make two home appliances want to talk to each other? And note - I am specifically saying want. Because at the moment, they don't want anything. They do as they are told, without any personality or desires. We need to figure out what a toaster wants (and not ask the one in Red Dwarf) to understand why they would need to network - and if they do, why aren't they talking to me instead of each other?"

Why do I blog this? some great thinking here, especially about the underlying visions of ubiquitous computing and how they are tackled by people who really implement stuff. It's therefore interesting to see the perspective of someone at Nokia, and this claim that phones relate better to the ubiquitous computing vision than other internet-of-things projects.

Different sorts of touch-screen technologies

An interesting short description of common touch-screen technologies, written for the AP by Peter Svensson:

    • " Resistive (Palm Treos, HTC phones and the Samsung Instinct.): Two layers of clear conductive material lie on top of the display. Pressing them together makes current flow between them. Resistive displays are cheap and can be used with a simple plastic or metal stylus, but are prone to damage because the sensor is on top of the display.
    • Projected capacitive (Apple iPhone and the LG Prada): this touch sensor can lie underneath a protective sheet of glass, making it more durable. The mere proximity of a finger or other object of similar size changes the electrical properties of the sensor's conducting layers, which is why the iPhone is so good at sensing light touches and quick swipes. Projected capacitive sensors can register more than one touch at a time.
    • Surface capacitive (ATM, kiosks): Like resistive screens, they usually need recalibration, and because they're mounted on top of the display glass, they're prone to damage and wear.
    • Surface acoustic wave (ATM, large screens): these touch screens vibrate very rapidly. Sensors pick up how the touch of a finger affects those vibrations. The screens can be crisp and clear, but the sensor can't be sealed against the elements."
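To make the resistive case above a bit more tangible, here is a minimal sketch of the final firmware step as I imagine it (my own illustration, not from the article), with made-up 10-bit ADC calibration values: the raw X/Y readings from the two layers are linearly mapped onto screen pixels, which is also part of why such panels drift and need recalibration.

```python
SCREEN_W, SCREEN_H = 320, 240

# Hypothetical calibration: raw ADC values measured while touching known corners.
RAW_X_MIN, RAW_X_MAX = 120, 900
RAW_Y_MIN, RAW_Y_MAX = 150, 880

def raw_to_pixel(raw_x: int, raw_y: int) -> tuple[int, int]:
    """Linearly interpolate raw resistive readings into screen coordinates."""
    px = (raw_x - RAW_X_MIN) / (RAW_X_MAX - RAW_X_MIN) * (SCREEN_W - 1)
    py = (raw_y - RAW_Y_MIN) / (RAW_Y_MAX - RAW_Y_MIN) * (SCREEN_H - 1)
    # Clamp, because a worn or drifting panel (hence the recalibration) can go out of range.
    return (min(max(int(px), 0), SCREEN_W - 1), min(max(int(py), 0), SCREEN_H - 1))

print(raw_to_pixel(510, 515))  # roughly the centre of the screen
```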

Why do I blog this? a quick and dirty overview, just to be aware of the field.

Hacking and pervasive computing

This summer's issue of IEEE Pervasive Computing is especially focused on hacking and its role in the field of pervasive/ubiquitous computing. As Roy Want puts it in his editorial introduction, hacking can play a powerful role in pervasive computing as it can inspire "thought processes and reduce the time it takes to create a viable prototype". This process can take many forms: taking a device that performs one function and tweaking it so that it performs another, gathering unrelated components and commercial products to be repurposed, or rapid prototyping. In their introduction, the guest editors also highlight how "The advent of the Web along with the rise of open source communities have brought a resurgence in hacking", along with a good bunch of websites about this topic.

The issue covers examples involving the Nintendo Wii, the Chumby, and Bluetooth in cell phones, among other things, as well as a more theoretical description by Eric von Hippel and Joseph A. Paradiso of how hacking is valuable for user innovation. In this paper, they show how the hacker is a "lead user" who reinvents and modifies products to better meet his or her own needs.

Why do I blog this? simply looking at how the recent evolution of the object-hacking scene pervades the academic/engineering field.