A telepresence garment

Skimming through Eduardo Kac's "Telepresence and Bio Art: Networking Humans, Rabbits and Robots (Studies in Literature and Science)", I ran across his ten-year-old project called The Telepresence Garment and found it of particular interest nowadays:

I first conceived the Telepresence Garment in 1995 to investigate the notion of the mediascape as an expanded cloth; i.e., to consider wireless networking as a new fabric that envelops the body. The Garment, which I finished in 1996, gives continuation to my development of telepresence art. This time, however, instead of a robot hosting a human, we find the roboticized human body itself converted into a host. The Garment was designed as an interactive piece to be worn by any local participant willing to allow his or her body to be engaged by others remotely.

A key issue I have been exploring in my work as a whole is the chasm between opticality and cognizance, i.e., the oscillation between the immediate perceptual field, dominated by the surrounding environment, and what is not physically present but nonetheless still directly affects us in many ways. The Telepresence Garment creates a situation in which the person wearing it is not in control of what is seen, because he or she cannot see anything through the completely opaque hood. The person wearing the Garment can make sounds, but can't produce intelligible speech because the hood is tied very tightly against the wearer's face. An elastic and synthetic dark material covers the nose, the only portion of flesh that otherwise would be exposed. Breathing is not easy. Walking is impossible, since a knot at the bottom of the Garment forces the wearer to be on all fours and to move sluggishly.

Why do I blog this? this nicely expresses how clothing is changing (and will change), reshaped by emerging technologies such as ubiquitous/pervasive computing.

Visualization and Immersion of Life Sciences Data

Seeing is Believing is a very interesting article in The Scientist about information visualization. It tackles the fact that life scientists have to deal with a huge amount of information. The challenge is to develop relevant visualization techniques.

Computers do a great job of finding patterns in data when they're programmed to look for them, notes Jim Thomas, who heads the National Visualization and Analytics Center at Pacific Northwest National Laboratory (PNNL) in Richland, Wash., "but many times, you are discovering what questions to ask. Only the human mind has the ability to reason with what is seen, apply other human knowledge, and develop a hypothesis or question." High-end visualization tools have been long used in applications such as the study of jet turbulence and by security experts looking for "chatter" in reams of telephone calls and transmissions, but only now are such tools being used in the life sciences, says H. Steven Wiley, director of the Biomolecular Systems Initiative at PNNL.

What is also intriguing is this sentence: "Without them, more data won't necessarily translate into better science", a nice evocation of Latour's inscription theory.

For that matter, it seems that VR is still around:

A next generation of visualization software may strive not just to offer a view, but allow the viewer to enter the data. This total immersion concept is the idea behind Delaware Biotechnology Institute's "cave," a Visualization Studio that Silicon Graphics developed, which allows users to literally immerse themselves in the data, both visually and physically. (...) One of the great benefits of the immersive system, Steiner says, is that scientists can "walk around" the data and peer at it from every angle, and do so collaboratively, either remotely or from the same room. And that, Steiner adds, is the great benefit of visualization in general: It can foster interdisciplinary collaboration by helping scientists from a variety of backgrounds understand a problem in order to solve it in a more effective manner.

(image taken from the Delaware Biotechnology Institute)

Why do I blog this? it's interesting to see that VR is still relevant in data manipulation.

Telebeads: Social Network Mnemonics for Teenagers

I've recently read j-dash-bi's latest paper and it's very nifty: Telebeads: Social Network Mnemonics for Teenagers by Jean-Baptiste Labrune and Wendy Mackay (IDC2006). It's actually a participatory design paper that describes how they designed a curious artifact:

This article presents the design of Telebeads, a conceptual exploration of mobile mnemonic artefacts. Developed together with five 10-14 year olds across two participatory design sessions, we address the problem of social network massification by allowing teenagers to link individuals or groups with wearable objects such as handmade jewelery. We propose different concepts and scenarios using mixed-reality mobile interactions to augment crafted artefacts and describe a working prototype of a bluetooth luminous ring. We also discuss what such communication appliances may offer in the future with respect to interperception, experience networks and creativity analysis.

The ring addresses two primary functions requested by the teens: providing a physical instantiation of a particular person in a wearable object and allowing direct communication with that person. (...) We have just completed an ejabberd server, running on Linux on a PDA, which will serve as a smaller, but more powerful telebead interface
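To make the "physical instantiation of a person" idea concrete, here is a minimal sketch (my own, not from the paper) of how a bead could be bound to a contact: the Bluetooth address stands for the crafted object, and an event on the bead produces a message for the associated person. All addresses, names and the event handling are hypothetical; the actual prototype talks to an ejabberd/XMPP server, which is left out here.

```python
# Hypothetical sketch: binding a "telebead" (identified by its Bluetooth
# address) to a contact, and reacting when the bead is squeezed/detected.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Telebead:
    bt_address: str   # Bluetooth MAC of the luminous ring
    contact_jid: str  # XMPP address of the person the bead stands for
    label: str        # e.g. "best friend", "grandma"

BEADS = {
    "00:11:22:33:44:55": Telebead("00:11:22:33:44:55", "alice@example.org", "Alice"),
}

def on_bead_event(bt_address: str, event: str) -> Optional[str]:
    """Return a message to send to the associated contact, if any.

    Actually sending it would go through an XMPP client talking to the
    ejabberd server mentioned in the paper; that part is omitted here.
    """
    bead = BEADS.get(bt_address)
    if bead is None:
        return None
    if event == "squeezed":
        return f"{bead.label}'s bead was squeezed: thinking of you!"
    return None

if __name__ == "__main__":
    print(on_bead_event("00:11:22:33:44:55", "squeezed"))
```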

See the Bluetooth telebead ring and how the ring is associated with a contact image:

Why do I blog this? I like this idea of "mobile mnemonic artefacts" as part of a situated cognition framework: that's an interesting instantiation of communicating objects. Besides, the paper is full of good references about such devices.

Every extension is more than an amputation

Reading "Everyware : The Dawning Age of Ubiquitous Computing" by Adam Greenfield, I am trying to articulate the different "theses" with what I do in my research. One of the most relevant connection is the "Thesis 43" (p148): "Everyware produces a wide-belt of circumstances where human agency, judgement and will are progressively supplanted by compliance with external, frequently algorithmically-applied standards and norms". In this thesis, Adam exemplifies this by a quote by Marshall McLuhan I had also been amazed by: "every extension is [also] an amputation" (Understanding the Media, 1969).

This is exactly one of the conclusions of my PhD research, which addresses collaboration in a pervasive game (which can be considered a first step into an "everyware" world). In the context of my research, I found that automating the location awareness of others can be detrimental to how a small group behaves (regarding the division of labor among its members, the way they communicate, negotiate and infer each other's intents). This is better described in a paper called The Underwhelming Effects of Automatic Location-Awareness on Collaboration in a Pervasive Game.

My point is that automatically providing information about others' location in space can undermine group collaboration. This was shown by a field experiment we conducted last year. We compared different groups: some had this automatic awareness information and some did not. Groups given their partners' positions automatically had a less rich collaboration: they discussed less, negotiated the strategy less, tended to stick to the plan they had decided on before the game, and did not recall their partners' paths very accurately.

It actually goes even further: I would say that here "every extension is more than an amputation". Users gain the spatial positions of others but lose the important value of letting people express this information themselves. This is tied to the misconception that automatically sending my position is the same as letting me send a message to my buddies about it: the former sends only information, whereas the latter sends information AND the intention that I chose to send something relevant to my addressee.
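A toy sketch of this distinction (my own illustration, not from the paper): an automatic position update carries only data, whereas a self-disclosed position also carries the fact that the sender chose to disclose it, something the receiver can reason about.

```python
# Toy illustration of "information" vs "information + intention".
from dataclasses import dataclass
import time

@dataclass
class AutoPositionUpdate:
    sender: str
    x: float
    y: float
    timestamp: float  # emitted by the system, whether or not anyone cares

@dataclass
class DisclosedPosition:
    sender: str
    x: float
    y: float
    timestamp: float
    note: str  # why the sender thought this was worth telling right now

def interpret(msg) -> str:
    if isinstance(msg, DisclosedPosition):
        # The receiver can infer intent: the partner judged this relevant now.
        return f"{msg.sender} wants me to know they are at ({msg.x}, {msg.y}): {msg.note}"
    # With automatic updates there is nothing to infer beyond the raw data.
    return f"{msg.sender} is at ({msg.x}, {msg.y}) (system broadcast)"

if __name__ == "__main__":
    now = time.time()
    print(interpret(AutoPositionUpdate("Bob", 12.0, 3.5, now)))
    print(interpret(DisclosedPosition("Bob", 12.0, 3.5, now, "come help me search this corridor")))
```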

Why do I blog this? I am glad to see how the concrete user experience of pervasive computing can be articulated with the higher-level thoughts described by Adam in his book. It shows how a "user experience" angle is needed to better understand what is at stake when we are talking about "everyware". That's what we need in this academic CSCW project.

Today's terminology is weird

In terms of weird terminology, gtr consulting offers very curious concepts about the sociological impacts of emerging technologies. For instance, see their latest report (see the PDF table of contents):

  • iJunkies - The world at the touch of a button
  • Technomadism - Wireless life
  • TechnoBling - Technology must look good in addition to working well
  • Insulationships - How technology is mediating teens’ relationship with the world around them
  • The Neighbornet - Teen world expanded on the net
  • Ego Anglers - Looking for positive strokes on the net
  • The Digital Disguise - Transforming identities on the net
  • ACME Auteur - Creating, Producing & Directing on the net
  • Life Caching - Memory replaced by knowing where to find it
  • Brain Blur - Multi-tasking in 2006
  • Dataddiction - Teens can’t live without the web
  • The Chill-Challenged - Idle hands? Not today’s teens.

Of course, there is a lot of marketing frenzy here, a sort of urge to have categories for subgroups, but the underlying rhetoric is interesting. Some trends appear: junkies+addiction / neighbornet+insulationships (creation of subgroups: do they talk to each other?) / disguise (ok, maybe that's how a person from one subgroup has to behave to talk to another group) / blur (the brain is blurred, but what about the social bonds?) / ego anglers (the world is bad and they're looking for something better?).

Why do I blog this? it's interesting to see how today's trends are reflected in language, with odd portmanteau concepts.

Spotscout: a real time space exchange marketplace

Via SpringWise: SpotScout is a Web 2.0 + car park application:

SpotScout provides a system that creates and facilitates a real time space exchange marketplace. Formally established in 2004, SpotScout's aim has been to create the applications, develop the marketplace, and secure intellectual property rights to real time mobile to mobile space exchange.

SpotScout's mission is to be the world's first en-route space reservation mechanism for public, private and garage parking and to pioneer mobile commerce solutions and technologies placing SpotScout at the forefront of this exciting industry. SpotScout is an easy to use voice and web-enabled service that connects parking spaces with drivers searching for them.

SpotScout also allows users to post their personal parking spots (we call these people 'SpotCasters') for other motorists to use, thereby monetizing an increasingly scarce resource in our cities and towns.

The SpotScout community grows daily, and will continue to do so until every driver feels there is a mechanism that will reliably find them a parking space the moment they need one.

Why do I blog this? "a real time space exchange marketplace": what a concept! after trading virtual objects gathered in a city with the location-based game MogiMogi, you can now trade real space. One of the side-effect of the social web? What happens then, will we have game theory situations?

Using crossed self-confrontation to analyse intersubjectivity in a collaborative pervasive game

I am currently in the process of thinking about new field experiments using our pervasive game (CatchBob!). What I am interested in is improving my understanding of the intersubjective experience: how players infer others' activities and intents (what is called Mutual Modeling). For that matter, I am using a qualitative method, very common in the French tradition of "ergonomie" or "psychologie ergonomique", known as self-confrontation. There is a good description of self-confrontation in the paper "Methodologies for evaluating the affective experience of a mediated interaction" by Cahour et al. (2005):

The general idea of self-confrontation is to provide a subject with traces of his/her activity (more frequently audio or video recording, but also writings, schemas, annotations,…) in order to collect verbal descriptions of what was going on by putting him/her in the context of the past setting. In the same time external traces enables the analyst to control the correspondence between the verbal report produced by the subject (first person data) and the traces of the activity being observed (third person data). We also use some techniques of the explicitation interview when stopping the video and asking the subjects about what they lived (affectively, cognitively, bodily) during the sequence watched.

I already used this method in the first field experiment we completed. Now, in order to move forward, there is another interesting add-on called "crossed self-confrontation" (developed by Yves Clot) which is very well described by Philippe De Leener in his paper "Self-analysis of professional activity as a tool for personal and organisational change":

The two workers who have experienced self-confrontation review the picture of their own activity but now through the eyes of their fellow-worker. The first worker comments on the activity of the second and vice versa. Again a dialogical activity is initiated about the activity, but this time the players principally confront their experiences. The discussions and exchanges of points of view about the same activity give them an opportunity to re-examine their respective real-life activity and to reveal what is not necessarily self-obvious. So workers, be they researchers or developers, are in a better position to talk about what they have actually lived or about what they actually live when working in a participatory way.

Why do I blog this? I want to apply this crossed self-confrontation method to our next CatchBob! experiment. This means that after playing the game, I will conduct an interview with one of the players, showing him/her traces of a partner's gaming activity (with our replay tool), so that player B steps into player A's shoes, for instance. Then I'll interview that player (player A in my example) so that I can cross the descriptions.

The benefit I am expecting is an insightful description of the activity, on which I can rely to examine the players' intersubjectivity.

Hunaja: user study of a mobile social software

Three years ago, while scanning the literature and the web for a PhD topic about location awareness, I stumbled across Hunaja, a very pertinent piece of mobile social software developed by some good Finnish folks at Aula. I remember being briefly in contact with Jyri at the time.

Hunaja is an RFID access control system that enables users to remotely check who is logged in at a physical location by using the Web or a mobile phone. Hunaja was developed in 2001-2002 by Aula Cooperative, which is a non-profit organization based in Helsinki, Finland. In addition to controlling the doors of the Aula space, Hunaja has three unique features:

  • Linkage to Aula's weblog - enabling online members to remotely see who is logged in at Aula's physical space
  • SMS access - enabling members to check who's there with their mobile phones
  • A speech synthesizer at the door - enabling online members to send greeting messages. The messages are announced by a computer voice when the recipient logs in at Aula's door

For three months (May-July, 2002), Aula issued a trial set of 50 RFID tags to its members. Of those 50 members, 9 members were selected for this user study. All of the participants were in the 20-35 year-old age group, and their use of the system was followed and recorded for two weeks during the month of July.
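For readers unfamiliar with this kind of setup, here is a minimal sketch (my reconstruction, not Aula's code) of the logic behind such a presence system: an RFID badge-in marks a member as present, remote clients can ask who is in, and queued greetings are spoken when their recipient badges in. Tag IDs, member names and the speak function are made up.

```python
# Hypothetical sketch of a Hunaja-like presence system.
from collections import defaultdict

MEMBERS = {"tag-001": "Jyri", "tag-002": "Stephanie"}  # RFID tag -> member name (made up)

present: set[str] = set()
pending_greetings: dict[str, list[str]] = defaultdict(list)

def speak(text: str) -> None:
    """Stand-in for the speech synthesizer at the door."""
    print(f"[door speaker] {text}")

def badge_in(tag_id: str) -> None:
    """Called when a tag is read at the door."""
    member = MEMBERS.get(tag_id)
    if member is None:
        return
    present.add(member)
    # Announce any greetings that online members left for this person.
    for greeting in pending_greetings.pop(member, []):
        speak(f"Message for {member}: {greeting}")

def who_is_there() -> list[str]:
    """What the weblog page or the SMS gateway would return."""
    return sorted(present)

def leave_greeting(recipient: str, text: str) -> None:
    pending_greetings[recipient].append(text)

if __name__ == "__main__":
    leave_greeting("Jyri", "welcome back!")
    badge_in("tag-001")
    print(who_is_there())
```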

What is of particular interest to me is the fact that they conducted a very relevant user study. It was a focus group consisting of 9 people between the ages of 20 and 35, whose usage patterns were followed for two weeks in July 2002.

I was interested in the reasons why people scout others' moves: Entertainment / Time-saving / Spying / Romance / Avoidance / Professional interests / Recruitment. Here is an extract I found relevant to my work:

For the observers, Hunaja provided three media of ”browsing” other people: Web, SMS, and the Aula space. Hunaja worked as a personal intelligence system that enabled the users to optimize their actions (scout useful next steps) and build a strategy for personal positioning in the network.

Examples of observer behavior: a male user intends to meet a female user without wanting that person to know that he is looking for her; a user does not want to meet a specific person and uses Hunaja to avoid meeting that person in Aula; a user browses Aula member cards to recruit suitable people for a project. For the observed, Hunaja provided a method to ”be noticed”. Motives linked to this included the desire to make new contacts, showing commitment to developing the user community, personal branding, and career-building.

Some users were motivated by the desire to belong to a close-knit group. Stephanie, the 29-year old French graduate student, for instance, had become a Hunaja user because she had a strong desire to establish herself in communities of like-minded people in Helsinki. She placed strong symbolic value on the RFID tag as a token of group membership. For her, appearing on Hunaja was a prerequisite for group membership, and she took care to establish herself as an active user in the eyes of others. In a similar vein, Lisa, a 26-year old manager at an e-learning company, used the term “addiction” to describe her relationship to Hunaja. In the interview she said: “I thought, do people think that if I don’t show up on Hunaja or visit the weblog at least once a week, will they think that I want to keep up some kind of super privacy and that I’m fed up with Aula or something.” She felt a strong obligation to use Hunaja so as to “not give the wrong impression” of ignorance and passivity. Such instances describe situations where use of the technology becomes a prerequisite for group membership. You have to use the system in order to ”exist” in the community. This may be a strong driver for adoption of future mass-market technologies geared for “small worlds” like Aula.

Why do I blog this? because my PhD research is about how location awareness of others impacts social and cognitive processes.

Interview of an iRobot founder

An interesting interview with Helen Greiner, one of the founders of iRobot (the company that makes the Roomba vacuum-cleaning robot as well as tactical military robots used in Iraq).

Knowledge@Wharton: I don't think anyone would object to having a robot vacuum the floor, but do you find resistance to robots as a concept -- doing tasks that humans have been doing? Is there a science fiction element of this that makes people nervous?

Greiner: I don't really think so. When computers first came out, you had a lot of people worried that computers were going to obsolete humans and that they were going to take over everything. So you had everything from [the movie] The Colossus Project to Hal in 2001. I think it's a way for society to work through their fears. Once people have a computer on their desk and they see what it's good at doing and, more especially, what it's not good at doing, they don't have the same fear anymore. It's the same with robots. Once people have a Roomba in their home and it's doing the sweeping and vacuuming for them, but they see the things it can't do yet, they really don't fear robots taking over the world.

"Naming" the object seems to be one interesting behavior that popped up:

The only thing in their experience that has acted that way has been a pet. So people actually start to name it. You don't see anyone name their toasters but a lot of people tell me they have named their Roomba.

I would just temper this by saying that I've seen some friends (a few years ago, while we were living all together in a big condo) call their old-school vacuum cleaner "Daisy". Was naming certain kinds of home artifacts already a trend?

It's also refreshing to hear what she says about how people tinker:

Knowledge@Wharton: Have you heard stories of what people have done with this?

Greiner: Well, a few stories. One [involved] making a webcam on wheels so you can control your robot through the Internet and see what the robot sees and hear what the robot hears as you drive it around. Somebody made a robotic plant-moving system, so plants can always be in the sun. Someone was talking about making a swimming pool-skimming robot. And most recently, just this past week, some hackers did a physical instantiation of the video game Frogger. Now we don't condone this type of activity [laughs], but it shows you just where creativity can go when you make a system open.

The openness of the system is indeed FUNDAMENTAL if you want creative things to happen.

Why do I blog this? robots are an interesting domain where innovation is starting to appear, leaving the anthropomorphic paradigm to get closer to the pervasive computing world in which objects are interconnected and open (so that people can modify them).

Underground Trend Watching

To go beyond trend-spotting, underground watchers should pay attention to Brainsushi:

Avant-garde technologies, social mutations and cultural turmoil... New York vampyres, Mexican freaks, Silicon Valley nerds, Guatemalan gangsters, London fetishists or Japanese otakus, the Brainsushi agency is specialized in documenting contemporary phenomena that foresee the world of tomorrow.

Through its exclusive reports and documentation, brought together by a team of press and TV professionals who tirelessly travel the world and the digital networks for novelty, Brainsushi brings you to these ill-known territories where our societies’ future is brewing.

Documentary films, photographic reports or in-depth articles, our work is both meant for the most demanding connoisseurs and a mainstream audience. Beyond our portfolio, the member zone of this website (accessible on request), will allow you to appreciate the quality of our written and audio-visual productions.

Our main fields of expertise: Pop culture and counter culture / New technologies / Digital and outsider art / New body practices / Urban tribes and lifestyles / Extreme sports / Information society / Alternative sexualities

Why do I blog this? I find this kind of underground trend-watching consultancy interesting and curious.

Sharing in-game screenshots

Via AEIOU: Multitap is a new video-game-related webservice:

Multitap.net is a service that allows you to share your in-game screenshots with your friends. You can rate, discuss and categorise your screenshots as you see fit. Do something funny, interesting, bizarre or impressive in a game, and share a screenshot!

Multitap.net was designed to allow gamers to post screenshots of action during play, something we have done ourselves using forums and various image hosting services. (...) People are already starting to find out about it, and of course, suggestions are flying in on what features we should add…

Players upload their in-game pictures and comment on them. There are friends' lists.

Why do I blog this? I am wondering about the potential usage of such a platform.

Nabaztag + Everyware

In his book "Everyware : The Dawning Age of Ubiquitous Computing", Adam Greenfield says that:

I've never actually met someone who owns one of the "ambient devices" supposed to represent the first wave of calm technology for the home. There seems to be little interest in the various "digital home" scenarios, even among the cohort of consumers who could afford such things and have been comparatively enthusiastic about high-end home theater. (p91)

The Nabaztag wifi rabbit created by the French company Violet tries to go against this stance. Actually, and to be fair to Adam, what he criticizes in his book is rather the very complicated technologies that were supposed to be "calm", "intelligent" and "ambient" in the digital home of the future imagined a few decades ago.

Why do I blog this? it's funny that I received my Nabaztag and Adam's book on the same morning. I fully agree with lots of Everyware's claims; I'll post more about it once I've finished reading it.

What are "Futurists" responsibilities

(Via the dr fish mailing list), a "futurist" position is available at the NYT:

The New York Times Company is looking for a Futurist for its new Research & Development group.

The ideal candidate will be highly imaginative and well-informed about the social and technology trends affecting the creation, distribution and consumption of all forms of media now and in the future. We are looking for someone who has an innate curiosity and a passion for new ideas; someone with a facility for market research data and who can use that data to vividly paint a picture of how the world around us is evolving.

Responsibilities:

  • Spot trends in consumer behavior, in government regulation, and in marketplace conditions by continually mining available data sources and keeping abreast of influential thinkers and publications.
  • Project these trends into the future and suggest new directions for the Company's products and business development. Present these "crow's nest"/future trends briefings to senior management and other stakeholders.
  • Monitor the competitive landscape for The New York Times Company's portfolio of brands; help identify disruptive forces, threats and opportunities.
  • Participate in the brainstorming process with Creative Technologists on R&D team to help define new product prototypes for the company to test.
  • Provide context for the technology prototypes developed by R&D as these technologies are exposed to the business units.
  • Partner with the Business Catalyst on R&D team to identify early stage companies who are executing on new trends for potential partnerships and collaboration.
  • Help develop and execute an ongoing communications plan for R&D unit to share ideas within and throughout the Company.

Requirements:

  • Bachelor's degree preferred.
  • Experience with statistical and market research a must.
  • Media research experience recommended but not required.
  • Strong communication skills; ability to present to senior management and all levels of company.
  • Ability to write with clarity, precision and imagination in order to vividly portray possible futures.

Why do I blog this? the descriptions of the responsibilities/requirements are very pertinent and insightful; they show which kinds of activities and skills might be valuable for forecasting and trend watching.

ReD Associates

ReD Associates seems to be an interesting company:

ReD Associates is one of Europe's leading innovation agencies working with sophisticated user insights, product development and innovation strategy. ReD Associates is focused on generating top line growth for our clients through relevant innovation. We do this by applying cutting edge social science methods to business development, design, innovation and R&D.

Our strength lies in our ability to convert advanced and complex user insights into tangible business results. For us user insight is not the answer in itself but the means to create new innovation opportunities, new offerings and organic growth. Successful innovation poses three main challenges for a company: 1. Gaining relevant insights 2. Transforming insights into the right concepts and products etc. 3. Implement the solutions as an integrated part of the company

Therefore we have divided ReD Associates into three professional domains specialized in meeting these three challenges (click the menu or headers to read more about each domain).

1. Explore: User research, contextual research, innovation analysis

2. Create: New Product Development, Ideation, Prototyping

3. Anchor: Implementing, Innovation strategy

Why do I blog this? I am interested in the connection between R&D and tangible impacts on practitioners; that's a problem I often encounter while doing research (for private clients mostly, as a consultant). Besides, I am always wondering how to conduct "independent research" (if such a concept exists, like 'freelance research').

WiGLE.net: a submission-based catalog of wireless networks

WIGLE:

WiGLE.net is a submission-based catalog of wireless networks. Submissions are not paired with actual people; rather name/password identities which people use to associate their data. It's basically a "gee isn't this neat" engine for learning about the spread of wireless computer usage.

WiGLE concerns itself entirely with 802.11b networks right now, since it's REALLY hard to deal with cellular networks, 802.11a is so hard to catch, and everything else is so small-share. 802.11b appears to be experiencing an explosive growth, and it's neat to see it cover cities. (...) Overall, WiGLE aims to show people about wireless in a more-technical capacity than your average static map or newspaper article.
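As a rough guess at what a submission to such a catalog could look like (not WiGLE's actual format or API), each observation ties a network's BSSID and SSID to a position and a submitter handle, and the catalog keeps the latest sighting per access point:

```python
# Hypothetical sketch of a wardriving-style network catalog.
from dataclasses import dataclass

@dataclass
class Observation:
    bssid: str      # access point MAC address
    ssid: str
    lat: float
    lon: float
    seen_at: str    # ISO timestamp
    submitter: str  # the name/password identity mentioned above, not a real person

catalog: dict[str, Observation] = {}

def submit(obs: Observation) -> None:
    """Keep only the most recent sighting of each access point."""
    current = catalog.get(obs.bssid)
    if current is None or obs.seen_at > current.seen_at:
        catalog[obs.bssid] = obs

def networks_near(lat: float, lon: float, radius_deg: float = 0.01) -> list[Observation]:
    """Crude bounding-box query, good enough for a 'gee isn't this neat' map."""
    return [o for o in catalog.values()
            if abs(o.lat - lat) <= radius_deg and abs(o.lon - lon) <= radius_deg]

if __name__ == "__main__":
    submit(Observation("00:0f:66:aa:bb:cc", "linksys", 46.52, 6.63, "2006-05-01T09:00:00", "wardriver42"))
    print(networks_near(46.52, 6.63))
```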

Here is "The wireless world this morning (GMT-6:00)" as they say:

Why do I blog this? that's an intriguing community-based catalog of wireless networks.

Street Sudoku

Two days ago, I spotted a girl in Lausanne, Switzerland, solving a Sudoku on a street poster. It's actually an advertisement for a Swiss game, but some folks seem to like doing Sudoku at much bigger dimensions than the newspaper format. I spotted this picture in Geneva; it's the second street Sudoku that I have seen solved.

How does the size of the Sudoku change the way it's solved?

How does the fact that it is embedded in a broader context (being on a poster, in a public space...) modify the way passers-by can be engaged in such an activity?

Do you play sudoku on walls?

What is funny is that a few meters from this poster, one year ago, I also spotted these nice drawings on the ground, done by a few kids (which made me think of a street Tetris):

a real tetris?

A server on a mobile phone

After the server on a USB key, there is this project at Nokia of having a server running on a mobile phone (via). The motivation here is quite technology-driven:

For quite some time it has been possible to access the Internet using mobile phones, although the role of the phone has strictly been that of a client. Considering that the modern phones have processing power and memory on par with and even exceeding that of webservers when the web was young, there really is no reason anymore why webservers could not reside on mobile phones and why people could not create and maintain their own personal mobile websites.

But things get more interesting when they talk about the implications:

As a mobile phone contains quite a lot of personal data it is straightforward to semi-automatically generate a personal home page. And contrary to websites in general, a website on a mobile phone always has its "administrator" nearby and he or she can even participate in the content generation. For instance, we have created a web-application that prompts the phone owner to take a picture, which subsequently is returned as a JPG. That is, on a personal device the website can be interactive.

Further, that a website becomes mobile implies that certain properties of websites that hitherto have been mostly meaningless now need to be taken into account. As long as a website resides on a stationary server the physical location of that server lacks meaning, because it will never change. With a mobile website it does change and it is meaningful as the content that is shared may depend upon the current location and context. For instance, if you browse to a mobile website and ask the "administrator" to take a picture, the image you get depends upon the location of the website. Current search engines that update their indexes rather rarely may need modifications to be able to cope with the dynamism introduced by mobile websites.

Implications

We believe that being able to run a globally accessible personal website on your mobile phone has the potential of changing the Internet landscape. If every mobile phone or even every smartphone initially, is equipped with a webserver then very quickly most websites will reside on mobile phones. That is bound to have some impact not only on how mobile phones are perceived but also on how the web evolves.
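To give an idea of how small such a personal mobile website can be, here is a minimal sketch using Python's standard library (the Nokia prototype runs on Symbian, not this code; take_picture is a placeholder for whatever camera API the phone would expose):

```python
# Minimal sketch of a "website on the phone": a personal page plus an
# interactive endpoint where the owner is asked to provide a picture.
from http.server import BaseHTTPRequestHandler, HTTPServer

def take_picture() -> bytes:
    """Placeholder for the phone's camera API; returns JPEG bytes."""
    try:
        with open("latest.jpg", "rb") as f:  # pretend the owner just snapped this
            return f.read()
    except FileNotFoundError:
        return b""  # no picture available in this sketch

class PhoneSite(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            body = (b"<html><body><h1>My mobile site</h1>"
                    b"<a href='/photo'>Ask me for a picture</a></body></html>")
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body)
        elif self.path == "/photo":
            # Here the real prototype prompts the owner to take a picture;
            # the response depends on where the phone (and its owner) is.
            self.send_response(200)
            self.send_header("Content-Type", "image/jpeg")
            self.end_headers()
            self.wfile.write(take_picture())
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PhoneSite).serve_forever()
```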

Why do I blog this? even though the motivation at first glance was very engineer-centric, there are some curious implications, especially when thinking of the internet of things/blogject mumbling.

Arm-worn device for service technicians

The ABB Mobile Service Technician is a project led by Daniel Fallman.

Based on the findings of an ethnographic study at two vehicle manufacturing companies, we have designed and implemented a computer support system for service technicians. The system is arm-worn as opposed to traditionally handheld. It allows the user to interact with the physical environment by pointing, and lets the user navigate the graphical user interface one-handed by tilting.

The research goals of this project are to explore novel interaction styles, pointing and tilting, for mobile human-computer interaction applied in a specific work environment.
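A crude sketch of how tilt-to-navigate could work (my own guess at the logic, not Fallman's implementation): accelerometer readings along two axes are thresholded into discrete navigation events, so the wearer can move through a menu with one hand.

```python
# Hypothetical sketch: turning tilt readings into one-handed menu navigation.
from typing import Optional

MENU = ["Work orders", "Machine manuals", "Report fault", "Call supervisor"]

def tilt_to_event(pitch_deg: float, roll_deg: float, threshold: float = 20.0) -> Optional[str]:
    """Map arm tilt to a navigation event; small tilts are ignored as noise."""
    if pitch_deg > threshold:
        return "next"
    if pitch_deg < -threshold:
        return "previous"
    if roll_deg > threshold:
        return "select"
    return None

def navigate(readings) -> str:
    """Walk through a sequence of (pitch, roll) readings and return the chosen item."""
    index = 0
    for pitch, roll in readings:
        event = tilt_to_event(pitch, roll)
        if event == "next":
            index = (index + 1) % len(MENU)
        elif event == "previous":
            index = (index - 1) % len(MENU)
        elif event == "select":
            return MENU[index]
    return MENU[index]

if __name__ == "__main__":
    # Tilt forward twice, then roll the wrist to select.
    print(navigate([(30, 0), (25, 0), (0, 35)]))
```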

More about it here (.pdf)

Why do I blog this? I am often intrigued by whether such arm-worn devices are really used, and how. I tend to think that it's easier in specific contexts like manufacturing work, where there are normative behaviors and procedures.

Novo Infotainment Table

Via Joystiq, this interesting gaming table: the Novo Infotainment Table. The table actually features a 32" LCD touch screen, a built-in Shuttle PC or an Xbox or PS2, a rotating controller/keyboard panel, and a '60s spy-movie style. It's thus a multi-purpose table. I don't really like the keyboard version (I hate those keyboards), but the one with the joystick is pretty cool.

Why do I blog this? It's interesting to see that it's a way to repackage existing devices. This is the same phenomenon as the amBX system by Philips: instead of focusing on a gaming device per se, some companies innovate by providing a service that embeds the console/PC in a larger context/experience.

In fact, a cheaper version of this table is the following one, which Jan Chipchase showed last week: