A micro-jet engine in your cell phone

A bit old (fall 2004), but still stunning: Miniature jet engines could power cellphones:

Engineers have moved a step closer to batch producing miniaturised, jet engine-based generators from a single stack of bonded silicon wafers. These chip-based “microengines” could one day power mobile electronic devices.

By spinning a tiny magnet above a mesh of interleaved coils etched into a wafer, David Arnold and Mark Allen of the Georgia Institute of Technology, US, have built the first silicon-compatible device capable of converting mechanical energy - produced by a rotating microturbine - into usable amounts of electrical energy. The key advantage of microengines is that they pack in at least 10 times more energy per volume of fuel than conventional lithium batteries, take up less space and work more smoothly than much-touted fuel cells. (...) The US Army expects that soldiers - who currently rely on battery-powered laptops, night-vision goggles and GPS systems - will be the first to use the microengines.

An Autogrill Monument to remember

The AutoGrill monument is located on an Autogrill restaurant. A project by Zero-th (Ivar Lyngve and Luther Thie).

AutoGrill Monument is a sublime ambient display of real-time highway fatalities integrated into the popular Italian roadside restaurant AutoGrill in Novara, Italy. Each time a highway fatality occurs on the Italian Autostrada, an integrated alert system activates a jet of blue liquid that shoots 20 meters high to fill the water-filled column that pierces the roadside restaurant. Viewing of the Memorial Cloud is available both inside AutoGrill and from a distance of 2 km.

AutoGrill Monument serves two purposes: 1) To remember those who have lost their lives on Europe's most dangerous highways. 2) To alert and possibly cause speeding motorists to decelerate.

How do people share information?

(via) A report I had a quick glance at today: Toward Understanding Preferences for Sharing and Privacy (.pdf) by Judith Olson, Jonathan Grudin and Eric Horvitz (Microsoft). It's about how people share information, based on a survey of 30 people working in small and medium-sized companies.

We report on studies of preferences about privacy and sharing aimed at identifying fundamental concerns with privacy and at understanding how people might abstract the details of sharing into higher-level classes of recipients and information that people tend to treat in a similar manner. To characterize such classes, we collected information about sharing preferences, recruiting 30 people to specify what information they are willing to share with whom. Although people vary in their overall level of comfort in sharing, we discovered key classes of recipients and information. Such abstractions highlight the promise of developing simpler, more expressive controls for sharing and privacy.(...) Overall, participants in our study were unwilling to share most things with the public. Not everyone is comfortable sharing everything with their spouse. The pattern of information our participants are willing to share with their managers and trusted co-workers tracks those that they are willing to share with their families, except that work-related items are rated higher.

The paper provides a nice cluster analysis of the different kinds of sharing.
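The clustering itself is classic data analysis. As a rough sketch of the kind of computation involved (not the authors' actual code), one could feed a matrix of comfort ratings into an off-the-shelf hierarchical clustering routine; everything below, including the numbers, is invented for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Rows: recipient types; columns: information items.
# Values: made-up mean comfort-with-sharing ratings on a 1-5 scale.
ratings = np.array([
    [4.8, 4.5, 4.9],  # spouse
    [4.0, 2.0, 4.5],  # manager
    [3.9, 2.1, 4.4],  # trusted co-worker
    [1.2, 1.0, 1.5],  # the public
])

Z = linkage(ratings, method="average")          # agglomerative clustering
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # recipients with similar sharing profiles get the same label
```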

Homeplay: a trackball to explore a town

While googling I ran across Homeplay, a lesser-known project by collectif fact. I like the concept a lot!

The spectator holds a trackball and stands in front of the model of a town. Images are projected on top, onto the roofs of the buildings: top-down views of apartment interiors, or web pages of furniture brands. The spectator can move from one apartment to another or go downstairs. Furthermore, the spectator can create his own apartment by dragging and dropping words that stand for objects, furniture or actions. This artwork is a reflection on modes of representation connected to physical, mental and virtual architecture.

The installation is explained here.

Mobile Music Technology: 2nd international workshop

There is a smart workshop on Mobile Music Technology organized by the great Future Applications Lab in association with NIME 2005 in Vancouver, May 25. The organizers are Lalya Gaye (hi Lalya!), Lars Erik Holmquist and Atau Tanaka (great musician!).

In the late 1970's, the Walkman liberated recorded music - it allowed you to carry the listening room with you. Today, iPods and mobile phones allow new forms of private and social music experiences. What are the trends in mobile music technology? What kinds of new modes of musical interaction are becoming possible? Will peer-to-peer sharing and portable MP3 players destroy the music business - or will new technology let artists reach more people than ever before?

The programme will consist of presentations, interactive posters and hands-on break-out sessions. Accepted papers and interactive posters include:

* Papers: - "From Calling a Cloud to Finding the Missing Track: Artistic Approaches to Mobile Music" by Frauke Behrendt - "Location 33: A Mobile Musical" by William Carter and Leslie S. Liu - "The New Cosmopolites: Activating the Role of Mobile Music Listeners" by Gideon D'Arcangelo

* Interactive Posters: - "Solarcoustics: CONNECT" by Morgan Barnard - "Experimental Design for the Musicology Mobile Music Player" by Trevor Pering - "Mobile User Interface for Music" by Takuya Yamauchi and Toru Iwatake

The number of participants is strictly limited. To register, please FIRST contact Lalya Gaye, lalya@viktoria.se to confirm there is space. After your participation has been confirmed use the main NIME registration page to register and pay: http://hct.ece.ubc.ca/nime/2005/registration.html

CatchBob! analysis: division of labor

I wrote a script to parse the CatchBob! logfiles. It allows me to get interesting indexes with regard to the collaborative behavior of the players. CatchBob! is a treasure hunt; thus it's a spatial task in which participants have to collaborate to find the shortest path to the object (that's what they are asked to do during the experiments we conducted). This means that the division of labor concerns the way they spread over the campus and how they explore it thanks to the proximity sensor of their tool. What I would like to express here is that an index of the division of labor among the group would be the number of "zones" explored by each player. I divided the campus into a certain number of zones that correspond to squares of 20 meters (since that's the accuracy of our positioning device). My script gives me this and other interesting stuff (a rough sketch of the computation follows the list):

  • the number of squares explored by each player
  • the percentage of squares explored by each player (not so useful)
  • the number of backtracks for each player: the number of squares explored more than once by a player
  • the path overlap between A, B and C: the number of squares explored by 2 or 3 players (for each player).
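Here is a minimal sketch of these computations, assuming the parsed logs yield lists of (x, y) position samples in metres for each player; the function and variable names are mine, not the actual script's:

```python
from collections import Counter

GRID = 20  # square size in metres, the accuracy of the positioning device

def to_square(x, y):
    """Map a position in metres to a 20 m x 20 m grid square."""
    return (int(x // GRID), int(y // GRID))

def zone_indexes(positions):
    """positions: dict mapping player name -> list of (x, y) samples."""
    visits = {p: Counter(to_square(x, y) for x, y in pts)
              for p, pts in positions.items()}
    explored = {p: set(c) for p, c in visits.items()}
    total = len(set().union(*explored.values()))  # all squares anyone visited
    report = {}
    for p, squares in explored.items():
        others = set().union(*(s for q, s in explored.items() if q != p))
        report[p] = {
            "squares_explored": len(squares),
            "percent_explored": 100 * len(squares) / total,
            "backtracks": sum(1 for n in visits[p].values() if n > 1),
            "path_overlap": len(squares & others),
        }
    return report

print(zone_indexes({"A": [(3, 5), (25, 5), (3, 7)],
                    "B": [(3, 6), (45, 80)],
                    "C": [(100, 100)]}))
```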

Monkey research and computer games

David Washburn, a cognitive scientist, uses video games to investigate various psychological processes in monkeys. For instance, in his research about how macaques explore virtual mazes, he employed the following device:

The apparatus is used for computer task research with rhesus monkeys. The monkey reaches through the mesh of his home cage to manipulate a joystick, which in turn controls the movements of a cursor on the screen. Pellet rewards are automatically dispensed upon successful completion of each trial.

Audio Clouds: head, hand and device gestures for input on mobile devices

(via) Still the same lab in Glasgow: they also have a nice project investigating 3D audio on wearable computers to increase display space, plus how head, hand and device gestures may be used for input on mobile devices. It's called "Audio Clouds". There is a news story on the BBC about it.

"The idea behind the whole thing is to look at new ways to present information," Professor Stephen Brewster told. (...) "We hope to develop interfaces that are truly mobile, allowing users to concentrate on the real world while interacting with their mobile device as naturally as if they were talking to a friend while walking." "Lots of times, you need to use your eyes to operate a gadget - even with an iPod, you need to take it out of pocket to look at screen to control it. "If you could do something with your hands, or other gestures you would not have to take it out of your pocket," explained Professor Brewster. The researchers have developed ways to control gadgets, such as personal digital assistants (PDAs) and music players, using 3D sound for output and gestures for input. (...) Professor Brewster and his Multimodal Interaction Group realised that they could get other information out of accelerometers too. The actual variations in a person's gait could be read and harnessed for different uses.

This kind of stuff is now getting closer to the market. Phone companies are about to release similar products. I am eager to see people waving their hands in the streets just to zip files or to shuffle songs on their iPods!
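To give an idea of what such gesture input involves, here is a minimal sketch (my own, not the Glasgow group's code) of detecting a "shake" from 3-axis accelerometer samples; the thresholds and sample format are assumptions:

```python
import math

SHAKE_THRESHOLD = 15.0  # jump in m/s^2 between samples that counts as a jolt
JOLTS_FOR_SHAKE = 3     # jolts within one window that make a shake gesture

def magnitude(sample):
    """Overall acceleration magnitude of one (x, y, z) reading."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def is_shake(samples):
    """samples: list of (x, y, z) accelerometer readings in m/s^2."""
    mags = [magnitude(s) for s in samples]
    jolts = sum(1 for a, b in zip(mags, mags[1:])
                if abs(b - a) > SHAKE_THRESHOLD)
    return jolts >= JOLTS_FOR_SHAKE

# e.g. poll the sensor, keep a sliding window of recent samples, and
# trigger "shuffle songs" when is_shake(window) becomes true.
```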

A Bovine Rectal Palpation Simulator for Training Veterinary Students

No, it ain't spam. This haptic cow simulator was done by the Glasgow Interactive Systems Group and the Faculty of Veterinary Medicine (University of Glasgow, UK):

Bovine rectal palpation is a necessary skill for a veterinary student to learn. However, lack of resources and welfare issues currently restrict the amount of training available to students in this procedure. Here we present a virtual reality based teaching tool - the Bovine Rectal Palpation Simulator - that has been developed as a supplement to existing training methods. When using the simulator, the student palpates virtual objects representing the bovine reproductive tract, receiving feedback from a PHANToM haptic device (inside a fibreglass model of a cow), while the teacher follows the student's actions on the monitor and gives instruction. We present a validation experiment that compares the performance of a group of traditionally trained students with a group whose training was supplemented with a simulator training session. The subsequent performance in the real task, when examining cows for the first time, was assessed with the results showing a significantly better performance for the simulator group.

Definitely a tangible-oriented design! The corresponding scientific paper: Baillie, S., Crossan, A., Brewster, S.A., Mellor, D. and Reid, S. Validation of a Bovine Rectal Palpation Simulator for Training Veterinary Students. In Proceedings of MMVR 2005 (Long Beach, CA, USA).

Group uses of mobile devices

There is a very relevant interview with Jeff Axup in the newsletter and discussion group Mo:Life (he also put it on his weblog). Jeff works on mobile technologies for backpackers, using ethnographic and participatory methods. Some pertinent excerpts follow. Group usage of mobile devices (like cell phones) is a strikingly new by-product of their massive adoption:

Several recent research studies have shown a variety of examples of communal phone usage, including turn-taking, borrowing, and sharing of communication content. In addition to usage of devices by groups in-person, remote users also affect our individual use.

Jeff goes on to describe how he envisions the phone of the future:

If designed properly they will complement existing group goals and behaviours. They will enable us to communicate with networks of people in ways that were impossible or insufficiently usable before. To give a tangible example: backpackers currently communicate face to face, via physical message boards in hostels and to some degree via SMS, IM and phone calls. In the future they could be informed of interesting people they could talk to, form instantaneous, short-term communication channels while on tours, or tap into community-authored travel advice. People are inherently social, but we still lack the ability to easily communicate to groups in many circumstances where we would like to.

And his take on communication problems is also interesting:

We recently ran a study looking at a group of three people using a mobile discussion list prototype to search and rendezvous at an unknown location. We discovered a number of usability problems related to SMS discussion list usage including: multitasking during message composition and reading; speed of keyboard entry; excessive demand on visual attention; and ambiguity of intended recipients. More generally speaking, mobile devices still suffer from expensive wireless data connectivity, poor input devices and lack of contextual awareness. Mobile users still have difficulty easily communicating with groups, transferring information between their phones, and finding software to support their daily activities. Groups face challenges of visualizing their own behaviour, coordinating actions and communicating physical location and plans efficiently.

Concerning "Web-based travel diaries are increasingly used to communicate location and travel experience to family and friends and soon picture-phones will integrate seamlessly with this.", that reminds me what my friend Anne Bationo analyses for her PhD thesis. She is working for telco operator France Telecom on travellers' narratives. She applies a user-centred approach to envision new instruments, to support travellers when performing their activities. A description of her work might be found in this paper: Travelling narrative as a multi-sensorial experience: A user centred approach of smart objects.

A typology of spatial expressions

Ioannidou I. & Dimitracopoulou A., Final Evaluation Report. Part II. Children in Choros & Chronos Project. Esprit/I3, 2001. The authors report on a study of how kids collaborated on a treasure hunt (two teams: one in the field and the other in a 'control room'), a bit different from our CatchBob! thing. I found in this report an interesting typology of spatial expressions used in their quantitative analysis:

  • Topological referents: where positioning, orientation, or motion in space is determined via reference to objects located in space. Specifically, we include expressions that refer to relations between objects in space (close to, in front of, etc.).
  • Intrinsic referents (projective, body-centered, or body-syntonic): where positioning, orientation, or motion in space is determined with regard to a specific viewpoint from which the objects are observed. Under this category we ranked expressions that result from the pupils' own point of view (which according to Piaget and Inhelder (1967) is the source of simple projection) or from the pupils' changing of viewpoint (on our left, on your left, respectively, etc.).
  • Euclidean referents: where positioning, orientation, or motion in space is determined by using the metric system, making calculations and using coordinates. The spatial expressions under this category refer only to the use of the metric system and the estimation of relative distance.
  • Combination of referents: where more than one of the above types of referents is used to determine one position or direction in space.
  • Context-bound referents: where positioning, orientation or motion in space is determined in terms of a specific representation or environment (the computer screen or the real space). Context-bound expressions were used mostly in within-group communication and in several cases were accompanied by gestures (e.g. "they are here", showing the place on the screen). "Down" is considered context-bound because it derives from the two-dimensional representation of space on the computer screen and defines the area on the lowermost side of the screen. Context-bound referents were primarily exchanged in within-group discourse, where pupils could see each other and mediate their talk with gestures and reference to the experience the group was sharing.
  • Context-bound intrinsic referents: where positioning, orientation and motion in space are determined with regard to a specific point of view but also include references to idiosyncratic elements of the environment in which they are produced. Expressions like "the store room is here" are reported under this category because the pupil shows a position in space with a gesture, taking his or her own point of view as the referential point.
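Purely as an illustration of how such a typology could be operationalized for quantitative analysis, here is a hypothetical keyword-based tagger. The cue phrases and fallback label are stand-ins I made up, not the authors' actual coding scheme (which was done by hand):

```python
# Hypothetical cue phrases per referent type; illustrative only.
TYPOLOGY = {
    "topological": ["close to", "in front of", "next to", "behind"],
    "intrinsic": ["on our left", "on your left", "on my right"],
    "euclidean": ["meters", "coordinates", "degrees"],
}

def tag_expression(utterance):
    """Return the referent types whose cue phrases occur in the utterance."""
    text = utterance.lower()
    types = [t for t, cues in TYPOLOGY.items()
             if any(cue in text for cue in cues)]
    if len(types) > 1:
        return ["combination"]  # more than one referent type used together
    # Context-bound expressions depend on gestures or the shared screen,
    # so keywords alone cannot catch them; fall back to "unclassified".
    return types or ["unclassified"]

print(tag_expression("the store is close to the fountain, 50 meters away"))
# -> ['combination']
```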

From space invaders to rubik space

French artist Space Invader now investigates a new domain: the Rubik's Cube. His new exhibition started on March 24th at Galerie Patricia Dorfmann in Paris (61 rue de la Verrerie, 4th arrondissement).

As a matter of fact, the guy is now moving from space invaders to Rubik's Cubes (he already worked on this concept for the NYC show "While you were playing Rubik's Cube" in 2003). Stay tuned!

Use a mannequin hand to get a Nintendo DS!

A weird contest run by Nintendo: Give Me a Hand. The point is to show that touching is good by incorporating a spooky detached mannequin hand into your pictures!

What do you like to touch? Sand, silk, snot? Ever notice how real life can sometimes look like a game? Grab a camera (and a mannequin hand), create something cool and enter our contest. Who knows, you might even make it into the Gallery.

Plus, we'll pick a few of the best and send them a Nintendo DS or... some cold hard cash... which feels real good when you touch it.

Very creepy and odd! The best thing is that you can even ask Nintendo and they'll give you one (this is for people like me who don't have a spare mannequin hand in their backpack!).

Don't have a mannequin hand laying around to use in your pictures and videos? We had a couple thousand ourselves, but you guys cleaned us out! We'll see if we can get more, but in the meantime...

You don't need to start tearing the hands off the poor mannequins at your local department store, there's a much better way to get yourself a hand. Just click on the icon [No, I put the gzipped hand here] below to download your very own digital hand. Then all you have to do is print it, cut it out, and start touching.

A breath-based controller?

I am struggling to find an example of a breath-based controller used in video games or any computer application. This breath-based MIDI controller could certainly be hacked. The music industry (mostly electronic) has borrowed a lot from the video game world (blip music, chip music... for instance), but what about the other way around? I am pretty sure it would be fun to:

  1. hack an existing MIDI controller to play video games: like using a piano or a bass guitar to play Super Mario, using knobs or weird buttons.
  2. produce/design relevant game controllers based on the MIDI protocol, taking advantage of innovative interaction modes: for instance, a breath-based interface.

Any known hacks out there? If there is a hamster-powered MIDI sequencer, there must be someone on earth who has built a crazy MIDI game controller.
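For idea 1, a rough sketch could look like this, assuming the mido and pynput Python libraries (pip install mido python-rtmidi pynput); the note-to-key mapping is arbitrary, not taken from any existing hack:

```python
import mido
from pynput.keyboard import Controller, Key

keyboard = Controller()
NOTE_TO_KEY = {60: Key.left, 62: Key.right, 64: Key.space}  # C4, D4, E4

# Open the default MIDI input (the piano, bass guitar or breath controller)
# and turn note-on messages into game keypresses.
with mido.open_input() as port:
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            key = NOTE_TO_KEY.get(msg.note)
            if key is not None:
                keyboard.press(key)
                keyboard.release(key)
```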

Human-to-pet interaction improvements

(via) Poultry Internet, developed at the University of Singapore:

a novel cybernetics system to use mobile and Internet technology to improve human-to-pet interaction. It can also be used for people who are allergic to touching animals and thus cannot stroke them directly. This interaction encompasses both visualization and tactile sensation of real objects.

Figure 1 shows the Office System, where the pet owner touches the doll and at the same time feels the movement of the doll as driven by a positioning mechanism table. Figure 2 shows the pet (we use a rooster) with a "pet dress" worn on its body. The pet dress consists of electronics that simulate touch (or haptic) sensation; the pet feels it when the owner fondles the doll in the Office System.

The advantage of this system is to bring a sense of physical and emotional presence between man and animal. It thus attempts to recapture our sense of togetherness with our animal friends, just like times gone by on the prairie, in the village, or in the jungle. It can also be used by people who are allergic to touching animals and cannot fondle them directly. This interaction encompasses both visualization and tactile sensation of real objects.
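The figures suggest a simple event relay between the doll and the dress. Purely as a hypothetical sketch (the message format, host and port are invented for illustration), the office side could forward touch events like this:

```python
import json
import socket

PET_DRESS = ("pet-dress.local", 9000)  # assumed network address of the dress

def forward_touch(region, pressure):
    """Send one doll-touch event to the matching actuator in the pet dress."""
    event = json.dumps({"region": region, "pressure": pressure}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(event, PET_DRESS)

forward_touch("back", 0.7)  # the owner strokes the doll's back
```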

They also provide plenty of renderings of this system.

NFC powered presence

Finally, an interesting hack using NFC (Near Field Communication, which is, incidentally, closer to market reality): Janne Jalkanen developed an NFC-powered presence application:

I took two NFC tags (essentially very small memory cards with a radio that can be read/written from up to a few centimetres), wrote the URL of my web service on both of them (using the ServiceDiscovery app included), and wrote a little JSP page that handles the interfacing with my blog.

Then I stuck one tag on my work monitor, and another one at home. Now I can just touch one of these tags with my phone, and a few seconds later (some delays are involved with starting the Java midlet and connecting to GPRS) the little box on the right changes to show my location. Voila: NFC-powered presence.

This is in essence no different from doing a Trackback ping; I'm just doing it by touching something with my phone. Not traversing menus, not using the keyboard, not even glancing at the screen.
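Janne's server side is a JSP page, but the gist is small enough to sketch in a few lines of Python instead. The endpoint, parameter name and port below are made up; each tag would simply encode a URL such as http://example.com:8080/presence?loc=work:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

location = "unknown"  # what the little box on the blog would display

class PresenceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global location
        query = parse_qs(urlparse(self.path).query)
        location = query.get("loc", [location])[0]  # e.g. "work" or "home"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"Location set to {location}".encode())

HTTPServer(("", 8080), PresenceHandler).serve_forever()
```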