From location to places

(via pierre) Extracting Places from Traces of Locations by Jong Hee Kang, William Welbourne, Benjamin Stewart, Gaetano Borriello; WMASH 2004: 110-118.

Location-aware systems are proliferating on a variety of platforms from laptops to cell phones. Locations are expressed in two principal ways: coordinates and landmarks. However, users are often more interested in “places” rather than locations. A place is a locale that is important to an individual user and carries important semantic meanings such as being a place where one works, lives, plays, meets socially with others, etc. Our devices can make more intelligent decisions on how to behave when they have this higher level information. For example, a cell phone can switch to a silent mode when the user is in a quiet place (e.g., a movie theater, a lecture hall, or a place where one meets socially with others). It would be tedious to define this in terms of coordinates. In this paper, we describe an algorithm for extracting significant places from a trace of coordinates, and evaluate the algorithm with real data collected using Place Lab [14], a coordinate-based location system that uses a database of locations for WiFi hotspots.

One of the algorithms for extracting significant places from a trace of coordinates.
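For the algorithmically curious: the paper builds places out of a time-based clustering of the coordinate stream. Here is a minimal sketch of that general idea in Python; the function, data layout and thresholds are my own illustration, not the authors' code:

```python
from math import hypot

def extract_places(trace, d=30.0, t=300.0):
    """Time-based clustering sketch: a run of consecutive coordinates
    becomes a 'place' once the user has stayed within distance d
    (meters) of its centre for at least t seconds.
    `trace` is a list of (x, y, timestamp) tuples; the thresholds are
    illustrative, not the paper's tuned values."""
    places, cluster = [], []
    for x, y, ts in trace:
        if cluster:
            cx = sum(p[0] for p in cluster) / len(cluster)
            cy = sum(p[1] for p in cluster) / len(cluster)
            if hypot(x - cx, y - cy) > d:
                # The user moved away; keep the cluster as a place
                # only if they dwelled there long enough.
                if cluster[-1][2] - cluster[0][2] >= t:
                    places.append((cx, cy))
                cluster = []
        cluster.append((x, y, ts))
    if cluster and cluster[-1][2] - cluster[0][2] >= t:
        cx = sum(p[0] for p in cluster) / len(cluster)
        cy = sum(p[1] for p in cluster) / len(cluster)
        places.append((cx, cy))
    return places
```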

Cognitive fooding laboratory by Loris Gréaud

An art installation by Loris Gréaud called "Cognitive fooding laboratory": eating modified food (watercress saturated with anthocyanin pigments) may allow visitors to expand their night-vision skills... when food is meant to augment cognition...

Loris Gréaud then invites us to improve our visual acuity with watercress saturated with anthocyanin, a natural pigment customarily given to fighter pilots to enhance their night vision. Taste some before taking your place in front of the Dream Machines, where you will have to close your eyes in order to "see". These light boxes, inspired by the Dream Machine of the painter and modern alchemist Brion Gysin, receive your thoughts and convert them into images. Thought makes itself visible in this work, which ties into the fascination of the nineteenth century, but also of Kandinsky and Kupka, with vibratory phenomena.

On the left: Loris Gréaud, CFL (cognitive fooding laboratory / compact fluorescent light), 2004. Laboratory, aluminium fittings, aluminium profiles, plexiglas tubes, foam, modified watercress shoots, fluorescent grow tubes. Design: James Heeley.

Picture credits: © Elisa Pone. Courtesy gb agency

On the right: Loris Gréaud, Dream machines, 2004. Electrical development: Jérôme Barbé. Production: gb agency and Le Plateau / Frac Ile-de-France.

Picture credits: © Marc Domage

New blog about space/place/locative tech: smartspace

Found via Technorati: smartspace by Scott Smith of Social Technologies (an international futures research and consulting firm based in Washington, DC):

Welcome to Smartspace, a new blog about annotated environments, intelligent infrastructure and digital landscapes--the merging of technology with the environment around us, and the overlay of digital environments on the physical ones we inhabit.

This includes discussions, observations and insights on ubiquitous and embedded computing, mapping, location-based services, surveillance and tracking, geotagging, smart homes, intelligent environments, the annotated reality, and virtual worlds, where they increasingly intersect with the physical.

An increasing amount of interest, research, development, investment and regulation is being directed at the world of smart spaces. The purpose of Smartspace is to provide context and explore the implications of the convergence of the above-mentioned factors as they relate to these activities. Hopefully we will feature interviews, guest authors, and other interesting features and content that make Smartspace a compelling read.

I found it because he expanded on my post about giving one's location while calling on a cell phone; Scott adds this intriguing workaround:

Meanwhile, I find it interesting that, while we are waiting for applications that alert the person on the other end of a mobile discussion automatically as to our location as the call comes in, it would be easier at the moment to take a picture of myself on the train and MMS it to my wife using something like ZoneTag, allowing her to see where I am before I call. Talk about a workaround.

Indeed, an image can convey the context the user wants to show, with the level of accuracy (in terms of contextual cues) he/she wants in the message.

Why do I blog this? Another interesting contributor in the field of social usage of space/place/locative tech, with very relevant ideas so far.

BookCrossing Zones

I am a great fan of the concept of bookcrossing; it really corresponds to a habit of mine. Now what I find interesting is the notion of BookCrossing Zones. According to Wikipedia:

Official BookCrossing Zones, sometimes called OBCZs or OBZs, are located in places like Starbucks coffee shops, restaurants or other places accessible to the public. These OBCZs refer to bookshelves placed there so that BookCrossers can catch or release books.

Look at this OBCZ World Map! Still a work in progress but very interesting.

Toward a common RSS icon

I am not that interested in web icons and usability, but RSS syndication icons are sometimes a bit too... different, as Workbench points out:

Considering the number of ways that web publishers show their readers they offer feeds, it's amazing we've gotten that many:

In an effort to make the concept of syndication easier for mainstream users, the next versions of the Internet Explorer and Opera browsers will identify RSS and Atom feeds with the same icon used in Mozilla Firefox. Since the market share of these browsers tops 95 percent, the icon will become the de facto standard for syndication overnight when the next version of Microsoft Windows comes out later this year.
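For context, the icon shows up because browsers discover feeds through autodiscovery `<link>` elements in the page head. A minimal sketch of that detection using only Python's standard library (the class name and example URL are mine, for illustration):

```python
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedFinder(HTMLParser):
    """Collects feed URLs advertised via <link rel="alternate"> tags,
    the same autodiscovery mechanism browsers use to show a feed icon."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link"
                and (a.get("rel") or "").lower() == "alternate"
                and (a.get("type") or "").lower() in FEED_TYPES):
            self.feeds.append(a.get("href"))

finder = FeedFinder()
finder.feed('<link rel="alternate" type="application/rss+xml" '
            'href="/index.xml">')
print(finder.feeds)  # ['/index.xml']
```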

Pet master electronic guide

I am always stunned by pet technology (that's why I like petistic); there are really incredible innovations in this field. Look at this Pet Master Electronic Pet Guide:

Find out exactly what you need to know about your cat or dog, instantly—with the push of a button! Get emergency information—even on the road! Find the likely causes and treatments for common symptoms. Even get training, nutrition and exercise tips! Product Highlights:

  • Includes a built-in shopping list for pet care!
  • Frames a photo of your pet on the back!
  • On the road? Find a pet-friendly motel fast!
  • After hours emergency? Get the location and phone number of pet clinics close to you, wherever you are in the U.S.!
  • In fact, get practically all the information you’ll ever need about your best friend with Excalibur’s Pet Master!
  • A donation is made to animal charities with the sale of each Pet Master!

Why do I blog this? even though pet tech is somewhat cliché, there are sometimes interesting innovative practices, which are not so far from what we do with human beings. And besides, locative/spatial issues are tightly related whether they concern pets or humans.

Reconfiguration of social, cognitive and spatial practices in cities due to technological innovations

After my post about the inevitable existence of electronic ghettos in cities (quoting Mike Davis and William Gibson), I had a discussion with Anne about how technologists (and hence interaction designers) are sometimes unaware of the side effects of their creations, especially in terms of social, political or even cognitive practices. For that matter, I am interested in the reconfiguration of specific practices in cities due to technological innovations. For some time I have been trying to list interesting case studies about this. Books like "City of Bits: Space, Place, and the Infobahn" (William J. Mitchell), "Smart Mobs: The Next Social Revolution" (Howard Rheingold) or "Beyond Blade Runner: Urban Control, the Ecology of Fear" (Mike Davis) give some elements. I tried to find other examples.

Before the introduction of elevators/lifts, there was a different social distribution of people within the spatiality of buildings. Rich people lived on the first floor, to avoid having to climb stairs; the higher you went in a building, the less wealthy its inhabitants. The use of elevators in residential buildings (previously the elevator was just used to carry materials such as coal) inverted this distribution: the top floors, now accessible thanks to the technology, were for rich people. This is an example of how a technology created a social reconfiguration in space.
Another kind of effect is of course related to cognition. There are indeed important consequences of having information about public transport, now made possible by new technologies (urban information displays in the vehicle or on an information board), and of the organization and interoperability of that information. For example, I like this example by Vincent Kauffman (urban sociologist here at the school): the regularity of train schedules (there is a Geneva-Lausanne train every 20 minutes, at regular offsets: 7:45, 8:15...) plus the interoperability of transport means (the departure of city buses is coordinated with train arrivals) allows people to easily remember commuting schedules and hence better predict how they will manage their spatial practices. These new technologies (urban displays) and the organization of information (due to technological advances) impact cognitive mechanisms (i.e. memory, in the example I described). What's next? Would such an intelligent system achieve its goal (i.e. facilitating navigation by suggesting all possible alternative shortest routes that connect two or more transition points on a map)?
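To make the route-suggestion point concrete: such a system boils down to shortest-path search over a transit graph. A minimal Dijkstra sketch; the network, station names and travel times are made up for illustration:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over a transit graph: graph[a] = [(b, minutes), ...].
    Returns (total_minutes, route) or (inf, []) if unreachable."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, route + [nxt]))
    return float("inf"), []

# Hypothetical network; times in minutes.
network = {
    "Geneva": [("Lausanne", 35)],
    "Lausanne": [("Geneva", 35), ("EPFL (bus)", 15)],
}
print(shortest_route(network, "Geneva", "EPFL (bus)"))
# (50, ['Geneva', 'Lausanne', 'EPFL (bus)'])
```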
Likewise, there have been interesting concerns lately about whether location-based services might modify behaviors and practices in cities. This question often pops up when people think about location-based games. Results from the MogiMogi game test showed very interesting behaviors: players who wander around the city by car or metro when new objects are released; or a player who complained because he went to a place where he thought an object would be, but it was not present since it only appeared when the moon was full.

Also, Daniel Blackburn (manager of Carbon Based Games) questions whether Bluetooth social games might modify people’s behavior in physical space by creating new technosocial situations:

With GPS games such as Mogi, some players would detour from their everyday routes to go and pick up a virtual object. With Bluetooth-enabled games, will people try to get within range of someone whose phone is in their bag, so they are unlikely to hear it, in order to steal virtual objects without their knowledge? Or will they steer clear of people at work because those people are at a higher level in the game than them and they want to avoid defeat again? Or will they be constantly checking their phone because they’re convinced someone is trying to virtually assassinate them and could set off a bomb at any time, meaning they would need to run with their phone to get it out of range of the blast?

Even though I like these examples, I am still dubious about this last one (compared to the two others); there are still lots of big expectations around LBS.

Why do I blog this? Well, what I want to show here is that technologies sometimes reshuffle human practices in terms of spatial dispersion, cognitive appraisal of space and the social organization of infrastructures. Maybe I should write a better discussion of this and wrap it up in a paper; it is quite messy here. This said, there is still the question of foreseeing future reconfigurations due to emerging technologies.

Life on cell phone?

(Via emily) The Korea Times has a good piece about researchers at Samsung Electronics who want to bring cell phones to life through the use of avatars that will have the ability to think, feel, evolve, and interact with users.

The team, led by Prof. Kim Jong-hwan at the Korea Advanced Institute of Science and Technology, is hooking up with Samsung to create the attention-grabbing software outfitted with ``artificial chromosomes.''

``This software can feel, think and interact with phone owners. It will breathe power into cell phones, bringing the gadgets to life,'' Kim said. (...) His former top lieutenant Lee Kang-hee said a three-dimensional avatar will lurk inside the cell phone and adjust itself to the characteristics of the cell phone carriers.

``It's just like a sophisticated creature living inside a cell phone. An owner will be allowed to set its first personality by defining the underlying DNA,'' said Lee, who will join Samsung Electronics tomorrow.

``However, it is up to the avatar how its personality develops with the owner. Its personality can get better or worse depending on how people treat it,'' he said.

Lee added folks will be able to deal with loneliness felt by the avatar, which will pop up on the phone when they feel alone, by touching a button.

Should the owner refuse to respond to the signal, the avatars will change their personalities either to express such feelings more often or just to become depressed, according to Lee.
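To picture the feedback loop Lee describes (an initial "DNA" set by the owner, then a personality that drifts with how it is treated), here is a toy sketch; it has nothing to do with Samsung's actual software, and every name and number is my own invention:

```python
class Avatar:
    """Toy model: the owner sets an initial personality ('DNA'),
    then responses (or the lack of them) nudge it over time."""
    def __init__(self, sociability=0.5):
        self.sociability = sociability  # 0 = depressed, 1 = expressive

    def signal_loneliness(self, owner_responded: bool):
        # Attention makes it more expressive; neglect makes it withdraw.
        delta = 0.1 if owner_responded else -0.1
        self.sociability = min(1.0, max(0.0, self.sociability + delta))

pet = Avatar(sociability=0.5)
pet.signal_loneliness(owner_responded=False)
print(round(pet.sociability, 1))  # 0.4 -- drifting toward depressed
```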

Why do I blog this? this is very close to one of the blogject scenarios we thought of at the workshop.

Intentional affordances of objects

JOINT ATTENTION AND CULTURAL LEARNING IN HUMAN INFANCY by Tomasello, 1999.

Early in development, as young infants grasp, suck, and manipulate objects, they learn something of the objects’ affordances for action (Gibson, 1979) (...) but the tools and artifacts of a culture have another dimension - what Cole (1996) calls the ‘ideal’ dimension - that produce another set of affordances for anyone with the appropriate kinds of social-cognitive and social learning skills. As human children observe other people using cultural tools and artifacts, they often engage in the process of imitative learning in which they attempt to place themselves in the ‘intentional space’ of the user - discerning the user’s goal, what she is using the artifact ‘for’. By engaging in this imitative learning, the child joins the other person in affirming what ‘we’ use this object ‘for’: we use hammers for hammering and pencils for writing. After she has engaged in such a process the child comes to see some cultural objects and artifacts as having, in addition to their natural sensory-motor affordances, another set of what we might call ‘intentional affordances’ based on her understanding of the intentional relations that other persons have with that object or artifact - that is, the intentional relations that other persons have to the world through the artifact

Why do I blog this? Through intention reading and imitation, kids learn the functions, the “intentional affordances”, of objects used for instrumental purposes. I like this distinction between natural and intentional affordances.

Slogging = sensor logging

In the slides of his presentation (.pdf, 11.6Mb, great document anyway), Mark Hansen describes the concept of "slogging": sensor logging, which is very similar to the blogject concept:

Slogging:

  • What would happen if sensing technology became as easy to use as a blog or a vlog?
  • What would it mean for users to have “varying degrees of participation” in slogging?
  • What would happen if a Web grows atop a collection of such sensor networks?
  • Would we see communities spring up around data, around sensor logs? A neighborhood monitors its own air or water quality. New images of urban life are already being considered in instrumented cities

Support for the slog

  • Once filters are designed to identify higher-level events, how should we “publish” them?
  • Maybe we can again take guidance from the blogging and vlogging community
  • Would some variant of RSS be appropriate?
  • Perhaps we can consider specialized aggregators that serve the function of the backyard bird watchers or the amateur seismologists and identify events
  • Or, we feed it all to google...
  • And speaking of google, what would a search engine look like in this context?

And finally a philosophical question

  • When data collection and interpretation is not left to organizations like the EPA or other official bodies, there is bound to be a social shift

Why do I blog this? even though 'slogging' sounds like an underwear brand, the idea is relevant to the blogject world. Sensor technology added on top of blogs is also the idea of datablogging, which I already mentioned. Anyway, it gives loads of ideas for blogject implementations!
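On the slides' "variant of RSS" question: publishing sensor events could be as simple as mapping readings onto RSS items. A minimal sketch with Python's standard library; the slog namespace is a hypothetical extension of my own, not an existing standard:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def slog_item(sensor_id, reading, unit):
    """Wrap one sensor reading as an RSS 2.0 <item>; the namespaced
    <value> field is a made-up extension for machine readers."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = f"{sensor_id}: {reading} {unit}"
    ET.SubElement(item, "pubDate").text = datetime.now(
        timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    ET.SubElement(item, "{http://example.org/slog}value").text = str(reading)
    return item

print(ET.tostring(slog_item("air-quality-42", 17.3, "ppm"),
                  encoding="unicode"))
```

Specialized aggregators of the kind the slides imagine would then just poll such feeds and filter on the sensor fields.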

An example of a slog (oh man, I cannot get used to that word) Mark Hansen mentions is the Suicide Box by Natalie Jeremijenko and Kate Rich, which is nicely described by the nydigitalsalon:

The video element of this project documents the set-up of a motion activated camera aimed at the underbelly of the Golden Gate Bridge, the intention being to capture on film anything falling off the bridge. One can only assume that the blurred, unspecified objects shot from a great distance are people making the four-second descent from bridge to ocean. One suspects this to be true, especially in view of the restrictions placed on the bridge. Subtitled throughout, the film informs us of bridge-related data: For instance, visitors can be arrested for throwing anything over the side or for appearing sufficiently despondent.

GPS/Wifi roboduck fleet for marine sensing and sampling

Roboduck is a project led by Gaurav S. Sukhatme. It's actually a fleet of robotic air boats which serves as a test bed for evaluating algorithms, including bacterial navigation, for marine sensing and adaptive sampling.

There is a need for a platform for better monitoring and sampling in marine environments. Such a platform should be able to withstand the highly dynamic nature of such an environment as well as cope with its vastness. The platform should be simple and easily scalable. A platform of this type would provide scientists with an invaluable tool to further marine research by monitoring phenomena of biological importance. As part of our research, we are building a fleet of autonomous roboducks (robotic air boats) for in-situ operation (data collection and analysis) in marine environments. The platform would support a variety of sensor suites and at the same time be easy to operate. It can operate in both exploration mode and intelligent mode. It can also collaborate (via communication) with other entities (sensor nodes) in the local neighborhood, making intelligent decisions.

Why do I blog this? this is close to the blogject idea (context-aware device + sensors). It's an example of a network of object sensors, useful in the marine context.

The "breath mouse": a breath-controlled device

Sometimes when I'm looking at weird game/computer controllers, I run across good things. Tonight I found this breath-based controller by David MacKay; it's still a bit rough but it exemplifies the idea. It actually connects lung volume to the mouse y-coordinate. More about it in the following paper: Efficient Communication by Breathing by Tom H. Shorrock, David J.C. MacKay, and Chris J. Ball.

The arithmetic-coding-based communication system, Dasher, can be driven by a one-dimensional continuous signal. A belt-mounted breath-mouse, delivering a signal related to lung volume, enables a user to communicate by breath alone. With practice, an expert user can write English at 15 words per minute. (...) first breath mouse, made from an optical mouse, a belt, and a piece of elastic. The mouse is fixed to a piece of wood, to which a belt is also attached. Two inches of the belt are replaced by elastic, so that changes in the waist circumference produce motion of the belt underneath the eye of the mouse. This sensor measures breathing if the user breathes using their diaphragm (rather than their rib cage). We oriented the mouse so that breathing in moves the on-screen mouse up and rotates the pointer anti-clockwise along the curve; and breathing out moves the on-screen mouse down and rotates the pointer clockwise. The sensor also responds to clenching of the stomach muscles, but we encourage the user to navigate by breathing normally.
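The clever part is that an ordinary optical mouse becomes a one-dimensional breath sensor. A sketch of how the raw y-coordinate could be turned into a normalized control signal for a system like Dasher; the calibration values are my assumptions, not from the paper:

```python
def breath_signal(y, y_exhaled, y_inhaled):
    """Map the mouse y-coordinate to [-1, 1]: -1 = fully exhaled,
    +1 = fully inhaled. y_exhaled/y_inhaled come from a quick
    calibration (breathe all the way out, then all the way in)."""
    span = y_inhaled - y_exhaled
    if span == 0:
        return 0.0
    s = 2.0 * (y - y_exhaled) / span - 1.0
    return max(-1.0, min(1.0, s))  # clamp against over-breathing

# Hypothetical calibration: exhaled reads y=400, inhaled y=200
# (screen y decreases upward, matching "breathing in moves the
# on-screen mouse up").
print(breath_signal(300, y_exhaled=400, y_inhaled=200))  # 0.0, mid-breath
```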


Why do I blog this? even though it seems funny, weird controllers (in this context, the point was more engineering-driven than creating a new product) sometimes end up as nothing but give some ideas about the future of interactions.

EU research project that focuses on "Mobile Entertainment Industry and Culture"

The mGain project is an EU research project that focuses on "Mobile Entertainment Industry and Culture".

What constitutes mobile entertainment? Our approach is inclusive instead of restrictive, including all entertainment delivered through a mobile device, whether it be a mobile phone, a personal digital assistant or a handheld gaming device. This way we can address the foreseeable convergence of the various mobile technologies. Examples of such mobile entertainment include but are not restricted to mobile games, music, video and gambling.

The mGain study project has six connected objectives:

  • To understand mobile entertainment concepts and culture, including legal and social aspects of mobile entertainment.
  • To understand possibilities and restrictions of existing and emerging mobile entertainment technologies (including wireless communication and handheld devices).
  • To understand the business models of the emerging mobile entertainment industry.
  • To benchmark the European situation with North America and Asia-Pacific.
  • To provide guidelines for industry and policy makers, including instruments and incentives needed to encourage implementation of the guidelines.
  • To provide input for preparation of Framework Programme 6 in the areas of mobile entertainment services and technologies.

Why do I blog this? this project targets interesting and pertinent questions. Some documents are available online; they provide very good insights about the European players, the business models, the technology involved and what is at stake. Check the Mobile Entertainment State-of-the-Art report, for instance. This is a nice complement to the iPerg project, which looks at different questions and is more related to pervasive gaming.

Cow data and geolocation of a three-legged poodle

Help Jed Berk find ideas about what to do with peculiar geographical data generated by wherifywireless:

Write to him at: berk (at) artcenter (dot) edu

If you're curious about what Jed Berk does with animal data, have a look at his project COWdata:

This is an ongoing project, begun in 1995. The idea was conceived while standing in a cow field, thinking of myself as a cow. What emerged is a documentation of a peripatetic bovine, calmly observing life, as it is, in our global environment. The project represents a study in time and is added to on a continual basis. Over the course of the past ten years a considerable archive has been achieved. I used the data to help realize patterns within it and to help determine new direction(s) for the future of the project. (...) My first consideration in gathering this data was to find a way to organize the documentation of the cow photos. I turned to flickr, an online photo management and sharing application that offers a good tool set for organizational purposes. (...)

  • Time line (10 years) or in parts
  • Behavior, tendencies and narratives
  • About the herd
  • The time when pictures are not taken
  • The places they go
  • Night pictures vs. daytime
  • Time of year (seasonal)
  • Favorites
  • Participants in the project (other people contributing photos)
  • Geographic location
  • Dates of travel
  • Patterns of recurrence
  • The unknown images?

Then he investigated horizon lines in relation to the placement of the cows.

Qualitative data analysis in CatchBob!

This afternoon, I tried to formalize a bit my current research approach for analyzing qualitative data from CatchBob! The point is to benefit from users' in-game annotations and the interviews I conducted after the game (based on a replay of the activity). This leads me to extract different kinds of valuable information concerning coordination processes in the game.

This is based on Herbert Clark's framework of coordination (as explained in the book "Using Language"). In this context, coordination is a matter of solving practical "coordination problems" through the exchange of what he calls ‘coordination keys/devices’; that is to say, mutually recognized information that enables teammates to choose the right actions to perform so that the common goal might be reached. Such information allows a group to mutually expect the individual actions that are going to be carried out by the partners. According to Clark, a coordination device is defined not only by its content but also by the way the collaborating persons mutually recognize it. For that matter, Clark differentiates four kinds of coordination devices: conventional procedures (when a convention is set by the participants), explicit agreement (when the participants explicitly acknowledge the information), precedent (when a previous experience allows participants to form expectations about others’ behavior), and manifest events (when the environment or the information sent makes the next move apparent among the many moves that could conceivably be chosen).

This framework then leads to the creation of two coding schemes to analyze my data:

  • What a participant inferred about his/her partner during the game. This coding scheme is clearly data-driven in the sense that it emerged from the players’ verbalizations (namely those extracted during the self-confrontation phase after the game)
  • How a participant inferred this information about their partners: this one is theory-driven, since I used Herbert Clark’s theory of coordination keys/devices to obtain clear categories about what happened

Now, there is another dimension that should be taken into account: TIME. Different coordination keys are used at different moments in CatchBob, so I'm trying to put this together into a global model of spatial coordination. In the end, it would express which kinds of coordination keys are used to solve certain coordination problems in the context of a mobile collaboration task such as CatchBob. The potential outcome would be to understand whether specific tools can support the coordination process (for instance, would a location-awareness tool be useful at a certain point in the process?).
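To make the two coding schemes concrete, here is how a coded unit could be represented; the four category names follow Clark, but everything else (field names, the example) is my own sketch:

```python
from dataclasses import dataclass
from enum import Enum

class CoordinationDevice(Enum):
    """Clark's four kinds of coordination devices."""
    CONVENTION = "conventional procedure"
    EXPLICIT_AGREEMENT = "explicit agreement"
    PRECEDENT = "precedent"
    MANIFEST = "manifest event"

@dataclass
class CodedInference:
    """One coded unit: WHAT a player inferred about a partner (the
    data-driven scheme), HOW they inferred it (the theory-driven
    scheme), and WHEN in the game (the TIME dimension)."""
    time_s: float
    what: str
    how: CoordinationDevice

example = CodedInference(
    time_s=412.0,
    what="partner is heading north to cover the upper area",
    how=CoordinationDevice.PRECEDENT,  # expectation from a previous round
)
print(example.how.value)  # 'precedent'
```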

A robot powered with flies

(Via social fiction) While San Francisco is interested in turning dog poo into power, some other folks have designed a robot that does not require batteries or mains electricity to power itself; instead, it generates energy by catching and eating houseflies.

Dr Chris Melhuish and his Bristol-based team hope the robot, called EcoBot II, will one day be sent into zones too dangerous for humans, potentially proving invaluable in military, security and industrial areas. (...) The EcoBot II powers itself in much the same way as animals feed themselves to get their energy, he said. At this stage, EcoBot II is a "proof-of-concept" robot and travels only at roughly 10 centimeters per hour. (...) The EcoBot II uses human sewage as bait to catch the insects. It then digests the flies, before their exoskeletons are turned into electricity, which enables the robot to function.

(Image taken from der spiegel)

A few years ago, it was just a project, and now it works...

An autonomous robotic fish

Less sexy than Aibo but still nifty, this autonomous robotic fish seems interesting. Designed by Dan Massie, Mike Kirkland, Jen Manda and Ian Strimaitis:

An autonomous, micro-controlled fish was designed and constructed using sonar to help guide it in swimming. It was predetermined that constructing a mechatronic fish would be a large and demanding project due to the complex shape of a fish body, the unfamiliar territories of sonar sensing, the intricacies of fluid propulsion, and the challenge of keeping submerged electronics dry. However, the team was willing to put in a lot of time and produced an exceptionally successful first prototype by the name of Dongle.

The most important part is the design and construction of this robotic pet: using soft clay, a tail servo, microcontrollers...

Weather in video-games

Just ran across this interesting discussion about weather in video games. The author, Matt Barton (University of South Florida), worked on this topic for a paper called "How's the Weather?: A Look at Weather and Gaming Environments" (in the book "Playing with Mother Nature: Video Games, Space, and Ecology").

What are some examples of good and bad use of weather in videogames? I'd really like a list of games that used weather not just as decoration or "atmosphere" but in ways that really affected gameplay. An example off the top of my head is Weather War, where players controlled hail, sleet, lightning, and rain to destroy each other's castles. Help me out here, please.

1. What are some games you know of that make interesting use of weather?
2. What were the first games to include weather? How did they use it?
3. What are examples of games that turn the weather into a character, or feature bosses and such that manipulate the weather?

Why do I blog this? I won't go into the details of the discussion, but the questions bring up some interesting ideas about the connection between game design and in-game weather. Weather is one of the contextual features of an environment.