Pet collar with smart sensor and locative technology

Via petistic, an incredible new location-based application: Float-A-Pet by Jed Berk, an illuminated inflatable pet collar with a smart sensor and locative technology.

The collar serves to support two main situations. First, the passive system is used to recognize where your pet is located at night. The flexible solar cells gather the sun's energy during the day and store it in small rechargeable batteries. A light sensor recognizes low-light conditions and triggers LEDs to illuminate the collar. Second, the active system is used in disaster relief situations, for example in the event of a hurricane or when the pet simply slips into a pool. The collar has a clipped-on CO2 cartridge designed to break away; when the integrated humidity sensor reaches its threshold, the cartridge is activated, dispensing CO2 and inflating the collar into a float. The passive solar system supports the flotation device at night by blinking intermittently to get one's attention.
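
To make the two modes concrete, here is a rough Python sketch of what such a control loop might look like. The driver functions and thresholds are my assumptions, not Berk's actual firmware:

import time

LIGHT_THRESHOLD = 0.2      # below this, the collar considers it dark (assumed 0..1 scale)
HUMIDITY_THRESHOLD = 0.95  # near-saturation suggests the collar is submerged

def control_loop(read_light, read_humidity, set_led_blink, fire_co2_cartridge):
    """Passive and active behaviour of the collar, with hypothetical sensor drivers passed in."""
    inflated = False
    while True:
        # Passive system: blink the LEDs in low-light conditions.
        set_led_blink(read_light() < LIGHT_THRESHOLD)
        # Active system: fire the CO2 cartridge once the humidity sensor
        # crosses its threshold (e.g. the pet has slipped into a pool).
        if not inflated and read_humidity() > HUMIDITY_THRESHOLD:
            fire_co2_cartridge()
            inflated = True
        time.sleep(1.0)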

Why do I blog this? Because this piece of "locative media" is intriguing, and I have always found pet-based technologies to be at the forefront of innovation.

Location awareness and rendezvousing

Dearman, D., Hawkey, K. and Inkpen, K.M. Rendezvousing with location-aware devices: Enhancing social coordination. Interacting with Computers 17, 5 (2005), 542-566. A very interesting paper directly connected to my current research about the influence of location-awareness on collaboration. It examines how location awareness impacts social coordination when rendezvousing.

This paper presents a field study investigating the use of mobile location-aware devices for rendezvous activities. Participants took part in one of three mobile device conditions (a mobile phone, a location-aware handheld, or both a mobile phone and a location-aware handheld) and completed three rendezvousing scenarios. The results reveal key differences in communication patterns between the mediums, as well as the potential strengths and limitations of location-aware devices for social coordination. (...) close observation of the behavioural and communication differences demonstrates that the technology available significantly altered how the participants managed their social coordination.

The results about the functions of location awareness were quite pertinent too (as in my own research, they also found detrimental effects):

Having access to location-awareness information has obvious benefits. Users can make more informed decisions and have a stronger sense of ambient virtual co-presence. The participants in our study made extensive use of location-awareness information as a background communication channel to monitor their partner’s location (as well as their own) in an unobtrusive manner. (...) we observed instances where location-awareness information was extremely beneficial and other instances where it was detrimental. It was beneficial because participants could see their partner’s location and track their progress in an unobtrusive manner. This arguably provided the waiting partner with enough information to wait contently. However, when their partner appeared to be lost or not making progress, it was very disconcerting to the waiting partner because they did not have enough information to determine what the problem was. This uncertainty was strong enough in some cases to actually draw the waiting partner away from the rendezvous location.

Why do I blog this? This goes straight into my literature review.

Gary Gygax, RPG and the Web

In a recent blogpost, Charlie Stross (the British sci-fi writer) described the main thread of his next novel. Set twelve years in the future, the story will deal with how "existing technological trends (pervasive wireless networking, ubiquitous location services, and the uptake of virtual reality technologies derived from today's gaming scene) coalesce into a new medium". Even though the whole post and the comments are worthwhile (the underlying process of finding the story thread, a quick and personal summary of the Internet as seen by the author...), what I found most curious was the part about how role-playing games (and one of their best-known proponents) shape today's virtual reality:

Sad to say, the political landscape of the early to mid 21st century has already been designed -- by Gary Gygax, inventor of Dungeons and Dragons.

Gary didn't realize it (D&D predates personal computing) but his somewhat addictive game transferred onto computers quite early (see also: Nethack). And then gamers demanded -- and got, as graphics horsepower arrived -- graphical versions of same. And then multi-user graphical versions of same. And then the likes of World of Warcraft, with over a million users, auction houses, the whole spectrum of social interaction, and so on.

Which leads me to the key insight that our first commercially viable multi-user virtual reality environments have been designed (and implicitly legislated) to emulate pencil-and-paper high fantasy role playing games.

The gamers have given rise to a monster that is ultimately going to embrace and extend the web, to the same extent that TV subsumed and replaced motion pictures. (The web will still be there -- some things are intrinsically easier to do using a two dimensional user interface and a page-based metaphor -- but the VR/AR systems will be more visible.)

Given that Stross envisions VR as the new metaphor for the Web's evolution, he thinks that paper-based RPGs prefigured the coming technosphere.

Stairway to nothing

A curious assemblage between two buildings close to my apartment:

Why do I blog this? I find these chaotic, Escher-ian stairs that lead to nothing quite intriguing. It reminds me how architecture can afford curious behavior: if this were an interface, what would it afford?

How unstable coordinates can be

You Are Here: Museu (MACBA, Barcelona; 1995) by Laura Kurgan is a very relevant (and early) project about locative media that I ran across recently, via Alex Terzich's contribution to the book "Else/Where: Mapping — New Cartographies of Networks and Territories" (University of Minnesota Design Institute).

In the fall of 1995, the Museu d'Art Contemporani de Barcelona became both the subject of, and the surface on which to register, the flows and displays of the GPS digital mapping network. "You Are Here: Museu" installed a real-time feed of GPS satellite positioning data, from an antenna located on the roof of the gallery and displayed in it, together with the record of mapping data collected in September, in light boxes and inscribed onto the walls of the gallery.

What is great is that the artist represented the scatter of points caused by the uncertainty and discrepancies of the system (whether due to interference or military scrambling), which is certainly of interest for Fabien's project:

Where we are, these days, seems less a matter of fixed locations and stable reference points, and more a matter of networks, which is to say of displacements and transfers, of nodes defined only by their relative positions in a shifting field. Even standing still, we operate at once in a number of overlapping and incommensurable networks, and so in a number of places -- at once. (...) The possibilities of disorientation, not in the street or on the roof but precisely in the database that promises orientation, are of an entirely different order, and GPS offers the chance to begin mapping some of these other highways as well: drift in the space of information.

In terms of "blogject"-related concepts, I like this too:

The network is a machine for leaving traces, and so we can draw with satellites. The record of the interaction appears at the foot of each display: the identifying numbers of the NAVSTAR satellites, the time spent in contact with them, the number of data points collected by the receiver. What remains of that correspondence is something like a line, a sequence of points that registers the movement of the receiver across some physical space. But the line that results [Line], what is left over not exactly from a relation between given places but rather from the transmission of data, charts more than one drifting pathway

Why do I blog this? Because I like this interactive art project and how it addresses pertinent questions related to geolocation. Of course, GPS is nowadays less likely to have the troubles it had in 1995, but there are still flaws (there will always be limitations, at least with this technology), so representing them is interesting from a human-computer interaction viewpoint.
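
As a side note, the kind of scatter Kurgan made visible can be roughly simulated: repeated fixes of a stationary receiver modelled as Gaussian noise around the true position (a naive model, not the artist's method; mid-1990s Selective Availability pushed civilian errors into the tens of metres):

import random

def simulated_fixes(true_lat, true_lon, n=500, sigma_m=50.0):
    """Repeated GPS fixes of a stationary receiver, modelled as Gaussian noise."""
    deg_per_m = 1e-5  # roughly one metre of latitude; good enough for a sketch
    return [(true_lat + random.gauss(0, sigma_m) * deg_per_m,
             true_lon + random.gauss(0, sigma_m) * deg_per_m)
            for _ in range(n)]

points = simulated_fixes(41.3851, 2.1734)  # roughly Barcelona
mean_lat = sum(p[0] for p in points) / len(points)
mean_lon = sum(p[1] for p in points) / len(points)
print("centre of the scatter:", mean_lat, mean_lon)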

SAP Labs' Ike Nassi about wireless networking

Computerworld features an interview with SAP Labs' Ike Nassi about how he foresees the future of wireless networking. Some excerpts I found interesting:

The integration of the real world and the IT world is going to happen, and it's going to accelerate. It's going to be driven by the increase in RFID in sensor networks and the rise of embedded microprocessors. We are doing things here that couldn't have been done three to five years ago.

He then gives some examples to support this:

For example, we are working with the city of Palo Alto to outfit fire trucks with a variety of wireless communications gear so we can track fire engines back to SAP's back-end systems. One thing the fire department was interested in, for example, was ... understanding why a fire truck would take what appeared to be a nonoptimal route to a fire. (...) The automobile has a tremendous number of microprocessors but has been slow to adopt networking. We are exploring back-end Web services [for] network-enabled cars. For example, my car told me I needed an oil change. But in the mail, I got a notice saying my car needed a software change. If the whole thing were network-enabled, I could have gotten an e-mail saying, "Your car needs to be serviced. (...)" [There is] a potentially very large number of back-end services that can be delivered to the car or driver.
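
The car example boils down to a back-end service that turns status reports into notifications. Here is a minimal Python sketch of that idea; the record fields and rules are my assumptions, not SAP's actual system:

from dataclasses import dataclass

@dataclass
class CarStatus:
    vin: str
    odometer_km: int
    oil_life_pct: int
    pending_software_update: bool

def notifications_for(status: CarStatus):
    """Decide which e-mails the back-end should send for a reported status."""
    messages = []
    if status.oil_life_pct < 10:
        messages.append("Your car needs an oil change.")
    if status.pending_software_update:
        messages.append("Your car needs a software update.")
    return messages

print(notifications_for(CarStatus("WVW1234567", 42000, 5, True)))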

Gartner about LBS

According to Information Week, the latest "Gartner Hype Cycle" report on emerging technologies has some thoughts about location-aware technologies:

Among the high-impact technologies under Real World Web were location-aware technologies and applications. The former includes the use of global positioning systems and other technologies in the cellular network and handset to locate a mobile user. The technologies were expected to reach maturity in less than two years.

Once devices were location-aware, business applications were expected to take advantage of the capabilities in the next two to five years. Uses include field force management, fleet management, logistics and goods transportation, Gartner said.

Isn't it always the same paragraph? Last year it was:

Location-aware applications. These are mobile enterprise applications that exploit the geographical position of a mobile worker or an asset, mainly through satellite positioning technologies like Global Positioning System (GPS) or through location technologies in the cellular network and mobile devices. Real-world examples include fleet management applications with mapping navigation and routing functionalities, government inspections and integration with geographic information system applications. Mobile workers will use either a PDA or smartphone, connected via Bluetooth to an external GPS receiver, or stand-alone positioning wireless device.

Why do I blog this? Things evolve slowly, and foresight companies have to rescale their predictions.

I walk. Principally I walk

A nice quote for a rainy Saturday on which I got back to some situationist writings:

—What do you do anyways? I don’t really know. —Reification, Gilles replied. —It’s serious work, I added. —Yes, he said. —I see, Carole said with admiration. It’s very serious work with thick books and a lot of papers on a big table. —No, Gilles said. I walk. Principally I walk.

This is from Michèle Bernstein's Tous les chevaux du roi (the French-to-English translation of the quote is taken from here).

Why do I blog this? I like the punchline "Non, je me promène. Principalement je me promène" (i.e. "No, I walk. Principally I walk"). This is by no means related to my human-computer interaction research, but I appreciated the quote from an aesthetic point of view (and maybe because I gather lots of ideas by walking too).

Andrew Hudson-Smith on city visualizations

In Londonist, there is an insightful interview with Andrew Hudson-Smith (UCL Centre for Advanced Spatial Analysis) about new ways of visualizing the city in three dimensions. The picture below shows an air pollution map.

So, tell us a bit about your background, and how you came to be playing god with the London skyline.

The story dates back to a phone call from Professor Mike Batty (CBE) asking me to do a PhD after seeing an early webpage on communicating architecture to the public. I always said that I wouldn’t do a PhD unless I could change how things are planned and how the public are informed about planning and architecture in general. You only need to look around London to see some of the mistakes of the past and if we can use the latest technology to inform the public so they can have a free and open say then maybe things will be better planned in the future. It may sound dull (and maybe that’s why I don’t get asked to many parties!) but it makes me wake up each day and think woohoo work, honestly it’s a fun job.

His perspective about the future of such technologies is also intriguing:

How do you see virtual environments in general, and Google Earth in particular, developing over the next few years?

If you look at chat systems using avatars such as Second Life and then merge it with Google Earth I think that’s the one to watch. To fly into the cities of the world and have people walking around them as avatars would suddenly make an inhabited virtual earth. I can see this happening in the next few years.

About sequential data analysis

Fisher, C. and Sanderson, P. (1996): Exploratory Sequential Data Analysis: Exploring continuous observational data, ACM interactions, 3(2), pp. 25-34. The paper is an overview of the sequential data analysis techniques that are available from an exploratory perspective. It's a very broad description, but it gives some valuable hints. They refer to "Exploratory Sequential Data Analysis" by the acronym "ESDA".

Analysis techniques that use sequential information include conversation analysis, interaction analysis, verbal and nonverbal protocol analysis, process tracing, cognitive task analysis, and discourse analysis. In addition, there are many powerful sequential data analysis techniques that deal with sequential information statistically, such as Markov analysis, lag sequential analysis, and grammatically based techniques. (...) all empirical ways of seeking answers to research or design questions; they all use systems, environmental, and behavioral data in which the ordering of events is preserved, and they all involve data exploration at critical points in analysis, especially the outset.
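
To make one of these techniques concrete, here is a tiny Python sketch of a first-order Markov (transition) analysis over a coded event sequence; the event codes are invented for the example:

from collections import Counter, defaultdict

events = ["look", "point", "talk", "look", "talk", "point", "look", "talk"]

# Count transitions between consecutive events.
transitions = defaultdict(Counter)
for current, following in zip(events, events[1:]):
    transitions[current][following] += 1

# Turn counts into transition probabilities.
for current, counts in transitions.items():
    total = sum(counts.values())
    print(current, "->", {nxt: round(n / total, 2) for nxt, n in counts.items()})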

The paper then presents three broad traditions of observational research (behavioral, cognitive and social) and discusses them:

In his work on design meetings, Tang [7] implicitly distinguished the behavioral, cognitive, and social traditions when justifying his choice of interaction analysis—a naturalistic social technique—over a formal experimental approach or a cognitive approach. He reasoned that factors influencing design were not known well enough to develop a fully controlled experiment, and that designers probably were not sufficiently aware of how they made design decisions to provide a valid verbal report. As this example shows, the best approach to use depends on the question being asked, the kind of data that have been collected, and the form of statement eventually required.

Why do I blog this? Even though it's a bit old, there are some good references to seminal work (which is what I was looking for). More about this in Human-Computer Interaction, 9, 3 (1994), which was a special issue on the topic.

Mobile technologies and social coordination in urban environments

In the last issue of the Receiver, there is a paper by Lee Humphreys about mobile technologies and social coordination in urban environments which is of great interest to my research. Starting from Rich Ling and Birgitte Yttri's seminal work on that question (see the paper “Nobody sits at home and waits for the telephone to ring:” Micro and hyper-coordination through the use of the mobile telephone), she investigates "how people use mobile phones within their social networks in the course of their everyday lives". What is interesting is that it does not only describe coordination patterns but "also the subtle communicative exchanges used in a complex mobile world (...) What do you communicate? How do you communicate? With whom do you communicate?".

An efficient way to coordinate in her study was "mobile broadcasting" ("Text messages can also be broadcast from one person to several or even many people.").

The mobile phone becomes a good tool for the exchange of duration information and coordinating the when of casual social interactions. (...) The where of coordination is also more complex than just a venue name or address. A venue name can suggest quite a bit of social information used by people in order to determine who will meet up. (...) Location is not just longitude and latitude or even a street address, but also includes important social information (...) the proximity of the venue is also an important determinant in who will show up (...) The who of coordination is also a complex negotiation of casual social interaction. One of the interesting elements of broadcasting is that users can see who is coordinating meeting up — to whom was the message sent. This visibility allows for the exchange of complex social information

She also discusses issues that need to be negotiated, such as freedom vs. constraint and social performance vs. social functionality, but this is less my focus. Why do I blog this? The research I am carrying out in my PhD is about how people use the location of others as a resource for coordination. Even though it's much more CSCW-oriented than Lee's work, there are some interesting lessons to draw from it. I have to grab an academic paper about that.

What about voice?

I do not follow voice recognition and its potential applications closely, but today I was confronted with three articles about it in my daily scans. Even though it's still R&D-oriented, each delivered some promising messages about a technology I am skeptical about (based on previous research projects and readings). First there is this ACM Queue discussion by John Canny (University of California, Berkeley), which is actually a great piece about the future of HCI. Canny quotes Jordan Cohen (formerly of VoiceSignal, now of SRI International):

"The killer application is probably going to end up being some kind of interface with search, which seems to be the very hot topic in the world today; for mobile search especially, speech is a pretty reasonable interface, at least for the input side of it,"

This "search" concept is what I ran across this morning in a Business Week article by Steve Hamm, there is a presentation fo a curious application called TellMe about voice-driven Web information:

The idea is to create mobile search services that can make it easy for those on the go to find people, businesses, and information. That goes for any phone, but especially those equipped with browsers. A tourist might bark "restaurants," "sushi," and "downtown" into his cell phone and then see listings, read online reviews, make reservations, and retrieve a map with directions. "It has taken us six years to get to this point, but now we can really start to deliver on our original mission," says McCue, TellMe's CEO. (...) Skeptics point out that despite technology advances, voice recognition still turns off many consumers, who remember past glitches. But experts say that will change when systems combine voice, text messaging, and graphic info from Web pages. Each mode will be used for what it does best. "People will be using voice to launch into their search, and they'll want to see the information on a screen," says David Albright, executive director for marketing for Cingular Wireless, which is working with TellMe.

Yes, of course these last points I quoted are recurrent, but as presented in this Speech Technology Magazine issue, there are other applications:

Use your telephone or cell phone to talk with Google—search the Web for answers to your questions, extract the information chunks you need, and listen to the results...Rather than struggling to find the answer to a specific question by chasing links across a Web site, you can simply click a button on the GUI screen and be connected to a human or artificial agent... instruct your oven through your cell phones...
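
The recurring pattern behind these examples (speak the query, search as text, read the results on screen) can be sketched roughly as follows; recognize_speech() and web_search() are hypothetical stand-ins, not any vendor's API:

def voice_search(audio, recognize_speech, web_search, max_results=5):
    """Voice on the input side, plain text search in the middle, screen on the output side."""
    query = recognize_speech(audio)   # e.g. "sushi downtown"
    results = web_search(query)       # ordinary text search behind the scenes
    return {"query": query, "results": results[:max_results]}

# Usage with stand-ins:
print(voice_search(b"...", lambda audio: "sushi downtown",
                   lambda q: ["Sushi Bar (review)", "Ramen Place (map)"]))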

Why do I blog this? I don't know whether it's apophenia, but I ran across those three articles today. So what? I am still dubious about speech technologies, but there seems to be confidence in this avenue.

Artwork that changes to suit your mood

People from the University of Bath (UK) have developed artwork that changes to suit your mood. It's called "empathic painting"; the university webpage is more verbose about it:

"empathic painting" - an interactive painterly rendering whose appearance adapts in real time to reflect the perceived emotional state of the viewer. The empathic painting is an experiment into the feasibility of using high level control parameters (namely, emotional state) to replace the plethora of low-level constraints users must typically set to affect the output of artistic rendering algorithms. We describe a suite of Computer Vision algorithms capable of recognising users' facial expressions through the detection of facial action units derived from the FACS scheme. Action units are mapped to vectors within a continuous 2D space representing emotional state, from which we in turn derive a continuous mapping to the style parameters of a simple but fast segmentation-based painterly rendering algorithm. The result is a digital canvas capable of smoothly varying its painterly style at approximately 4 frames per second, providing a novel user interactive experience using only commodity hardware.

Why do I blog this? If the world's infrastructure reacted to my emotions, it would be crazy. Imagine mellow sidewalks...

New Sony handheld

Sony's Mylo (My Life Online) seems to be a cross-over between Danger's Sidekick and a PSP. It's basically a new handheld device with interesting capabilities, as reported by the BBC:

The pocket-sized gadget, called the mylo, will sell for about $350, according to the Associated Press. It has a small display and keyboard and is pitched at the young, mainstream market who use IM and are interested in making net telephone calls. Sony has formed a partnership with Skype for net phone calls and with Yahoo and Google for instant messaging. The mylo, which stands for "my life online", will only be available in the United States.

The so-called personal communicator doubles as a portable media player. It can play music, and screen photos and videos that are stored on its internal one gigabyte of flash memory or optional Memory Stick cards.

What about the PSP and the Mylo? The BBC's comment about that is also accurate:

It too has wi-fi, can play music and video, display photos and is technically capable of supporting instant messaging and internet telephone calls. But the wi-fi functionality has yet to be taken advantage of by the company. It is not clear if the mylo will be a rival to, or complementary to, the PSP.

Why do I blog this? Yet another handheld; nice design; time will tell. With this sort of device (Swisscom released a somewhat similar product), I always wonder about pricing, especially regarding IM; but if it can take advantage of Wi-Fi, that might be easier (the next step is to find a free hotspot).

AOL data release and data mining freaks

It seems that data mining researchers/hackers have been going crazy about the recent AOL release of tons of data. This "A chance to play with big data" blogpost gives some hints about it:

Second, the new AOL Research site has posted a list of APIs and data collections from AOL.

Of most interest to me is data set of "500k User Queries Sampled Over 3 Months" that apparently includes {UserID, Query, QueryTime, ClickedRank, DestinationDomainUrl} for each of 20M queries. Drool, drool!

Update: Sadly, AOL has now taken the 500k data set offline. This is a loss to academic research community which, until now, has had no access to this kind of data.
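
Given the fields listed above (UserID, Query, QueryTime, ClickedRank, DestinationDomainUrl), handling such a log would presumably look something like this minimal Python sketch; the tab-separated layout is my assumption:

from dataclasses import dataclass
from typing import Optional

@dataclass
class QueryRecord:
    user_id: str
    query: str
    query_time: str
    clicked_rank: Optional[int]
    destination_url: Optional[str]

def parse_line(line: str) -> QueryRecord:
    """Parse one assumed tab-separated log line; the click fields may be absent."""
    parts = (line.rstrip("\n").split("\t") + [None] * 5)[:5]
    user_id, query, query_time, rank, url = parts
    return QueryRecord(user_id, query, query_time,
                       int(rank) if rank else None,
                       url or None)

print(parse_line("1234\tlocative media\t2006-03-01 10:15:02\t2\texample.com"))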

There's also a NYT column about it:

A list of 20 million search inquiries collected over a three-month period was published last month on a new Web site (research.aol.com) meant to endear AOL to academic researchers by providing several sets of data for study. AOL assigned each of the users a unique number, so the list shows what a person was interested in over many different searches.

The release of the data shines a light on how much information people disclose about themselves, phrase by phrase, as they use search engines.

The Internets, really?

From the Wikipedia entry "The Internets":

Internets was originally used as shorthand for cluelessness about the Internet or about technology in general[citation needed] but is often used today as an homage to when U.S. President George W. Bush referred to "the Internets" in the 2nd Presidential Debate with U.S. Senator John Kerry on October 8, 2004.

Anyway, even though I am not sure what Bush thinks about the Internet, I find that this "the internets" concept makes sense. Besides, I have always been fascinated by all the names for, and the confusion between, the Internet and the Web.

Lollipop as user-interface

Régine completed yesterday's post about tongue-based interactions with this right-on-the-spot innovation: the lollipop as a user interface (by Lance Nishihira and Bill Scott):

Participants suck on lollipops embedded with sensors to control robotic babies in a race. (...) Sensors transmitted each sloppy stroke to a laptop that was controlling the movements of several robotic toys. ``I'm trying to think which one of our properties can be driven by a lollipop,'' joked Scott, a member of Yahoo's platform design group. ``Maybe Yahoo Games.'' The ``Edible Interface'' was one of 10 prototypes featured at Yahoo's University Design Expo, an annual event that explores how humans interact with technology

(picture by Gary Reyes / Mercury News)

Why do I blog this? A curious interface; what happens when the interface is more "invasive" than just a joypad? Would I like to control cell-phone games or billboards through this sort of interface?

About tongue-based interactions

People interested in tongue-based interactions should have a glance at this thesis (in Japanese though); it reports results from different tests/analyses of potential stimulus recognition (at least judging from what Babelfish managed to translate).

The next step is then to find uses, as in Nikawa's work: "Tongue-Controlled Electro-Musical Instrument", The 18th International Congress on Acoustics, Vol. III, pp. 1905-1908 (April 2004).

This study aims to develop a new electronic instrument that even severely handicapped people with quadriplegia can play in order to improve their quality of life (QOL). Ordinary orchestral and percussion instruments require fine movements of the limbs and cannot be used by those with quadriplegia. In this study, we made a prototype of an electronic musical instrument that can be played by tongue movement. This instrument is composed of an operation board inside the mouth and a sound generator. The signals emitted from the operation board are transmitted to the sound generator equipped inside a personal computer. Music is generated through speakers.
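
The pipeline the abstract describes (an in-mouth operation board emits button events, the PC turns them into sound) could be sketched roughly like this; the note mapping and the play_note() callback are my assumptions, not Nikawa's actual design:

# Hypothetical mapping from the operation board's buttons to MIDI note numbers.
BUTTON_TO_MIDI_NOTE = {1: 60, 2: 62, 3: 64, 4: 65, 5: 67}  # a C major fragment

def handle_events(button_events, play_note):
    """Turn a stream of tongue-pressed button events into notes."""
    for button in button_events:
        note = BUTTON_TO_MIDI_NOTE.get(button)
        if note is not None:
            play_note(note)

handle_events([1, 3, 5, 3, 1], lambda n: print("note", n))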

Another example is the tongue-controlled Nintendo GBA, a curious hack using New Abilities' TTK: a tongue-touch wireless keyboard transmitter (an orthodontic retainer with nine membrane buttons).

Others also use it as a "third arm" for astronauts:

The proposed alternative hands-free computer control system ACCS - Alternative Computer Control System - (...) ACCS will provide pilots and astronauts with an additional flight control contour, which will allow for continuous computer control of the flying apparatus at max. G-force, vibration, as well as blindly due to blood surge back from retina. ACCS is placed in a person's mouth (and comprises a tongue controlled directional command module along with 12 additional commands). It does not interfere with breathing, talk and consumption of fluids.

Why do I blog this? Websurfing about curious human-computer interaction systems...

SAFEGE: old-school suspended monorail

Here is a superb webpage that shows some pictures of the SAFEGE. According to Wikipedia:

SAFEGE is an acronym for the French consortium Société Anonyme Française d'Etude de Gestion et d'Entreprises (en: French Limited Company for the Study of Management and Business). The consortium, consisting of 25 companies, including the tire-maker Michelin and the Renault automotive company, produced an aerial railway technology. The design team was headed by Lucien Chadenson.

Nowadays, the SAFEGE has a more 80s-metro style:

Why do I blog this? A cool summer websurf during a break from writing a chapter of the PhD dissertation...

Digital kids can't ward off ennui

Some results from a Los Angeles Times/Bloomberg poll are worth reading:

a large majority of the 12- to 24-year-olds surveyed are bored with their entertainment choices some or most of the time, and a substantial minority think that even in a kajillion-channel universe, they don't have nearly enough options. (...) A signature trait of those surveyed is a predilection for doing several things at the same time (...) Young people multi-task, they say, because they are too busy to do only one thing at a time, because they need something to do between commercials or, for most (including 64% of girls 12 to 14), it's boring to do just one thing at a time. (...) Throughout Hollywood, the race is on to develop entertainment that captures the attention of this distracted generation (...) Despite the technological advances that are changing the way entertainment is delivered and consumed, good, old-fashioned word of mouth — with a tech twist, thanks to text messaging — continues to be one of the most important factors influencing the choices that young people make. (...) Yet a surprisingly high number of teenage boys (58%) and even more teenage girls (74%) said they were offended by material they felt disrespected women and girls.

The part about continuous partial attention is interesting too:

"It's like being in a candy store," said Gloria Mark, a UC Irvine professor who studies interactions. between people and computers. "You aren't going to ignore the candy; you are going to try it all."

Mark, who has studied multi-tasking by 25- to 35-year-old high-tech workers, believes that the group is not much different from 12- to 24- year-olds, since both groups grew up with similar technology. She frets that "a pattern of constant interruption" is creating a generation that will not know how to lose itself in thought.

"You know the concept of 'flow'?" asked Mark, referring to an idea popularized by psychologist Mihaly Csikszentmihalyi about the benefits of complete absorption and focus. "You have to focus and concentrate, and this state of flow only comes when you do that Maybe it's an old-fogy notion, but it's an eternal one: Anyone with great ideas is going to have to spend some time deep in thought."