Query species on the web

It's funny that I found two links to iSpecies in the last five minutes (one on and the other on a Google watchlist). It is a species search engine led by Roderic Page. You can query a species and the data displayed are generated "on the fly" by querying other data sources:

iSpecies uses web services to talk to source databases, extract data, and assemble a page for each species. The code makes extensive use of XML. Essentially, each web service returns XML in one form or another, and I use XSL style sheets to transform the result into HTML. (...) iSpecies queries NCBI using the Entrez Programming Utilities. It uses ESearch to look up a taxon name then, if the name is found, uses ESummary to get basic statistics on what NCBI holds for that taxon. (...) iSpecies uses Yahoo's Image Search web service to find up to five images for the query term. (...) This uses a Perl script I created to search Google Scholar. The script screen scrapes Google Scholar, extracts references and identifiers (such as DOIs and PubMed identifiers), then returns the results in RDF.
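Out of curiosity, here is roughly what that ESearch-then-ESummary sequence looks like; a minimal Python sketch against NCBI's E-utilities endpoints (the endpoints are real, but this is my own illustration rather than iSpecies' Perl code, and the taxon name is just an example):

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_taxon(name):
    """ESearch: look up a taxon name in NCBI Taxonomy, return its ID (or None)."""
    query = urllib.parse.urlencode({"db": "taxonomy", "term": name})
    with urllib.request.urlopen(f"{EUTILS}/esearch.fcgi?{query}") as resp:
        tree = ET.parse(resp)
    hit = tree.find(".//IdList/Id")
    return hit.text if hit is not None else None

def esummary(taxon_id):
    """ESummary: basic statistics NCBI holds for that taxonomy ID."""
    query = urllib.parse.urlencode({"db": "taxonomy", "id": taxon_id})
    with urllib.request.urlopen(f"{EUTILS}/esummary.fcgi?{query}") as resp:
        tree = ET.parse(resp)
    return {item.get("Name"): item.text for item in tree.iter("Item")}

taxon_id = esearch_taxon("Apis mellifera")  # example query
if taxon_id:
    print(esummary(taxon_id))
```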

They have a blog about it.

Why do I blog this? this is somehow a search engine for blogjects, or they should add a new feature: connecting this to a near real-time animal track...

Self-reproduction of a physical, three-dimensional 4-module robot

(via) This is an amazing self-replication project carried out at Cornell University by Viktor Zykov, Efstathios Mytilinaios, Bryant Adams, and Hod Lipson.

Self-replication is a fundamental property of many interesting physical, formal and biological systems, such as crystals, waves, automata, and especially forms of natural and artificial life. Despite its importance to many phenomena, self-replication has not been consistently defined or quantified in a rigorous, universal way, nor has it been demonstrated systematically in physical artificial systems. Our research focuses both on a new information-theoretic understanding of self-replication phenomena, and the design and implementation of scalable physical robotic systems where various forms of artificial self replication can occur. Our goal is twofold: To understand principles of self-replication in nature, and to explore the use of these principles to design more robust, self-sustaining and adaptive machines.

The website provides an example:

Self-reproduction of a physical, three-dimensional 4-module robot: (a) A basic module and an illustration of its internal actuation mechanism; (b) Three snapshots from the first 10 seconds showing how a 4-module robot transforms as its modules swivel simultaneously. (c) A sequence of frames showing the self reproduction process that spans about 2.5 minutes. The entire reproduction process runs continuously without human intervention, except for replenishing building blocks at the two 'feeding' locations circled in red.

The video is stunning. Many more details can be found in the FAQ.

A good read about this: Zykov V., Mytilinaios E., Adams B., Lipson H. (2005), "Self-reproducing machines", Nature, Vol. 435, No. 7038, pp. 163-164.

Why do I blog this? during my undergraduate studies I often encountered the very idea of self-replication; this is a very concrete example of how it can be embedded in real artifacts.

Understanding the context of bodynets implementations

Ana Viseu's work seems very interesting with regard to wearable computing and human/nonhuman interactions. Her PhD work, "Sociotechnical Worlds: The Visions and Realities of Bodynets", seems very relevant to my current readings on STS studies of IT.

Bodynets are bodies networked for (potentially) continuous communication with the environment (humans or computers) through at least one wearable device—a body-worn computer that is always on, always ready and always accessible. Bodynets can be thought of as new bridges between individuals and the environment (constituted by humans & nonhumans, or things and non-things). (...) For my doctoral research I propose to study how this new interface/bridge between the individual and the environment is developed and put in place, and how the relationship between both is redefined. In other words, I will study the development and implementation of bodynets and the emerging sociotechnical worlds that sustain them. (...) The study outlined in this paper is composed of two parts. In part A I propose to survey the field of bodynets, focusing on the visions that drive its development and prototyping. This survey will provide data regarding the development/innovation phase of bodynets, that is, the expectations, goals, problems, solutions and activities of those involved (directly or not) in the field. Since bodynets are, in many ways, representative of the ‘dreams’ of the new information age, the research conducted here will also provide useful data relating to the values and ideals that guide our understanding of everyday life. For this purpose, three interviews with bodynet developers, from Europe and the United States, have already been conducted. Data is also being collected from a variety of sources, including books, newspapers, popular and scientific journals and websites.

Part B will consist of an in-depth case study focusing on one or two concrete artefacts and settings. This case study will provide a thick description (Geertz 1973) of the mutual adaptation of humans and bodynets. Like Part A, this part will trace the development phase of a bodynet (i.e., the ways in which the technological artefact came to be). However, this study will be more systematic and in-depth; the aim here is to investigate the archaeology of the project and its current reality. Different actors (human and nonhuman) will be interviewed, offering their views on the project. This phase will also focus on the reactions of the users/wearers once the product leaves the lab and hits social reality.

The goals of the studies proposed here are the following: To understand the motivations, negotiations, problems and solutions behind the different actors involved in the process of developing and implementing bodynets; and, to understand the new social and cognitive dynamics that arise from the introduction of this new sociotechnical artefact.

Why do I blog this? lately I am interested in the "why" question of technological development... this research seems to address that issue in a very interesting way (particularly through its methodology).

Workshop about Human-Robot Interaction

In the context of the Human-Robot Interaction conference (HRI 2006), there is an intriguing "HRI Young Researchers Workshop". Some of the topics addressed there that I find interesting for my research practices:

  • Lilia Moshkina - Experimenting with Robot Emotions: Trials and Tribulations
  • Julie Carpenter - Exploring Human-Centered Design in Human-Robot Interaction
  • Sara Ljungblad - Developing Novel Robot Applications for Everyday Use - Users: What do we need to know about them?
  • Marek Michalowski - Engagement and Attention for Social Robots
  • Kristen Stubbs - Finding Common Ground: A Tale of Two Ethnographers, Seven Scientists, Thirteen Engineers, and One Robot

Why do I blog this? this kind of topic is important in the sense that it will eventually lead to issues raised by interactive toys, the merging of video games with toys, and of course the blogject concept... I am interested in experiences and field studies about robots (not really about affective behavior but rather about how a robot might disrupt human activities/sociocognitive processes, or the spatial issues related to robot/human interactions).

About the MIT Media Lab agenda

An article in Technology Review mentions that the MIT Media Lab is going to be "more focused":

venture capitalists no longer readily throw money at "vague" projects, and government funding is drying up. Today, 70 percent of the lab's annual budget of around $35 million comes from corporate sponsors, with whom they must forge ever-closer ties. Since corporate benefactors want practical technologies, the Media Lab has to strike a balance between meeting sponsors' needs and maintaining its traditional philosophy of open-ended research. (...) These challenges now face a new director, Frank Moss (...) Moss says: "What has changed over the past seven or eight years is that simply coming here and rubbing shoulders with very smart, creative people is often not enough for our sponsors. They need us to help them make a connection between all the wonderful creative work we have here and problems they have." (...) "I think we're all entrepreneurs, but I'm coming from a commercial environment. I think the reason MIT went in that direction is that in many ways running an academic research lab in today's world requires a keen understanding of the sponsors and what their needs and wants are" (...) "I think in the next 20 years we're going to see tremendous advancements in using technology to deal with lingering social problems -- delivering health care, dealing with aging, education -- things that go beyond the digital lifestyle we enjoy today. The lab is going to be looking at how we can use existing or new technologies to make a big difference and solve social problems."

Well... he raises some questions about research/innovation... and some issues...

SmartFish: innovative aviation

SmartFish is a project carried out by various labs and industrial partners such as EPFL, RUAG, the German Aerospace Center (DLR)...

The objective of team SmartFish is to develop and commercialise a revolutionary general aviation aircraft technology that is highly innovative in terms of safety, economy and emotion. This technology can be used for a wide range of applications, from UAV to high performance sports planes to business jets that can accommodate up to 20 passengers.

SmartFish differs from conventional aircraft by its innovative aerodynamic design, while relying on standard technologies for building materials and propulsion.

There is also HyFish, a SmartFish powered by a fuel cell.

Nokia and the future of gaming

A Gamasutra news piece deals with the future of gaming according to Nokia (Jani Karlsson). It addresses the N-Gage experience and what they learnt from it.

The basic learning is that experience is everything. Experience is the key. Not features for features' sake, not power for power's sake - but always leading with the experience, with what the user actually wants and enjoys. (...) GS: So… you can talk about the future of N-Gage?

JK: Sure - that's all about expansion, into the smartphone areas.

GS: So, there'll be an N-Gage smartphone?

JK: I wouldn't go that far. There's going to be a platform. There hasn't been a brand announcement as of yet. (...) I think our responsibility is two-fold. One is to enable the content industry in exploiting the mobile market as effectively as they can. On the other hand, being the leader in our field we need to lead by example, by focusing on the areas that may not make the most financial sense at the moment, but are essential for the evolution of mobile gaming and entertainment.

Richer content convergence in games versus other interactive entertainment - tied in with the community features. (...) [About innovation related to peripherals:] we are always looking for new innovations on the design side. Like the N Series devices are utilizing the video capabilities, and the N91 is really simplifying the music experience. So I can definitely see possibilities where there are more gaming-orientated devices (...)

GS: Do you think mobile phone games exist in a different consumer cultural space, and if so, do you think that gap is going to continue to exist?

JK: I would say that the gap is both closing and widening at the same time. The performance power of the soon to market devices is really catching up on the console performance. But at the same time, the expansion of the user experience means we need to cater for the current mobile gamer being really light content. That content would really look out of place on a PSP - but on a mobile phone, the quick fix is totally viable.

Why do I blog this? this interview gives some interesting highlights about how Nokia people see the mobile gaming future: platform convergence (smartphones), cultural and market convergence (the mobile game industry catching up with the console game industry, eventually...), new input/output capabilities (related to music interfaces for instance)...

Beyond the QWERTY keyboard of gaming

An eTech2006 talk that might be interesting for completing a report on game controllers I did last year: "From Paddles to Pads: Is Controller Design Killing Creativity in Videogames?" by Tom Armitage

The videogames market is stagnating. The primary cause is not the domination of the industry by larger companies, the rising costs of next-gen games, or even lack of imagination.

The primary cause is the interfaces we play the games with.

There is almost no emerging technology in the field of physical videogame interfaces. The field is stuck at the Dual Shock, the QWERTY keyboard of gaming, and this is a bad thing--it is an unnecessary barrier to entry. Nintendo is bucking trends left, right, and center, but they're going to have to work against public reaction and the hell that is modern cross-platform development.

The talk covers:

History: How we got where we are now; a history of interfaces, from Pong paddles and trackballs through to modern joypads.

Creativity: Some examples of one-off controllers and interfaces that demonstrate real ingenuity, through to controllers that are endlessly adaptable.

Assumed skills: There are unwritten conventions gamers know (the difficulty of coordinating two thumbsticks, for instance). What are the skills that develop through a history of gaming? What do we need to stop assuming?

Development: What's been touted for next-gen. Are we looking at a leap forward or back? Just how much control do we demand anyway? The boundary between hardware and software interfaces.

What's needed: A conclusion. How the barriers to entry can be lowered, and the gaming demographic widened, through interface design.

Why do I blog this? I am interested in how game controllers evolve and how they could be redesigned to better support innovative game design and be adapted to gamers' contexts and cognitive skills.

RFID seminar in Geneva (ITU)

Once in a while, some news coming from the ITU (International Telecommunication Union) pops up in my RSS feed aggregator. This time, it's about a workshop that happened last week in Geneva on "Networked RFID: Systems and Services". It addresses arphid (RFID) capabilities, security concerns, new services (ranging from a ladies' shoe inventory management system to container tracking) and new business models. A session interesting from my point of view is the one called "Introducing RFID - Visions and Implications". The conclusions of this session are:

  • RFID is part of a larger vision of future technological ubiquity, combined with sensors & developments in nanotechnology, creating an “Internet of Things” [Yes Fabien the ITU does not talk about a web of things...]
  • The future will be ubiquitous, meaning “universal, user-oriented, and unique”, but also “alive”!
  • It will be deployed by end-users and not necessarily centrally managed (“paintable”)
  • The pervasive nature of RFID comes with key challenges: standardization, governance of resources, consumer protection, namely privacy and data protection
  • (...)
  • However, standardization remains fragmented, with interoperability and interference as key hurdles
  • In addition, user acceptance suffers from concerns over consumer privacy, data protection and security
  • ITU can play an important role in furthering international standardization efforts in addition to raising awareness about the challenges and opportunities of this exciting technology

Why do I blog this? The ITU is the place where the frameworks, protocols, and service capabilities needed for new IT platforms get worked out at an international scale. In this context, it's interesting to see that they seem more enthusiastic about this than they were about the Web (they did not believe in the web a few years back). Besides, it's good to see that they don't take this "Internet of Things" for granted, given existing issues (security/privacy, interoperability...).

Finally, it makes me wonder how this thingy internet/web might appear (especially if we think in terms of blogjects or post-blogjects), and raises a corollary issue: can we do that (i.e. a world of communicating objects) without the internet?

Tissue technologies as a medium for artistic expression

This is an intriguing project carried out by Oron Catts & Ionat Zurr in collaboration with Guy Ben-Ary. It's an artistic research and development project into the use of tissue technologies as a medium for artistic expression.

In the last five years, we have grown tissue sculptures, "semi-living" objects, by culturing cells on artificial scaffolds in bioreactors. Ultimately, the goal of this work is to culture and sustain, for long periods, tissue constructs of varying geometrical complexity and size, and by that creating a new artistic palette.

The acquisition of living cells and tissues for artistic purposes has created concerns and has focussed attention on the ethical and social implications of creating "semi-living" objects. Thus our goal is to create a contestable vision of futuristic objects that are partly artificially constructed and partly grown/born. These semi-living objects consist of both synthetic materials and living biological matter from complex organisms. These entities (sculptures) blur the boundaries between what is born/manufactured, animate/inanimate and further challenge our perceptions and our relations toward our bodies and constructed environment.

In this project we have used pig's bone marrow stem cells and three dimensional bio-absorbable polymer scaffolds in order to grow three sets of wings.

More information about it on the website of the Pig Wing Project:

The Pig Wings installation presents the first ever wing shaped objects grown using living pig tissue, alongside the environment in which such endeavour can take place. We will attempt to present living tissue engineered pig wings that will be animated using living muscles. This absurd work presents some serious ethical questions regarding a near future where semi-living objects (objects which are partly alive and partly constructed) exist and animal organs will be transplanted into humans. What kind of relationships will we form with such objects? How are we going to treat animals with human DNA? How will we treat humans with animal parts? What will happen when these technologies are used for purposes other than strictly saving lives?

Why do I blog this? still a Sunday afternoon browsing find; I was also wondering about tissue as a new interface (input/output) for certain technologies.

Sound and Ceramics: 6,500-year-old voices recorded in pottery? (April joke from 2005)

(Update: thanks cb for telling me that this is really an April joke.) Via, this 2005 April fools news item (though I did not manage to find any other references to it). As the blog mentions:

Researchers from Belgium have been able to extract voices and sounds from a pottery piece that is 6,500 years old. The person making the pottery at the time was using something very sensitive to vibrations, which recorded the sound vibrations in the pottery. This amazing video is in French so I hope you will not mind. However at the end of the video there is a recording shown and you can hear somebody laughing from 6,500 years ago.

The group is led by Belgian researcher Philippe Delaite. Check the video on YouTube (in French though).

BUT, after a quick scan of some scientific articles, it seems that other people are working on related ideas; Bart Lynch, for instance:

In architecture, natural harmonies occur in Renaissance structures. Harmonic relations of form and space were often based on the golden section and the ratios therein. These same ratios occur in the growth patterns of flowers, fish and other components of nature. I am currently concerned with understanding why these ratios occur and why they are pleasing to us.

I have been translating sounds into three-dimensional pottery using several computer programs in order to see if pleasing sounds make pleasing pottery and vice versa. Using the sound program Sound Edit Pro, I can get a visual representation of a sound that is time dependent. That visual is saved as a picture and imported to the program Swivel 3D where the sound form can be lathed to resemble pottery and used as a template to create actual ceramic works. Using these programs, I have also been animating the figures so that the pottery forms on the computer screen dialogue with the sounds that created them. I see these processes as data-gathering exercises that help me to understand the nature of the harmonic relations so that I will be able to use them more effectively in the future.
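The basic move Lynch describes, taking a time-dependent visual of a sound and lathing it into a pot-like form, is simple enough to sketch. Here is a minimal Python/NumPy version of the idea, assuming a 16-bit mono WAV file as input (my own illustration; his actual pipeline used Sound Edit Pro and Swivel 3D, and the file name is hypothetical):

```python
import wave
import numpy as np

def waveform_to_lathe(path, n_profile=200, n_theta=60):
    """Spin an audio file's amplitude envelope around the vertical
    axis, yielding a pottery-like surface of revolution."""
    with wave.open(path, "rb") as w:  # assumes 16-bit mono PCM
        samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    # The peak amplitude of each chunk becomes the pot's radius at that height.
    chunks = np.array_split(np.abs(samples.astype(np.float64)), n_profile)
    radius = np.array([c.max() for c in chunks])
    radius = 0.2 + radius / radius.max()  # keep a minimum wall radius
    # Revolve the (radius, height) profile through 360 degrees.
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    x = radius[:, None] * np.cos(theta)[None, :]
    y = radius[:, None] * np.sin(theta)[None, :]
    z = np.repeat(np.linspace(0.0, 1.0, n_profile)[:, None], n_theta, axis=1)
    return x, y, z  # grid coordinates, ready for a 3D surface plot

x, y, z = waveform_to_lathe("laugh.wav")  # hypothetical input file
```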

Why do I blog this? ... a curious Sunday browsing find... but it's unfortunately not a real thing ;) A good fake project for Regine's collection!

Mattel+IDI workshop about new play experiences

Via Putting People First, Play Experiences for the Next Generation is a workshop that was led by Mattel and the Interaction Design Institute Ivrea.

"Play is a critical and healthy part of growing up and remaining balanced during adulthood. But there are many changes in play today that provoke thinking about the next generation of play in a different way. Changes like the prevalence of technology based play through computers, game consoles and cell phones. Changes like the time compression most kids and families are dealing with in the developed world and the way kids seem to be growing up faster. Changes like the degree that parents and kids being bombarded with adverts and rich visual media. With these and other issues the nature of the next generation of play and of how to attract the attention of adults and children is already changing fast."

The website is very well documented, with things like case studies (check "From user research to experience design - A case study: robot toys for 4-5 year olds | LEGO"!).

Most of the projects can be found here. My favorite one is certainly Robo-Squad by David Mellis and James Tichenor (yes, it's close to a blogject but it does not blog; the good thing is that it interacts with other robots):

Robo-Squad SND is a series of modular robots which can be remote-controlled or operated autonomously. The basic package contains a full unit, consisting of three parts: the vehicle or locomotive element, the character, and accessories. (...) Imagine a play experience where the toys in a child's room are alive: moving, walking, talking. At one moment the child is one of the actors in the toys' stories and the next the child is above the toys, changing their relationships and actions. Robo-Squad SND makes this happen.

(...) Children transform their play spaces in their imaginations. To do this, Robo-Squad SND units needed to react to three elements: their environment, each other, and the child.

Wild Watches by Aram Armstrong, Vinay Venkatraman and Pei Yu is also interesting in the sense that it is a wearable game and role-play facilitator in the form of a watch: a platform in both the software and hardware sense, on which many games and roles can be developed and played. What I also found relevant is the scenario they envisioned:

The animal role expresses itself by giving the child appropriate feedback, which comes in the form of visual, auditory or tactile cues. These can be triggered by the proximity of predator or prey, or by making appropriate animal-like gestures. The physical and on-screen design of the watch gives the impression of an extension of the animal, so your arm becomes the elephant's trunk, the tiger's paw, or the snake's head and thereby moves the focus of the child's activity from the watch onto the entire body. Wild Watches allows children to play games, both alone and with friends. The games we explored and tested with children were new games and adaptations of old games but given new contexts and tech-enhanced twists; hand games, tag games, hide and seek games with names like Ant Race (cooperative play), Frog Hop (hot potato), Dragon Battle (strategy hand game), Virus (grouping and re-grouping tag), Bat Chase (sonic evasion), Snake, Mongoose, Bulldog (triangle tag), and Dolphin Treasure (hot and cold).

Why do I blog this? when I visited IDI last year I heard about this workshop and was looking forward to seeing what would emerge from it (in terms of interactive toy forecasts).

3D prints of your WoW avatar

Following on this morning's post about the connection between Bruce Sterling's Shaping Things and game design, I ran across this very interesting project about making 3D prints of Second Life or World of Warcraft avatars. It's based on Eyebeam's OGLE project:

OGLE (i.e. OpenGLExtractor) is a software package by Eyebeam R&D that allows for the capture and re-use of 3D geometry data from 3D graphics applications running on Microsoft Windows. It works by observing the data flowing between 3D applications and the system's OpenGL library, and recording that data in a standard 3D file format. In other words, a 'screen grab' or 'view source' operation for 3D data. The primary motivation for developing OGLE is to make available for re-use the 3D forms we see and interact with in our favorite 3D applications. Video gamers have a certain love affair with characters from their favorite games; animators may wish to reuse environments or objects from other applications or animations which don't provide data-level access; architects could use this to bring 3D forms into their proposals and renderings; and digital fabrication technologies make it possible to automatically instantiate 3D objects in the real world.
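The clever part of OGLE is the interception itself (shimming the system's OpenGL library to watch the vertex data stream); the "recording that data in a standard 3D file format" step at the end is straightforward by comparison. A toy Python sketch of that last step, dumping captured triangles to a Wavefront OBJ file (my illustration, not Eyebeam's code; the capture data is made up):

```python
def write_obj(triangles, path):
    """Dump captured triangles (3-tuples of (x, y, z) vertices) to a
    Wavefront OBJ file, a standard format that 3D toolchains read."""
    with open(path, "w") as f:
        for tri in triangles:
            for x, y, z in tri:
                f.write(f"v {x} {y} {z}\n")
        for i in range(len(triangles)):
            # OBJ vertex indices are 1-based.
            f.write(f"f {3*i + 1} {3*i + 2} {3*i + 3}\n")

# Hypothetical capture: a single triangle from the geometry stream.
write_obj([((0, 0, 0), (1, 0, 0), (0, 1, 0))], "avatar.obj")
```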

Example: 3D-printing your World of Warcraft character.

It can also be used to put avatars as mash-ups in Google Earth. Check their blog to stay tuned.

Why do I blog this? this is another interesting step towards having new artifacts generated from virtual content, as with spimes. It opens up lots of possibilities (especially if the avatars can be tagged). I'd be interested in printing my nintendogs, putting an arphid on it and leaving it in Geneva... and seeing what happens... especially if there could be some interactions with people passing by (with their cell phones)...

Deferring context-awareness elements to users?

"Intelligibility and Accountability: Human Considerations in Context-Aware Systems", Victoria Bellotti and Keith Edwards, Human-Computer Interaction, 16(2-4), 2001, pp. 193-212. The paper is a high-level computer science article about context-awareness and its corollary social issues. It focuses on the problem of defining which context-aware elements might be automatically extracted and shown to the users of interactive systems.

In particular, we argue that there are human aspects of context that cannot be sensed or even inferred by technological means, so context-aware systems cannot be designed simply to act on our behalf. Rather, they will have to be able to defer to users in an efficient and nonobtrusive fashion.

Why do I blog this? This is really one of the conclusions of my PhD research: certain processes (like location awareness) should not always be automated; sometimes deferring them to users can be more important, as we saw in CatchBob!.

BUT:

Further, experience has shown that people are very poor at remembering to update system representations of their own state; even if it is something as static as whether they will allow attempts at connection in general from some person (Bellotti, 1997; Bellotti & Sellen, 1993) or, more dynamically, current availability levels (Wax, 1996). So we cannot rely on users to continually provide this information explicitly.

This might depend on the ACTIVITY: in CatchBob!, people kept updating their positions on the map so that others could be aware of what they were doing, because it was relevant at the time and the cost of doing it was low.

Not directly related to my work, the paper also describes two principles for ubiquitous computing:

Intelligibility: Context-aware systems that seek to act upon what they infer about the context must be able to represent to their users what they know, how they know it, and what they are doing about it.

Accountability: Context-aware systems must enforce user accountability when, based on their inferences about the social context, they seek to mediate user actions that impact others.

Contextual Flickr Uploader: a step towards a camera blogject

Transcribing the notes from the blogject workshop, I connected the first project (a blogject camera) to a contextual Flickr uploader Chris recently sent us: the Context Watcher, developed by a team led by Johan Koolwaaij:

The Context Watcher is a mobile application developed in Python, and running on Nokia Series 60 phones. Its aim is to make it easy for an end-user to automatically record, store, and use context information, e.g. for personalization purposes, as input parameter to information services, or to share with family, friends, colleagues or other relations, or just to log them for future use or to perform statistics on your own life. The context watcher application is able to record information about the user's:

  • Location (GPS and/or GSM cell based)
  • Mood (based on user input)
  • Activities and meetings (based on reasoning)
  • Body data (based on heart and foot sensors)
  • Weather (based on a location-inferred remote weather CP)
  • Visual data (pictures enhanced with contextual data)

See the example here: for instance, on this blog post, the content is made up of a picture and contextual elements: "I visited Enschede (43.9%) and Glanerbrug (56.1%), mainly Home (56.2%) and Office (42.4%). I met lianne.meppelink (30.2%). My maximum speed was 23.0 km/h."
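Those percentage summaries are easy to imagine being computed from periodic location fixes. A hypothetical Python sketch (not Koolwaaij's actual code; the place names and sample counts are just invented to match the example above):

```python
from collections import Counter

def summarize_places(samples):
    """Render a list of place names (one per fixed-interval location
    reading) as percentage shares, Context Watcher style."""
    counts = Counter(samples)
    total = sum(counts.values())
    parts = [f"{place} ({100 * n / total:.1f}%)"
             for place, n in counts.most_common()]
    return "I visited " + " and ".join(parts) + "."

# 1000 hypothetical readings over a day:
print(summarize_places(["Glanerbrug"] * 561 + ["Enschede"] * 439))
# -> I visited Glanerbrug (56.1%) and Enschede (43.9%).
```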

Why do I blog this? this application is definitely one step towards having blogjects. It achieves the first part of the process, which is about having an object that grasps contextual elements and uploads them to the web (the second part would be to let objects have conversations).

What is impressive is "I met lianne.meppelink (30.2%)": the fact that it can notice the presence of others is another good step towards a blogject world.

Shaping Things and game design

Thanks Julian for pointing me to Raph Koster's thoughts about Bruce Sterling's Shaping Things. The blog post deals with the connections game designers can draw from the book. Here are some excerpts I found pertinent:

Gizmos are what we live in and around today: networked objects, highly featured and accreting more every day, user-alterable, and essentially interfaces more than objects. Those who use them are now end-users. (...) Our use of metrics in the game industry is nigh on nonexistent. We know close to nothing about how exactly people play our games. Despite the fact that we play on connected computers, running software that is full of event triggers that could be datamined, we still playtest by locking a few dozen people in a room and asking them what they think. Regarded in that fashion, it’s simply astounding that the games are working at all. (...) We tend to datamine a fairly good set of metrics from our games, but they are almost all aimed at tuning the game, rather than being aimed at understanding the player. One of the comments that Bruce makes about gizmos is that they invite the user into the process (...) The passive consumer is a dying breed. (...) Bruce goes on to discuss rapid prototyping, which he dismisses as primitive. His real goal is something he calls “fabbing,” which is basically the apotheosis of the current 3d printers. But it strikes me that just as virtual spaces with user modeling are pretty good pre-visualizers, it’s objects in a virtual world like Second Life that are really true spimes: ‘fabbed,’ in his sense, by being created just by specifying them; often higher in detail in the spec than can actually be rendered; networked and capable of intercommunication, tracking their own history, and so on; and even possibly transparent, in the event of the ability to copy some of the script code off of one.
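Koster's point about event triggers that could be datamined but aren't is quite concrete: even an append-only log of gameplay events per player would support the player-understanding analyses he says are missing. A hypothetical Python sketch of such instrumentation (event names and fields are invented):

```python
import json
import time

def log_event(stream, player_id, kind, **attrs):
    """Append one gameplay event as a JSON line, ready for later datamining."""
    record = {"t": time.time(), "player": player_id, "event": kind, **attrs}
    stream.write(json.dumps(record) + "\n")

# Hypothetical triggers fired during a play session:
with open("telemetry.jsonl", "a") as f:
    log_event(f, "p42", "quest_complete", quest="tutorial", duration_s=310)
    log_event(f, "p42", "death", cause="fall_damage", zone="harbor")
```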

Why do I blog this? the connection between the book and game design is not explicit of course, but Koster has interesting points, especially about active consumerism ('consumactor' as we saw at Lift06) and the potential of virtual worlds to be pre-spimes.

Wearable mobile communication and safety device

Via Medgadget, this wearable mobile communication and safety device for the elderly and for those prone to getting lost:

Sound the alarm, locate and communicate. Anytime, anywhere. That's the key to Tadiran LifeCare's SKeeper™ - a "peace of mind" product line designed to make life easier and safer for the elderly, chronically ill, children or lone workers, as well as for their relatives and caregivers. (...) Using its built-in speakerphone, SKeeper™ enables cellular voice calls to be made to pre-defined numbers (e.g. a relative or a family doctor) or to be received from any caller or from the remote monitoring center when in need. Text messages (SMS) can be sent to the remote center or to relatives in case of an emergency. (...) SKeeper™ can take advantage of mobile operators' location-based services, so that in the event the user wanders outside a specified zone (e.g. a neighbourhood or a school area), the system can immediately alert the monitoring center and/or send an SMS message to another mobile phone. Future versions of SKeeper™ will be GPS-enabled to provide a greater level of security. (...) Many of the device's functions, such as speed-dial numbers, authorized callers or preset text messages, can be remotely programmed by the monitoring center or by the users or their authorized relatives via a Web-based interface.
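The "wanders outside a specified zone" feature boils down to a geofence test on each location fix. A minimal Python sketch with a circular zone and haversine distance (the coordinates and radius are made up, and the real SKeeper presumably relies on the operator's cell-based location service rather than device-side GPS):

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine formula)."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def outside_zone(pos, center, radius_m):
    """True if the wearer has strayed outside the circular safe zone."""
    return distance_m(pos[0], pos[1], center[0], center[1]) > radius_m

# Hypothetical: alert if the wearer strays more than 500 m from home.
if outside_zone((46.210, 6.143), (46.200, 6.140), 500):
    print("alert monitoring center / send SMS to caregiver")
```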

Why do I blog this? the tool seems interesting and useful, the remotely-controlled interface is somewhat innovative, and the overall design is fancy. But hey, when I see this device, my first impression is not of an elderly-person tracker but rather of a game platform without any screen! A device that could allow people to collect objects around cities, trace GPS drawings, or run a pokecon application (there is a built-in cellular speakerphone in this device!)...

Vermersch's 'explicitation' interviewing technique

Today JB gave us a course on Vermersch's 'explicitation' interviewing technique (mostly used in France in the field of ergonomics and within the education system). Meant to elicit verbalisations of an activity, the idea of this technique is to favor evocation over rationalisation by the actor. Here is the process, in a nutshell:

  1. Contract between actor and observer: "if you agree, I will ask you to remember a specific moment...", "if there is something that you don't want to mention, don't tell it".
  2. Initial anchor: "put yourself back into the situation", "can you recall the moment when you were..." or "when you think of that moment, what was the first thing that came into your mind?" or fishing: "what is the first thing that came into your mind?". The point is to talk about a particular moment (the anchor); the interviewer can specify a moment or let the person choose one.
  3. Prompting: "when you [do] what are you doing?", "when you see X, what are you doing?", "when you say you did X, what did you do?": trying to identify when the discourse becomes general and asking the interviewee to be more specific about his/her action; the interviewer also has to avoid introducing his/her own presuppositions. Use the present tense, use temporal markers ("and then?", "and what happens next?"), or use spatial markers ("where are you when you do X?").

It's possible to use specific cues (as in NLP), like the interviewee's gaze, to see whether he/she is in evocation or not (when he/she stares into space).

Also more about this here.

Why do I blog this? even though I use other techniques (self-confrontation for instance), this kind of exercise is interesting for our next CatchBob! experiment, to reconstruct the game activity.

Robots as educational toys

The Robota project at EPFL is about using ROBOTA dolls, a family of mini humanoid robots, as educational toys. The dolls can engage in complex interactions with humans, involving speech, vision and body imitation. They have been used for instance in projects related to kids and autism (as I mentioned here). There are some intriguing videos, especially the ones about language acquisition (16.5 MB) or imitation of the user's gestures (5.8 MB).