robot

A robot called Gerty

Finally had some time to watch Moon by Duncan Jones yesterday evening. Certainly a good sci-fi movie, with plenty of implications to ruminate on. Slow, and with a nice score. I found the props quite curious and not necessarily super showy.

One of the most intriguing features of the movie is certainly GERTY, a robot voiced by Kevin Spacey. Based on the Cog project, GERTY exists both as a physical prop for static scenes and as CG when it moves around.

A convincing character, GERTY has limited AI, as the director discussed in Popular Mechanics:

"There is limited AI. GERTY is not wholly sentient. He really is a system as opposed to a being in his own right--that was one of the things I wanted to get across. The audience, and the different Sams, bring their own baggage to GERTY. They're the ones who anthropomorphize him and basically make him out to be more than he is. GERTY's system is very simple: He's there to look after Sam and make sure that he survives for 3 years. That's it. When you start watching the film, you're already making unwarranted assumptions about GERTY because of the HAL 9000 references and Kevin Spacey's slightly menacing voice. That's what the Sams do as well. The company itself, Lunar Industries, is nefarious. GERTY is not. He's doing his job. He has conversations with the company but he doesn't tell Sam because he's programmed not to. It's as simple as that. (...) The idea was to create a machine that was incorporating more than one type of sense data. So it had cameras for eyes, tactile fingertips and a moving robotic arm. It had an audio capture system. It was basically taking all of these various forms of data, giving it the eyes to see something and have the arm reach out and touch it in the right place"

See also some interesting elements about him from this interview in fxguide.

Perhaps the most interesting aspect of GERTY (IMHO) is the smiley-face display it uses to express its feelings. This little screen conveys the robot's emotions in a very basic way, with different permutations. Here again, it's good to read the director's intent:

"I use a lot of social networking sites. I’m on Twitter all the time. I use all these various forms of networking, including the text version of Skype. I tend to use smiley faces to make sure people know that I’m joking. That’s my own reason for using it on Gerty. I also like the idea that Gerty’s designed by this company which doesn’t have much respect for Sam and treats him in a patronizing way. So they use smiley faces to communicate with him."

I really liked the way the smileys are used: a sort of simplistic (and patronizing, as he mentioned) representation of an assistant, very much reminiscent of Clippy. This use of smileys reminded me of the Uncanny Valley and of this excerpt from Scott McCloud's Understanding Comics: The Invisible Art:

Scott McCloud

For McCloud, a smiley face is the ultimate abstraction because it could potentially represent anyone. As he explained, "The more cartoony a face is…the more people it could be said to describe". Besides, the anthropomorphism here is really curious because the robot design combines two features: the smiley face (with eyes and a mouth) and a camera. It's quite funny because in lots of sci-fi movies/comics, the camera looks like an eye and is sometimes perceived by people as having the same function. In Moon, the combination of the camera and the smiley face makes GERTY very quirky.

Why do I blog this? trying to make some connections between this movie I saw and some interesting elements about robot design.

Robot exhibit at the Design Museum in Zurich

Robots

Went to Zurich last Wednesday for the robot exhibit at the Design Museum. Called "Robots - From Motion to Emotion?", it is meant to give an overview of robotics research, presenting robot highlights (ASIMO, nanorobots, the robotic jockey) and addressing issues such as why robots are accepted or rejected and which characteristics determine people's relationship to machines.

However, the part that attracted my interest was the weird desk of a robot designer:

Robot designer desk

It's actually a "staged mess", presumably meant to show how robot design is grounded in specific references (books, pictures, newspaper clippings), artifacts (computers, electronic and electrical tools) and prototypes. Unfortunately, this part was not documented, so I was left with my own musings when examining it. If you look at the books in the picture below, you can see that the references chosen range from "The Buddha in the Robot" to "Y2K" or "Action Perception" and Charles Stross's "Singularity". I don't know what led to this choice, but there were also various pieces by Asimov that I haven't captured: obviously the bible for robot designers/fans (that said, I am often mesmerized by the preponderance of Asimov in this field; there might be a lot to do in terms of non-Asimovian robot design, as Frederic highlighted already).

Robot designer desk

The office floor interestingly features cat food and a cat food dispenser, which may attest to the importance of animal proximity in the robot design process. Perhaps it's some sort of hint about the extent to which creating a bot needs a metaphor drawn from living beings:

Robot designer desk

Why do I blog this? the whole exhibit gives an interesting overview of the robot scene, but I was a bit disappointed by the design/art part, since there's a lot going on in this field; in that respect, it was a bit conservative. And as usual with robots, there is a strong emphasis on locomotion as opposed to other characteristics that I would find more intriguing to explore (agency, learning from the history of interactions, networked capabilities, etc.).

Dog vacuum cleaner from 1972

It seems that people have wanted to combine toys and robots for quite a long time, as attested by this intriguing dog-shaped vacuum cleaner patented in 1972:

"A toy dog closely resembling a real dog and having a hollow interior in which is mounted a vacuum cleaner having a suction hose which is retractable from the tail end of the dog. This enables vacuuming a dog after a hair cut and grooming without causing fear to the dog, inasmuch as the vacuum cleaner noise is greatly muffed by such enclosure. The vacuum cleaner is convertible to a blower and air issuing from the tail end can be heated so as to serve as a dryer."

Why do I blog this? curiosity towards robots and their combination with familiar representations. The dog is interesting as it is a pet (easily accepted by owners), but it's curious to think of a furry device cleaning things up. It's also pertinent to see how long this sort of artifact takes to be adopted... ending up with Roombas, which are a bit more minimalist.

Wizkid: a computer with a neck

My morning commuting partner Frederic Kaplan finally revealed his latest project, called Wizkid (conducted with his team). In his words:

"Wizkid is a novel kind of computer permitting easy multi-user standing interactions in various contexts of use. The interaction system does not make use of classical input tools like keyboard, mouse or remote control, but features instead a gesture-based augmented reality interaction environment, in conjunction with the optional use of convivial everyday objects like books, cards and other small objects. (...) Wizkid could be described as a computer display with a camera mounted on top, fixed on a robotic neck. It looks like a computer, but it is a robot that can gaze in particular direction and engage in face-to-face interaction."

Martino d'Esposito, who took care of the design aspects, defines it as "a computer with which we could communicate in a more natural manner, but which would still not look “human”".

Why do I blog this? I find the project interesting because it shows the convergence between computers/ubiquitous computing and robots. Plus, I quite like the approach Frederic describes: "despite some successful results this kind of natural interaction systems has tended to be used only in the domain of interaction with anthropomorphic or zoomorphic robots and progress in these fields has not impacted more mundane kinds of computer systems". Furthermore, the interaction modes with that device are very intriguing, in particular the "halo" mode (see the description in the interview). From the output point of view, the interesting part is the "body language" used by the Wizkid to express interest, confusion, and pleasure. To some extent it forces me to ask questions close to the ones I have to address with Wii gestures, except that in the Wizkid case it's about output gestures (and not input gestures as with the Wiimote/Nunchuk).

For those who want to see it, Wizkid is part of MoMA's Design and the Elastic Mind exhibit, running from February 24 to May 12, 2008.

Phlogiston-debunking about robotics

I went back to this 2005 interview with Bruce Sterling about robots and found some intriguing points:

"AM: How do you think robots will be defined in the future?

BS: I'd be guessing that redefining human beings will always trump redefining robots. Robots are just our shadow, our funhouse-mirror reflection. If there were such a thing as robots with real intelligence, will, and autonomy, they probably wouldn't want to mimic human beings or engage with our own quirky obsessions. We wouldn't have a lot in common with them--we're organic, they're not; we're mortal, they're not; we eat, they don't; we have entire sets of metabolic motives, desires, and passions that really are of very little relevance to anything made of machinery.

AM: What's in the future of robotics that is likely very different from most people's expectations?

BS: Robots won't ever really work. They're a phantasm, like time travel or maybe phlogiston. On the other hand, if you really work hard on phlogiston, you might stumble over something really cool and serendipitous, like heat engines and internal combustion. Robots are just plain interesting. When scientists get emotionally engaged, they can do good work. What the creative mind needs most isn't a cozy sinecure but something to get enthusiastic about.

AM: When will robots be allowed to vote?

BS: At this point, I'd be thrilled to see humans allowed to vote."

Why do I blog this? Only because I liked his description and the phlogiston-debunking tone of the interview.

Nabaztag sales figures

Quick note about the Nabaztag, launched in 2005. I found some figures that might be of interest:

50,000 rabbits sold as of June 2006 (Source: Libération)
135,000 rabbits sold as of May 2007 (Source: Le Monde)

It's a pity the figures are only for France, but it gives an interesting picture of how this type of communicating object sells. Sony sold 200,000 AIBOs worldwide (Source). And yes, I know it's like comparing apples and oranges, but it gives a picture of the number of devices out there as well as how things evolve over time.

DIY robotics

Some excerpts from an article in Scientific American, "Open Source Hardware Makes its Debut in Robot Internet Mashup". It's about the "Telepresence Robot Kit" (TeRK), a sort of DIY robotics platform developed by a group led by Illah Nourbakhsh (professor of robotics at Carnegie Mellon University in Pittsburgh).

"TeRK program aims to allow anyone to use it as a control center for just about any robot they can imagine. Initially, though, Qwerk will be used for teaching and for projects that are "just for fun."

Online, TeRK users can access complete parts lists for robot kits that range from easy (think a three-wheeled spybot with a camera that can be controlled from any Web browser, and which can be built in a couple of hours) to ambitious: LeGrand envisions an arm on a Qwerk-powered robot that would allow it to carry out such functions as pressing elevator buttons in order to navigate entire office buildings. All of the software that runs Qwerk is open source, which makes TeRK incredibly flexible in the hands of the technically savvy. (...) "We also want to have people [akin to mechanics who] go under the hood of the car,'' he says. "At all levels we reveal enough of the interior detail so that users can go in and program at the lowest level they want.""

(pictures taken from TeRK website)

Why do I blog this? observing the robotics-ubicomp convergence, the advent of such kits seems interesting. Besides, I quite like this DIY, "reveal the interior" concept.

Nabaztag and Furby

Feeling that robots and ubiquitous computing are converging toward a new type of artifact, I find that filling the environment with instances of these systems is a very curious experience. This is why I bought a Nabaztag last year and a Furby recently. The former is often put in the ubicomp/communicating objects category, whereas the latter is seen as a toy or a robot for kids (although its locomotion is pretty limited). IMHO, they belong to the same phylum.

f+n

The common feature I like in both is the ability to express things by talking: the Nabaztag tells the news, (short) weather forecasts, messages from friends and random thoughts (and moves its ears during tai-chi exercises), while the Furby tries to interact with me by saying words (in Furbish or French): sometimes at random, sometimes because I asked her a question (yes, my Furby is a "she"). I don't have the latest Nabaztag version that has a microphone, but it does not seem to interact like a Furby: the mic can only be used by pressing the button on the rabbit's head and asking for specific things (like radio, weather...). Even when the words they say are random, the experience is intriguing (especially when you have people at home who do not know what-the-hell-is-this-crap-that-screams). Generally the Furby is more talkative than the Nabaztag, because it's programmed that way and because the microphone allows her to react. Although the interactions are sporadic, they are sufficient to spark discussion among the people around: there is a sort of sociability generated by the artificial pet's utterances.

What is great is when the pet starts to order things or complains about the situation. This is often the scenario in which bystanders react most strongly to the machine ("what? why is he asking us to do that?", "hey? shut up") and sometimes talk to the pet (even to the Nabaztag, which cannot react accordingly). However, it's not the persuasive aspect of the artificial pet that is interesting: it's not because the Nabaztag or the Furby is funny or absurd when it complains that I, as a user, want it to remind me to water the plants or go eat. It's rather that their utterances generate a discourse around them, often about their behavior, programming or evolution.

[Besides, it's curious to put them close to each other and watch the Furby answer the Nabaztag (unfortunately the rabbit cannot reply). The next step would be to hook a chatbot to an artificial pet...]
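
Wiring the two up would be conceptually simple; here is a sketch of the glue loop, where listen(), chatbot_reply() and speak() are all hypothetical stand-ins, since neither the Nabaztag nor the Furby exposes such an API out of the box:

```python
import random
import time

# Hypothetical glue between an artificial pet and a chatbot. All three
# interface functions are stand-ins for things the real devices don't
# expose: microphone capture, a chatbot backend, and the pet's TTS.

CANNED = ["why?", "tell me more", "I disagree", "feed me"]

def listen():
    """Stand-in for capturing and transcribing a nearby utterance."""
    return random.choice(["hello", "", "weather?", ""])

def chatbot_reply(utterance):
    """Stand-in for any chatbot backend; here, a canned response."""
    return random.choice(CANNED)

def speak(text):
    """Stand-in for pushing text to the pet's text-to-speech channel."""
    print(f"pet says: {text}")

def pet_loop(turns=5):
    for _ in range(turns):
        heard = listen()
        if heard:                        # only respond to actual utterances
            speak(chatbot_reply(heard))
        time.sleep(0.1)

pet_loop()
```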

RoboDS

Turn your Nintendo DS into a mobile robot for $99 with RoboDS (see video):

"his is a pre-order for RoboDS kit for DSerial2 multiple-interface card for NDS. It is an open robot platform for NDS that can be controlled via NDS Wi-Fi connection using a web browser interface. Install your own wireless camera onto RoboDS and monitor your home remotely! Wire-up your own laser pointer for extra flair, but use it responsibly!"

Why do I blog this? this is the sort of thing I qualify as "intriguing". But why the hell is this interesting? What is funny is that the majority of websites and blogs that deal with gadgets never stress why such artifacts have a potential value (apart from their engineering/technical value). So, a few points: (1) It raises the question of "robot" identity: why is the RoboDS a robot? In this case it's called so because the wheels allow the DS to move around. Well, if a robot is defined by locomotion, that's a bit limited and sad; plus it does not account for the current convergence between robotics and ubiquitous computing. (2) Modularity: the idea of turning a mobile device into something more complex through such an add-on is intriguing. Building artifacts or services on top of other artifacts is pertinent and curious, especially when done in a DIY way.

Dream-inspired algorithms and robots

Speaking about replay tools and information gathered in the past (see previous post), this paper entitled "What Do Robots Dream Of?" (by Christopher Adami) features this curious bit:

"How would dream-inspired algorithms work in terra incognita? A robot would spend the day exploring part of the landscape, and perhaps be stymied by an obstacle. At night, the robot would replay its actions and infer a model of the environment. Armed with this model, it could think of--that is, synthesize--actions that would allow it to overcome the obstacle, perhaps trying out those in particular that would best allow it to understand the nature of the obstacle. Informally, then, the robot would dream up strategies for success and approach the morning with fresh ideas."

This dream inspiration is based on the discovery of cognitive processes that occur during sleep:

"There is now strong evidence in human sleep research showing that performance on motor (1) and visual (2) tasks is strongly dependent on sleep, with improvements consistently greater when sleep occurs between test and retest. This is generally believed to be related to neural recoding processes that are possibly connected to dreaming during sleep (3). However, when one considers human dreaming, it is not a simple replay of daily scenarios. It has complex, distorted images from a vast variety of times and places in our memory, arranged in a random, bizarre fashion (4). If we are to model such activity in robots, we would need to have some form of "sleep" algorithm that randomizes memory and combines it in unique arrays."

Why do I blog this? gathering some thoughts about histories of interaction and the usage of asynchronous data to foster more adaptive behavior.

The uselessness principle

Free creatures: The role of uselessness in the design of artificial pets by Frédéric Kaplan is a very relevant short paper, which postulates that the success of existing artificial pets relies on the fact that they are useless.

Frédéric starts by explaining that the difference between an artificial pet and a robotic application is that nobody takes it seriously when an AIBO falls; it's rather entertaining.

Paradoxically, these creatures are not designed to respect Asimov’s second law of robotics: ‘A robot must obey a human being’s orders’. They are designed to have autonomous goals, to simulate autonomous feelings. (...) One way of showing that the pet is a free creature is to allow it to refuse the order of its owner. In our daily use of language, we tend to attribute intentions to devices that are not doing their job well.

What is very interesting in the paper is that the author states that giving the robot this apparent autonomy is a necessary (but not sufficient) feature for the development of a relationship with its owner(s).

Then comes the uselessness principle:

The creature should always act as if driven by its own goals. However, an additional dynamics should ensure that the behavior of the pet is interesting for its owner. It is not because an artificial creature does not perform a useful task that it can not be evaluated. Evaluation should be done on the basis of the subjective interest of the users with the pet. This can be measured in a very precise way using the time that the user is actually spending with the pet. (...) be designed as free ‘not functional’ creatures.

Why do I blog this? first, because I am digging more and more into human-robot interaction research, since I feel the interesting convergence between robotics and pervasive computing (which may eventually lead to a new category of objects a la Nabaztag). Second, because I am cobbling together some notes for different projects for the Near Future Laboratory (pets, geoware).

Elmer and Elsie: Machina Speculatrix

It's always good to think about past instances of technological artifacts. For example, look at the two turtles created by Grey Walter. Also called "Machina Speculatrix", the turtles have a curious history:

Over fifty years ago W. Grey Walter started building three wheeled, turtle like, mobile robotic vehicles. These vehicles had a light sensor, touch sensor, propulsion motor, steering motor, and a two vacuum tube analog computer. Even with this simple design, Grey demonstrated that his turtles exhibited complex behaviors. He called his turtles Machina Speculatrix after their speculative tendency to explore their environment. The Adam and Eve of his robots were named Elmer and Elsie ( ELectro MEchanical Robots, Light Sensitive. ) (...) His robots were unique because, unlike the robotic creations that preceded them, they didn't have a fixed behavior. The robots had reflexes which, when combined with their environment, caused them to never exactly repeat the same actions twice. This emergent life-like behavior was an early form of what we now call Artificial Life.
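
As a back-of-the-envelope caricature (mine; the originals were analog vacuum-tube circuits, not software), the reflex logic could be summarized roughly like this, with all thresholds invented:

```python
# Rough software caricature of a Walter turtle's two reflexes (my own
# approximation of the described behavior, not Walter's circuit).

MODERATE, TOO_BRIGHT = 0.2, 0.8   # invented light thresholds

def turtle_step(light_level, light_bearing, touching):
    """Return (drive_speed, steer_rate) from the sensor readings.
    light_bearing: direction of the brightest light in radians,
    relative to the shell's heading; touching: shell contact switch."""
    if touching:
        # Touch reflex: back off and turn, overriding phototaxis.
        return -0.5, 1.0
    if light_level < MODERATE:
        # Explore: wander with a constant scanning rotation.
        return 0.5, 0.6
    if light_level > TOO_BRIGHT:
        # Strong light repels: steer away from the source.
        return 0.5, -light_bearing
    # Moderate light attracts: steer toward the source.
    return 1.0, light_bearing

print(turtle_step(0.5, 0.3, False))  # (1.0, 0.3): head toward the lamp
```

Even such a toy rule set stops repeating itself once the environment (including the light the robot itself carries) feeds back into the sensors, which is exactly the emergent quality described above.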

Grey reported on the paths the robots followed.

Why do I blog this? because these robots look amazing for different reasons: (1) they're not that zoomorphic (I don't believe the added value of a robot lies in isomorphism with an animal); (2) the behavior of the robots is based on an artificial intelligence model that I find more interesting than that of other devices.

BallBot: a mobile robot that has only a single spherical wheel.

Just stumbled across the Ballbot (developed by Carnegie Mellon University researchers led by Professor Ralph Hollis): a battery-operated omnidirectional robot that moves by balancing dynamically on a single urethane-coated metal sphere:

Significant insights will be gained from this research toward producing agile motive platforms which in the future could be combined with the research community's ongoing work in perception, navigation, and cognition, to yield truly capable intelligent mobile robots for use in physical contact with people. Such robots could provide many useful services, especially for the elderly or physically challenged, in their everyday work and home environments. Many other uses such as entry into hostile environments, rescue in buildings, and surveillance to safeguard people or property can be envisioned.
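
To get a feel for why balancing on a single sphere is hard, here is a toy one-dimensional sketch (emphatically not CMU's actual controller, which is far more sophisticated): a PD loop that accelerates the ball toward the direction of lean, inverted-pendulum style. Gains and geometry are invented illustration values.

```python
import math

# Toy 1-D "ball-bot" balance loop (illustration only): treat the body as
# an inverted pendulum and accelerate the ball toward the direction of
# lean so the contact point stays under the center of mass.

KP, KD = 40.0, 8.0          # invented PD gains
G, HEIGHT = 9.81, 1.0       # gravity (m/s^2), pendulum height (m)
DT = 0.01                   # control period (s)

def simulate(theta=0.05, theta_dot=0.0, steps=500):
    """theta: lean angle (rad). Returns the trajectory of lean angles."""
    trajectory = []
    for _ in range(steps):
        # PD control: ball acceleration proportional to lean and lean rate.
        ball_accel = KP * theta + KD * theta_dot
        # Linearized pendulum dynamics: gravity tips the body over,
        # accelerating the ball underneath it rights it again.
        theta_ddot = (G * math.sin(theta) - ball_accel * math.cos(theta)) / HEIGHT
        theta_dot += theta_ddot * DT
        theta += theta_dot * DT
        trajectory.append(theta)
    return trajectory

print(f"final lean: {simulate()[-1]:.5f} rad")  # decays toward 0
```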

Why do I blog this? I have to admit that I like non-anthropomorphic bots (even though I am crazy about BigDog).

Water suit for AIBO

Picture of the AIBO water suit (taken a while ago but shown yesterday at the Sony CSL Paris 10-year event in Paris); it was actually designed by students from ECAL:

AIBO water suit

This was part of the exhibit "A Robot's Playroom" (Frederic Kaplan, Pierre-Yves Oudeyer, Martino d'Esposito and ECAL Design Students):

In the exhibition, an intriguing set of objects is displayed that together make up a "playroom" for the Sony AIBO. This is the result of the work of design students from ECAL, supervised by industrial designer Martino d'Esposito and CSL researcher Frédéric Kaplan. These objects offer new learning opportunities for AIBO. At Sony CSL, Frédéric Kaplan and Pierre-Yves Oudeyer have been experimenting for several years with curiosity-driven robots. They were looking for novel environments that the robots could explore. Creating such a playroom was an exciting exercise for designers who are usually accustomed to dealing with human needs only. Thanks to the creativity of the ECAL students, AIBO can now draw, ride a bike, control switches, pick up everyday objects, watch itself in a mirror, and even more.

Why do I blog this? that stuff is intriguing at first glance but the whole point is really to see the robot acting in its own playroom, which makes sense. Customizing it with new tangible artifacts is then a way to put the robot in a new environment and see how curiosity/robot enaction works out in that context.

Robot painters

(Via Laurent): Leonel Moura is an artist interested in robot painters. For instance, there is this Robotic Action Painter:

RAP is a new generation of painting robots designed for Museum or long exhibition displays. It is completely autonomous and needs very little assistance and maintenance. RAP creates its own paintings based on an artificial intelligence algorithm, it decides when the work is ready and signs in the right bottom corner with its distinctive signature. The algorithm combines initial randomness, positive feedback and a positive/negative increment of 'color as pheromone' mechanism based on a grid of nine RGB sensors. Also the 'sense of rightness' - to determine when the painting is ready - is achieved not by any linear method, time or sum, but through a kind of pattern recognition system.
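
The description reads like an ant-colony process. Here is a minimal sketch of how I understand it (my reconstruction, not Moura's code): random seeding, then positive feedback where paint already on the canvas attracts more paint, with a crude coverage test standing in for the 'sense of rightness'. Grid size, probabilities and the stopping rule are all guesses.

```python
import random

# My reading of the "color as pheromone" idea, not Moura's actual code:
# a random walker deposits paint, and existing paint nearby (sensed on a
# 3x3 patch, like the grid of nine RGB sensors) attracts more paint.

SIZE = 64                 # invented canvas size (cells)
ATTRACTION = 0.15         # invented positive-feedback strength

def nearby_paint(canvas, x, y):
    """Total paint in the 3x3 neighborhood around (x, y)."""
    return sum(canvas[(x + dx) % SIZE][(y + dy) % SIZE]
               for dx in (-1, 0, 1) for dy in (-1, 0, 1))

def paint(steps=200_000):
    canvas = [[0.0] * SIZE for _ in range(SIZE)]
    x = y = SIZE // 2
    for _ in range(steps):
        x = (x + random.choice((-1, 0, 1))) % SIZE   # initial randomness:
        y = (y + random.choice((-1, 0, 1))) % SIZE   # a plain random walk
        # Positive feedback: the more paint nearby, the likelier to add more.
        if random.random() < 0.01 + ATTRACTION * nearby_paint(canvas, x, y) / 9:
            canvas[x][y] = min(1.0, canvas[x][y] + 0.3)
        # Crude stand-in for the "sense of rightness": declare the painting
        # ready once clusters cover a fixed fraction of the canvas.
        if sum(map(sum, canvas)) > 0.15 * SIZE * SIZE:
            break
    return canvas
```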

But my favorite is certainly "The Iconoclast Robot" by Leonel Moura (presented at SHIFT 2006).

Why do I blog this? autonomous activity created by robots is interesting to observe: what happens when machines "decide what to do for themselves"? This kind of principle is used in AI (problem-solving...) and it's now more and more common to let robots draw. Besides, the look of the Iconoclast Robot is superb.

Frisbee-shaped robots

Via: among some curious new defense technologies, there is this "lethal frisbee":

Triton Systems, Inc. of Chelmsford MA proposes to develop a MEFP-armed Lethal Frisbee UAV, whose purpose is to locate defiladed combatants in complex urban terrain and provide precision fires to neutralize these hostiles with minimum hazard to friendly forces or bystanders. (...) Both tele-operated (man-in-the-loop) and autonomous modes of operation will be provided, through wireless links to standard tactical data systems. Range, payload, and maneuverability will be tailored to the missions defined during requirements studies

Why do I blog this? curious leisure objects can give rise to big weapons. Is it the future of drones?

Human-robot interactions: dance

Fumihide Tanaka, Javier R. Movellan, Bret Fortenberry and Kazuki Aisaka, "Daily HRI Evaluation at a Classroom Environment – Reports from Dance Interaction Experiments", Proceedings of the 1st Annual Conference on Human-Robot Interaction (HRI 2006), pp. 3-9, Salt Lake City, U.S.A., March 2006.

An interesting paper that reports on a study about human-robot interactions:

In this paper we present preliminary results on a study designed to evaluate an algorithm for social robots in relatively uncontrolled, daily life conditions. (...) The goal of the project is to explore the use of interactive robot technologies in educational environments. To this effect, two robot platforms, RUBI and QRIO, are being tested on a daily basis for prolonged periods of time. (...) One of QRIO’s most striking skills involves motion generation such as dancing. QRIO is endowed with various choreographed dance sequences, and is also capable of mimicking the motion of its human partner in real-time

What is interesting to me is how the authors experimented with "different methods for evaluating and learning about the interaction developed between the children and QRIO". The paper reports this evaluation of the daily dance interaction using qualitative methodologies (coding interactions) and quantitative techniques (counting diverse indexes).

Why do I blog this? since pervasive computing, tangible interfaces, everyware, blogjects and all this crowd are going to converge, this kind of research is more and more interesting to me, both from the methodological and the design point of view. Issues like artifact affordances and attributions might then converge.

World Robotics 2006

IFR Stat is going to release its World Robotics 2006 report. It usually gives an overall picture of the robot world (forecasts, analyses of robot densities, studies on the profitability of industrial robots, service robots). It's mostly about industrial robots (handling, welding, assembly, dispensing...), showing where the big money is.

I looked at it wondering whether some folks work on mechas/robots for amusement parks or on art pieces using them.

Social communication "eyeball" robot

Via News.3yen, this incredible Muu Socia developed by ATR and Systec Akazawa, described by news.3yen as a "social communication robot":

The website claims that its “purpose is to make the existence consciousness of the person reconfirm who touches the Muu” …whatever the hell that means. The eyeball robot is aimed for RESIDENTS in nursing facilities and the like. The Muu has a general-purpose design which can be used as a receptionist or companion to the autistic using its ability to recognize person’s faces and voices and answer questions. (...) “Muu Socia has voice recognition, voice synthesis, speech processing and face recognition capabilities. And it starts bouncing around when something obstructs its view.”

A video about it here (.WMV, 5Mb).

Why do I blog this? yet another curious non-anthropomorphic robot-like device a la Nabaztag. Occurrences of such artifacts are interesting to me because they show the convergence between pervasive computing and robots. What about the user experience of such devices?