
Visual Patterns and Communication for Robots

The Future Applications Lab in Gothenburg, Sweden, is involved in a very interesting project (from my point of view) called ECAgents (Embodied and Communicating Agents).

The project will investigate basic properties of different communication systems, from simple communication systems in animals to human language and technology-supported human communication, to clarify the nature of existing communications systems and to provide ideas for designing new technologies based on collections of embodied and communicating devices.

The project is a huge EU thing, but what the FAL is focusing on is investigating how such mobile communicating agents would become a natural part of our everyday environment. In the master's thesis proposals, there is a description of what they're up to:

We have previously developed a number of ideas for possible applications in the form of personas.

This thesis proposal is inspired by the persona Nadim. It is about developing a language for visual patterns using e.g. genetic programming, cellular automata, boids, diffusion-reaction, naming game or any other combination to visualize patterns on a small e-Puck robot. The robots should be able to develop as well as communicate such patterns through the language so that new and interesting patterns emerge from their perception of their environment and interaction with each other. The goal for the thesis is to either make a real demonstrator on the suggested platform (requires some previous knowledge about software implementation on embedded systems) or to make a simulated demonstrator based on the prerequisites.
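To give a flavor of what one of the techniques mentioned in the proposal can do, here is a minimal sketch (my own illustration, not the thesis work): a 1-D cellular automaton, where a few lines of code already generate rich visual patterns that could in principle be mapped onto a robot's LEDs or a simulated display. The choice of Rule 110 is arbitrary.

```python
# Minimal sketch of pattern generation with a 1-D cellular automaton.
# Rule 110 is just one example; any rule (or a 2-D automaton) would do.

RULE = 110  # the update rule, encoded as an 8-bit lookup table

def step(cells, rule=RULE):
    """Compute the next generation of a 1-D binary cellular automaton."""
    n = len(cells)
    out = []
    for i in range(n):
        # Neighborhood: left, self, right (wrapping at the edges).
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        out.append((rule >> index) & 1)
    return out

# Start from a single 'on' cell and print a few generations as a pattern.
cells = [0] * 31
cells[15] = 1
for _ in range(12):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

On a robot, each generation could drive a row of LEDs; exchanging rule numbers between robots would be one crude way to "communicate" a pattern language.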

Why do I blog this? I am less interested in the implementation and technical aspects than in the situatedness (or lack thereof) of communication between future robots/artifacts and human users. What happens during the interaction? What inferences do individuals make about objects, and vice versa? How could this be improved by creating new affordances?

Interview with an iRobot founder

An interesting interview with Helen Greiner, one of the founders of iRobot (the company behind the Roomba vacuum-cleaning robot as well as tactical military robots used in Iraq).

Knowledge@Wharton: I don't think anyone would object to having a robot vacuum the floor, but do you find resistance to robots as a concept -- doing tasks that humans have been doing? Is there a science fiction element of this that makes people nervous?

Greiner: I don't really think so. When computers first came out, you had a lot of people worried that computers were going to obsolete humans and that they were going to take over everything. So you had everything from [the movie] The Colossus Project to Hal in 2001. I think it's a way for society to work through their fears. Once people have a computer on their desk and they see what it's good at doing and, more especially, what it's not good at doing, they don't have the same fear anymore. It's the same with robots. Once people have a Roomba in their home and it's doing the sweeping and vacuuming for them, but they see the things it can't do yet, they really don't fear robots taking over the world.

"Naming" the object seems to be one interesting behavior that popped up:

The only thing in their experience that has acted that way has been a pet. So people actually start to name it. You don't see anyone name their toasters but a lot of people tell me they have named their Roomba.

I would just ponder this by saying that I've seen some friends (a few years ago, while living all together in a big condo) calling their old-school vacuum cleaner "Daisy". Was naming certain kinds of home artifacts already a trend?

It's also refreshing to hear what she says about how people tinker:

Knowledge@Wharton: Have you heard stories of what people have done with this?

Greiner: Well, a few stories. One [involved] making a webcam on wheels so you can control your robot through the Internet and see what the robot sees and hear what the robot hears as you drive it around. Somebody made a robotic plant-moving system, so plants can always be in the sun. Someone was talking about making a swimming pool-skimming robot. And most recently, just this past week, some hackers did a physical instantiation of the video game Frogger. Now we don't condone this type of activity [laughs], but it shows you just where creativity can go when you make a system open.

The openness of the system is indeed FUNDAMENTAL if you want creative things to happen.

Why do I blog this? Robots are an interesting domain where innovation is starting to appear, leaving the anthropomorphic paradigm behind to move closer to the pervasive computing world, in which objects are interconnected and open (so that people can modify them).

Welcome nabaztag

I recently bought a Nabaztag, and I find it quite nice with its glowing lights and very simple design. What I appreciated:

  • extra easy set-up (no problem with the WiFi)
  • a very calm ambient display
  • the package is quite empty, but the website is full of information, with informative PDF files (like color meanings, usage situations...)
  • the API is available so that people can create their own services
  • there is already a lively community of users, tinkerers
  • at first I was disappointed that the Nabaztag was mostly a recipient of messages (shown through light, sounds and ear movements), but it seems that it can perceive certain inputs (for instance, if you move its ears, it can send a message to the server).

What I found less good:

  • even though it's their business model, I am reluctant to pay for message services and subscriptions
  • to me, there should be more emphasis on the openness of the device (beyond the API), and I miss a social software dimension on the Nabaztag website. Chris has already used Ning to create a Nabaztag social platform.
  • the pictures on the box and on the website are quite weird; the large majority of people do not have such a cold home with empty tables... (of course the targeted group may have this, but...). For me, the Nabaztag lives in a messier environment: my office at home:

Why do I blog this? The object is interesting to me because it's not smart: it's a wireless-linked device that allows basic communication and interaction through light, sounds and ear movements. Currently, this guy can only interact with my computer (through the company's server) and cell phones. That's a cool feature: you can send SMS to your and your friends' rabbits. What is good is that it's a first step into the world of communicating artifacts. I feel more interested in this sort of device than in the locomotion of an AIBO (even though I am very curious about AIBO communication and interaction practices, especially the blog thing).

Ok, now let's take some time to understand the API.

As a user experience researcher, I am very intrigued by possible user interactions with the Nabaztag; currently there are more outputs than inputs, but using the ears could be a good way to interact with it (and consequently with other rabbits). Of course, I would have been happy to have proximity detection of objects and people in the vicinity, but I guess it's a matter of time (next version of the rabbit).
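To get a feel for what driving such a device over HTTP might look like, here is a hypothetical sketch. The endpoint and parameter names (`sn`, `token`, `posleft`, `posright`, `tts`) are assumptions for illustration, drawn from what I recall of Violet's API; check the official documentation before relying on them.

```python
# Hypothetical sketch of sending a command to a Nabaztag via HTTP.
# Endpoint and parameter names are assumptions, not verified against
# Violet's actual API documentation.
from urllib.parse import urlencode

API_URL = "http://api.nabaztag.com/vl/FR/api.jsp"  # assumed endpoint

def build_command(serial, token, **params):
    """Build the URL for one command (ear positions, text-to-speech, ...)."""
    query = urlencode({"sn": serial, "token": token, **params})
    return f"{API_URL}?{query}"
    # In real use: urllib.request.urlopen(url).read()

# Example: ask the rabbit to move its ears and speak a message.
url = build_command("0013D3XXXXXX", "1234567890",
                    posleft=5, posright=12, tts="hello from the API")
print(url)
```

The interesting design point is that everything goes through the company's server: the rabbit polls it, so "sending a command" really means leaving a message for the rabbit to pick up.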

More about it later.

Easy mobile-robots for PC enthusiasts

White Box Robotics is a group of visionaries that aims at becoming an industry leader and innovator of PC-based mobile robotic platforms for the entertainment, educational, and personal robotics industries.

White Box Robotics is setting the standard for an entirely new category within the mobile robotics industry. The company will deliver unprecedented value to enthusiasts, educators and OEM's through its PC-BOT platform. Our mission is to develop networked mobile robotic platforms that empower people by eliminating the difficult learning curve typically associated with building robots. (...) This technology allows anyone with a rudimentary understanding of assembling or upgrading PCs to build this new species of robot. The 9-series PC-BOT is a new and open standard which allows us to customize on a common platform, using mature, off-the-shelf parts. (...) The 914 also features integrated capabilities like vision-based navigation, object recognition, speech synthesis and speech recognition - all in an easy-to-use yet powerful point-and-click graphical user interface.

Check their manifesto on their blog.

Why do I blog this? I like this idea of easy access to assemble one's own robot.

Autotelematic Spider Bots

John Marshall sent me some information about this marvelous project: Rinaldo and Howard's Autotelematic Spider Bots: spider-like sculptures which interact with the public in real-time, moving around the gallery to find food sources and projecting images of what they can see onto the gallery walls.

Why do I blog this? First because I like Rinaldo's work, and also because those wandering robots seem interesting in terms of artificial-life thinking.

Human-robot interactions in the NYT

It seems that the NYT somehow covered the human-robot interaction conference.

If robots can act in lots of ways, how do people want them to act? We certainly don't want our robots to kill us, but do we like them happy or sad, bubbly or cranky? "The short answer is no one really knows what kind of emotions people want in robots," said Maja Mataric, a computer science professor at the University of Southern California. (...) There are signs that in some cases, at least, a cranky or sad robot might be more effective than a happy or neutral one. At Carnegie Mellon University, Rachel Gockley, a graduate student, found that in certain circumstances people spent more time interacting with a robotic receptionist — a disembodied face on a monitor — when the face looked and sounded unhappy. And at Stanford, Clifford Nass, a professor of communication, found that in a simulation, drivers in a bad mood had far fewer accidents when they were listening to a subdued voice making comments about the drive. (...) "People respond to robots in precisely the same way they respond to people," Dr. Nass said. A robot must have human emotions, said Christoph Bartneck of the Eindhoven University of Technology in the Netherlands. That raises problems for developers, however, since emotions have to be modeled for the robot's computer. "And we don't really understand human emotions well enough to formalize them well," he said.

Above all, I like this excerpt:

"If robots are to interact with us," said Matthias Scheutz, director of the artificial intelligence laboratory at Notre Dame, "then the robot should be such so that people can make its behavior predictive." That is, people should be able to understand how and why the robot acts.

Why do I blog this? I like it because it puts the emphasis on the importance of mutual modeling in social behavior; mutual modeling refers to the inferences an individual makes (attributions) about others in terms of their intents or their cognitive and emotional states. The quote above hence points to the need to improve the mutual-modeling process between humans and robots. Another intriguing issue is that people start projecting onto or anthropomorphizing the robotic artifact, as they do with pets. I am interested in this because the blogject concept might lead to similar situations, in which people will have to assign certain meanings to the blogject's agency.

GPS/Wifi roboduck fleet for marine sensing and sampling

Roboduck is a project led by Gaurav S. Sukhatme. It's actually a fleet of robotic air boats which serve as a test bed for evaluating algorithms including bacterial navigation for marine sensing and adaptive sampling.

There is a need to provide a platform for better monitoring and sampling in Marine environments. Such a platform should be able to withstand the highly dynamic nature of such an environment as well as cope with its vastness. The platform should be simple and easily scalable. A platform of this type would provide the scientists an invaluable tool in order to further the marine research by monitoring phenomena of biological importance. As part of our research, we are building a fleet of autonomous roboducks (robotic air boats) for in-situ operation (data collection and analysis) in marine environments. The platform would support a variety of sensor suites and at the same time be easy to operate. It can operate in both exploration mode and intelligent mode. It can also collaborate (via communication) with other entities (sensor nodes) in the local neighborhood making intelligent decisions.
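The "intelligent mode" and neighborhood collaboration described above can be illustrated with a toy adaptive-sampling rule (my own sketch, not the Roboduck team's actual algorithm): each boat repeatedly moves to the adjacent grid cell it has sampled least, so the fleet spreads its measurements over the area.

```python
# Toy sketch of adaptive sampling (not the Roboduck project's algorithm):
# a boat on a grid repeatedly moves to the adjacent cell it has sampled
# the least, a crude exploration heuristic.

def next_move(pos, visits, grid_size):
    """Pick the adjacent cell (or current one) with the fewest visits."""
    x, y = pos
    candidates = [(x + dx, y + dy) for dx, dy in
                  [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
                  if 0 <= x + dx < grid_size and 0 <= y + dy < grid_size]
    # Greedy choice: least-sampled cell first (ties broken by list order).
    return min(candidates, key=lambda c: visits.get(c, 0))

# Simulate one boat for a few steps on a 3x3 grid.
visits, pos = {}, (0, 0)
for _ in range(6):
    visits[pos] = visits.get(pos, 0) + 1
    pos = next_move(pos, visits, 3)
print(sorted(visits))
```

With several boats sharing the `visits` map over the radio link, the same rule becomes a (very naive) form of the collaborative sampling the project describes.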

Why do I blog this? This is close to the blogject idea (context-aware device + sensors). It's an example of a network of sensing objects, useful in the marine context.

A robot powered with flies

(Via social fiction) While San Francisco is interested in turning dog poo into power, some other folks have designed a robot that does not require batteries or electricity to power itself; instead, it generates energy by catching and eating houseflies.

Dr Chris Melhuish and his Bristol-based team hope the robot, called EcoBot II, will one day be sent into zones too dangerous for humans, potentially proving invaluable in military, security and industrial areas. (...) The EcoBot II powers itself in much the same way as animals feed themselves to get their energy, he said. At this stage, EcoBot II is a "proof-of-concept" robot and travels only at roughly 10 centimeters per hour. (...) The EcoBot II uses human sewage as bait to catch the insects. It then digests the flies, before their exoskeletons are turned into electricity, which enables the robot to function.

(Image taken from Der Spiegel)

A few years ago, it was just a project, and now it works...

An autonomous robotic fish

Less sexy than an AIBO but still nifty, this autonomous robotic fish seems interesting. Designed by Dan Massie, Mike Kirkland, Jen Manda and Ian Strimaitis.

An autonomous, micro-controlled fish was designed and constructed using sonar to help guide it in swimming. It was predetermined that constructing a mechatronic fish would be a large and demanding project due to the complex shape of a fish body, the unfamiliar territories of sonar sensing, the intricacies of fluid propulsion, and the challenge of keeping submerged electronics dry. However, the team was willing to put in a lot of time and produced an exceptionally successful first prototype by the name of Dongle.

The most important part is the design and construction of this robotic pet: using soft clay, a tail servo, microcontrollers...

Workshop about Human-Robot Interaction

In the context of the Human-Robot Interaction conference (HRI2005), there is an intriguing workshop, the "HRI Young Researchers Workshop". Some of the topics addressed there that I find interesting for my research practice:

  • Lilia Moshkina - Experimenting with Robot Emotions: Trials and Tribulations
  • Julie Carpenter - Exploring Human-Centered Design in Human-Robot Interaction
  • Sara Ljungblad - Developing Novel Robot Applications for Everyday Use - Users: What do we need to know about them?
  • Marek Michalowski - Engagement and Attention for Social Robots
  • Kristen Stubbs - Finding Common Ground: A Tale of Two Ethnographers, Seven Scientists, Thirteen Engineers, and One Robot

Why do I blog this? These kinds of topics are important in the sense that they will eventually lead to issues raised by interactive toys, the merging of video games with toys, and of course the blogject concept... I am interested in experiences and field studies about robots (not so much about affective behavior but rather about how the robot might disrupt human activities/sociocognitive processes, or the spatial issues related to robot/human interactions).