A remote control still wrapped after months... used by one of my acquaintances... It's obviously not Christo-related.
For Rez fans
Rez is one of those fabulous video games with a nice design. It was created by Tetsuya Mizuguchi. His blog is for Japanese-fluent geeks. Anyway, there is an interview with him by Tokyopia which is worth reading.
TOKYOPIA: Ok. Meteos. You are making a game with Sakurai-san, the designer of Kirby among other games. How did this collaboration come about?

MIZUGUCHI: When I left SEGA, he left HAL laboratory at almost the same time. At the time, we had dinner; lunch, tea. We discussed future games on PSP, Nintendo DS, and the new mobile types of games. I really love 24. It is a real time drama. I watched that drama, and I felt the human brain changing from single task to multitask. In Japan, some young couples have dates and talk via email on mobile phones. So everyone can do the same things, many things, at the same time. You know what I mean?

TOKYOPIA: Yeah, especially in Japan, with Keitai (mobile phones) you are always connected.

MIZUGUCHI: Well some people are playing games with chat. [From] that kind of feeling, I wanted to design a new type of game. In Meteos, using the touch pen, you have to do many many things. Like Tetris or even Lumines, [they] exist as a [standard] puzzle game. I don’t know why, but they have one block falling down and then the next block falling down. But Meteos has many many blocks falling at the same time. That concept, I told it to [Sakurai-san], and he took [it] further with the touch pen. [Connecting the blocks and shooting them in the air]. The launch, kinda like a space shuttle. I heard that concept and [thought] “Wow. That is new. Simple, but new.” So we got very excited, [and said] “Let’s make the prototype.” When we were done and finished [with] the prototype, [it] was so fun.
Why do I blog this? I'm eager to see how game designers will move forward in the field of mobile gaming.
A method to collect mobile phone data
A confidential French seminar about mobile and ubiquitous technology dealt with methods to collect cell phone usage data. I was not there but my radar pointed me to the slides. Rachel Demumieux and Patrick Losquin from France Telecom presented ACIDU (Application de Collecte des Informations des Usages). They say that Orange gives them access to various data like the number of SMS/MMS/downloads... but nothing about the use of features (address book, folders, agenda, settings...). This is why they used a Symbian client running on the phone that allows them to keep track of various indicators. The presentation of the results is a bit frustrating since they did not elaborate that much on them :( Here are the quantitative results about the usage of the phone features:
Why do I blog this? I'm into mobile computing analysis; this study is worthwhile in terms of the methodology they adopted to gather information on the client (i.e. installing a piece of software that sniffs what's happening when the user interacts with his/her phone). I am a bit disappointed because I certainly miss information here (the last slides are not informative enough in terms of results description) but it seems promising!
Learning while collaborating with video games
Short article in the Wisconsin Technology Network about how video games are now a valuable tool for learning.
Firefighters need to be fully prepared, practicing hundreds of times before their first high-rise fire. City officials need to respond to a chemical spill immediately, getting the right information to the right place. Military teams need to work together, planning the best strategy to avoid casualties. These high-risk jobs now have virtual training tools, thanks to a string of developments in video games.(...) According to Constance Steinkuehler, a cognitive researcher at UW-Madison and Co-Lab member, the major benefit that video games have over traditional methods is the high level of user collaboration.
Why do I blog this? howdy! it's because the collaboration feature of video games has only recently been emphasized and I am pleased to see that it now gets some interest! Our point here is not to use games to teach stuff to students. It's rather to use games to understand collaboration and the socio-cognitive processes that occur during such tasks.
Presence and Interaction in Mixed-Reality Environments
Presence and Interaction in Mixed-Reality Environments is a call for projects in the 6th European Framework Programme (2002-2006).
The objective of the initiative is to create novel systems that match human cognitive and affective capacities and re-create the different experiences of presence and interaction in mixed reality environments. Research should focus on the following:
- Understanding different forms of presence, encompassing aspects of perception, cognition, interaction, emotions and affect. Techniques for measuring presence need to be developed taking into account insights from physio- neuro- cognitive and social sciences. The ethical aspects and the investigation of possible long-term consequences of using presence technologies need to be investigated.
- Designing and developing essential building blocks that capture the salient aspects of presence and interaction based on the understanding of human presence. These blocks should exploit relevant cutting edge software and hardware technologies (e.g. real time display and high fidelity rendering, 3D representation and compression, real-time tracking and capture, light control, haptic interfaces, 3D audio, wearable and sensor technology, biosensors and biosignals, etc.).
- Developing novel systems, able to generate or support different levels and types of presence and interaction in a multitude of situations. The research focus should be on open system architectures for integrating the above building blocks, with open APIs and source authoring tools for programming presence and for designing novel interaction paradigms.
The website is full of interesting material, like presentations of the different European teams.
Link between Mauro's project and mine?
A potential project? We discussed this, Mauro, Pierre and I:
Collaborative interactions by map annotation: investigating applications where people use locative media as a tool in which textual or graphical communication is embedded into a map. In our current work, these space annotations can be synchronous (Nicolas' work) or asynchronous (Mauro's work). It can be specific to the locative media or integrated into a standard environment (as a blog in Patrick's work). This raises many interesting issues: for instance, whether users prefer "automatic location" (positioning by the system) or self-location (I declare where I am - and hence I can lie, or I can also say where I'll be); whether they write pieces where they are or where the object they refer to is; whether they use graphics more than text, etc.
Meeting with phd supervisor
The new CatchBob scenario was well received by Pierre. He always complained about the simplicity of the task; now he is more convinced. His point is that Bob should be something credible (fire? objects? birds?...) and we have to think about what this task will prove. We can make the task more complex with:
- object: static - mobile
- number of Bobs: 1-5
- let them play on different levels of EPFL
- jigsaw? each player has different information?
- they have to keep a certain distance between them?
Anyway, Pierre was pleased to see that in our experiments, players without the tool self-declare their position. It would be great if we found positive/better results in the condition without the location awareness tool. That led us to think about a new independent variable: the accuracy of the synchronous location awareness tool. An incredible result would be to find that group performance is not affected by the positioning accuracy. I think I'll tackle this issue in the near future. People here at EPFL would be really interested in this kind of result!
Other points:
- Keep on doing some experiments...so that we can have 5 groups with the tool and 5 without.
- Use multi-level analysis? because it is hard (as I stated yesterday) to define what is at the individual level and what is at the group level.
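Since the individual-versus-group question keeps coming up, one common way to probe it is the intraclass correlation ICC(1): if most of the variance in a measure sits between groups rather than within them, it behaves like a group-level variable. A minimal sketch, with entirely made-up scores (the formula is the standard one-way ANOVA estimator; nothing here comes from the actual CatchBob data):

```python
def icc1(groups):
    """ICC(1) from a one-way ANOVA: groups is a list of lists of
    individual scores, one inner list per group."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total number of individuals
    grand = sum(sum(g) for g in groups) / n
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    n_avg = n / k  # average group size (balanced design assumed)
    return (ms_between - ms_within) / (ms_between + (n_avg - 1) * ms_within)

# Fictional workload-like scores for 4 groups of 3 players
scores = [[55, 60, 58], [40, 42, 45], [70, 68, 72], [50, 52, 49]]
print(round(icc1(scores), 2))  # high ICC -> treat as a group-level variable
```

A high ICC would argue for analysing the measure at the group level; a near-zero one for treating it as individual.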
Collaborative browsing
(via), Jybe, a Firefox plugin meant to allow collaborative web browsing.
Jybe is a Peer to Peer plug-in that enables people to surf the web and chat together all in real time. This is not a proxy service but turns your web browser into a de-facto IM client.
Why do I blog this? I like this co-surfing idea even though I still have to think about creative and original scenarios of use. Anyway, it's another element to take into account in the interface discussion I had yesterday: in this case, the browser is the main interface and is the basis for IM. Let's test this now.
Table already used!
One of the tables was already used this morning to discuss a European project about location-based services...
Students in the process of building their tables for our cscw course
Students finally begin to build one of the tables they designed for the cscw course:
New Catch Bob scenario!
Scratch notes taken on the train between Lausanne and Geneva. As we saw earlier today, there are some problems:
- the task is too simple
- our tool is not sufficiently used compared to how it can be used for this task
- ending condition frustrating and unpredictable
- not enough collaboration/negotiation/discussion
Solutions:
- new task: treasure hunt: players have to find the largest number of objects, which are mobile (from the players' point of view; actually, we place just 3 objects for each round and they all move only when one is captured)
- scenario: 1 round = find an object; players have to form a triangle/web to capture it (roughly speaking, it's the triangle players make in the current CatchBob; a captured object appears on the screen); once it is captured, players have to find the others. Forming the triangle around the object does not necessarily mean that players see each other! They will re-discuss the strategy.
- there will be some conflicts in the first part when they have to select which Bob they want to head to.
- 30 minutes for every group
To be decided:
- performance: number of objects or shortest path on epfl or combination of both
- what is bob: fire/gas/monsters/...
Then:
- in terms of design, we already have all those objects
- the task is too simple: it's more complicated here because we create conflicts that have to be negotiated with the tool
- our tool is not sufficiently used compared to how it can be used for this task: the tool must be used
- ending condition frustrating and unpredictable: the ending condition is easier and more rewarding (the object appears on the device)
- not enough collaboration/negotiation/discussion: more conflict here = negotiation
Use physical properties as resource for interactions
Bricolage explores how the physical properties (such as conductivity, optical qualities or shape) of everyday objects available at hand can be hacked on the fly to become resources for interaction in ubicomp applications.
Bricolage explores the notion of on-the-fly expressive hacking of everyday objects at hand by Ubicomp end-users. We investigate how to take advantage of physical properties and of perceived affordances emerging from the repurposing of these objects, while building interactions in context - how to design for their hackability. Bricolage builds upon experimental artistic practices and DIY hobby activities involving physical hacking of everyday material objects.
Why do I blog this? It's because I am interested in how the environment (re-)shapes socio-cognitive interactions. In this project, designers take advantage of physical objects (spatial artifacts) to foster new social interactions.
Catchbob experiment 6
Today was the 6th experiment with CatchBob. We now have 4 groups without the location awareness tool and 2 groups with the tool. That means 6 groups, i.e. 18 participants. I still need 2 or 3 more groups with the tool to do some basic statistics. Then I will have some insights about individual behavior in the CatchBob environment. Of course I have plenty of data. The general pattern is that groups with the location awareness tool are slower. I don't know if the difference is significant. I have to specify which data could be analysed individually and which at the group level.
Individual data: NASA-TLX / number of refreshes / number of messages / number of zones searched
Group data: time / path / backtracking / overlap
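Most of the individual counts above (messages, refreshes) can be pulled out of the interaction logs with a trivial parser. A hedged sketch, assuming an invented log format of (player, event) pairs; the real CatchBob logs surely look different:

```python
from collections import Counter

# Made-up log: (player, event) pairs; not the actual CatchBob format.
LOG = [
    ("p1", "message"), ("p2", "refresh"), ("p1", "message"),
    ("p3", "message"), ("p2", "message"), ("p3", "refresh"),
]

def individual_counts(log, event):
    """Per-player counts, e.g. the number of messages each player sent."""
    return Counter(player for player, ev in log if ev == event)

def group_count(log, event):
    """Group-level total for the same event."""
    return sum(individual_counts(log, event).values())

print(individual_counts(LOG, "message"))  # per-player breakdown
print(group_count(LOG, "message"))        # whole-group total: 4
```

The same split (per-player Counter vs. summed total) applies to refreshes or zones searched.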
The general feeling after this experiment (and the others of course) is that:
- the task is too simple
- the coordination between the partners is a bit weak, especially at the end when forming the triangle that circles Bob
- the end is quite frustrating since people don't really know how it ends
- the location accuracy is sometimes bad, so it's misleading. I would just say that this can happen in real situations: when you're in the woods using a GPS, accuracy is often bad
What should we do? Well, it's a pain to change all the stuff one more time but it seems that we need to. The most important thing is the task, I think. I would like to keep a balance between the task we already have and something more complicated. The main problem here is the ending condition; the exploratory part is OK but the end sucks. Here are a few ideas to refine the task:
- in the ending condition, players should be dispersed on the field and not too close (so that the collaboration is coordinated through the interface)
- the small triangle thing is dumb (not precise, and the coordination is too weak); should we have a more complex form, like a large equilateral triangle? a straight line? (one player is close to the object and the others stand on a straight line)
- a mobile bob
- ...
- I don't want to change the task too much, because (i) there will be a CatchBob 2 anyway and (ii) it should not be too complicated. That's why I want to keep it simple, just a little bit trickier and less frustrating.
Anyway, whether we choose the mobile Bob or another formation with dispersed users on the field, it will foster more coordination within the group because they will have to reshape what they should do, communicate more, walk a bit more... I like the mobile Bob idea (the position of the virtual artifact changes once or twice) because it's a nice way to force the group to change/reshape/restart its strategy. Besides, to meet this end, they will have to communicate WITH the tool.
Cognitive science and SciFi
A French group of cognitive science researchers is setting up a workshop about the link between science fiction and their disciplines. On March 7th, there will be a seminar about « Hard neuroscience : Greg Egan, sciences cognitives et philosophie de l’esprit » (hard neuroscience: Greg Egan, cognitive science and philosophy of mind). If you happen to know French and you're in the Paris area, it seems worth attending.
IM as THE interface?
I recently tried the SmarterChild, this little bot you put in your AIM contact list.
SmarterChild is an interactive agent built by Conversagent, Inc. Interactive agents are software applications, often called "bots," that interact with users on Instant Messaging or other text messaging services. You can "chat" with an interactive agent, whether on the web, over IM, or on a wireless device, the same way you talk to any other contact.
You can chat with him about news and info (Headline News/Movie Showtimes/Sports Scores/Sports Standings/Weather Conditions/Weather Forecasts), play games, ask for dictionary definitions... It's a bit US-centric at the moment, because of the data sources they use ("Not all of our information providers supply us with international information"), but it seems promising. Why do I blog this? This tool seems promising even though the bot idea is a bit passé (MUD or MOO bots were close to it). What is new and interesting here is: (i) the interconnection with lots of databases (ii) the fact that your IM client is more and more seen as THE interface. This is because some IM clients now support asynchronous messaging (you can send messages even if your partner is disconnected), file transfer and now database queries (well, the bot thing is nothing more than a database query mixed with some regular expressions).
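To make the "database query mixed with some regular expressions" remark concrete, here is a toy sketch of that architecture. Everything below (patterns, data, answers) is invented and has nothing to do with SmarterChild's actual implementation:

```python
import re

# Stand-in for the information providers' databases (entirely made up).
FAKE_DB = {
    "weather": {"paris": "rainy", "geneva": "snowy"},
    "define": {"bot": "a program that chats with users"},
}

# Each rule maps a regular expression to the table it should query.
RULES = [
    (re.compile(r"weather in (\w+)", re.I), "weather"),
    (re.compile(r"define (\w+)", re.I), "define"),
]

def reply(message):
    """Match the message against the rules, then look up the answer."""
    for pattern, table in RULES:
        m = pattern.search(message)
        if m:
            key = m.group(1).lower()
            return FAKE_DB[table].get(key, "no data for %r" % key)
    return "Sorry, I don't understand."

print(reply("What is the weather in Geneva?"))  # -> snowy
```

That's the whole trick: the IM network delivers the text, and the "intelligence" is pattern matching plus a lookup.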
There seems to be 3 important tools nowadays:
- the browser, seen by the Google dudes as the interface that can be substituted for the operating system: to search (Google's core mission), network (Orkut), read (webpages), send messages (Gmail)...
- the webfeed aggregator (to gather news, calendars, todo lists, weather forecasts, reminders...), which is interesting because an aggregator is just an interface to deal with lists (RSS = list of stuff).
- the Instant Messenger to send/read messages and now ask for information
All these tools use the HTTP protocol and are used differently by different groups. Newsgroups and email are still here but some groups of users tend to leave them (see the Korean example in which email is only used to contact older people). It would be interesting to study how the use of those tools is mixed and for which purposes (with regard to socio-cultural aspects).
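The "RSS = list of stuff" point is easy to illustrate: an aggregator essentially reduces a feed to a plain list of items. A minimal sketch with a made-up sample feed:

```python
import xml.etree.ElementTree as ET

# Made-up RSS 2.0 sample; a real feed has many more fields per item.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Sample feed</title>
  <item><title>First post</title></item>
  <item><title>Second post</title></item>
</channel></rss>"""

def titles(feed_xml):
    """Boil a feed down to the list of item titles."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(titles(SAMPLE_FEED))  # -> ['First post', 'Second post']
```

Whatever the list represents (news, a calendar, a todo list), the aggregator's job is the same: fetch, flatten to items, display.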
Motivations to play Augmented Reality Games
T. Nilsen, S. Linton, J. Looser. Motivations for AR Gaming (.pdf). In Proceedings Fuse 04, New Zealand Game Developers Conference, Dunedin, New Zealand, 26-29 June 2004, pp 86-93.
In Augmented Reality (AR), interfaces consist of a blend of both real and virtual content. In this paper we examine existing gaming styles played in the real world or on computers. We discuss the strengths and weaknesses of these mediums within an informal model of gaming experience split into four aspects; physical, mental, social and emotional. We find that their strengths are mostly complementary, and argue that games built in AR can blend them to enhance existing game styles and open up new ones. To illustrate these ideas, we present our work on AR Worms, a re-implementation of the classic computer game Worms using Augmented Reality. We discuss how AR has enabled us to start exploring interfaces for gaming, and present informal observations of players at several demonstrations. Finally, we present some ideas for AR games in the area of strategy and role playing games.
Why do I blog this? This paper is interesting because it explains why augmented reality games are worthwhile. It is basically because of the feeling of immersion they provide: physical, social, emotional and mental. This might be useful to discuss the relevance of using games in HCI.
A nice brush
Just ran across this: I/O Brush, made by Kimiko Ryokai, Stefan Marti, Rob Figueiredo & Hiroshi Ishii at the Tangible Media Group (MIT).
I/O Brush is a new drawing tool to explore colors, textures, and movements found in everyday materials by "picking up" and drawing with them. I/O Brush looks like a regular physical paintbrush but has a small video camera with lights and touch sensors embedded inside. Outside of the drawing canvas, the brush can pick up color, texture, and movement of a brushed surface. On the canvas, artists can draw with the special "ink" they just picked up from their immediate environment.
Tables that support collaboration?
Here are the four tables our students built. They had to design a shape that would support collaboration among a group (of 4 people). We managed to produce them, and now they will have to test this setting with a specific activity. The one-hour session will be videotaped and they will have to analyse the data.
Concerning the analysis, there are 3 options:
- analytical (detailed interaction analysis like in usability studies/ experimental research)
- synthetic (Who draws where? Where are the laptops? Who talks to whom? Position & direction of the chairs?)
- critical events (salient and significant)
The point is to conclude with: pros and cons regarding your table design + suggestions for designing a CSCW table (including groupware)
Here is the kind of question they should address for the synthetic part:
- Does the table support group participation?
- How does laptop usage interact with other activities?
- How do they use the table space?
- Does the leader occupy a more central position? What’s the position of the ‘left over’, if any? Are some chairs moved away from the table? Do they exchange objects? What’s the orientation of bodies and chairs? Do they sit where you expected them to sit?
- Do they often move their laptops to re-organize their own space? Do they move their laptops or fold the screen to facilitate interaction? How often do they turn their laptop to show its screen to a same-side partner or to an other-side partner? How much time is spent looking at the laptop versus looking at each other?
- Is there a ‘dead’ zone versus an ‘interaction’ zone? What objects are present on the table, and which ones are often used? Do they share or exchange some objects? What do they draw on the table? Where do they draw? Who draws? Where do they put the other objects (documents, mugs, ...)? Is there useless space? Is there a ‘focus’ area? Are there different phases in using the table space?
Serious todo list
- CatchBob 2 scenario
- CatchBob analysis to be reshaped (individual + group)
- listen to CatchBob mp3s
- CatchBob log parser: number of messages + backchannel/overlap
- CatchBob NASA-TLX capture + analysis
- CatchBob images: qualitative analysis in HyperResearch
- CatchBob paper for UbiMob
- CatchBob paper for E-CSCW
- email for cscw course: I won't be here next week
- meeting: list of people to see at PLAN, London
- computational model? what about Repast (ask Salembier)
- improve report for abell
- meeting: find a date in March for meeting in Paris: EDF, France Telecom, Ubi Soft
Augmented Reality and children play
Cool research project about interaction design and children in Paris. Why do I blog this? I am interested in the way the guy wants to analyse the activity flow from a temporal perspective, with visualizations.