Designing relevant mobile interactions

In the last issue of ACM interactions, Lars Erik Holmquist's column is about designing mobile applications. He starts from a not-so-commonsensical take (at least for app developers):

the accepted wisdom from decades of research on interfaces for stationary computers simply does not hold for mobile devices. You will even hear HCI researchers and UI designers complaining that mobile devices are too small and "limited" to permit anything interesting. But the real difference has nothing to do with size. Instead it comes down to the fact that what we do with mobile computers and the situations in which we use them are fundamentally different from what we do with the desktop. (...) Mobile devices follow us through the day, which means that they are used in many shifting roles

Then he presents what he's doing at his lab:

The goal was to investigate mobile services that, rather than just being smaller versions of desktop applications, take advantage of the fact that they are inherently mobile.

Many of the mobile services that were created in the project were based on local interaction. For instance, MobiTip from the Interaction Lab lets you share "tips" with other users in the vicinity through a Bluetooth connection. (...) Another example of local interaction is the Future Application Lab's Push!Music. What would happen if the songs on your iPod had a mind of their own? In Push!Music, all MP3 files are "media agents" that observe the music-listening behavior of the user and other people in the vicinity. (...) The eMoto project by the Involve group extends the possibilities of mobile messaging by adding an emotional component. By shaking, squeezing, and otherwise mistreating the phone's stylus after you have written a message, you generate a colorful background pattern that expresses the emotion you want to put across.

And this actually nicely exemplifies his claim about mobile design:

Those who still worry about the "limited" interaction possibilities of mobile devices should note that all the applications mentioned above could be used on a standard mobile phone today (with small modifications). Yet at the same time they drastically expand the interaction parameters of mobile devices by taking advantage of local interaction, observations of the user's behavior, physical input, and so on.

Why do I blog this? I like this emphasis on taking advantage of external elements in the interaction (spatial proximity, tangible inputs...) rather than relying on a limited input/output device alone.

IHT on location-based marketing

Yesterday in the IHT, there was an interesting article about mobile phone/billboard interactions.

The premise, according to JCDecaux, the outdoor-advertising company behind the project, is that consumers consent to receive alerts about digital advertising as they move through the city. "We are switching from a one-time active response to the user's blanket acceptance of many digital messages," he said. "We will, of course, need to be careful in making certain that users get only advertisements that interest them." When participating users are near an active advertisement - it could be part of a billboard or a bus shelter poster - their phones will automatically receive a notice that a digital file can be downloaded. The information could range from a ring tone or short video to a discount voucher. "With this project, we are really starting to create the personalized digital city," Asseraf said. "We eventually will see a rich dialogue running between mobile phones and what are now uncommunicative objects." (...) A cautious and permission-based approach is vital when using technologies that touch consumers so directly.

The permission feature is indeed the crux of the issue.

What's behind a "personalized digital city"? What are the consequences? Having people immersed in different levels of information? What about spam? And what are the assumptions? That we already have a different perception of space and place, of territoriality and the cues that make us think a place is different? Or is it just a way to better reach potential customers?

They seem to care about that:

The potential shortcomings would be apparent in any large public space that might have many digitally enabled posters close to one another. "You can imagine a nightmare scenario where someone's mobile phone fills up with half a dozen advertising messages each day as they walk across Waterloo Station," Edwards said. "The most powerful way to use this technology will be offering people something of value that they really want."

The article also addresses two applications:

they also were developing airport signs, called UbiBoards, that will show information in the language spoken by a majority of the people nearby. "If mobile phones near a sign say that the majority of people are Chinese, the sign will show information in Chinese," Banâtre said, adding that such a system would require registrations much like the ad system. "Those who do not speak Chinese will receive the same information in their phone via SMS message in their own language." Another application, called UbiQ, is being developed to allow people in a location like a bank, cinema or fast-food restaurant to give information by cellphone about what they want before getting to the front of the line. "Think about it and you realize how much time is spent giving the same start-up information for a transaction," Banâtre said, citing the time it takes for a teller to enter banking details. "The intention with UbiQ is to speed up the exchange of information through mobile phones."
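As a side note, the majority-language selection UbiBoards describes could be sketched like this (purely my illustration; the function name, language codes, and the SMS fallback logic are all my assumptions, not documented details of the system):

```python
from collections import Counter

def ubiboard_display(nearby_phones):
    """Pick the board's display language from the preferred languages
    reported by registered phones nearby, and list the users who should
    instead get the same information by SMS in their own language.
    (Hypothetical sketch -- the article does not describe the algorithm.)"""
    if not nearby_phones:
        return None, []
    counts = Counter(lang for _, lang in nearby_phones)
    board_lang, _ = counts.most_common(1)[0]
    sms_users = [user for user, lang in nearby_phones if lang != board_lang]
    return board_lang, sms_users

# Three Chinese speakers and one French speaker near the sign:
# the board switches to Chinese, the French speaker gets an SMS.
phones = [("u1", "zh"), ("u2", "zh"), ("u3", "zh"), ("u4", "fr")]
lang, sms = ubiboard_display(phones)
```

The interesting design question this makes visible is the tie-break: with an even split of languages, `most_common` picks arbitrarily, which is exactly the kind of edge case a real deployment would have to decide on.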

Why do I blog this? After a few years of emergence in the LBS world, location-based marketing seems to be one of the most developed applications (after navigation tools), but there is still no consensus about best practices or about what makes a positive user experience: the added value is often balanced by the risk of information overload (physical spam). This does not mean that location-based marketing is not useful, but it's tough to invent something really valuable.

Stuffed-doll that reads emails

Regine pointed me to Ubi.ach, by Min Lee, Gilad Lotan, and Chunxi Jiang. Close to the Nabaztag, it's a "ubiquitous, personalizable stuffed doll that is able to read out your emails wirelessly and transmit voice messages," as the designers put it.

In search of using calm technology in our project, we have come up with a friendly-looking stuffed-rabbit that speaks out your gmail, according to your preset preferences on the web. This way, you do not have to solely rely on your personal computer to retrieve your emails. The user has the freedom to preset the importance of his emails, and categorize them as well as be alerted when a new email is received. They can also have personal messages recorded, allowing for the voice to be transmitted. Essentially, we have chosen to use RF (Radio Frequency) as a method to transmit and receive data between the doll and the internet, and a set of walkie talkies to output the emails using Text-to-speech technology, while also allowing for the use of personal speech. Radio Frequency can travel up to 125ft and the walkie talkies transmit and receive up to a distance of 5 miles.

When an email is sent to ubiach@gmail.com with the word "alert" in the subject, the bunny will read the subject of that email aloud. A user can also record personal messages for the bunny to speak.

The ubi.ach is a hacked mechanical rabbit that dances around. Inside, there is a board with a microcontroller, a radio-frequency module, LEDs, and switches. There is also a walkie-talkie that speaks out the emails. On the computer side is the receiver, a toy attached to the computer with a similar board inside.
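The "alert"-in-the-subject trigger described above is a nice, minimal protocol. A toy version of the decision logic might look like this (my own sketch; the function name and the exact spoken phrasing are invented, and the real project routes the audio through text-to-speech over a walkie-talkie rather than returning a string):

```python
def bunny_response(subject, sender):
    """Decide what the ubi.ach bunny should say for an incoming email.
    Following the behavior described above: if the word 'alert' appears
    in the subject, the bunny reads the subject aloud; otherwise it
    stays silent. (Hypothetical reconstruction of the trigger logic.)"""
    if "alert" in subject.lower():
        # Strip the trigger word so only the actual message is spoken.
        spoken = subject.lower().replace("alert", "").strip()
        return f"New message from {sender}: {spoken}"
    return None  # no trigger word: the bunny says nothing

print(bunny_response("alert dinner at 8", "min"))
```

In the actual device this string would be fed to a text-to-speech engine and transmitted over RF to the doll; the sketch only captures the filtering step.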

The project is better described here; the video is funny to watch.

Why do I blog this? It's a very simple object (one feature = reading email), but it's interesting to see more and more design work around this issue of embedding interactions in a tangible device. The next step is to use this device as an input interaction device as well, a dimension which is somehow lacking even in the Nabaztag.

From Artificial Intelligence to Cognitive Computing

There is now a language shift from the previously so-called "Artificial Intelligence" to "Cognitive Computing," as attested by the news in Red Herring (an interview with Dharmendra Modha, chair of the Almaden Institute at IBM's San Jose lab and IBM's leader for cognitive computing).

Q: Why use the term “cognitive computing” rather than the better-known “artificial intelligence”?

A: The rough idea is to use the brain as a metaphor for the computer. The mind is a collection of cognitive processes—perception, language, memory, and eventually intelligence and consciousness. The mind arises from the brain. The brain is a machine—it’s biological hardware.

Cognitive computing is less about engineering the mind than it is the reverse engineering of the brain. We’d like to get close to the algorithm that the human brain [itself has]. If a program is not biologically feasible, it’s not consistent with the brain.

The emphasis is then less on the "artificial" and more on the information-processing mechanisms (the cognitive) that should be re-designed through reverse engineering. What is also very intriguing is this:

Q: Can even the simplest artificial “mind” have practical applications?

A: That’s my goal, to take the simplest form and put it into a system so a customer can use it. We hope to appeal to what business can do with it.

OK, it's IBM, it's a company research lab, and even though things are still at a very high level, there is this mention of "the customer can use it," which is very curious in terms of what exactly (of course I have ideas about it, but it's not explicated in this interview) and with regard to the "consuming process" (let's consume this cognitive computing device).

Why do I blog this? It's interesting to see a language shift in the domain of technology; it's always meaningful.

JPod by Douglas Coupland

I'm looking forward to reading JPod by Douglas Coupland.

From Publishers Weekly: Coupland returns, knowingly, to mine the dot-com territory of Microserfs (1996)—this time for slapstick. Young Ethan Jarlewski works long hours as a video-game developer in Vancouver, surfing the Internet for gore sites and having random conversations with co-workers on JPod, the cubicle hive where he works, where everyone's last name begins with J. Before Ethan can please the bosses and the marketing department (they want a turtle, based on a reality TV host, inserted into the game Ethan's been working on for months) or win the heart of co-worker Kaitlin, Ethan must help his mom bury a biker she's electrocuted in the family basement which houses her marijuana farm; give his dad, an actor desperately longing for a speaking part, yet another pep talk; feed the 20 illegal Chinese immigrants his brother has temporarily stored in Ethan's apartment; and pass downtime by trying to find a wrong digit in the first 100,000 places (printed on pages 383–406) of pi. Coupland's cultural name-dropping is predictable (Ikea, the Drudge Report, etc.), as is the device of bringing in a fictional Douglas Coupland to save Ethan's day more than once. But like an ace computer coder loaded up on junk food at 4 a.m., Coupland derives his satirical, spirited humor's energy from the silly, strung-together plot and thin characters. Call it Microserfs 2.0.

Why do I blog this? Because I like Douglas Coupland's novels and that one seems to be quite curious. I expect it to epitomize the beginning of the 21st century's habits/trends in terms of work/cultural practices.

Metaverse Roadmap: pathways to the 3D web

The Metaverse Roadmap is a ten-year forecast and visioning survey of 3D Web technologies, applications, markets, and potential social impacts.

What happens when video games meet Web 2.0? When virtual worlds meet geospatial maps of the planet? When simulations get real and life and business go virtual? When your avatar becomes your blog, your desktop, and your online agent? What happens is the metaverse. (...) Areas of exploration include the convergence of Web applications with networked computer games and virtual worlds, the use of 3D creation and animation tools in virtual environments, digital mapping, artificial life, and the underlying trends in hardware, software, connectivity, business innovation and social adoption that will drive the transformation of the World Wide Web in the coming decade.

The MVR is organized by the Acceleration Studies Foundation, a nonprofit research group, and supported by a growing team of industry and institutional partners, all pioneers in this important space.

So check out:

Creation of the Roadmap begins with an invitational Metaverse Roadmap Summit May 5-6 2006 at SRI International where a diverse group of industry leaders, technologists, analysts, and creatives will outline key visions, scenarios, forecasts, plans, opportunities, uncertainties, and challenges ahead.

The first steps of the 2016 roadmap are presented here.

Why do I blog this? This is helpful for my foresight research about video-games.

A place like a Muscle

I am really enjoying this Muscle NSA project carried out at the Hyperbody Research Group at Delft University. This is a programmable building that can reconfigure itself.

For the exhibition Non-Standard Architecture ONL and HRG realized a working prototype of the Trans-ports project, called the MUSCLE. (...) Programmable buildings change shape by contracting and relaxing industrial muscles. The MUSCLE programmable building is a pressurized soft volume wrapped in a mesh of tensile muscles, which change length, height and width by varying the pressure pumped into the muscle.

What is interesting is the interaction they designed engaging people in a playful activity:

Visitors of the Architectures Non Standard exhibition play a collective game to explore the different states of the MUSCLE.

The public interacts with the MUSCLE by entering the interactivated sensorial space surrounding the prototype. This invisible component of the installation is implemented as a sensor field created by a collection of sensors. The sensors create a set of distinct shapes in space that, although invisible to the human eye, can be monitored and can yield information to the building body. The body senses the activities of the people and interacts with the players in a multimodal way. The public discovers within minutes how the MUSCLE behaves on their actions, and soon after they start finding a goal in the play. The outcome of this interaction however is unpredictable, since the MUSCLE is programmed to have a will of its own. It is pro-active rather than responsive and obedient. The programmable body is played by its users.

There is also a slight connection with the blogject concept:

For the behavioral system this means that the produced sensorial data is analyzed in real-time and acts as the parameters for pre-programmed algorithms and user-driven interferences in the defined scripts. These author-defined behavioral operations are instantly computed, resulting in a diversity of e-motive behaviors that are experienced as changes in the physical shape of the active structure and the generation of an active immersive soundscape. The MUSCLE really is an interactive input-output device, a playstation augmenting itself through time.
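The control loop sketched in that passage, where sensed activity parameterizes pre-programmed behaviors while an autonomous component gives the structure "a will of its own", can be caricatured in a few lines. This is strictly my illustration: the function, units, and constants are invented, and the real MUSCLE controller is not documented at this level here.

```python
import math

def muscle_pressure(sensor_activity, t, base=1.0, gain=0.5):
    """Toy behavioral loop: compute a pressure setpoint (in made-up
    'bar' units) for one pneumatic muscle. sensor_activity is a 0..1
    reading from the sensor field; t is time in seconds. The sine term
    stands in for the pro-active, autonomous behavior; the gain term
    for the user-driven response. (Hypothetical sketch only.)"""
    autonomous = 0.3 * math.sin(t)       # the structure moves on its own
    reactive = gain * sensor_activity    # visitors' activity pushes it further
    p = base + reactive + autonomous
    return max(0.5, min(2.0, p))         # clamp to keep the volume pressurized

# With nobody around, only the autonomous oscillation drives the shape;
# a visitor approaching (activity rising) raises the setpoint on top of it.
print(muscle_pressure(0.0, 0.0))
print(muscle_pressure(1.0, 0.0))
```

The point of the caricature is the mix: because the autonomous term never goes away, the mapping from visitor action to building response is never fully predictable, which matches the "played by its users" framing above.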

Why do I blog this? What I like in this project is that it mixes different aspects of the HCI world: games, game software, architecture, the use of sensors. In the end, the outcome is pretty original and the visitors' experience seems intriguing. I also like how it modifies the visitors' relationship to a dynamic place.

Special issue of Psychnology about Mobile Media

The Psychnology journal (an online research journal) is going to have a special issue on Mobile media and communication – reconfiguring human experience and social practices? (edited by Ilkka Arminen):

Mobile media have already become an essential aspect of everyday life. They alter existing communication patterns, enable new kinds of contacts between people, and yet remain embedded in prevailing social relations and practices. Mobile communication has been said to have created "timeless time" and freedom from place. This new social and communicative development has been characterized as revolutionary. Still, the usages of mobile technologies are solidly anchored in local circumstances and prevailing forms of life. Also, not all mobile technologies have proven successful. The adoption of mobile media has been in many respects much slower than anticipated. Is there a contradiction between the revolutionary technological potential of mobile media and embodied, habitual human experiences? This special issue addresses the potentially tense relationship between the development of mobile technologies and mundane experience.

Possible topics include:

Reinvention of mobile media.

Limits of mobile technologies.

Mobile technologies and local realities.

Mobile technologies and new forms of social interaction.

Mobile technologies and social networks.

Submissions of any length, discipline, and format are accepted, provided they are scientifically relevant and accurate. They should be sent in electronic form to both articles(at)psychnology.org and Ilkka.Arminen(at)uta.fi no later than October 30, 2006. Inclusion of color pictures, videos, and sound files is welcome.

Why do I blog this? again this is indirectly connected to my research about how new technologies reshape social/cultural/cognitive practices.

Workshop about space/place

In the context of the Participatory Design Conference, there is a workshop about place, space, and design (.pdf).

While we are "Expanding Boundaries in Design", perhaps we should think for a moment on the significance of boundaries, which are essentially the separation of "this place" from everything "not this place". And what constitutes "this place"?

The intent of this workshop is to bring together researchers and practitioners who have studied place and space and are engaged in exploring the ways in which place and space affect design and the use of technology and the ways in which technology changes the places where it is used.

The day of the workshop will be divided between exercises and discussions. It will begin with a brief round of introductions, followed by an exercise on location. This is intended to explore differences in awareness of location and the differential meanings carried by the respective terminologies of place and space. The next segment will be the presentation and discussion of participants' reports on their own studies of place and space, either sent in advance or brought to the workshop. The morning will conclude with a game on place, space, and design.

Why do I blog this? This is related to my PhD research, especially the relationship between space/place and socio-cognitive interactions with regards to pervasive computing applications.

Video Games Event in Milano

Today was games@IULM in Milano (I could not be there...), an event co-organized by some good people I follow:

The Humanities Lab at IULM University in Milan, Italy, is organizing a digital games conference and exhibition for May 3rd 2006. The event brings together game researchers from Italy, the United States (Stanford University), and Europe (the Computer Games Research Center in Copenhagen, Denmark).

Stanford's Jeffrey Schnapp, Henry Lowood and Fred Turner will take part in the event in mediated form. Their contributions will be delivered via video interviews recorded by SHL visiting scholar and game researcher Matteo Bittanti.

Jeffrey Schnapp examines the role of humanities in the digital age; Henry Lowood discusses the status quo of game studies and game culture, while Fred Turner comments on the politics and ideology of digital games. The video interviews will be freely available for viewing and downloading on the Games@IULM official website from May 3rd 2006

The program is available here.

Why do I blog this? this event seems to propose interesting and refreshing perspectives in the domain of video-games research.

Meeting at the IFTF

I had lunch today with my friend Alex Pang at the Institute for the Future in Palo Alto. The discussion was around the Internet of Things, spimes, and blogjects. Starting from Bruce Sterling's Shaping Things, we were thinking about the fact that, as Sterling says, there is no smartness in the objects; the smartness rather resides in the way those objects and networks help us make better choices, especially with regard to specific actions or meeting people. Wired and connected objects may indeed help us choose which tools consume less energy, and sharing certain types of trackable objects with others is also of interest (and is actually a topic discussed in one of the stories Bruce Sterling wrote in "Visionary in Residence"): a kind of community hammer or drill, for instance.

Alex and I also discussed some potential ideas about the blogject series of workshops I am organizing along with Julian. Additionally, Jason Tester updated me on their pervasive gaming project, which is a very relevant synthesis about context-aware games. This project interestingly started by looking at the history of video games from the POV of users, and then continued as an overview of the directions pervasive games are taking.

Alex finally encouraged me to go deeper in the Science Technology and Society world, which is quite a good idea.

Palo Alto in 2006

Two interesting signs. On the left, a company lists its own subsidiaries, which is often done by luxury companies (like Louis Vuitton listing the glamorous places where it has a presence: Paris, Tokyo, Cannes...) and streetwear companies (LA, Tokyo...), and now by tech companies, which not only list their Silicon Valley addresses but also subsidiaries in India. On the right, it's just company nameplates printed in a rush on A4 paper. Web 2.0 frenzy?

Why do I blog this? Just a few thoughts while walking in downtown Palo Alto this morning.

Yesterday's meeting

Yesterday was quite an active day in the Bay Area, with a series of meetings at PARC and a dinner with friends in SF. Located in Palo Alto, PARC, a subsidiary of Xerox Corporation, conducts pioneering interdisciplinary research in the physical, computational, and social sciences.

The day started with a meeting with Elizabeth Churchill and Les Nelson, to whom I presented my PhD research and from whom I got some more insight into what they're up to. Some comments from Elizabeth:

It seems that the automatic location-awareness tool in CatchBob could be problematic because there is no context around it. Since the users in this experimental condition did not exchange many messages (mostly only about their proximity to the object via signal-strength indications), there is a lack of a social context that could help them interpret the locational data. By contrast, the players in the condition without the location-awareness tool better discussed their strategy, and thus had a context to help them make inferences about the others' locations (which they indicated through map annotations).

She pointed to a kind of overtrust in technology (the location-awareness tool): since the information is available, people pay attention to it.

An additional remark concerns the fact that coordination information like this location-awareness is more than coordination: it establishes a common ground of the situation by wrapping this information into a strategic context.

She encouraged me to ask players some questions about the way they experienced collaboration (did you feel like you were wandering alone, or like you were part of a team?) so that I can evaluate the level of involvement in a social context.

Les asked me whether there could be a kind of optimal-strategy index that would be helpful to measure spatial behavior.

I then had a meeting with Nicolas Ducheneaut, who explained to me how he ended up there doing research about multi-user applications (and how PARC works in terms of project management). One of Nicolas' projects is the super neat work about World of Warcraft called "Play On". He showed me some of the ongoing things they are doing, mainly the "social dashboard" they patented. They isolated the factors in WoW guild management (such as guild size...) that matter for players to keep enjoying the game, and then developed services and tools helpful for that purpose (for game community managers!): for instance, seeing the evolution of certain parameters, the fact that some high-value players left a guild, the disaggregation of guilds, the isolation of "rotten" classes... I told him that I would be very interested in seeing this also fed back to the players (and not only the guild manager), as in our virtual mirror project at the lab (giving the group an image of itself to modify the way it collaborates). They're basically focusing on improving the social aspects of the game (so that players keep playing!) through certain kinds of services. Of course, this is of interest to game editors: even though content and gameplay remain tremendously important, there should also be an emphasis on those social aspects, and the tools they develop are helpful for that.

He also worked on a relevant project about "social TV" that might be interesting for private research projects.

Regarding my PhD research, he made some insightful comments and connections with others' work:

It made him think of Aoki, P. M. & Woodruff, A. (2005). Making space for stories: ambiguity in the design of personal communication systems. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2005), April 2-7, Portland, OR. New York: ACM Press, 181-190.

He then pointed me to another paper: Dabbish, L. & Kraut, R. (2004). Controlling interruptions: awareness displays and social motivation for coordination. In Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW'04), New York: ACM Press, 182-191.

I'll explore these papers in the near future.

For him, information about others' whereabouts in space can reduce the richness of the mental map people build of space, by focusing them on a certain kind of information (conveyed by the location-awareness tool) to the exclusion of others (cf. Kevin Lynch, The Image of the City, p. 45): players who had the "follower" role maybe had a poor representation, whereas the "explorers" had a richer image of space. I am wondering whether I can capture this sort of thing with the data I have (given that participants knew the campus). Maybe some studies about ethology or animal behavior in space could be valuable for that matter.

The final round at PARC was with Victoria Bellotti, to whom I also described my PhD research. Here are some comments she made:

Were the explorers more successful in terms of performance? Did they make more spatial-modeling mistakes?

She was concerned by the location accuracy plus the lag, and thought that 15 meters would be a problem for this sort of task (not to mention the variability of this accuracy in different places due to the distribution of hotspots). In her opinion, people were perhaps relying too much on the location-awareness tool: if the accuracy is 15 meters and there is a 3-second lag, users might be misled. I would answer that for our task, and given the EPFL campus, the 15-meter accuracy is not that much of a problem, since it's approximately the size of rooms, and it discriminated different zones with different boundaries (and no line of sight).

She thinks that, indeed, more communication can lead to more grounding of the situation, which would be why the richness of communication in the no-awareness-tool condition had a positive effect on the mutual modeling index.

Also, I have to be careful when referring to the condition "without AT": it's not really without awareness but without an explicit awareness tool, since players can still dialogue, and dialogue is not an awareness tool strictly speaking.

Of course, she pointed out the dangers of this sort of field-experiment study, arguing that results are bound to the system and the context I tested (which I am definitely aware of). Results are thus bound to the system configuration: location accuracy, area size, number of users... The nature of the task is also important: it arbitrarily puts demands and constraints on the context. She's then wondering how far these results could be generalized.

I am deeply aware of all these comments (right from the beginning of the project actually, when choosing a more quantitative methodology mixed with qualitative data). And my point is simple: my study is here to counterbalance the frenzy and overemphasis around location-awareness technologies. IMO it's here to temper the engineering madness around those applications that are oh-so-neat, like the intelligent fridge (!).

Thanks all for these inputs; they're invaluable for the evolution of the PhD project.

I finally had lunch in SF with my Finnish friends Jyri Engeström and Ulla-Maaria Mutanen, who nicely introduced me to Elizabeth Goodman and Mike Kuniavsky. The discussion there ranged over various topics, but mainly bottom-up innovation and independent structures à la Squid Labs and others. Europe is especially in need of this kind of place/structure, with crazy folks doing projects. Ulla-Maaria was referring to crafting, but to me, even people doing user experience or more abstract research matter too. These structures (or non-structures) act as the research and development of tomorrow's services, products, and memes.

Liz also talked about how the relationship between engineers and interaction designers/user experience specialists should be more a conversation about users' context than just getting a set of requirements.

I was interested by Mike's perspective on the user experience of pervasive computing, which is what he is going to address in his next book.

wi5d search engine

wi5d (Wireless 5th Dimensional Networking) seems to be an intriguing company. It was created in 2005 and is focused on the development of a context-aware approach to surfing the Web, as they say.

By challenging the myth that the web frees the user from space and time considerations, we hold that the most valuable search engine will not aim to organize cyberspace, but rather will aim to better connect individuals with the potential energy of their own spatial/temporal context.

Their system is called MapNexxus (they have this weird habit of putting their text up as image files):

As it's explained on the website, the company was purchased by an "anonymous buyer" during the beta development phase...

Why do I blog this? Since I am interested in spatial technologies, I am wondering how users would employ this sort of search engine, and how they would relate to location-based information.

Future of the Internet

Last month, there was a futuristic piece about the Internet in Red Herring, which had interesting points with regard to the relationships between virtual worlds/objects and their physicality.

the barriers between our bodies and the Internet will blur as will those between the real world and virtual reality.

Automakers, for instance, might conceivably post their parts catalogs in the virtual world of Second Life, a pixilated 3D online blend of MySpace, eBay, and renaissance fair crossed with a Star Trek convention. Second Life participants—who own the rights to whatever intellectual property they create online—will make money both by using the catalog to design their own cars in cyberspace and by selling their online designs back to the manufacturers, says Danish economist and tech entrepreneur Nikolaj Nyholm. (...) “Devices will no longer be spokes on the Internet—they will be the nodes themselves,” says Ray Kurzweil.

I am wondering how this would work with networked seams and perplexed users facing the non-interoperability of networks. How would this prediction work: "People will be able to talk to the Internet when searching for information or interacting with various devices—and it will respond"? As a user experience researcher, I wonder whether everybody has in mind how people currently use the Internet and how one looks for information with a search engine. I know this is long-term research, but there is a huge gap between this vision and how people use current networks. Of course today's kids will be able to handle it, but what about the aging population?

Machine-to-machine communication is also expected to increase:

As so-called sensor networks evolve, there will be vastly more machines than people online. As it is, there are almost 10 billion embedded micro-controllers shipped every year. “This is the next networking frontier—following inexorably down from desktops, laptops, and palmtops, including cell phones,” says Bob Metcalfe, the inventor of Ethernet and founder of 3Com. This is what will make up much of the machine-to-machine traffic, he says.

The article also addresses other concerns like telco competition, the Internet infrastructure and, above all, innovation in emerging technologies.

Discussion with taxi driver in Irvine

Me: I want to go to UC Irvine (we were at the Amtrak Station)

Taxi driver: mmmh, do you have the address?

Me: mmmh no

Taxi driver: I cannot go there, I am new here and I need an address to put in my GPS

Me: I don't have the address, but I have the directions written on this piece of paper

Taxi driver: mmmh but it's not on my GPS, I cannot go there

(we finally got there, he called a friend on the phone...)

Research meeting with Paul Dourish

Had a good lunch meeting today with Paul Dourish at UC Irvine, chatting about my PhD research and on-going projects here and there. It seems that he's back to writing about space and place, which is very relevant to what I do. Some raw notes from what he said about my research:

  • he acknowledged my concerns about articulating both quantitative and qualitative data (but he seemed very interested in my qualitative analyses)
  • do we really need to model others' positions (in the game + ...)? That's actually something we discussed with regard to dispositional versus situational Mutual Modeling.
  • He's interested in the following question: to what extent does the technology provide a medium for people to develop a meaning of others' actions? In my context, this relates to how people interpret others' paths.
  • Would it be possible to dig into my qualitative data to go deeper into Mutual Modeling of location, MM towards the goal and MM of strategy? For him, there should be more emphasis on qualitative data, like ethnographic analyses, to understand how people discuss, understand and use others' paths/locations.
  • the visualization thing made him think about Chalmers' students' work (maybe I should articulate this better with what I want to do with the viz?)
  • He asked why I chose path distance as a performance index: it's meant to foster more strategy discussion among players (and to prevent them from running and possibly breaking the Tablet PCs)

He also encouraged me to submit a poster to UbiComp; I may write something about the asynchronous awareness tool, let's see.


Visualize the invisible (dataflowviz)

Just found this on information aesthetics: Free Network Visible Network, a project by the Mixed Reality Lab.

Free Network Visible Network is a project that combines different tools and processes to visualize, floating in space, the information interchanged between users of a network. People are able to experience in a new and exciting way how colorful virtual objects, representing digital data, fly around. These virtual objects change their shape, size and color in relation to the different characteristics of the information circulating in the network.

Why do I blog this? This is something very important to me: the possibility of visualizing dataflows, showing the overlay of information in various environments. It would nicely depict what we were discussing yesterday at the conference: how a certain place now has different meanings. Given that in one place you can be there physically while also virtually meeting people on IM, in a MMORPG or elsewhere, the inherent simultaneity of this situation can be visualized through this sort of project.
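As a toy illustration of the general idea (this is my own hypothetical mapping, not the Mixed Reality Lab's actual system), one could translate characteristics of captured network traffic into visual attributes, letting protocol pick the shape, payload size the scale, and direction the color:

```python
# Toy sketch: map network-traffic characteristics to visual attributes,
# in the spirit of Free Network Visible Network (hypothetical mapping,
# not the project's actual code).

def packet_to_glyph(packet):
    """Turn a captured packet (dict) into a virtual-object description."""
    # Protocol determines the shape of the floating object (assumed mapping)
    shapes = {"http": "cube", "smtp": "sphere", "p2p": "cloud"}
    shape = shapes.get(packet["protocol"], "point")
    # Payload size drives the object's scale, capped at one MTU (1500 bytes)
    size = min(1.0, packet["bytes"] / 1500)
    # Traffic direction picks the color family
    color = "warm" if packet["outgoing"] else "cool"
    return {"shape": shape, "size": round(size, 2), "color": color}

if __name__ == "__main__":
    traffic = [
        {"protocol": "http", "bytes": 1200, "outgoing": True},
        {"protocol": "smtp", "bytes": 300, "outgoing": False},
    ]
    for pkt in traffic:
        print(packet_to_glyph(pkt))
```

A real version would of course sniff live packets and feed a 3D renderer; the point is only that a small attribute-mapping layer sits between the network and the visualization.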

So let's start a review of this kind of project:

Related projects:

Any other dataflowviz projects?

3D Level design history

There is a good series of columns on Gamasutra lately about level design, by Sam Shahrani, focused on FPS and 3D level design. What is good is that it gives a comprehensive overview of the different techniques used so far. Some very relevant excerpts about how level designers take advantage of constraints to create spatial affordances that support the game scenario and gameplay:

Level designers, or map designers, are the individuals responsible for constructing the game spaces in which the player competes. (...) The level design for Battlezone was relatively straightforward, in as much as it consisted of creating a game space (the “large valley surrounded by mountains”) in which the player could drive around and destroy targets for points. Essentially, the level design was that of a digital Roman arena, wherein the player could do battle, and it was a design that worked well for the limitations of the graphics engine, and provided enjoyable and novel gameplay for the arcade and home computer markets. (...) Not all attempts at 3D games involved the use of polygon-based 3D environments like those used in Battlezone; several games attempted to leverage other technology to provide an impression of a three-dimensional world. Notable efforts include Lucasfilm Games' (now LucasArts) 1986 title Rescue on Fractalus!, a first-person title that used fractal generation technology to render the game world. (...) [Then in 3D FPS like Wolfenstein 3D] The emphasis on speed, however, again led to limitations on how detailed the world was. Interactivity in Wolf3D was relatively limited, with the player having only two ways to interact with the world; shooting things to kill them and opening doors by pressing the spacebar, a universal “use” key. Wolf3D upped the ante, though, by adding in “push walls”. These walls appeared like any of the normal solid walls in the game, but if a user hit the spacebar in front of them, the wall would slowly slide back, revealing a hidden room (Kushner, 108). Hidden rooms and secret levels would play a major part in future id games, and First-Person Shooters in general. The push walls were another innovation by Tom Hall, who served as the director of Wolfenstein 3D (Kushner, 108-112), and served to reward the player for thoroughly exploring the game world.
It was an interesting gameplay mechanic, and one that grew out of a tradition in the video game industry for including secrets, or “Easter eggs” for players to find (Kent 188-189). While many would consider these “Easter eggs” to be afterthoughts, they present an important opportunity for level designers to maximize player investment and interest in the game world. (...) Doom fundamentally altered the First-Person Shooter genre (...) The Doom engine supported a number of new features that finally made realistic and interactive environments possible. Instead of merely featuring doors that could be opened, Doom featured the ability to alter the game world by using in-game switches and “triggers” to activate events. These events could range from a set of stairs rising out of the ground to unsealing a room full of ravenous near-invisible monsters to bridges emerging out of toxic slime. Additionally, Doom added in lifts, which could raise players to different levels inside the game world or, if used slightly differently, could act as pistons and crush players against a ceiling. Further, the Doom engine’s support of variable height floors and ceilings also meant that in addition to being able to move on all three axes, more complex architecture could also be created. Tables, altars, platforms, low hallways, ascending and descending stairs, spacious caverns and other objects could all be created using geometry. The ability to trigger events that could release monsters or alter geometry led level designers to create a number of surprisingly complex traps for players to uncover as they played through the game, from rapidly rising floors to bridges that would sink into toxic sludge if players moved too slowly. (...) In addition to architectural advances, Doom also added the ability to alter the light levels in a level. (...) The level designs for Doom were accomplished using much more advanced tools than previous id titles. 
Romero wrote an engine-specific level editing program called DoomEd (...) Doom also illustrates that levels do not have to be based on easily recognizable locations in order for players to enjoy them, nor do they have to conform to preconceptions of what an environment should look like.
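The trigger/event mechanic described in the excerpt can be pictured as a small data structure: a line in the map references events that, when the line is crossed, mutate sector geometry or lighting. A minimal sketch in Python (the class and event names are illustrative, not the Doom engine's actual internals):

```python
# Minimal sketch of a Doom-style trigger system: crossing a trigger line
# fires events that mutate sector geometry (illustrative names only).

class Sector:
    """A region of the map with its own floor, ceiling and light level."""
    def __init__(self, floor_height, ceiling_height, light=160):
        self.floor_height = floor_height
        self.ceiling_height = ceiling_height
        self.light = light

def raise_stairs(sector):
    # Event: a staircase rises out of the ground
    sector.floor_height += 64

def dim_lights(sector):
    # Event: the light level drops (Doom also let designers alter lighting)
    sector.light = max(0, sector.light - 100)

class Trigger:
    """A line that fires its events on the affected sectors when crossed."""
    def __init__(self, events, sectors):
        self.events, self.sectors = events, sectors
        self.fired = False

    def cross(self):
        if not self.fired:  # one-shot trigger
            for sector in self.sectors:
                for event in self.events:
                    event(sector)
            self.fired = True

# Usage: the player crosses a trap line, stairs rise and the room darkens
room = Sector(floor_height=0, ceiling_height=128)
trap = Trigger([raise_stairs, dim_lights], [room])
trap.cross()
print(room.floor_height, room.light)  # → 64 60
```

Composing a handful of such events over variable-height sectors is enough to express the traps the article describes, from rapidly rising floors to bridges sinking into toxic sludge.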

Also important is this idea: "Doom defined the first person genre, but more importantly it made the idea of users modifying a commercial title acceptable to developers." Level design is the cornerstone of bottom-up innovation in the game world: through modding, end-users manage to create their own version of the world they want to play in.

Why do I blog this? What's explained here is of tremendous importance for understanding spatial practices in virtual worlds. The author of this piece is Sam Shahrani, an M.A. candidate at Indiana University in the Master's in Immersive Mediated Environments program through the Department of Telecommunications. He's doing an incredible job explaining level design from the game developers' perspective. I am looking forward to reading his dissertation.

It's certainly the most interesting piece about spatiality in video games I've read in the last few months.