HCI research about awareness of others in nightclubs

"DJs' Perspectives on Interaction and Awareness in Nightclubs" is a paper by Carrie Gates (University of Saskatchewan), Sriram Subramanian (University of Saskatchewan), Carl Gutwin (University of Saskatchewan) at DIS2006. This is the account of their project which aims at investigating DJ-Audience Interaction in Nightclubs.

We are examining the ways in which DJs and audiences gain awareness of each other in nightclub environments in order to make a set of design principles for developing new technologies for nightclubs. We expect to discover opportunities to enhance communication and feedback mechanisms between DJs and audiences, and to discover opportunities for developing novel audience-audience communications in order to create more meaningful interactions between crowd members, more playful environments, and a new dimension of awareness in nightclubs. We also expect that these design principles could be explored later within other audience-presenter situations, such as in classrooms or theatres.

Why do I blog this? Though a bit curious, this is very relevant from the HCI point of view: questions related to the awareness of others are important in that context.

It reminds me of an article by Beatrice Cahour and Barbara Pentimalli about the awareness of waiters in a café (in French: "Awareness and cooperative work in a café-restaurant"). They show how awareness is linked to the participants' attention mechanisms and how their level of awareness varies constantly.

Urban post-it in Geneva

I saw this in Geneva today: a sticker on a wall that invites people to "Drop a note: tick here", i.e. to drop a trace at that spot.

Why do I blog this? A curious act of urban practice. What happened here? I am looking forward to getting back there to see whether someone ticked it or added more annotations. This sort of message is more than just a sticker or a piece of graffiti; I like this invitation to participate.

Ethnographic studies of ubiquitous computing

Supporting Ethnographic Studies of Ubiquitous Computing in the Wild, by Crabtree, Benford, Greenhalgh, Tennent, and Chalmers, in Proc. ACM Designing Interactive Systems (DIS 2006). In this paper, the authors draw upon four recent studies to show how ethnographers are replaying system recordings of interaction alongside existing resources such as video recordings in order to understand interactions and eventually assemble coherent understandings of the social character and purchase of ubiquitous computing systems. In doing so, they aim to identify key challenges that need to be met to support ethnographic studies of ubiquitous computing in the wild.

One of the issues here is that ubicomp distributes interaction across a wide range of applications, devices, and artifacts. This fosters the need for ethnographers to develop a coherent understanding of the traces of activity, both external (audio and video recordings of action and talk) and internal (log files, digital messages...). Additional problems for ethnographers are that users of ubiquitous systems are often mobile, often interact with small displays and with invisible sensing systems (e.g. GPS), and that the interaction is often distributed across different applications and devices. The difficulty then lies in reconciling these fragments to describe the accountable interactional character of ubiquitous applications.

I like the quote below because it expresses the innovation here, the articulation between known methods and what they propose:

"Ubiquitous computing goes beyond logging machine states and events however, to record elements of social interaction and collaboration conducted and achieved through the use of ubiquitous applications as well. (...) System recordings make a range of digital media used in and effecting interaction available as resources for the ethnographer to exploit and understand the distinctive elements of ubiquitous computing and their impact on interaction. The challenge, then, is one of combining external resources gathered by the ethnographer with a burgeoning array of internal resources to support thick description of the accountable character of interaction in complex digital environments. "

The article also describes requirements for future tools, but I won't discuss that here; maybe in another post, reflecting on our own experience drawn from CatchBob. Anyway, I share one of the most important concerns they have:

The ‘usability’ of the matter recognizes that ethnographic data, like all social science data, is an active construct. Data is not simply contained in system recordings but produced through their manipulation: through the identification of salient conversational threads in text logs, for example, through the extraction of those threads, through the thickening up of those threads by synchronizing and integrating them with the contents of audio logs and video recordings, and through the act of thickening creating a description that represents interaction in coherent detail and makes it available to analysis.
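To make this notion of "thickening" more concrete, here is a minimal sketch of my own (it is neither from the paper nor Replayer): interleaving internal log events with external video annotations on a single timeline, assuming both carry timestamps and that the clock offset between the two recordings is known. All data and names below are invented for illustration.

```python
# Illustrative only: merge "internal" system log events with "external" video
# annotations by timestamp, assuming a known clock offset between recordings.

# hypothetical traces: (timestamp in seconds, description)
system_log = [(10.2, "player A sent message 'go north'"),
              (12.8, "GPS fix lost for player B"),
              (15.1, "player B sent message 'where are you?'")]
video_notes = [(11.0, "A points at the screen"),
               (15.5, "B looks around, seems lost")]

CLOCK_OFFSET = 0.4   # seconds to add to video time to align it with the server clock

def merge_traces(internal, external, offset):
    """Interleave internal and external traces on a single corrected timeline."""
    corrected = [(t + offset, "video: " + d) for t, d in external]
    tagged = [(t, "log:   " + d) for t, d in internal]
    return sorted(corrected + tagged)

for t, event in merge_traces(system_log, video_notes, CLOCK_OFFSET):
    print(f"{t:6.1f}  {event}")
```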

Why do I blog this? This paper describes a relevant framework of methods that I use, even though I would argue that my work is a bit more quantitative, using mixed methods (ethnographic and quantitative) with the same array of data (internal and external). It's full of relevant ideas and insights about this, and about how effective tools could be designed to achieve this goal.

What is weird is that they do not spend much time on one of the most powerful uses of the replay tool: using it as a source for post-activity interviews with participants. This is a good way to use external traces to foster richer discussion. In CatchBob! this proved very effective for gathering information from the users' perspective (even though it is clearly an a posteriori reconstruction). This method is called "self-confrontation" and is very common in the French tradition of ergonomics (the work of Yves Clot or Jacques Theureau, mostly in French).

Besides, there are some good connections with what we did and the problems we had ("the positions recorded on the server for a player are often dramatically different from the position recorded by the GPS on the handheld computer.") or:

the use of Replayer also relies on technical knowledge of, e.g., the formats of system events and their internal names, and typically requires one of the system developers to be present during replay and analysis. This raises issues of how we might develop tools to more directly enable social science researchers to use record and replay tools themselves and it is towards addressing these and related issues that we now turn.

Geotagthings

Julian Bleecker and Will Carter recently released geotagthings, a simple piece of software that allows you to assign geographic metadata to arbitrary web resources.

Geotagthings, a new web service designed to quickly and easily assign any web resource — anything with a URL — a location in the normal, human physical world. Using Yahoo! Maps' interface and API, Geotagthings makes short work out of a previously complicated process, while providing an open feed-based mechanic for retrieving geotagged resources and displaying them in your favorite news aggregator. (description taken from their Where2.0 presentation)

How does it work?

Anything with a URL can be given a latitude/longitude by simply clicking a bookmarklet, picking the spot it should be assigned using a map interface, adding a little note and that's that. The URL and note get shoved into a data store where it can be accessed through an RSS feed. Anyone can get a feed for a locale simply by going to the feed generator, picking where you'd like to get a feed from, determining a range around that spot and grabbing the URL from one of the feed badges, and dropping it into your favorite news aggregator, like NetNewsWire.
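To give an idea of the mechanics behind such a "feed for a locale", here is a rough sketch of my own (it is not Geotagthings' actual code or API): given items that each carry a URL, a note, and a latitude/longitude, keep only those within a given range of a chosen spot.

```python
# Illustrative only: filter geotagged items by distance from a chosen spot.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# hypothetical data store of geotagged URLs
items = [
    {"url": "http://example.com/cafe-review", "note": "nice terrace", "lat": 46.204, "lon": 6.143},
    {"url": "http://example.com/conference",  "note": "Where2.0 talk", "lat": 37.338, "lon": -121.886},
]

def local_feed(center_lat, center_lon, range_km):
    """Return the items that fall within range_km of the chosen spot."""
    return [i for i in items
            if haversine_km(center_lat, center_lon, i["lat"], i["lon"]) <= range_km]

# e.g. everything geotagged within 5 km of downtown Geneva
for item in local_feed(46.2, 6.15, 5):
    print(item["url"], "-", item["note"])
```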

Registration can be done here

Why do I blog this? Because I think it's an interesting service; the why question behind it is pertinent: they ask "why" in their description and answer "the network needs geographic semantics to make data resources relevant to meaningful, useful location-based services".

Katamari Damacy affordances

Angel Inokon has a good blog post about the affordances of Katamari Damacy (the PS2 game in which you have to roll a ball to collect items located everywhere):

Three Design Principles Katamari Damacy gets right:
- Affordances – affordances enable designers to create gameplay that leverages the natural limitations and features of an object. One of the clear affordances of a ball is that it rolls. Everyone, regardless of age, recognizes a ball and can easily conceive its primary function. (...) Users can quickly get immersed because the rolling action is consistent with the simple affordances of a ball.
- Visibility – gamers need awareness of the mechanics of gameplay through visuals and audio feedback. Two feedback mechanisms built in the game include a progress icon and sounds. The player is given a simple icon on the corner of her screen that shows the size of the katamari. (...) Gamers need lots of information. Integrating visibility principles allows designers to keep pumping the right information when they need it.
- Constraints – constraints prevent gamers from making errors that could decrease enjoyment of the game. Katamari Damacy centers around a single rule – players can’t roll up something that is bigger than their ball. If the player got lost in an area with many big objects, she could get frustrated. So the game blocks the paths to larger objects until her Katamari is large enough to roll over the barrier. It makes the game easier to explore and less overwhelming by essentially modularizing the levels (174). Failure is a critical aspect of gameplay, however good designers know how to constrain the environment so players stay immersed in the game.

Why do I blog this? Because I like Katamari and agree with these principles, which connect human-computer interaction à la Don Norman to efficient video game design.

Picking up color readings and transmitting them into the viewer's eyes

Monochromeye is a project carried out at the Smart Studio. Part of a more general project, it's actually a portable device with a fingerholder that picks up color readings and transmits them into the viewer's eyes:

Monochromeye is one of several optical machines that were built in an art-driven research project about light and perception called Occular Witness. The project attempts to stake out the limits of human vision and it examines how information is malleable and how meaning is formed through image in a time when information is abundant and our culture is saturated with layers of processed imagery. (...) Monochromeye is a portable device that enhances low resolution vision. A fingerholder contains one red, one green and one blue lightsensor that read the environment as you point at it. It feeds back the color information to two tricolored (RGB) light diodes that emit two beams of light straight into the viewer's eyes. At such a low resolution, the viewer can only get color readings. They do not contain any information beyond the color that is registered at the point in space where the viewer points his finger.
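The feedback loop is simple enough to sketch. The snippet below is purely illustrative (mine, not the project's code): read an RGB value where the finger points and emit the same color from the eye-facing diodes; read_finger_sensor() and set_eye_leds() are hypothetical stand-ins for the device's hardware interface.

```python
# Illustrative only: a Monochromeye-style loop, sensor reading -> eye LEDs.
import time

def read_finger_sensor():
    # placeholder: would return (red, green, blue) readings from the fingerholder
    return (120, 200, 80)

def set_eye_leds(rgb):
    # placeholder: would drive the two tricolored diodes with this color
    print("emitting color:", rgb)

while True:
    color = read_finger_sensor()   # one color reading, no spatial resolution
    set_eye_leds(color)            # feed it straight back to the viewer's eyes
    time.sleep(0.05)               # sample roughly 20 times per second
```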

Why do I blog this? Because this project is appealing to me; from the user experience point of view, I like this idea of enhancing low-resolution vision. Besides, the design is nice.

Locative technologies, Where2.0

The Where2.0 conference is happening very soon in San José, CA. Lots of promising stuff is going on there. Judging from the description, Mike Liebhold's presentation seems to nicely wrap up what has been going on so far in the field of "locative technologies/services":

Beyond a growing commercial interest in mobile GIS and location services, there's deep geek fascination with web mapping and location hacking. After several years of early experiments by a first generation of geohackers, locative media artists, and psychogeographers, a second, larger wave of hackers are demonstrating some amazing tricks with Google Maps, Flickr, and del.icio.us. Meanwhile, a growing international cadre of open source digital geographers and frontier semantic hackers have been building first-generation working versions of powerful new open source web mapping service tools. (...) Out of this teeming ecosystem we can see the beginning shapes of a true geospatial web, inhabited by spatially tagged hypermedia as well as digital map geodata. Invisible cartographic attributes and user annotations will eventually be layered on every centimeter of a place and attached to every physical thing, visible and useful, in context, on low-cost, easy-to-use mobile devices.

Lots of pertinent applications will be presented.

Also, some talks seem to discuss relevant issues that have already been brought up by academics, such as It's Place, Not Space (by Nikolaj Nyholm and Claus Dahl, both of Imity):

Location is not about geography. The most important thing isn't the space you're in -- the coordinates -- but the place you're in -- the people, ideas, and interactions between them. Indeed, space is just a gateway to place: we need the coordinates to compare to other coordinates, but what we care about is proximity. The good news is that while for now the coordinate technologies like GPS are mainly available in mapping devices, there's already a great proximity technology out there, deployed in literally hundreds of millions of cell phones: Bluetooth.

And of course, some of the challenges will be described, for instance in Map Spam 2008: A Sanity Check by Michael Bauer:

The Utopian view of a world where social networking, geo-location, and mobility converge to deliver a rich, multimedia database of micro-local content is debunked. All in the spirit of fun, this presentation will apply a dose of reality to a ubiquitous mobile world. The realities of spam, tag abuse, and predictive following apps are profiled, to highlight the issues we ignore at our peril.

Why do I blog this? Because my research is directed toward understanding the usage of such technologies (from the socio-cognitive point of view). I think these talks efficiently describe the characteristics of the locative technology scene in 2006. What is interesting is that they address not only location issues but also their context: software that should support social practices (social software :( ), a real world in which things can fail and where spam exists...

There still seem to be lots of projects about location-based annotation and friend-finders. I am wondering 1) whether they are actually used, 2) how they are used, and 3) whether there could be more innovative scenarios and usages. It seems that these two are now the most common examples, the "intelligent fridge" of the locative community.

The context of a display ecology

Displays in the Wild: Understanding the Dynamics and Evolution of a Display Ecology, by Elaine M. Huang, Elizabeth D. Mynatt, and Jay P. Trimble, is an in-depth field evaluation of large interactive displays; it exemplifies the "context of a display ecology".

It's a study about large interactive displays within a multi-display work environment used in the NASA Mars Exploration Rover (MER) missions, i.e. in a complex and ecologically valid setting. What is interesting are the lessons learned from this deployment:

the “success” of a large interactive display within a display ecology cannot be measured by whether a steady state of use is reached. Because people appropriate these tools as necessary when tasks and collaborations require them, there may be a natural ebb and flow of use that does not correspond to success or failure, but rather to the dynamic nature of collaborative work processes. Success is therefore better evaluated by examining the ease and extent of support that such displays provide when tasks call for a shared visual display or interactive work surface. (...) Another important lesson regarding the value of large displays in work environments came from our observation of the interplay between interactive use and ambient information display. In the realm of large interactive display research, a decrease in interactivity is often viewed as a failure of the system to support workgroup practices. We observed a migration from interactive use to ambient information display, and through our interviews discovered how valuable this ambient information was. (...) in the greater context of a display ecology, it is misleading to evaluate the isolated use of a single system; the existence of other displays in the environment means that it is important to understand how the ecology functions as a whole, not just how individual displays are used.

Why do I blog this? I found this paper interesting because it describes how people made use of such displays; the highlights the researchers bring forward also point to pertinent issues in the domain of ambient/interactive furniture, which could be helpful for some of our projects at the lab.

Navigating numerically in London

(Via Le Courrier International) In this New Statesman article, journalist Dollan Cannell explains how Chinese immigrants manage to navigate through London without knowing how to speak English (and hence being unable to memorize English street/landmark names):

There is a new class of Londoners, however, who navigate numerically. They live at 419 and work at 36. They meet friends at the end of 2, or lovers by 77. London's unofficial new geography derives from its buses. (...) Chinese immigrants brought to London by people smugglers, and they all use this method to find their way around. (...) Buses are their northern star: they need only identify which Mandarin characters correspond to 0 to 9 and the message displayed above the bus driver begins to make sense. When a new arrival is first taken by a contact to the flat he will share with a dozen others he is told the number of the nearest bus route. One thing he must be careful about is the direction a bus is travelling in, and for this his best guide can be trial and error. If he travels for a long time without seeing things he knows, he must alight and try the other direction. In this way most new immigrants build up a repertoire of routes.

When they talked to us, many identified locations that could easily be given names using a bus map and an A-Z.

Why do I blog this? Because of my interest in spatiality and cognition (my research topic), this story is a relevant anecdote about how human beings use tricks to navigate in space.

Augmenting Guy Debord’s Dérive

Talking with Adam Greenfield about his next work, I remembered that I had already written bits and pieces around the topic of how IT renews the urban experience. The report I called "Augmenting Guy Debord’s Dérive: Sustaining the Urban Change with Information Technology" (.pdf) is a bit old (2003), so the examples are somewhat outdated. It is just a few notes extracted from a paper I wrote with my colleague Mauro Cherubini called “To Live or To Master the city: the citizen dilemma” (Imago Urbis #2). Why do I blog this? Comments are welcome. I am not happy with the whole thing (bad English, naive ideas and almost no critical stance) but I thought that it would be good to put it online.

Mizuko Ito on anthropology and design

The latest issue of Ambidextrous has been released. Among the different articles, there is a relevant interview with Dr. Mizuko Ito (the interviewer is Danah Boyd). Some excerpts I like:

DANAH: Fabulous! Can you tell me more about how you see anthropology being relevant to design?

MIMI: I think there is a role for anthropology along all of the steps of the design process. But of course I would say that. Anthropology can help inspire new designs by providing profiles of users and stories about contexts of use. Anthropologists can play on design teams as designs get developed to sensitize designers to culturally and context specific issues. And finally, anthropologists can evaluate the effectiveness of designs through studies of actual use in context, either prototype, pilot, or after product roll-out.

DANAH: So what advice would you have to young aspiring anthropologists who want to study socio-technical practice and get involved in designing new technologies?

MIMI: This one is tough. Be prepared for some blank looks from people in your discipline - but a lively audience of practitioners and technology designers who are eager to hear stories from the field. The challenge is to be multilingual and interdisciplinary while also maintaining commitment to ethnographic perspectives and methods.

Why do I blog this? That's sometimes a feeling I have when working from a social science perspective with designers. Though I am wondering how to go beyond telling stories, because I feel there is much more to do.

Wearable gaze detector in the form of headphones

Via, this "Full-time wearable headphone gaze detector" by DoCoMo seems to be curious (ACM subscription required). A paper by Hiroyuki Manabe and Masaaki Fukumoto submitted at CHI2006. It actually describes a full-time wearable gaze detector that does not obscure the user’s view in the form of a headphone.

Full-time wearable devices are daily commodities, in which we wear wrist watches and bear audio players and cellular phones for example. The wearable interface suits these devices due to its features; the user can access the interface immediately, anywhere desired. For full-time wearable devices, the interface should be easy to wear, easy to use and not obstruct daily life. In this article, the “full-time wearable interface” is defined as an interface that the user can wear continuously without obstructing daily life and can use easily and immediately whenever desired.

What is interesting to me are the potential applications:

This system can be used as a simple controller for many daily use devices or applications, such as audio players. It can also be used as a selector that allows the user to choose surrounding objects. When the gaze detector is supplemented with a video camera and a wireless communication device and the surrounding objects have identifying tags like QR codes, the user can get information about the object of interest simply by gazing it.
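The "selector" scenario suggests a simple dwell-based mechanism. Here is an illustrative sketch of my own (not from the paper): if the gaze stays on the same tagged object long enough, treat it as a selection and fetch information about it; get_gazed_tag() and lookup_info() are hypothetical stand-ins for the gaze tracker and the information service.

```python
# Illustrative only: dwell-time selection driven by a gaze detector.
import time

DWELL_SECONDS = 1.5   # assumed threshold before a gaze counts as a selection

def run_selector(get_gazed_tag, lookup_info):
    current, since = None, None
    while True:
        tag = get_gazed_tag()          # e.g. the QR code currently in the line of gaze, or None
        now = time.time()
        if tag != current:
            current, since = tag, now  # gaze moved to a new target: restart the dwell timer
        elif tag is not None and now - since >= DWELL_SECONDS:
            print(lookup_info(tag))    # dwelled long enough: fetch and show the information
            since = now                # avoid triggering again immediately
        time.sleep(0.1)
```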

Why do I blog this? I was just intrigued by this sort of interface, especially from the cognitive standpoint: how would it impact our practices, and how could people cope with the cognitive load it would generate?

Codechecking products

Via [telecom-cities], codecheck.ch is described by Ars Electronica as:

The Codecheck project is an effort to create an informed “community” of consumers who are able to critically assess products prior to reaching their purchasing decisions. Whereas certain initiatives pursue this aim primarily by condemning retail offerings that are potential health hazards, Codecheck takes a different approach: it helps consumers decipher the product’s barcode. The way this works is as simple as can be. A potential buyer uses his/her PC to enter the product’s numerical code and sends it via Internet to codecheck.ch; what immediately comes back are comprehensive definitions and information from experts about ingredients like sodium laurent sulfate and E250. The result is the creation of a reference work that is constantly being expanded and updated with contributions from manufacturers, wholesale distributors, specialized labs, consumer organizations and individual consumers. Potential purchasers thus have access to a wide variety of information, opinions and reports, a body of knowledge that constitutes a solid basis on which to form an opinion about a particular product.

Plans are currently in the works to enhance this system by building in mobility. For example, a shopper in a supermarket could use his/her cell phone’s camera to photograph a product’s barcode and then send this image as an MMS to codecheck.ch, and the relevant information would immediately be transmitted back. By linking up diverse technologies (photography, Internet, telecommunications) in this way, Codecheck represents a step in the direction of well-informed consumers.
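The core interaction is basically a lookup keyed by the barcode number. The toy sketch below is mine, not codecheck.ch's real database or API; the codes and entries are invented for illustration.

```python
# Illustrative only: map a scanned EAN/barcode number to ingredient information.
PRODUCT_DB = {
    "7610200337481": {"name": "example shampoo",
                      "ingredients": ["aqua", "sodium laureth sulfate"],
                      "notes": "surfactant; can irritate sensitive skin"},
    "7610048000123": {"name": "example cured ham",
                      "ingredients": ["pork", "salt", "E250 (sodium nitrite)"],
                      "notes": "E250 is a preservative and color fixative"},
}

def check_code(ean: str) -> str:
    entry = PRODUCT_DB.get(ean)
    if entry is None:
        return "unknown product: " + ean
    return f"{entry['name']}: {', '.join(entry['ingredients'])} ({entry['notes']})"

# e.g. what a phone would display after photographing or typing in the barcode
print(check_code("7610200337481"))
```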

Why do I blog this? I am less interested in this as a way to better inform consumers than in the usage it creates: "checking objects". It participates in a kind of interaction people have more and more often in public places: pointing a device at objects. First it was to take pictures (lots of pictures: moblogging, pictures that go straight to Flickr from the cell phone), now it's codechecking (not really pointing, though...). What's next: touching objects to do the codecheck? The "wand" metaphor is more and more relevant.

Spatial technology workshop at UpFing06

(Sorry for the bad English below; I took notes in real time and recomposed them quickly.) As I mentioned earlier, I had to run a workshop about "locative media" and spatial technology today. What was interesting is that attendees had quite different ideas in mind when attending it: some were concerned with business models, others with memories in space, one or two with a curiosity about Google Earth and place-based annotations, others with mobility and technology. Maybe the description on the website was a bit too narrow: since it quoted Google Earth, Yellow Arrow, and Flickr, different representations were triggered in people's minds.

After introducing the whole concept and describing the fact that the field is a bit messy and covers lots of practices/technologies/services/usages, there were three presentations. The first one was by Yann Le Fichant, who leads a company called voxinzebox; he explained the different services they propose for city navigation (first on 2nd-generation GSM and now on Pocket PC). He reminded us of the importance of self-geolocation in that context (people declaring their own location on a cell phone to get information about a specific place, which would eventually guide them to various landmarks). He also underlined the importance of PNDs (personal navigation devices) like TomTom or Garmin, which are more and more complex (improved memory, communication protocols) and could lead to new, innovative tools. Yann provocatively asked why the sex industry has not yet found any big hit using location-based applications. The discussion also touched on Google's move into 3D modeling by buying SketchUp (a modeling tool that would eventually allow people to model their house in 3D and put it on a Google map).

Then Cyril Burger talked about his PhD research: an ethnography of mobile phone usage in the Parisian subway. Cyril investigated people's behavior and trajectories while using voice calls and SMS. He underlined the fact that the transport authority did not at first introduce any norms, so the rules that emerged were based on another norm: how people drive. Through that code, rules of sociability emerged in terms of movement (for instance stopping in locations which are not crowded so that the flow is not cut; the arrival of the metro often leads the user to stop the conversation). In terms of gesture, people often stay motionless while texting, whereas voice communication leads to more active/lively behavior (gestures, smiles...).

I also like his remark about the very fact that non-material places need material places: servers have to be located somewhere. This is connected to what Jeffrey Huang talked about at LIFT06: the fact that networked technologies lead to new sorts of places (and consequently that place still matters).

Then Georges Amar (foresight manager at RATP, the Paris subway operator in France) presented his company's new paradigm. Subway companies previously based their development on hygienist theories: efficiency was correlated with fluidity and as little contact as possible (nicely exemplified by contactless RFID subway passes); the subway was disconnected from the city. Automation led to layoffs and the disappearance of controllers and even drivers; this made the subway more permeable (more and more insecurity, people riding without paying): the city entered the subway. Now their model is rather about having both efficiency AND contact: let's take advantage of the presence of people; the city is in the metro and there are opportunities for relevant services. The crowd is seen as a resource and not as a constraint. In terms of prospective services, places/stations can be transformed, new types of jobs can be created, and the transporter's role changes accordingly. The subway could then be seen as a PLACE to meet people, or at least to do something with others. One of the attendees mentioned Starbucks' idea of being a place for business meetings: could the subway have certain areas for business meetings? Another point is that signs that are fixed and directed at every user could be individualized for a certain category of customers (with specific interests or disabilities), or even further: the crowd's traces in space could be material for creating new kinds of signs to foster better navigation or the discovery of places or people.

After those three presentations, we had a discussion about different projects (current or prospective) like earthTV (seeing real-time events in Google Earth; this has actually been considered in the Japanese subway, to see where the crowd is in order to better avoid it), tags in Google Earth (very often community-based, e.g. "I use Linux" close to the Microsoft building), locators for personal objects (googling my shoes, finding my personal belongings), indoor technologies (museums), and trackers (kid/prisoner tracking).

Overall, the discussion revolved more around mobility, people, and above all meetings, and less around technologies and usage. That's important from the rhetorical point of view: we discussed contexts and needs (with a particular emphasis on the subway experience) as opposed to the technology-push projects we've seen so far: allowing PEOPLE (in a specific context: mobility, a limited amount of time, limited cognitive resources because of route finding) to do something (having meetings and exchanges with others, discovering information related or not to the route).

One of the conclusions was also that innovation in spatial technologies is often due to the work of particular companies such as RATP (subway operator) or JCDecaux (urban advertising), which are ubiquitous and bound to specific mobile needs. Some researchers from a French phone operator acknowledged that innovation is very tough for them because everything is either locked or behind walled gardens when it comes to phones (SIM cards, low interoperability, different standards, hard-to-use voice/location-based applications, different kinds of phones/handhelds...). This resonates with discussions we had at the lab (see here or there).


Workshop about spatial technologies at UpFing06

Currently at the Université de Printemps de la FING 06, a big gig organized by la FING, a French think tank working on innovation and IT. The venue is quite nice, an old Catholic mansion:

(photos: upfing (1), upfing (2))

The reason I am here is that I have to run a workshop about spatial technologies in the broad sense (locative media, location-based services, place-based annotation platforms...). The event is in French.

Here are the slides of my presentation (in French, PDF, 3.8 MB). I described the following issues:
- When we look at the terms we use when we talk about spatial tech, they are very diverse (ranging from geowanking to locative media, geotagging, or buddy-finders). Sometimes it's about practices, sometimes about technologies, sometimes about services...
- We will focus on a specific sub-practice: place annotation.
- What is interesting is that the usages regarding that practice seem diverse, but this view does not take a diachronic perspective (the fact that people annotated space a LONG time ago), nor the size of the target group of users (% of tech-savvy persons? % of the total population).
- Some of the most interesting examples will be presented (Yellow Arrow, Flickr notes, stamps...).
- And I will describe why this is important in terms of socio-cognitive processes: the fact that space affords specific interactions and shapes people's behavior and agency. People leave traces in space and then decode them as cues for acting.

I will post some more notes later about people's interventions, the subgroup activity, and the conclusions.


Finding a location for a pervasive game

Kuan Huang sent me one of his pieces, which seems quite intriguing: his project entitled "Space Invaders 2006" (done within the Computer Science Department and the Interactive Telecommunications Program). The project page is informative and explains the whole process (I like it when people explain how they are doing what they're doing, e.g. "Since it's a thesis project, the most critical thing is that I need to have a working demo to present in the last week of school. So finding a location is the first step.").

In the past one year, some testings and experiments were conducted within NYU campus. For our thesis projects, we decided to put together all the experience and lessons that we learned from previous testings and make an outdoor playable video game in three months.

Space Invaders 2006 is an outdoor video game that takes advantage of real world architecture spaces and transforms them into a game playground. Basically, the video game is projected onto a building. The player has to move left or right to control the motion of the aircraft. Whenever the player jumps, the aircraft shoots out a bullet.
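The control mapping is easy to picture. Here is a simplified sketch of my own reconstruction (not the project's code): the player's horizontal position, tracked by the camera, steers the aircraft, and a jump fires a bullet; track_player() is a hypothetical stand-in for the computer-vision part.

```python
# Illustrative only: mapping a tracked player to Space Invaders-style controls.
def track_player():
    # placeholder: would return (x_position_0_to_1, is_jumping) from the camera feed
    return (0.5, False)

def update_game(state):
    x, jumping = track_player()
    state["aircraft_x"] = x * state["screen_width"]    # move left/right with the player
    if jumping and not state["was_jumping"]:
        state["bullets"].append(state["aircraft_x"])   # a jump fires a new bullet
    state["was_jumping"] = jumping
    return state

state = {"screen_width": 640, "aircraft_x": 320, "bullets": [], "was_jumping": False}
state = update_game(state)
print(state)
```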

The playground:

Why do I blog this? Yet another example of using the real world as the interface. Of course, the analysis is a bit rough (testing... surveys...) but it's interesting to read how they thought about it. I am curious about this location question: what is a good location for a pervasive game, what constraints can designers think about, what about the spatial topology? Look at what Kuan highlighted as constraints:

Here are some technical issues that I can't solve in a short time:
- I am not allowed to climb high to mount a camera onto one of the light stands in the park.
- I need an at least 30 meters long power strip to get power supply from a building across the street.
- There are some drug dealers hanging around the park after 9PM. It is kind of scary if I carry a laptop, a projector, a video camera at that time.
- Too much ambient lights in that space which is bad for large-scale projection.

Wiki science, zillionics and AI

A few quotes from Kevin Kelly's thoughts in Edge (the piece is called SPECULATIONS ON THE FUTURE OF SCIENCE); it's mostly about "the evolution of the scientific method", as the author puts it. Some of the examples are interesting and curious (I don't agree that all of them are new, like pattern recognition... hmm, don't we already have that?).

AI Proofs – Artificial intelligence will derive and check the logic of an experiment. Artificial expert (...) systems will at first evaluate the scientific logic of a paper to ensure the architecture of the argument is valid. It will also ensure it publishes the required types of data. This "proof review" will augment the peer-review of editors and reviewers.

Wiki-Science – The average number of authors per paper continues to rise. With massive collaborations, the numbers will boom. Experiments involving thousands of investigators collaborating on a "paper" will be commonplace. The paper is ongoing, and never finished. It becomes a trail of edits and experiments posted in real time — an ever evolving "document." Contributions are not assigned. Tools for tracking credit and contributions will be vital. Responsibilities for errors will be hard to pin down. Wiki-science will often be the first word on a new area. Some researchers will specialize in refining ideas first proposed by wiki-science.

Zillionics – Ubiquitous always-on sensors in bodies and environment will transform medical, environmental, and space sciences. Unrelenting rivers of sensory data will flow day and night from zillions of sources. This trend will require further innovations in statistics, math, visualizations, and computer science. More is different.

Return of the Subjective – Science came into its own when it managed to refuse the subjective and embrace the objective. The repeatability of an experiment by another, perhaps less enthusiastic, observer was instrumental in keeping science rational. But as science plunges into the outer limits of scale – at the largest and smallest ends – and confronts the weirdness of the fundamental principles of matter/energy/information such as that inherent in quantum effects, it may not be able to ignore the role of observer. Existence seems to be a paradox of self-causality, and any science exploring the origins of existence will eventually have to embrace the subjective, without becoming irrational. The tools for managing paradox are still undeveloped.

Why do I blog this? Kelly's vision is of course that of an observer of current technological change; sometimes it's a bit off with regard to scientific practices, but he certainly has some good ideas, and the meta-observation described here is valuable. I agree with some of the points he makes; nothing really new in what I picked up here, but it's relevant to my practice and I share the same feelings.

Turning vacuuming robots into pets

Via THE PRESENCE-L LISTSERV, it seems that Roomba vacuum robots are getting more and more complex: myRoomBud allows you to personalize the iRobot Roomba vacuuming robot.

Since 2005, myRoomBud™ has been selling RoomBud™ costume covers to the owners of the 2 million Roomba robots and turning their vacuuming robots into pets. Now, the RoomBuds have been given (multiple) personalities. RoomBud Personalities enhance the Roomba pet experience by "teaching" your Roomba to act like the pet or character trapped deep inside it. Roobit the Frog hops around, Roor the Tiger growls then pounces, and RoomBette La French Maid wiggles its behind at you before vacuuming your room.

Why do I blog this? Even though this is a simple step, it's interesting to see how small organisations participate in this exploration of new affordances of things.

Stuff on the street

Things on the street around my place in Geneva are more and more curious; look at this one I saw yesterday: thing on the street

It might be a mixer or something that is able to rotate. Why do I blog this? Even though this is garbage, I am always intrigued by this sort of thing dropped here, and I often think about what the passé (past) of this object was and what its potential future might be (chances are high that it will be tossed, but you never know).