Culture

Studies of the impact of the media on people have not produced stable results

Great read tonight: Studying the New Media by Howard Becker (Qualitative Sociology, Vol. 25, No. 3, 2002). The author focuses on studies of the "impact of the media on people", the sort of thing you see popping up in the press on a regular basis (be it about TV, video games, comic books or the interwebs). Becker shows that these studies have not produced stable results because they operate with an unrealistic view of people. He describes how inaccurate the "impact" paradigm is and the fact that it has never produced any solid findings about the good or bad effects of XXX (where XXX stands for arts experience/TV/video games, etc.):

"The idea that you could isolate a unique influence of such a thing as TV or movies or video games is absurd on the face of it. Social scientists, operating under the best conditions, have enough trouble demonstrating causal relations between any two variables—to tell the truth, I don’t think they ever do, just maybe hint at it. Studying the effect of a communication medium which operates in the middle of ordinary social life, with all its complications, is not working under the best conditions, and the demonstration of cause and effect is, practically speaking, impossible. (...) The “impact” approach improperly treats the public as an inert mass which doesn’t do anything on its own, but rather just reacts to what is presented to it by powerful (usually commercial) organizations and the representatives of dominant social strata."

He exemplifies how "the image of an inert, passive mass audience is a gross empirical error" with various cases where other researchers have shown that "ordinary people" aren't passive: TV viewing (where "users" imaginatively explored the possibilities of adult relationships), the creation of internet websites, or the writing of homosexual pastiches of the Star Trek stories or pornography:

"One of the first uses of any new communication technology has always been to make pornography. Photography was no sooner invented in the mid-nineteenth century than people were using it to make and distribute dirty pictures. (...) I’m talking about the “amateurs” in this field, of whom there have always been a lot. (...) In other words, pornography is a major area of use of digital technology by ordinary folks."

Why do I blog this? reflecting on past paradigms and approaches I was taught.

Naming conventions and usage

Naming digital devices such as music players or car-navigation systems is always intriguing and it's often curious to see which terms are employed by people. In a world where artifacts do not necessarily rely on existing technical lineages, companies need to create new terms. Eventually, these names are not the ones that make it to the surface. Two examples that I like:

John, saved by THE GPS

The story of a kid who was "saved by the GPS" (or in French "le GPS"). "GPS" here refers to car-navigation assistants, which generally use this positioning technology to locate the vehicle. In this case, the name of the device emerged from the enabling technique itself.

Another great example, commonly used in the Swiss press, is "le MP3", i.e. the music player that allows one to play audio files. In this case, the name of the device emerged from the file format itself... even if the artifact plays different file formats (such as .AAC).

Why do I blog this? Just thought about this while reading one of my students' dissertation drafts. Naming conventions are always interesting and it's curious to follow which terms are picked up by people. This echoes other trends from the past, for which we have obvious examples such as "Frigidaire" (a brand name used as a generic term).

Ubiquitous obama representations

Following Julian, different forms of Obama representations that I refer to as "Obamania" in my Flickr stream: the "Obama" pizza in Paris, street graffiti in Saint-Étienne and Geneva, and an ad poster in Paris.

Why do I blog this? these iconic representations are quite interesting in terms of diversity and the meanings they certainly evoke for people. A sort of meme that finds its way onto the urban fabric. Nothing really new here but it's always curious to spot this.

The evolution of the "amateur" figure

Raw notes from a presentation by André Gunthert at the Geneva University of Art and Design the other day:

Amateur photography appeared around 1880, after the transition from silver to silver-chloride... which led to photojournalism and scientific photography such as the work of Albert Londe. Curiously, Londe always referred to himself as an "amateur", although he was the medical photographer at the Salpêtrière Hospital in Paris (and perhaps one of the most famous chronophotographers along with Étienne-Jules Marey). Gunthert's claim is that the reason why a photo expert such as Londe referred to himself as an amateur was that he lived in a transitional time between the Daguerreotype and the new popular activity of photography. Claiming to be an amateur was a peculiar stance, an avant-garde choice that aimed at showing others that "he was on the other side, a promoter of the new technique".

The end of the 19th century saw the emergence of the new figure of the amateur, through Kodak's release of their camera ("You press the button, we do the rest"). To be an amateur at the time would be described today as being a "user", and it's because of the arrival of this stance that amateurs have been opposed to professionals.

However, it's only around 2005 that amateurs became a threat to professionals. Gunthert traces this back to the London subway bombings. This event in July 2005 can be seen as a turning point in global news coverage, especially because the news (BBC) asked survivors/witnesses to send them images (taken with cameraphones)... simply because they could not go there.

In parallel, Gunthert describes how the Web started to build its own mythology around the amateur ("We the Media" by Dan Gillmor, citizen journalism, the Web 2.0 slogan by Tim O'Reilly, etc.)... and eventually services such as YouTube in 2005 were explicitly built (and valued) for their capacity to be based on "user-generated content".

To him, the best example of this trend is Be Kind Rewind, a sort of testimony to the notion of amateur culture. Gunthert describes this movie as the nicest way to depict user-generated content because it shows HOW IT WAS SUPPOSED TO BE: a sort of YouTube where viewers would act as participants and create their own videos. These kinds of expectations of course led to laws and measures taken by governments in certain countries to help the cultural industry... because lots of people believed in this myth.

But this vision did not materialize. He described these expectations as self-fulfilling prophecies proposed by web gurus and showed that most of the content on platforms such as YouTube is not original creation. There is indeed a great shift from television to platforms like YouTube, but it's mostly an archive of past productions (with tons of copyright infringements). And it's not only an archive: there are also ads and new forms of communication proposed by companies (see the Evian roller babies campaign).

He concluded by stating that most of the interest from researchers/media has so far been drawn to the production side of usage, and that he is more interested in how people use these platforms. YouTube is now the second web search engine and people access it to look for answers (e.g. how to fold a tent). To Gunthert, we are in 2009 in a situation close to the one people in the 20th century encountered with sound recording devices. At the time, inventors and industrial companies had high expectations about these machines: they were supposed to help produce content and store people's memories. But it did not happen, and they were mostly employed to listen to music. Nevertheless, this does not mean that people were passive, and there are lots of interesting and active practices with regard to sound recording devices.

Why do I blog this? my notes here are a bit messy and incomplete. I tried to translate this roughly into English but I was quite interested by his approach. Of course, some other things could be added about the DIY culture and perhaps my transcription is a bit shaky but I found it intriguing to deconstruct the notion of amateurs and usage.

Science consultants in sci-fi shows

On Sci-Fi Wire, there is this curious description of how science consultants have been called in to work on Star Trek/Battlestar Galactica:

"Former Star Trek writer and creator of the re-imagined Battlestar Galactica Ron Moore revealed the secret formula to writing for Trek. He described how the writers would just insert "tech" into the scripts whenever they needed to resolve a story or plot line, then they'd have consultants fill in the appropriate words (aka technobabble) later.

"It became the solution to so many plot lines and so many stories," Moore said. "It was so mechanical that we had science consultants who would just come up with the words for us and we'd just write 'tech' in the script."

La Forge: "Captain, the tech is overteching."

Picard: "Well, route the auxiliary tech to the tech, Mr. La Forge."

La Forge: "No, Captain. Captain, I've tried to tech the tech, and it won't work."

Picard: "Well, then we're doomed.""

Why do I blog this? Reference for later. This is a model for creating design fictions but I wonder how to go beyond it. Using this kind of process may lead to a certain vision of the future that is very normative. Charles Stross describes on his blog how he works, and that is more interesting to me:

"I use a somewhat more complex process to develop SF. I start by trying to draw a cognitive map of a culture, and then establish a handful of characters who are products of (and producers of) that culture. The culture in question differs from our own: there will be knowledge or techniques or tools that we don't have, and these have social effects and the social effects have second order effects — much as integrated circuits are useful and allow the mobile phone industry to exist and to add cheap camera chips to phones: and cheap camera chips in phones lead to happy slapping or sexting and other forms of behaviour that, thirty years ago, would have sounded science fictional. And then I have to work with characters who arise naturally from this culture and take this stuff for granted, and try and think myself inside their heads. Then I start looking for a source of conflict, and work out what cognitive or technological tools my protagonists will likely turn to to deal with it. (...) The biggest weakness of the entire genre is this: the protagonists don't tell us anything interesting about the human condition under science fictional circumstances. The scriptwriters and producers have thrown away the key tool that makes SF interesting and useful in the first place, by relegating "tech" to a token afterthought rather than an integral part of plot and characterization."

Sensory anomalies

Laptop music This morning, while preparing my upcoming course, I stumbled across this great chapter about Sensory Anomalies by Michael Naimark. Some excerpts I found relevant are below:

"The single biggest difference between first-hand and mediated experiences is whether sensory anomalies exist. There are none in first-hand experience. Such anomalies always have explanations (...) The physical world obeys the laws of science. When we experience anomalies in the physical world, it's due to human hardware or software issues, such as blindness or psychosis, not because of the environment. (...) "Virtual Reality", in its theoretical construct, is the merging of the feeling of first-hand experience with the freedom from physical-world constraints. (...) the goal is indistinguishability from first-hand experience in the physical world: "just like being there." Such VR doesn't exist and may never (at least not without electrodes). So for now, we live with even the best sensory media having some degree of anomalies. These anomalies are not intentional, and entire industries exist to make higher resolution cameras, better synthesized lighting models, and auto-stereoscopic displays."

In the chapter, Naimark describes several projects that both transcend and exploit sensory anomalies, and gives a series of observations about what happens. It leads him to the following conclusion:

"Sensory anomalies are funny things (...) Metaphor to some is violation to others. "Faithful representation" is a noble engineering goal, but things aren’t quite as clear in art and design. To confuse, or clarify, things further, good metaphor can often be a form of shorthand. If we share similar cultures, backgrounds, or personal experiences, metaphor is a form of abstraction, of compression. So in the end, the degree of faithfulness and the degree of violation depend on what we want to say. "

Why do I blog this? I really love these lines. They very much echo recent discussions I had with people from the game industry who aim at jumping over the Uncanny Valley. The notion of a preferable anomaly seems more appealing to me in terms of opportunities and design constraints.

The image above was taken yesterday at Share GVA, an audiovisual jam session for media artists and technicians that I attended. The whole event (my picture too, actually) is based on toying with sensory anomalies.


Meme circulation: Parking Wars

The "Parking Wars" application on Facebook was certainly one of my favorite games two years ago. I gave it a shot for 3-4 months and then let it go (although one of my friends is a "$28,699,245 (Parker Emeritus)"). Besides, it may have been the only application that attracted me to log in to Facebook back then.

The game, designed by Area/Code, was actually a Facebook app meant to promote a television show:

"In Parking Wars, players earn money by parking -- legally or illegally -- on their friends' streets. Players also collect fines by ticketing illegally parked cars on their own street."

What was fantastic at the time was the fact that this simple game took advantage of the Facebook social graph in curious ways:

  • The underlying logic is simple: you need to have friends to park your cars on their streets. The point is therefore to maximize the number of friends who play Parking Wars... which leads players to participate in the network effect through invitations (on top of word-of-mouth).
  • The game is asynchronous and turn-based, so it's good to find friends in different time zones so that you can place/remove your cars while they sleep (a moment during which you don't risk getting fined).
  • When giving a fine you can send messages to other players. The dynamic here is highly interesting, as people repurposed this into a weird communication channel that is public but addresses a different audience than the Facebook wall.
  • Competition is stimulated with a peculiar kind of scoreboard: you only see scores from other players within your network (who added the game). This is thus a sort of micro-community where each participant's score is made explicit.
  • The "level design" is also interesting, with a "neighbor" feature that enables you to park on adjacent streets, which can be owned by people outside your network.
  • The cheating tricks are also social: you can ask less-active FB users to add the game (so that you're pretty sure they won't check that you're illegally parked), you can create a fake FB account, or you can benefit from streets created by people who stopped playing.
  • ... and I am sure there is more to it from the social POV
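Area/Code's implementation is of course not public, but as a thought experiment the asynchronous park-or-ticket loop described above can be sketched roughly like this (the class, methods and fine amount are all my own invention):

```python
class Street:
    """A player's street in a Parking Wars-style asynchronous game.

    Friends park cars on the street while the owner is away; when the
    owner checks, every illegally parked car found earns them a fine.
    """

    FINE = 50  # hypothetical fine amount

    def __init__(self, owner):
        self.owner = owner
        self.parked = []        # list of (player, legal) tuples
        self.owner_balance = 0

    def park(self, player, legal):
        """A friend parks a car, legally or not, on this street."""
        self.parked.append((player, legal))

    def check(self):
        """The owner logs in and tickets every illegally parked car.

        Legally parked cars stay put. This is why players look for
        friends in other time zones: a car parked while the street's
        owner sleeps is unlikely to be checked before it is removed.
        """
        collected = 0
        still_parked = []
        for player, legal in self.parked:
            if legal:
                still_parked.append((player, legal))
            else:
                collected += self.FINE
        self.parked = still_parked
        self.owner_balance += collected
        return collected
```

For instance, if "bob" parks illegally and "carol" legally on alice's street, alice's next `check()` collects one fine and leaves carol's car alone, which is the whole ticket-while-they-sleep dynamic in miniature.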

Interestingly, my curiosity about Parking Wars came back to the surface when chatting with my neighbor Basile Zimmermann, who works as a research scientist at the University of Geneva. In a recent project, he addressed how Chinese social networking sites re-interpreted design concepts already used by existing platforms such as Facebook and turned them into something different.

Which is how he showed me a curious application he saw on a Chinese SNS called "开心网 / Kaixin001" ("Happy Network"): a Parking Wars-inspired copy, also called "争车位" ("Parking Wars"), which appeared in July 2008:

The layout is similar to the one created by Area/Code and some cars are fancier than others, but the main difference lies in the presence of advertising (as shown by the "LG" brand). As a matter of fact, the ad part was not included in the first few months of this Parking Wars version on the Happy Network; it appeared around March 2009, according to Basile. From what I'm told, the game is evolving too, with a system of maps that operates differently from the FB version.

More explanation in his upcoming paper about this topic: Zimmermann, B. (forthcoming). "Analyzing Social Networking Web Sites: The Design of Happy Network in China" in Global Design History, Adamson, Teasley and Riello eds, Routledge.

Why do I blog this? dual interest here: 1) my fascination towards Parking Wars and its underlying game design mechanism based on social dimensions, 2) the transfer of this meme in another culture.

Exploded TV

Watching TV while driving One hour after riding in this TV-enabled taxi in Jeju the other day, I read this quote in Crooked Little Vein by Warren Ellis:

"Think of it as exploded television. Every station has at least one show you want to see, right? Well, on my network, your favorite show is on all the time. Everyone’s favorite show is on all the time, whenever you want to watch it. Add up all the viewers on my network, and I have a bigger audience than HBO. This ain’t fringe anymore, friend. If you define the mainstream as that which most people want to watch, then I’m as mainstream as it gets. (...) Exploded television. I am the ultra cable company. This is the way of the future. Anything you want on a computer screen, whenever you want it, through a subscription or a micropayment of a few bucks through your credit card. That eel thing? For a buck a time you can download the day's highlights to your iPod and watch it while you're in the can."

Spot on! Korea was indeed a good place to spot different forms of "exploded television" (and certainly a good place to read Ellis). There are explosions in terms of the devices themselves (in-car TV, mobile phones, some urban screens) and in services too.

Media facade


Mobile TV

Mobile TV is certainly an interesting instance of exploded television one can notice in Korea, as shown by the pictures above. Mobile TV penetration is slightly over 30% and of course mobile phones are designed with this in mind. Some of them indeed have screens that can be twisted to get a landscape view. Koreans have now had access to this for four years (through satellite DMB (S-DMB) and terrestrial DMB (T-DMB) services).

Why do I blog this? connecting the dots and finding curious metaphors for socio-technical trends after a refreshing trip to Korea.

Retro-computing

People interested in retro-computing (i.e. the use of early computer hardware and software today) may want to have a look at the 101 Project: an independent creative platform to collect memories and archives in order to develop a documentary film. Selected at SIGGRAPH, it is a kind of collective memory incubator that will first be part of the film and will also live on, apart from and after the film, as a web platform.

It's possible to start dropping your memories here.

Correlation != cause and effect

Cluster of services Definitely an awkward combination of services encountered in Chamonix last week: the weather board has been combined with a condom vending machine and a letter-box. As written on the green thing, the "Meteo" box ("météo" means "weather" in French) is a curious cluster.

I take it as an example to express that correlation (i.e. a connection between two or more things) DOES NOT mean causation.
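The classic toy illustration of that point: two quantities driven by a common third variable correlate strongly without either causing the other. A minimal sketch with invented numbers (the variable names and magnitudes are mine):

```python
import random
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
# Both series depend on temperature; neither causes the other.
temperature = [10 + d for d in range(30)]
ice_cream_sales = [5 * t + random.gauss(0, 10) for t in temperature]
sunburns = [2 * t + random.gauss(0, 5) for t in temperature]

# Strongly positive correlation, yet ice cream does not cause sunburns:
# the hidden common cause (temperature) does all the work.
r = pearson(ice_cream_sales, sunburns)
```

Exactly like the weather board and the condom machine: bolted together, highly correlated in space, causally unrelated.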

White Glove Tracking

The never-ending discussions about MJ in various contexts sometimes lead you to talk about pop culture in conjunction with concepts such as "crowdsourcing" and the "social web". This is what happened yesterday when I brought up the White Glove Tracking project in a social web meeting. It was a crowdsourcing experiment from 2007, created by Evan Roth and Ben Engebreth, that asked an online community to help track Michael Jackson's white glove in a televised performance, across 10,600 frames. It then took 72 hours to go from the txt files to user-generated visualizations of the collected data using Processing. Some of the results can be found in the white glove gallery.
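The project's actual pipeline isn't documented here, but the core aggregation step, i.e. turning several volunteers' clicks on the same frame into one glove position, can be imagined as a simple median vote (a sketch with hypothetical coordinates; the real data format surely differs):

```python
from statistics import median

def aggregate_clicks(clicks_per_frame):
    """Reduce several volunteers' (x, y) clicks on one frame to a single
    robust glove position, taking the per-axis median so that a stray
    outlier click barely moves the result."""
    xs = [x for x, y in clicks_per_frame]
    ys = [y for x, y in clicks_per_frame]
    return (median(xs), median(ys))

# Three volunteers annotate the same frame; one click is way off.
frame_42 = [(120, 88), (118, 90), (400, 12)]
position = aggregate_clicks(frame_42)  # the outlier is ignored
```

With the median, the (400, 12) mis-click has no effect, which is part of what makes "human eyeballs" in aggregate so efficient for this kind of distributed visual analysis.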

Why do I blog this? beyond the MJ thing, I find this crowdsourcing instance quite curious (among others like the NASA Clickworkers project), and some examples are hilarious (like the big white glove that fits quite well with the Billie Jean bass line). A sort of testbed for distributed visual data analysis, where human eyeballs are quite efficient.

What I also find intriguing here is the social dynamic around it; see for instance both the different categories in the gallery ("the winners") and the sort of "rankings" of participants (see below). I am always amazed by pervasive rankings; perhaps they play a role in crowdsourcing experiments:

(ranking)

A Sony walkman described by a 21st century kid

This account by a British teenager of how he used a Sony Walkman from back in the day is highly intriguing. The kid tells his story and compares it to the iPod. Some excerpts I enjoyed:

"My dad had told me it was the iPod of its day. (...) Throughout my week using the Walkman, I came to realise that I have very little knowledge of technology from the past. I made a number of naive mistakes, but I also learned a lot about the grandfather of the MP3 Player. (...) It took me three days to figure out that there was another side to the tape. That was not the only naive mistake that I made; I mistook the metal/normal switch on the Walkman for a genre-specific equaliser, but later I discovered that it was in fact used to switch between two different types of cassette (...) Another notable feature that the iPod has and the Walkman doesn't is "shuffle", where the player selects random tracks to play. It's a function that, on the face of it, the Walkman lacks. But I managed to create an impromptu shuffle feature simply by holding down "rewind" and releasing it randomly - effective, if a little laboured. (...) This is the function that matters most. To make the music play, you push the large play button. It engages with a satisfying clunk, unlike the finger tip tap for the iPod."

Why do I blog this? this is both fun and inspiring, as it is always curious to see how the naive usage of a previous device is described (especially in conjunction with the use of new artifacts). Beyond the fun read, it's interesting because it allows one to grasp today's users' perception of certain features and affordances. The need for a reference ("the iPod of its day") and the understanding of switches and buttons show how mental models are shaped by previous usage of technologies.

20 years of the first paper about the World Wide Web

Yesterday I attended "World Wide Web@20" at CERN in Geneva, where Web founders celebrated the 20th anniversary of the "Original proposal for a global hypertext project at CERN" (1989). Tim Berners-Lee, Ben Segal, Jean-François Groff, Robert Cailliau and others gave a set of talks about the history and the future of the Web.

The afternoon inside the CERN Globe was quite dense and I won't summarize every talk. I only wanted to write down the set of insights I collected there.

First about the history:

  • The main point of Berners-Lee was that "people just need to agree on a few simple things", which was quite a challenge in the context of the emergence of his invention: CERN. At the time, computers and IT in general were a zoo. There were many companies (and OSes), and network standards were different (each company had at least one, and CERN had its own... TCP/IP was just beginning to appear from the research environment but was opposed by European PTTs and industry).
  • What was interesting in the presentation was the context of CERN itself: computing was really one of the last important things in the political order (after physics and accelerators). Desktops, clusters and networking were also perceived as less important than big mainframes. And of course, TCP/IP was considered "special" and illegal until 1989! To some extent, there was an intriguing paradox: the Web was created at CERN, but from its weakest part and using "underground" resources.
  • The Web wasn't understood at first. This paradigm shift was not grasped; people did not get why they should put their data on this new database because they did not understand what it would lead to. The key invention was the URL, something you can write on a piece of paper and give to people.
  • Berners-Lee also pointed out the importance of the Lake Geneva area ("looking at the Mont Blanc this morning I realized how much you take it for granted when you live here") and how the environment was fruitful and important in terms of diversity (researchers coming from all around the World).
  • The basic set of things people should agree on was quite simple:
    • URIs
    • HTML was used for linking but it ended up being the documentation language of choice
    • Universality was the rule and it worked; an important step was CERN not charging royalties
  • It was also interesting to see how the innovation per se was more about integrating ideas and technologies than starting something from scratch. As shown during the demo of the first webserver and by the comments on the first research papers about the Web, it was all a matter of hyperlinks, an existing infrastructure (the internet), URIs and of course the absence of royalties from CERN. People just needed to agree on a few simple things, as TBL repeated.

The future part was very much targeted at how the W3C thinks about the Web's future. As Berners-Lee mentioned, "the Web is just the tip of the iceberg, new changes are gonna rock the boat even more, there are all kinds of things we've never imagined, we still have an agenda (as the W3C)". The first point he mentioned was the importance for governments and public bodies of releasing public data on the web and letting people run common initiatives such as collaborative data (à la OpenStreetMap). His second point was about the complexity of the Web today:

"the web is now different from back then: it's very large and complicated, a big scale-free system that emerged. You need a psychologist to know why people make links; there are different motivations. We need people who can study the use of the web. We call it Web Science. It's not just science and engineering; the goal is to understand the web and the role of web engineering"

And eventually, the last part was about the Semantic Web, the main direction Tim Berners-Lee (and the team of colleagues he invited on a panel) wanted to focus on for the end of the afternoon. From a foresight researcher's standpoint, it was quite intriguing to see that the discussion about the future of the Web was, above all, about this direction. Berners-Lee repeated that the Semantic Web will eventually happen: "when you put an exponential graph on an axis, it can continue for a long time, depending on the scale... you never know the tipping point but it will happen". The "Web of data", as they called it, was in the plan from the start ("if you want to understand what will happen, go read the first documents, it's not mystical tracts, it's full of clever ideas"): we now have links between documents, and links between documents and people, or between people themselves, are currently being addressed. Following this, a series of presentations about different initiatives dealt with:

  • grassroots efforts to extend the web with a data commons by publishing open-license datasets as linked data
  • the upcoming web presence of specific items (BBC programs)
  • machine-readable webpages about documents
  • machine-readable data about ourselves, as Google (social graph API), Yahoo and Russia's Yandex are doing: putting up big databases of this stuff. Once you have this, you can ask questions you could not ask previously. FOAF is an attempt in this direction too.
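The "web of data" idea in the last bullet boils down to machine-readable statements that can be queried once enough of them accumulate. A toy illustration with FOAF-style triples (the names and facts are invented, and a real system would of course use RDF stores rather than Python lists):

```python
# Each fact is a (subject, predicate, object) triple, as in RDF/FOAF.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "worksAt", "CERN"),
    ("carol", "worksAt", "CERN"),
]

def query(triples, s=None, p=None, o=None):
    """Return all triples matching the given pattern; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Who works at CERN?" -- a question no single document answers,
# but the aggregated data does.
cern_people = [s for s, _, _ in query(triples, p="worksAt", o="CERN")]
```

The point Berners-Lee kept making is exactly this: once statements about documents and people are machine-readable, questions that span many sources become a simple pattern match.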

The last part of the day was about the threats and limits:

  • Threats are more at the infrastructure level (net neutrality) than at the Web level: governments and institutions that want to snoop on and intercept web traffic.
  • A challenge is to lower the barrier to developing, deploying and accessing services for those devices, which should also be accessible to people with low literacy, in local languages (most of which are not on the web).
  • One of the aims is to help people find content and services, even in developing countries. It's also a way to empower people.
  • The dreamed endpoint would be to "move from a search engine to an answer engine: not docs but answers", and definitely not a Web "to fly through data in 3D".

Why do I blog this? The whole afternoon was quite refreshing, as it's always curious to see a bunch of old friends explaining and arguing about the history of what they created. It somewhat reminded me how the beginning of the Web was really shaped by:

  1. a certain vision of collaboration (facilitating content sharing between researchers), which can be traced back to Vannevar Bush's Memex, Ted Nelson's Xanadu, the promises of the Arpanet/Internet in the 70s (Licklider).
  2. the importance of openness: one can compare the Web's evolution with that of other systems (Nelson's work or Gopher). What would have happened if we had had a Gopher Web?
  3. a bunch of people interested in applying existing mechanisms such as hypertext and document formatting techniques (markup languages).

What is perhaps even more intriguing is the extent to which their vision of the future is still grounded in and shaped by their early vision and aims. Their objective, as it was twenty years ago, is still to "help people find content, documents and services": the early utopia of the Memex/Arpanet/Internet/Xanadu/the Web. The fact that most of the discussion revolved around the Semantic Web indicates how much weight these three elements carry for the future. Or, how the past frames the discussants' vision of the future.

Curiously enough, the discussion did not deal with the OTHER paths and usages the Web has taken. Of course they talked briefly about Web 2.0, because this meme is a new instantiation of their early vision, but they did not comment on other issues. An interesting symptom of this was their difficulty in going beyond the "access paradigm", as if the important thing was to allow "access", "answers" and linkage between documents (or people). This is not necessarily a critique; I was just impressed by how their original ideas were so persistent that they still shape their vision of the future.

The more comfortable we become with being stupid

(via) Sometimes you don't expect titles like this in the scientific press, but "The importance of stupidity in scientific research" by Martin Schwartz in the Journal of Cell Science is an intriguing read. Some excerpts I liked:

"we don't do a good enough job of teaching our students how to be productively stupid – that is, if we don't feel stupid it means we're not really trying. I'm not talking about `relative stupidity', in which the other students in the class actually read the material, think about it and ace the exam, whereas you don't. I'm also not talking about bright people who might be working in areas that don't match their talents.

Science involves confronting our `absolute stupidity'. That kind of stupidity is an existential fact, inherent in our efforts to push our way into the unknown. Preliminary and thesis exams have the right idea when the faculty committee pushes until the student starts getting the answers wrong or gives up and says, `I don't know'. The point of the exam isn't to see if the student gets all the answers right. If they do, it's the faculty who failed the exam. The point is to identify the student's weaknesses, partly to see where they need to invest some effort and partly to see whether the student's knowledge fails at a sufficiently high level that they are ready to take on a research project.

Productive stupidity means being ignorant by choice. Focusing on important questions puts us in the awkward position of being ignorant. (...) The more comfortable we become with being stupid, the deeper we will wade into the unknown and the more likely we are to make big discoveries."

Why do I blog this? although I fully agree with the importance of absolute stupidity in scientific research, I think it's also an important attitude in design research. Some ideas about this issue to be explored later.

Lift09 tag cloud

A tag cloud that my colleague John Elbing generated using the information Lift participants entered to describe their interests.
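Underneath, a cloud like this is just term frequencies mapped to type sizes. A rough sketch of the idea (the interest strings and the linear scaling are invented; Wordle's actual layout algorithm is more involved):

```python
from collections import Counter

# Hypothetical free-text interests entered by participants.
interests = [
    "interaction design", "mobile", "innovation",
    "design", "innovation", "mobile", "web",
]

# Count how often each word appears across all entries.
counts = Counter(word for entry in interests for word in entry.split())

def font_size(word, counts, min_px=12, max_px=48):
    """Scale a tag's type size linearly with its relative frequency."""
    top = max(counts.values())
    return min_px + (max_px - min_px) * counts[word] / top
```

The most frequent terms get the largest type, which is why "innovation" dominates the Lift cloud while rarer interests shrink toward the margins.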

Why do I blog this? As a program organizer, it's interesting to see the profiles and interests of Lift attendees. This wordle is a good way to get an impression of what defines the conference. It's surely messy but there are some clear vectors: innovation is the main track, with subthemes such as design/interaction design, technology and mobile. It's also notable that fewer and fewer people are interested in the Web ;)

Simplicity according to the Log Lady

Having recently got back to the Twin Peaks series, I was struck by this quote from the Log Lady in the seventeenth episode:

"Complications set in--yes, complications. How many times have we heard: 'it's simple'. Nothing is simple. We live in a world where nothing is simple. Each day, just when we think we have a handle on things, suddenly some new element is introduced and everything is complicated once again. (...) What is the secret? What is the secret to simplicity, to the pure and simple life? Are our appetites, our desires undermining us? Is the cart in front of the horse?"

Why do I blog this? just found this quite intriguing to think about, especially considering the whole discourse about simplicity and design.

'Epithetized' phenomena: "e-", "m-", "u-"

A follow-up on the internet idioms and letters I dealt with the other day. As Steve Woolgar wrote back in 2002 (in Virtual Society?: Technology, Cyberbole, Reality), the "e-" prefix is part of an 'epithetized' phenomenon. That is to say, the addition of this letter to almost any activity or institution signifies novelty (beyond electronics):

"While it is often unclear from these labels exactly how the application of the epithet actually modifies the activity/institution in question, a claim to novelty is usually central, especially at the hands of those promoting the new entity (...) The implication is that something new, different and (usually) better is happening"

Why do I blog this? gathering quotes while reading books about STS... the use of these letters to signify something new is decidedly fascinating. Besides, "m-" (standing for "mobile") and "u-" (for "ubiquitous") are of course the followers...