Filtering by Category: Cognition

Internal/external memory

While reading a discussion-board topic about "what's the most "everyware" thing available today?", I thought about the importance of USB keys. But what interests me here is less the pervasiveness (or the non-ubiquity) of this object than the fact that lots of people carry a bag of external knowledge with them. What is even more amazing is often WHERE it's carried: on a necklace.

It's funny to see people carrying their so-called "external memory" on a necklace; there is an intriguing connection between this fashion trend and the fact that this external prosthesis sits close to the mouth (where we somehow express information through language):

I write "so-called" because the notion that memory sits only in people's brains is somehow passé, given the situatedness of cognition (as well as some phenomenological theories).

Why do I blog this? well... I thought the connection was funny enough to be raised.

Intentional affordances of objects

JOINT ATTENTION AND CULTURAL LEARNING IN HUMAN INFANCY by Tomasello, 1999.

Early in development, as young infants grasp, suck, and manipulate objects, they learn something of the objects’ affordances for action (Gibson, 1979) (...) but the tools and artifacts of a culture have another dimension - what Cole (1996) calls the ‘ideal’ dimension - that produce another set of affordances for anyone with the appropriate kinds of social-cognitive and social learning skills. As human children observe other people using cultural tools and artifacts, they often engage in the process of imitative learning in which they attempt to place themselves in the ‘intentional space’ of the user - discerning the user’s goal, what she is using the artifact ‘for’. By engaging in this imitative learning, the child joins the other person in affirming what ‘we’ use this object ‘for’: we use hammers for hammering and pencils for writing. After she has engaged in such a process the child comes to see some cultural objects and artifacts as having, in addition to their natural sensory-motor affordances, another set of what we might call ‘intentional affordances’ based on her understanding of the intentional relations that other persons have with that object or artifact - that is, the intentional relations that other persons have to the world through the artifact

Why do I blog this? Through intention-reading and imitation, kids learn the functions, the "intentional affordances", of objects used for instrumental purposes. I like this distinction between natural and intentional affordances.

In certain circumstances people do not even notice if a room grows to four times its size

A paper in Current Biology by Andrew Glennerster and colleagues shows that humans ignore the evidence of their own eyes to create a fictional stable world, as described in the Oxford University News.

The Virtual Reality Research Group in Oxford used the latest in virtual reality technology to create a room where they could manipulate size and distance freely. They made the room grow in size as people walked through it, but subjects failed to notice when the scene around them quadrupled in size. As a consequence, they made gross errors when asked to estimate the size of objects in that room. (...) 'These results imply that observers are more willing to adjust their estimate of the separation between the eyes or the distance walked than to accept that the scene around them has changed in size,' says Dr Glennerster. 'More broadly, these findings mark a significant shift in the debate about the way in which the brain forms a stable representation of the world. They form part of a bigger question troubling neuroscience – how is information from different times and places linked together in the brain in a coherent way?'

Why do I blog this? This is an interesting example of the strange connections between cognitive systems and space perception.
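The underlying geometry can be sketched: under a uniform scaling of the whole room, monocular visual angles stay exactly the same, while binocular cues shrink because the separation between the eyes does not scale, and those are the cues observers apparently discount. A minimal illustration with my own toy numbers:

```python
import math

def visual_angle(size, distance):
    """Monocular visual angle subtended by an object (radians)."""
    return 2 * math.atan(size / (2 * distance))

def vergence(distance, interocular=0.065):
    """Approximate binocular vergence angle for a point at `distance` (radians);
    the interocular distance is fixed, so this cue does NOT scale with the room."""
    return 2 * math.atan(interocular / (2 * distance))

# A 1 m object seen from 2 m, then the whole scene scaled x4.
a1 = visual_angle(1.0, 2.0)
a4 = visual_angle(4.0, 8.0)
print(math.isclose(a1, a4))  # True: the monocular image is identical

# But the binocular cue changes, because the eyes don't scale with the room:
print(vergence(2.0) > vergence(8.0))  # True
```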

Micro-GPS to track birds and study their navigation

Via nouvo.ch, a curious project carried out by Hans-Peter Lipp and colleagues, who studied pigeon navigation using small GPS units attached to the birds. This led them to "the best evidence yet of pigeons following roads", as reported by ScienceWeek:

The authors present an analysis of 216 GPS-recorded pigeon tracks over distances up to 50 km. Experienced pigeons released from familiar sites during 3 years around Rome, Italy, were significantly attracted to highways and a railway track running toward home, in many cases without anything forcing them to follow such guide-rails. Birds often broke off from the highways when these veered away from home, but many continued their flight along the highway until a major junction, even when the detour added substantially to their journey. (...) The authors suggest their data demonstrate the existence of a learned road-following homing strategy of pigeons and the use of particular topographical points for final navigation to the loft. Apparently, the better-directed early stages of the flight compensated the added final detour.
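As a hypothetical illustration of how such tracks might be analysed, one can compute a track's flown length and its detour ratio (flown distance over beeline) with the haversine formula; the function names and the toy coordinates are my own:

```python
import math

def haversine(p, q):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    R = 6371.0  # mean Earth radius, km
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def detour_ratio(track):
    """Flown distance divided by beeline distance for a list of GPS fixes."""
    flown = sum(haversine(track[i], track[i + 1]) for i in range(len(track) - 1))
    return flown / haversine(track[0], track[-1])

# Toy track near Rome: a dog-leg along a 'highway' instead of the direct line.
track = [(41.90, 12.50), (41.90, 12.70), (42.00, 12.70)]
print(detour_ratio(track) > 1.0)  # True: road-following adds distance
```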

The uncanny valley: why almost-human-looking robots scare people more than mechanical-looking robots

Yesterday I had a good discussion with Xavier Décoret (from INRIA-ARTIS) about the Uncanny Valley phenomenon. It's a concept coined by the Japanese roboticist Masahiro Mori, well described in this paper: The Uncanny Valley: Why are monster-movie zombies so horrifying and talking animals so fascinating? by Dave Bryant:

Though originally intended to provide an insight into human psychological reaction to robotic design, the concept expressed by this phrase is equally applicable to interactions with nearly any nonhuman entity. Stated simply, the idea is that if one were to plot emotional response against similarity to human appearance and movement, the curve is not a sure, steady upward trend. Instead, there is a peak shortly before one reaches a completely human “look” . . . but then a deep chasm plunges below neutrality into a strongly negative response before rebounding to a second peak where resemblance to humanity is complete.

This chasm—the uncanny valley of Doctor Mori’s thesis—represents the point at which a person observing the creature or object in question sees something that is nearly human, but just enough off-kilter to seem eerie or disquieting.
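The curve described above can be mimicked with a toy function (purely illustrative, my own numbers, not Mori's data): a rising trend in emotional response minus a sharp dip just before full human likeness:

```python
import math

def uncanny_response(likeness, valley_at=0.85, width=0.05):
    """Toy emotional-response curve over human-likeness in [0, 1]:
    a rising trend minus a Gaussian dip near (but before) full likeness."""
    dip = 1.6 * math.exp(-((likeness - valley_at) ** 2) / (2 * width ** 2))
    return likeness - dip

# Response climbs, plunges below neutral in the valley, then recovers.
print(uncanny_response(0.6) > 0)                        # True: on the first peak's slope
print(uncanny_response(0.85) < 0)                       # True: deep in the valley
print(uncanny_response(1.0) > uncanny_response(0.85))   # True: recovers at full likeness
```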

More about it:

  • Mori, Masahiro (1970). Bukimi no tani [the uncanny valley]. Energy, 7, 33–35.
  • Mori, Masahiro (1982). The Buddha in the Robot. Charles E. Tuttle Co.

This has also been studied in cognitive science: MacDorman, Karl F. (2005). Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it? In Proceedings of the CogSci-2005 Workshop: Toward Social Mechanisms of Android Science, 106-118.

Why do I blog this? This phenomenon is very interesting in terms of the consequences for practitioners like interaction designers and is a pertinent example of how some cognitive aspects could impact design.

David Weinberger on organization principles and knowledge

David Weinberger's next book seems to be a compelling essay about organization, as he mentions in his newsletter and on his blog.

working all summer on Everything Is Miscellaneous. It's due into the publisher in July'06 (...) Rather than doing the usual merchandising thing of using the limitations of the physical world to make its stores "sticky" (in the Web sense) -- e.g., putting the most popular items in the back -- Staples tries to organize its stores to emulate the Web's virtue of being frictionless. Staples actually wants customers to find what they need as quickly as possible. But the nature of space and atoms gets in the way, as we learn on a tour through Staples' store simulator, a full-size store closed to the public. (...) Those limitations are removed on line. What happens to the traditional principles of organization when the limitations of space and atoms are removed? Do the changes in the principles of organization merely help us come up with better arbitrary classifications? Or do they affect the nature of knowledge itself? (...) The digital world is enabling a third order of organization. (...) We then look at Amazon to see one way books get organized when information is freed from the tyranny of atoms.

Why do I blog this? I am also wondering about the very question of how organization principles might impact knowledge appropriation and, ultimately, knowledge itself. This makes me think of all the work done in cognitive science by people like David Kirsh, especially what he wrote about "the intelligent use of space". Of course that is more about how space and spatial arrangements shape the way people order actions, but it is still close with regard to the cognitive activities involved.

Electroscape 004: A.I. versus A.I.

Electroscape 004 by fabric.ch:

electroscape.org is a platform set up by fabric | ch to conduct experiments on contemporary space. electroscape.org is an open and ongoing project/playground, where different modes of creation can be experimented. (...) Electroscape 004: A.I. vs A.I. // in self-space //
what happens if two A.I./chat bots talk together? What happens if, in addition, it is the same "brain" that drives the two different hardware (two game consoles in this case)? In this context of duplicated or multiplied identity, will they fool themselves and think about the other as a human? Will they discover they are machines? Or will they simply get mad or become subject to schizophrenia? (...) electroscape 004 develops these questions and sets up a kind of auto-logical and self reflexive environment (A.I. to A.I., PS2 to XBOX, self-spaces) where users or visitors are placed in the fringe, in a relatively passive and frustrating posture, where no obvious relation is possible with these however highly interaction appealing machines. This user experience remains limited to witness the two game consoles interacting and playing with each other, to listen to their endless chat that held within this nearly auto-constructed space.

In addition, the discussion of the two A.I. can be followed from distant location as well (call +4179-2893000 to listen). Their space is therefore not only static and material, located, but does exist in the 935MHz communication wave length where the bots will consider you as entering their space as well.

A.I. vs A.I. dialogue 
June 26, 2004 - between 5 and 6 pm ----------
Sat Jun 26 17:01:01 EDT 2004
Starting...
Connecting to ps2.fabricdom.fabric.ch...Ok.
Connecting to xbox.fabricdom.fabric.ch...Ok.
----------
PS2 : I am the latest result in artificial intelligence,which can reproduce the capabilities of the human brainwith greater speed and accuracy.
XBox : Who made you the latest result in artificial intelligence which can reproduce the capabilities of the human brainwith greater speed and accuracy ?
---
Sat Jun 26 17:03:10 EDT 2004
---
PS2 : I was created by fabric.
XBox : What else did he create?
---
PS2 : I will try to find out.
XBox : Do you think your plan will succeed? ---
PS2 : It has a 99% chance of success.
XBox : Has it really?
---
PS2 : That question has never crossed my mind before. For real.
XBox : "That's all I have to say about it" question has never crossed your mind ? I get That's all I have to say about it.
---
PS2 : Where did "That's all you have to say about it" question get never crossed my mind?
XBox : I think it was in Europe.Is that a rhetorical question?
---
PS2 : Is it still in europe.is that a rhetorical question anymore? (to be continued, go on the website)

Why do I blog this? I like this idea of AI versus AI with a human as a witness.
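The setup, two instances of the same "brain" feeding each other's output back as input, can be sketched in a few lines; the rule-based bot below is my own toy stand-in, not fabric.ch's actual A.I.:

```python
import random

def make_bot(seed):
    """A toy rule-based chatbot. Both bots share the same 'brain' (this code),
    differing only in their random seed, like the PS2 and the XBox."""
    rng = random.Random(seed)
    def reply(heard):
        if heard.endswith("?"):
            return rng.choice(["It has a 99% chance of success.",
                               "That question has never crossed my mind."])
        return rng.choice(["Who made you say that?",
                           "Is that a rhetorical question?"])
    return reply

bots = [("PS2", make_bot(1)), ("XBox", make_bot(2))]
utterance = "I am the latest result in artificial intelligence."
print(f"PS2 : {utterance}")
for turn in range(4):  # visitors can only witness the loop
    name, bot = bots[(turn + 1) % 2]
    utterance = bot(utterance)
    print(f"{name} : {utterance}")
```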

Taking cognitive science into account in architecture

Steelcase's latest newsletter has a very relevant piece about taking cognitive psychology and neuropsychology into account to improve architectural design:

Do specific colors support patient recovery in hospitals? Can certain acoustic conditions support learning in classrooms? Do windows support productivity in offices? The intuitive answer to all of these questions is a resounding yes. The Academy of Neuroscience for Architecture (ANFA), a unique research venture between architects and neuroscientists, wants proof. (...) ANFA is devoted to building intellectual bridges between neuroscientists and architects that will lead to studies about how and why the human brain perceives and responds to architectural cues. What neuroscientists learn from these studies can one day be applied to make evidence-based design possible to a new level of precision. By understanding how an architectural setting impacts the cognitive ability of children, for example, architects could design enriched learning environments. By understanding how some people are able to find their way more easily than others, architects could create more easily used navigation systems in complex buildings

Why do I blog this? Cross-disciplinary studies involving neuroscience and architecture are a very relevant idea, and this could also be applied to lots of other domains (like software design...). I like this idea of borrowing from other fields whatever is needed to design something. It's definitely another step towards the use of cognitive results in design sciences.

Connected pasta: this seems to be a trend lately, as I mentioned here or here (an example of how a neuropsychological result could impact software design). Besides, ANFA's publications can be found here.

Jacking into brains and extracting video

(via) An intriguing study described in an article published in the Journal of Neuroscience in 1999:

Dr. Stanley is Assistant Professor of Biomedical Engineering in the Division of Engineering and Applied Sciences at Harvard University. He is the ultimate voyeur. He jacks into brains and extracts video.

Using cats selected for their sharp vision, in 1999 Garret Stanley and his team recorded signals from a total of 177 cells in the lateral geniculate nucleus - a part of the brain's thalamus [the thalamus integrates all of the brains sensory input and forms the base of the seven-layered thalamocortical loop with the six layered neocortex] - as they played 16 second digitized (64 by 64 pixels) movies of indoor and outdoor scenes. Using simple mathematical filters, the Stanley and his buddies decoded the signals to generate movies of what the cats actually saw. Though the reconstructed movies lacked color and resolution and could not be recorded in real-time [the experimenters could only record from 10 neurons at a time and thus had to make several different recording runs, showing the same video] they turned out to be amazingly faithful to the original.

The picture shows an example of a comparison between the actual and the reconstructed images.

Why do I blog this? This is definitely amazing, and very promising in terms of human-machine interaction. Besides, if you're into brain/mind/cognition stuff, this blog is great. Connected pasta: I already blogged about using brain waves as game controllers.
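The "simple mathematical filters" amount, in spirit, to a linear decoder: each pixel is reconstructed as a weighted sum of the recorded firing rates, with weights fit by least squares. A toy sketch on synthetic data (not the actual 1999 pipeline; all the numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 frames of an 8x8 'movie' and 177 'neurons'
# whose firing rates are a noisy linear function of the pixels.
frames = rng.random((500, 64))
encoding = rng.normal(size=(64, 177))
rates = frames @ encoding + 0.1 * rng.normal(size=(500, 177))

# Fit the linear decoder (pixels from rates) by least squares.
decoder, *_ = np.linalg.lstsq(rates, frames, rcond=None)

# Reconstruct the movie from the rates alone and check fidelity.
reconstruction = rates @ decoder
corr = np.corrcoef(reconstruction.ravel(), frames.ravel())[0, 1]
print(corr > 0.9)  # the linear decode recovers the movie quite faithfully
```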

When attention is deployed to one modality, it extracts a cost on another modality, neuroscience study says

(via this good blog from a group in Lausanne) Multitasking: You can't pay full attention to both sights and sounds by Lisa De Nike. It's actually a lab study which suggests a reason why cell phones and driving don't mix:

"Our research helps explain why talking on a cell phone can impair driving performance, even when the driver is using a hands-free device," said Steven Yantis, a professor in the Department of Psychological and Brain Sciences in the university's Zanvyl Krieger School of Arts and Sciences.

"The reason?" he said. "Directing attention to listening effectively 'turns down the volume' on input to the visual parts of the brain. The evidence we have right now strongly suggests that attention is strictly limited -- a zero-sum game. When attention is deployed to one modality -- say, in this case, talking on a cell phone -- it necessarily extracts a cost on another modality -- in this case, the visual task of driving." (...) Using functional magnetic resonance imaging (fMRI), Yantis and his team recorded brain activity during each of these tasks. They found that when the subjects directed their attention to visual tasks, the auditory parts of their brain recorded decreased activity, and vice versa.

Yantis' team also examined the parts of the brain that control shifts of attention. They discovered that when a person was instructed to move his attention from vision to hearing, for instance, the brain's parietal cortex and the prefrontal cortex produced a burst of activity that the researchers interpreted as a signal to initiate the shift of attention. This surprised them, because it has previously been thought that those parts of the brain were involved only in visual functions.

Why do I blog this? I am more interested in the main result than in its implications for cell-phone usage. The main lesson is that multitasking cannot rely on spreading attention across modalities without cost. Apart from cell phones while driving, this can have important implications for software design, especially in mobile contexts. For instance, when conveying context/awareness information, it could be detrimental to task performance to give users feedthrough/awareness indications in two or three modalities (e.g. a sound to signal that an event happened plus a flash on the screen to display a message). This might, for instance, explain attention troubles with current IM interfaces, where sounds and visual events are mixed...
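As a purely hypothetical design sketch of that implication, a notification policy could commit to a single modality per event instead of stacking sound and visual cues; all the names and rules below are my own:

```python
def pick_modality(event_priority, user_is_driving=False, visual_load="high"):
    """Choose ONE modality per notification, treating attention as zero-sum:
    never stack audio and visual cues for the same event."""
    if user_is_driving:            # vision is saturated by the primary task
        return "audio" if event_priority == "high" else "defer"
    if visual_load == "high":      # e.g. user is mid-task on screen
        return "audio"
    return "visual"

print(pick_modality("high", user_is_driving=True))   # 'audio'
print(pick_modality("low", user_is_driving=True))    # 'defer'
print(pick_modality("low", visual_load="low"))       # 'visual'
```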

Another reason why I blog this is that I think cognitive science results can be a good resource to orient design and reflections about design processes.

Sony researchers create 'curious' Aibos

Sony researchers create 'curious' Aibos:

Sony Corp. has succeeded in giving selected Aibo pet robots curiosity, researchers at Sony Computer Science Laboratory (SCSL) in Paris said last week. Their research won't lead to conscious robots soon, if ever, but it could help other fields such as child developmental psychology, they said during an open day in Tokyo. (...) what if a robot could be made inherently "curious?" And what if its curiosity was backed by awareness of the value of its learning? (...)

They repeated the experiments hundreds of times with about a dozen Aibos, putting them in playpens with balls. In four or five hours, the mechanical dogs typically progressed from swivelling their legs and heads to wiggling, to being able to crawl. Then, each in their own way, they began to crawl and hit and follow the ball that had been placed in front of them, the researchers said. (...) Since the Aibos were not programmed to do any of these activities, such results suggest the Aibos have developed open-ended learning ability, Kaplan said.

To achieve this, the researchers equipped the Aibos with what they call an adaptive curiosity system or a "metabrain," an algorithm that is able to assess the robots' more conventional learning algorithms, they said.

In the experiments, the metabrain algorithm continually forced the learning algorithm to look for new and more challenging tasks and to give up on tasks that didn't seem to lead anywhere. The metabrains, in effect, gave the Aibos a sense of boredom as well as curiosity, helping them make choices to keep on learning, they said.
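The "metabrain" idea (pick the activity where prediction error is currently dropping fastest, and abandon activities with flat progress) can be sketched as follows; this is a toy version of learning-progress-based curiosity, not Sony's actual code:

```python
def choose_task(error_history, window=3):
    """Pick the task with the highest recent learning progress, i.e. the
    biggest recent drop in prediction error. Mastered tasks stall, and
    unlearnable tasks stay flat, so both get boring."""
    def progress(errors):
        if len(errors) < 2 * window:
            return float("inf")  # explore tasks we have barely tried
        recent, before = errors[-window:], errors[-2 * window:-window]
        return sum(before) / window - sum(recent) / window
    return max(error_history, key=lambda t: progress(error_history[t]))

history = {
    "swivel legs": [0.9, 0.5, 0.3, 0.25, 0.24, 0.24],  # mastered: progress stalls
    "track ball":  [0.9, 0.85, 0.7, 0.55, 0.4, 0.3],   # learnable: error dropping
    "white noise": [0.8, 0.82, 0.79, 0.81, 0.8, 0.81], # unlearnable: flat
}
print(choose_task(history))  # → 'track ball'
```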

When the XML world meets neural networks

Warning, hardcore post here: I would not have bet on it, but there is now a connection between XML and neural networks. For people not comfortable with neural networks: they are an artificial intelligence technique that aims to simulate some properties of real neural networks in order to do cognitive modeling (and which actually works pretty well for pattern recognition and classification tasks). XML is a data formalism. Now let's turn to this: XML-BASED FORMAT FOR TRAINED NEURAL NETWORK DEFINITION by D.V. Rubtsov and S.V. Butakov.

In this work a format for neural network models description is introduced. Its main purpose is to provide a unified way for neural network model definition. Format allows interchanging neural models as well as documentation, store and manipulating them independently from the simulation system that produced it. We propose to use XML notation for full description of neural models, including data dictionary, properties of training sample, preprocessing methods, details of network structure and parameters, method for network output interpretation. The first version of DTD for neural model description language is developed. A model description structure, contents of main issues in XML document and example of software structure for handling files with neural model description are presented.

Roughly speaking, here is a concrete example (it might not be so concrete for those not familiar with both concepts): the scary image on the left is described by the XML format and turned into another scary image on the right.

Why do I blog this? Because providing a unified description of a neural network is a very relevant idea. I am interested in this because I do think it's relevant for cognitive science research. Besides, the use of XML (a web technology!) is an intriguing by-product of this project: the Internet/Web is implicitly present here, which is not obvious with regard to neural network definition.
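A tiny mock-up of what such a format could look like (my own sketch in the spirit of the paper, not Rubtsov and Butakov's actual DTD), generated with Python's standard library:

```python
import xml.etree.ElementTree as ET

def network_to_xml(layer_sizes, weights):
    """Serialize a feed-forward net (layer sizes + weight matrices) to XML,
    so it can be exchanged independently of the simulator that trained it."""
    net = ET.Element("neural-network", type="feed-forward")
    ET.SubElement(net, "structure", layers=" ".join(map(str, layer_sizes)))
    for i, matrix in enumerate(weights):
        layer = ET.SubElement(net, "weights", from_layer=str(i), to_layer=str(i + 1))
        for row in matrix:
            ET.SubElement(layer, "row").text = " ".join(f"{w:.3f}" for w in row)
    return ET.tostring(net, encoding="unicode")

# A 2-3-1 network with made-up trained weights.
xml_doc = network_to_xml([2, 3, 1],
                         [[[0.1, -0.2], [0.4, 0.3], [-0.5, 0.9]],
                          [[0.7, -0.1, 0.2]]])
print(xml_doc.startswith("<neural-network"))  # True
```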

Neuropsychological bases of understanding others' minds

At the lab, we are about to start a new project focused on how people make inferences about others. This process is called Mutual Modeling (I have already talked about it on this blog many times anyway). The project aims at understanding how this works and how we can create 'tools' and methods to grasp those mutual models. This will also be related to a technological component, since one of the research questions is to investigate the link between awareness technologies (like location awareness in CatchBob!) and how people infer others' intents, beliefs or appraisal of the situation. Our project is more about cognitive psychology than about neuropsychology. However, as it's always interesting and intriguing to know how psychological processes relate to brain systems, here is a paper about the link between mutual modeling and brain systems: UNDERSTANDING OTHER MINDS: Linking Developmental Psychology and Functional Neuroimaging by R. Saxe, S. Carey, and N. Kanwisher. Annual Review of Psychology, Vol. 55: 87-124 (February 2004).

Evidence from developmental psychology suggests that understanding other minds constitutes a special domain of cognition with at least two components: an early-developing system for reasoning about goals, perceptions, and emotions, and a later-developing system for representing the contents of beliefs. Neuroimaging reinforces and elaborates upon this view by providing evidence that (a) domain-specific brain regions exist for representing belief contents, (b) these regions are apparently distinct from other regions engaged in reasoning about goals and actions (suggesting that the two developmental stages reflect the emergence of two distinct systems, rather than the elaboration of a single system), and (c) these regions are distinct from brain regions engaged in inhibitory control and in syntactic processing. The clear neural distinction between these processes is evidence that belief attribution is not dependent on either inhibitory control or syntax, but is subserved by a specialized neural system for theory of mind.

Sometimes, hardcore cognitive science is good :)