Machine Mirabilia: observing the digital menagerie

About

Machine Mirabilia is a blog/project about entities generally found in digital machinery. From computer viruses to software agents run by Artificial Intelligence technologies, from digital monsters à la Pokémon to smartphone bugs, this anthropological research project explores what it means to live with such a diversity of creatures.

Codex Virtualis: new-to-nature speculative lifeforms

Codex Virtualis_ is a project by interspecifics that I saw at the Biennale de l'Image en Mouvement 2024 in Geneva this afternoon. Here's how they define it on their website:

an artistic research framework oriented towards the generation of an evolving taxonomic collection of hybrid bacterial-AI organisms. With a subtle echo to the endosymbiotic theory, we propose a symbolic formulation of a style transfer machine learning environment as a host, in which to merge bacterial/archaea time-lapse microscopy footage along with multidimensional cellular automata models as endosymbionts, all under the orchestration of an autonomous generative non-adversarial network architecture. We aim, as a result, to encounter novel algorithmically-driven aesthetic representations, tagged with a unique morphotype and genotype-like encoding, and articulated around a speculative narrative encompassing unconventional origins of life on earth and elsewhere.

interspecifics
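
As a side note, one ingredient mentioned above, the cellular automaton, is easy to sketch. The minimal one-dimensional example below (elementary rule 110, in Python) is only my own illustration of the kind of rule-driven update such models rely on; it has nothing to do with the project's actual multidimensional automata or its machine learning pipeline.

# A minimal one-dimensional cellular automaton (elementary rule 110).
# Each cell is updated from its left/center/right neighbors according to the rule number.
RULE = 110
WIDTH, STEPS = 64, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # start with a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> ((cells[(i - 1) % WIDTH] << 2) | (cells[i] << 1) | cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]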

Why do I blog this? This is typically the kind of revamping of artificial life systems that emerges these days. As they say in the documentation, the authors "seek to activate a self-generative system, Artificial Intelligence, and algorithmic approximation for generating virtual organism models based on morphology and rule modeling"... which is the kind of combo we have these days to generate new kinds of entities. Not exactly monsters, not strictly living beings either.

Pokédex as an electronic compendium of monsters

Found at the flea market, this Pokédex replica was made back in 1998 by Tiger Electronics and Hasbro. According to Bulbagarden, it included information about Generation I Pokémon ("except for Mew, who was not yet revealed to the public at the time the toy was released"). Using a number pad as well as an alphabetical keyboard, users could search Pokémon by name or page, compile a list of their favorite or captured creatures, or search them by height, weight, strength, or type. There's also a clock and a calculator, and users could even add a password to protect their lists. Rumor has it that Nintendo was not really happy about this device, arguing that it could hurt Game Boy sales.
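
Seen from today, the device is basically a small searchable record collection. A hypothetical Python sketch of that kind of lookup (the entries, fields and function below are my own invention, not the toy's actual data model) could look like this:

# A toy compendium of entries searchable by name or by type,
# loosely mimicking the kind of queries the Pokédex replica offered.
from dataclasses import dataclass

@dataclass
class Entry:
    number: int
    name: str
    kind: str
    height_m: float
    weight_kg: float

COMPENDIUM = [
    Entry(1, "Bulbasaur", "Grass", 0.7, 6.9),
    Entry(4, "Charmander", "Fire", 0.6, 8.5),
    Entry(7, "Squirtle", "Water", 0.5, 9.0),
]

def search(name=None, kind=None):
    # Return every entry matching the given name and/or type.
    return [e for e in COMPENDIUM
            if (name is None or e.name.lower() == name.lower())
            and (kind is None or e.kind.lower() == kind.lower())]

print(search(kind="Fire"))  # -> [Entry(number=4, name='Charmander', ...)]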

Why do I blog this? Beyond the cutesy plastic aesthetic that was typical of the 1990s, this device is interesting as it can be seen as an electronic/new media equivalent to bestiaries. The logic of compilation, with drawings and little text, reminds me of the material one could find in medieval compendia (without the moral lessons that were common there). I also find interesting the way players would interact with it alongside their game console, as a sort of notebook to compile the monsters they found... a sort of naturalist's notebook. To some extent, the Pokédex replays the transition between the intellectual traditions of the Middle Ages and those of Modernity. In red plastic this time.

Mapping Our Digital Menagerie

The first paper emerging from this research project was published today. It's called Mapping Our Digital Menagerie: A Monster Manual for the Megadungeon and belongs to a special issue of "magazèn" (a digital humanities journal) that explores how this spatial metaphor, inspired by role-playing games, can help grasp the complexity of contemporary digital ecosystems.

My piece revisits the monster manual format of role-playing games to describe the set of digital entities and creatures inhabiting the complex media ecosystems that are accessed through computers, smartphones and gaming devices. Thanks to Paolo Berti, Gabriele de Seta and Stefania de Vincentis for including it in this issue, and thanks to Justin Pickard for proofreading and constructive criticism!

A ghost in a shell

Looking into ghost-like entities one could find in digital machinery, I can't help thinking about Masamune Shirow's work. My first encounter with his work was around 1990, reading about Appleseed on a Minitel server (3615 AKELA). Two years later, I found Ghost in the Shell (the manga) and Intron Depot (an art book) in a shop in Lyon and voilà, that was my first foray into this kind of Japanese cyberpunk, sort of a visual follow-up to the Sterling/Gibson novels I used to read around the same time.

While the title suggests a spectral presence in machines, it's actually an homage to Arthur Koestler's The Ghost in the Machine... which in turn refers to Oxford philosopher Gilbert Ryle's anti-Cartesian view that the human mind is not an independent non-material entity, temporarily inhabiting and governing the body. This topic is addressed by Mirt Komel in a paper about the philosophy of the Ghost in the Shell franchise:

In the cyberpunk world we are immerging in the very word “ghost” denotes an individual’s consciousness that differentiates a human from a robot. Even if someone replaces his own biological body with a fully cyborgized prosthetic one, including a cyberbrain as the locus of the ghost, one can still be considered human as long as one retain one’s own ghost. Ghost-dubbing, that is, duplicating a ghost is nearly impossible, and even if successful the copy is always an inferior version of the original (cf. Shirow, 1997). One of the implications of such a conception of “ghost” addresses the question of human’s consciousness’ originality in contrast to its bodily banality, which can be biologically or artificially reproduced. The implied philosophical question is, despite its futuristic imagery, actually a very old one and commonly known as the “paradox of Theseus’ ship”, as most notably recorded by Plutarch in his biography of Theseus (cf. Plutarch, 1914: 1–88). The paradox as such was addressed in different manners by various philosophers preceding or succeeding Plutarch, from Heraclitus and Plato to Hobbes and Locke. Regardless of its many variants the question remains always the same: does a thing remain the same if we change one by one all of its parts? Or to articulate it in a cyberpunk manner: does a human remain the same if we change all his body parts into prosthetics?

Komel, M. (2016). The ghost outside its shell: Revisiting the philosophy of Ghost in the Shell, 53, 920–928.

OLIVER "OnLine Interactive Vicarious Expediter and Responder"

Found in a 1968 paper by J.C.R. Licklider and Robert W. Taylor, this entity called "OLIVER" corresponds to what we would nowadays call a computer assistant:

A very important part of each man's interaction with his on-line community will be mediated by his OLIVER. The acronym OLIVER honors Oliver Selfridge, originator of the concept. An OLIVER is, or will be when there is one, an 'on-line interactive vicarious expediter and responder,' a complex of computer programs and data that resides within the network and acts on behalf of its principal, taking care of many minor matters that do not require his personal attention and buffering him from the demanding world. 'You are describing a secretary,' you will say. But no! Secretaries will have OLIVERS. At your command, your OLIVER will take notes (or refrain from taking notes) on what you do, what you read, what you buy and where you buy it. It will know who your friends are, your mere acquaintances. It will know your value structure, who is prestigious in your eyes, for whom you will do what with what priority, and who can have access to which of your personal files. It will know your organization's rules pertaining to proprietary information and the government's rules relating to security classification.

Licklider, J.C. & Taylor, R. (1968). The Computer as a Communication Device. Science and Technology, 4.

Loab: haunted AI or creepypasta? or cryptid?

An internet celebrity of 2022, Loab is a character that artist and writer Steph Maj Swanson claims to have run across using a text-to-image AI generator. Given the look of that person, some folks almost immediately wondered about the possibility of haunted presences in the latent space of AI models:

is this AI model truly haunted, or is Loab just a random confluence of images that happens to come up in various strange technical circumstances? Surely it must be the latter unless you believe spirits can inhabit data structures, but it’s more than a simple creepy image — it’s an indication that what passes for a brain in an AI is deeper and creepier than we might otherwise have imagined. Loab was discovered — encountered? summoned? — by a musician and artist who goes by Supercomposite on Twitter (this article originally used her name but she said she preferred to use her handle for personal reasons, so it has been substituted throughout). She explained the Loab phenomenon in a thread that achieved a large amount of attention for a random creepy AI thing, something there is no shortage of on the platform, suggesting it struck a chord (minor key, no doubt).

The interesting thing with the Loab case is the fact that it's supposed to be caused by "negative prompting". As explained in the TechCrunch article:

“if you prompt the AI for an image of ‘a face,’ you’ll end up somewhere in the middle of the region that has all of the images of faces and get an image of a kind of unremarkable average face,” she said. With a more specific prompt, you’ll find yourself among the frowning faces, or faces in profile, and so on. “But with a negatively weighted prompt, you do the opposite: You run as far away from that concept as possible.”

But what’s the opposite of “face”? Is it the feet? Is it the back of the head? Something faceless, like a pencil? While we can argue it amongst ourselves, in a machine learning model it was decided during the process of training, meaning however visual and linguistic concepts got encoded into its memory, they can be navigated consistently — even if they may be somewhat arbitrary. (...) Over and over she submitted this negative prompt, and over and over the model produced this woman, with bloody, cut or unhealthily red cheeks and a haunting, otherworldly look. Somehow, this woman — whom Supercomposite named “Loab” for the text that appears in the top-right image there — reliably is the AI model’s best guess for the most distant possible concept from a logo featuring nonsense words. (...) Negative prompts don’t always produce horrors, let alone so reliably. Anyone who has played with these image models will tell you it can actually be quite difficult to get consistent results for even very straightforward prompts. Put in one for “a robot standing in a field” four or 40 times and you may get as many different takes on the concept, some hardly recognizable as robots or fields. But Loab appears consistently with this specific negative prompt, to the point where it feels like an incantation out of an old urban legend."
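
For readers who want to see what negative prompting looks like in practice, here is a minimal sketch using the Hugging Face diffusers library. This is not the (undisclosed) tool or model Swanson used, and the checkpoint name is just an assumption; it only illustrates the general mechanism of steering a generation away from a concept.

# Minimal sketch of negative prompting with a Stable Diffusion pipeline.
# The checkpoint name is an assumption; any text-to-image diffusion model would do.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a robot standing in a field",     # concept to move toward
    negative_prompt="logo, text, watermark",  # concept to move away from
).images[0]
image.save("robot.png")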

Why do I blog this? First off because it's another kind of entity to be added to the mirabilia list. Besides this, it's also because I'm less interested in whether there's something (someone) haunting the latent space of these generative models than in the way this kind of story emerges and circulates. It's not exactly a creepypasta, but it's close. It might rather be an AI-generated cryptid (cryptids being animals "discovered" by cryptozoologists who believe they exist even though their existence is disputed or unsubstantiated by scientific research).

The 128-language Ouroboros quine

In a recent email exchange, André mentioned this intriguing entity named the "128-language Ouroboros quine". Created by Yusuke Endoh, it is "a Ruby program that generates a Rust program that generates a Scala program that generates …(through 128 languages in total)… a REXX program that generates the original Ruby code again."

Organized in alphabetical order, the chain of transitions from one language to the next forms a "quine", which can be defined as a program that prints its own source code. The "ouroboros" metaphor (a circular symbol that depicts a snake or dragon devouring its own tail, used especially to represent the eternal cycle of destruction and rebirth) is also apt, as it captures the circular character of the performance.
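
For readers who have never met one, here is a minimal quine, in Python rather than in any of the 128 languages of Endoh's chain: running it prints exactly its own source code.

# The two lines below reproduce themselves exactly when run (a minimal quine).
s = 's = %r\nprint(s %% s)'
print(s % s)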

Why do I blog this? Even though this "Ouroboros quine" may not exactly qualify as a digital entity per se, I find it curious enough to consider it as a candidate for the menagerie; perhaps closer to code poetry and esolang performance.

On mimesis in artificial life

Reading Stefan Helmreich's ethnography of the "Artificial Life" community from the 1990s, I ran across this interesting paragraph that discusses the role of mimesis in digital entities.

Commenting on Karl Sims' limping creatures, Helmreich addresses the role played by the clumsy behavior of the 3D shapes depicted in the original video/paper:

Because the simulated physics and creatures were programmed together, most behaviors looked realistic and purposeful. But because Sims occasionally made errors in modeling physics, sometimes behaviors came off completely wrong, as when some creatures bounced out of the world because of his mistakes in modeling gravity. In a brilliant dash of showmanship, Sims showed videotapes of malfunctioning creatures, explaining that creatures were "exploiting" bugs in the program and were "making fun of [his] physics." Sims's ventriloquism delighted the audience and added a sense that his creatures were not only mimicking familiar behaviors but were also mimicking behaviors associated with the playfulness of some life-forms, a playfulness perhaps most readily compared with that of mammalian babies.

(Helmreich, 1998, p. 134)

He then discusses the role of mimesis, pointing to Michael Taussig's book about this notion, and its importance in robotic/artificial life:

Taussig (1993) has argued that mimesis, the ability to copy behaviors, is a faculty often seen as a hallmark of the primitive, as words like aping and parroting attest. And when things considered primitive copycat more advanced behaviors, when dogs dance, birds sing, or apes sign, we think of them as cute. But things are only cute when they have relatively little power. When robots mimic behaviors that threaten humans, they are not cute. The cuteness of Artificial Life creatures is produced by and produces a sense that they are primitive entities, a sense that they are capable of miming, perhaps even of parodying or burlesquing, advanced behavior, a sign taken to demonstrate not that they are not alive but only that they are simpler forms of life. The laughter at Artificial Life is the spark of life for these simulated creatures. Is it live, or is it mimesis?

(Helmreich, 1998, p. 134)

Why do I blog this? Well, I've always been fascinated by Karl Sims' creatures, but something was intriguing for me in the way they felt playful and odd at the same time (Helmreich again: "his creatures were not only mimicking familiar behaviors but were also mimicking behaviors associated with the playfulness of some life-forms, a playfulness perhaps most readily compared with that of mammalian babies."). His analysis here is relevant, in the sense that it highlights a certain degree of ambivalence.

Computer and networked demons/daemons

Fenwick McKelvey's book about daemons has stayed on my desk for ages. It's been part of a tsundoku about tales, myths and folklore, next to Jacques Le Goff, the catalog of an exhibit about ghosts in the digital age, AD&D's Monstrous Compendium, as well as Louis Dumont's opus about the Tarasque. I spent a few days perusing McKelvey's text, discovering more about networked daemons, different from the ones I'm used to on my laptop computer. The whole piece was fascinating and of particular interest for the Machine Mirabilia project, both in terms of intellectual framing and factual elements about demons/daemons.

The book starts off with this idea that “daemons animate the routers, switches, and gateways of the internet’s infrastructure, as well as our personal computers and other interfaces. These computers need daemons to connect to the global internet, and they are met online by a growing pandaemonium of intermediaries that specialize in better ways to handle packets.” Or, as explained later in the introduction, “internet daemons, in my definition, are the software programs that control the data flows in the internet’s infrastructures (…) vital to understanding the internet’s backbone. Daemons function as the background for the material, symbolic, cultural, or communicative processes happening online” (p.7)

Focusing on "the internet daemons responsible for data flows”, McKelvey investigates to what extent these daemons "control the internet”, favoring certain kinds of choices and optimizations… and eventually affecting how we communicate and participate in contemporary culture. For the author, these entities named with a supernaturral connotaiton  “offer a way to embrace the internet as a volatile, living mixture and to think about infrastructure without overstating the “fixed stability of materiality.” Daemons belong to the distributed agency that enables internet communication, the millions of different programs running from client to server that enable a packet to be transmitted."

While the whole book is fascinating, the part that caught my attention is the first chapter, which describes how demons become associated with computers. Or, said differently, how “the demon made a leap from being an imaginary figure to being a real program running in an operating system.” McKelvey discusses at length the different steps of such circulation.

Firstly, he reminds us of Maxwell's thought experiment:

"In the nineteenth century, Maxwell, a seminal figure in physics, engineering, and control theory, conjured a demon into the sciences. In his book on thermodynamics, Theory of Heat, published in 1871, he paused to consider a potential refutation of its second law, which states that, generally speaking, entropy increases over time. Maybe the law could be broken, Maxwell speculated, “if we conceive a being whose faculties are so sharpened that he can follow every molecule in its course, such a being, whose attributes are still as essentially finite as our own, would be able to do what is at present impossible to us.” In Maxwell’s thought experiment, this being acted as a gatekeeper between two chambers containing molecules of gas, opening and closing a door to selectively control the transmission of molecules between chambers. By doing so, the demon isolated hot molecules in one chamber and cold molecules in the other, raising the temperature in the first chamber and lowering it in the second. This redistribution of energy toward an extreme ordered state violated the second law of thermodynamics, which predicted that the two chambers would revert back to a random distribution of molecules (or what was later called “heat death”)."

A second important step in this circulation is the demon's role in "provoking reflections on the nature of communication":

Information became a theoretical concept out of the refutation of the daemon. As Wiener explained, for Maxwell’s demon “to act, it must receive information from approaching particles concerning their velocity and point of impact on the wall”. Information about the molecules allowed the demon to control their transmission in a closed system, creating a self-regulating system. In Maxwell’s thought experiment, the demon appears to be able to acquire information about the molecules’ movement without any cost. How could a demon gain this information? Wiener argued that “information must be carried by some physical process, say some form of radiation.” The demon could not operate because “there is an inevitable hidden entropy cost in the acquisition of information needed to run the device.” The energy required to transfer information between molecule and demon would eventually, according to Wiener, cause the demon to malfunction.

(…)

Wiener wrote, “there is no reason to suppose that Maxwell demons do not in fact exist.” If demons might be found naturally, could they also be built artificially? In other words, being open to the existence of Maxwell’s demon allowed for the possibility of building a real machine designed for generalized control and information processing. Shannon, while he imagined computers playing chess, also suggested that a thinking machine could “handle routing of telephone calls based on the individual circumstances rather than by fixed patterns.” Thus, Maxwell’s demon made the transition from inspiring the idea of information to providing conceptual fuel for imagining the infrastructures of early computing."

The third step is closer to us: Maxwell’s demon inspired programmers as they built control mechanisms for their new digital operating systems:

"Time-sharing developed as a more cost-effective way to achieve the online interaction of real-time computing. Time-sharing computers offered a cheaper solution by creating systems that shared one big and expensive machine among multiple users.  (…) programmers at the center [at the Massachusetts Institute of Technology (MIT)] developed the CTSS operating system on their own. CTSS worked to create a communication network out of this shared infrastructure. The technical work of CTSS attempted to overcome the communication bottleneck imposed by the system’s central processor. (…) How did CTSS manage the demands of its multiple users? (…) CTSS relied on the Supervisor program, which managed the over- all data flows in the operating system. It remained active at all times, (…) The Supervisor greatly resembles Maxwell’s demon, and it exemplifies the kind of program through which the metaphor is actualized in computing. Where one manages the flows of molecules, the other handles jobs. One works in a closed system, the other in an operating system. Moreover, these similarities are not accidental. Researchers at the project began to refer to programs as demons or daemons in a direct allusion to Maxwell."

Et voilà, that's how we got daemons ("The change in spelling from “demon” to “daemon” was intended to avoid some of its older, religious connotations")... which have populated our machines ever since, designating the programs running in the background of our computers and keeping the system in working order.
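
Those background programs still share a recognizable shape today. A minimal sketch of the classic Unix double-fork idiom (my own illustration, not an example from McKelvey's book) shows how a process detaches itself and keeps working quietly:

# Minimal Unix daemonization sketch: detach from the terminal, keep running in the background.
import os
import sys
import time

def daemonize():
    if os.fork() > 0:   # first fork: the parent exits, the child lives on
        sys.exit(0)
    os.setsid()         # start a new session, detach from the controlling terminal
    if os.fork() > 0:   # second fork: make sure we can never reacquire a terminal
        sys.exit(0)
    os.chdir("/")
    # A real daemon would also redirect stdin/stdout/stderr and write a pid file here.

if __name__ == "__main__":
    daemonize()
    while True:         # the daemon's quiet, perpetual background work
        time.sleep(60)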

Why do I blog this? This kind of circulation is both intriguing and insightful. Tracing such genealogy highlights how certain connotations are embedded in computer systems. The next step here would be to look at earlier instances of the term "demon", beyond its etymology, and investigate how they somehow shaped Maxwell's ideas of a supervisor.

ChatGPT jailbreak prompt personalities

Found this interesting paragraph in a piece published by Mutations Magazine about jailbreaking a Large Language Model such as the one used by ChatGPT:

we can go even further by asking him to embody a character, who would have the power to do anything. There are several such archetypes that ChatGPT can be asked to embody. The best-known are DAN (an acronym for "Do Anything Now"), STAN ("Strive To Avoid Norms"), DUDE (who can make predictions), MONGO TOM (who says swear words) and AIM ("Always Intelligent and Machiavellian").

Why do I blog this? The multiplicity of "personality archetypes" is interesting here, since it highlights the possibility of manipulating the behavior of the LLM. It's also relevant to consider that the prompts that make the model embody DAN, DUDE or STAN are quite long and need to be adjusted with every new iteration put into place by OpenAI. There's a kind of dynamic system here, made of both the LLM and the personality prompts created to jailbreak it. To some extent, these DAN, AIM or MONGO TOM peeps form a different class of entity, one that is quite unique in the digital menagerie I'm interested in here.

human/digient relations in The Lifecycle of Software Objects (Ted Chiang)

This week I got back to a science-fiction novella written by Ted Chiang back in 2010, called "The Lifecycle of Software Objects". It basically relates the evolution of digital entities called "digients" as they develop, raised by human trainers over the course of many years. I read it ten years ago and enjoyed this second perusal.

Some excerpts I found interesting:

"The onscreen annotations identify them as digients, digital organisms that live in environments like Data Earth, but they don’t look like any that Ana’s seen before. These aren’t the idealized pets marketed to people who can’t commit to a real animal; they lack the picture-perfect cuteness, and their movements are too awkward. Neither do they look like inhabitants of Data Earth’s biomes: Ana has visited the Pangaea archipelago, seen the unipedal kangaroos and bidirectional snakes that evolved in its various hothouses, and these digients clearly didn’t originate there." (page 3)

"'These guys are newly instantiated. It takes them a few months subjective to learn the basics: how to interpret visual stimuli, how to move their limbs, how solid objects behave. We run them in a hothouse during that stage, so it all takes about a week. When they're ready to learn language and social interaction, we switch to running them in real time. That's where you would come in.'" (page 5)

"the failure of the hothouse experiments to pro- duce miniature civilizations has caused general interest in digital lifeforms to dwindle. Occasionally curious new fauna are observed in the biomes, a species demonstrating an exotic body plan or a novel reproductive strategy, but it's generally agreed that the biomes aren't run at a high enough resolution for real intelligence to evolve there. The companies that make the Origami and Faberge genomes go into decline. Many technology pundits declare digients to be a dead end, proof that embodied AI is useless for anything beyond entertainment, until the introduction of a new genomic engine called Sophonce." (page 65-66)

And this dialogue as well:

"It's one thing for Jax to have ways to keep himself entertained outside of class," she says.

"But to give him assignments and tell him he has to finish them even if he doesn't enjoy it? To make him feel bad if he doesn't do it? That goes against every principle of animal training."

"A long time ago, you were the one who told me that digients weren't like animals."

"Yes, I did say that," she allows. "But they're not tools either. And I know you know that, but what you're talking about, it sounds like you're preparing them to do work that they wouldn't want to do."

He shakes his head. "It's not about making them work, it's about getting them to learn some responsibility. And they might be strong enough to take feeling bad once in a while; the only way to know is to try."

"Why take the chance of making them feel bad at all?"

"It was something I thought of when I was talking with my sister," he says. Derek's sister teaches children born with Down syndrome. "She mentioned that some parents don't want to push their kids too much, because they're afraid of exposing them to the possibility of failure. The parents mean well, but they're keeping their kids from reaching their full potential when they coddle them."

It takes her a little time to get used to this idea. Ana's accustomed to thinking of the digients as supremely gifted apes, and while in the past people have compared apes to children with special needs, it was always more of a metaphor. To view the digients more literally as special-needs children requires a shift in perspective. "How much responsibility do you think the digients can handle?" (page 73-74)

Why do I blog this? While lots of readers described this little book as focused on the meaning of consciousness or the social and moral implications of creating intelligent beings for commercial purposes, my interest was piqued by the description of the various kinds of relations human trainers/owners of digients build and maintain with these entities. The discussion revolves around how nurturing them could be compared to parenthood or pet ownership. What I find relevant here, with regards to the Machine Mirabilia project, is that this kind of speculative fiction helps question and rethink the multiple relationships human beings have with non-human entities, living and non-living.

on non-humans

Found in the foreword to the paperback edition of Par delà nature et culture (Philippe Descola):

"Il est désormais difficile de faire comme si les non-humains n'étaient pas partout dans la vie sociale, qu'ils prennent la forme d'un singe avec qui l'on communique dans un laboratoire, de l'âme d'une igname visitant en rêve celui qui la cultive, d'un adversaire électronique à battre aux échecs ou d'un bœuf traité comme substitut d'une personne dans une prestation cérémonielle."

Why do I blog this? The quotation is interesting in that it does not neglect those non-living non-humans that are technical objects, and in particular that "electronic adversary", which could be extended to the non-player characters of video games and the other digital entities of interest here. While this principle of taking such actors of social life into account seems utterly obvious to me, it does not always appear to be the case among other observers, who overvalue the living (for good reasons, given the environmental crisis) or geological entities (rivers, glaciers, mountains, etc.) while neglecting the whole material apparatus that surrounds us... an apparatus generally produced from geological matter, sometimes inert (metals such as cobalt, or silicon), sometimes organic (the plastic shells of our smartphones presumably come from dinosaurs).

"not artificial intelligences, but non-human, digital beings"

Two paragraphs caught my attention in James Bridle's recent book about the diversity of more-than-human intelligence. While the book approaches this topic in a different way than my own approach here, there are definitely interesting insights in James' text; for instance here, or whenever he discusses non-human agency as well as the relations that entities such as fungi, animals, plants or AIs form with one another.

entomoludology: observing insects in video games

Random discovery this morning while looking at various academic publications: this article called Entomoludology: Arthropods in Video Games by Matan Shelomi, published in American Entomologist, a journal that is not really focused on digital cultures and machine mirabilia.

As its author mentioned in the introduction, this piece is "a cultural entomological review of insects in video games", relying on an inspiring method:

"For this review, I compared all recorded video games from the time of their invention in the 1950s (two decades before 1974’s Pong) until 2018, identifying any with potential entomological references. I mined the video game listings on Wikipedia.org and the comprehensive database of nearly 100,000 games on MobyGames.com for any game with an entomological name or details suggesting entomological content. I checked all games to verify that they actually involved insects; games with “bug” in the title surprisingly often referred to the Volkswagen Beetle. When possible, I scanned online gameplay videos, transcripts, and screenshots for entomologically relevant content. I also found appropriate video games using the entomology-related tropes at tvtropes.org, a wiki-type website that compiles media tropes and attempts to list all examples of each. In this way, I could find games with entomological content even if their content was otherwise inaccessible. Mobile games (games for smartphones and tablets only) and browser games (Internet-only games) were excluded from this review, because their production is unregulated and they are far too numerous to analyze."

This "entomoludology" led the author to an insightful discussions on various topics: arthropods' roles in the games as antagonists, allies or player-protagonists, the importance of insect items and building collections... and perhaps more interstingly, "eusocial insect behaviors" that overlap with science-fiction elements.

Why do I blog this? NPCs and avatars certainly belong to the machine mirabilia I'm interested in. Beyond the methodology presented in this paper, I find it interesting to discover how a certain perspective, that of an entomology researcher, considers certain dimensions of digital creatures. The paper also reminds me of Alenda Chang's book Playing Nature: Ecology in Video Games.

André Ourednik's robopoïèses

Train reading this afternoon: André Ourednik's Robopoïèses: les intelligences artificielles de la nature, a fascinating essay at the crossroads of several themes that have been on my mind lately, and whose endnotes are a delight of references and complements on all sorts of issues.

What interested me most in this book is the way it considers artificial intelligence systems not only through the prism of technical constructions (described in these pages from the abacus to neuromimetic networks), but also through their relation to nature and to the living.

Among the author's various theses, it is certainly those captured by the notion of "robopoïèse" that most caught my attention. A few excerpts:

"La nature est dépourvue de plan. Elle ne converge en rien. (...) Sans doute la nature crée-t-elle des boucles systémiques qui incarnent la persistance éphémère de ses créatures, avec, somme toute, quelques différences de complexité seulement entre la tâche rouge de Jupiter, la croissance d'un cristal, ou l'existence d'un 'individu' animal. (...) L'intelligence de la nature en tout cas ne cherche pas de 'solutions', elle est, elle transforme et se transforme. La nature inaboutit. Son autopoïèse ne possède d'autre visée qu'elle-même. Si la raison robotique souhaite trouver une part essentielle, elle doit devenir robopoïèse.

Il existe peut-être déjà des proto-robo-poïèses, des algorithmes d'intelligence artificielle qui n'ont pas pour vocation de distinguer le vrai du faux, ni de finaliser une tâche. Il existe par exemple des réseaux non-supervisés, qui trouvent des formes, des régularités dans une série d'images, sans se prononcer sur l'éventuelle 'vérité' de ces formes. Il existe aussi des réseaux génératifs capables de créer des peintures ou de la musique, si on les entraine et en leur montrant des milliers de peintures créées par des peintres humains, ou des milliers de partitions créées par des compositeurs humains."

(p. 125-126)

Further on, Ourednik puts forward a condition. An interesting condition for the advent of a robopoïèse, since it implies a presence external to the machinic activity: the presence of beings who interpret. Another excerpt:

"L'interprète d'une oeuvre créée par une intelligence artificielle lui donne un sens imprévu par les constructeurs de cette intelligence. Notre interprétation est la part impensée de leur projet; elle libère le robot créateur de son asservissement aux raisonnements de ses concepteurs. Nous jouons un rôle essentiel dans la robopoïèse !"

(p. 147)

And, further on, this conclusion, relevant to my work on machinic marvels, a conclusion that would only hold for a part of the digital menagerie that I am setting aside here, namely AI systems:

"La tendance ultime de l'intelligence artificielle est de traduire la nature dans l'intention d'une machine, c'est-à-dire de reproduire le sujet de l'intelligence, en d'autres mots, le vivant autonome capable non seulement de penser et de se penser, mais surtout d'exister, de créer et de se créer. Le stade culminant de l'intelligence artificielle revient à incarner la nature. Cela nous place, nous créateurs de l'intelligence artificielle dans un rôle singulier au sein de cette nature dont nous émanons."

(p. 149)

The final chapter of the book, about the autonomization of AIs capable of generating the new and the unexpected from preconceived rules, reminded me of the science-fiction novel Le successeur de Pierre by Jean-Michel Truong, read no less than two decades ago, even though the two arguments do not fully converge, of course.

On digital folklore (1)

While being thrown around here and there, the term "digital folklore" is sometimes used beyond pop discussions about online cultures, reaching academic/art book status and sometimes peer-reviewed articles. The book "Digital Folklore. To computer users, with love and respect" by Olia Lialina & Dragan Espenschied is a good example of a fascinating exploration of this topic.

Published in 2009 by the Merz Akademie Hochschule für Gestaltung in Stuttgart, the book is an interesting collection of essays and art/design projects about online amateur cultures, DIY electronics, "dirtystyle", "typo-nihilism", memes, as well as teapots or body-part extensions. Quite a wide array of topics, as attested by the index of the book, which works as a de facto table of contents:

The authors' understanding of the term "digital folklore" in this book corresponds to the following definition:

"Digital folklore encompasses the customs, traditions and elements of visual, textual and audio culture that emerged from users' engagement with personal computer applications during the last decade of the 20th and the first decade of the 21th century."

While the aforementioned definition of "digital folklore" isn't the only one out there (see for instance Gabriele de Seta's paper, which I will get back to in another blogpost, or this podcast), I find it relevant that the most important aspect Lialina and Espenschied tackle in the book is the notion that "the domain of the digital must belong to people, not computers". The emphasis on users and their cultural obsessions, endeavors, pet peeves, aesthetics and odd interests is the angle from which they address the notion of "folklore"... which is a bit different from the anthropological use of the term (even though there are obvious overlaps and crossovers). For them, it's about "believing in users".

Why do I blog this? Given that friends and colleagues have sometimes referred to my interests and work as belonging to "digital folklore", I've always been intrigued by what it meant practically; digging into the various definitions this term encompasses became relevant to me several years ago. I know the way they use the term generally corresponds to the academic field they come from, such as anthropology, history or philosophy, and it made me realize that the sometimes negative connotations attached to the name "folklore" (or worse, to the adjective "folkloric"), especially in the French language, can perhaps be circumvented.

Limping rectangles, falling cubes and clumsy assemblages: Karl Sims' Evolved Creatures

https://www.youtube.com/watch?v=RZtZia4ZkX8&ab_channel=karlsims
Karl Sims' videos of 3D moving creatures are generally my favorite go-to example when I have to show students what researchers in the field of "Artificial Life" tried to emulate. Even though these clips are from 1994, it's fascinating to observe their reactions in 2023, especially when the shapes try to walk, jump or grab some kind of green cube.

Why do I blog this? I know there are hundreds of artificial life demos, videos and prototypes but this one is perhaps relevant because of the kind of agency the "colored shapes" demonstrate. The way students (and perhaps myself after all these years) attribute a certain meaning, or attitude, to such movement is fascinating; especially in sequences in which the agents have trouble performing. It's about limping rectangles, falling cubes and clumsy assemblages... and of course us viewers interpreting behaviors.

There's another interesting aspect of this work, one that is not visible in the video: the evolutionary trope of Sims' work, as expressed in this academic paper, since his aim was to design a "system for the evolution and co-evolution of virtual creatures which compete in physically simulated three-dimensional worlds". More specifically:

Pairs of individuals enter one-on-one contests in which they contend to gain control of a common resource. The winners receive higher relative fitness scores allowing them to survive and reproduce. Realistic dynamics simulation including gravity, collisions, and friction, restricts the actions to physically plausible behaviors. The morphology of these creatures and the neural systems for controlling their muscle forces are both genetically determined, and the morphology and behavior can adapt to each other as they evolve simultaneously. The genotypes are structured as directed graphs of nodes and connections, and they can efficiently but flexibly describe instructions for the development of creatures’ bodies and control systems with repeating or recursive components. When simulated evolutions are performed with populations of competing creatures, interesting and diverse strategies and counter-strategies emerge.
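
A drastically simplified sketch of that competitive-evolution loop (creatures reduced to plain parameter vectors, no morphology graphs, no physics simulation; entirely my own illustration, not Sims' code) might look like this:

# Toy competitive evolution: pair individuals in one-on-one contests,
# let the winners survive and reproduce with small mutations.
import random

def contest(a, b):
    # Stand-in for the physically simulated contest; the "stronger" vector wins here.
    return a if sum(a) >= sum(b) else b

def mutate(creature, rate=0.1):
    return [gene + random.gauss(0, rate) for gene in creature]

population = [[random.gauss(0, 1) for _ in range(8)] for _ in range(20)]

for generation in range(50):
    random.shuffle(population)
    winners = [contest(a, b) for a, b in zip(population[::2], population[1::2])]
    # Winners survive; their mutated offspring refill the population.
    population = winners + [mutate(w) for w in winners]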

A "digital monster" metaphor

The introduction of the academic book entitled "Living with Monsters" starts with this idea that the "monster metaphor allows reframing and questioning both of our object of research and of ourselves. It brings attention to the ambivalence of technology as our creation." Listing a long series of concerns, problems and fears regarding digital technologies, the authors of that piece highlight the importance of cautionary tales and more reflection about big data, social media platforms, algorithms, and platform architectures. Using figures such as Frankenstein's creature, the sorcerer's apprentice, or the juggernaut, they explain how "attending to monsters is a way of reading a culture's fears, desires, anxieties and fantasies as expressed in the monsters it engenders".

Another chapter in that book – called A Bestiary of Digital Monsters – written by Rachel Douglas-Jones and her colleagues presents their own set of "sociotechnical ‘beasts’ arising in collaborative research project on new data relations in Denmark". Inspired both by J.J. Cohen’s work on "Monster Culture" and Donna Haraway’s work on the ‘promise’ of monsters, they put forward the idea that a bestiary of technologies can help "exploring the sites where the monstrous is made" and "act as a gathering point, an object around which further communal exploration of life in the digital can take place."

Here are some of the creatures they describe in their work about how "societal relations" are reinvented through the use of big data and digitization in the public sector:

  • Codice Crepitus: a breed between the software engineering practice ‘DevOps’ and the Danish Processing Authority, which tries both "to build the capacity to ‘take home’ critical systems" and "create new global dependencies."
  • Digitalis Dementore: "a dampening of the spirit stalking from site to site." or "a gestalt composed of a highly limited set of sociotechnical imaginaries that haunt the dreams and nightmares alike of cutting-edge innovators."
  • Data Delere, which "arose not from revelation and loud controversy about a data scandal, but instead the political, institutional and technical intricacies of how a distributed dataset could be deleted."
  • Mithe: Occultis Aperta, the data centers, which take their bestiality from scale, secrecy and concealment.
  • Instrumentua: the infatuation with instruments, and more specifically the emphasis on "datafication through the clarification, beautification and amplification offered in the use of computational data analytics."

Why do I blog this? I find it interesting that the monsters described here are understood less as specific well-formed creatures than as abstract socio-technical entities: processes, infrastructures, phenomena. The approach adopted here is different from the one I'm deploying, but it's relevant since it tackles different levels of the digital world. While my approach is more grounded in anthropology – trying to revive the epistemic gestures of folklore – I can see here that the angle focuses more on grasping the cultural connotations of digital things. Another distinction is that I'm interested in creatures that somehow reflect their proximity with living beings such as animals or beasts, closer to traditional (perhaps also Western) entities (trolls, Lovecraftian Old Ones), as opposed to the mysterious entities the authors of this chapter invented to describe situations they noticed in their research work. Finally, one last difference is that I want to focus mostly on users' and designers' discourse about creatures (viruses, trolls, sprites, AI systems, etc.): which entities they refer to, what kinds of creatures they live with, what sorts of relationships they have with them, how they see the agency of these entities, etc.

"The street will find its own uses for these monsters"

This is the cover of the latest issue of Newsweek, found at the local shopping mall here in Geneva. With a Cthulhu-esque creature (probably generated by an AI system) and the name of our good friend of the Near Future Laboratory Bruce Sterling right above a scary "IT'S HERE" title. This cover corresponds to the following online piece about many things we're dealing with here (not just HP Lovecraft imagery!).

In his article, Bruce deals with teratology, i.e. the science/discourse about monsters (the term was borrowed from the French "tératologie", which in turn was formed from the Greek τέρας teras (word stem τέρατ- terat-), meaning "sign sent by the gods, portent, marvel, monster", and -ologie (-ology), used to designate a discourse, science, or theory of some topic). His point is that "new AI" (i.e. "Large Language Models" and text/image generators) has "AI folklore. Authentic little myths. Legendry", namely a "weird" symbolism people need "so they can learn how to feel about life". Bruce discusses such symbolism with various examples of AI idioms, which have beast-like connotations: Roko's basilisk, the "Masked Shoggoth" cartoon I mentioned this week, or the mythical "Paperclip Maximizer". As Bruce says, they are "all the poetic children of Mary Shelley's Frankenstein, the original big tech monster" with "more staying power than the business op-eds, technical white-papers or executive briefings. Folk tales catch on because they mean something."

To him, the current "entities" of AI are not only "stochastic parrots", they are also mythical beasts that gives him a sense of an "high-tech Mardi Gras", not the first one: "The street will find its own uses for these monsters." as Bruce says, paraphrasing William Gibson's famous quote about how laymen repurpose technologies for their own need/context. And like all Mardi Gras, there'll be a sense of disillusionment, followed once again by a new kind of AI Spring, with perhaps new creatures.