How do you name your algorithm?

Rearranging books and documents on my shelves this afternoon, I revisited this gem of a master's thesis by Raphaël Pluvinage. Written in 2015, this mémoire produced at a design school (ENSCI-Les Ateliers, Paris) addresses the ever-increasing importance of algorithms in our everyday life, as well as issues regarding their behavior and design.

As is often the case with theses in applied arts, the visual character of the document is stunning and highly interesting from both an aesthetic and intellectual viewpoint. The author chose to use multiple representations to highlight what he calls the "forms" of algorithms, with diagrams and visual patterns expressing abstract data trajectories.

The most interesting part of the thesis, IMHO – with regard to my project here and this blog – is the chapter right in the middle about how to name algorithms (yes, that is none other than Nintendo's R.O.B. used as a book holder):

Implications: A common practice in computer science and programming is to name algorithms, either by category denomination (e.g. sorting algo, matching algo) or by individual nickname. The alphabetical classification provided by Pluvinage in this thesis is interesting, because it shows two of the epistemological gestures I want to focus on in this project: naming and organizing entities. A potential follow-up here would be to explore how these names are used in various communities of practice, and how their connotations or cultural references influence the activities carried out by practitioners.
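To give a concrete flavor of this naming practice, here is a small illustrative sketch in Python (my own example, not from the thesis): the algorithm names below are real, ranging from category labels to inventors' nicknames and acronyms, though the way I group them is mine.

```python
# A few named algorithms as they surface in everyday Python code.
# Illustrative sketch: the algorithm names are real, the examples are mine.
import bisect
import hashlib
import heapq

data = [5, 3, 8, 1]

# Individual nickname: sorted() runs Timsort, named after its author Tim Peters.
print(sorted(data))

# Descriptive category names: "bisection" (binary search) and the "heap".
print(bisect.bisect_left(sorted(data), 5))
heapq.heapify(data)
print(heapq.heappop(data))

# Acronymic names: SHA stands for "Secure Hash Algorithm".
print(hashlib.sha256(b"hello").hexdigest())
```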

"étranges esprits"

In an opinion piece published in the newspaper Le Temps, University of Geneva researcher François Fleuret uses the term "étranges esprits" (strange minds) to describe large language models (LLMs) such as OpenAI's GPT-4, Google's LaMDA, and Meta's LLaMA, and their capacity to "demonstrate sparks of reasoning", as he puts it.

Here is how he motivates this usage, which is grounded in a critique of the tendency to anthropomorphize these entities:

These models are strange minds, whose behavior seems familiar to us, but whose origin, structure, and relation to reality are totally different from ours. Our humanity has reached them like a distant echo through our writings, but what you know about humans, your ability to infer what a human will do or think, has no reason to apply here. If you must conjure up an image when interacting with one of these entities, it should be that of a shapeless science-fiction creature from another world, rather than that of a human, however bizarre.

In line with earlier posts about the Lovecraftian metaphor, this proposal is interesting in that it underlines how domain experts themselves resort to the lexicon of strangeness to explain their own field of research.

Lovecraftian AI part deux: the Shoggoth metaphor

The Twitter thread that I mentioned the other day about AI as Lovecraftian creatures was commented on by various people on the blue bird platform. One of them pointed to this other kind of representation for Artificial Intelligence algos:

Created a month after ChatGPT's release, this octopus-like creature is a Shoggoth (hence the term "Shoggoth memes" for variations on this representation), an entity that obviously comes from H.P. Lovecraft's lore, commonly known as the Cthulhu mythos.

Reacting to my post on Mastodon, Justin showed me this article in The New York Times by Kevin Roose, which discusses the use of this entity as a recurring metaphor in essays and message board posts about AI risk and safety. Some excerpts from Roose's piece:

In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses and feeding those scores back into the A.I. model. Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.

@TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.” Comparing an A.I. language model to a Shoggoth, @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable. “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”

Eventually, A.I. enthusiasts extended the metaphor. In February, the Twitter user @anthrupad created a version of a Shoggoth that had, in addition to a smiley-face labeled “R.L.H.F.,” a more humanlike face labeled “supervised fine-tuning.”

Why do I blog this? As Kevin Roose discusses at the end of his piece, the interesting thing here is that developers, scientists and entrepreneurs working with this technology seem to be "somewhat mystified by their own creations". Not knowing precisely how these systems work, they translate their anxiety into this kind of metaphor, with its peculiar connotations of weirdness and oddity. The NYT piece is also relevant on a different aspect that is dear to my heart: the vocabulary employed by practitioners, who say things like "glimpsing the Shoggoth" to refer to people taking a peek at such entities.

Troglobytes: tiny creatures in your computer

A somewhat unexpected discovery at the flea market the other day: this book called "The Troglobytes: There's Chaos in Computer City" by Graham Philpot. A funny find, as Margot sent me some pics of the French version a few weeks ago.

The book, aimed at kids, describes a team of characters who live inside computers. There's Professor Processor (the brain of Computer City), Major Rom (who looks after the start-up file and knows its secret code), Miss Floppy Disk (in charge of running Level 2 in the Disk Drive and of copying disk files in case something goes awry), Robot Ticks (patrolling the City to "keep law and order"), Mike Megabike (who carries messages along the electronic highways), the wise Hard Disk Controller (who runs the Disk Drive Library), as well as the microchippies (builderbytes who expand storage on the New Ram Expansion Site). There's also a bunch of mischievous characters such as Captain Hacker and his gang of Pirate Hackerbytes, who want to take control of the computer, and the Beastiebytes, who make mischief wherever they can.

The story is quite basic (fun, though!): Captain Hacker has stolen the start-up file of Computer City, causing the Central Control to shut down half an hour later. Miss Floppy Disk is tasked with getting a copy of the file to Major Rom, thereby saving them from disappearing.

Why do I blog this? More than the scenario, I'm intrigued by this idea of describing tiny creatures inside computer machinery. While the angle is aimed at children – educating them about the curious vocabulary of such devices – the way it is framed is quite interesting. The book was published in 1997, which probably explains both the aesthetic and the emphasis on certain things... like the poetic connotations of terms such as "ROM", "RAM", "start-up file", or "Hard Disk". Of course the author relies on a simple binary trope (good guys/bad guys), but the diversity of characters and expertise is relevant in the sense that it highlights various processes of information and communication technologies.

Besides that, I like the way the book plays on the "tiny creature inside the machine" angle, expanding on this idea of living entities that I've always found curious. To some extent, it reminds me of this video game called Bugs Buster I used to play as a kid on my Thomson TO7/70. A game in which the player had to capture another type of weird computer creature: bugs.

Lovecraftian AI Tentacle monster

Chatting with Tommaso yesterday about the Machine Mirabilia project, he showed me this uncanny representation of ChatGPT that made the rounds on social media a while back.

He then pointed me to a Twitter thread from a technology researcher working in machine learning, which unpacks the logic behind this monstrous representation. Some excerpts I captured:

First, some basics of how language models like ChatGPT work: Basically, the way you train a language model is by giving it insane quantities of text data and asking it over and over to predict what word comes next after a given passage. Eventually, it gets very good at this.

This training is a type of unsupervised learning. It's called that because the data (mountains of text scraped from the internet/books/etc.) is just raw information—it hasn't been structured and labeled into nice input-output pairs (like, say, a database of images+labels).

But it turns out models trained that way, by themselves, aren't all that useful. They can do some cool stuff, like generating a news article to match a lede. But they often find ways to generate plausible-seeming text completions that really weren't what you were going for.

So researchers figured out some ways to make them work better. One basic trick is "fine-tuning": you partially retrain the model using data specifically for the task you care about. If you're training a customer service bot, for instance, then maybe you pay some human customer service agents to look at real customer questions and write examples of good responses. Then you use that nice clean dataset of question-response pairs to tweak the model.
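To make the "predict what word comes next" part of the thread concrete, here is a minimal sketch of a single next-token prediction training step. It assumes PyTorch; the tiny recurrent model, the sizes, and the random "corpus" are all placeholders of mine, not how ChatGPT is actually built.

```python
# Minimal sketch of next-token prediction (the "pretraining" objective).
# Assumes PyTorch; the tiny model and fake data are illustrative placeholders.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32

# A deliberately tiny "language model": embedding -> LSTM -> logits over vocab.
class ToyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        self.head = nn.Linear(embed_dim, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)
        h, _ = self.rnn(x)
        return self.head(h)  # logits for the *next* token at each position

model = ToyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake corpus: one batch of random token ids standing in for scraped text.
tokens = torch.randint(0, vocab_size, (8, 16))
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one position

logits = model(inputs)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
opt.step()
print(f"next-token loss: {loss.item():.3f}")
```

Scaled up by many orders of magnitude (a transformer instead of an LSTM, a web-scale corpus instead of random tokens), this single objective is the "unsupervised learning" the thread describes.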

Unlike the original training, this approach is "supervised" because the data you're using is structured as well-labeled input-output pairs. So you could also call it supervised fine-tuning.

Another trick is called "reinforcement learning from human feedback," or RLHF. The way reinforcement learning usually works is that you tell an AI model to maximize some kind of score—like points in a video game—then let it figure out how to do that. RLHF is a bit trickier: how it works, very roughly, is that you give the model some prompts, let it generate a few possible completions, then ask a human to rank how good the different completions are. Then, you get your language model to try to learn how to predict the human's rankings… And then you do reinforcement learning on that, so the AI is trying to maximize how much the humans will like the text it generates, based on what it learned about what humans like.

So that's RLHF. And now we can go back to the tentacle monster!
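The reward-model step of RLHF can be sketched just as briefly: train a scorer so that the completion the human preferred gets the higher score. A minimal sketch, assuming PyTorch and the standard pairwise (Bradley-Terry) preference loss; the linear "reward model" and the random features are placeholders of mine, not anyone's actual implementation.

```python
# Sketch of the RLHF reward-model objective: learn to predict which of two
# completions a human preferred. Placeholder model and data, not real code.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 32

# Stand-in for a scorer of "(prompt + completion)" features; in practice the
# reward model is usually derived from the language model itself.
reward_model = nn.Linear(embed_dim, 1)
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Fake batch: features of the completions a human preferred vs. rejected.
chosen = torch.randn(8, embed_dim)
rejected = torch.randn(8, embed_dim)

r_chosen = reward_model(chosen)
r_rejected = reward_model(rejected)

# Pairwise (Bradley-Terry) loss: push the preferred completion's score up.
loss = -F.logsigmoid(r_chosen - r_rejected).mean()
loss.backward()
opt.step()

# The trained scorer then becomes the reward signal the language model is
# optimized against with reinforcement learning (e.g. PPO).
print(f"preference loss: {loss.item():.3f}")
```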

Now that we know what all the words mean, the picture should make more sense. The idea is that even if we can build tools (like ChatGPT) that look helpful and friendly on the surface, that doesn't mean the system as a whole is like that. Instead… maybe the bulk of what's going on is an inhuman Lovecraftian process that's totally alien to how we think about the world, even if it can present a nice face. (Note that it's not about the tentacle monster being evil or conscious—just that it could be very, very weird.)

See also this addition to her thread:

"But wait," I hear you say, "You promised cake!" You're right, I did. And here's why—because the tentacle monster is also a play on a very famous slide by a very famous researcher. Back in 2016, Yann LeCun (Chief AI Scientist at FB) presented this slide at NeurIPS, one of the biggest AI research conferences. Back then, there was a lot of excitement about RL as the key to intelligence, so LeCun was making a totally different point… …Namely, that RL was only the "cherry on top," whereas unsupervised learning was the bulk of how intelligence works. To an AI researcher, the labels on the tentacle monster immediately recall this cake, driven home by the cheery "cherry on top :)

Why do I blog this? This is an interesting example of how experts rely on fantastic creatures to make sense of technologies. This case is quite relevant, as the "monster metaphor" is sometimes associated with laymen, or people who are clueless about the way these systems work (as in "naive physics" phenomena). The AI tentacle monster represented above, and how it circulated on social media, shows that this is not the case, and that such visuals/metaphors can be used to characterize what these entities are, as well as their moral connotations (as illustrated by the adjectives used by the author: "evil", "nice", "helpful", "friendly").

On pixels

Lausanne, Feb 7, 2016. A DIY ad by Mathieu, perhaps a kid from this neighborhood, who sells "pixel arts" for a small sum of money: 5 centimes (Swiss francs) for the average size, 10 for the big size. One can find them "Chez Mathieu" (at Mathieu's place), and he encourages us to take advantage of this opportunity ("Profiter", which has a typo, as it should be "Profitez").

While a "pixel" is not exactly a creature, it belongs to this investigation of the digital menagerie we're interested in here. Mostly because a pixel is defined as an "element". The term is a combination of pix- (from "pictures", shortened to "pics") and -el (for "element"), and it can be defined as the smallest addressable element in a raster image (or the smallest addressable element in a dot matrix display device). As defined on the Wikipedia, pixels are the smallest element that can be manipulated through software in most digital display devices.

"Pixel Art" – what Mathieu from Lausanne offers here – is a form of drawing made with a graphical software in which images are built using pixels as the only building block... leading to this low-resolution graphics commonly used on machines which have a limited number of pixels and colors available; typically, computers and video game consoles with 8-bit and 16-bit era, LED displays and graphing calculators.

Why do I blog this? As a basic "element" of digital art/culture, the pixel is a remarkable entity one could study in anthropology. A tiny entity, actually, whose investigation could belong to the exploration of "minuscule worlds", as carried out by the ethnographers who wrote in this issue of Techniques & Culture a few years ago.

Apophenia & pareidolia

apophenia: the tendency to perceive a connection or meaningful pattern between unrelated or random things (such as objects or ideas).

pareidolia: the tendency to perceive a specific, often meaningful image in a random or ambiguous visual pattern (e.g. seeing animals in cloud formations or faces in everyday objects).

a soap bubble around each creature

An evocative excerpt from Jakob von Uexküll's work that could be helpful to grasp the kind of digital creatures I'm interested in:

“Perhaps it should be called a stroll into unfamiliar worlds; worlds strange to us but known to other creatures, manifold and varied as the animals themselves. The best time to set out on such an adventure is on a summer day. The place, a flower-strewn meadow, humming with insects, fluttering with butterflies. Here we may glimpse the worlds of the lowly dwellers of the meadow. To do so, we must first blow, in fancy, a soap bubble around each creature to represent its own world, filled with the perceptions which it alone knows. When we ourselves then step into one of these bubbles, the familiar meadow is transformed. Many of its colorful features disappear, others no longer belong together but appear in new relationships. A new world comes into being. Through the bubble we see the world of the burrowing worm, of the butterfly, or of the field mouse; the world as it appears to the animals themselves, not as it appears to us. This is what we call the phenomenal world or the self-world of the animal.”

von Uexküll, Jakob, and Claire H. Schiller. 1957. "A Stroll Through the Worlds of Animals and Men", in Instinctive Behavior: The Development of a Modern Concept. New York: International Universities Press, pp. 5-80.

Pokémon rescue

Why do I blog this? Interesting question/topic addressed in this YouTube video by Nick Robinson, as it highlights both the socio-technical dimensions of a "cartridge rescue" chaîne opératoire, and the kind of relationships Pokémon players build with such digital creatures.

Observing little people living inside one's computer, circa 1985

Little Computer People, also called House-on-a-Disk, was a social simulation game published in the mid-1980s by Activision for various computers (C64, ZX Spectrum, Amstrad CPC, Atari ST, Apple II, Amiga). It basically led users to believe that they could observe small characters living inside their computers, represented as a "House-on-a-Disk" on the display.

The game mechanic was quite simple, as described on retrogamer:

after booting up your disk for the first time you are presented with an empty three story house on screen. After a few minutes your Little Computer Person (LCP) appears through the front door and takes several minutes to check out his new dwelling before getting his suitcase and moving in proper.

Your LCP generally does his own thing about the house; he watches TV, listens to his record collection, reads the newspaper, operates his computer and will even exercise when his mood takes him. Food and water are supplied through keyboard controls, as are a series of ‘mood boosters’ such as petting your LCP, giving books and records as gifts and letting him receive a phone call. Little Computer Person will also play games with you including poker and an anagram game, and from time to time will type out a letter to you expressing his feelings and desires. You can also attempt to communicate with the LCP by typing in requests through the keyboard.

Obviously, the game has no specific aim beyond observing these charming little computer people moving around; something that became more common later with games such as The Sims.

Why do I blog this? LCP was certainly one of those applications that played with the idea that machines were filled with (small) creatures having a life of their own... suggesting the existence of human-shaped machinic lifeforms. The most striking feature in that regard was the "observation sheet" in the manual (yes, the game had a paper manual), which encouraged users to adopt a quasi-naturalist perspective by writing down a few words about what one could notice, in terms of health, appearance, or appetite.

Inspiration

“...what if our computers were nothing but keys to access worlds we can no longer see? We think we created the matrix with information, but maybe that information already existed before, and we are merely giving it a logical form, along with our shared interfaces”

Calvo, Sabrina. 2016. Toxoplasma. Paris: La Volte, p. 104.