
An internet celebrity of 2022, Loab is a character that artist and writer Steph Maj Swanson claims to have run across while using a text-to-image AI generator. Given the character's unsettling look, some people almost immediately wondered whether haunted presences could lurk in the latent space of AI models:
is this AI model truly haunted, or is Loab just a random confluence of images that happens to come up in various strange technical circumstances? Surely it must be the latter unless you believe spirits can inhabit data structures, but it’s more than a simple creepy image — it’s an indication that what passes for a brain in an AI is deeper and creepier than we might otherwise have imagined.

Loab was discovered — encountered? summoned? — by a musician and artist who goes by Supercomposite on Twitter (this article originally used her name but she said she preferred to use her handle for personal reasons, so it has been substituted throughout). She explained the Loab phenomenon in a thread that achieved a large amount of attention for a random creepy AI thing, something there is no shortage of on the platform, suggesting it struck a chord (minor key, no doubt).
The interesting thing about the Loab case is that it is supposedly caused by "negative prompting". As the TechCrunch article explains:
“If you prompt the AI for an image of ‘a face,’ you’ll end up somewhere in the middle of the region that has all of the images of faces and get an image of a kind of unremarkable average face,” she said. With a more specific prompt, you’ll find yourself among the frowning faces, or faces in profile, and so on. “But with a negatively weighted prompt, you do the opposite: You run as far away from that concept as possible.”
But what’s the opposite of “face”? Is it the feet? Is it the back of the head? Something faceless, like a pencil? While we can argue it amongst ourselves, in a machine learning model it was decided during the process of training, meaning however visual and linguistic concepts got encoded into its memory, they can be navigated consistently — even if they may be somewhat arbitrary. (...) Over and over she submitted this negative prompt, and over and over the model produced this woman, with bloody, cut or unhealthily red cheeks and a haunting, otherworldly look. Somehow, this woman — whom Supercomposite named “Loab” for the text that appears in the top-right image there — reliably is the AI model’s best guess for the most distant possible concept from a logo featuring nonsense words. (...) Negative prompts don’t always produce horrors, let alone so reliably. Anyone who has played with these image models will tell you it can actually be quite difficult to get consistent results for even very straightforward prompts. Put in one for “a robot standing in a field” four or 40 times and you may get as many different takes on the concept, some hardly recognizable as robots or fields. But Loab appears consistently with this specific negative prompt, to the point where it feels like an incantation out of an old urban legend."
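To make the "run as far away from that concept as possible" idea concrete, here is a minimal sketch of the arithmetic behind prompt weighting in classifier-free guidance, using toy NumPy vectors in place of a real diffusion model's noise predictions. The function name and numbers are illustrative, not taken from any particular library:

```python
import numpy as np

# Toy stand-ins for a denoiser's noise predictions. In a real diffusion
# model these would come from the network, conditioned on each prompt.
rng = np.random.default_rng(0)
pred_uncond = rng.normal(size=4)   # prediction with an empty prompt
pred_prompt = rng.normal(size=4)   # prediction conditioned on the prompt

def guided_prediction(uncond, cond, weight):
    """Classifier-free guidance: push the prediction toward the prompt's
    direction in latent space when weight > 0, away from it when weight < 0."""
    return uncond + weight * (cond - uncond)

# Ordinary prompting: move toward the concept.
toward = guided_prediction(pred_uncond, pred_prompt, weight=7.5)

# Negative weighting: move the same distance in the opposite direction,
# i.e. away from the concept named in the prompt.
away = guided_prediction(pred_uncond, pred_prompt, weight=-7.5)

# The two guided predictions mirror each other around the unconditional one.
assert np.allclose(toward + away, 2 * pred_uncond)
```

Where the model lands when pushed "away" depends entirely on how concepts happened to get arranged during training, which is why the opposite of a nonsense logo can turn out to be something as specific, and as strange, as Loab.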
Why do I blog this? First off, because it's another kind of entity to be added to the mirabilia list. Besides this, it's also because I'm less interested in whether there's something (or someone) haunting the latent space of generative AI models than in the way this kind of story emerges and circulates. It's not exactly a creepypasta, but it's close. It might be an AI-generated cryptid (cryptids being animals "discovered" by cryptozoologists who believe they exist even though their existence is disputed or unsubstantiated by scientific research).