Synthetic Symbols: How machines can learn to cast spells

From ancient magic to modern AI, symbols have always been used to shape meaning. Machine learning engineer Karin Valis explores how today’s image generators echo old ritual practices — turning words and fragments into something new, strange, and powerful.

Karin Valis

In the beginning, it wasn’t the Word — it was the Symbol. The dawn of human culture was rooted in the architecture of myth, and in this same image, we now create our technology. The symbolic finds its reflection in the latent space of AI — where the formless potential is given shape and meaning. 

Just as symbols shaped the earliest contours of human reasoning, they continue to serve as dynamic bridges, spanning the conscious and the unconscious. They are not just static ornaments of thought, Jung argues. Unlike signs, which pin their meaning down with sharp precision, symbols are restless, dripping with vital, erotic energy. They do not explain; they provoke. 

They won’t sit unmoved in an encyclopaedia; they constantly evolve through chains of interpretation — bound up with our desires — into shifting, mutable possibilities. This fluid evolution has permeated all of our creation, oozing through terminal windows and the bony rigs of industry-grade GPUs — the backbone of our computational infrastructures. To trace the currents between the mystical, the semiotic and the machine is to find oneself submerged in the chthonic realm: where symbols, whether drawn by hand or encoded by an algorithm, haunt the spaces between thought and action.

Sigil 

What Carl Jung mapped onto the architecture of the psyche, artist Austin Osman Spare grasped through his intuition, deeply rooted in the Western magical tradition. He inscribed his desires into sigils, magically activated glyphs that act as spells. These shapes, fingerpainted in blood and bodily fluids, act as an incision, a deliberate cut through the clutter of analytical thought.

Spare’s sigilisation technique begins with a simple (yet so difficult) act: the articulation of a specific desire in words. From this, letters are stripped away, rearranged, and coagulated into an abstract form that bears no conscious resemblance to its original phrase. The lines and shapes must be charged. Through ritual — by pleasure, by blood, or by sheer will — the sigil is imbued with energy and impressed upon the psyche. When we craft this abstract shape, we deliberately destroy all the familiar signposts, so that when it is plunged into the unconscious it remains unhindered by the grasping, anxious, pattern-seeking faculties of the analytical mind. Once embedded there, its power ripples outwards, and according to Spare’s magical teaching, it subtly bends the world to align with the sigil’s programming.
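As a playful aside, the letter-reduction step lends itself to a mechanical sketch. Here is a minimal Python toy, assuming one common reduction rule (keep only the first occurrence of each letter); the phrase is invented, and Spare’s actual practice was intuitive and visual, not textual:

```python
def reduce_to_sigil_letters(desire: str) -> str:
    """Strip spaces and repeated letters, keeping only the abstract residue.

    One common reduction rule from chaos-magic practice; practitioners vary
    the details, and the resulting letters are then fused into a glyph by hand.
    """
    seen = set()
    residue = []
    for ch in desire.upper():
        if ch.isalpha() and ch not in seen:
            seen.add(ch)
            residue.append(ch)
    return "".join(residue)

# An invented example phrase, reduced to its unique letters.
print(reduce_to_sigil_letters("THIS IS MY WISH"))  # → THISMYW
```

The residue no longer reads as the original sentence; the practitioner would then distort and combine these letters into a single abstract shape.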

Spare’s sigil, a shape stripped of its narrative, inhabits the space of the symbolic. This spell acts as a program, with instructions running their course within the psyche. Scripted in the syntax of the unconscious, our desired objective silently comes into being. In many ways, this symbolic space of the unconscious mirrors the latent space of AI. Both are realms of abstraction, built on the dissolution of surface structure into deeper patterns. These patterns are formed from the crude matter of experience, drenched in the vital force of our relentless sentience.

Thee Temple ov Psychick Youth treated words and images as portals, linked through an invisible psychic network, a compelling metaphor for our karmic entrapment in the world of semiotics. According to Bell’s Theorem, any two particles that have once been in contact will continue to act as though they are informationally connected, regardless of their separation in space and time. What if the images and words sublimated into the latent space carry deep psychic links to their original content as well? Even if the images generated by AI models are imperfect and sloppy, they are tapped into our very humanity. The forms in the latent space, imprinting themselves onto the generated outputs, exist, much like sigils, as glyphs awaiting activation, conduits for a force that surpasses their apparent simplicity.


Latent Space 

The training of diffusion models — used for image generation and other computer vision tasks — happens on vast datasets of pictures and their captions. Algorithms translate pixels into numerical representations, reducing them to their essential features — shape, texture, colour, spatial relationships — while discarding accidentals. The training process forms a high-dimensional space that is neither visual nor linguistic but mathematical. Here, meanings are placed next to each other so that their positions map to their actual qualities.

In the training process, the model goes through the images, looking for similarities and differences. Here, thousands of roses scattered across the dataset are reduced to a set of numbers, a point in the semantic space that, through its geometry, emanates its rose-ness. And what’s more, this compressed rose exists in a garden, just a few digits away from the poppies, carnations and other flowering visuals, far from the thorny regions of barbed wire and the sharp edges of industrial waste, located in a different part of the semantic space. Every image in the training set marks the geometry of this space, carving out its own area of representation. Twisting Peircean logic, the latent space reveals the symbolic web of associations buried in the training data.
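This geometry of proximity can be sketched in miniature. In the toy Python illustration below, the four-dimensional vectors are invented for the example (real models learn embeddings with hundreds or thousands of dimensions from data), and cosine similarity stands in for the model’s sense of nearness:

```python
import numpy as np

# Invented toy embeddings: each concept is a point in a shared vector space.
# Real latent spaces are learned from data and far higher-dimensional.
embeddings = {
    "rose":        np.array([0.9, 0.8, 0.1, 0.0]),
    "poppy":       np.array([0.8, 0.9, 0.2, 0.1]),
    "barbed wire": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of direction: 1.0 means identical, near 0.0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The compressed rose sits close to the poppy, far from the barbed wire.
print(cosine_similarity(embeddings["rose"], embeddings["poppy"]))
print(cosine_similarity(embeddings["rose"], embeddings["barbed wire"]))
```

Position is meaning: the garden and the scrapyard occupy different regions, and every query lands somewhere on this map.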

Synthetic Symbol 

The resource-intensive training phase concludes, hopefully, with a well-performing model. Once the user types in their query, the model is ready to start a chain of translations: from words into a machinic sign, a vector, a position in the space, and from there into an image object with infinite variations. The visuals are unearthed from the prima materia of random noise, shaped by the forces of meaning lying dormant in the latent space.

When the model leaves training and passes from the engineers into the hands of users, it has a fully formed visual vocabulary. The learning is frozen; no new concepts enter the latent space. But what if we want to play?

Even a fully trained model enables the injection of new information. We don’t need to repeat the whole extensive training from scratch — there are techniques like Dreambooth or Textual Inversion. Each uses a slightly different algorithm to achieve a similar goal — to enable users to enrich the visual model with personalised objects, or concepts that were not present and labelled in the original training data. By taking a series of images representing a specific idea — say, a unique personal artefact — components of the model can be adjusted so that it recognises these as new, coherent entities. Whether it is Dreambooth, where the weights of the model are fine-tuned, or Textual Inversion, where the new meaning is embedded into the model’s textual vocabulary, the results are similar. A glyph, a sigil, an idea that was never part of the training data now carves its place into the model’s symbolic lexicon, without having to start from the very beginning.

How far can we stretch this process before it breaks? 

The usual input into Dreambooth or Textual Inversion would be a set of consistent images of a new object — your face from five angles for a new AI selfie — and a unique name to recall this item. This coherent signal enables the model to reliably reconstruct the new object. However, what if we feed the model a disparate collection of visuals? If we toss in meaningful references of intent and scattered desires, the system, designed to form patterns from chaos, will attempt to resolve these fragments into a coherent structure. A new glyph — born from misaligned inputs — becomes a strange hybrid, carrying shards of all its parts yet becoming something fully new. We’ve created a synthetic symbol, a machinic artefact of fractured intent, now permanently embedded in the symbolic lexicon of the model.
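The shape of this operation can be sketched numerically. What follows is a drastically simplified toy, not the real Textual Inversion algorithm: the frozen "encoder" is just a random matrix and the "image features" are invented, but the core move, optimising one single new embedding against frozen weights until it resolves disparate references into a coherent point, is the same:

```python
import numpy as np

# Toy sketch of the Textual Inversion idea: all model weights stay frozen;
# only one new token embedding is trained. The "encoder" is a random matrix
# and the "image features" are invented numbers, standing in for a real
# text encoder and real reference pictures.
rng = np.random.default_rng(0)

reference_features = rng.normal(size=(5, 8))   # five disparate "images"
target = reference_features.mean(axis=0)       # the blended synthetic symbol

frozen_encoder = rng.normal(size=(8, 8))
frozen_encoder /= np.linalg.norm(frozen_encoder, 2)  # normalise for stable steps

new_token = np.zeros(8)   # the single trainable vector: our new glyph's name
lr = 0.1
for _ in range(2000):
    residual = new_token @ frozen_encoder - target
    new_token -= lr * 2 * residual @ frozen_encoder.T  # gradient of squared error

# The trained token's encoding lands nearer the target than the untrained one.
final_error = np.linalg.norm(new_token @ frozen_encoder - target)
print(final_error)
```

The averaging of disparate features is the "strange hybrid": no single reference survives intact, yet the new token now reliably summons their blend.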

The visual assets chosen to form this spell underwent an alchemical transformation, from the concrete into the abstract, from the analytical into the emotional, tying a complex knot in our semantic web. Much like the traditional Gysin–Burroughs cut-up technique, this newly formed entity acts as a generator, a virtual infinity of images awaiting manifestation. Its presence can be recalled by its unique name, a goetic token summoning a servitor from the depths of the latent space. It lends itself to many magical operations: posing a question for divination, talismanic applications, or further ritual work. The portal is open.

Image Virus 

If language, as Burroughs argued, is a virus, carried by words and images, then we’ve entered an era of its new major mutation. Finally shedding its human host, the virus now inhabits the latent space, a realm of infinite semiosis where fragments of meaning collide and recombine. This space becomes a new playground for magicians, elegantly lending itself to traditional sigilistic operations. We have always attempted to manipulate subtle energies to align with our desires — and don’t forget that we created AI in our own image — in the image of us, we created her. Fed by our raw humanness, she, too, drips with the same substance.

Karin Valis is a Berlin-based machine learning engineer and writer with a passion for the occult. Karin's work focuses on combining technology with the esoteric, including projects like Tarot of the Latent Spaces.

Images were generated through the synthetic symbol constructed by textual inversion. They are imbued with intention and carry encoded meaning into the unconscious.

Editorial Note

Anna Gerber