We often speak of art as something made, something external. Maybe art isn’t the object itself, but what occurs when that object meets a mind, a memory, a mood. This post is a circling of thoughts around a familiar term, an attempt to trace the flicker of meaning that arises when we say “art,” and to follow where that flicker leads.
We humans are subjective beings who are fundamentally separate, locked in our own heads, experiences, and perspectives.
Art awakens subjectivity. It invites our subjectivity to rise, to respond, to wrestle. And that response is never universal. One person weeps, another shrugs.
Art is the crack in a glacier. The fractal geometry of a tree struck by lightning. When we encounter them, we awaken with resonance.
Art is a private, resonant reckoning. Art allows someone’s interiority—their feelings, dreams, memories, breakings, and becomings—to leak into a medium. Art gives shape to these complexities.
Art isn’t static. A painting seen at twenty is not the same when seen at fifty. Not because the paint changed, but because we did.
When art is shared, it becomes something else, too—an invitation, a bridge, a chance for resonance between two subjects. And what passes through it isn’t our subjectivity itself; it’s the trembling echo of our subjectivity. A tension opens between intention and interpretation: art lives not just in the subject who perceives it but also in the echo of the one who created it.
However, a perfect sculpture, carved by a fallen rock and left in a cave for millennia, only becomes art when it is encountered by a resonating subject. Art is not what is created, but what is revealed in that encounter.
A reflection on what it means to create in the presence of AI.
AI offers words, answers, outputs—sometimes one hits like lightning, something you’d never find alone. When anyone can generate something “impressive” in the blink of an eye, it can cheapen the perceived value of effortful work. In The Hitchhiker’s Guide to the Galaxy, a supercomputer named Deep Thought spends millions of years calculating the Answer to Life, the Universe, and Everything—“42.” But when asked, “What’s the actual question?”, it has no idea.
The right question—the one that stops you cold, stirs something buried, or breaks you open—is not just about logic; it’s about timing. These questions awaken the soul. Answers without real questions are just noise. Real questions exact a cost, and that cost makes you more real. They pull you inward.
The “answer” to the right question isn’t a sentence—it’s a path. A transformation. The journey itself becomes a stance—a statement. It places you in relation to the world. You’re engaging your whole being. When you’re deep in it, you enter a state of immersion that AI can’t offer. It’s meditative, even spiritual.
On this path, AI can offer sparks—but not fire. It can offer answers—but not meaning. It can mimic the path—but cannot walk it with a soul. You’re standing inside the tension: The beauty of what AI can do, and the holiness of what only you can do.
But what happens in the moment when the outside voice—even an artificial one—grows louder than your own? When AI gives you something undeniably good, something sharper or more beautiful than what you had in mind, and you’re tempted to follow its thread instead of your own. It might mimic your favorite artists. It might echo your style. And at first, it flatters you. It excites you. But slowly, quietly, it begins to replace you. It’s like wearing a costume that fits too well. You admire how it looks. You start to believe in the reflection. But the longer you wear it, the easier it is to forget what your own skin feels like. The danger isn’t imitation. It’s amnesia.
Sometimes, allowing an unexpected idea to lead you can open new doors. That’s part of being a creator, too—being surprised, surrendering control to something beyond you. But there is a difference between discovery (“This AI spark helped me unlock something I didn’t know I felt”) and displacement (“This idea is better, so I’ll abandon mine”). The former is growth. The latter is disconnection.
Therefore, let us use AI. But let us not trade our becoming for convenience. Do not let speed replace depth. And do not let illusion drown out truth.
A post about how I became interested in AI-generated art.
I used to think that AI art had no meaning. It seemed like a hollow remixing of internet fragments—a technically impressive, but ultimately soulless, act of collage. There was no intention, emotion, or story behind it—just noise shaped into images.
However, that changed while I was experimenting with AI hallucinations for my research on machine learning safety, specifically using empty-string prompting in the Stable Diffusion image generator. As I worked, my thoughts drifted back to a past trip to Brussels, where we had visited the Magritte Museum. The surrealist imagery had stayed with me—those quiet, paradoxical scenes that seemed to exist just outside the bounds of logic. Something in them reminded me of these AI hallucinations: vivid, disjointed, and often lacking clear coherence.
Around the same time, I had been reading Carl Jung. His theory of the collective unconscious—an inherited layer of the psyche filled with universal symbols and archetypes—began to resonate in a new way, pointing toward something I would call AI surrealism.
Surrealism, born in the early 20th century from the ferment of Freudian psychoanalysis, sought to liberate the human mind from the constraints of logic, reason, and social convention. At its core, surrealism aimed to access the unconscious mind, often through automatic writing, dream analysis, or chance operations.
Artists like René Magritte, Salvador Dalí, and Max Ernst gave surrealism its visual language, combining precision with paradox, and clarity with dreamlike distortion. Across their work, certain traits emerged: unexpected juxtapositions, symbolic imagery, and a dislocation of reality that aimed to bypass reason and tap directly into the unconscious.
To simulate the surrealist technique in an AI system, we need to do something similar. This involves removing as many constraints from the system as possible. Instead of providing a carefully constructed prompt, we give it nothing at all: an empty string. In response, the AI generates content by drawing from the statistical patterns in its training data, producing unexpected forms, combinations, and associations from deep within its learned representation space.
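For the curious, here is a minimal sketch of what empty-string prompting can look like in practice, written against the Hugging Face diffusers library. The checkpoint name, seed, and sampling settings are illustrative assumptions rather than the exact configuration from my experiments.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint
# (an assumed choice; any compatible checkpoint would do).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Fix a seed so a given "hallucination" can be revisited.
generator = torch.Generator("cuda").manual_seed(0)

# Give the model nothing at all: an empty prompt. Setting
# guidance_scale to 1.0 also disables classifier-free guidance,
# removing one more constraint on the sampling process.
image = pipe(
    prompt="",
    guidance_scale=1.0,
    num_inference_steps=50,
    generator=generator,
).images[0]

image.save("empty_prompt.png")
```

Whatever appears is drawn purely from the statistical patterns of the training data, which is exactly the unconstrained, chance-driven quality the surrealist analogy depends on.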
Now, one might argue that this is simply the AI’s version of an unconscious generative process. But since these models lack consciousness altogether—no intention, no awareness, no inner life—calling it “unconscious” is likely a misnomer, or at best, metaphorical. Overall, this process may appear meaningless by itself: just probabilistic noise filtered through a vast data structure.
However, things shift when we consider what AI has been trained on. These models are built on human-generated content—our languages, our stories, our images, our symbols. They are shaped by our experiences, emotions, and the cultural artifacts we leave behind.
This is where Carl Jung’s concept of the collective unconscious becomes relevant. According to Jung, beneath our individual psyches lies a shared layer of the mind, inherited and universal, filled with archetypes—symbols and motifs that recur across myths, dreams, and cultures. AI may not experience things, but it is trained on our experiences, and in learning from them it may inadvertently surface these same archetypes, because we keep embedding them in the data it learns from.
So, when we look at AI surrealism, we are not exploring the inner life of a single AI artist, but something closer to a shared inner landscape—reflections of our collective patterns. The surrealist process allows these common threads to surface in perhaps the most “natural” way machines can offer: through chance, ambiguity, and the absence of control.
This gives AI-generated surrealist imagery a unique quality—something no individual human artist could fully replicate. It is not the vision of a single mind but a wild collage of influences pulled from us all—our cultures, symbols, experiences, ourselves—woven together by a system that does not understand any of it yet somehow conjures something uncannily familiar.
In the science fiction franchise The Matrix, artificial intelligence has taken over the world and uses human beings as a source of energy. In this fictional setting, humans are kept in suspended animation and connected to a virtual reality environment called the Matrix. Their bodies are stored in pods and linked to the Matrix through neural interfaces, which build a virtual world that the humans perceive as real. The AI entities maintain this system to harvest the thermal energy and bioelectricity generated by the human body. In essence, humans have been reduced to the role of biological batteries that power the machine world. This grim arrangement is hidden from the humans themselves: the simulation keeps their minds occupied while their bodies are exploited.
The way AI works now, it farms intelligence and creativity from humans. AI systems rely on user-generated data to train and fine-tune themselves, ranging from simple signals like clicks and likes to more complex inputs like user-created content, books, and problem-solving strategies. In this sense, AI is cannibalizing human intelligence and creativity, improving its capabilities with each iteration.
Just as the advent of calculators led to a decline in the practice of mental arithmetic, and just as the widespread use of smartphones has been associated with a reduction in fine motor skills, AI systems can engender a phenomenon known as cognitive offloading. The risk is sharpest for systems designed to assist in decision-making, problem-solving, or the automation of complex tasks. Cognitive offloading refers to an increasing human reliance on AI to perform mental functions, potentially leading to a diminished capacity in cognitive and meta-cognitive skills. Such skills, which include planning, self-assessment, and problem-solving, are not merely task-specific but foundational to human intelligence, and they are typically developed and refined through sustained practice. As AI systems take over greater responsibility for these cognitive tasks, humans may find fewer opportunities to exercise and hone these skills, resulting in a gradual decline in cognitive abilities.
Therefore, the ambition to build superintelligence could mirror the ancient myth of the Tower of Babel—a human ambition to transcend limits and attain the divine. In that story, humanity sought to reach the heavens, only to be thwarted by confusion and division. Similarly, in our pursuit of superintelligence, we risk constructing a monument to hubris, where the drive to surpass human cognition may result not in enlightenment but in profound disarray. As we build this modern tower fueled by AI and data, we may inadvertently disconnect from the very cognitive foundations that make us human, leading not to a higher understanding but to a world where our minds are fragmented and bewildered.