Art in the Age of Algorithms

AI-generated art floods our feeds, blurs the line with human creation, increasingly enters the art market, and leaves us questioning the future of human art and creativity.
AI-generated art often feels sterile, lacks meaning, strikes many as stolen, and resists clear categorization within traditional art frameworks.
In this post, I put AI-generated art into context and lend it legitimacy.

As an artificial intelligence safety researcher with a personal interest in art, I’ve come to see safety not just in terms of systems and regulations, but also in how AI touches our most human domains—like creativity, expression, and meaning.
Watching AI generate music, images, and stories with increasing fluency, I began to wonder: What role does AI-generated work play in the art world, and what space remains for human artists?

This essay seeks to place automatically generated art within a broader historical and artistic context, and explore what makes both human and machine-made works meaningful.
The goal is not to defend or dismiss AI, but to better understand it—and to reaffirm the enduring value of the human voice.

Humans create art to express emotion, process experience, connect with others, and leave a trace of their existence. It stimulates the mind, calms the body, strengthens social bonds, and signals intelligence and creativity. Across cultures and centuries, art has served as a way to reach beyond the limits of time.

AI-generated art is not a new phenomenon; AI systems that create art connect to older examples of mechanized creativity. Hero of Alexandria’s mechanical theaters, the drawing automata of Jaquet-Droz, and Johann Kirnberger’s dice-generated compositions were celebrated not merely for what they produced, but for the fact that they produced it automatically.

Seen in this light, AI-generated art is a continuation of the same impulse. We build creative machines to preserve, amplify, and extend our expressive capacities.
In doing so, we don’t just make tools; we craft systems capable of carrying fragments of our vision forward, sustaining our artistic legacy beyond our own presence.

To understand the meaning of AI-generated art, it helps to see it through the lens of conceptual art, particularly the work of Sol LeWitt. In the 1960s and 70s, LeWitt created wall drawings not by executing them himself, but by writing detailed instructions for others to carry out. The idea, not the hand, was the art. Each iteration might vary in execution, but the core concept remained intact. In LeWitt’s words, “The idea becomes a machine that makes the art.”

This shift, where the system or logic behind the artwork holds more value than the physical result, lies at the heart of conceptual art. The artist becomes a designer of thought, a creator of frameworks rather than finished objects.

AI systems follow directly in this tradition. Data scientists curate datasets, craft prompts, and tune algorithms. They develop generative systems capable of producing a wide range of outputs with open-ended variation. The AI becomes a conceptual engine: a machine that makes the art.

This raises a deeper question: if the AI system is the art, what status do its outputs hold? Are they standalone works, or simply echoes—traces of a larger idea? Are they art in their own right, or just artifacts of a process?

To explore this in the context of today’s data-driven AI systems like Stable Diffusion and GPT, it’s helpful to turn to another influential art movement: Surrealism.

Born in the early 20th century from the influence of Freudian psychoanalysis, Surrealism aimed to free the mind from rational constraint. It sought access to the unconscious through methods like automatic writing, dream exploration, and chance operations. Artists such as Magritte, Dalí, and Max Ernst crafted a visual language of paradox, ambiguity, and symbolic distortion, inviting meaning to emerge from it.

AI reflects this surrealist approach in an unexpected way.
When given an empty prompt, the model generates output based entirely on internal statistical associations, without any direct human input.
This creates a kind of maximal interpretive freedom, where structure and meaning emerge from the model’s training process rather than user intent.
As the user adds more detail to a prompt, this freedom narrows—but doesn’t disappear.
Surreal qualities arise in the gaps: the parts left undefined, where the model fills in with associations, recombinations, and unexpected interpretations.
In this sense, AI’s surrealist potential emerges most vividly in what the prompt leaves open or undefined.

Some might say this expression arises from a machine’s version of the “unconscious.”
But this analogy is flawed.
These systems have no inner life, no intention, and no awareness.
Their “creativity” is statistical. Still, this doesn’t mean the results are meaningless.

What gives AI output its strange power is the nature of its training data: human-made content. These systems don’t produce individual or personal expressions—they reflect our shared culture, a collective mirror of human creativity.
They remix what we’ve written, drawn, and composed.
In doing so, they reflect back a distorted but revealing portrait of who we are.

AI becomes a conceptual mirror—less like a self-portrait, more like a cultural Rorschach test. It reveals what recurs in the data: dominant narratives, underlying patterns, and familiar themes.

This collage-like quality gives AI-generated surrealism its own legitimacy and meaning.
It’s not the vision of one artist, but a collage of many influences, woven together by a system that doesn’t understand what it’s doing.
In this way, AI surrealism isn’t about exploring the inner life of a single, intentional artist, as human art often is, but something closer to a shared inner landscape, shaped by countless human experiences.
It reflects not personal vision, but aggregated patterns across culture and history.
The AI surrealist process allows these patterns to surface.
Unlike human artists, who draw from individually lived experience and introspection, AI generates by remixing fragments of the collective.

This collective remixing, however, brings us to critical questions of copyright.
If AI is drawing from the cultural past, where do we draw the lines?
Yet even here, the questions aren’t new.
As philosophers like David Hume argued, ideas arise from experience. Even the idea of an angel, they noted, is just a human figure with wings. We don’t invent from nothing—we recombine what we’ve known. Others may debate how ideas form (especially Kant), but the key point here is this: it is rarely possible to say which experiences, if any, shaped a given brushstroke.

History affirms this. Shakespeare borrowed plots. Delacroix copied Rubens. Duchamp reframed a urinal. Even Newton’s famous line about “standing on the shoulders of giants” was a recycled metaphor. Creativity has always involved reinterpretation and remix.

This perspective reframes the copyright debate: if human artists have always drawn from the past, why should AI be any different? The key difference is that with AI, we can at least document which data the system learned from. But as long as the training data is publicly accessible, using it to learn and create isn’t theft—it’s part of the ongoing evolution of culture.

What truly matters is not who—or what—created the artwork, but how it’s used and understood within our legal and cultural systems. Questions of authorship are still hotly debated and vary across jurisdictions. AI itself can’t hold copyright, and developers are usually protected by other rights. In many cases, the person who initiates the creative process—the end user—is seen as the author. So the focus shifts from the maker to the meaning and impact of the final work. If an AI closely imitates a human artist’s style, it might raise red flags. But it might also mark the beginning of something new—a fresh artistic movement, a new “ism” in the making.

Consider, for example, a contemporary artist named Bubens, known for a distinct, abstract visual language. As AI systems begin to replicate aspects of Bubens’ style—whether through direct prompts or emergent patterning—others, both human and AI, start to adopt and adapt this aesthetic.

Just as Cubism evolved from Picasso, Bubism originates with Bubens. But generative systems accelerate Bubism’s spread, amplifying the style’s reach and inspiring new directions. In this way, AI is not the villain, but the amplifier—a collaborator that helps turn a personal vision into a cultural movement.

Last but not least, to bring everything together, we turn to the core distinction between human art and today’s AI art: AI-generated art is collective by nature, while human-made art is individual.

Today’s AI systems are trained on vast datasets drawn from countless artists, cultures, and histories. When an AI creates, it is not channeling a singular voice—it is stitching together traces of many. Even its most original-seeming works are, in truth, mosaics built from fragments of human culture. AI-generated art is collage. It’s a remix.

By contrast, human art originates from a specific point of view. Even when we borrow or sample, we do so with intent, with memory, with lived experience. A painting, a poem, or a melody made by a person is shaped by their fears, joys, and traumas. It carries the weight of one individual trying to speak.
Human art is always autobiographical. That is what gives it its pulse.

So, while AI may flood the world with competent—even impressive—output, it cannot replicate the depth of perspective shaped by a human life. It doesn’t reflect uncertainty, doubt, or the tension of holding conflicting thoughts. It doesn’t wrestle with meaning or reflect on its own limitations. It produces, but it does not contemplate.

That’s where the true value of the human artist lies—not in speed or efficiency, but in depth. In making something that could only arise from their own lived experience, shaped by a unique history, context, and way of seeing the world.

Beauty and the Bot: The Evolution of AI Art Criticism

In this post, I explore how AI art criticism has evolved over time, tracing how many of today’s concerns echo longstanding debates, while also highlighting what’s more or less new.

Though it may seem like a modern issue, debates about AI art and creativity date back to the 1800s.
Ada Lovelace, often considered the world’s first computer programmer, argued that machines could never be truly creative because they simply follow instructions—a view that continues to shape how we think about AI art today.

Since then, AI art criticism has evolved dramatically, shifting focus in response to changing technological capabilities.

In the early decades (1960s–1980s), critics largely dismissed computer-generated works as mechanically sterile, cold, formulaic, and lacking the expressive nuance of human-made art. Underlying this was a deeper anxiety that the ideal of the lone human genius was being eroded, even though the artworks themselves were not yet competitive with traditional art.

A landmark moment came with Cybernetic Serendipity (1968), a London exhibition showcasing machine-assisted creativity.
Many at the time believed that only human artists, not machines, could create meaningful art.
The exhibition sidestepped the central question, “Is it art?”, by framing the works as idea-driven demonstrations.
Even then, it hinted at how AI was beginning to challenge traditional notions of art.

Traditional art institutions were slow to accept computer art as legitimate.
Many in the art world continued to view it as a novelty at best, or as a threat to deeply held humanist values at worst.
From the 1960s through the 1980s, AI art was often met with skepticism and resistance. Many critics viewed it as a threat to the sanctity of the human artist, sparking what was widely perceived as a crisis of authenticity.
These works were frequently dismissed for lacking emotional depth or originality.
The idea that art created by an algorithm was missing some ineffable human quality became almost a cliché.
Paintings plotted by machines or poems generated by programs were routinely described as soulless, mechanical, or derivative.
While these works might have resembled art on the surface, they were, in this view, simulations of creativity rather than genuine expression.

By the 1990s and 2000s, however, the conversation had begun to shift. As early style-imitating systems like David Cope’s Experiments in Musical Intelligence (EMI) appeared and began publishing music albums, the debate turned to whether computers were genuinely creative or merely recombining existing patterns. Critics and scholars increasingly questioned not just the outputs, but the processes behind them. Could programmed systems exhibit originality, or were they only echo chambers of prior human input?

Questions of authorship and copyright entered academic discourse, though legal consequences remained mostly hypothetical at the time, given the relatively small datasets and limited public reach of AI work. Still, the theoretical groundwork for today’s debates was being laid.

Futurists like Ray Kurzweil were bullish, suggesting that creativity is ultimately computable and that future AI systems would routinely generate novel art and literature. By the end of the 1990s, the debate had matured: it was no longer simply about whether AI could make art, but about how soon and with what consequences—a notable shift from the speculative anxieties of the 1960s.

At the same time, researchers began to propose more nuanced frameworks for understanding machine creativity. Theoretical work emphasized that creativity isn’t just about novelty, but also about value and intent.
Critics argued that AI art must be evaluated in context: Is the machine working independently, or is it part of a human-AI collaboration? What is the role of the human artist—as programmer, curator, co-creator?

New terms like co-creativity and symbiotic creativity emerged, reflecting the view that the most meaningful AI art might come not from autonomous systems, but from human-machine partnerships.
Rather than displacing the human artist, AI could serve as a tool for expanding creative possibilities. In this light, proponents suggested, AI art might even force us to clarify what we most value in human-made art—be it personal perspective, conceptual depth, emotional resonance, or cultural context.

The 2010s saw a leap with deep learning: GANs and transformers enabled AI to produce images and music that fooled juries and consumers, raising concerns about economic displacement and the erasure of human labor in the creative pipeline. During this period, it became increasingly evident that AI art could not only mimic surface aesthetics but also evoke emotional and intellectual responses, sometimes fooling audiences into perceiving a sense of soul or meaning.
A striking example came in 2016, when a short novel co-written by an AI program in Japan, The Day a Computer Writes a Novel, passed the first round of a national literary competition, surprising judges with its coherent structure, even though it ultimately did not win.
Note: The first known AI-generated piece to win a major global competition came in 2023, when an artist declined the Sony World Photography Award after revealing the work was AI-generated.

By the 2020s, with diffusion models like Stable Diffusion and Midjourney trained on billions of scraped images, the dominant criticism became one of appropriation and theft—models replicating artists’ styles without consent. This shift culminated in legal action, as courts began evaluating whether large-scale dataset scraping constitutes fair use, marking a new era of ethical, legal, and cultural reckoning in AI art. Alongside these legal and economic criticisms, concerns about bias and representation also came to the forefront. Critics and scholars began to expose how AI images often reflect and amplify existing cultural stereotypes—overrepresenting certain body types, skin tones, or aesthetic ideals—while marginalizing others. Since these models are trained on internet-scale data, they tend to reproduce the biases embedded in that content, leading to accusations that AI art can reinforce exclusionary norms or perpetuate harmful tropes under the guise of neutrality.

Overall, there has always been an underlying fear that AI could undermine or replace human artists.
In the early years, AI art was largely dismissed as a novelty or technical gimmick—interesting, but not artistically threatening. However, this perception has shifted dramatically over time.
By the 2020s, with AI capable of producing high-quality, style-specific works at scale, the conversation turned from curiosity to concern, as AI began to pose a real economic, ethical, and cultural threat to human creativity.

Looking ahead, I believe that even if copyright concerns are addressed—say, by training AI exclusively on non-human-made data (however feasible that might be)—the criticism that “AI stole my art” will likely persist, especially when the results are compelling.
When AI outputs lack human-like qualities, their artistic value is questioned.
But as soon as they start resembling the richness, emotion, or technique found in human work, fears of imitation resurface.
Regardless of the safeguards in place, good AI art will always prompt some to argue it draws too heavily from human creativity.

Why Is There Art—and AI Art?

Understanding why there is art—and now AI art—reveals one of the deepest layers of human nature, and helps us grasp what it means when machines begin to mirror our most expressive aspirations.

There is no single reason why humans create art.
Rather, art arises from a rich web of interlocking needs—biological, cognitive, emotional, and existential—that together form one of the most distinct features of the human species.

Art engages the brain’s capacity for imagination and problem-solving. Psychologically, creating art is a form of play and exploration that exercises our cognitive abilities.
From early childhood scribbles to intricate symphonies, art stimulates creativity and nourishes our capacity to think differently.

But art is more than mental exercise. Humans turn to art to express, regulate, and process feelings that are often too complex for everyday language.
It has been shown to reduce stress, anxiety, and even physical pain.

Art is not just for the individual; it is for the group. One of art’s most enduring functions is communication. Long before written language, humans used drawings, dances, and songs to record knowledge and transmit culture. Art preserves shared values and identity. It binds individuals into tribes, nations, and civilizations.

This social function extends to emotional synchronization. Participating in group artistic practices—drumming, singing, storytelling—tends to sync individuals emotionally, fostering cohesion and mutual understanding.
In this sense, art is a survival tool: it enables collaboration and strengthens the social glue needed for group living.

Based on these reasons—art’s role in cognitive stimulation, emotional expression and pain reduction, social cohesion, and cultural transmission—it’s no surprise that evolutionary theorists also see artistic ability as a powerful tool for sexual selection. The capacity to create vivid paintings, moving music, or compelling stories showcases exactly the traits that signal fitness: intelligence, creativity, emotional depth, and social awareness.
From this perspective, art becomes part of a broader reproductive strategy—a display of adaptive qualities that attract mates and increase the likelihood of passing those traits on to future generations.
In turn, this promotes the survival and flourishing of gene pools—and entire groups—that are enriched with these advantageous cognitive and emotional capacities.

Ironically, the intuition that art serves a deeper, even immortal function was already present long before the development of evolutionary theory.
In Plato’s Symposium, Diotima suggests that humans seek immortality through creation, whether in children, philosophy, or art.
Much later, thinkers like Otto Rank and Ernest Becker would echo and expand this idea. Rank saw art as a heroic act against death, and Becker, in The Denial of Death, argued that much of human behavior is driven by the awareness of mortality. Artistic creation, he claimed, forms part of our “immortality projects”—symbolic efforts to outlast our finite lives by leaving a lasting mark.

In this light, art becomes the tool that emerges from biological evolution—shaped by cognitive development, emotional regulation, social bonding, and sexual selection—but transcends its origins.
It reaches beyond survival and reproduction toward legacy, memory, and meaning. Art is where biology touches the eternal.

If art is tied to intelligence, vitality, attractiveness, and immortality, then the urge to equip machines with artistic capabilities is not arbitrary—it is an extension of our own evolutionary and existential impulses. When we build AI artists, we do not merely automate creativity. We pass on the very projects that define us. We attempt to imbue our creations with the traits that make us human—imagination, expression, and desire for legacy.

In giving machines the ability to make art, we hand over our symbols of identity and meaning to something beyond us. Perhaps this is not just about automation, but about continuity.
A new kind of cultural offspring.

And this might help explain our enduring fascination with autonomous agents capable of producing art—a fascination that stretches back centuries (see the upcoming blog post or the AI art article on Wikipedia). In a sense, AI art development could be seen as a kind of Platonic fantasy realized: a merging of offspring creation, philosophical pursuit, and the dream that an artist’s work—or even an entire artistic movement born from their vision—continues to be generated long after their death.
Through machines, the artist’s legacy is no longer merely remembered; it is reanimated.

Unlike human followers, who can die, forget, or lose interest, autonomous AI agents offer the potential of unwavering continuity. They do not age or get distracted—they can preserve, replicate, and perhaps even refine a creative lineage with tireless precision. In this way, they become more than tools; they become vessels of artistic immortality.

While some turn to AI in pursuit of curing disease and overcoming biological death, others use it to transcend mortality in a different way—through the persistence of artistic expression.

AI Art and Copyright

This post builds on the ongoing debate about AI art and copyright. (Be sure to check out the “What is Art?” post first, so we have a common foundation — importantly, the idea that art is primarily about the resonance it creates with the subjective observer, regardless of whether it was created by a human or not.)

David Hume, a renowned 18th-century Scottish philosopher, was one of the leading figures of empiricism — the view that all knowledge arises from sensory experience. Hume argued that everything in our minds — our ideas, feelings, and understanding — can ultimately be traced back to what we perceive through our senses or internal experience. For Hume, reason could not extend beyond the boundaries of experience; it is always rooted in, and limited by, what we encounter in the world.

A nice example from Sophie’s World, a Norwegian novel that I personally really like, illustrates this:
“…In Hume’s time, the idea that angels exist was widespread. By an angel, we understand a male figure with wings. Have you ever seen such a being, Sophie?” “No.” “But you have seen a male figure?” “That’s a stupid question.” “And you have also seen wings?” “Of course, but never on a human being.” “According to Hume, angels are a composite idea. They consist of two different experiences that, however, are not actually composed, but have only been coupled together in the human imagination…

Therefore, an artist, too, is not a creator out of nothing, but a rearranger of experience. In other words, art is the artful recombination of what we already feel and know.

We find various examples of this in history. Delacroix drew inspiration from Rubens and the Venetian Renaissance, favoring color and movement over precise outlines (see, for instance, Wikipedia). The earliest known version of the Romeo and Juliet story comes from Masuccio Salernitano’s Il Novellino (1476), with the tale of Mariotto and Ganozza (see, for instance, Wikipedia). Marcel Duchamp’s Fountain — a standard urinal placed in a gallery and signed with a pseudonym — challenged traditional notions of art (see Wikipedia). Even in pop music, Lady Gaga borrowed from Vittorio Monti’s Csárdás — itself based on a Hungarian folk dance — for the intro of her single Alejandro (see Classic FM).

So we are standing on the shoulders of giants — a concept that dates back to the 12th century and is attributed by John of Salisbury to Bernard of Chartres. Its most familiar and popular expression appears in a 1675 letter by Isaac Newton (see Wikipedia). Even this statement itself is reused and remixed. Every experience we have shapes our future actions and experiences. From an empiricist point of view, the very idea of “pure originality” is a myth.

So, what does this mean for the ongoing debate about AI and copyright violations? From a view shaped by empiricist philosophy, AI systems do what we do: they examine publicly available data, gain experience, and use it to refine and improve themselves.

For example, just as a musician might listen to countless songs to develop their own style, an AI trained on publicly available music learns patterns, structures, and techniques.
Similarly, a painter might study masterpieces in museums, internalizing techniques, color palettes, and compositions, and then blend these impressions into their own original work — just as an AI trained on public visual datasets learns and recombines artistic styles. Or a writer might read hundreds of novels, essays, and poems, unconsciously absorbing narrative rhythms, styles, and ideas, which later reemerge in new combinations in their own writing — just as a language model, trained on publicly available text, weaves together new stories shaped by what it has read.

Therefore, as long as a work is publicly available for viewing (for example, accessible on the internet), it may be used for training an AI system. However, this freedom must be balanced with transparency: AI developers should maintain a publicly available reference database that clearly documents the sources of the training data — a task that is technically feasible. In addition, AI-generated works should be accompanied by a statement indicating that the sources in this database influenced the creation of the generated piece. This goodwill approach would uphold the principles of fair use while ensuring that the origins of inspiration are acknowledged.

Importantly, AI developers should not be required to financially compensate artists simply because their publicly available works contributed to the AI’s training. This principle reflects how creativity itself works: humans absorb and are influenced by countless works of art, often without consciously tracing every source of inspiration. In much the same way, AI models internalize patterns through exposure. Learning through observation — without direct copying — is a natural and essential process for both human and machine creativity. Artists do not pay every creator of the artworks they encounter.
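
To make the proposal a bit more concrete, here is a minimal, hypothetical sketch of what a single entry in such a reference database might look like. The field names and example values are my own illustrative assumptions, not an existing standard or anyone's actual implementation.

```python
# Hypothetical sketch of one entry in a public training-data reference database.
# Field names and example values are illustrative assumptions, not an existing standard.
import json
from dataclasses import dataclass, asdict

@dataclass
class TrainingSourceRecord:
    source_url: str      # where the work is publicly viewable
    creator: str         # attributed artist or author, if known
    license_terms: str   # terms under which the work was published
    date_collected: str  # when the item entered the training corpus
    media_type: str      # e.g. "image", "text", "audio"

record = TrainingSourceRecord(
    source_url="https://example.org/gallery/sunrise.jpg",  # placeholder URL
    creator="Jane Example",
    license_terms="publicly viewable, all rights reserved",
    date_collected="2024-05-01",
    media_type="image",
)

# A generated work could then carry a short provenance statement pointing back
# to the full, publicly accessible database of such records.
print(json.dumps(asdict(record), indent=2))
```

Whether such records live in a JSON file, a relational database, or something else matters far less than the fact that they are public and searchable.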

Finally, the resulting artwork should be evaluated solely under existing copyright laws, without regard to whether a human or an AI created it.

If an artwork was generated in the style of a specific artist, such as Nobuyoshi Araki, it should be regarded not simply as a copy, but as part of a new artistic movement — for example, Arakiism. This framing both honors the original creator, recognizing their style as an invention worthy of its own lineage, and allows for the natural evolution of the style through new interpretations and transformations. Just as Impressionism or Cubism began with a few individuals and grew into broader movements, styles pioneered by individual artists can, through widespread engagement, become shared aesthetic languages. Viewing such works as contributions to an -ism rather than as imitations respects the dynamic, living nature of artistic creation and gives rightful credit to the originator.

In summary, from an empiricist perspective, human creativity is fundamentally a process of recombination — we reshape what we experience into something new. AI, trained on data, mirrors this same process. Therefore, AI should be permitted to learn from all publicly available data, provided that developers maintain a transparent, publicly accessible database documenting the sources. Copyright enforcement should continue as it did before the rise of AI technologies, focusing solely on the nature of the work itself, not on who or what created it. If an AI system user generates an artwork in the style of a specific artist, it should be recognized as part of the [Artist’s Name]ism art movement, in order to credit the original inventor of the style properly.

What is Art?

This post is a rotation of thoughts around a familiar term: an attempt to trace the flicker of meaning that arises when we say art in the era of AI-generated content.

Every day, content is created, accessed, and shared across the internet. But increasingly (see Link), people no longer make the content—it’s generated by AI. What once required time, imagination, and expression can now be produced in seconds by algorithms.

And the results are convincing. In studies by researchers like Elgammal, participants were shown paintings generated by machines and asked to judge their origin. Many couldn’t tell whether the works were made by humans or AI—and in some cases, they even found the AI art more compelling. Similar patterns appear in music and literature, where listeners and readers routinely fail to spot the difference. The boundary between human and machine-made creativity is blurring fast.

At first glance, this seems like a technical milestone: AI can now mimic humans. But it has far-reaching implications, touching everything from politics to perception.
In this post, however, I want to focus on one particular consequence: what this transformation means for art and what art actually becomes.

To have a common ground, let’s start with the Oxford Dictionary’s definition of art: “Art is the use of the imagination to express ideas or feelings, particularly in painting, drawing, or sculpture.”

However, in a world where we can no longer be certain whether art was created through human imagination or by AI, this definition becomes somewhat blurry. We risk losing our ability to determine what constitutes art.

Therefore, I would like to step back and reflect.

We often speak of art as something made, something external.
Maybe art isn’t the object itself, but what occurs when that object meets a mind, a memory, a mood.

We humans are subjective beings who are fundamentally separate, locked in our own heads, experiences, and perspectives.

Art awakens subjectivity.
It invites our subjectivity to rise, to respond, to wrestle.
And that response is never universal.
One person weeps, another shrugs.

Art is the crack in a glacier.
The fractal geometry of a tree struck by lightning.
When we encounter them intentionally, we may awaken with a sense of resonance.

Art is a private, resonant reckoning.
Art allows someone’s interiority—their feelings, dreams, memories, breakings, and becomings—to leak into a medium.
Art gives shape to these complexities.

Art isn’t static. A painting seen at twenty is not the same when seen at fifty. Not because the paint changed, but because we did.

When art is shared, it becomes something else, too—an invitation, a bridge, a chance for resonance between two subjects.
And what passes through it isn’t our subjectivity—it’s the trembling echo of our subjectivity.
It becomes a tension between intention and interpretation: art lives not just in the subject who perceives it, but also in the echo of the one who created it.

However, a perfect sculpture, carved by a fallen rock and left in a cave for millennia, only becomes art when it is intentionally encountered by a resonating subject.

Art is not what is created, but what is revealed in the intentional encounter with a resonating subject. And at this point, AI-generated art is allowed to enter the picture.

Art: A human experience arising from the intentional encounter with a created form that evokes resonance, regardless of its origin or how it was made.

(I may update this definition, but for now, I am fine with it.)

Creation in the Presence of the Machine

A reflection on what it means to create in the presence of AI.

AI offers words, answers, outputs—sometimes one hits like lightning, something you’d never find alone.
When anyone can generate something “impressive” in the blink of an eye, it can cheapen the perceived value of effortful work.
In The Hitchhiker’s Guide to the Galaxy, a supercomputer named Deep Thought spends millions of years calculating the Answer to Life, the Universe, and Everything—”42.”
But when asked, “What’s the actual question?”, it has no idea.

The right question is the one that stops you cold, stirs something buried, or breaks you open. It is not just about logic—it’s about timing. These are the questions that awaken the soul.
Answers without real questions are just noise.
The real questions come at a cost, and that cost makes you more real.
These questions pull you inward.

The “answer” to the right question isn’t a sentence—it’s a path. A transformation.
The journey itself becomes a stance—a statement.
It places you in relation to the world.
You’re engaging your whole being.
When you’re deep in it, you enter a state of immersion that AI can’t offer.
It’s meditative, even spiritual.

On this path, AI can offer sparks—but not fire.
It can offer answers—but not meaning.
It can mimic the path—but cannot walk it with a soul.
You’re standing inside the tension: The beauty of what AI can do, and the holiness of what only you can do.

But what happens in the moment when the outside voice—even an artificial one—grows louder than your own?
When AI gives you something undeniably good, something sharper or more beautiful than what you had in mind, and you’re tempted to follow its thread instead of your own.
It might mimic your favorite artists. It might echo your style.
And at first, it flatters you. It excites you.
But slowly, quietly, it begins to replace you.
It’s like wearing a costume that fits too well.
You admire how it looks.
You start to believe in the reflection.
But the longer you wear it, the easier it is to forget what your own skin feels like.
The danger isn’t imitation.
It’s amnesia.

Sometimes, allowing an unexpected idea to lead you can open new doors. That’s part of being a creator, too—being surprised, surrendering control to something beyond you. But there’s a difference between: Discovery (“This AI spark helped me unlock something I didn’t know I felt.”) and Displacement (“This idea is better, so I’ll abandon mine.”) The former is growth. The latter is disconnection.

Therefore, let us use AI.
But let us not trade our becoming for convenience.
Do not let speed replace depth.
And do not let illusion drown out truth.

AI Surrealism and the Collective Mind

A post about how I became interested in AI-generated art.

I used to think that AI art had no meaning. It seemed like a hollow remixing of internet fragments—a technically impressive, but ultimately soulless, act of collage. There was no intention, emotion, or story behind it—just noise shaped into images.

However, that changed while I was experimenting with AI hallucinations for my safe machine learning research, specifically using empty string prompting in the Stable Diffusion image generator. As I worked, my thoughts drifted back to a past trip to Brussels, where we had visited the Magritte Museum. The surrealist imagery had stayed with me—those quiet, paradoxical scenes that seemed to exist just outside the bounds of logic. Something in them reminded me of these AI hallucinations: vivid, disjointed, and often lacking clear coherence.

Around the same time, I had been reading Carl Jung. His theory of the collective unconscious—an inherited layer of the psyche filled with universal symbols and archetypes—began to resonate in a new way, pointing toward something I would call AI surrealism.

Surrealism, born in the early 20th century from the ferment of Freudian psychoanalysis, intended to liberate the human mind from the constraints of logic, reason, and social convention. At its core, surrealism aimed to access the unconscious mind, often through automatic writing, dream analysis, or chance operations.

Artists like René Magritte, Salvador Dalí, and Max Ernst gave surrealism its visual language, combining precision with paradox, and clarity with dreamlike distortion. Across their work, certain traits emerged: unexpected juxtapositions, symbolic imagery, and a dislocation of reality that aimed to bypass reason and tap directly into the unconscious.

To simulate the surrealist technique in an AI system, we need to do something similar. This involves removing as many constraints from the system as possible. Instead of providing a carefully constructed prompt, we give it nothing at all: an empty string. In response, the AI generates content by drawing from the statistical patterns in its training data, producing unexpected forms, combinations, and associations from deep within its learned representation space.
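
For readers who want to reproduce this kind of experiment, here is a minimal sketch using the Hugging Face diffusers library. The checkpoint name, settings, and output file name are assumptions about a typical setup rather than a record of my exact configuration.

```python
# Minimal sketch: "empty prompt" generation with Stable Diffusion via the diffusers library.
# The checkpoint name and settings are assumptions about a typical setup, not a prescription.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed publicly available checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # on CPU-only machines, drop torch_dtype and use .to("cpu") (much slower)

# An empty string gives the model no semantic guidance; whatever appears is
# driven entirely by statistical associations learned from the training data.
image = pipe(prompt="", num_inference_steps=50).images[0]
image.save("empty_prompt_sample.png")
```

Repeating the call with different random seeds is, in effect, sampling from the model's learned representation space with no user intent attached.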

Now, one might argue that this is simply the AI’s version of an unconscious generative process. But since these models lack consciousness altogether—no intention, no awareness, no inner life—calling it “unconscious” is likely a misnomer, or at best, metaphorical. Overall, this process may appear meaningless by itself: just probabilistic noise filtered through a vast data structure.

However, things shift when we consider what AI has been trained on. These models are built on human-generated content—our languages, our stories, our images, our symbols. They are shaped by our experiences, emotions, and the cultural artifacts we leave behind.

This is where Carl Jung’s concept of the collective unconscious becomes relevant. According to Jung, beneath our individual psyches lies a shared layer of the mind, inherited and universal, filled with archetypes—symbols and motifs that recur across myths, dreams, and cultures. AI may not experience things, but it is trained on our experiences. In doing so, it may accidentally surface these same archetypes because we keep embedding them in the data it learns from.

So, when we look at AI surrealism, we are not exploring the inner life of a single AI artist, but something closer to a shared inner landscape—reflections of our collective patterns. The surrealist process allows these common threads to surface in perhaps the most “natural” way machines can offer: through chance, ambiguity, and the absence of control.

This gives AI-generated surrealist imagery a unique quality—something no individual human artist could fully replicate. It is not the vision of a single mind but a wild collage of influences pulled from us all—our cultures, symbols, experiences, ourselves—woven together by a system that does not understand any of it yet somehow conjures something uncannily familiar.

The Tower of Babel 2.0

In the science fiction franchise The Matrix, artificial intelligence (AI) has taken over the world and uses human beings as a source of energy. In this fictional setting, humans are kept in suspended animation and connected to a virtual reality environment called the Matrix. Their bodies are stored in pods and are connected to the Matrix through neural interfaces, which build a virtual world that the humans perceive as real. The AI entities maintain this system to harvest the thermal energy and bioelectricity generated by the human body. In essence, humans have been reduced to the role of biological batteries that power the machine world. This grim reality is hidden from the humans by keeping them in the Matrix, a simulated reality that keeps their minds occupied while their bodies are exploited.

The way that AI works now is that it farms intelligence and creativity from humans. AI systems rely on user-generated data to train and fine-tune themselves. This can range from simple data points like clicks and likes to more complex inputs like user-created content, books, and problem-solving strategies. In this sense, AI is cannibalizing human intelligence and creativity, improving its capabilities level by level.

Just as the advent of calculators led to a decline in the practice of mental arithmetic and the widespread use of smartphones has been associated with a reduction in fine motor skills, AI systems, especially those designed to assist in decision-making, problem-solving, or automating complex tasks, can potentially engender a phenomenon known as cognitive offloading. This refers to the increasing human reliance on AI to perform mental functions, potentially leading to a diminished capacity in cognitive and meta-cognitive skills. Such skills, which include planning, self-assessment, and problem-solving, are not merely task-specific but are foundational to human intelligence. They are typically developed and refined through sustained practice. As AI systems take over greater responsibility for these cognitive tasks, humans may find fewer opportunities to exercise and hone these skills, resulting in a gradual decline in cognitive abilities.

Therefore, the ambition to build superintelligence could mirror the ancient myth of the Tower of Babel—a human ambition to transcend limits and attain the divine. In that story, humanity sought to reach the heavens, thwarted by confusion and division. Similarly, in our pursuit of superintelligence, we risk constructing a monument to hubris, where the drive to surpass human cognition may result not in enlightenment but in profound disarray. As we build this modern tower fueled by AI and data, we may inadvertently disconnect from the very cognitive foundations that make us human, leading not to a higher understanding but to a world where our minds are both fragmented and bewildered.