Human and AI Artist Cooperation

Over the last 7 decades, human–AI collaboration in art has evolved from simple tools to dynamic partnerships.
Today, AI acts as an instrument, autonomous generator, muse, co-creator, restorer, and data-driven learner, shaped by both technical progress and artistic intent.

In its earliest stages, AI functioned as an instrument, executing processes strictly defined by the human artist.
It offered no ideas or interpretation of its own, but extended human creativity through formal systems like rule-based logic or randomness.
These early experiments weren’t about delegating authorship, but about testing whether creativity itself could be systematized, blurring the line between artistic intuition and algorithmic procedure.

Soon, however, AI began producing autonomous outputs with minimal human input beyond the initial setup.
In this mode, works like Annie Dorsen’s Hello Hi There (2010) emerged, where two chatbots delivered unscripted onstage dialogue, showcasing AI’s potential as a performer within human-defined boundaries.
In the art project, The Next Rembrandt (2016), a team used AI trained on Rembrandt’s works to produce a new painting in his style.
AI can also serve as a muse, not producing the final work, but sparking new creative directions through its unexpected outputs. For instance, Google’s DeepDream (2015) generated surreal, dreamlike images by amplifying patterns in existing photos.
While the results were often chaotic, artists used them as inspiration.

In co-creative modes, humans and AI influence each other in real-time or through iteration. In 2002, Lynn Hershman Leeson’s Agent Ruby, an AI web character for SFMOMA, learned and evolved through conversations with users, blending interactive art with cinema.

Beyond creation, AI also plays an important role in restoration and completion, helping artists reconstruct unfinished works.
A notable example is the 2021 project to complete Beethoven’s unfinished 10th Symphony.
An AI system was trained on Beethoven’s compositions to generate stylistically consistent passages, while human musicologists curated the material, arranged the movements, and ensured historical and musical coherence.

A less direct but increasingly controversial form of “collaboration” occurs when data-driven AI systems learn from large collections of human-created work.
In this mode, AI builds its generative abilities by absorbing patterns and styles from existing art.

What unites these different modes of cooperation is the shifting distribution of creative agency. AI challenges artists to rethink their role in the creative process—and forces audiences to reconsider what art means.

Beauty and the Bot: The Evolution of AI Art Criticism

In this post, I explore how AI art criticism has evolved over time, tracing how many of today’s concerns echo longstanding debates, while also highlighting what’s more or less new.

Though it may seem like a modern issue, debates about AI art and creativity date back to the 1800s.
Ada Lovelace, often considered the world’s first computer programmer, argued that machines could never be truly creative because they simply follow instructions—a view that continues to shape how we think about AI art today.

Since then, AI art criticism has evolved dramatically, shifting its focus in response to changing technological capabilities.

In the early decades (1960s–1980s), critics largely dismissed computer-generated works as mechanically sterile, cold, formulaic, and lacking the expressive nuance of human-made art. Underlying this was a deeper anxiety that the ideal of the lone human genius was being eroded, even though the artworks themselves were not yet competitive with traditional art.

A landmark moment came with Cybernetic Serendipity (1968), a London exhibition showcasing machine-assisted creativity.
Many at the time believed that only human artists, not machines, could create meaningful art.
The exhibition sidestepped the central question, “Is it art?”, by framing the works as idea-driven demonstrations.
Even then, it hinted at how AI was beginning to challenge traditional notions of art.

Traditional art institutions were slow to accept computer art as legitimate.
Many in the art world viewed it as a novelty at best, or as a threat to deeply held humanist values at worst.
Critics saw it as an assault on the sanctity of the human artist, sparking what was widely perceived as a crisis of authenticity.
The works were frequently dismissed for lacking emotional depth or originality, and the idea that art created by an algorithm was missing some ineffable human quality became almost a cliché.
Paintings plotted by machines or poems generated by programs were routinely described as soulless, mechanical, or derivative.
While such works might resemble art on the surface, they were, in this view, simulations of creativity rather than genuine expressions.

By the 1990s and 2000s, however, the conversation had begun to shift. As early style-imitating systems like David Cope’s Experiments in Musical Intelligence (EMI) appeared and began publishing music albums, the debate turned to whether computers were genuinely creative or merely recombining existing patterns. Critics and scholars increasingly questioned not just the outputs, but the processes behind them. Could programmed systems exhibit originality, or were they only echo chambers of prior human input?

Questions of authorship and copyright entered academic discourse, though legal consequences remained mostly hypothetical at the time, given the relatively small datasets and limited public reach of AI work. Still, the theoretical groundwork for today’s debates was being laid.

Futurists like Ray Kurzweil were bullish, suggesting that creativity is ultimately computable and that future AI systems would routinely generate novel art and literature. By the end of the 1990s, the debate had matured: it was no longer simply about whether AI could make art, but about how soon and with what consequences—a notable shift from the speculative anxieties of the 1960s.

At the same time, researchers began to propose more nuanced frameworks for understanding machine creativity. Theoretical work emphasized that creativity isn’t just about novelty, but also about value and intent.
Critics argued that AI art must be evaluated in context: Is the machine working independently, or is it part of a human-AI collaboration? What is the role of the human artist—as programmer, curator, co-creator?

New terms like co-creativity and symbiotic creativity emerged, reflecting the view that the most meaningful AI art might come not from autonomous systems, but from human-machine partnerships.
Rather than displacing the human artist, AI could serve as a tool for expanding creative possibilities. In this light, proponents suggested, AI art might even force us to clarify what we most value in human-made art—be it personal perspective, conceptual depth, emotional resonance, or cultural context.

The 2010s saw a leap with deep learning: GANs and transformers enabled AI to produce images and music that fooled juries and consumers, raising concerns about economic displacement and the erasure of human labour in the creative pipeline. During this period, it became increasingly evident that AI art could not only mimic surface aesthetics but also evoke emotional and intellectual responses, sometimes fooling audiences into perceiving a sense of soul or meaning.
A striking example came in 2016, when a short novel co-written by an AI program in Japan, The Day a Computer Writes a Novel, passed the first round of a national literary competition, surprising judges with its coherent structure, even though it ultimately did not win.
Note: The first known AI-generated piece to win a major global competition came in 2023, when an artist declined the Sony World Photography Award after revealing the work was AI-generated.

By the 2020s, with diffusion models like Stable Diffusion and Midjourney trained on billions of scraped images, the dominant criticism became one of appropriation and theft—models replicating artists’ styles without consent. This shift culminated in legal action, as courts began evaluating whether large-scale dataset scraping constitutes fair use, marking a new era of ethical, legal, and cultural reckoning in AI art.

Alongside these legal and economic criticisms, concerns about bias and representation also came to the forefront. Critics and scholars began to expose how AI images often reflect and amplify existing cultural stereotypes—overrepresenting certain body types, skin tones, or aesthetic ideals—while marginalizing others. Since these models are trained on internet-scale data, they tend to reproduce the biases embedded in that content, leading to accusations that AI art can reinforce exclusionary norms or perpetuate harmful tropes under the guise of neutrality.

Overall, there has always been an underlying fear that AI could undermine or replace human artists.
In the early years, AI art was largely dismissed as a novelty or technical gimmick—interesting, but not artistically threatening. However, this perception has shifted dramatically over time.
By the 2020s, with AI capable of producing high-quality, style-specific works at scale, the conversation turned from curiosity to concern, as AI began to pose a real economic, ethical, and cultural threat to human creativity.

Looking ahead, I believe that even if copyright concerns are addressed—say, by training AI exclusively on non-human-made data (however feasible that might be)—the criticism that “AI stole my art” will likely persist, especially when the results are compelling.
When AI outputs lack human-like qualities, their artistic value is questioned.
But as soon as they start resembling the richness, emotion, or technique found in human work, fears of imitation resurface.
Regardless of the safeguards in place, good AI art will always prompt some to argue it draws too heavily from human creativity.

Conceptual Art and AI Art

In conceptual art, the artwork is the idea, the process, or the generative system itself, not the result of its execution.
For example, although Sol LeWitt passed away in 2007, his wall drawings can still be recreated by anyone following his written instructions, because the essence of the work resides in the conceptual framework, not in any individual execution.
This means that multiple executions of the same idea are equally valid, as they serve to activate the conceptual gesture embedded in the work.
Thus, the “artwork” is dematerialized—it is a proposition, a challenge, a question that lingers in the realm of thought rather than objecthood.
In this view, the artwork exists in the mind of the artist and the viewer, in the invisible exchange between idea and perception, rather than in any specific form.

Following this logic, in data-driven AI art, the artwork is the system—the dataset, the model, the algorithm—that generates infinite possible executions.

Conceptual art breaks open the definition of art. It frees the artist from traditional conventions and the pretext of beauty and skill, and it criticizes the establishment.

In this tradition, AI art can be seen as an extension of this break—it dissolves authorship even further by transferring the act of creation to autonomous systems that were trained on our tastes, experiences, and cultural memories.
AI art holds up a distorted mirror to ourselves, shaped by these collective traces beyond individual control.
It challenges the romantic image of the artist by presenting creation as a recombination of the collective.
AI art forces us to see ourselves as a culture of patterns, symbols, and repetitions rather than unique, autonomous creators.
The mirror reflects not only what we consciously produce, but also what we unconsciously repeat—prejudices, stereotypes, trends.
Rare, marginal, or non-dominant expressions may be erased, while dominant cultural patterns are amplified.

This collective mirror is itself a conceptual gesture.
It exposes the invisible systems and data infrastructures that now shape art, culture, and perception.
The mirror is not the image—it is the process that generates the image, forcing us to reflect on our own participation in these systems.
Thus, AI art continues the conceptual art tradition by turning the system itself into a mirror of cultural processes, rather than a medium for self-expression.

While conceptual art and its integration with AI open intriguing new perspectives for art-making, not every artist should follow this path.
If an artist wants to survive in today’s world, they should resist simply mirroring the mirror of the collective.
Artistic relevance comes not from reacting to the trends of the day, nor from virtue signaling or cheap provocation, but from engaging with deeper, timeless, and personal questions that resist easy answers.
Art that merely seeks to affirm or outrage often lacks the layers, complexity, and self-criticism that give it enduring value.
Instead, art should become so personal, so singular, that the collective mirror cannot reflect it back.

AI Art History

When I first explored AI art, I was surprised to learn that algorithmic and automatically generated art has fascinated artists for centuries.

As early as ancient Greece, artists followed simple yet strict rules to draw mesmerizing maze-like patterns called meanders onto ceramic mugs and plates.
At the same time, fractals and recursive patterns, now mostly associated with computer graphics, were woven into textiles that enveloped human bodies and formed jewelry that spiraled around their wrists and necks.
Centuries later, in the first century AD, Hero of Alexandria wrote Automata, detailing functional mechanical theaters and hydraulic devices.
One such automaton theater could perform a multi-minute puppet show, with characters animated by a system of ropes, accompanied by drum-produced sound effects—all not powered by electrical circuits, but driven solely by the silent force of gravity.

Recursion was also a foundational principle in Islamic art, giving rise to intricate geometric tiling, tessellations, and flowing arabesques.
In the 9th century, the Abbasid caliph al-Ma’mun adorned his palace in Baghdad with a silver and gold mechanical tree, where metal birds sang automatically from swaying branches.
Around the same time, the Banu Musa brothers invented an automatic flute player, considered one of the earliest programmable machines. Powered by steam, the device produced flute sounds and allowed users to adjust its settings to create different musical patterns.

Clockwork automata flourished in Renaissance Europe between the 15th and 17th centuries.
Complex mechanized figures, automaton clocks, and animated tableaux became prominent expressions of early “algorithmic” kinetic art.
One striking example is the astronomical clock in Prague, built in 1410.

Johann Kirnberger’s Musical Dice Game of 1757 is considered an early example of a generative system based on randomness.
Dice were used to select musical sequences from a numbered pool of previously composed phrases.
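
The underlying mechanism is simple enough to simulate. Below is a minimal, hypothetical Python sketch in which dice rolls pick phrases from a numbered pool; the phrase labels and pool size are placeholders rather than Kirnberger’s actual tables.

```python
import random

# Placeholder pool of pre-composed musical phrases, indexed by die value.
phrase_pool = {
    1: "phrase A", 2: "phrase B", 3: "phrase C",
    4: "phrase D", 5: "phrase E", 6: "phrase F",
}

def roll_die():
    """Simulate one roll of a six-sided die."""
    return random.randint(1, 6)

def compose(measures=8):
    """Assemble a short piece by letting dice pick one phrase per measure."""
    return [phrase_pool[roll_die()] for _ in range(measures)]

print(compose())
```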

But for me, the story of automatically generated art truly comes to life with the emergence of autonomous drawing machines—most notably the Draughtsman, developed by Pierre Jaquet-Droz together with his son Henri-Louis and Jean-Frédéric Leschot between 1768 and 1774.
Just imagine sitting at a table with a little porcelain-white boy, his face locked in a frozen, lifeless stare, lips curled into a faint, unsettling smile that never touches his hollow, empty eyes.
Next to him sits a tiny inkwell, the delicate quill lying motionless in his pale, rigid hand. You stare into his hollow gaze—and then, from deep within his clockwork chest, you hear a faint, almost organic click.
His tiny hand begins to move with lifelike precision, each stroke whispering echoes of a forgotten past, as if some restless ghost were reaching out, desperate to reveal something long buried.
Because he must obey.
Because he must draw.
Stroke by stroke, the pale paper surrenders to the ink. Slowly, agonizingly slowly, shapes begin to emerge—curves, shadows, contours. What begins as nothing becomes something disturbingly familiar.
A face. Regal. Haunting.
The ghostly image of a king emerges from the void.
It is Louis XV, untouched by time, by death, by decay. Drawn from the trembling hand of a child who never lived, and who will never die—condemned for all eternity to bring his king back to life, over and over, in a loop of perfect, obedient misery.

But yeah… to be fair, the little boy doesn’t just produce haunted kings—he can also draw a dog, a royal couple, and even a Cupid scene. And he isn’t entirely alone in his mechanical eternity; beside him stand some equally tireless companions, including the Musician, a woman-shaped automaton who plays the organ with the same lifeless, obedient precision, and another boy who endlessly writes custom letters—some of which read HELP ;0

But the story doesn’t end in that 18th-century workshop. It continues.
Maillardet’s automaton, created around 1800, pushed the boundaries further, capable of producing multiple intricate drawings and even writing poems, expanding the mechanical imagination beyond mere portraits.

The 19th century becomes the golden age of mechanical music—an era when music boxes, orchestrions, barrel organs, and player pianos fill salons, streets, and grand ballrooms with melodies performed not by human hands, but by intricate machines.
These devices no longer rely on gears alone but evolve into systems driven by punched cards, pinned cylinders, and perforated paper rolls. Their compositions become complex, layered, and repeatable—combinatorial engines of sound where a simple change of the input medium reshapes the entire performance.
No longer bound to a single king’s portrait or a solitary melody, these machines hint at an unsettling future where art, once the domain of fragile human touch, can now be summoned by mechanism, electricity, and mathematics, endlessly repeating, endlessly perfect, endlessly detached from the warmth of breath and flesh.

In the early 1950s, Christopher Strachey used the Ferranti Mark 1—the world’s first commercially available stored-program digital computer—to create some of the earliest examples of computer-generated art. Strachey programmed the Mark 1 to play a medley including God Save the King, Baa Baa Black Sheep, and In the Mood. In 1952, he also developed a love letter generator.
Illiac Suite (1957), later retitled String Quartet No. 4, is widely considered the first musical score composed by a computer. Lejaren Hiller and Leonard Isaacson programmed the ILLIAC I at the University of Illinois to generate material for the piece, which features four movements—each experimenting with different aspects of composition, from melody and harmony to rhythm and generative algorithms like Markov chains.
In 1959, German mathematician Theo Lutz programmed a computer to generate stochastic texts, marking an early exploration of computer-generated poetry.

In 1964, Jeanne Hays Beaman and four other female artists, together with Paul Le Vasseur, created a dance-randomization computer program. By inputting 20 variations each of time, space, and movement, the computer generated around 70 text-based dance sequences in just four minutes.
In 1965, inventor Ray Kurzweil developed software capable of recognizing musical patterns and synthesizing new compositions from them. Kurzweil demonstrated the computer on the quiz show I’ve Got a Secret that same year.
In February 1965, Georg Nees presented some of the world’s first computer-generated graphics, created using the newly introduced Zuse Graphomat Z64 plotter. His exhibition, titled Computer Graphik, was held at the Study Gallery of Stuttgart College, marking one of the earliest public presentations of computer art; Frieder Nake exhibited his own plotter graphics later that year.
In 1966, Nees extended his work into computer sculptures, using programs to control milling machines that produced sculptural forms instead of traditional workpieces. His computer-generated sculptures and graphics were later exhibited at the 1969 Nuremberg Biennale.

This period coincided with the rise of conceptual art in the 1960s, where the idea or concept behind the work took precedence over the execution. Artists embraced Sol LeWitt’s notion that “the idea becomes a machine that makes the art.”
Computer art, with its reliance on pre-programmed rules and generative systems, was increasingly grouped under the emerging umbrella of post-conceptual art—a movement that extended conceptual art’s legacy by embracing algorithmic, generative, and computational methods as legitimate artistic processes.

Around the beginning of the 1970s, artist Harold Cohen began developing AARON, a series of computer programs designed to autonomously create original artistic images, distinguishing them from earlier programs that relied heavily on human input.
Emerging from Cohen’s central question, “What are the minimum conditions under which a set of marks functions as an image?”, AARON was continuously developed from 1972 through the 2010s.

In 1981, David Cope was commissioned to compose an opera but struggled with severe writer’s block. To overcome it, he began developing Experiments in Musical Intelligence (EMI) on an Apple desktop computer, aiming to analyze and replicate his own compositional style. Eight years later, with EMI’s assistance, he completed the opera in just two days.
Building on this success, Cope adapted EMI to emulate the styles of other composers, including Bartók, Brahms, Chopin, Gershwin, Joplin, Mozart, and Prokofiev. This work led to the release of Bach by Design, an album of computer-composed music performed by a Disklavier. His following album, Classical Music Composed by Computer, featured works performed by human musicians.

In 1984, a program named RACTER (short for “raconteur”) authored the book The Policeman’s Beard is Half Constructed, a collection of surreal poems and prose. Aside from the introduction by its human creator (William Chamberlain), the book’s content was “entirely written by a computer program,” making it the first book ever credited to an AI author.

In the early 1990s, simple prototypes for computer-generated puns were developed using natural language generation systems such as VINCI. In 1994, Graeme Ritchie and Kim Binsted introduced JAPE (Joke Analysis and Production Engine), a program designed to generate question-answer-type puns from a general, non-humorous lexicon.
In 1999, Scott Draves and a team of engineers created Electric Sheep, a free, open-source screensaver and volunteer computing project for animating and evolving fractal flames. The system used AI to generate endless animations that evolved by learning from user preferences across a network of connected computers.

In 2001, Draves was awarded the Fundación Telefónica Life 4.0 prize for Electric Sheep.
In 2002, Lynn Hershman Leeson created Agent Ruby, an artificial intelligence web character commissioned by SFMOMA for its online platform e.space. Designed as an “e-dream portal,” Agent Ruby could converse with online users, with these interactions shaping her memory, knowledge, and moods over time. The interactive, multiuser work also served as an expanded cinema element of Hershman Leeson’s film Teknolust, featuring Ruby as a female face with shifting expressions who chats with users and searches the internet to expand her knowledge.
Simon Colton’s project The Painting Fool was developed throughout the 2000s as an AI “artist” capable of generating portraits and imitating various styles and moods. Colton, a computer scientist, aimed to explore the creative potential of artificial intelligence through this work.

In 2014, Stephanie Dinkins began Conversations with Bina48, a project in which she recorded dialogues with BINA48, a social robot modeled after a middle-aged Black woman. In 2019, Dinkins received the Creative Capital award for her development of an evolving artificial intelligence based on the cultures and interests of people of color.
In 2015, Sougwen Chung initiated Mimicry (Drawing Operations Unit: Generation 1), an ongoing collaboration between the artist and a robotic arm.
In 2019, Chung received the Lumen Prize for her performances with the robotic arm, which uses AI to mimic her drawing style.
In 2018, Christie’s in New York auctioned the AI-generated artwork Edmond de Belamy, created by the Paris-based collective Obvious. The piece sold for USD 432,500, far exceeding its estimate of USD 7,000–10,000.

In 2023, an AI-generated photograph titled The Electrician won a Sony World Photography Award—one of the most prestigious prizes in the field. It was later revealed that the image had been generated using AI as part of a provocation by the artist, who declined the prize to spark discussion about the authenticity and emotional impact of AI-generated art.

In 2024, the Japanese film GenerAIdoscope was released, co-directed by Hirotaka Adachi, Takeshi Sone, and Hiroki Yamaguchi. All video, audio, and music in the film were created using AI.
In 2025, the Japanese anime television series Twins Hinahima was released. The series was produced and animated with AI assistance, particularly during the process of cutting and converting photographs into anime-style illustrations, which were later refined by art staff. Most characters and logos were still hand-drawn using various software.

From ancient drawing rules to mechanical puppets to AI systems crafting music, text, and images—the story of AI art is not just technological, but a reflection of human imagination extending into machines.
Each generation blurs the line between tool and author, algorithm and artist. And the journey continues, inviting us to ask: why AI art at all?

Why Is There Art—and AI Art?

Understanding why there is art—and now AI art—reveals one of the deepest layers of human nature, and helps us grasp what it means when machines begin to mirror our most expressive aspirations.

There is no single reason why humans create art.
Rather, art arises from a rich web of interlocking needs—biological, cognitive, emotional, and existential—that together form one of the most distinct features of the human species.

Art engages the brain’s capacity for imagination and problem-solving. Psychologically, creating art is a form of play and exploration that exercises our cognitive abilities.
From early childhood scribbles to intricate symphonies, art stimulates creativity and nourishes our capacity to think differently.

But art is more than mental exercise. Humans turn to art to express, regulate, and process feelings that are often too complex for everyday language.
It has been shown to reduce stress, anxiety, and even physical pain.

Art is not just for the individual; it is for the group. One of art’s most enduring functions is communication. Long before written language, humans used drawings, dances, and songs to record knowledge and transmit culture. Art preserves shared values and identity. It binds individuals into tribes, nations, and civilizations.

This social function extends to emotional synchronization. Participating in group artistic practices—drumming, singing, storytelling—tends to sync individuals emotionally, fostering cohesion and mutual understanding.
In this sense, art is a survival tool: it enables collaboration and strengthens the social glue needed for group living.

Based on these reasons—art’s role in cognitive stimulation, emotional expression and regulation, social cohesion, and cultural transmission—it’s no surprise that evolutionary theorists also see artistic ability as a powerful tool for sexual selection. The capacity to create vivid paintings, moving music, or compelling stories showcases exactly the traits that signal fitness: intelligence, creativity, emotional depth, and social awareness.
From this perspective, art becomes part of a broader reproductive strategy—a display of adaptive qualities that attract mates and increase the likelihood of passing those traits on to future generations.
In turn, this promotes the survival and flourishing of gene pools—and entire groups—that are enriched with these advantageous cognitive and emotional capacities.

Interestingly, the intuition that art serves a deeper, even immortal function was already present long before the development of evolutionary theory.
In Plato’s Symposium, Diotima suggests that humans seek immortality through creation, whether in children, philosophy, or art.
Much later, thinkers like Otto Rank and Ernest Becker would echo and expand this idea. Rank saw art as a heroic act against death, and Becker, in The Denial of Death, argued that much of human behavior is driven by the awareness of mortality. Artistic creation, he claimed, forms part of our “immortality projects”—symbolic efforts to outlast our finite lives by leaving a lasting mark.

In this light, art becomes the tool that emerges from biological evolution—shaped by cognitive development, emotional regulation, social bonding, and sexual selection—but transcends its origins.
It reaches beyond survival and reproduction toward legacy, memory, and meaning. Art is where biology touches the eternal.

If art is tied to intelligence, vitality, attractiveness, and immortality, then the urge to equip machines with artistic capabilities is not arbitrary—it is an extension of our own evolutionary and existential impulses. When we build AI artists, we do not merely automate creativity. We pass on the very projects that define us. We attempt to imbue our creations with the traits that make us human—imagination, expression, and desire for legacy.

In giving machines the ability to make art, we hand over our symbols of identity and meaning to something beyond us. Perhaps this is not just about automation, but about continuity.
A new kind of cultural offspring.

And this might help explain our enduring fascination with autonomous agents capable of producing art—a fascination that stretches back centuries (see upcoming blog post or related material on AI Art in Wikipedia). In a sense, AI art development could be seen as a kind of Platonic fantasy realized: a merging of offspring creation, philosophical pursuit, and the dream that an artist’s work—or even an entire artistic movement born from their vision—continues to be generated long after their death.
Through machines, the artist’s legacy is no longer merely remembered; it is reanimated.

Unlike human followers, who can die, forget, or lose interest, autonomous AI agents offer the potential of unwavering continuity. They do not age or get distracted—they can preserve, replicate, and perhaps even refine a creative lineage with tireless precision. In this way, they become more than tools; they become vessels of artistic immortality.

While some turn to AI in pursuit of curing disease and overcoming biological death, others use it to transcend mortality in a different way—through the persistence of artistic expression.

AI Art and Copyright

This post builds on the ongoing debate about AI art and copyright. Be sure to check out the “What is Art?” post first, so we have a common foundation — importantly, the idea that art is primarily about the resonance it creates with the subjective observer, regardless of whether it was created by a human or not.

David Hume, a renowned 18th-century Scottish philosopher, was one of the leading figures of empiricism — the view that all knowledge arises from sensory experience. Hume argued that everything in our minds — our ideas, feelings, and understanding — can ultimately be traced back to what we perceive through our senses or internal experience. For Hume, reason could not extend beyond the boundaries of experience; it is always rooted in, and limited by, what we encounter in the world.

A nice example from Sophie’s World, a Norwegian novel that I personally really like, illustrates this:
“…In Hume’s time, the idea that angels exist was widespread. By an angel, we understand a male figure with wings. Have you ever seen such a being, Sophie?” “No.” “But you have seen a male figure?” “That’s a stupid question.” “And you have also seen wings?” “Of course, but never on a human being.” “According to Hume, angels are a composite idea. They consist of two different experiences that, however, are not actually composed, but have only been coupled together in the human imagination…

Therefore, an artist is also not a creator out of nothing, but a rearranger of experience. That means that art is the artful recombination of what we already feel and know.

We find various examples of this in history. Delacroix drew inspiration from Rubens and the Venetian Renaissance, favoring color and movement over precise outlines (see, for instance, Wikipedia). The earliest known version of the Romeo and Juliet story comes from Masuccio Salernitano’s Il Novellino (1476), with the tale of Mariotto and Ganozza (see, for instance, Wikipedia). Marcel Duchamp’s Fountain — a standard urinal placed in a gallery and signed with a pseudonym — challenged traditional notions of art (see Wikipedia). Even in pop music, Lady Gaga borrowed from Vittorio Monti’s Csárdás — itself based on a Hungarian folk dance — for the intro of her single Alejandro (see Classic Fm).

So we are standing on the shoulders of giants — a concept that dates back to the 12th century and, according to John of Salisbury, is attributed to Bernard of Chartres. Its most familiar and popular expression appears in a 1675 letter by Isaac Newton (see Wikipedia). So even this statement itself is reused and remixed. Every experience we have shapes our future actions and experiences. From an empiricist point of view, the very idea of “pure originality” is a myth.

So, what does this mean for the ongoing debate about AI and copyright violations? In an opinion shaped by empiricist philosophy, AI systems do what we do: they examine publicly available data, gain experience, and use it to refine and improve themselves.

For example, just as a musician might listen to countless songs to develop their own style, an AI trained on publicly available music learns patterns, structures, and techniques.
Similarly, a painter might study masterpieces in museums, internalizing techniques, color palettes, and compositions, and then blend these impressions into their own original work — just as an AI trained on public visual datasets learns and recombines artistic styles. Or a writer might read hundreds of novels, essays, and poems, unconsciously absorbing narrative rhythms, styles, and ideas, which later reemerge in new combinations in their own writing — just as a language model, trained on publicly available text, weaves together new stories shaped by what it has read.

Therefore, as long as a work is publicly available for viewing (for example, accessible on the internet), it may be used for training an AI system. However, this freedom must be balanced with transparency: AI developers should maintain a publicly available reference database that clearly documents the sources of the training data — a task that is technically feasible. In addition, AI-generated works should be accompanied by a statement indicating that the full list of these sources influenced the creation of the generated piece. This goodwill approach would uphold the principles of fair use while ensuring that the origins of inspiration are acknowledged.

Importantly, AI developers should not be required to financially compensate artists simply because their publicly available works contributed to the AI’s training. This principle reflects how creativity itself works: humans absorb and are influenced by countless works of art, often without consciously tracing every source of inspiration. In much the same way, AI models internalize patterns through exposure. Learning through observation — without direct copying — is a natural and essential process for both human and machine creativity. Artists do not pay every creator of the artworks they encounter.
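
As a purely illustrative sketch of the transparency idea proposed above, the snippet below models a hypothetical entry in such a reference database and the statement that could accompany a generated work. All field names, model names, and URLs are invented for illustration; this is not an existing standard.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrainingSource:
    # Hypothetical fields for one entry in the public reference database.
    title: str
    creator: str
    url: str

@dataclass
class ProvenanceStatement:
    # Attached to each generated work, pointing back to the full source list.
    model_name: str
    source_database_url: str

    def text(self) -> str:
        return (
            f"Generated with {self.model_name}. The complete list of training "
            f"sources that influenced this work is documented at "
            f"{self.source_database_url}."
        )

# Example usage with invented data.
database: List[TrainingSource] = [
    TrainingSource("Example Painting", "Example Artist", "https://example.org/painting"),
]
print(ProvenanceStatement("example-model-v1", "https://example.org/training-sources").text())
```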

Finally, the resulting artwork should be evaluated solely under existing copyright laws, without regard to whether a human or an AI created it.

If an artwork was generated in the style of a specific artist, such as Nobuyoshi Araki, it should be regarded not simply as a copy, but as part of a new artistic movement — for example, Arakiism. This framing both honors the original creator, recognizing their style as an invention worthy of its own lineage, and allows for the natural evolution of the style through new interpretations and transformations. Just as Impressionism or Cubism began with a few individuals and grew into broader movements, styles pioneered by individual artists can, through widespread engagement, become shared aesthetic languages. Viewing such works as contributions to an -ism rather than as imitations respects the dynamic, living nature of artistic creation and gives rightful credit to the originator.

In summary, from an empiricist perspective, human creativity is fundamentally a process of recombination — we reshape what we experience into something new. AI, trained on data, mirrors this same process. Therefore, AI should be permitted to learn from all publicly available data, provided that developers maintain a transparent, publicly accessible database documenting the sources. Copyright enforcement should continue as it did before the rise of AI technologies, focusing solely on the nature of the work itself, not on who or what created it. If an AI system user generates an artwork in the style of a specific artist, it should be recognized as part of the [Artist’s Name]ism art movement, in order to credit the original inventor of the style properly.

What is Art?

This post is a rotation of thoughts around a familiar term. An attempt to trace the flicker of meaning that arises when we say art in the era of AI-generated content.

Every day, content is created, accessed, and shared across the internet. But increasingly (see Link), people no longer make the content—it’s generated by AI. What once required time, imagination, and expression can now be produced in seconds by algorithms.

And the results are convincing. In studies by researchers like Elgammal, participants were shown paintings generated by machines and asked to judge their origin. Many couldn’t tell whether the works were made by humans or AI—and in some cases, they even found the AI art more compelling. Similar patterns appear in music and literature, where listeners and readers routinely fail to spot the difference. The boundary between human and machine-made creativity is blurring fast.

At first glance, this seems like a technical milestone: AI can now mimic humans. But it has far-reaching implications, touching everything from politics to perception.
In this post, however, I want to focus on one particular consequence: what this transformation means for art and what art actually becomes.

To have a common ground, let’s start with the Oxford Dictionary’s definition of art: “Art is the use of the imagination to express ideas or feelings, particularly in painting, drawing, or sculpture.”

However, in a world where we can no longer be certain whether art was created through human imagination or by AI, this definition becomes somewhat blurry. We risk losing our ability to determine what constitutes art.

Therefore, I would like to step back and reflect.

We often speak of art as something made, something external.
Maybe art isn’t the object itself, but what occurs when that object meets a mind, a memory, a mood.

We humans are subjective beings who are fundamentally separate, locked in our own heads, experiences, and perspectives.

Art awakens subjectivity.
It invites our subjectivity to rise, to respond, to wrestle.
And that response is never universal.
One person weeps, another shrugs.

Art is the crack in a glacier.
The fractal geometry of a tree struck by lightning.
When we encounter them intentionally, we may awaken with a sense of resonance.

Art is a private, resonant reckoning.
Art allows someone’s interiority—their feelings, dreams, memories, breakings, and becomings—to leak into a medium.
Art gives shape to these complexities.

Art isn’t static. A painting seen at twenty is not the same when seen at fifty. Not because the paint changed, but because we did.

When art is shared, it becomes something else, too—an invitation, a bridge, a chance for resonance between two subjects.
And what passes through it isn’t our subjectivity—it’s the trembling echo of our subjectivity.
It becomes a tension between intention and interpretation—a reminder that art lives not just in the subject who perceives it, but also in the echo of the one who created it.

However, a perfect sculpture, carved by a fallen rock and left in a cave for millennia, only becomes art when it is intentionally encountered by a resonating subject.

Art is not what is created, but what is revealed in the intentional encounter with a resonating subject. And at this point, AI-generated art is allowed to enter the picture.

Art: A human experience arising from the intentional encounter with a created form that evokes resonance, regardless of its origin or how it was made.

(I may update this definition, but for now, I am fine with it.)

Creation in the Presence of the Machine

A reflection on what it means to create in the presence of AI.

AI offers words, answers, outputs—sometimes one hits like lightning, something you’d never find alone.
When anyone can generate something “impressive” in the blink of an eye, it can cheapen the perceived value of effortful work.
In The Hitchhiker’s Guide to the Galaxy, a supercomputer named Deep Thought spends millions of years calculating the Answer to Life, the Universe, and Everything—”42.”
But when asked, “What’s the actual question?”, it has no idea.

The right question—the one that stops you cold, stirs something buried, or breaks you open—is not just about logic; it’s about timing, and it awakens the soul.
Answers without real questions are just noise.
The real questions exact a cost that makes you more real.
They pull you inward.

The “answer” to the right question isn’t a sentence—it’s a path. A transformation.
The journey itself becomes a stance—a statement.
It places you in relation to the world.
You’re engaging your whole being.
When you’re deep in it, you enter a state of immersion that AI can’t offer.
It’s meditative, even spiritual.

On this path, AI can offer sparks—but not fire.
It can offer answers—but not meaning.
It can mimic the path—but cannot walk it with a soul.
You’re standing inside the tension: The beauty of what AI can do, and the holiness of what only you can do.

But what happens in the moment when the outside voice—even an artificial one—grows louder than your own?
When AI gives you something undeniably good, something sharper or more beautiful than what you had in mind, and you’re tempted to follow its thread instead of your own.
It might mimic your favorite artists. It might echo your style.
And at first, it flatters you. It excites you.
But slowly, quietly, it begins to replace you.
It’s like wearing a costume that fits too well.
You admire how it looks.
You start to believe in the reflection.
But the longer you wear it, the easier it is to forget what your own skin feels like.
The danger isn’t imitation.
It’s amnesia.

Sometimes, allowing an unexpected idea to lead you can open new doors. That’s part of being a creator, too—being surprised, surrendering control to something beyond you. But there’s a difference between: Discovery (“This AI spark helped me unlock something I didn’t know I felt.”) and Displacement (“This idea is better, so I’ll abandon mine.”) The former is growth. The latter is disconnection.

Therefore, let us use AI.
But let us not trade our becoming for convenience.
Do not let speed replace depth.
And do not let illusion drown out truth.

AI Surrealism and the Collective Mind

A post about how I became interested in AI-generated art.

I used to think that AI art had no meaning. It seemed like a hollow remixing of internet fragments—a technically impressive, but ultimately soulless, act of collage. There was no intention, emotion, or story behind it—just noise shaped into images.

However, that changed while I was experimenting with AI hallucinations for my research on safe machine learning, specifically using empty-string prompting in the Stable Diffusion image generator. As I worked, my thoughts drifted back to a past trip to Brussels, where we had visited the Magritte Museum. The surrealist imagery had stayed with me—those quiet, paradoxical scenes that seemed to exist just outside the bounds of logic. Something in them reminded me of these AI hallucinations: vivid, disjointed, and often lacking clear coherence.

Around the same time, I had been reading Carl Jung. His theory of the collective unconscious—an inherited layer of the psyche filled with universal symbols and archetypes—began to resonate in a new way, pointing toward something I would call AI surrealism.

Surrealism, born in the early 20th century from the ferment of Freudian psychoanalysis, sought to liberate the human mind from the constraints of logic, reason, and social convention. At its core, surrealism aimed to access the unconscious mind, often through automatic writing, dream analysis, or chance operations.

Artists like René Magritte, Salvador Dalí, and Max Ernst gave surrealism its visual language, combining precision with paradox, and clarity with dreamlike distortion. Across their work, certain traits emerged: unexpected juxtapositions, symbolic imagery, and a dislocation of reality that aimed to bypass reason and tap directly into the unconscious.

To simulate the surrealist technique in an AI system, we need to do something similar. This involves removing as many constraints from the system as possible. Instead of providing a carefully constructed prompt, we give it nothing at all: an empty string. In response, the AI generates content by drawing from the statistical patterns in its training data, producing unexpected forms, combinations, and associations from deep within its learned representation space.
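
As a minimal sketch of what this looks like in practice with the Hugging Face diffusers library, assuming a standard Stable Diffusion checkpoint (the model identifier, step count, and filename below are illustrative choices, not the exact setup used in my experiments):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (any compatible checkpoint should behave similarly).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # drop torch_dtype and .to("cuda") to run on CPU

# Empty-string prompting: no guidance text at all, so the model surfaces
# whatever patterns its learned representation space happens to produce.
image = pipe(prompt="", num_inference_steps=50).images[0]
image.save("ai_hallucination.png")
```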

Now, one might argue that this is simply the AI’s version of an unconscious generative process. But since these models lack consciousness altogether—no intention, no awareness, no inner life—calling it “unconscious” is likely a misnomer, or at best, metaphorical. Overall, this process may appear meaningless by itself: just probabilistic noise filtered through a vast data structure.

However, things shift when we consider what AI has been trained on. These models are built on human-generated content—our languages, our stories, our images, our symbols. They are shaped by our experiences, emotions, and the cultural artifacts we leave behind.

This is where Carl Jung’s concept of the collective unconscious becomes relevant. According to Jung, beneath our individual psyches lies a shared layer of the mind, inherited and universal, filled with archetypes—symbols and motifs that recur across myths, dreams, and cultures. AI may not experience things, but it is trained on our experiences. In doing so, it may accidentally surface these same archetypes because we keep embedding them in the data it learns from.

So, when we look at AI surrealism, we are not exploring the inner life of a single AI artist, but something closer to a shared inner landscape—reflections of our collective patterns. The surrealist process allows these common threads to surface in perhaps the most “natural” way machines can offer: through chance, ambiguity, and the absence of control.

This gives AI-generated surrealist imagery a unique quality—something no individual human artist could fully replicate. It is not the vision of a single mind but a wild collage of influences pulled from us all—our cultures, symbols, experiences, ourselves—woven together by a system that does not understand any of it yet somehow conjures something uncannily familiar.

Science Atlas

Science Atlas is a website designed to provide a high-level overview of where research is conducted in the private and public sectors worldwide.

It is a dynamic map that allows users to explore the geographical distribution of research institutions and facilities across various countries. Using an intuitive interface, you can filter research centers by country, topic, and sector, providing deep insights into the scientific output from around the globe.

Discover potential collaborators or institutions to join, with access to detailed information about their focus areas and partnerships. Understand the distribution of research activity globally.

Simply visit Science Atlas. We welcome your feedback as we continue to develop it. If you have suggestions, data to contribute, or institutions you’d like to see added, don’t hesitate to contact us through our Add Map Content form.