Beauty and the Bot: The Evolution of AI Art Criticism

In this post, I explore how AI art criticism has evolved over time, tracing how many of today's concerns echo longstanding debates, while also highlighting what is genuinely new.

Though it may seem like a modern issue, debates about AI art and creativity date back to the 1800s.
Ada Lovelace, often considered the world’s first computer programmer, argued that machines could never be truly creative because they simply follow instructions—a view that continues to shape how we think about AI art today.

Since then, AI art criticism has evolved dramatically, shifting its focus in response to changing technological capabilities.

In the early decades (1960s–1980s), critics largely dismissed computer-generated works as mechanically sterile, cold, formulaic, and lacking the expressive nuance of human-made art. Underlying this was a deeper anxiety that the ideal of the lone human genius was being eroded, even though the artworks themselves were not yet competitive with traditional art.

A landmark moment came with Cybernetic Serendipity (1968), a London exhibition showcasing machine-assisted creativity.
Many at the time believed that only human artists, not machines, could create meaningful art.
The exhibition sidestepped the central question, “Is it art?”, by framing the works as idea-driven demonstrations.
Even then, it hinted at how AI was beginning to challenge traditional notions of art.

Traditional art institutions were slow to accept computer art as legitimate.
Many in the art world continued to view it as a novelty at best, or as a threat to deeply held humanist values at worst.
During the 1960s to 1980s, AI art was often met with skepticism and resistance. Many critics viewed it as a threat to the sanctity of the human artist, sparking what was widely perceived as a crisis of authenticity.
These works were frequently dismissed for lacking emotional depth or originality.
The idea that art created by an algorithm was missing some ineffable human quality became almost a cliché.
Paintings plotted by machines or poems generated by programs were routinely described as soulless, mechanical, or derivative.
Such works might resemble art on the surface, but they were, in this view, simulations of creativity rather than genuine expression.

By the 1990s and 2000s, however, the conversation had begun to shift. As early style-imitating systems like David Cope's Experiments in Musical Intelligence (EMI) appeared and albums of their compositions were released, the debate turned to whether computers were genuinely creative or merely recombining existing patterns. Critics and scholars increasingly questioned not just the outputs, but the processes behind them. Could programmed systems exhibit originality, or were they only echo chambers of prior human input?

Questions of authorship and copyright entered academic discourse, though legal consequences remained mostly hypothetical at the time, given the relatively small datasets and limited public reach of AI work. Still, the theoretical groundwork for today’s debates was being laid.

Futurists like Ray Kurzweil were bullish, suggesting that creativity is ultimately computable and that future AI systems would routinely generate novel art and literature. By the end of the 1990s, the debate had matured: it was no longer simply about whether AI could make art, but about how soon and with what consequences—a notable shift from the speculative anxieties of the 1960s.

At the same time, researchers began to propose more nuanced frameworks for understanding machine creativity. Theoretical work emphasized that creativity isn’t just about novelty, but also about value and intent.
Critics argued that AI art must be evaluated in context: Is the machine working independently, or is it part of a human-AI collaboration? What is the role of the human artist—as programmer, curator, co-creator?

New terms like co-creativity and symbiotic creativity emerged, reflecting the view that the most meaningful AI art might come not from autonomous systems, but from human-machine partnerships.
Rather than displacing the human artist, AI could serve as a tool for expanding creative possibilities. In this light, proponents suggested, AI art might even force us to clarify what we most value in human-made art—be it personal perspective, conceptual depth, emotional resonance, or cultural context.

The 2010s saw a leap with deep learning: GANs and transformers enabled AI to produce images and music that fooled juries and consumers, raising concerns about economic displacement and the erasure of human labour in the creative pipeline. During this period, it became increasingly evident that AI art could not only mimic surface aesthetics but also evoke emotional and intellectual responses, sometimes fooling audiences into perceiving a sense of soul or meaning.
A striking example came in 2016, when a short novel co-written by an AI program in Japan, The Day a Computer Writes a Novel, passed the first round of a national literary competition, surprising judges with its coherent structure, even though it ultimately did not win.
Note: A widely publicized milestone came in 2023, when an artist declined the Sony World Photography Award after revealing that the winning work was AI-generated.

By the 2020s, with diffusion models like Stable Diffusion and Midjourney trained on billions of scraped images, the dominant criticism became one of appropriation and theft—models replicating artists' styles without consent. This shift culminated in legal action, as courts began evaluating whether large-scale dataset scraping constitutes fair use, marking a new era of ethical, legal, and cultural reckoning in AI art.

Alongside these legal and economic criticisms, concerns about bias and representation also came to the forefront. Critics and scholars began to expose how AI images often reflect and amplify existing cultural stereotypes—overrepresenting certain body types, skin tones, or aesthetic ideals—while marginalizing others. Since these models are trained on internet-scale data, they tend to reproduce the biases embedded in that content, leading to accusations that AI art can reinforce exclusionary norms or perpetuate harmful tropes under the guise of neutrality.

Overall, there has always been an underlying fear that AI could undermine or replace human artists.
In the early years, AI art was largely dismissed as a novelty or technical gimmick—interesting, but not artistically threatening. However, this perception has shifted dramatically over time.
By the 2020s, with AI capable of producing high-quality, style-specific works at scale, the conversation turned from curiosity to concern, as AI began to pose a real economic, ethical, and cultural threat to human creativity.

Looking ahead, I believe that even if copyright concerns are addressed—say, by training AI exclusively on non-human-made data (however feasible that might be)—the criticism that “AI stole my art” will likely persist, especially when the results are compelling.
When AI outputs lack human-like qualities, their artistic value is questioned.
But as soon as they start resembling the richness, emotion, or technique found in human work, fears of imitation resurface.
Regardless of the safeguards in place, good AI art will always prompt some to argue it draws too heavily from human creativity.
