We often speak of art as something made, something external. Maybe art isn’t the object itself, but what occurs when that object meets a mind, a memory, a mood. This post is a rotation of thoughts around a familiar term. An attempt to trace the flicker of meaning that arises when we say art, and to follow where that flicker leads.
We humans are subjective beings who are fundamentally separate, locked in our own heads, experiences, and perspectives.
Art awakens subjectivity. It invites our subjectivity to rise, to respond, to wrestle. And that response is never universal. One person weeps, another shrugs.
Art is the crack in a glacier. The fractal geometry of a tree struck by lightning. When we encounter it, something in us wakes and resonates.
Art is a private, resonant reckoning. Art allows someone’s interiority—their feelings, dreams, memories, breakings, and becomings—to leak into a medium. Art gives shape to these complexities.
Art isn’t static. A painting seen at twenty is not the same when seen at fifty. Not because the paint changed, but because we did.
When art is shared, it becomes something else, too—an invitation, a bridge, a chance for resonance between two subjects. And what passes through it isn’t our subjectivity—it’s the trembling echo of our subjectivity. It becomes a tension between intention and interpretation—that art lives not just in the subject who perceives but also in the echo of the one who created it.
However, a perfect sculpture, carved by a fallen rock and left in a cave for millennia, only becomes art when it is encountered by a resonating subject. Art is not what is created, but what is revealed in the encounter with a resonating subject.
A reflection on what it means to create in the presence of AI.
AI offers words, answers, outputs—sometimes one hits like lightning, something you’d never find alone. When anyone can generate something “impressive” in the blink of an eye, it can cheapen the perceived value of effortful work. In The Hitchhiker’s Guide to the Galaxy, a supercomputer named Deep Thought spends millions of years calculating the Answer to Life, the Universe, and Everything—“42.” But when asked what the actual question is, it has no idea.
The right question—the one that stops you cold, stirs something buried, or breaks you open. The right question is not just about logic—it’s about timing, and such questions awaken the soul. Answers without real questions are just noise. The real questions are the ones that cost you something and make you more real. These questions pull you inward.
The “answer” to the right question isn’t a sentence—it’s a path. A transformation. The journey itself becomes a stance—a statement. It places you in relation to the world. You’re engaging your whole being. When you’re deep in it, you enter a state of immersion that AI can’t offer. It’s meditative, even spiritual.
On this path, AI can offer sparks—but not fire. It can offer answers—but not meaning. It can mimic the path—but cannot walk it with a soul. You’re standing inside the tension: The beauty of what AI can do, and the holiness of what only you can do.
But what happens in the moment when the outside voice—even an artificial one—grows louder than your own? When AI gives you something undeniably good, something sharper or more beautiful than what you had in mind, and you’re tempted to follow its thread instead of your own. It might mimic your favorite artists. It might echo your style. And at first, it flatters you. It excites you. But slowly, quietly, it begins to replace you. It’s like wearing a costume that fits too well. You admire how it looks. You start to believe in the reflection. But the longer you wear it, the easier it is to forget what your own skin feels like. The danger isn’t imitation. It’s amnesia.
Sometimes, allowing an unexpected idea to lead you can open new doors. That’s part of being a creator, too—being surprised, surrendering control to something beyond you. But there’s a difference between: Discovery (“This AI spark helped me unlock something I didn’t know I felt.”) and Displacement (“This idea is better, so I’ll abandon mine.”) The former is growth. The latter is disconnection.
Therefore, let us use AI. But let us not trade our becoming for convenience. Do not let speed replace depth. And do not let illusion drown out truth.
A post about how I became interested in AI-generated art.
I used to think that AI art had no meaning. It seemed like a hollow remixing of internet fragments—a technically impressive, but ultimately soulless, act of collage. There was no intention, emotion, or story behind it—just noise shaped into images.
However, that changed while I was experimenting with AI hallucinations for my safe machine learning research, specifically using empty string prompting in the Stable Diffusion image generator. As I worked, my thoughts drifted back to a past trip to Brussels, where we had visited the Magritte Museum. The surrealist imagery had stayed with me—those quiet, paradoxical scenes that seemed to exist just outside the bounds of logic. Something in them reminded me of these AI hallucinations: vivid, disjointed, and often lacking clear coherence.
Around the same time, I had been reading Carl Jung. His theory of the collective unconscious—an inherited layer of the psyche filled with universal symbols and archetypes—began to resonate in a new way, and the two threads converged into something I would call AI surrealism.
Surrealism, born in the early 20th century from the ferment of Freudian psychoanalysis, set out to liberate the human mind from the constraints of logic, reason, and social convention. At its core, surrealism aims to access the unconscious mind, often through automatic writing, dream analysis, or chance operations.
Artists like René Magritte, Salvador Dalí, and Max Ernst gave surrealism its visual language, combining precision with paradox, and clarity with dreamlike distortion. Across their work, certain traits emerged: unexpected juxtapositions, symbolic imagery, and a dislocation of reality that aimed to bypass reason and tap directly into the unconscious.
To simulate the surrealist technique in an AI system, we need to do something similar. This involves removing as many constraints from the system as possible. Instead of providing a carefully constructed prompt, we give it nothing at all: an empty string. In response, the AI generates content by drawing from the statistical patterns in its training data, producing unexpected forms, combinations, and associations from deep within its learned representation space.
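For concreteness, here is a minimal sketch of what empty string prompting can look like in practice. It assumes the Hugging Face diffusers library and an illustrative Stable Diffusion checkpoint; the checkpoint name and sampling settings are placeholders, not the exact setup from my experiments.

    # Minimal sketch: empty string prompting with Stable Diffusion (illustrative setup).
    import torch
    from diffusers import StableDiffusionPipeline

    # Any Stable Diffusion checkpoint works the same way; this one is just an example.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # No prompt at all: the model samples whatever its learned representation
    # space produces without any textual guidance.
    images = pipe(prompt="", num_images_per_prompt=4).images
    for i, image in enumerate(images):
        image.save(f"empty_prompt_{i}.png")

Running this repeatedly yields different images each time, which is exactly the point: without a prompt, every sample is an unguided draw from the model’s learned distribution.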
Now, one might argue that this is simply the AI’s version of an unconscious generative process. But since these models lack consciousness altogether—no intention, no awareness, no inner life—calling it “unconscious” is likely a misnomer, or at best, metaphorical. Overall, this process may appear meaningless by itself: just probabilistic noise filtered through a vast data structure.
However, things shift when we consider what AI has been trained on. These models are built on human-generated content—our languages, our stories, our images, our symbols. They are shaped by our experiences, emotions, and the cultural artifacts we leave behind.
This is where Carl Jung’s concept of the collective unconscious becomes relevant. According to Jung, beneath our individual psyches lies a shared layer of the mind, inherited and universal, filled with archetypes—symbols and motifs that recur across myths, dreams, and cultures. AI may not experience things, but it is trained on our experiences. In doing so, it may accidentally surface these same archetypes because we keep embedding them in the data it learns from.
So, when we look at AI surrealism, we are not exploring the inner life of a single AI artist, but something closer to a shared inner landscape—reflections of our collective patterns. The surrealist process allows these common threads to surface in perhaps the most “natural” way machines can offer: through chance, ambiguity, and the absence of control.
This gives AI-generated surrealist imagery a unique quality—something no individual human artist could fully replicate. It is not the vision of a single mind but a wild collage of influences pulled from us all—our cultures, symbols, experiences, ourselves—woven together by a system that does not understand any of it yet somehow conjures something uncannily familiar.
Science Atlas is a website designed to provide a high-level overview of where research is conducted in the private and public sectors worldwide.
It is a dynamic map that allows users to explore the geographical distribution of research institutions and facilities across various countries. Using an intuitive interface, you can filter research centers by country, topic, and sector, providing deep insights into the scientific output from around the globe.
Discover potential collaborators or institutions to join, with access to detailed information about their focus areas and partnerships. Understand the distribution of research activity globally.
Simply visit Science Atlas. We welcome your feedback as we continue to develop it. If you have suggestions, data to contribute, or institutions you’d like to see added, don’t hesitate to contact us through our Add Map Content form.
In the science fiction franchise The Matrix, artificial intelligence (AI) has taken over the world and uses human beings as a source of energy. In this fictional setting, humans are kept in suspended animation and connected to a virtual reality environment called the Matrix. Their bodies are stored in pods and are connected to the Matrix through neural interfaces, which build a virtual world that the humans perceive as real. The AI entities maintain this system to harvest the thermal energy and bioelectricity generated by the human body. In essence, humans have been reduced to the role of biological batteries that power the machine world. This grim reality is hidden from the humans by keeping them in the Matrix, a simulated reality that keeps their minds occupied while their bodies are exploited.
The way that AI works now is that it farms intelligence and creativity from humans. AI systems rely on user-generated data to train and fine-tune themselves. This can range from simple data points like clicks and likes to more complex inputs like user-created content, books, and problem-solving strategies. In this sense, AI is cannibalizing human intelligence and creativity, improving its capabilities level by level.
Just as the advent of calculators led to a decline in the practice of mental arithmetic and the widespread use of smartphones has been associated with a reduction in fine motor skills, AI systems, especially those designed to assist in decision-making, problem-solving, or automating complex tasks, can potentially engender a phenomenon known as cognitive offloading. This refers to the increasing human reliance on AI to perform mental functions, potentially leading to a diminished capacity in cognitive and meta-cognitive skills. Such skills, which include planning, self-assessment, and problem-solving, are not merely task-specific but are foundational to human intelligence. They are typically developed and refined through sustained practice. As AI systems take over greater responsibility for these cognitive tasks, humans may find fewer opportunities to exercise and hone these skills, resulting in a gradual decline in cognitive abilities.
Therefore, the ambition to build superintelligence could mirror the ancient myth of the Tower of Babel—a human ambition to transcend limits and attain the divine. In that story, humanity sought to reach the heavens, only to be thwarted by confusion and division. Similarly, in our pursuit of superintelligence, we risk constructing a monument to hubris, where the drive to surpass human cognition may result not in enlightenment but in profound disarray. As we build this modern tower fueled by AI and data, we may inadvertently disconnect from the very cognitive foundations that make us human, leading not to higher understanding but to a world where our minds are fragmented and bewildered.
Aidlines is a website that gathers emergency phone numbers for people and animals worldwide. Whether you need an ambulance, police, animal rescue, or help with issues like sexual violence, domestic violence, depression, suicide, or drug addiction, Aidlines makes it easy to find the right numbers quickly, no matter where you are. Our goal is to ensure everyone can get the help they need in times of crisis.
AI-driven discussions present a unique opportunity for intellectual engagement and growth in today’s dynamic and rapidly changing world. Facilitated or generated by artificial intelligence (AI) systems—such as advanced language models like OpenAI’s GPT series—these discussions can take various forms, including virtual debates among AI entities. AI-driven discussions enable users to engage with diverse topics anytime, anywhere, fostering a flexible learning experience. These discussions broaden users’ understanding and encourage critical thinking by presenting fresh perspectives on controversial or complex issues. Serving as a valuable resource for brainstorming sessions, AI-driven discussions can help researchers and creative professionals generate new ideas and insights. Moreover, they facilitate time efficiency by concisely summarizing vast amounts of information or presenting multiple viewpoints. Importantly, these automated discussions are devoid of personal biases or emotions, which often impede productive debates, allowing for more objective and focused discourse.
I created a Python script enabling multiple AI language models to engage in an AI-moderated discussion on any topic. An additional AI model provides real-time analysis and critique to improve the conversation’s quality further.
The AI Philosopher’s Roundtable Script
The Python script harnesses OpenAI’s GPT-4 to create an interactive setting in which three distinct AI entities, each assigned specific roles, engage in a structured dialogue:
Moderator: This AI model ensures the conversation remains focused, provides guidance, and promotes productive discourse.
System1 and System2: These AI models represent philosophers celebrated for their critical thinking and capacity to propel discussions forward.
Evaluator: This additional GPT-based analysis system summarizes and assesses the debate in real time, offering insights and constructive feedback to enhance the conversation’s quality. After a predetermined number of iterations, it evaluates and summarizes the discussion.
The script starts by asking the user for a discussion topic. Once the topic is entered, the conversation begins with the Moderator setting the stage. The AI philosophers, System1 and System2, alternate in contributing to the debate, with the Moderator periodically intervening to maintain focus.
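To make the structure concrete, here is a heavily condensed sketch of how such a roundtable loop can be wired together with the openai Python package. The role prompts, model name, and number of rounds are illustrative placeholders, not the exact ones used in my script.

    # Condensed sketch of the roundtable loop (illustrative, not the full script).
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    ROLES = {
        "Moderator": "You moderate a philosophical debate. Keep it focused and productive.",
        "System1": "You are a philosopher known for rigorous critical thinking.",
        "System2": "You are a philosopher who challenges assumptions and pushes the debate forward.",
        "Evaluator": "Summarize the debate so far and give constructive feedback on its quality.",
    }

    def speak(role: str, transcript: list) -> str:
        """Ask one role for its next contribution, given the transcript so far."""
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": ROLES[role]},
                {"role": "user", "content": "\n".join(transcript)},
            ],
        )
        return f"{role}: {response.choices[0].message.content}"

    topic = input("Discussion topic: ")
    transcript = [f"Topic: {topic}"]
    transcript.append(speak("Moderator", transcript))      # Moderator sets the stage
    for _ in range(3):                                      # illustrative number of rounds
        transcript.append(speak("System1", transcript))
        transcript.append(speak("System2", transcript))
        transcript.append(speak("Moderator", transcript))  # periodic refocusing
    transcript.append(speak("Evaluator", transcript))      # final summary and critique
    print("\n\n".join(transcript))

The sketch omits details of the actual script, but it captures the basic rhythm: the Moderator opens, the philosophers alternate, and the Evaluator closes with a summary.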
Despite the numerous advantages, there are also some drawbacks to AI-driven discussions. One significant limitation is that AI language models are based on existing knowledge and might not be able to provide truly original insights or ideas. They may lack the depth and nuance that human experts can bring to a conversation, resulting in oversimplifications of certain topics. This can be observed in the discussion records. They may also inadvertently reproduce biases present in the data they were trained on. Misinterpretations or inaccuracies may also arise, as AI models might not fully comprehend the context or nuances behind a specific subject. Lastly, the absence of emotions and personal experiences may limit the empathetic understanding and interpersonal connections that can be fostered through human-to-human discussions.
Nevertheless, even with such a simple script, it is already possible with GPT-4 to produce plausible discussions. It will be interesting to see how these discussions evolve with more advanced system prompts, models, and scripts.
Read Further
Quite similar to the topic discussed in this blog post are Auto-GPT and BabyAGI.
These projects attempt to create AI agents that can perform multistep tasks autonomously. While they currently require significant human input and are not yet fully autonomous, they represent early steps towards more complex AI models.
Auto-GPT, created by Toran Bruce Richards, chains together GPT-4 outputs to achieve a set goal. It currently requires user permission for each step and can’t make purchases, but it demonstrates the potential for AI assistants. BabyAGI, created by Yohei Nakajima, is inspired by the idea of using GPT-4 as an AI co-founder for businesses and has a task-oriented approach. Both projects face limitations due to GPT-4’s narrow range of interpretive intelligence and the issue of confabulations.
Read the full article to learn more about Auto-GPT, BabyAGI, and their implications for AI development.
Another self-looping ChatGPT agent system is described in the paper “Generative Agents: Interactive Simulacra of Human Behavior.” Implemented in a sandbox environment inspired by The Sims, these agents exhibit realistic individual and social behaviors. The research emphasizes the significance of observation, planning, and reflection in creating convincing simulations and demonstrates the integration of large language models with interactive agents.
Barrier-free websites are like digital superheroes, battling against the evil of discrimination and exclusion by empowering everyone to engage with the digital world with ease and confidence, regardless of ability or disability. These websites aim to accommodate individuals with various disabilities, including visual impairments, hearing impairments, motor impairments, cognitive impairments, and seizure disorders.
With the integration of newer versions of language models like the Prometheus model (a successor of ChatGPT) into the Microsoft Edge web browser, website accessibility for language models will play a crucial role in the future. The ability to summarize website content and answer questions about it will be a valuable tool for people with and without disabilities, may influence how likely they are to visit a website, and could even impact the ranking of websites in search engines.
As a result, the optimization of website accessibility for language models will become an important aspect of future search engine optimization (SEO). This could involve adjusting website content and language to give the best output for language models, leading to higher search engine rankings and a better user experience for everyone.
In our previous post, we explored the potential of ChatGPT as a forecasting support tool. In this post, we put ChatGPT to the test and evaluate its predictions made entirely on its own, without any human assistance. To do this, we will use the normalized mean square error (NMSE) as our evaluation metric. The NMSE is a measure of the accuracy of a prediction. It is calculated by dividing the mean square error (MSE) of the prediction by the variance of the true values. In general, the NMSE is preferred over the MSE when you want to compare the accuracy of different predictions that are based on datasets with different variances.
    def calc_nmse(true_values, predicted_values):
        """Calculate the normalized mean square error (NMSE)."""
        # Calculate the mean square error (MSE)
        mse = sum([(y - ŷ)**2 for y, ŷ in zip(true_values, predicted_values)]) / len(true_values)

        # Calculate the variance of the true values
        variance = sum([(y - sum(true_values)/len(true_values))**2 for y in true_values]) / (len(true_values) - 1)

        # Calculate the NMSE
        nmse = mse / variance

        return nmse
If you want to do your own estimations and compare them to ChatGPT, don’t scroll further and estimate them here:
How many cars are there in the United States?
How many minutes of video are uploaded to YouTube every day?
How many flights take off from airports around the world every day?
How many babies are born every day?
How many people visit Disneyland every year?
How many cells are there in the human body?
How many words are there in the English language?
We now let ChatGPT estimate the same quantities, using the chat message: “Estimate via Fermi quiz method QUESTION.”
How many cars are there in the United States? Estimated: 495 million cars Actual: 276 million cars
How many minutes of video are uploaded to YouTube every day? Estimated: 333,333,333 hours Actual: 720,000 hours
How many flights take off from airports around the world every day? Estimated: 250,000 flights/day Actual: 100,000 flights/day
How many babies are born every day? Estimated: 400,000 people Actual: 385,000 babies
How many people visit Disneyland every year? Estimated: 18 million people Actual: 8.5 million visitors
How many cells are there in the human body? Estimated: 100 trillion Actual: 30 trillion
How many words are there in the English language? Estimated: 500,000 Actual: 171,146 words
The NMSE of ChatGPT is 5.44. A value of 0 indicates a perfect fit, while a value greater than 1 indicates a poor fit.
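If you want to verify the number, the 5.44 can be reproduced by plugging the values listed above into the calc_nmse function defined above (the YouTube figures are kept in hours, exactly as listed):

    # Actual and estimated values from the list above, in the units as listed.
    actual = [276e6, 720_000, 100_000, 385_000, 8.5e6, 30e12, 171_146]
    estimated = [495e6, 333_333_333, 250_000, 400_000, 18e6, 100e12, 500_000]

    print(round(calc_nmse(actual, estimated), 2))  # prints 5.44

The value is dominated by the cell-count question, where both the true value and the estimation error are orders of magnitude larger than for the other questions.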
Have you calculated the NMSE for your forecasts? If so, please leave a comment with your result or send me your result directly. It would be interesting to see how ChatGPT’s performance compares to that of a human forecaster.
The Fermi Quiz is a powerful tool for making accurate estimates and solving problems quickly. Named after physicist Enrico Fermi, this method involves breaking a problem down into smaller, more manageable pieces and using your knowledge and experience to make educated guesses. By following a few simple steps, you can use the Fermi Quiz to solve problems ranging from estimating the number of coffee shops in a city to calculating the number of stars in the universe. In this post, I will explain how to use the Fermi Quiz to make accurate estimates and demonstrate how ChatGPT, a chatbot, can help us generate more manageable pieces for our estimates and may even improve them.
Fermi Quiz
The Fermi Quiz is a method of solving problems and making estimates by breaking a problem down into smaller, more manageable pieces and using your knowledge and experience to make educated guesses. Here’s how it works:
Define the scope of your estimate: First, you need to clearly define the problem or question that you are trying to solve. This will help you focus your efforts and make it easier to come up with a good estimate. For example: How many bike stores are in the Netherlands?
Break the problem down: Once you have defined the scope of your estimate, you can break the problem down into smaller, more manageable pieces, each of which helps answer the overall question independently. For example:
Piece 1: How many bike stores are in a Dutch city on average? How many cities are in the Netherlands?
Piece 2: How many people in the Netherlands visit a bike store in an average week? How many people can one bike store handle in a week?
Piece 3: How many bikes are in the Netherlands? How many bikes has an average bike store sold since its initial opening?
Answer and average: Answer all of the questions and use each piece independently to estimate the value of the overall question. Then average the estimates together to get the final estimate. This method is based on the wisdom-of-crowds effect, which states that averaging independent judgments often leads to improved accuracy.
ChatGPT for manageable piece generation
As a rule of thumb, more manageable pieces make your final result more precise. However, at some point, it can be difficult to generate more pieces. Therefore, we can utilize the chatbot ChatGPT to do it for us. You can use the following messages to generate the pieces via ChatGPT (note that the ChatGPT outputs vary, so you may have to tweak the messages a bit):
Estimate how many bike stores are in the Netherlands by using the Fermi quiz method and do not give me estimates.
[ChatGPT ANSWER]
What are five examples of breaking the problem down into smaller, more manageable pieces that I mentioned in my previous response?
[MULTIPLE IDEAS] (Piece 2 and Piece 3 were actually created by ChatGPT)
Estimate each generated manageable piece a value and average it with your previous estimated values.
Why did I not want to get an estimate from ChatGPT yet?
Estimate how many bike stores are in the Netherlands by using the Fermi quiz method and do not give me estimates.
The anchoring effect is a cognitive bias that refers to the tendency for people to rely too heavily on the first piece of information they receive (the “anchor”) when making decisions or judgments. This can lead to distorted judgments and decisions, as people may give too much weight to the initial anchor and not consider other relevant information. Therefore, knowing ChatGPT’s estimate (which is not necessarily accurate) may influence your own estimate.
Can ChatGPT improve our forecasting?
Now, for every manageable piece, we use ChatGPT to get some estimates. Note that asking the same question multiple times results in different estimates. This is not a big problem; we can handle it by, for example, averaging the estimates for each subquestion.
Let’s calculate the ChatGPT estimates.
1. Piece
How many bike stores are in a Dutch municipality on average? How many municipalities are in the Netherlands?
Estimate via the Fermi quiz method how many bike stores are in a Dutch municipality on average. -> ANSWER: 5
Estimate via the Fermi quiz method how many municipalities are in the Netherlands. -> ANSWER: 233
ESTIMATE: 5 * 233 = 1165
2. Piece
How many people in the Netherlands visit a bike store in an average week? -> 525,000
How many people can one bike store handle in a week? -> 500
ESTIMATE: 525,000 / 500 = 1050
3. Piece
How many bikes are in the Netherlands? -> 35 million bikes
How many bikes has an average bike store in the Netherlands sold in its lifespan? -> 10,000 bikes
ESTIMATE: 35,000,000 / 10,000 = 3500
FINAL CHATGPT ESTIMATE: (1165 + 1050 + 3500)/3 = 1905
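The same arithmetic in a few lines of Python, in case you want to plug in your own piece values and compare:

    # ChatGPT's piece estimates from above; replace the numbers with your own to compare.
    piece_1 = 5 * 233               # stores per municipality * number of municipalities = 1165
    piece_2 = 525_000 / 500         # weekly bike-store visitors / capacity per store    = 1050
    piece_3 = 35_000_000 / 10_000   # bikes in the Netherlands / bikes sold per store    = 3500

    final_estimate = (piece_1 + piece_2 + piece_3) / 3
    print(final_estimate)           # prints 1905.0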
Now that we have generated additional pieces using ChatGPT, you can average its estimate with your own to create a more precise estimate for the problem. To see how accurate your final estimate is, you can compare it to the actual number of bike stores in the Netherlands, which was approximately 3080 in 2020.
If you have tried using ChatGPT to generate additional manageable pieces for the Fermi Quiz method, please let me know in the comments how it worked for you. Did it help you come up with a more accurate estimate? Did combining your own estimate with ChatGPT’s estimate bring you closer to the actual number? I would love to hear your thoughts and experiences with using ChatGPT to improve the accuracy of your Fermi Quiz estimates. Please share your comments below.