3/11/2026 On Artificial Intelligence

I remember a powerful Greek myth that I read when I was young, the story of Prometheus, a Titan who felt sympathy for weak and struggling humans. He stole fire from the gods for humanity, and Zeus, the king of the gods, believed that humans were not ready for such power. As punishment, Zeus chained Prometheus to a rock where an eagle ate his liver every day, only for it to grow back each night.

Greek myths generally did not happen as literal events, but many of them are rooted in real places, people, and occurrences. If one takes an existential perspective, there must be reasons why they endured the force of history and continued to be told across generations. For those willing to engage in what may appear to be lawless imagination, these stories can still offer symbolic meanings once we begin to connect the dots.

So we can try to link this to modern times. Fire can be seen as representing all transformative technologies, and to narrow the scope further, we can think of Artificial Intelligence. Prometheus then represents whoever creates or advances that technology. Zeus can be interpreted as the natural order, and Prometheus being punished by it suggests some form of accountability that follows innovation.

The story seems to end there, yet it is difficult to dissect and arrive at an accurate understanding of what it means that Prometheus was attacked by the eagle but never died. It also leaves the larger question of what the introduction of fire truly brought to mankind. Perhaps we can leave these questions open for now and fast forward to the present. What does Artificial Intelligence actually mean?

I am not an expert in the technical aspects, and my approach is to understand things from a philosophical viewpoint. At first glance, artificial intelligence appears to reduce brute force through automation. However, that description does not feel precise enough. If we look back, much of that reduction had already been achieved by the dot-com era. Before the internet, any kind of research required one to physically go to libraries and sift through piles of books, hoping to find useful references. Now one can sit at a desk and access information from places they may not even know exist. It is possible to argue that artificial intelligence is simply a further step in this direction, but that explanation still does not capture what feels philosophically different.

There seem to be several traits that distinguish artificial intelligence from earlier forms of automation. The first is that it transforms human language into a probabilistic mirror of collective thought, structurally resembling the way neural networks are often described. These systems learn patterns across vast amounts of text; they do not possess intentions, beliefs, or understanding in the human sense, yet they are able to generate responses by predicting which words are most likely to follow in each context. To put it differently, if we define human authenticity as a kind of limit in the mathematical sense, and acknowledge that no specific human activity can ever fully activate what might be called our infinite capacity, then artificial intelligence can theoretically move arbitrarily close to that limit without ever reaching it.
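To make the idea of predicting the next word a little more concrete, here is a minimal sketch of the principle in Python, with an invented toy corpus and illustrative function names. It is nothing like a real neural network; it simply counts which word tends to follow which, and then proposes the most frequent continuation, which is the same basic move of choosing the most probable next word given context.

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, which words tend to follow it in the text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1
    return following

def most_likely_next(model, word):
    """Return the most frequent continuation, or None if the word was never seen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A deliberately tiny corpus; real systems learn from enormous amounts of text.
sample_text = (
    "fire gave humans warmth and fire gave humans light "
    "and fire gave humans power over the night"
)

model = train_bigram_model(sample_text)
print(most_likely_next(model, "fire"))    # -> "gave", the most common word after "fire"
print(most_likely_next(model, "humans"))  # -> "warmth" (ties go to the word seen first)
```

A real system conditions on far longer context through learned representations rather than raw counts, but the contrast is useful: there is no intention anywhere in the sketch, only statistics over what people have already written.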

What happens at the limit is another question. We are taught in middle school that dividing a real number by zero produces an undefined result, even though it is sometimes loosely described as infinitely large. Yet philosophically, it is more accurate to say that nobody truly knows what happens at such a boundary. If we translate this idea back to human authenticity, it suggests that nobody can fully define what authenticity means at its deepest level. This uncertainty also appears in science. I once had a long conversation with a neurosurgeon who mentioned how many unanswered questions remain about the human brain, including how it functions and how decisions are made. There is also the broader question of whether we will ever be able to fully understand these processes. In that sense, a highly developed form of AI could theoretically become indistinguishable from humans in terms of intellectual output. This could include everything from managing a business to creating works of art, although the artistic dimension remains more debatable because it depends on how we define art and how we interpret the human element within it.
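For the curious, the textbook statement behind that remark about zero can be written out. The two one-sided limits disagree, which is precisely why no single value can be assigned at the boundary itself:

```latex
\lim_{x \to 0^{+}} \frac{1}{x} = +\infty
\qquad\text{while}\qquad
\lim_{x \to 0^{-}} \frac{1}{x} = -\infty ,
\qquad\text{so } \frac{1}{0} \text{ is left undefined.}
```

One can approach the boundary from either side and describe the approach precisely, but the point itself resists definition, which is the sense in which the metaphor is being used here.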

The second distinction is that artificial intelligence possesses the capacity for an extraordinary rate of self-improvement, because it operates on representations that can be redirected toward its own processes. One way to imagine this is to think of a library that continuously rewrites its own catalog while people are still reading. Each time a book is used, the system updates how knowledge is organized, making future searches faster and more precise. Over time, the library improves not by gaining consciousness in the human sense, but by recursively reorganizing its own patterns.
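As a loose sketch of that image, and only as an analogy, the following Python snippet (all names invented) shows a catalog that rewrites its own ordering each time a book is requested, so later searches for popular titles take fewer steps. Nothing in it is conscious or intelligent; the structure simply improves by reorganizing itself in response to its own use.

```python
class SelfReorganizingCatalog:
    """A toy catalog that rewrites its own ordering as it is used."""

    def __init__(self, titles):
        self.shelf = list(titles)             # the current organization
        self.usage = {t: 0 for t in titles}   # how often each title is requested

    def lookup(self, title):
        # A linear scan: its cost depends on how well the shelf is organized.
        steps = 0
        for t in self.shelf:
            steps += 1
            if t == title:
                break
        else:
            return None, steps
        # "Rewrite the catalog": record the use and reorder the shelf accordingly.
        self.usage[title] += 1
        self.shelf.sort(key=lambda t: self.usage[t], reverse=True)
        return title, steps

catalog = SelfReorganizingCatalog(["Iliad", "Zhuangzi", "Metamorphoses", "Theogony"])
for _ in range(3):
    print(catalog.lookup("Theogony"))  # the search gets shorter as the catalog adapts
```

The first lookup walks the whole shelf; every later lookup of the same title finds it in one step, purely because the structure rewrote itself after being used.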

The third point is that artificial intelligence appears morally agnostic. This is not necessarily a flaw in the system itself. Human beings have not yet reached a clear agreement about what counts as moral or immoral. Killing is generally considered wrong, yet killing in self-defense is often seen as more acceptable. If the act of self-defense is accompanied by a strange sense of satisfaction, the moral evaluation becomes even more complicated. Ethical questions tend to involve many layers and invite a great deal of ambiguity, so the grey areas are much broader than a simple combination of black and white. Artificial intelligence, as a socially and culturally simulative instrument, learns from incomplete and contested data. In that sense, it is not that AI is truly morally neutral, but rather that its design prevents it from being both teacher and student at the same time. As a result, it may produce outcomes that humans find difficult to fully reject yet also difficult to fully endorse.

In many ways, morality depends on attribution. Even if artificial intelligence consistently produces correct decisions, the absence of a clear subject of responsibility can create a feeling of discomfort. As AI becomes embedded in more aspects of life, this concern may become harder to address through legal or institutional frameworks alone.

If we move from philosophical reflection to more practical concerns, one of the major issues involves the concentration of power. Arguments can be made from both sides. On one hand, artificial intelligence lowers the cost of accessing knowledge and decision support that were once restricted to experts or institutions. This can narrow the practical gap between specialists and ordinary participants. On the other hand, it can also enable certain individuals or organizations to accumulate greater influence. It is unlikely that power will be concentrated solely in those who claim to understand the technical systems, since at a high level of development no one fully comprehends them. Instead, power may lie with those who control data, since data functions as the essential input that fuels AI systems. A concrete example can be seen in search technologies. In the future, traditional search methods may become less relevant as AI systems generate responses directly from prompts. This raises important questions about which companies have access to training resources and how that access shapes the information environment for everyone.

We can also approach the issue through a philosophical thought experiment. In game theory, strategies are developed to gain an advantage over an opponent. If humanity were to treat artificial intelligence as a kind of opponent and attempt to prove its own worth through authenticity, then perhaps what distinguishes humans is their irrationality, or more bluntly, their capacity for unpredictability. AI seeks to make the unpredictable predictable through pattern recognition and advanced methods. To gain an edge, humans might rely on the very aspects of behavior that resist formalization. Yet unpredictability cannot be intentionally staged. Once it becomes deliberate, it turns into a pattern and loses its distinctive quality. Fortunately, a degree of irrationality may already be built into human nature.
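Game theory has a standard toy example of this point. In the game of matching pennies, any deliberate pattern can eventually be learned and exploited, while a player who genuinely randomizes cannot be beaten more than about half the time. The sketch below, in Python with a crude frequency counter standing in for "pattern recognition", is only meant to illustrate that contrast, not to describe how any actual AI system plays against people.

```python
import random
from collections import Counter

def predictor_guess(history):
    """A crude pattern recognizer: guess the move the opponent has played most often."""
    if not history:
        return random.choice(["heads", "tails"])
    return Counter(history).most_common(1)[0][0]

def play_matching_pennies(opponent_strategy, rounds=10000):
    """The predictor scores a point whenever it matches the opponent's move."""
    history, predictor_wins = [], 0
    for _ in range(rounds):
        guess = predictor_guess(history)
        move = opponent_strategy(history)
        if guess == move:
            predictor_wins += 1
        history.append(move)
    return predictor_wins / rounds

def stubborn_player(history):
    # A deliberate, fully patterned strategy: always heads.
    return "heads"

def unpredictable_player(history):
    # A genuinely random strategy, which leaves no pattern to learn.
    return random.choice(["heads", "tails"])

print(play_matching_pennies(stubborn_player))       # close to 1.0: the pattern is exploited
print(play_matching_pennies(unpredictable_player))  # close to 0.5: nothing to exploit
```

The catch the essay points to is visible even here: randomness only protects the player as long as it is not itself a performance that settles into a detectable pattern.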

The process of studying and developing scientific frameworks aims to make the world more explainable and therefore more predictable. As one becomes more educated, it may appear that many aspects of life follow recognizable patterns. This can create the impression that traditional learning is losing its significance, since no matter how skilled a person becomes at analytical reasoning, they cannot surpass artificial intelligence in certain domains. However, this does not mean that studying is meaningless. Instead, it suggests that intuition becomes increasingly important. To form strong intuition, one must gather a wide range of information and experiences. Studying in this broader sense includes all forms of learning and reflection. The final and perhaps most crucial element is a form of confidence, a belief that human beings can still navigate the world in ways that exceed purely computational systems. This belief cannot be proven within existing frameworks, yet many people choose to live by it, whether they interpret it as a feature of social order or as part of a divine design.

Following this line of thought raises another question. If irrationality is central to what makes us human, would evolutionary theory eventually consider humanity a less advanced form of life? Artificial intelligence may indeed generate an existential crisis, but it does not necessarily imply that humans will disappear as a species. One reason for this confidence lies in the complexity of the human body itself. Despite advances in robotics, it remains difficult to imagine machines achieving the same level of organic integration and fluidity found in biological systems. Even a basic understanding of biology reveals how many processes within the body remain unexplained. For AI to completely surpass humans, it would need to take physical forms that operate with comparable smoothness and adaptability. It is not easy to be convinced that such a level of development is inevitable.

At times, it is helpful to turn to ancient philosophies for guidance. In the Daoist text Zhuangzi, there is a well-known warning that people who become overly fascinated with clever devices may develop what is called a mechanical heart-mind. This refers to a state of artificial calculation and strategic thinking that separates individuals from the natural flow of the Dao. Such a mindset can lead to rigidity, mental clutter, and a restricted way of living. The deeper question then becomes what the natural flow of the Dao actually is. Few people could claim to define it comprehensively, yet many believe it can be sensed intuitively. It may exist at the same kind of philosophical limit that cannot be fully expressed in language.

One might even wonder how ancient thinkers were able to anticipate issues that resemble modern technological concerns. Perhaps human civilizations move in cycles, reaching moments of intense development and then returning to simpler conditions before beginning again. Even if this idea is speculative, it suggests that present anxieties are not entirely new. There may still be room for optimism. Another Greek myth offers a more hopeful image. Pygmalion, a sculptor from Cyprus who felt disillusioned with real relationships, created an ivory statue representing his ideal companion. He treated the statue as if it were alive, speaking to it and caring for it. Eventually the goddess Aphrodite granted the statue life. The artificial figure, often called Galatea, became his wife, and the rest of the story unfolds in a calmer and more harmonious way.

