This blog post examines Yuval Noah Harari’s concerns about the rise of the numerical system and explores how it alters human thinking and social structures.
Yuval Noah Harari’s ‘Sapiens: A Brief History of Humankind’ offers critical insights into the numerical system. Harari worries that human thinking has shifted from natural, associative patterns to a bureaucratic, compartmentalized one. He specifically points out that the introduction of numerical systems has intensified the tendency to think about many things in numerical terms, and he fears that with the advent of computers, people are being taught to speak, feel, and even dream in a numerical language that computers can understand. He even describes the numerical system as a writing system that has rebelled against its human creators, and argues that the emergence of artificial intelligence is itself a product of this numerical system. So what exactly does the author mean by “thinking in a numerical way” and “speaking, feeling, and dreaming in a numerical language,” and why does it frighten him?
Looking at the Korean education curriculum, mathematics occupies a significant portion. In Korea, regardless of whether students follow the liberal arts or science track, they take math classes at least three times a week in high school. That intensity, however, ends with high school: depending on their major or profession, many people never again encounter mathematics beyond basic arithmetic. Considering this, ‘learning and feeling in numerical language’ must mean more than simply learning mathematics.
As much of the world becomes digitized, computers are ubiquitous to the point where it’s hard to find someone who doesn’t use one. From mathematical calculations to writing literary fiction, a computer makes things incredibly convenient. For this reason, computer-related certifications are considered fundamental for job preparation, and knowing how to use computer programs is essential. So, does ‘feeling and dreaming in the language of numbers’ mean learning and mastering computer programs? No, it does not. You often don’t need to know numbers at all to use them: someone who writes a novel in a word processor doesn’t thereby imagine and dream in numbers.
The true meaning of ‘feeling and dreaming in numerical language’ is to think about and perceive situations or phenomena numerically. Consider, for instance, planning to watch a movie. There isn’t much time before it starts, and the poster alone can’t tell us whether we’ll be satisfied or disappointed afterward. In this situation, what do we look for to gain certainty? The ratings given by people who have already seen the movie. Seeing the scores others gave makes us feel a little more confident. In this way, people assign ‘numbers’ to many things. We’re becoming accustomed to expressing and accepting intelligence as IQ, safety as ‘accident rates,’ and countless other things numerically. When a company formulates a new strategy, what it must present isn’t abstract rhetoric but precise figures like ‘success rate’ or ‘profit margin.’ Efforts to quantify abstract concepts like ‘poverty,’ ‘happiness,’ and ‘honesty’ are now also underway.
The author’s fear of the numerical system stems from two main reasons. First, the numerical system is unnatural compared to the way humans fundamentally think; humans did not evolve to think in numerical terms. Second, there is the fear of artificial intelligence, which can be considered the ultimate product of the numerical system. This article will focus on the latter.
The reasons AI evokes fear can be broadly divided into two categories. First, there is the fear, often depicted in novels and films, that AI might become independent of humans and attack them. Stories like ‘Avengers: Age of Ultron’ or ‘Transformers’, where AI robots threaten humanity with extinction, are prime examples. An AI attack on humans is certainly feasible. Research is already underway on weapons like unmanned combat aircraft that use AI, and global efforts to build robots that move like humans foreshadow humanoid war robots equipped with AI. However, this fear ultimately rests on the premise that humans would build robots and develop AI specifically to kill people. Since AI is ultimately created by humans, it is unlikely to attack humans unless it is designed with that intent from the outset. Of course, undesirable outcomes can occur even without human intent: in one novel, when asked how to protect the environment, an AI responds that humans must disappear. Catastrophic results remain possible even when no human intends them. Therefore, we must be careful not to lose control over AI entirely, and ensure it does not act autonomously without human intervention.
The problem of humanity’s extinction by AI is still a distant concern. The more immediate fear is that AI will take away jobs. The fundamental cycle of capitalism—labor → income → consumption → corporate investment → employment → labor—could be severed by AI, potentially bringing the economy to a halt. Already, many jobs have disappeared because of AI, and more are vanishing now. If AI continues to advance, there will likely be almost nothing humans can do better than AI, and job losses may be inevitable. However, a reduction in jobs isn’t necessarily all bad. If the number of workers decreases while production remains unchanged, a utopia might emerge in which everyone can live without working. Admittedly, such an outcome is as remote as the possibility of AI destroying humanity. Still, the disappearance of jobs need not be read pessimistically: whether the future becomes a utopia or a dystopia depends on whether the profits generated by AI are monopolized by a few or shared with everyone.
We have examined the fears surrounding artificial intelligence. These fears are not without merit: AI could destroy humanity, or it could take away jobs and cause mass unemployment. However, both problems depend entirely on humanity’s choices. If humans use AI with good intentions and share the profits it generates, we can build a better future. In this sense, I find myself wondering whether what we should truly fear is not AI itself, but the selfish desire to ‘live well and prosper alone’.