This blog post explores whether Dataism can replace human-centered value systems and argues that, before it can, it must resolve philosophical debates about free will and self-formation.
According to Yuval Noah Harari, a religion is an intersubjective reality that legitimizes human norms and values by presenting social structures as reflections of superhuman laws. A religion must therefore not only provide a worldview but also offer a standard for value judgments. An example that satisfies the first condition but fails the second is Einstein’s theory of relativity. Einstein asserted that only the speed of light is absolute, while measurements of time and space depend on the observer’s frame of reference. This idea, supported by extensive experimental evidence, has been accepted by physicists, engineers, and even many laypeople. As with other religions, the theory has its critics, since relativity is known to stand in theoretical tension with quantum mechanics, another paradigm of modern science. Yet relativity is not called a religion, because it provides no standard for value judgments.
Dataism, as Yuval Noah Harari describes it in “Homo Deus: A Brief History of Tomorrow,” provides a paradigm, a way of viewing the world, centered on increasing the efficiency of data processing. The life sciences have begun to view humans as systems that process data; history is understood as a process of change toward ever more efficient data processing; and political science explains political structures as mechanisms for collecting and analyzing information. However, it seems highly unlikely that Dataism will, within the few decades Harari suggests, go beyond providing a paradigm for viewing the world and replace humanism as the basis for value judgments. The grounds for this are the technological limits of AI development and philosophical questions concerning free will.
One primary reason for expecting Dataism to supplant humanism is the prospect that humans will become obsolete within economic and political systems. From a Dataist perspective, an entity with data-processing capabilities surpassing those of humans, who have historically been the best available data processors, will eventually emerge. That entity will hold greater value within the system, leaving humans with little value of their own. Humanism places the highest value on each individual human, whereas Dataism evaluates value by contribution to data processing. This shift is already visible today: Google has predicted the spread of influenza faster than U.S. health authorities, supercomputers analyze vast observational datasets faster than humans to forecast tomorrow’s weather, and AlphaGo defeated Lee Sedol by analyzing Go positions more rapidly. In other words, from the perspective of Dataism, humans already hold less value than machines in the realm of data processing.
However, this line of reasoning overlooks a crucial fact: all these machines require human users. Could Dataism truly claim that calculators or PowerPoint are more valuable than humans simply because they process data faster? Probably not. The work done with these tools is not performed by the machines or programs themselves but by the humans who use them. One might counter, “How can you compare an electronic calculator to AlphaGo?” Yet from the standpoint of data-processing capability, both outperform humans in their respective domains (calculation, Go), and both require human operators.
To see why artificial intelligence and machine learning are not fundamentally different from classical algorithms, we need to understand why these newer technologies were developed. Before machine learning emerged, one of the biggest questions in computer science was: “Computers outperform humans at so many tasks, so why do they struggle with pattern recognition?” Many approaches were tried, and the most successful of them are what we now call artificial intelligence or machine learning. In other words, the difference between these technologies and classical algorithms lies mainly in the domains where they are applied. Their operating principles differ, but from the user’s perspective they are no more different than a calculator and PowerPoint.
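To make the “user’s perspective” point concrete, here is a minimal sketch in Python. The task, data, and names are illustrative assumptions of mine, not anything from the original argument: a hand-coded rule and a trained scikit-learn model are both just tools that a human invokes on an input and reads an answer from.

```python
# A toy comparison: a classical, hand-written rule versus a learned model.
# From the user's side, both are called the same way on an input.

from sklearn.tree import DecisionTreeClassifier

def classical_classifier(x: float) -> int:
    # Explicit, human-authored rule (the "calculator" style of tool).
    return 1 if x >= 0 else 0

# A rule learned from examples (the "machine learning" style of tool).
X_train = [[-2.0], [-1.0], [1.0], [2.0]]
y_train = [0, 0, 1, 1]
model = DecisionTreeClassifier().fit(X_train, y_train)

# Either way, a human supplies the input and interprets the output.
print(classical_classifier(1.5))        # -> 1
print(model.predict([[1.5]])[0])        # -> 1
```

Whether the rule was written by hand or learned from data, neither tool does anything until a person puts it to use, which is exactly the point above.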
This argument, however, applies only to weak artificial intelligence, which is usable only within specific domains and lacks self-awareness. Strong artificial intelligence, which could learn across broad domains, possess self-awareness, and create novel things without predefined rules or human users, might genuinely supplant humanism and diminish human value. So when might strong AI become a reality? Fortunately, it appears unlikely in the near future. The evidence is the extremely slow progress in strong AI to date: while weak AI has advanced dramatically in recent years, strong AI has seen almost none. Computer memory and processing power remain vastly limited compared to our brains, which consist of roughly 100 billion neurons (nerve cells) and 100 trillion synapses (connections), numbers far too immense for computers to replicate directly. Moreover, our limited understanding of how the brain gives rise to consciousness and emotion makes realizing strong AI harder still.
So if strong AI is eventually perfected, however long that takes, and the day arrives when humans are decisively surpassed in data-processing capability, could Dataism completely supplant humanism? The realization of strong AI is only one of the great mountains Dataism must climb. Another is the question of free will itself. Yuval Noah Harari explains in his book that, according to modern life sciences, free will does not exist. He predicts that current liberal humanism, which relies heavily on the existence of free will, will inevitably collapse, and argues that Dataism will emerge to fill the void. But is this discussion sufficient?
The author defines free will as something that is neither deterministic nor random, and from this deduces that it cannot exist. But could “free will” instead be treated as something perceptible, akin to pain or sadness? That is, since we can feel the existence of free will just as we feel pain or sorrow, can we not define it as a subjective reality? Even if the emotion of sorrow can be induced by external mechanisms, or is merely the firing of particular neurons with no substance behind it, we do not say that sorrow does not exist. Similarly, even if free will is scientifically disproven, or is merely a byproduct of mental processes, the fact that individuals can feel it means that feeling itself could be defined as free will.
Indeed, liberal humanism seems to rely far more on free will as a subjective feeling than on free will as a scientifically demonstrable fact. For theistic religions, legitimacy depends strongly on the existence of God: if God’s existence were logically disproven, the very meaning of the religion would disappear. Humanism, as an intersubjective reality, likewise gains its meaning within the narratives people share. But unlike theistic believers, who share belief in a God explicitly defined in scripture (the Bible, the Quran, and so on), liberal humanists share the “free will” they actually experience. In other words, even if the author argues on various grounds that free will does not exist, this does not provide a logical basis for the collapse of liberal humanism.
Finally, the last mountain Dataism must climb to become a truly dominant religion is self-formation. While this article will not address the relationship between the self and strong artificial intelligence, it is worth remembering that no religion has ever become dominant without grounding a genuine sense of self. All mainstream religions carry an identity of “I am me,” which unites their members and builds strong social structures. Even religions such as Buddhism, which seek to transcend the identity of “I,” are no exception, in that they exerted social influence through such an identity. Like free will, the self as a subjective feeling may be scientifically refutable. But, as with free will, if it is genuinely felt and provides a benchmark for the identity of “I,” it gains meaning within the narratives people share. Can Dataism, as it stands, truly foster such self-formation?
Therefore, for Dataism truly to supplant existing humanism, strong artificial intelligence must first be realized, and the philosophical and scientific debates surrounding free will and self-formation must be resolved. Without meeting these conditions, Dataism will remain nothing more than a paradigm for viewing the world.