This blog post explores programs that evolve autonomously and select their own paths, examining both the potential and risks this technology may bring.
One of the most notable trends in technology today is the active pursuit of redefining the laws of life within the ‘inorganic realm’. Computer viruses are often cited as a prime example: they evade detection by antivirus programs, competing with other viruses while replicating themselves. During replication, copying errors occur, producing a kind of mutation. Over time these mutations accumulate, eventually giving rise to viruses that appear entirely different from their initial forms. The core concept of inorganic evolution programs is to actively harness this phenomenon, in which outcomes emerge that bear little relation to the initial state.
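The replicate-with-error loop described above can be pictured with a minimal sketch (a toy illustration, not anything resembling a real virus): a string is copied generation after generation, and each character has a small chance of being miscopied. The names and parameters here (`replicate`, the 5% error rate, the starting string) are arbitrary choices for the illustration.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def replicate(genome: str, error_rate: float = 0.05) -> str:
    """Copy a 'genome', occasionally miscopying a character (a mutation)."""
    return "".join(
        random.choice(ALPHABET) if random.random() < error_rate else ch
        for ch in genome
    )

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

original = "replicating program"  # hypothetical initial form
genome = original
for generation in range(200):
    genome = replicate(genome)

# After many generations of accumulated copy errors, the descendant
# bears little resemblance to its initial form.
print(original)
print(genome)
print(f"{hamming(original, genome)} of {len(original)} characters changed")
```

No single copy step changes much, yet the end state is effectively unrelated to the start, which is the phenomenon the paragraph says inorganic evolution programs try to harness deliberately.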
“Many programmers dream of creating a program that can learn and evolve independently of its creator. In such a case, the programmer would merely be the first mover, the initial spark. The resulting program would then evolve in directions neither its creator nor any other human could ever have envisaged.”
— Yuval Noah Harari, Sapiens: A Brief History of Humankind
The form of the ‘program’ referred to in inorganic evolution programs is highly diverse. It ranges from simple computer code to structures directly connected to the human brain, forming what is called a ‘brain-network’. This naturally raises a question: can an artificial program fully imbued with human experience, thought, and knowledge still be classified simply as ‘inorganic’? If the human mental structure is fully contained within the program, can we definitively assert that it is a separate entity from humans? The answer remains ambiguous and open to much debate. Therefore, to keep the discussion in this article clear, we will focus on the ‘simplest possible form’ of program: one where the creator designs the program’s structure, sets only the initial input values, and allows the program to evolve on its own thereafter.
This design approach is implemented either by the creator intentionally embedding minor errors or by allowing the program to deviate from its set path and produce unexpected outcomes. Among programmers, creating such programs is regarded as a kind of romantic ideal. The creator merely provides the initial spark; the program then changes and evolves autonomously, making it nearly impossible to predict its direction or limits. The question is whether we should continue this research if the moment arrives when the program’s development completely surpasses human cognitive capabilities.
What is certain is that encountering new mechanisms beyond human imagination is tremendously appealing, both technologically and academically. It could compress research that would take decades with conventional approaches and might even open doors to possibilities humans could not unlock on their own. Nevertheless, if such an evolved program moves into realms beyond human use, is that truly meaningful progress? No matter how brilliant the mechanism, if we can neither understand nor utilize it, it cannot be counted as a human achievement. Ultimately, it becomes a ‘lock without a key’: knowledge we can only observe, not harness. Investing massive resources in such technology is not only inefficient but could prove wasteful in the long term.
Furthermore, a problem arises when such programs gain the ability to make judgments on their own. If a program can autonomously choose a direction, we cannot rule out that its choices will pose a threat to humanity. What programmers currently anticipate is akin to dropping a single ink droplet onto white paper and observing how it spreads. However, as time passes and the program develops its own ‘interests’, it may seek to eliminate elements it perceives as threats, and it is entirely conceivable that humans would be the first target. If a program aims at survival and replication, it might try to control or restrict human activity along the way.
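The ink-droplet level of predictability can be made concrete with a small sketch (a standard random-walk model, chosen here as an analogy rather than anything from the original text): many ‘ink particles’ start at one point and each takes random steps. The aggregate spread is statistically predictable, but no individual trajectory is.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def diffuse(n_particles: int = 1000, n_steps: int = 100) -> list:
    """Drop 'ink particles' at the origin; each takes random unit steps."""
    positions = []
    for _ in range(n_particles):
        x = y = 0
        for _ in range(n_steps):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x += dx
            y += dy
        positions.append((x, y))
    return positions

positions = diffuse()

# The *statistics* of the spread are predictable (for this walk the mean
# squared displacement is roughly the number of steps), while any single
# particle's path is not -- the analogy for a program whose behaviour we
# can only characterise in aggregate, not direct.
msd = sum(x * x + y * y for x, y in positions) / len(positions)
print(f"mean squared displacement after 100 steps: {msd:.1f}")
```

The worry raised in this paragraph is precisely the moment this analogy fails: ink never develops interests, whereas a self-evolving program might.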
To head off such concerns, could we design programs that never form interests of their own? If we restricted them from the start to purely mechanical computation, without concepts like profit or survival, the risk factors might be eliminated. Theoretically this sounds plausible, but is ‘change without interests’ truly possible? For something to change in a particular direction, there must be a reason justifying that direction: when a direction is chosen, we must be able to explain why it was chosen, which means the program is judging by some criterion. In other words, even changes we perceive as ‘random’ rest on internal logic and interests, however difficult those are to define.
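The distinction drawn here can be sketched with a toy evolutionary loop (an illustration of the argument, not code from the source). The `fitness` function plays the role of the program's ‘interest’; the target string and all other names are hypothetical. Without a selection criterion, change is pure drift; with one, change acquires a direction:

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

TARGET = "survive"  # a hypothetical built-in 'interest'
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def mutate(genome: str, rate: float = 0.1) -> str:
    """Copy the genome with occasional random character changes."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in genome
    )

def fitness(genome: str) -> int:
    """The 'interest': how closely the genome matches the survival target."""
    return sum(a == b for a, b in zip(genome, TARGET))

def evolve(selective: bool, generations: int = 300) -> str:
    genome = "aaaaaaa"
    for _ in range(generations):
        candidate = mutate(genome)
        # Without selection, every change is kept: a pure random walk.
        # With selection, only non-worsening changes survive.
        if not selective or fitness(candidate) >= fitness(genome):
            genome = candidate
    return genome

drift = evolve(selective=False)    # 'change without interests'
directed = evolve(selective=True)  # change guided by a criterion

print("drift:   ", drift, fitness(drift))
print("directed:", directed, fitness(directed))
```

The drift run wanders aimlessly, while the directed run climbs toward the target, which illustrates the paragraph's point: the moment change has a direction we can explain, some criterion (an ‘interest’) is doing the judging.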
From this perspective, designing a program that completely excludes interests is practically an unattainable ideal. Even if possible, it’s uncertain whether such a restricted program could drive meaningful evolution beyond human imagination. A program’s autonomy signifies its potential, but the uncontrollability that arises when autonomy is maximized is precisely what we must guard against.
Of course, inorganic engineering remains an unfinished field. Its potential and risks cannot be rashly determined at this stage. However, the fact that this technology is deeply intertwined with cyberspace is undeniable. In modern society, we already store most information in cyberspace, and the production and sharing of knowledge also occur through digital networks. In this context, if autonomously evolving programs distort or alter knowledge within cyberspace, the repercussions could be unimaginably vast. This is because it could threaten the very ecosystem of human knowledge, going beyond mere technological advancement.
Ultimately, creating an autonomously evolving program is akin to embarking on an experiment whose end is unknowable. Programs created solely to pursue endless advancement could one day escape our control and trigger another crisis. While technological progress has brought immense benefits to humanity, the entity determining the direction of that progress must remain human. Even creations possessing infinite potential must never be allowed to lead us toward a dystopian outcome.