This blog post discusses how the scope of information we must provide could expand with the advancement of algorithms, and the resulting privacy concerns.
Introduction
Yuval Noah Harari’s book “Homo Deus” stimulates the reader’s mind in multiple ways. As the chapters progress, he paints an increasingly unsettling future. With the advancement of algorithms such as artificial intelligence, things once considered uniquely human are beginning to be replaced. He presents an extreme future in which we surrender all our data to algorithms and are ultimately absorbed by them. The data religion he describes, Dataism, pursues the ideal of complete openness and freedom of information. Is such a future truly feasible? While this conclusion may seem overly extreme, this article argues that a future in which we provide all of our information to algorithms is not impossible given current trends.
The Ever-Expanding Information Demands of Algorithms
For humans to decide on any action, a judgment about that action is necessary, and such judgment is based on knowledge gained through past experience and learning. For humans this sounds like a self-evident truth; for algorithms it is even more strictly true, because algorithms operate on nothing but logical computation and data. The concept of a “function,” as used in mathematics and computer science, makes this concrete: a function F(x) requires an input x before it can return a corresponding output value. A well-designed algorithm must therefore be given the information it needs to perform its intended task. Consequently, an algorithm designed to assist humans will be engineered to request whatever information it needs in order to help us.
Consider, for example, a service providing diet and health information. This service will require information from the user, such as height, weight, eating habits, and preferences. Without this data, the service cannot provide useful, tailored information to the user.
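The function analogy can be made concrete with a toy sketch. Everything below is hypothetical and illustrative (the names, the calorie formula, and the thresholds are invented for this post, not taken from any real service); the point is simply that without the user's height, weight, and dietary preference as inputs, the function cannot produce a tailored output at all.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """The inputs the hypothetical service demands from the user."""
    height_cm: float
    weight_kg: float
    is_vegetarian: bool

def daily_calorie_target(profile: UserProfile) -> int:
    """Toy calorie estimate; the formula is illustrative, not medical advice."""
    bmi = profile.weight_kg / (profile.height_cm / 100) ** 2
    base = 2000
    if bmi > 25:
        base -= 300   # suggest a mild deficit
    elif bmi < 18.5:
        base += 300   # suggest a mild surplus
    return base

def recommend_meal(profile: UserProfile) -> str:
    """F(x): no profile x, no recommendation F(x)."""
    target = daily_calorie_target(profile)
    diet = "vegetarian" if profile.is_vegetarian else "standard"
    return f"{diet} plan, ~{target} kcal/day"
```

The signature itself is the argument: the service's output is a function of personal data, so demanding that data is not incidental to the design but constitutive of it.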
However, there is a blind spot here: algorithms are constantly evolving. The assistance they provide is becoming ever broader and more detailed. Return to the human perspective for a moment: for a person to make more complex and profound judgments, they must first secure information of greater quantity and depth. The same applies to algorithms. To receive better services from these evolving algorithms, the information we provide to them must become correspondingly more extensive and diverse.
The problem lies in how far the scope of this information provision will expand. If health information services advance to the point where they can store customer data and provide long-term, continuous solutions based on big data and artificial intelligence, we will likely need to provide not only our identity and personal information but also specific details about our private lives and physical condition to obtain suitable results. In this scenario, there will undoubtedly be people who feel uncomfortable about having to provide excessive private information.
Information that slips past our consciousness
However, we must not overlook the fact that we are already handing our private, highly personal information to countless services. Consider YouTube, the world’s leading video platform. After creating an account and using it for just one week, YouTube already understands our preferences: the recommended tab fills with content similar to videos we have watched, and videos related to those we have marked ‘Not interested’ disappear from the home feed. In this way, we provide our private information unconsciously while using algorithms. This is the first key point: while we sometimes explicitly consent to sharing information with algorithms, most of the data is provided implicitly during ordinary use of a service. Have we ever asked a social network, “How about this kind of post?” Have we ever requested that Google Ads tailor those annoying website ads to our tastes? The algorithm simply displays results based on the information it has already gathered; by the time we notice, it has the information. This shows that, despite the common belief that disclosing information requires consent, obtaining that consent is not an essential step in acquiring it.
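The implicit collection described above can be sketched in a few lines. This is not YouTube's actual system (their recommendation pipeline is vastly more complex and not public); it is a hypothetical toy showing the structural point: the profile is built as a side effect of watching, with no step where the user is asked anything.

```python
from collections import Counter

class RecommenderSketch:
    """Toy recommender: learns preferences purely from usage, never asks."""

    def __init__(self) -> None:
        self.interest = Counter()  # filled implicitly, no survey, no prompt
        self.blocked = set()       # 'Not interested' signals

    def record_watch(self, category: str) -> None:
        # Every view silently updates the profile as a side effect.
        self.interest[category] += 1

    def mark_not_interested(self, category: str) -> None:
        self.blocked.add(category)

    def home_feed(self, n: int = 3) -> list:
        # Rank by accumulated watch counts, hiding blocked categories.
        ranked = [c for c, _ in self.interest.most_common()
                  if c not in self.blocked]
        return ranked[:n]
```

Note that `record_watch` is called by the act of watching itself; consent never appears anywhere in the control flow, which is exactly the point the paragraph makes.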
Few would disagree with the book’s remark that “we cannot catch up with the rabbit called technology.” As argued above, as algorithms advance, the scope of information we must provide will only expand further. Won’t the ever-widening scope and accelerating speed of this information provision, driven by technological progress, become too difficult to manage through our own decision-making? Even now, as in the examples above, we unconsciously provide information to algorithms, whether corporate-owned or not, and this situation will not change significantly as algorithms grow more advanced.
What if we could resist the tide of technological progress and protect our information?
We previously examined how we must provide our information in exchange for the excellent algorithms that greatly assist our lives. This occurs unconsciously for users. So, let’s shift our perspective to a conscious level. If a time comes when people become extremely conscious of this information disclosure, would they hesitate or even refuse to disclose it? Unfortunately, probably not.
Human desire is indispensable to algorithmic advancement. Humanity endlessly pursues convenience, and algorithms deliver it to an indescribable degree. Recognizing this convenience, people freely disclose their information to them, so future algorithms will likely offer far greater convenience than today’s. At that point, the question of whether people consent may itself become meaningless: the usefulness will be so immense that information disclosure will simply be taken for granted. If technological advancement is endless, and people keep consenting to provide information because they acknowledge its convenience and usefulness, this outcome follows almost by itself.
How many people would actually veto this tide of progress to protect their own information? Even now, you cannot register for a useful website without agreeing to terms such as providing member information. Rather than give up on the service, people choose to accept those terms, register, and use it. The future will be no different. A society that recognizes overwhelming technological power is far more likely to hand over its information and become a customer of algorithms that make life exponentially more convenient than to reject them.
Conclusion
If the information disclosed to algorithms is data set free, these algorithms will form a global network, granting information even greater freedom. The vision of all information being publicly accessible worldwide is no longer unthinkable. We might even arrive at the somewhat chilling conclusion that technological advancement is the only legal way to disclose and acquire everyone’s information.