Over the past few decades, human society has experienced several revolutionary technological leaps, from the emergence of computers to the rise of mobile devices. Now another leap is upon us: artificial intelligence (AI) has become one of the most transformative technologies of the 21st century. Many people may not yet feel its impact personally, and AI can seem like a topic that only experts and technologists discuss fervently; in reality, it has already begun to influence many aspects of daily life. What sets this technological leap apart is not only the unprecedented attention it has attracted, but that, for the first time since nuclear weapons, humans fear a technology in a way they have never feared any other.
When people talk about AI today, much of the curiosity revolves around when AI will reach "human intelligence," that is, the advent of Artificial General Intelligence (AGI) or even Artificial Superintelligence (ASI). Many concerns about AI are tied to AGI and ASI, and some people genuinely worry about the dangers they could pose. Most people, however, have not realized that the risks AI poses to humanity are already in front of us and imminent. These dangers may manifest long before superintelligence arrives, assuming superintelligence is still some way off.
Contrary to the common belief that AI's existential threats will emerge only with the advent of AGI or ASI, some observers have noted that the rapid advance of narrow AI (AI focused on specific tasks) could pose a significant threat to human civilization much sooner. Long before we achieve AGI or ASI, narrow AI may overwhelm human systems, economies, and even social structures in certain areas, causing widespread disruption.
What Are Narrow AI, AGI, and ASI?
Let’s first understand the differences between narrow AI, AGI, and ASI. Narrow AI, also known as weak AI, is designed to perform a specific task, such as facial recognition, natural language processing, or playing Go. The AI systems we are familiar with and use widely today are all examples of narrow AI. Narrow AI is therefore not a technology of tomorrow but of today, and when we discuss its dangers, we are talking about threats that could arise at any moment.
Narrow AI systems are typically highly specialized and can outperform humans in their specific domains. Many people have not yet realized that this is already a reality. AI has decisively surpassed humans at Go, a game once considered too complex for machines, and since the rise of generative AI systems such as OpenAI's ChatGPT, we must also acknowledge that generative AI far surpasses the average human at many of the tasks to which it is applied.
AGI, or Artificial General Intelligence, also known as strong AI, can understand, learn, and apply knowledge across a wide range of tasks, as a human can. ASI, or Artificial Superintelligence, would surpass human intelligence in every respect, potentially becoming a super-intelligent entity capable of solving extremely complex problems. Although generative AI already outperforms most people within its current application scenarios, its scope remains limited, so it does not yet qualify as AGI.
While AGI and ASI remain theoretical for now, narrow AI is already all around us, integrating into our daily lives and advancing rapidly. The issue is not only that these systems are becoming more powerful, but that their widespread application could allow narrow AI to surpass human capabilities in critical areas, with unforeseen and uncontrollable consequences.
The Current State of Narrow AI
In recent years, driven by continuous advances in machine learning, deep learning, and data analysis, narrow AI has made remarkable progress. Systems like AlphaGo have demonstrated superhuman ability at Go, a game that demands intuition and strategic thinking. Natural language processing models, such as OpenAI's GPT series, have shown remarkable skill at understanding and generating human language, even producing text that mimics human writing styles. In finance, AI algorithms manage vast investment portfolios, executing trades at speeds far beyond human capability and efficiently detecting fraud. In healthcare, AI is used to predict patient conditions, recommend treatment plans, and even assist in surgeries with greater precision than human hands alone.
These examples show that narrow AI has already surpassed human abilities in specific, well-defined tasks. As these systems become more advanced and widely applied across society, they may begin to affect broader aspects of human activity, leading to consequences that are difficult to predict or control. This is the imminent danger we are referring to.
(To be continued)