Smart ≠ Wise: AI Needs More Than Just Scientific Laws

We previously discussed that developing true Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI) requires a broad and comprehensive knowledge system, one that encompasses even contradictory or opposing viewpoints; relying solely on scientific and technological knowledge is not enough to cultivate true intelligence. Today, we turn to a more specific question: even within science and technology, is it enough to train AI on established scientific theories alone? Or should we also enable it to understand how scientific knowledge has evolved over time?

At its core, this question comes down to what “intelligence” truly means. Human intelligence is not just the capacity to learn and apply existing knowledge; more importantly, it is the ability to create new things, new applications, and new theories. While we cannot assert that humans are the only creative beings in the universe, within the world we know, human creativity is undoubtedly unique. Even the most intelligent humans do not acquire knowledge simply by identifying problems, finding answers, and accepting them. Instead, knowledge emerges through a continuous process of trial and error, revision, and sometimes the complete overturning of old theories in search of better explanations. The entire edifice of science and technology rests on this iterative, long-term accumulation. The way human knowledge has developed suggests that if we want AI to be creative, we should not just feed it the final versions of scientific laws; we should also let it learn the process through which knowledge and innovative thinking evolve.
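To make this concrete, here is a minimal sketch, in Python, of what a “process-aware” training record could look like: each theory is stored with its revision trajectory rather than as a single final answer. The `TheoryRecord` and `Revision` classes and the sample entries are hypothetical illustrations, not an existing dataset format:

```python
from dataclasses import dataclass, field

@dataclass
class Revision:
    """One step in a theory's evolution: what was claimed, and why it changed."""
    year: int
    claim: str      # the theory as understood at this point
    evidence: str   # the observation or argument driving this step
    status: str     # e.g. "superseded", "contested", "accepted"

@dataclass
class TheoryRecord:
    """A training example that keeps the path, not just the destination."""
    name: str
    final_statement: str
    revisions: list[Revision] = field(default_factory=list)

# Celestial mechanics encoded as a trajectory of revisions rather than
# a single "correct answer":
gravity = TheoryRecord(
    name="celestial mechanics",
    final_statement="General relativity describes gravity as spacetime curvature.",
    revisions=[
        Revision(150, "Planets orbit Earth on epicycles (Ptolemy).",
                 "Fits naked-eye observations; useful for navigation.", "superseded"),
        Revision(1543, "Planets orbit the Sun (Copernicus).",
                 "Gives a simpler account of retrograde motion.", "superseded"),
        Revision(1687, "Universal gravitation (Newton).",
                 "Unifies falling bodies and planetary orbits.", "superseded"),
        Revision(1915, "Gravity is spacetime curvature (Einstein).",
                 "Explains Mercury's anomalous perihelion precession.", "accepted"),
    ],
)

for r in gravity.revisions:
    print(f"{r.year}: {r.claim} [{r.status}]")
```

A model trained on records like this sees not only where a theory ended up but also which observations forced each revision, which is precisely the signal a “final answers only” corpus throws away.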

Scientific theories are, in essence, systematic summaries of nature. They undergo extensive experimental validation, deduction, and revision before settling into relatively stable models and frameworks. History shows, however, that no theory remains correct forever. Newtonian mechanics, with its mathematical rigor and experimental accuracy, was once regarded as the ultimate theory of motion. Yet by the late 19th and early 20th centuries, discoveries in electromagnetism, thermodynamics, and the nascent quantum theory began to shake its foundations, and Einstein’s special and general relativity superseded Newtonian physics at high speeds and in strong gravitational fields. Even relativity cannot explain everything; dark matter and quantum gravity remain open problems. If AI only learns Newtonian mechanics and relativity without understanding how these theories were developed, it may lack the ability to draw analogies and make inferences, leaving it ill-equipped to handle future scientific revolutions or to understand how human society reacts to scientific discoveries.

There are countless historical examples of this phenomenon. For instance, before Copernicus proposed the heliocentric model, Ptolemy’s geocentric theory had dominated for over a thousand years and was even considered “absolutely correct.” Despite being fundamentally flawed, the geocentric model was still highly practical at the time, widely used in astronomical calculations and navigation. Later, Galileo used a telescope to observe four moons orbiting Jupiter, directly challenging the geocentric model. However, the scientific community did not immediately accept the heliocentric model because the shift was not just about changing a mathematical model—it involved a complete upheaval of the prevailing worldview. If AI simply learns that “the heliocentric model is correct” without understanding the historical context of its acceptance, it may struggle to comprehend future paradigm shifts, such as modifications to General Relativity or new discoveries in quantum cosmology. Even worse, when faced with new theories, it may fail to adapt or make sound judgments.

Science is not just a process of accumulating correct knowledge; its progress depends heavily on identifying and correcting errors. The phlogiston theory is a classic example: in the 18th century, scientists used it to explain combustion, until Lavoisier’s oxygen theory overturned it. Today we know the phlogiston theory was wrong, but without studying why it was widely accepted at the time and how it was eventually refuted, we cannot fully grasp how science advances. Similarly, if AI only learns the oxygen theory and remains unaware of phlogiston, it may struggle to recognize and challenge erroneous theories that persist in other scientific fields.

This issue becomes even more pronounced in medicine. For most of the 20th century, gastric ulcers were widely believed to be caused by stress and excessive stomach acid, so treatment focused primarily on reducing acid levels. Then, in the early 1980s, Australian scientists Barry Marshall and Robin Warren discovered that the bacterium Helicobacter pylori was the real culprit behind most gastric ulcers, a breakthrough that completely changed medical understanding and treatment. An AI trained only on the prevailing medical theories of the 1960s might dismiss the possibility of bacterial infection outright. Similarly, in rapidly evolving fields such as cancer research and neuroscience, shifts in scientific understanding mean that what is considered “correct” today may become “obsolete” tomorrow. Without a historical perspective on science, AI risks becoming rigid and incapable of adapting to new discoveries.

If we hope for AI to be more than a passive tool, if we want it to discover new theories, propose new hypotheses, and push the frontiers of science, then it must study how scientists construct and revise theories. Scientific creativity often stems from the recombination of old ideas. Quantum mechanics did not emerge overnight; it was built upon breakthroughs such as Planck’s work on black-body radiation, Einstein’s explanation of the photoelectric effect, and Schrödinger’s wave equation. Had scientists simply adhered to classical mechanics without exploring alternatives, quantum mechanics would never have been developed. Likewise, if AI only learns the final formalism of quantum mechanics without understanding its historical development, it may struggle to propose innovative improvements.

Another major risk of teaching AI only the “final correct answers” is over-reliance on contemporary scientific consensus, which is inherently subject to change. Today’s Standard Model is the best framework we have for explaining fundamental particles and their interactions, yet it cannot account for dark matter or dark energy and does not incorporate gravity at all. If AI learns only the Standard Model and disregards the history of physics, it may reject out of hand alternatives such as string theory or loop quantum gravity. Understanding how science has evolved, from Aristotle to Newton, from Newton to Einstein, and from classical mechanics to quantum mechanics, could therefore be crucial for helping AI reason and innovate in scientific domains that have yet to be explored.
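Continuing the hypothetical schema from the earlier sketch, one way to avoid encoding today’s consensus as final is to store a theory’s known gaps and rival frameworks alongside its accepted content. Again, the field names below are illustrative assumptions, not an existing format:

```python
# A hypothetical corpus entry that records current consensus together with
# its known gaps, so a model never sees the Standard Model as a closed book.
standard_model = {
    "name": "Standard Model of particle physics",
    "status": "accepted but known to be incomplete",
    "explains": ["electromagnetic force", "weak force", "strong force"],
    "open_problems": ["dark matter", "dark energy", "quantum gravity",
                      "neutrino masses"],
    "candidate_successors": ["string theory", "loop quantum gravity"],
}

# Downstream, a trainer could surface entries with open problems for
# exploration instead of treating them as settled ground truth.
if standard_model["open_problems"]:
    print(f"{standard_model['name']} is not final; open questions include:")
    for q in standard_model["open_problems"]:
        print(f"  - {q}")
```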

True intelligence is not just about mastering scientific laws and applying known formulas—it is about understanding how knowledge evolves, gets revised, and is sometimes completely overturned. Human wisdom comes from continuously challenging existing theories, learning from mistakes, and pushing the boundaries of knowledge. If AI only learns the final versions of theories without understanding their historical background, it risks becoming a system that merely preserves current understanding rather than advancing science. To train a truly superintelligent AI, we must ensure that, like humans, it learns not only the correct answers but also how those answers were discovered and how they might be redefined in the future.