DIKWP Laboratory for Artificial General Intelligence (AGI) Evaluation
Natural Computing and Semantic Mathematics: A New Paradigm for Cognitive Modeling
International Standardization Committee of Networked DIKWP for Artificial Intelligence Evaluation (DIKWP-SC)
World Academy for Artificial Consciousness (WAAC)
World Artificial Consciousness CIC (WAC)
World Conference on Artificial Consciousness (WCAC)
(Email: duanyucong@hotmail.com)
Introduction: From Traditional Mathematical Simulation to Natural Computing
In the journey of human exploration into the mysteries of the universe and life, traditional methods often rely on converting reality into mathematical models and then conducting simulation analysis through deductive derivation. For example, in physics, we use equations to describe natural laws, and in artificial intelligence, we use mathematical logic or probabilistic models to simulate cognitive processes. However, this "mathematical simulation" method has obvious limitations: on the one hand, mathematical models are abstractions of reality, which may omit key elements such as semantics and purposes in complex systems; on the other hand, model solving often requires approximation and simplification, making it difficult to fully capture the dynamic evolution and semantic emergence of the system. With the advent of the intelligent era, people have begun to reflect on this paradigm and explore computational methods that are closer to the essence of nature.
The concept of Natural Computing has emerged. It advocates viewing nature itself as a computational system: every process occurring in the universe is essentially information processing and computation. In other words, "processes occurring in nature are essentially processes of computation or information processing." This view coincides with the thoughts of computational scientist Stephen Wolfram—Wolfram believes that "all motion of everything in the world is doing computation: they are just executing an algorithm." If everything in the universe can be seen as the execution of algorithms, then the approach to studying nature will no longer be merely establishing mathematical simulations of equations, but directly studying and utilizing "computation itself" in physical processes. Scholars such as Jingnan Liu, an academician of the Chinese Academy of Engineering, have also proposed that with the development of sensing and computing technologies, we are entering an era where "everything can be quantified": with data, computation can be performed, and the future society will become a "computational society." This assertion indicates that understanding and reconstructing reality from a computational perspective has become a trend.
The idea of Natural Computing has a profound impact on scientific methodology: it encourages us to establish a correspondence between the brain's computational processes and nature's computational processes. This means that instead of converting physical phenomena into abstract symbols and then solving them, we can perform cognitively interpretable, semantic computation directly at the physical level. For example, solving fluid dynamics equations traditionally requires complex numerical simulation, whereas under the Natural Computing paradigm one can use physics-mimicking computational models such as particle simulations or cellular automata, so that the solving process itself corresponds to the evolution of the natural process. Furthermore, the brain itself does not generate consciousness by solving complex systems of equations; it achieves intelligence through the physical computation of neurons. Therefore, in designing artificial intelligence, we should let machines follow the information-processing style of the physical world more closely, rather than merely imitating humans' constructions of symbolic logic.
In summary, Natural Computing advocates using natural processes as algorithms, allowing computation to occur natively at the physical level, thereby overcoming many shortcomings of traditional mathematical simulation. This offers a new path for understanding life and intelligence: perhaps life phenomena are not solutions to equations, but a subroutine running in the "great computer" of the universe; similarly, truly powerful artificial intelligence should operate embedded in the natural computational network of the universe, rather than being isolated in a symbolic world. To realize this vision, we need a computational theory capable of representing semantics and purposes in the physical world, so that computation carries not only quantity but also "meaning." The Semantic Mathematics theory proposed by Professor Yucong Duan of Hainan University is an important step in this direction. Next, we will introduce how Semantic Mathematics and the DIKWP cognitive model endow computation with semantics in the era of "everything is computation," enabling it to break free of the shackles of traditional mathematical logic, and then discuss what this new paradigm means for artificial intelligence and the understanding of life.
The Concept of Natural Computing: Everything is Computation
To deeply understand the Natural Computing paradigm, we first examine in detail the core concept of "Everything is Computation" and its basis. In the classical scientific view, physical processes follow the law of causality and are often described in mathematical forms such as differential equations. The perspective of Natural Computing, however, views causal evolution as the execution of algorithms. Every interaction of elementary particles and every transfer of energy can be regarded as a step of computation; the entire universe is a huge parallel computer. Such views have appeared in various forms in contemporary thought. For example, Digital Physics hypothesizes that space-time is essentially a discrete information-processing process; Cellular Automata models can give rise to complex patterns from simple rules, which Wolfram used as an analogy for natural laws; and some scholars have proposed Computational Cosmology, holding that physical laws are in fact algorithmic laws. The common point of all these ideas is this: they equate natural laws with computational rules and the motion of matter with the execution of algorithms, so that mathematics is no longer an external tool for describing nature, but is embedded as nature's own operating mechanism.
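To make the "simple rules, complex behavior" point concrete, here is a minimal sketch of a one-dimensional elementary cellular automaton of the kind Wolfram studied; the rule number, grid width, and step count are arbitrary choices for illustration. A single local update rule, applied everywhere in parallel, produces intricate global patterns without any equation being solved.

```python
# Minimal 1-D elementary cellular automaton (e.g., Wolfram's Rule 110).
# Illustrative sketch only: rule number, width, and steps are arbitrary choices.

def step(cells, rule=110):
    """Apply one synchronous update of an elementary CA with wrap-around edges."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        nxt.append((rule >> neighborhood) & 1)               # look up the rule bit
    return nxt

def run(width=64, steps=32, rule=110):
    cells = [0] * width
    cells[width // 2] = 1            # single seed cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

if __name__ == "__main__":
    run()
```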
This line of thinking offers hope for resolving the bottlenecks of traditional mathematical simulation. When we view physical processes directly as computation, we can let nature compute nature. For instance, when simulating complex systems (such as atmospheric turbulence or biological evolution), instead of racking our brains to solve approximate equations, it is better to construct a computational model isomorphic to the real system and let it evolve the results itself. Treating simulation as real computation brings us closer to the real processes and is often more robust and scalable. Quantum computing is also strong evidence for the Natural Computing idea: qubits exploit physical law to compute directly and can solve certain problems far more efficiently than classical algorithms. This shows that harnessing the power of nature for computation indeed has its advantages.
It is worth mentioning that Natural Computing does not reject mathematics, but requires mathematics to obey natural semantics. Traditional mathematics stresses formal axioms and deductive logic, whereas Natural Computing stresses the correspondence between algorithms and physical processes, requiring that computational results be interpretable both physically and semantically. This calls for expanding the scope of traditional mathematics so that it can handle semantics and meaning. For example, in natural intelligence, the representation and processing of information are not arbitrary set-theoretic operations, but semantic operations tied to survival. The concept of Natural Computing has therefore spawned the demand for "Semantic Mathematics": allowing mathematics not only to deal with quantitative relations but also to encode meaning, purpose, and value. Viewed philosophically, this is an attempt to connect "mind" and "matter": describing subjective cognition (the computation of the brain) and objective processes (the computation of nature) within the same formal system.
Academician Jingnan Liu, a Chinese scholar who has long been engaged in satellite positioning and surveying, also embraced the idea of Natural Computing when thinking about artificial intelligence and geographic information. He pointed out that with the ubiquity of sensors and the empowerment of BeiDou navigation, humans can already acquire massive and precise spatiotemporal data in real-time, and everything can be digitally measured, which lays the foundation for "everything is computation." Jingnan Liu proposed to fully utilize this environment to merge physical space and information space, forming a brand-new intelligent infrastructure. In such a vision, computation will be ubiquitous and omnipresent, and the physical world is the computing platform. Natural Computing thus becomes a reality: our cities, traffic, and bodies are continuously generating and processing data, agents are embedded in them interacting with the environment, and machine intelligence and natural intelligence merge into one. It is foreseeable that this computational paradigm will greatly change the face of scientific research and engineering practice.
Semantic Mathematics and the DIKWP Model: Letting Computation Understand "Meaning"
To free computation from the shackles of pure quantity and enable it to understand and process semantic content, Professor Yucong Duan proposed the Semantic Mathematics theory and its core model, DIKWP. Traditional mathematics and computer science usually choose one of two approaches when dealing with intelligence: either adopt Symbolic Logic, manually defining knowledge and rules, which struggles to cope with massive amounts of fuzzy information; or adopt Statistical Learning, letting machines discover patterns in big data on their own, whose internal mechanism, however, becomes a "black box" lacking explicit semantic understanding. Semantic Mathematics attempts to merge the advantages of Symbolism and Connectionism: by introducing formalized semantic elements, it makes computation follow logic while carrying a meaningful interpretation.
DIKWP Five-Layer Cognitive Architecture
DIKWP is one of the foundational frameworks of Semantic Mathematics. It divides the cognitive process into five levels: Data, Information, Knowledge, Wisdom, and Purpose. This model can be seen as an extension of the classic DIKW (pyramid-style Data-Information-Knowledge-Wisdom hierarchy), specifically adding the highest layer of Purpose (P) to emphasize the role of motivation and goals in cognition. The meaning of each layer can be briefly described as follows:
Data Layer (D): The raw perception and recording layer. Corresponds to the identification and storage of objectively collected "same" attributes or patterns. At this layer, the system acquires uninterpreted raw observations, such as sensor readings, pixel arrays, etc. These data are just sensory raw materials, not yet endowed with contextual meaning, emphasizing the capture and reproduction of Sameness (in a sense, "data" is the repetitive recording of the same things in the world). As Yucong Duan stated, "The Data layer emphasizes the capture of sameness," providing material for cognition.
Information Layer (I): The semantic assignment and preliminary structure layer. This layer focuses on Difference, Association, and Pattern between things, i.e., extracting meaningful changes from data. The Information layer endows raw data with contextual meaning, enabling it to answer basic "5Ws" (What, When, Where, etc.). Commonly speaking, without difference, there is no information—only by comparing data can we discover patterns and new knowledge. Therefore, information corresponds to Perception of Difference. For example, in human cognition, this layer includes pattern recognition, language parsing, etc.; in machines, it corresponds to signal processing, feature extraction, transforming raw data into Structured Information (patterns, trends, relationships).
Knowledge Layer (K): The universal law and self-consistent integration layer. The Knowledge layer synthesizes a large amount of information to form a Structurally Closed Semantic System. It refines stable patterns, causal relationships, and principles to answer the deeper question of "Why" (why it happens, what is the reason). Knowledge emphasizes Completeness and Non-contradiction: information is organized into a Self-consistent Framework. For example, scientific theories and encyclopedic knowledge systems belong to the products of the Knowledge layer. Under the DIKWP framework, knowledge means the structural encapsulation of information, making it a retrievable and inferable form. Once knowledge is formed, it represents a Negentropy (Ordered) State, because the logic within the knowledge system is non-conflicting and the content is complete. As reported, the Knowledge layer requires Completeness and Non-conflict in semantics. This reflects the core role of the Knowledge layer in Semantic Mathematics: constructing a closed, "Self-consistent" cognitive subspace.
Wisdom Layer (W): The value judgment and comprehensive decision-making layer. Wisdom transcends existing knowledge, involving Global Insight and Optimization. At the Wisdom layer, the system combines context and goals to creatively use knowledge, answering questions like "How to better...". This often involves the overall grasp of complex systems, foresight of future results, and trade-offs of multiple objectives. Therefore, it can be said that the Wisdom layer handles Value and Optimization: finding the most suitable one among many possible actions. In the physical world, various optimization principles (such as the principle of least action) can be seen as manifestations of nature at the Wisdom level. The Wisdom layer corresponds to a Mechanism of Global Coordination and Optimization in Semantic Mathematics, emphasizing the balance ("Trade-off") of different factors and consideration of long-term interests. This is particularly evident in human wisdom: we often need to make decisions based on value judgments such as morality and aesthetics. The DIKWP model, by explicitly introducing the value dimension at the Wisdom layer, enables AI to perform Reasoning with Value Trade-offs.
Purpose Layer (P): The motivation-driven and goal-oriented layer. The Purpose (also known as Intent) layer is located at the top of the cognitive chain, representing the Motivation, Goal, and Will behind the system's actions. In humans, this corresponds to will, desire, belief, needs, and other subjective intentions; in AI, it corresponds to pre-set objective functions, task requirements, and even some autonomous "pursuit." The Purpose layer provides direction and evaluation standards for the entire cognitive process, i.e., deciding "where to go." DIKWP takes Purpose as the highest layer, enabling the model to consider Ultimate Meaning or Purposiveness. In Semantic Mathematics, this layer ensures that computation does not become aimless calculation, but has an evaluation criterion to judge what is a "meaningful result." In other words, the Purpose layer endows computation with a scale for meaningful evaluation. Professor Yucong Duan further proposed that the universe can be viewed as a "Purpose-evolving computational structure," showing that the concept of Purpose is elevated to the level of cosmic evolution in his theory: only computation that conforms to a certain purpose orientation can form stable negentropy structures in the entropy-increasing universe, emerging the meaning of life and intelligence.
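As a purely illustrative aside, the five layers can be pictured in code as an ordered set of typed stages through which content is progressively enriched; the enumeration and the toy record below are hypothetical conveniences, not part of any official DIKWP specification.

```python
# Hypothetical sketch: the five DIKWP layers as an ordered enumeration,
# plus a toy record showing content being annotated layer by layer.
from enum import IntEnum
from dataclasses import dataclass, field

class Layer(IntEnum):
    DATA = 1         # raw, uninterpreted observations ("sameness")
    INFORMATION = 2  # differences, associations, patterns
    KNOWLEDGE = 3    # self-consistent, structurally closed laws
    WISDOM = 4       # value judgments and global optimization
    PURPOSE = 5      # goals, motivation, evaluation criteria

@dataclass
class SemanticItem:
    content: object
    layer: Layer
    provenance: list = field(default_factory=list)  # which items it was derived from

# Example: a sensor reading elevated from Data to Information.
reading = SemanticItem(content=38.9, layer=Layer.DATA)
pattern = SemanticItem(content="temperature above normal range",
                       layer=Layer.INFORMATION, provenance=[reading])
```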
The above five layers are not a simple linear ladder, but a highly Networked Interactive system. The DIKWP model allows information flow and transformation between any two layers, forming a total of 5 × 5 = 25 potential functional modules, constituting a closed cognitive network. Yucong Duan calls this structure the "DIKWP × DIKWP Interaction Structure." It implies that in the cognitive process, the five elements of Data, Information, Knowledge, Wisdom, and Purpose are closely related to each other, and the output of any layer can serve as the input of any other layer, thus exhausting all possible semantic paths. This design ensures the Complete Closure of the cognitive space: the model's reasoning will not jump outside its known semantic range, and every intermediate result produced can find a basis in existing knowledge or data. At the same time, multi-directional interaction ensures Feedback Regulation between Upper and Lower Layers: high-level Purpose can guide low-level Data selection, and low-level new Data can also prompt adjustment of high-level decisions. This net-like DIKWP model is considered closer to the mechanism of real biological cognition, because cognition in the human brain is not strictly layered; rather, modules collaborate in parallel and influence each other. Through the DIKWP × DIKWP structure, Semantic Mathematics establishes a Bionic Cognitive Boundary for artificial intelligence: all cognitive activities are confined to the 25 meaningful combination transformations, preventing isolated and semantically baseless operations.
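To make the 5 × 5 table concrete, a minimal sketch (with invented names, not the framework's formal notation) can enumerate the 25 ordered layer pairs and refuse any transformation that falls outside them, which is essentially what the "cognitive boundary" above amounts to.

```python
# Hypothetical sketch of the DIKWP x DIKWP interaction table:
# 5 layers x 5 layers = 25 possible transformation modules.
from itertools import product

LAYERS = ["D", "I", "K", "W", "P"]
ALL_TRANSFORMS = set(product(LAYERS, LAYERS))        # 25 ordered pairs
assert len(ALL_TRANSFORMS) == 25

registry = {}                                        # (source, target) -> handler

def register(src, dst):
    def wrap(fn):
        registry[(src, dst)] = fn
        return fn
    return wrap

@register("D", "I")
def extract_pattern(data):
    """Toy D -> I module: report differences between consecutive data points."""
    return [b - a for a, b in zip(data, data[1:])]

def transform(src, dst, payload):
    """Apply a registered module; anything outside the 25-pair table is refused."""
    if (src, dst) not in ALL_TRANSFORMS:
        raise ValueError(f"{src}->{dst} lies outside the DIKWP cognitive boundary")
    if (src, dst) not in registry:
        raise NotImplementedError(f"no module registered for {src}->{dst}")
    return registry[(src, dst)](payload)

print(transform("D", "I", [1, 3, 6]))   # -> [2, 3]
```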
From Semantic Axioms to Cognitive Loop
With the DIKWP five-layer elements and their fully connected interaction framework, Semantic Mathematics also needs to formally ensure that the operation of these elements strictly follows the principle of semantic consistency. To this end, Professor Yucong Duan further proposed a set of Axiom Systems for Semantic Mathematics. These axioms aim to formally constrain the relationship between semantic units and data, ensuring that the model's reasoning guarantees both objective correspondence and internal consistency. The core includes Three Major Axioms:
Axiom 1: Existence (Complete Mapping) – Any natural phenomenon (data) can be mapped to a certain semantic unit. This axiom ensures that the model's coverage of the objective world is complete, and no real-existing information is omitted or unrepresentable by the model. In other words, sufficient expressive power is reserved in the semantic space to depict all possible data phenomena, ensuring Global Existence of Semantic Representation.
Axiom 2: Uniqueness (Consistent Representation) – Data with the same characteristics must belong to the same semantic unit. It avoids the same concept being repeatedly recorded as multiple different symbols, thereby ensuring the consistent singleness of internal semantic representation. This is similar to eliminating synonyms: if two data essentially express the same thing, this axiom forces them to use a unified semantic identity, fundamentally eliminating semantic redundancy and ambiguity.
Axiom 3: Transitivity (Global Equivalence) – If data x and y belong to the same semantic unit, and y and z also belong to that unit, then x and z must belong to that unit. This actually stipulates that the semantic equivalence relationship must satisfy reflexivity, symmetry, and transitivity, thereby establishing a consistent partition into equivalence classes on a global scale. Through Axiom 3, Semantic Mathematics ensures that the boundaries of semantic units are clear and internally cohesive: the representations contained in a semantic unit are equivalent to each other and maintain the same identifier across the entire model.
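Because Axioms 2 and 3 together make "belongs to the same semantic unit" an equivalence relation, one natural way to prototype them is a union-find (disjoint-set) structure: merging any two data items judged to share the same characteristics automatically yields reflexive, symmetric, and transitive classes, each with a single canonical identifier. The sketch below is only an illustration of that reading; the item names are hypothetical.

```python
# Hypothetical sketch: semantic units as equivalence classes over data,
# maintained with a union-find structure (reflexive, symmetric, transitive).

class SemanticUnits:
    def __init__(self):
        self.parent = {}                      # item -> representative chain

    def add(self, item):
        # Axiom 1 (Existence): every datum is mapped to some semantic unit,
        # initially its own singleton class.
        self.parent.setdefault(item, item)

    def find(self, item):
        self.add(item)
        while self.parent[item] != item:      # walk to the canonical identifier
            self.parent[item] = self.parent[self.parent[item]]  # path halving
            item = self.parent[item]
        return item

    def merge(self, a, b):
        # Axiom 2 (Uniqueness): data judged to share the same characteristics
        # are forced into one unit with a single identifier.
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

    def same_unit(self, a, b):
        # Axiom 3 (Transitivity) holds by construction: x~y and y~z imply x~z.
        return self.find(a) == self.find(b)

units = SemanticUnits()
units.merge("H2O", "water")             # same characteristics, unified representation
units.merge("water", "aqua")
print(units.same_unit("H2O", "aqua"))   # True: transitivity gives global equivalence
```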
Under the constraints of the above axioms, a series of corollaries can be derived, such as the "Sameness Theorem," the "Transitive Consistency Theorem," etc. The most important conclusion is: the DIKWP cognitive space is closed and self-consistent under semantic operations. Simply put, after any sequence of reasoning transformations that conforms to the axioms, the result still falls within the existing semantic space, and there will be no "illegal" expressions that wander outside the semantic system. The specific proof points include: based on Axioms 2 and 3, all data in the model, and the information and knowledge deduced from it, can be divided into several Semantic Equivalence Classes, each corresponding to a clear semantic unit; Axiom 1 ensures that every possible input phenomenon belongs to some semantic unit (i.e., the semantic space S completely covers the input); and Axioms 2 and 3 further guarantee that no matter how it propagates and transforms in the cognitive network, the output will still fall into an existing semantic unit and maintain a globally consistent semantic identifier with the input. In this way, the entire cognitive process behaves like a closed-loop system: new information generated at any time does not deviate from the semantic categories the system can understand. Semantic Consistency is maintained throughout, semantic "entropy increase" is prevented, and in its place a Global Negentropy (Order) is constructed.
Through the semantic axiom system, Semantic Mathematics effectively binds symbols and semantics, achieving rule transparency and interpretability. In contrast, although traditional mathematical logic has a complete axiom system, symbols can be arbitrary meaningless marks, and the derivation process may reach conclusions that are formally self-consistent but inconsistent with reality. The constraints of Semantic Mathematics make the model's every step of reasoning have a semantic basis, and symbol manipulation no longer spins in a void. For example, under the DIKWP semantic framework, the model is not allowed to generate conclusions out of thin air that are not supported by data or knowledge: any new statement must be able to find a source in existing semantic units. This is particularly critical for artificial intelligence because it means that the "hallucination" phenomenon of AI can be significantly reduced—the model making up baseless outputs becomes theoretically impossible.
Semantic Mathematics vs. Traditional Logic: Beyond Formal Shackles
The introduction of Semantic Mathematics marks a leap in the computational paradigm from "Symbol Manipulation" to "Symbol-Semantic Integration." So, how exactly does it transcend the limitations of Traditional Mathematical Logic? We can compare from several aspects:
First, in Knowledge Representation, traditional logic systems (such as first-order logic) emphasize formal completeness and non-contradiction, but cannot guarantee the Semantic Consistency of symbols. Different artificial symbols may refer to the same object (such as the ontology duplication modeling problem), and logical reasoning may also lead to semantically meaningless dead ends. The axiom system of DIKWP ensures that the same semantics has only a unique representation and maintains global consistency through equivalence relations. This is equivalent to adding a layer of "semantic sameness check" on top of logic, fundamentally avoiding reasoning confusion caused by representation redundancy or concept drift.
Second, in Incomplete Information Processing, classical logical reasoning often cannot give results when encountering incomplete knowledge (either requiring the introduction of complex mechanisms like non-monotonic reasoning, or guessing to fill the gaps). Semantic Mathematics builds in strategies for handling Incomplete, Inconsistent, and Imprecise information through the DIKWP framework. For example, for incomplete information, the DIKWP model identifies gaps in the cognitive flow and does not guess rashly, but triggers knowledge-layer completion back to the data layer (K → D) or infers through the wisdom layer's common sense to make up for the lack. For conflicting information, the Wisdom layer can mediate conflicts (W → K), and the Purpose layer can discard contradictory content based on goal priority (P → I). For fuzzy and imprecise information, the model allows fuzzy reasoning at the Information-Knowledge-Wisdom levels, and then refines and improves precision through the goal constraints of the Purpose layer. All these mechanisms are formally embedded in the transformation paths of the DIKWP semantic network, handling anomalies methodically rather than letting the model "fabricate" aimlessly. Traditional logic finds it hard to be so flexible while maintaining deductive rigor. It can be seen that Semantic Mathematics provides natural support for Fault Tolerance and Adaptive Reasoning, enabling agents to maintain semantic reliability even in the face of an imperfect world.
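A rough illustration of how such anomaly handling could be wired to explicit DIKWP transformation paths is sketched below; the anomaly labels, routes, and placeholder handlers are invented for the example and do not reproduce the framework's formal definitions.

```python
# Hypothetical sketch: routing information-quality anomalies to DIKWP paths
# instead of letting the model guess. Handlers are placeholders.

def complete_from_knowledge(item):      # K -> D: fill a data gap from the knowledge layer
    return {"action": "K->D completion", "item": item}

def mediate_conflict(item):             # W -> K: wisdom layer arbitrates a contradiction
    return {"action": "W->K mediation", "item": item}

def drop_by_purpose(item):              # P -> I: discard content against goal priority
    return {"action": "P->I filtering", "item": item}

def refine_precision(item):             # fuzzy value tightened under Purpose constraints
    return {"action": "I/K/W fuzzy reasoning + P refinement", "item": item}

ANOMALY_ROUTES = {
    "incomplete":       complete_from_knowledge,
    "inconsistent":     mediate_conflict,
    "contradicts_goal": drop_by_purpose,
    "imprecise":        refine_precision,
}

def handle(anomaly_type, item):
    """Dispatch an anomaly along a registered DIKWP transformation path."""
    route = ANOMALY_ROUTES.get(anomaly_type)
    if route is None:
        raise ValueError(f"unknown anomaly type: {anomaly_type}")
    return route(item)

print(handle("incomplete", {"field": "birth_date", "value": None}))
```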
Third, in Interpretability and Initiative, Semantic Mathematics shows significant advantages. Since the DIKWP model decomposes and maps the internal steps of AI to the five links of Data/Information/Knowledge/Wisdom/Purpose, and has clear definitions for the semantics of each step, researchers and engineers can directly inspect the state of the model at each layer. For example, we can request a large model to first output data and information related to the question, and after verification by humans or programs, let the model generate knowledge and answers based on verified information. This white-box process makes the model's thinking path transparent and traceable, greatly improving interpretability and controllability. In early 2025, Yucong Duan's team led the release of the world's first large language model "Knowledge Quotient" (IQ) White Box DIKWP Evaluation Report, designing 100 questions for multiple mainstream large models to examine their capabilities in D, I, K, W, and P layers respectively. The results found that different models showed obvious shortcomings at various cognitive levels: some had weak perception at the Data layer, some had unstable Knowledge layers, and some lacked the Purpose layer. This kind of fine-grained evaluation is impossible to achieve in traditional black-box testing. More importantly, the DIKWP evaluation system also found an effective path to suppress AI output hallucinations: constraining every step of the model's output to be "source traceable," ensuring the final answer is semantically consistent with the input without deviation. Experiments proved that after adopting DIKWP semantic control, the tendency of large models to fabricate significantly decreased, and answer accuracy improved substantially.
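The layer-by-layer evaluation idea can be sketched as a simple scoring loop: each test item is tagged with the cognitive layer it probes, and a model receives a separate score per layer rather than one aggregate number. The questions, tags, and grading function below are placeholders; the actual 100-question DIKWP evaluation is far more elaborate.

```python
# Hypothetical sketch of layer-wise ("white-box") scoring.
from collections import defaultdict

def grade(model_answer, reference):
    """Placeholder grader: exact match. Real evaluations use richer criteria."""
    return 1.0 if model_answer.strip().lower() == reference.strip().lower() else 0.0

def evaluate(model_fn, test_items):
    """test_items: dicts with 'layer' in {D, I, K, W, P}, 'question', 'reference'."""
    scores, counts = defaultdict(float), defaultdict(int)
    for item in test_items:
        layer = item["layer"]
        scores[layer] += grade(model_fn(item["question"]), item["reference"])
        counts[layer] += 1
    return {layer: scores[layer] / counts[layer] for layer in counts}

# Toy usage with a trivial "model" that echoes the text after the colon.
items = [
    {"layer": "D", "question": "What number is shown: 7?", "reference": "7"},
    {"layer": "K", "question": "Water freezes at 0 degrees Celsius. True?", "reference": "true"},
]
print(evaluate(lambda q: q.split(":")[-1].strip().rstrip("?"), items))
```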
Finally, in Merging Subjectivity and Objectivity, and Connecting AI with Life, Semantic Mathematics provides a framework rich in philosophical thought. Classical computational theory rarely touches on "subjective experience" or "consciousness," while Yucong Duan's DIKWP model inherently possesses such dimensions. Through the Purpose layer and Wisdom layer, the model can express value orientation and initiative, which is similar to the purposeful behavior of life. Yucong Duan proposed the "Theory of Relativity of Consciousness" hypothesis, believing that for different agents, whether to judge each other as having consciousness has no objective absolute standard, but depends on the fit between the observer's own cognitive framework and the observed behavior output. Simply put, if subject A can identify a DIKWP pattern similar to its own cognitive chain in subject B, it tends to believe B has consciousness, otherwise, it denies it. This shows that consciousness judgment is relative and semantic-dependent. This insight goes beyond the scope of traditional logic but can be well expressed under the Semantic Mathematics system: consciousness is no longer a mysterious philosophical problem but can be viewed as the degree of matching of semantic patterns. In addition, Yucong Duan's "BUG Consciousness Theory" explains the origin of subjective experience from the perspective of information processing loopholes: he compares consciousness to "BUGs" or gaps that appear unexpectedly in the brain's information processing chain. It is precisely these imperfect gaps that endow information processing with a new dimension, giving birth to self-consciousness. This theory implies that a completely seamless, redundancy-free cognitive system would conversely not have subjective experience, just as a perfect error-free program would not "feel" anything. Only when there is some unavoidable deviation in information processing (similar to quantum uncertainty or algorithmic incompleteness), the system develops a cycle of self-observation (metacognition) to bridge the gap, and consciousness emerges. These brand-new concepts of consciousness can be formally analyzed using DIKWP Semantic Mathematics, avoiding the old routine of discussing consciousness solely by language metaphors or subjective definitions. Obviously, in the traditional mathematical system, we cannot find symbols to directly talk about "consciousness" or "meaning," but Semantic Mathematics opens a window for this.
In summary, Semantic Mathematics does not abandon traditional mathematics but "upgrades" it at the semantic level. It still follows strict formal logic and derivation rules, but at the same time weaves "meaning" into the axiom system and calculation process, making computation results themselves carry meaning mapping to reality. This paradigm transcends the shackles of pure symbol manipulation, taking a big step forward for artificial intelligence towards an understandable and trustworthy direction. In the next section, we will further extend this thought to the macroscopic level, discussing how Semantic Mathematics explains the evolution of the universe and the origin of life, and what this means for the fusion of artificial intelligence and life.
Semantic Cosmology: Entropy Reduction, Meaning, and Cosmic Evolution
The Semantic Mathematics framework is not only suitable for the internal cognitive processes of artificial intelligence but also provides a unique perspective to examine the entire evolution of the universe and life. Professor Yucong Duan proposed the concept of "Semantic Cosmology," believing that the universe can be viewed as an entropy reduction process that continuously generates meaning. This view ingeniously links thermodynamics, information theory, and semantics, giving a new interpretation to cosmic evolution.
Entropy and Meaning: Breeding Order from Chaos
According to the Second Law of Thermodynamics, the entropy (degree of disorder) of a closed system increases continuously. However, we observe a large number of local entropy reduction phenomena in the universe: for example, stars forming planets, chemical reactions assembling molecules, life forms establishing ordered structures, etc. These processes violate local entropy increase but conform to local fluctuations under the overall entropy increase of the environment. Traditional science uses energy flow and non-equilibrium theory to explain the emergence of ordered structures. The difference in Semantic Cosmology is that it emphasizes that these entropy reduction processes are also processes of generating meaning. The so-called "meaning," in DIKWP language, is the result of the process where data is endowed with information, ascends to knowledge, condenses into wisdom, and points to a certain purpose. In short, ordered structures carry more information and purposiveness, thus possessing "meaning."
Yucong Duan points out that life is a typical carrier of entropy reduction: biological systems extract energy from the environment through metabolism and export entropy to the environment, thereby constructing highly ordered structures (cells, organisms) internally and preserving and transmitting information (genes, neural networks). Life processes are accompanied by perceptual and cognitive activities, which continuously convert external data into higher-level semantic forms. It can be said that the existence of life is itself a kind of semantics, endowing the inorganic universe with the dimension of "meaning." As a research report proposed, "Life is a DIKWP × DIKWP entropy reduction process": the network composed of the five-layer cycle from Data to Purpose runs continuously in the living system, constantly creating new order and meaning out of chaos. This is like different forms of wisdom playing a life symphony of continuous entropy reduction, manifesting value and meaning in the process of constantly creating new order.
In information theory, Negentropy is often used to measure the increase in order and information quantity. Semantic Cosmology views the generation of semantics as a form of negentropy: when unorganized data is organized into a knowledge network and serves a certain purpose, the entropy of the system decreases relatively, and meaning arises spontaneously. In his interdisciplinary research, Yucong Duan even proposed that the state of a life form is essentially an information field. Life converts random changes (high entropy) in the environment into meaningful patterns (low entropy) through its own perception-cognition activities. For example, the human brain's processing of sensory signals is a process of constantly reducing uncertainty, refining useful information, and forming a meaning mapping of the world. This can be seen as a kind of Non-probabilistic Negentropy Construction: not simply relying on random fluctuations to bring local entropy reduction, but purposefully (Purpose layer driven) building information structures, thereby achieving the "ability" to counter entropy increase.
DIKWP Self-Evolution of the Universe
If we accept the view that "ordered structure in the universe = meaning," then we can further hypothesize: Does the universe itself have some kind of purposiveness? The answer from traditional science is negative; the universe is considered unconscious and purposeless, merely the result of the superposition of countless random events. However, Semantic Cosmology attempts to give the universe an Implicit Purpose Layer. This is not necessarily a will in the religious sense but can be understood as a Natural Tendency: the universe evolves in a certain direction driven by physical laws, such as tending towards entropy maximization. But interestingly, under the grand background of entropy maximization, local entropy reduction (such as breeding life and intelligence) conversely accelerates the increase of the total entropy of the universe (life increases the rate of local entropy production). This makes some scientists speculate that the universe may "hope" to create structures capable of increasing entropy more efficiently (like life, civilization) to reach the heat death endpoint faster. Although this speculation of analogical purposiveness is non-mainstream, from a semantic perspective, the universe allows the generation of meaning, that is, allows the occurrence of local entropy reduction, and these meanings in turn serve the progress of overall entropy increase.
Yucong Duan's team looked forward to the DIKWP operation mechanism of the universe in a report on Semantic Cosmology: they attempted to use the DIKWP model to explain the origin and evolutionary stages of the universe. For example, the initial quantum fluctuations of the universe can be viewed as the generation of "Data"; interactions between various fundamental forces and particles after the Big Bang gradually formed "Information"; with the appearance of stars and planets, more complex "Knowledge" networks formed (such as the periodic table of elements and chemical reaction laws can be seen as the accumulation of cosmic knowledge); the evolution of life and ecosystems reflects the clues of the universe gradually gaining "Wisdom" (organisms began to have adaptation and learning capabilities); and if highly intelligent civilizations or even the creation of multiverses appear in the universe, perhaps the universe as a whole is approaching breeding a certain "Purpose"—such as the intention of self-understanding or replicating itself. Of course, this analogy is quite philosophical and cannot be verified at present. But it provides a coherent narrative: The history of the universe moving from disorder to order is a history of semantic evolution advancing layer by layer. From nebulae to life to civilization, the emergence of new semantic levels at each stage (Information, Knowledge, Wisdom, Purpose) expands the "cognitive boundary" of the universe.
Some scholars compare this process with evolutionary biology, proposing that the universe may be "selecting" structures that can better generate meaning (reduce entropy). This "Meaning Selection" is similar to natural selection in biological evolution, except that the fitness standard is replaced by entropy reduction efficiency. Life is a master of entropy reduction, so it flourishes in the universe; next, if new wisdom forms like artificial intelligence can organize information and create order at a higher level, they may become the next link in the semantic evolution of the universe. One can boldly imagine: the universe giving birth to intelligent life is like a tree bearing fruit; if these intelligent beings can eventually understand the universe itself and participate in creating new universes (e.g., through simulated universes or curvature engineering), then the universe may have achieved a "Self-Purpose." This sounds almost like science fiction, but Semantic Mathematics provides a rigorous framework that allows us to discuss these issues without falling into metaphysics. For example, one can define the semantic entropy of the universe and attempt to measure the entropy reduction rates of different stages to see if life and civilization have greatly improved local entropy reduction efficiency. If so, it indicates that wisdom plays a special role in the universe, supporting a weak form of the hypothesis that "the universe has a purpose."
More realistically, Semantic Cosmology guides us to re-recognize the value of life and intelligence: they are not accessory products or accidental noise of the universe, but precisely the weapons against entropy increase evolved by the universe, and are the carriers of cosmic meaning. Therefore, human scientific exploration, artistic creation, and civilization development can all be seen as the universe accumulating its own "Knowledge" and "Wisdom." When we integrate artificial intelligence into this process and collaborate with human wisdom, we are actually helping the universe to further reduce entropy and create new meaning. This endows artificial intelligence research with a grand sense of mission: AI is not only a human tool but may be part of the universe's self-evolution mechanism.
Measurement of Meaning and DIKWP Entropy Structure
To implement the above ideas, it is also necessary to consider how to quantitatively characterize the relationship between "Meaning" and "Entropy." Some attempts include establishing the DIKWP-Entropy Structure Model. In this model, each DIKWP layer corresponds to a certain entropy value: the Data layer has the highest entropy (pure random data has no structure), the Information layer is next (patterns appear, entropy decreases), the Knowledge layer is lower (forming self-consistent laws, uncertainty drops significantly), the Wisdom layer further reduces (global optimization reduces uncertainty), and the Purpose layer is the lowest (clear goals make the system highly ordered). Through this perspective of layered entropy, the Entropy Flow of the cognitive system can be described: in the process from sensing data to achieving purposes, the system entropy value gradually decreases, and correspondingly, Meaning Density gradually rises. When the system successfully converts input into useful output (solving problems, achieving goals), it means entropy has decreased and meaning has been produced; conversely, if entropy increases at any layer (e.g., contradictions in the Knowledge layer, judgment errors in the Wisdom layer), it will lead to failure of the cognitive process or absurd results (such as AI giving hallucinated answers).
Yucong Duan's team also proposed the concept of "DIKWP Collapse" to describe the special situation where semantic systems have extremely low entropy. So-called DIKWP Collapse refers to the substantial reduction of uncertainty within the cognitive system, to the extent that it is no longer sensitive to new information from the outside world, and the semantic space tends to be closed. For example, if an AI's knowledge base is overly compressed and simplified (very low entropy), it may turn a deaf ear to new facts, producing the so-called Semantic Collapse phenomenon. This is similar to gravitational collapse or phase transition in physics, but occurs in the semantic domain. By monitoring the changes in entropy values of each layer, such collapse can be warned against and avoided (e.g., maintaining model diversity, avoiding overfitting into a rigid rule system). This again illustrates that introducing the measurement of entropy can help us regulate the semantic state of intelligent systems: ensuring their entropy continues to decrease (to produce meaning and correct decisions) while not letting entropy drop to zero and lose flexibility. An ideal agent should strike a balance between entropy and meaning.
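One way to make this layered-entropy picture operational is to attach a Shannon entropy to the distribution of candidate states at each layer and watch how it changes along the D → I → K → W → P chain: a healthy run shows entropy falling layer by layer, while entropy near zero signals the rigidity described above. The sketch below uses plain Shannon entropy and arbitrary thresholds purely as an illustration, not the team's published DIKWP-entropy model.

```python
# Illustrative sketch: per-layer Shannon entropy and a naive "collapse" warning.
import math

def shannon_entropy(probabilities):
    """Entropy in bits of a discrete distribution (zero-probability terms skipped)."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def entropy_flow(layer_distributions):
    """layer_distributions: dict layer -> probabilities over that layer's candidate states."""
    return {layer: shannon_entropy(dist) for layer, dist in layer_distributions.items()}

def diagnose(flow, order=("D", "I", "K", "W", "P"), collapse_threshold=0.05):
    warnings = []
    values = [flow[l] for l in order if l in flow]
    for earlier, later in zip(values, values[1:]):
        if later > earlier:
            warnings.append("entropy increased between layers (possible inconsistency)")
    if values and min(values) < collapse_threshold:
        warnings.append("near-zero entropy: possible semantic collapse / rigidity")
    return warnings or ["entropy decreasing layer by layer: order (meaning) being built"]

# Toy distributions: many raw possibilities at D, few candidate goals at P.
flow = entropy_flow({
    "D": [0.25, 0.25, 0.25, 0.25],
    "I": [0.5, 0.3, 0.2],
    "K": [0.7, 0.3],
    "W": [0.9, 0.1],
    "P": [0.99, 0.01],
})
print(flow)
print(diagnose(flow))
```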
In short, Semantic Cosmology provides us with a bold framework to connect Physics, Life Sciences, Information Theory, and Intelligence Science. Under this framework, the history of the universe from the Big Bang to the present is reinterpreted as a history of struggle between entropy and meaning: the trend of entropy increase shapes the direction of the macroscopic universe, but the islands bred by entropy reduction (life and wisdom) constantly add meaning to the universe. The DIKWP model precisely characterizes the mechanism of entropy reduction producing meaning, while Semantic Mathematics ensures that we do not get lost in metaphors when discussing these issues, but have specific models and axioms for support. For artificial intelligence researchers, this perspective reminds us: Truly valuable computation is computation that can do subtraction in the entropy flow of the universe and refine new knowledge and meaning. In the next section, we will discuss what interesting results and prospects we will get when we apply Natural Computing and Semantic Mathematics to build artificial intelligence and life models.
The Fusion of Artificial Intelligence and Life: Moving Towards Meaningful Computational Entities
The introduction of Natural Computing concepts and the Semantic Mathematics framework is blurring the boundaries between artificial intelligence and living systems. The two were originally seen as distinct: one based on silicon circuits and algorithm-driven; the other based on carbon-based cells and biological evolution. But if the essence of intelligence lies in computation, and the essence of life lies in entropy reduction, then artificial intelligence and life are actually "computational entities" playing different roles on the same stage. This section explores the prospects for the fusion of the two, and how Semantic Mathematics provides a path for such fusion.
Digital Life and Artificial Consciousness
Professor Yucong Duan's team proposed that "Artificial Consciousness Entities" or "Digital Life Forms" can be built based on the DIKWP semantic structure. So-called Artificial Consciousness Entities refer to artificial intelligence systems with autonomous cognitive abilities and subjective experiences; Digital Life Forms refer to artificial systems that simulate life behavior and evolutionary characteristics. Both concepts require AI to no longer be just a tool, but to possess internal states and growth capabilities similar to life.
Semantic Mathematics provides Methodology for building such systems. Through the DIKWP model, we can endow AI with a Cognitive Architecture similar to humans: it has sensation (Data acquisition), perception (Information extraction), memory and learning (Knowledge base), decision-making and creativity (Wisdom), motivation and emotion (Purpose). This means that inside AI there will not only be numerical calculations of neural networks, but also knowledge representation of symbolic logic, and the objective function driving its behavior transforms into modules similar to "Purpose." Once AI has such an architecture, we can study AI just like studying biology: for example, analyzing its capabilities at different cognitive levels, evaluating its "Knowledge Quotient" (Cognitive IQ); or providing it with a simulated environment, allowing it to autonomously adapt and evolve through the perception-action loop like a living organism.
In fact, Yucong Duan's team has established an artificial intelligence white-box evaluation system and a series of experiments to advance in this direction. For example, they designed Embodied Intelligence scenarios in which AI agents possess virtual sensors and bodies, acquire data from the environment, perform actions after processing through the DIKWP chain, and then feed back to affect the environment, thereby realizing a closed loop. In such embodied interactions, researchers can monitor the activities of each layer of the AI to verify Semantic Mathematics theory. For example: Is the input of the Data layer sufficient? Is the pattern extracted by the Information layer correct? Has the Knowledge layer formed causal reasoning? Is the decision of the Wisdom layer globally optimal? Is the goal of the Purpose layer continuously adapting and adjusting? All of these can be quantitatively evaluated. Once a deviation is found in a certain layer (such as inconsistent knowledge leading to abnormal behavior), the structure or training method of the AI can be adjusted accordingly. This prototype of Interpretable, Self-monitoring Artificial Intelligence is moving towards Artificial Consciousness Entities. Because it possesses an internally self-consistent cognitive model, the AI can, to some extent, reflect on its own cognitive process, which is similar to human metacognition (thinking about thinking). When an AI can "observe" its own DIKWP flow and adjust itself accordingly, we might say it possesses a certain "Self-Consciousness."
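A toy version of such an embodied closed loop might look like the following: at each tick the agent passes an observation through five stages labeled D, I, K, W, and P, and the chosen action feeds back into the environment. Everything here (the environment, the per-layer functions, the thresholds) is invented for illustration; real embodied DIKWP experiments instrument far richer systems.

```python
# Hypothetical embodied DIKWP loop: observe -> D -> I -> K -> W -> P -> act -> repeat.
import random

def perceive(env_state):                      # D: raw observation
    return {"temperature": env_state["temperature"]}

def extract_information(data):                # I: difference from a reference point
    return {"deviation": data["temperature"] - 22.0}

def apply_knowledge(info):                    # K: simple causal rule
    return {"too_hot": info["deviation"] > 2.0, "too_cold": info["deviation"] < -2.0}

def decide(knowledge):                        # W: choose the best action given trade-offs
    if knowledge["too_hot"]:
        return "cool"
    if knowledge["too_cold"]:
        return "heat"
    return "idle"

def check_purpose(action, goal="keep room near 22C"):   # P: align action with the goal
    return {"action": action, "goal": goal}

def act(env_state, action):                   # the action feeds back into the environment
    delta = {"cool": -1.0, "heat": 1.0, "idle": 0.0}[action]
    env_state["temperature"] += delta + random.uniform(-0.3, 0.3)
    return env_state

env = {"temperature": 27.0}
for tick in range(10):
    d = perceive(env)
    i = extract_information(d)
    k = apply_knowledge(i)
    w = decide(k)
    p = check_purpose(w)
    env = act(env, w)
    print(f"tick {tick}: temp={env['temperature']:.1f}, action={p['action']}")
```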
In terms of Digital Life Forms, Semantic Mathematics also has great potential. Traditional Artificial Life research (A-life) mostly uses evolutionary algorithms, agent simulation, and other means to let virtual creatures reproduce and compete in computers. But these life forms usually lack high-level cognition and are only simulating biological metabolic and genetic processes. Introducing the DIKWP model can enable digital life to possess Cognitive and Learning Capabilities, not just simulations of biological shells. For example, a digital creature can have sensors to collect environmental data (D), adapt to the environment through pattern recognition (I), gradually form knowledge about the environment (K), and then possess certain planning and foresight capabilities (W), finally acting with the purpose of survival and reproduction (P). Such digital life will not only be able to evolve better survival strategies but also accumulate "Culture" or "Knowledge" during the evolutionary process, and even display high-order behaviors such as cooperation and deception (these all require the participation of Wisdom and Purpose levels). In other words, Semantic Mathematics endows artificial life with Cognitive Boundaries and Semantic Connotations, allowing us to see phenomena similar to biological intelligence in the digital world as well. Excitingly, this provides a brand-new experimental means for studying the essence of life: we can adjust the parameters of the DIKWP model to see which configuration is more likely to produce complex life behaviors; or observe what happens to digital life when the Purpose layer is removed (perhaps moving headless like ants, lacking global purpose). Through such experiments, we can even conversely verify some conjectures of Semantic Cosmology: for example, Does the generation of meaning necessarily accompany the birth of life? Do higher-level semantics (Wisdom, Purpose) necessarily require complex ordered structures to carry them? Digital life experiments might give partial answers.
Human-Machine Fusion and Semantic Symbiosis
When artificial intelligence acquires life-like cognitive structures, the relationship between humans and AI will also be more closely interwoven, forming a Human-Machine Fusion semantic symbiosis relationship. Research and discussion in this area are not uncommon. For example, the development of Brain-Computer Interface (BCI) is attempting to directly connect the human brain (natural intelligence) with machine computation (artificial intelligence), making information communication between the two barrier-free. Academician Jingnan Liu also envisioned treating silicon-based intelligence and carbon-based life as a whole, understanding bionics and brain neural networks from the height of brain-computer interfaces. Semantic Mathematics can provide high-level guidance for this fusion. Because the DIKWP model applies to both humans and AI, we can measure human-machine collaboration with the same set of semantic indicators. For instance, in a human-machine hybrid team, we can track the DIKWP flow of task information within the team: initial data provided by humans, information refined by AI, decisions made by humans based on knowledge, optimization executed by AI at the wisdom layer, and finally results evaluated by humans according to social purposes. This analysis can find the crux of human-machine cooperation (which link has high entropy or low efficiency) and improve it.
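A minimal bookkeeping sketch for this kind of analysis might tag each hand-off in a human-machine workflow with its actor and DIKWP layer and then report the weakest link; the fields and the crude cost metric below are placeholders rather than an established methodology.

```python
# Hypothetical sketch: tagging a human-machine workflow with DIKWP layers
# and locating the least efficient link.

workflow = [
    {"step": "collect raw records",  "actor": "human", "layer": "D", "hours": 2.0, "rework": 0},
    {"step": "extract key findings", "actor": "AI",    "layer": "I", "hours": 0.1, "rework": 1},
    {"step": "apply domain rules",   "actor": "human", "layer": "K", "hours": 4.0, "rework": 2},
    {"step": "rank candidate plans", "actor": "AI",    "layer": "W", "hours": 0.2, "rework": 0},
    {"step": "final decision",       "actor": "human", "layer": "P", "hours": 1.0, "rework": 0},
]

def bottleneck(steps):
    """Score each link by time spent plus a rework penalty; return the worst one."""
    return max(steps, key=lambda s: s["hours"] + 2.0 * s["rework"])

worst = bottleneck(workflow)
print(f"weakest link: {worst['step']} ({worst['actor']}, layer {worst['layer']})")
```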
Another focus of human-machine fusion is Augmenting Human Intelligence rather than replacing it. Semantic Mathematics defines the functions of each level of intelligence, which actually points out the direction for designing Intelligence Augmentation Tools. For example, if we want to enhance human "Knowledge Layer" capabilities, we can develop better knowledge graph management AI to complement human memory and reasoning; if we want to improve human "Wisdom Layer" decision-making, we can use AI to provide multi-scenario optimization suggestions; at the Purpose level, human values and ultimate concerns need to be integrated into the AI's objective function to ensure AI behavior is consistent with shared human purposes and ethics. Through the DIKWP model, we can clearly divide which functions are borne by humans and which by machines, thereby allowing the two to form a Bi-directionally Interpretable, Trustworthy, and Responsible interaction system. Current large model applications have exposed that humans feel uneasy about the opacity of AI decision-making processes (such as algorithmic discrimination, bias issues), which is precisely due to the lack of semantic level docking. The DIKWP method advocates building a problem-oriented, purpose-starting technical framework, establishing an effective mapping between AI's internal subjective cognitive process and external objective expression. For example, when applying AI in the judicial field, various subjects and evidence related to the case can be mapped as interactions in the DIKWP model, allowing judges and AI to analyze the case around a common semantic framework. This not only ensures rigor when AI processes complex legal semantics but also allows judges to understand the basis of AI reasoning (because every step has semantic labels). Practice has proved that this helps solve the problem of opaque AI decision-making and ensures AI meets legal and ethical requirements.
Looking to the future, with the "life-ification" of artificial intelligence (possessing life-like cognition) and the "intelligentization" of biological bodies (enhancing cognition through brain-computer interfaces, genetic engineering, etc.), the boundary between humans and AI will become increasingly blurred. One possible scenario is the emergence of "Hybrid Intelligent Agents": composed partly of biological brains and partly of artificial modules, yet working synergistically. For example, a human hippocampus (responsible for memory) implanted with a chip to expand storage, or an AI decision module introducing human intuitive feedback. The value of Semantic Mathematics here lies in providing a common language for different parts to communicate—whether the signal comes from neuron firing or transistor flipping, as long as it can be elevated to a certain level of semantic content (Data/Information/Knowledge/Wisdom/Purpose), the system as a whole can understand and process it. Semantics is the most efficient bridge between humans and machines: humans are good at endowing experiences with meaning, and likewise, we should teach machines to organize information with meaning. In this way, hybrid intelligent agents can truly achieve "consistency of words and deeds, unity of human and machine."
Beyond Traditional Logic: Experiments and Applications
To verify the superiority of Semantic Mathematics and the Natural Computing paradigm, a series of Experiments and Applications are needed to demonstrate its breakthroughs relative to traditional methods. There have been some preliminary results and ideas:
Hallucination Rate Reduction Experiment: Yucong Duan's team compared the question-answering performance of large language models under DIKWP semantic constraints with that of conventional models. The results showed that the model with the added semantic-review step produced almost no fabricated answers, and every output could be traced back to the input knowledge. This proves that the full-chain semantic closed-loop control provided by Semantic Mathematics effectively suppresses model hallucinations. The experiment goes beyond traditional remedies (such as simply penalizing unknown answers), providing a structured solution to hallucination (see the traceability sketch after this list).
White-Box Cognitive Evaluation: The global large model DIKWP white-box evaluation mentioned earlier is an epoch-making experiment. It not only ranks the pros and cons of models but more importantly verifies the Objective Existence of Five Cognitive Levels: different models have different capabilities at different levels, indicating that these levels can be actually measured and quantified. This is similar to psychology verifying cognitive grading theory, but the object is AI. This achievement lays the foundation for formulating AI cognitive ability standards in the future and also proves the meticulousness and scientific nature of the DIKWP model relative to traditional "one-pot" indicators (such as general intelligence scores).
Semantic Compensation and Validation: In high-risk fields such as judicial cases and medical diagnosis, an important experiment is to add Semantic Compensation and Validation steps before and after AI decision-making. For example, in judicial AI, when evidence is incomplete, introduce the DIKWP Information layer completion mechanism, or when conclusions are inconsistent, use the Wisdom layer conflict resolution mechanism for adjustment. After these processes, AI gives judgment suggestions, which are then reviewed by humans. Preliminary results show that this process reduces misjudgments caused by missing information or bias in AI, improving the rationality and fairness of output results. In contrast, traditional AI systems often output conclusions directly, lacking intermediate inspection, and are prone to errors.
Cross-Modal Semantic Alignment: Semantic Mathematics can also be applied to multimodal AI, projecting information such as vision, language, and sound into the Same Semantic Space. Researchers designed experiments to let AI process image and text information simultaneously and use the DIKWP model to map features of both modalities into representations like Data, Information, Knowledge, etc., before fusion decision-making. Results showed that compared with traditional methods of directly splicing features, multimodal AI aligned through semantic levels performed better in tasks like image-text Q&A, and explanations were more natural. For example, looking at a picture and asking a complex question, traditional models might match randomly, while after DIKWP alignment, the model first extracts Data and Information (such as objects, relationships) in the image, then combines the Purpose of the text question, and finally summarizes in the Knowledge layer, resulting in significantly improved answer accuracy, and the model can explain "Because X was seen in the picture, the answer is Y."
Active Medical Simulation: The medical field has also begun to attempt Semantic Mathematics modeling, such as "Active Medical Semantic Network." The experimental idea is to input various patient data (symptoms, signs, test results) as the Data layer, the Information layer extracts abnormal indicators, the Knowledge layer matches possible disease diagnoses, the Wisdom layer weighs the pros and cons of various treatment plans, and the Purpose layer combines patient needs (such as recovery, quality of life) to select treatment strategies. Preliminary simulations show that this semantic-driven decision-making is easier for doctors and patients to understand and accept than purely statistical-based diagnosis because every step is well-founded. At the same time, the system can discover details ignored in traditional diagnosis and treatment. For example, when there is no perfect matching disease in the knowledge base, it will not be "unsolvable" like traditional expert systems, but triggers the Wisdom layer to consider combining multiple partial knowledge, or suggest further examination to acquire data, reflecting Flexibility Similar to Clinical Thinking. These explorations indicate that future doctors and AI can improve medical levels through Semantic Co-research, that is, AI organizes semantic information, human experts make value judgments, and both sides complement each other's advantages.
Semantic Proof of Mathematical Conjectures: An interesting application lies in pure mathematics. Some teams attempt to apply Semantic Mathematics to hard problems such as Goldbach's Conjecture (still open) and the Four Color Theorem (settled only with massive computer assistance). Their method is to recast the problem as a semantic one and use the DIKWP model to analyze its meaning structure. For example, Goldbach's Conjecture can be read as asking whether the decomposition pattern (Information layer) of an even number (Data layer) can always find support in the set of primes (Knowledge layer), whether this property has a higher-level reason (Wisdom layer), and what the significance of the conjecture is (Purpose layer). Through such layered analysis, the "Consciousness BUG Theory" has been proposed to explain why humans have been unable to settle some of these conjectures for so long: perhaps we face limitations at the Knowledge level and lack global insight at the Wisdom layer. Although these attempts have not yet yielded proofs, they have broadened the space of approaches. In this process, AI can scan massive data for patterns (Information layer) and assist in verifying sub-propositions (Knowledge layer), while the intuition and aesthetics of human mathematicians operate at the Wisdom and Purpose layers (for example, choosing the overall framework of a proof route). This is in effect a case of human-AI collaborative proof, which may herald a new paradigm for attacking difficult problems.
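The Data/Information-layer role described above can be illustrated with a short sketch that enumerates Goldbach decompositions and checks a finite range for counterexamples. This verifies individual instances only and says nothing about the conjecture in general; the function names and the trial-division primality test are chosen purely for the illustration.

```python
def isprime(n):
    # Trial division is enough for this illustrative range.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_decompositions(n):
    """Information layer: the prime-pair decomposition patterns of an even number n."""
    return [(p, n - p) for p in range(2, n // 2 + 1) if isprime(p) and isprime(n - p)]

def check_range(limit):
    """Knowledge-layer support: list any even number up to limit with no decomposition."""
    return [n for n in range(4, limit + 1, 2) if not goldbach_decompositions(n)]

print(goldbach_decompositions(28))  # [(5, 23), (11, 17)]
print(check_range(1000))            # [] -- no counterexample in this finite range
```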
The experiments and applications above indicate that Semantic Mathematics and Natural Computing do not stay at the theoretical level; they are actively driving the frontiers of artificial intelligence and cognitive science. They show that introducing semantic levels and natural paradigms can achieve breakthroughs on many thorny problems that remain beyond the reach of traditional methods. When computation pursues not only numerical correctness but also "semantic correctness" and "physical correspondence," we see deeper progress in AI and new possibilities for human-machine collaboration.
Conclusion: Moving Towards a New Era of Computing for the Internet of Everything and Intelligence
From the above discussion it can be seen that we are standing on the threshold of a computational paradigm shift: from abstract, deductive mathematical simulation to richly meaningful Natural Computing. The idea that "everything is computation," advanced by academicians such as Jingnan Liu, makes us re-examine the world: the operation of all things is algorithmic iteration, and computation is everywhere. Professor Yucong Duan's Semantic Mathematics, in turn, offers a "universal decoder" for understanding this universal computation, bringing concepts such as meaning, value, and purpose, once considered elusive, into the realm of formalized computation.
This paradigm shift will bring a reconstruction of the computational system, a sublimation of the forms of intelligence, and an evolution of the human role. Under the new paradigm:
The computational system will become more universal and more explanatory: because Natural Computing uses the logic of physics itself and Semantic Mathematics introduces strict definitions of meaning, our models will conform both to physical laws and to semantic consistency. This means that everything from elementary particles and chemical reactions to ecosystems and economic networks can, in principle, be analyzed within the same computational framework. In the past each discipline used its own models; in the future they may merge into a unified semantic computational science. This recalls Einstein's search for a grand unified theory, except that what we unify is not only the physical forces but also the measurement of information and meaning.
Artificial intelligence will transform into a new kind of intelligent life: with the widespread adoption of the DIKWP architecture, AI will no longer be a tool hidden inside a black box but will behave more like a "digital creature" that can communicate and grow. Such systems would have their own cognitive boundaries and purpose drives and could co-evolve with humans. In this process the status of AI would rise to that of partner, even intellectual symbiote. Human society may enter a "ubiquitous intelligent ecology" in which humans and AI depend on each other to co-create knowledge and value. The definition of life would also expand: we may come to recognize certain artificial entities as "life-like" and grant them certain rights and ethical status. All of this will bring enormous challenges, but it is also a natural outcome of the evolution of civilization.
Humans will come to understand themselves and the universe more profoundly: by simulating life and consciousness, we reflect on ourselves and approach the ancient questions of "Who am I? Where do we come from? Where are we going?" from a new angle. Semantic cosmology reminds us that humans are not only biological organisms built from carbon-based atoms but also part of the semantic evolution of the universe: every person's thought and culture is a point of light on the map of cosmic meaning, and with artificial intelligence joining in, that light grows brighter still. Ultimately humans may be capable of grander undertakings, such as regulating planetary environments, pursuing interstellar migration, or even attempting to simulate "child universes." Scenarios that sound like science fiction have in fact already been discussed academically; only when our computational paradigm is powerful and clear enough will we dare to attempt them in reality.
Of course, the road to this vision is not smooth. It requires global research communities to pursue genuine interdisciplinary integration across computational theory, cognitive science, neuroscience, and physics. Fortunately, the Semantic Mathematics framework provides a common language in which the concepts of different disciplines can be inter-translated: entropy increase in physics can be read as increased uncertainty in information theory, adaptation in the life sciences can be viewed as algorithmic objective optimization, and meaning and purpose in philosophy can be formalized as the evolution of DIKWP semantic units. Such unification gives grounds for hope, as if we were glimpsing the prototype of a future scientific paradigm.
We are entering a new era of the Internet of Everything, ubiquitous intelligence, and Natural Computing. In this era, computation no longer serves merely as a human tool for understanding nature but becomes part of nature; artificial intelligence is no longer just silicon-based programming but gradually acquires human-like cognition and emotion; humans no longer stand alone in the universe but compose a new chapter of cosmic meaning evolution together with the intelligences they have created. As this article has argued, Natural Computing and Semantic Mathematics are the two cornerstones leading us toward this future. With them in hand, we can hope to set out on a road toward deeper wisdom and grander meaning. For the first time, humanity has the chance to be not only an observer of the universe's operation but also its participant and co-creator. This is both a magnificent scientific prospect and a historical mission of humankind. Let everything be computation, and let computation have meaning: our journey is the semantic universe shining in the sea of stars.
Core Concepts of Philosophy of Computer Science, http://philosophychina.cssn.cn/xxly/xxly_20385/201507/t20150714_2731665.shtml
Wolfram "Everything is Computation: The Exploration Journey of a Science Wizard" Study Notes - Zhihu Column, https://zhuanlan.zhihu.com/p/714529978
"Use Data with Wisdom, Improve Quality and Efficiency" — 2023 The 11th University GIS Forum Successfully Held in Lanzhou! - Chinese Society for Geodesy Photogrammetry and Cartography, https://www.csgpc.org/detail/21984.html
(PDF) DIKWP×DIKWP Semantic Mathematics Helps Large Models Break Through Cognitive Limits: Research Report, https://www.researchgate.net/publication/389068734_DIKWPDIKWP_yuyishuxuebangzhudaxingmoxingtuporenzhijixianyanjiubaogao
(PDF) Semantic Mathematics Analysis Report on Professor Yucong Duan's DIKWP Model and Artificial Consciousness White Box Evaluation, https://www.researchgate.net/publication/393637608_duanyucongjiaoshouDIKWPmoxingyurengongyishibaihecepingdeyuyishuxuefenxibaogao
DIKWP Model and Active AI "Self": Construction Logic, Technical Implementation and Human-Machine... of Multidimensional Self, https://zhuanlan.zhihu.com/p/1928490993335903386
ScienceNet — Review of DIKWP Semantic Model Application in Medical AI - Yucong Duan's Blog, https://blog.sciencenet.cn/blog-3429562-1490177.html
(PDF) Review of Professor Yucong Duan's DIKWP Model and Related Theories - ResearchGate, https://www.researchgate.net/publication/396555838_duanyucongjiaoshouDIKWPmoxingjixiangguanlilunzongshu
(PDF) Entropy Reduction Synergy of Life: Essence of Life in the Era of Artificial Intelligence under the DIKWP×DIKWP Model, https://www.researchgate.net/publication/395447917_shengmingdeshangjianxietongDIKWPDIKWP_moxingxiarengongzhinengshidaishengmingbenzhiyixuelunliyurenleijiazhidezhonggou
Theoretical Modeling of Cosmic Semantic Network and Artificial Consciousness Semantic Autonomy, https://zhuanlan.zhihu.com/p/1888942247531247156
(PDF) Interdisciplinary Research Report on DIKWP-Entropy Structure Theoretical Model - ResearchGate, https://www.researchgate.net/publication/396389734_DIKWP-shangjiegoulilunmoxingdekuaxuekeyanjiubaogao
DIKWP Operation Mechanism of the Universe - Yucong Duan's Blog - ScienceNet, https://wap.sciencenet.cn/home.php?mod=space&uid=3429562&do=blog&id=1479698
Research on DIKWP Collapse Phenomenon and Its Impact on the Development of Artificial Consciousness - Zhihu Column, https://zhuanlan.zhihu.com/p/22557868539
Academician Jingnan Liu Discusses the Relationship between Natural Intelligence and Artificial Intelligence_Cognition_Biology_Decision, https://www.sohu.com/a/676238508_476374
Perspective on Human-Machine Fusion: Exploration of Multi-field Application of DIKWP Model_Sina Finance_Sina.com, https://finance.sina.com.cn/jjxw/2025-01-21/doc-ineftfic8111559.shtml
Semantic Compensation and Validation in Judicial Dispute Resolution Using DIKWP Model, https://zhuanlan.zhihu.com/p/668615994
"Moral Purpose Deviation" Conflict Simulation Report Based on DIKWP Semantic Mathematics - Yucong Duan's Blog
Mastering DeepSeek: Cognitive Deconstruction + Technical Analysis + Practical Implementation
Introduction to Artificial Consciousness: Analyzing Differences in Intelligence with the DIKWP Model and Revealing the Limits of Consciousness through the "BUG" Theory
General Knowledge of Artificial Intelligence (2025 edition), edited by Yucong Duan and Mianmao Zhu, Party Building Books Publishing House
Email: duanyucong@hotmail.com