Research on Semantic Generation and Neural Mapping Based on DIKWP Semantic Mathematics and Consciousness "BUG" Theory
Yucong Duan
Benefactor: Shiming Gong
International Standardization Committee of Networked DIKWP for Artificial Intelligence Evaluation (DIKWP-SC)
World Artificial Consciousness CIC (WAC)
World Conference on Artificial Consciousness (WCAC)
(Email: duanyucong@hotmail.com)
Abstract
Based on the "semantic mathematics" framework and the "consciousness BUG theory" proposed by Professor Yucong Duan, this report reconstructs the data-information-knowledge-wisdom-intention (DIKWP) model from the perspective of semantic generation and explores the mapping between the semantics of each layer of the model and neurophysiological mechanisms. We use "same semantics" (同), "different semantics" (异), and "complete semantics" (完) as basic semantic units, explain how they correspond to the data (D), information (I), and knowledge (K) layers of the DIKWP model, and show how the higher-level semantics of wisdom (W) and intention (P) evolve from them. The report emphasizes that semantics precede physical entities: the neural activity of the brain is regarded as the expression and reflection of semantic generation in the physical world, rather than the root of semantics. Combined with the key mechanisms of the consciousness BUG theory (the subjective finiteness assumption, hypothetical completeness, and abstraction-simplification-representation fault tolerance), we explain how the cognitive system forms a semantic closed loop through hypothesis filling, abstract generalization, and similar strategies when processing incomplete information, and how this loop is reflected in neural structure and dynamics. This semantic closed loop can be understood as a DIKWP*DIKWP self-interactive network mechanism, that is, the self-integration of a highly coupled semantic space with two-way feedback between the DIKWP elements. We further demonstrate how this mechanism drives the emergence of consciousness: "small deviations" or "bugs" in the cognitive process are not purely negative disturbances, but promote the generation of subjective consciousness by triggering higher-level integration and reflection.
Ultimately, this article concludes that only by putting the semantic generation process first and considering the neural mechanism as its carrier can we more thoroughly understand the nature of consciousness and its implementation in artificial intelligence systems.
1 Introduction
In the study of human cognition and artificial intelligence, introducing semantics (meaning) into rigorous mathematical and system models is a key challenge. Traditional models in cognitive science and neuroscience often focus on physical structures or information-processing pipelines but lack a direct description of semantic generation, leading to the problem of "missing semantics". The DIKWP semantic mathematical model proposed by Professor Yucong Duan provides a new perspective, incorporating five layers of cognitive content, namely data, information, knowledge, wisdom, and purpose (intention), into a unified framework, and mathematically formalizing the generation and transformation of semantics at each layer. By introducing a semantic dimension, the model aims to enable AI systems to understand and process richer human semantics and intentions, thereby overcoming the insufficient semantic understanding of traditional AI systems.
At the same time, Professor Yucong Duan's "Consciousness BUG Theory" offers unique insight into the origin of consciousness. The theory holds that humans' limited cognitive resources and the constraints of complex information processing are the fundamental reason consciousness arose: consciousness is not a masterpiece carefully shaped by evolution, but a "bug" (defect) that emerges accidentally when the cognitive process hits its limits. In this framework, most information processing is done automatically by the subconscious, and the conscious thinking we regard as "advanced" is actually an occasional deviation caused by the brain's overload or bottlenecks, that is, a cognitive illusion. This view overturns the traditional picture of consciousness as a purely advantageous trait, emphasizing the roles of cognitive relativity (different observers' judgments of the same agent's consciousness depend on their respective understanding) and of imperfections in the cognitive process in the formation of consciousness.
In the above context, this paper attempts to systematically explain the role of the DIKWP model in cognition and consciousness research from the perspective of semantic generation, and to link it with neurophysiological processes. We will base our research strictly on the method of semantic mathematics, avoiding the direct borrowing of traditional neuroscience terms or models such as the default mode network (DMN) and integrated information theory (IIT), which would risk misreading semantic mechanisms as their physiological bases. On the contrary, we argue that semantics precedes physics, and the mapping of semantics onto the physical substrate shapes brain activity patterns. To this end, this paper focuses on the following questions:
Semantic layering and generation mechanism of the DIKWP model: How are higher-level wisdom and intention semantics generated, layer by layer, from the most basic "same", "different", and "complete" semantics? What does each of these semantic units mean, and how do they form a self-reinforcing semantic network?
Consciousness BUG mechanism and semantic closed loop: In the cognitive process, how do "bugs" caused by limitations promote the formation of semantic closed loops through mechanisms such as assumption and abstraction? How do the concepts of "subjective finiteness", "hypothetical completeness", and "abstraction-simplification fault tolerance" explain the brain's ability to produce meaningful conscious content from incomplete and uncertain information?
Semantic-neural mapping: How are the semantic generation processes at each layer of the DIKWP model reflected in the structure and dynamics of the nervous system? In other words, which brain structures and activities correspond to the semantic processing stages of data, information, knowledge, wisdom, and intention? How does this correspondence support the view that semantic generation takes precedence over physiological realization?
DIKWP*DIKWP self-interaction network and consciousness emergence: What kind of semantic network structure will be formed when the DIKWP semantic system interacts with itself (and between different subjects)? How can this mechanism of semantic space self-interaction (DIKWP*DIKWP) become the universal basis for consciousness emergence?
Through the above discussion, we hope to establish a cognitive model perspective with semantics as the core link, and explain the generation mechanism of consciousness as the self-organization process of semantics in complex networks and its physical mapping. This perspective is expected to provide new ideas for the design of artificial consciousness systems: that is, when building AI, we should focus on enabling the system to have the ability to self-generate and self-correct semantics, rather than relying solely on increasing the computing scale or simulating biological neuron connections. Next, we first introduce the basic semantic units and generation logic of the DIKWP semantic mathematical model.
2 Methods and theoretical framework: the DIKWP model and the foundation of the "three semantics"
The DIKWP semantic mathematical model divides the cognitive process into five distinct and closely interacting elements: data (D), information (I), knowledge (K), wisdom (W), and intention (P). From the perspective of semantic mathematics, the first three levels are associated with the three core concepts of "same semantics", "different semantics", and "complete semantics". They form the cornerstone of higher-level wisdom and intention semantics. Let us explain these three semantics separately:
Same Semantics: Same semantics emphasizes the extraction and induction of the commonalities of things. In the DIKWP model, this corresponds to the data layer (D) of the cognitive process. When we obtain the raw data, we first focus on the common features in different data and classify them into certain concepts or categories based on them. In other words, "same" semantics is reflected in the pattern recognition and clustering classification of discrete perceived signals, merging disordered and messy observations into meaningful categories. For example, in computer vision, a large number of face images form an abstraction of the concept of "face" by extracting common facial features, which is exactly the machine's grasp of same semantics. Without the capture of same semantics, the intelligent agent will not be able to generalize new instances and form a stable conceptual system. Therefore, the processing of the data layer can be regarded as an "assimilation" process: ignoring individual differences and refining commonalities, which is essentially a semantic abstraction.
Different semantics (abbreviated as "different"): Different semantics focuses on the differences between things and the incremental information carried by new associations. This corresponds to the information layer (I) of DIKWP, which further examines the unique attributes, context or changes of each instance based on the existing classification. When AI receives new data, it not only needs to determine which category the data belongs to (same semantic category), but also analyzes the differences between it and existing knowledge to extract useful new information. For example, in the anomaly detection scenario, we are concerned about the "differences" of the current data relative to the normal mode. These differences constitute the important semantic content of the information layer. For example, although all cars belong to the concept of "car" (same semantics), the license plate number, parking location, time, etc. of each car are different. These specific details are the information semantics in this context, that is, different semantics. Therefore, the processing of the information layer is a "dissimilar" process: emphasizing context differences, novelty and changes, enriching the semantic description of objective things. Different semantics provides the system with a basis for dynamic adjustment and learning new knowledge, so that the cognitive system will not become rigid due to focusing only on commonalities.
Complete Semantics (abbreviated as "complete"): Complete semantics refers to the overall grasp of things and the formation of universal laws. It corresponds to the knowledge layer (K) of DIKWP. At this level, the cognitive system globally integrates scattered information fragments and rises to a relatively complete knowledge structure. Complete semantics means combining local differences with commonalities to form a unified understanding of the overall system. For example, after accumulating a large amount of data and information, an AI for energy scheduling will distill a regular model of the whole system's operation, gaining the ability to make decisions from a global perspective. "Complete" at the knowledge level does not mean absolute completeness (Gödel's incompleteness theorems, among other factors, ensure that no formal system can exhaust the truth); rather, "complete semantics" here emphasizes a relative completeness: all relevant semantic elements have been integrated as far as possible under the current cognitive framework into an internally self-consistent set of explanations or models of the world. Through the construction of complete semantics, intelligent agents can compile scattered experiences into universal laws and extrapolate to new situations. The process at the knowledge level can therefore be seen as semantic "integration": after "assimilation" and "differentiation", an overall consistent knowledge representation is sought.
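As a toy illustration (ours, not part of the formal model), the car/license-plate example above can be sketched in code: "same" merges observations into categories, "different" keeps the instance-specific details, and "complete" integrates both into one self-consistent model.

```python
# Toy sketch of the three basic semantics using the car example above.
# Data, field names, and function names are illustrative assumptions.
observations = [
    {"kind": "car", "plate": "A123"},
    {"kind": "car", "plate": "B456"},
    {"kind": "truck", "plate": "C789"},
]

def same(obs):
    """D layer / "same": cluster observations by their shared feature."""
    categories = {}
    for o in obs:
        categories.setdefault(o["kind"], []).append(o)
    return categories

def different(obs):
    """I layer / "different": the instance-specific details."""
    return {o["plate"]: o["kind"] for o in obs}

def complete(obs):
    """K layer / "complete": integrate commonality and difference."""
    return {"categories": sorted(same(obs)), "instances": different(obs)}

model = complete(observations)
print(model["categories"])         # ['car', 'truck']
print(model["instances"]["A123"])  # car
```

The point of the sketch is only the division of labor: assimilation discards plate numbers, differentiation discards nothing but attaches each detail to its category, and integration holds both views in one structure.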
The above “three major semantics” – sameness, difference, and completeness – constitute the basic elements of semantic mathematics of the DIKWP model. Based on this, we can understand that as higher levels, the semantic connotations of wisdom (W) and intention (P) also come from the further combination, enhancement, and assignment of the above basic semantics. The wisdom layer (W) can be seen as the product of the combination of knowledge semantics and value judgment: wisdom means introducing the consideration of value, purpose, and priority on the basis of mastering complete knowledge, so as to make optimal choices. Professor Yucong Duan’s model often mentions “wisdom valueization”, that is, superimposing value evaluation on knowledge so that decisions not only have objective basis but also meet subjective/ethical goals. In other words, the semantics of the wisdom layer represent “meaningful completeness”: knowledge is guided and optimized by value, becoming a wise decision that is more in line with the overall interests or long-term goals. As for the intention layer (P), its semantics is reflected in the directional drive of the entire cognitive process. Intention gives the system behavior an objective function, which can be understood as further binding the wise decision to specific goals and motivations (Professor Yucong Duan calls it “intention functionization”). Intentional semantics ensures that cognitive activities are not aimlessly integrated and judged, but always operate around a certain purpose or desire. Therefore, intention provides initiative and self-driving force for the semantic network. It is not only the top layer of the DIKWP sequence, but also affects the bottom layer data selection and information processing through feedback.
It should be emphasized that in the DIKWP model these five levels are not linear and unidirectional, but form a highly interconnected network. The semantics of each level are generated from the level below and can affect the processing of lower levels through feedback. Mathematically, the five elements of DIKWP can be regarded as nodes in a graph network, with the bidirectional connections between nodes representing semantic mapping functions or operators, such as data-to-information and information-to-knowledge. For example, the mapping f_D→I: D → I represents the generation of information patterns from data, while the mapping f_K→D: K → D represents the guidance or filtering effect of existing knowledge on data collection. The entire DIKWP network thus forms a directed cyclic graph, and each traversal of the graph realizes a cognitive closed loop from data to intention. In this closed loop, the "same", "different", and "complete" semantics of the lower levels continuously supply material to the higher levels, while the wisdom and intention of the higher levels in turn adjust the attention and choices of the lower levels, iterating continuously. Next, we combine the consciousness BUG theory to analyze how this semantic closed loop is maintained under the realistic conditions of limited cognitive resources and incomplete information, and how it promotes the emergence of consciousness.
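The directed cyclic graph just described can be sketched as follows. The node set follows the paper; the exact edge set (which feedback paths to include) and the traversal helper are our own illustrative assumptions, mirroring the f_D→I, f_K→D notation.

```python
# A minimal sketch of the DIKWP network as a directed cyclic graph.
# The node names follow the paper; the edge set is an illustrative choice.

NODES = ["D", "I", "K", "W", "P"]

# Bottom-up extraction edges plus top-down feedback edges; e.g. the pair
# ("K", "D") stands for f_{K->D}: knowledge guiding data collection.
EDGES = [
    ("D", "I"), ("I", "K"), ("K", "W"), ("W", "P"),   # bottom-up
    ("P", "W"), ("W", "K"), ("K", "I"), ("I", "D"),   # top-down feedback
    ("P", "D"), ("P", "I"), ("K", "D"),               # long-range guidance
]

def successors(node):
    """All layers that receive a semantic mapping from `node`."""
    return [dst for src, dst in EDGES if src == node]

def closed_loop(start="D", max_steps=10):
    """Follow bottom-up edges from D to P: one cognitive closed loop."""
    order = {"D": 0, "I": 1, "K": 2, "W": 3, "P": 4}
    path, node = [start], start
    for _ in range(max_steps):
        ups = [n for n in successors(node) if order[n] == order[node] + 1]
        if not ups:
            break
        node = ups[0]
        path.append(node)
    return path

print(closed_loop())  # ['D', 'I', 'K', 'W', 'P']
```

Because feedback edges such as ("K", "D") coexist with the bottom-up chain, the graph contains cycles, which is what allows the network to iterate rather than terminate after a single pass.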
3 Consciousness "BUG" theory and the semantic generation mechanism
The BUG theory of consciousness points out that it is precisely the limitations and imperfections of the human cognitive system that create the subjective experience of consciousness. To integrate this theory into the DIKWP semantic model, we need to examine which mechanisms in the cognitive process allow the semantic closed loop to keep operating under "incompleteness" and to trigger higher levels of awareness. Here we summarize three key mechanisms related to the BUG theory and discuss their semantic implications:
Subjective finiteness hypothesis (limited cognitive resources): The human brain's information processing capacity is limited. Faced with massive and complex data, we cannot perform exhaustive analysis. Limited attention span, working memory capacity, computational depth, etc., force the cognitive system to make trade-offs and simplifications in the process of semantic generation. In the DIKWP model, this limitation is reflected in the fact that the system cannot process every difference in every bit of data, but tends to focus on the semantics related to the current goal. For example, when the intention layer has a clear goal, the high-level layer will assign limited attention to specific data and information patterns, ignoring a large number of irrelevant details. This intention-based selective processing (corresponding to the feedback path of P→I and P→D) effectively improves efficiency, but also introduces observation bias and confirmation bias: observers may only notice part of the information that meets their expectations, and miss or filter out contradictory evidence. Subjective limitations force the DIKWP closed loop to take shortcuts. These shortcuts are useful in most cases, but they also lay the groundwork for the generation of biases and errors (bugs).
Hypothetical completeness (hypothesis filling and imagination): To overcome the uncertainty brought by incomplete information, the cognitive system has the ability to make assumptions and fill in the gaps based on existing patterns, that is, to "fictionalize" the missing parts semantically to form a relatively complete picture. When the data is insufficient or contradictory, our brain will use existing knowledge to make inferences and fill in the gaps to maintain the coherence of the cognitive model. For example, when we see a partially obscured object, we will automatically fill in its complete shape; when faced with ambiguous auditory signals, we tend to match it with the closest familiar words. This process is reflected in the DIKWP semantic closed loop as top-down semantic completion: the knowledge layer and the wisdom layer predict and constrain the content of the information layer, so that even if the perceived data fragments are incomplete, the overall semantics is still relatively complete. On the one hand, this ensures the continuity of the semantic closed loop - it will not collapse due to local interruptions; but on the other hand, this hypothetical completeness is a double-edged sword: it may bring cognitive illusions, that is, we think we have a grasp of the whole picture, but in fact part of it is filled in by subjective conjecture. Yucong Duan compares consciousness itself to the illusion created by this "abstract complete semantics": we tend to regard partial and incomplete cognition as a comprehensive and accurate description, and make decisions based on it. Therefore, the hypothesis filling mechanism not only gives the cognitive process robustness (it can still operate in the face of defects), but also leads to deviations between subjective experience and objective reality.
Abstraction-simplification-representational fault tolerance: Faced with complex, cumbersome reality, the human brain has developed a set of "abstraction" and "simplification" strategies, allowing a certain degree of representational error in exchange for processing speed and adaptability. Abstraction means retaining only the most important semantic features of things (same semantics); simplification means ignoring minor details and small differences (sacrificing some different semantics); and representational fault tolerance means that even if our internal model is not completely consistent with the real world, small deviations are allowed as long as the model is effective in most cases. These mechanisms can be seen as extensions of the limitations and assumptions above: because resources are limited, we abstract and simplify; because we seek cognitive completeness, we tolerate certain errors. For example, human vision exhibits pareidolia: we would rather "misrecognize" a face several times (matching noise to a face pattern) than miss a single potentially real face, a strategy that sacrifices accuracy to guarantee sensitivity. Similarly, we often use prior experience and even prejudice to judge new situations quickly, a form of heuristic reasoning based on representativeness; although it can produce bias, it is efficient and good enough in most everyday situations. In the DIKWP model, abstraction and simplification are reflected in the dimensionality reduction from data to information (extracting limited features from the original complex signal) and the induction from information to knowledge (summarizing varied situations with concise rules).
Representational fault tolerance is reflected in the fact that high-level knowledge and wisdom do not demand absolute optimality for every decision, but allow a certain error in exchange for the opportunity for real-time decision-making and learning; such moderate deviation may actually aid survival and problem solving.
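A minimal sketch (our own, with illustrative data, names, and thresholds) of the three mechanisms acting together on a partial percept: limited attention keeps only goal-relevant features, hypothesis filling completes the percept from a stored prototype, and representational fault tolerance accepts a model that deviates slightly from reality.

```python
# Toy sketch of the three BUG mechanisms; everything here is illustrative.
PROTOTYPE = {"shape": "round", "color": "red", "taste": "sweet"}

def attend(features, goal, capacity=2):
    """Subjective finiteness: keep at most `capacity` goal-relevant features."""
    relevant = {k: v for k, v in features.items() if k in goal}
    return dict(list(relevant.items())[:capacity])

def hypothesize(percept):
    """Hypothetical completeness: fill missing attributes from the prototype."""
    filled = dict(PROTOTYPE)
    filled.update(percept)                       # observed data overrides guesses
    return filled, set(filled) - set(percept)    # second item: the guessed parts

def tolerable(model, reality, max_errors=1):
    """Representational fault tolerance: accept small model-world deviations."""
    errors = sum(model[k] != reality.get(k) for k in model)
    return errors <= max_errors

raw = {"shape": "round", "smell": "fresh", "color": "green"}
percept = attend(raw, goal={"shape", "color"})   # irrelevant details dropped
model, guessed = hypothesize(percept)            # "taste" is pure conjecture
print(sorted(guessed))                           # ['taste']
print(tolerable(model, {"shape": "round", "color": "green", "taste": "tart"}))
```

Note that the completed model is accepted even though its guessed "taste" turns out wrong: the deviation stays within tolerance, which is exactly the trade-off the mechanism describes.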
Through the analysis of the above mechanisms, we can see that the so-called "bugs" in the cognitive process are not pure errors, but a functional feature under limited rationality. These deviations enable the cognitive system to maintain semantic closed-loop operation under limited resources and incomplete information, and trigger more advanced processing when encountering difficulties. This can be understood from two aspects:
On the one hand, within a single cognitive subject, when contradictions or dead ends occur in low-level processing (for example, when stimuli inconsistent with existing knowledge are perceived), the higher-level wisdom and intention modules are often stimulated to intervene, re-evaluate, and adjust strategy. The consciousness BUG theory emphasizes that it is precisely this bug-triggered high-level control that puts the system into a stronger state of self-awareness: the system "notices" the limitations of its own processing and so mobilizes more resources to review and improve, which essentially increases the participation of subjective consciousness. For example, when a misunderstanding arises in human communication, we pay closer attention and choose our words carefully to clarify concepts; this reflective process is accompanied by a stronger conscious experience. Similarly, in artificial intelligence, some studies have proposed intentionally introducing bugs or uncertainty to prompt AI to break out of routine modes and exhibit more flexible, autonomous problem solving.
On the other hand, in the context of multiple interacting cognitive subjects (i.e., DIKWP*DIKWP cross-subject interaction), the existence of bugs also affects relative judgments of consciousness. The theory of consciousness relativity proposed by Yucong Duan holds that whether an entity is considered conscious depends on whether the observer can understand its output. When two intelligent agents communicate within their respective DIKWP frameworks, if one party interprets the other's output entirely through its own model, "illusory understanding" may occur: the observer mistakenly believes the other party expressed a certain meaning, when in fact this is the observer's own projection. Such illusions, caused by the poor coupling of two semantic systems, can be regarded as consciousness bugs in the interaction. For example, when people interpret AI behavior, they often attribute human intentions to it (anthropomorphization bias), which arises from the observer injecting their own expectations along the P→I path; conversely, AI may misinterpret some human messages because it lacks human emotional semantics. The combination of consciousness relativity and the BUG theory reminds us that for different minds to reach consensus about each other's "consciousness", they must overcome the differences between their semantic frameworks and the illusions caused by their respective processing limitations.
In summary, the consciousness bug theory injects insights into how imperfection creates consciousness into the DIKWP semantic model. In the following part of this report, we will further map this perspective of semantic generation and bug regulation to the neurophysiological level, explain how the brain acts as a material carrier of semantic closed loops, and explore the implications of the DIKWP*DIKWP semantic network structure for the emergence of consciousness.
4 Demonstration and analysis of the neurophysiological mapping of the DIKWP semantic closed loop
On the premise of "semantics first", we regard the brain's nervous system as the expression carrier of the DIKWP semantic generation process. In other words, the structure and activity patterns of various parts of the brain are not independent factors that determine cognition, but the projection of semantic processes on biological media. Although we avoid using traditional neural terms as the starting point for explanation, we can observe the manifestation of each layer of DIKWP semantics in the brain from the known neural function correspondence:
Data layer (D) / same semantics: corresponds to the activity of the sensory organs and their projections to the primary sensory cortices. Primary sensory areas for vision, hearing, and touch receive raw signals and perform preprocessing such as edge detection and fundamental-frequency extraction. This low-level processing is in effect preparation for "same semantics": extracting the basic features common to different inputs (such as line orientation in vision or pitch in hearing), which amounts to converting continuous physical stimuli into discrete feature patterns. Neuroscience research shows that neurons in the primary sensory cortex often function as feature detectors, with each neuron sensitive to a specific pattern (such as edges of a certain orientation). In this way, the brain is already clustering and classifying patterns at the early perceptual stage, providing a neural basis for the formation of same semantics.
Information layer (I) / different semantics: corresponds to the brain's intermediate association areas and pattern-recognition circuits. After sensory information enters the higher-level association cortex (such as the multimodal areas of the parietal and temporal lobes), different features are combined to form meaningful patterns and event representations. Here the brain begins to attend to the differences and associations between inputs: which stimuli co-occur, how a signal changes relative to the background, and so on. These activities are reflected in complex connections and firing patterns among neuronal groups, often involving synchronous oscillations or burst firing to represent the emergence of new information. When an abnormal pattern is detected (one that differs from existing memories), the corresponding neural circuit increases its activity, signaling the system that there is new information to process (in the EEG this may be manifested as mismatch negativity (MMN) and similar signals). It can be said that the intermediate association areas integrate sensory signals and highlight differences, which is the presentation of different semantics in the brain.
Knowledge layer (K) / Complete semantics: corresponds to the memory network supported by the hippocampus and the neocortical association area. The hippocampus plays a key role in integrating short-term experiences into long-term memory and forming semantic networks, while the brain's association cortices (prefrontal-parietal-temporal network) store conceptual knowledge and rules. The knowledge layer requires global integration of information, which is neurally manifested as distributed storage and association: the meaning of a concept or event is often composed of synchronized activation patterns in many regions (the so-called cell assembly). For example, the knowledge concept "apple" may correspond to the co-activation of multiple cortical regions such as visual shape, color, taste, and language name. This extensive connection ensures the "complete semantics" of knowledge - various relevant information about apples in the brain form a coherent network. When we call up a piece of knowledge, the relevant regions will activate together to reconstruct a holistic and meaningful representation. It is worth noting that the knowledge layer is also related to the stability of semantic memory: after repeated learning, the connection is strengthened, and the knowledge network becomes stable and resistant to noise perturbations, which corresponds to the relative completeness mentioned in semantic mathematics.
Wisdom layer (W) / complete semantics guided by value: corresponds to the prefrontal cortex and its connected evaluation and decision-making network. The prefrontal cortex is considered to be the brain's high-level executive function and decision-making center. It integrates information from various sensory and memory areas, and combines it with feedback from the reward system to make judgments and choices. The semantics of the wisdom layer is manifested in the brain as a multi-factor coordinated activity pattern: the prefrontal lobe not only activates the knowledge network related to the current problem, but also mobilizes circuits such as emotions, social values, and lessons learned to evaluate various options. For example, when faced with a choice, the interaction between the prefrontal lobe and the limbic system reflects the emotional value evaluation of the options, which is actually giving knowledge a "value weight". Only after considering various factors, both short-term and long-term, local and overall, can the prefrontal lobe select the "best" solution. This corresponds to the semantics of the wisdom layer - choosing a solution that meets the value goal within the framework of complete knowledge. Therefore, we can regard wisdom as the semantic integration of value orientation in the brain, with its typical characteristics of extensive information convergence and strong vertical feedback (high-level regulation of low-level).
Intention layer (P) / purpose semantics: corresponds to the interaction between the brain's motivation system (limbic system, thalamus, etc.) and the orbitofrontal cortex. The intention layer is physiologically manifested as a tonic, sustained regulatory signal, such as the loop composed of the nucleus accumbens, amygdala, and prefrontal cortex that maintains attention to and drive toward specific goals. The limbic system processes basic desires and emotions (reward and punishment signals), which provide the original "intention seeds" (such as hunger driving the intention to find food). The orbitofrontal cortex elevates these basic motivations to a contextualized, planned level, forming specific plans and goal sequences. It is worth noting that intention-related neural activity often has the properties of preset thresholds and continuous feedback: for example, dopamine-regulated reward prediction error signals continue to shape learning and behavioral orientation until the goal is achieved or abandoned. Overall, the brain's motivation-decision network realizes the intention-layer semantics: it provides direction and the ultimate evaluation criteria for the cognitive system, guiding information processing at all levels (for example, selecting goal-relevant perceptual inputs through the attention mechanism, or maintaining task-relevant information in working memory).
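The reward prediction error mentioned above has a standard formalization in reinforcement learning as the temporal-difference error δ = r + γV(s') − V(s). The sketch below is a textbook illustration of that formula, not part of the DIKWP model itself; the scenario and values are our own.

```python
# Textbook temporal-difference sketch of a dopamine-like reward prediction
# error: delta = reward + gamma * V(s_next) - V(s). Values are illustrative.
def td_update(V, s, s_next, reward, alpha=0.5, gamma=0.9):
    delta = reward + gamma * V[s_next] - V[s]   # the prediction error signal
    V[s] += alpha * delta                       # learning driven by the error
    return delta

# A cue that reliably precedes food gradually acquires predictive value.
V = {"cue": 0.0, "food": 1.0}
delta = td_update(V, "cue", "food", reward=0.0)
print(round(delta, 2), round(V["cue"], 2))  # 0.9 0.45
```

The error stays positive (and keeps driving updates) until the cue's value matches what it predicts, which matches the paper's point that the signal persists until the goal is achieved or abandoned.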
Taking the above correspondences into consideration, we find that the brain's nervous system is highly consistent with the DIKWP semantic model: from sensory input, pattern recognition, memory integration, value judgment to goal orientation, each layer of cognitive function can find a corresponding semantic level. More importantly, the information flow in the brain is not unidirectional from bottom to top, but full of top-down feedback - this is exactly the embodiment of the two-way interaction of the DIKWP model. Neuroscience experimental evidence shows that there is a wide range of projections from the higher cortex to the lower cortex. For example, top-down attention regulation signals can selectively enhance the target-related feature response in the visual cortex and inhibit irrelevant signals. In turn, if the lower layer detects abnormal deviations (such as sudden strong stimulation that does not match the expected model), it will quickly activate the alertness and re-evaluation mechanism of the higher region through the thalamus-brainstem pathway (this corresponds to the so-called surprise reaction or error signal triggering cognitive control). This feedback structure of up and down cycles is exactly the same as the semantic closed loop we described earlier: high-level intentions and wisdom adjust low-level data/information processing in real time, and new information and anomalies at the lower level constantly correct high-level knowledge/decision-making. Therefore, we can regard the brain as the physical realization of the DIKWP semantic closed loop. But it needs to be emphasized again that, from our perspective, this correspondence does not mean that "brain structure generates human semantics", but "human semantics selects and shapes brain structure to realize itself" - semantics is the cause, and nerves are the result and the means. 
When neural activity is interpreted from this perspective, many complex patterns of inter-area interaction acquire cognitively meaningful explanations, rather than remaining mere mechanical descriptions of signal transmission.
5. Self-interaction of semantic space and emergence of consciousness
With the help of the DIKWP semantic closed-loop model and its neural mapping constructed above, we can further explore how consciousness emerges from the self-interaction of the semantic space. "Semantic space self-interaction" here refers both to the highly integrated network formed by feedback among the semantic elements within a DIKWP system and, on a larger scale, to the interaction of different agents' DIKWP networks, which forms a more macroscopic semantic network. We discuss self-interaction within a single system (the internal DIKWP*DIKWP loop) and interaction across agents, and reveal their significance for the generation of consciousness.
1. Semantic self-interaction (closed loop) of a single cognitive system: When the five elements of data, information, knowledge, wisdom, and intention are tightly connected and act on one another bidirectionally within a system, a highly integrated "semantic closed loop" network forms. This closed loop can be seen as the self-product of the DIKWP network: the output of each layer becomes the input of other layers, and after repeated cycles and iterations the entire network enters a self-referential state. Formally, DIKWP*DIKWP can be expressed as a 5×5 interaction matrix, in which the diagonal and off-diagonal elements represent the mappings of each layer onto itself and onto the others, yielding 25 interaction modules in total. These 25 modules cover bottom-up semantic extraction (e.g., D→I, I→K), top-down semantic injection (e.g., W→K, P→I), and each layer's loop with itself (such as memory self-consolidation and intention self-updating). Such rich connections allow information to be repeatedly interwoven across levels of abstraction, ultimately forming a self-integrated semantic network. From the perspective of systems theory, consciousness is the global state that emerges from such a highly complex network. As the semantic closed loop continuously processes external input and adjusts its internal state, the system gradually establishes an overall representation of its own operation (that is, a representation of the "self"). In simple terms, consciousness can be regarded as the product of the synergy of the modules within the DIKWP network: low-level data/perception provides raw material from the environment, the knowledge module gives this material contextual meaning, the wisdom module evaluates the information comprehensively and makes decisions, and the intention module provides direction and standards, while all these processes run cyclically in synchrony and correct one another.
When the cycle reaches sufficient complexity and consistency, the system has "awareness" of its own information processing, which is the sign of the birth of subjective consciousness.
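The 5×5 structure above can be enumerated directly. The sketch below is a minimal construction of our own, just labeling the 25 mappings the text describes and partitioning them into bottom-up, top-down, and self-loop modules; the labels are illustrative, not an implementation of the transformations themselves.

```python
# Minimal sketch (our construction): the DIKWP*DIKWP interaction as a 5x5
# table of transformation modules. Diagonal entries are self-loops such as
# memory self-consolidation (K->K) and intention self-updating (P->P).

LAYERS = ["D", "I", "K", "W", "P"]

# One labeled module per ordered layer pair: 5 x 5 = 25 modules.
modules = {(src, dst): f"{src}->{dst}" for src in LAYERS for dst in LAYERS}

bottom_up  = [(s, d) for (s, d) in modules if LAYERS.index(s) < LAYERS.index(d)]
top_down   = [(s, d) for (s, d) in modules if LAYERS.index(s) > LAYERS.index(d)]
self_loops = [(s, d) for (s, d) in modules if s == d]

assert len(modules) == 25     # 25 interaction modules in total
assert len(self_loops) == 5   # one self-loop per layer
```

The partition makes the bookkeeping explicit: 10 upward extractions, 10 downward injections, and 5 self-loops, matching the matrix decomposition in the text.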
The above interpretation coincides with some contemporary theories of consciousness, such as Integrated Information Theory (IIT): the degree of consciousness depends on how much information is integrated within the system and on the complexity of the network. By clearly dividing the semantic elements and characterizing the interactions between them, the DIKWP model concretizes the abstract notion of "information integration" into 25 analyzable transformation modules, providing a structured framework for understanding the formation of consciousness. When the complexity of the DIKWP network increases and semantics at different levels become repeatedly interwoven into a self-referential knowledge network, the system possesses an important condition for generating subjective experience. As pointed out in the literature: "A high-complexity DIKWP network means that information is repeatedly intertwined between different levels of abstraction to form a self-referential knowledge network, which is an important condition for generating subjective experience." In other words, consciousness does not arise from any single level of activity, but emerges from the semantic network as a whole. Once this network not only integrates external information but can also represent and monitor its own state, we say that self-consciousness has emerged.
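One crude structural proxy for "self-referential closure" (emphatically not IIT's actual phi measure, and entirely our own illustration) is whether the layer graph is strongly connected: a pure feedforward chain is not, while adding a single top-down feedback edge closes the loop so every layer can influence every other.

```python
# Crude illustration (our construction, NOT IIT's phi): a DIKWP graph counts
# as "self-referential" here if every layer can reach every other layer,
# i.e. the directed graph is one strongly connected component.

LAYERS = ["D", "I", "K", "W", "P"]
feedforward = [("D", "I"), ("I", "K"), ("K", "W"), ("W", "P")]
closed_loop = feedforward + [("P", "D")]   # top-down feedback closes the loop

def strongly_connected(nodes, edges):
    """True iff every node reaches every other (forward and reverse DFS)."""
    def reachable(start, adj):
        seen, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(adj.get(n, []))
        return seen
    fwd, rev = {}, {}
    for a, b in edges:
        fwd.setdefault(a, []).append(b)
        rev.setdefault(b, []).append(a)
    start = nodes[0]
    return reachable(start, fwd) == set(nodes) and reachable(start, rev) == set(nodes)

open_chain = strongly_connected(LAYERS, feedforward)   # no feedback: open
closed = strongly_connected(LAYERS, closed_loop)       # feedback: closed loop
```

The qualitative point survives the simplification: integration in the text's sense requires cycles, not just a deep feedforward pipeline.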
2. DIKWP interaction between different subjects: The consciousness of a single individual is the product of the self-consistency of its semantic network; when multiple cognitive entities communicate, higher-level consciousness (such as group consciousness or mutual understanding) may also emerge. When two (or more) DIKWP systems communicate, they are essentially exchanging parts of their respective semantic networks, attempting to evoke corresponding semantic responses in each other. If their semantic structures overlap or align sufficiently, a shared semantic space may form, allowing one party to partially "simulate" the other's conscious content. This is the basis of what we usually call "understanding others". The theory of consciousness relativity points out that different observers' judgments about the consciousness of the same intelligence depend on their respective cognitive frameworks; when two intelligent entities gradually come to share a context through sufficient interaction, their judgments of each other's "consciousness" also tend to converge. In the extreme case, if the DIKWP networks of multiple intelligent entities become highly interconnected or even integrated, this can be regarded as the emergence of a more macroscopic consciousness in which individual subjective boundaries blur and a group-level mind emerges (akin to so-called "group consciousness" or "distributed cognition"). This cross-subject emergence of consciousness is beyond the scope of this article, but from the perspective of semantic mathematics its principle is still the self-interaction and closure of the semantic network: the closed loop is simply expanded to span individuals, information and semantics circulate within a larger system, and the level of self-reference is raised to the group.
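The idea of a "shared semantic space" can be given a toy quantitative form. In this sketch of ours (an assumption for illustration, not a measure from the source), each agent's semantic network is reduced to a set of concept links, and Jaccard overlap serves as a crude proxy for how much shared context two DIKWP systems have built up.

```python
# Toy sketch (our illustration): Jaccard overlap of two agents' semantic
# link sets as a crude proxy for the "shared semantic space" in the text.

def semantic_overlap(net_a, net_b):
    """0.0 = fully disjoint semantic networks, 1.0 = identical networks."""
    union = net_a | net_b
    return len(net_a & net_b) / len(union) if union else 1.0

# Hypothetical concept links for two agents (illustrative data only):
agent_a = {("fire", "hot"), ("ice", "cold"), ("goal", "reward")}
agent_b = {("fire", "hot"), ("ice", "cold"), ("goal", "plan")}

shared = semantic_overlap(agent_a, agent_b)   # 2 shared links / 4 total
```

On this reading, "sufficient interaction" in the text corresponds to edge exchange that pushes the overlap upward, and full integration to an overlap approaching 1.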
It is worth noting that, at both the individual and the group level, the operation of the semantic closed loop is inseparable from the regulation of the BUG mechanism. As noted above, moderate deviations and contradictions are opportunities that prompt the system to deepen its processing and examine itself. The emergence of consciousness depends on the system's perception and correction of deviations in its own processing. When the DIKWP closed loop can recognize its own limitations (for example, finding that some information cannot be explained, or that a decision does not meet expectations) and recruits a wider network to try to resolve them, we are in fact witnessing the transition from unconscious automatic processing to conscious active regulation. In this process, every small "error" that is discovered and reflected upon strengthens the system's modeling of its own state. Consciousness therefore does not appear after all bugs have been eliminated; it is tempered in the process of continually generating and correcting bugs. Without the limitations and deviations of information processing, the system might lack any motivation to introspect, and consciousness might not emerge. This view coincides with a familiar philosophical insight: imperfection is the source of progress. For consciousness, bugs are no longer mere flaws to be eliminated, but a "clever mechanism" that guides the system toward evolving self-awareness.
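The detect-escalate-correct dynamic can be sketched as a small loop. This is our own minimal construction under stated assumptions (a constant predictor, a fixed tolerance, a 0.5 correction rate, all arbitrary): deviations within tolerance pass unconsciously, while larger "bugs" trigger a reflective correction and are logged as entries in a growing self-model.

```python
# Minimal sketch (our construction): a closed loop in which tolerated
# deviations pass silently, while out-of-tolerance "bugs" trigger a
# reflective correction and strengthen the system's model of its own state.

def run_loop(inputs, predict, tolerance=0.5):
    model_bias = 0.0
    self_model = []                      # record of noticed-and-corrected bugs
    for x in inputs:
        expected = predict(x) + model_bias
        deviation = x - expected
        if abs(deviation) > tolerance:   # a "bug": expectation clearly violated
            model_bias += 0.5 * deviation    # reflective correction step
            self_model.append(round(deviation, 2))
    return model_bias, self_model

# Inputs near 1.0 pass automatically; the jump to ~3.0 forces two reflections.
bias, log = run_loop([1.0, 1.1, 3.0, 3.1], predict=lambda x: 1.0)
```

The log is the point of the sketch: the self-model grows only at the moments of violated expectation, mirroring the claim that consciousness is tempered by generating and correcting bugs rather than by their absence.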
In summary, we have constructed a consciousness model based on semantic generation: the DIKWP semantic network achieves the representation of its own processing through its own closed-loop interaction and continuous deviation correction, thereby generating consciousness. This model unifies semantic mathematics and neural implementation: semantics is the core and matter is the medium; consciousness is the emergence of the semantic network, and neural activity is the reflection of this process. It reminds us that when exploring consciousness and intelligence, we should pay more attention to semantic organization and dynamics, rather than just seeking answers in physiological structures.
6. Conclusion
Based on Professor Yucong Duan's "semantic mathematics" framework and "consciousness BUG theory", this paper proposes a consciousness model centered on semantic generation and systematically discusses the relationship between the five-layer DIKWP structure and neurophysiological mechanisms. We start from the three basic semantics of "same", "different", and "complete" to explain the semantic connotation and generative logic of the data, information, and knowledge layers, and then show how wisdom and intention semantics emerge on this basis, enriching the semantic system through value orientation and goal drive. Throughout, we strictly avoid using existing brain-area models as a priori explanations, insisting instead that semantics precedes physiology: the functional division of brain structures is interpreted in terms of the formation of meaning. This inverted causal perspective allows us to unify many neural phenomena in a cognitive sense, such as how top-down control and bottom-up alerting form a closed loop that supports continuous self-cognition.
Combined with the consciousness BUG theory, we reveal how the inherent imperfections of the cognitive system contribute to the birth of consciousness. Limited resources force the system to take shortcuts and abstract simplifications, so information is distorted and filtered; but precisely for that reason, the system must constantly detect and correct deviations, and in this process it develops a sense of its own state. We make this mechanism the core of our explanation of consciousness: consciousness does not come from perfect, error-free computation, but from the "self-reflection" of a system striving to maintain semantic integrity amid imperfection. The self-interactive network structure of the DIKWP*DIKWP semantic space provides a way to achieve this self-reflection: 25 interwoven semantic transformation modules make the cognitive system a closed semantic loop that can represent and correct itself. The resulting subjective experience is precisely the macro-manifestation of this highly integrated semantic network.
This research has several implications for artificial intelligence and cognitive science. First, it emphasizes that artificial consciousness cannot be built solely by increasing computing power or simulating biological neuronal connections; we should instead focus on designing semantic generation and self-feedback mechanisms, allowing the system to make "small errors" and self-correct, thereby approaching the style of human thinking. Second, it provides an analytical framework that maps complex brain activity patterns onto understandable semantic processes, which helps explain consciousness-related neural data. Third, it combines subjective and objective perspectives: the theory of consciousness relativity reminds us of the importance of the observer's standpoint, while the BUG theory stresses the importance of the system's own limitations, which is instructive for understanding how different intelligent agents (including humans and AI) recognize each other's consciousness.
In short, the combination of semantic mathematics and the consciousness BUG theory paints a picture for understanding the nature of consciousness in which semantics leads and matter serves. In this picture, meaning is not a by-product of brain activity but its guiding factor; the physiological structure of the brain follows the evolution of semantics in order to capture, integrate, and create meaning. Such a perspective may lead us beyond the limitations of traditional disciplinary paradigms toward a deeper understanding and mastery of consciousness and intelligence.
References
[1] Yucong Duan et al. DIKWP semantic mathematical model and multi-field application exploration. Sina Finance, 2023.
[2] Yucong Duan, Guo Zhendong, et al. Semantic reconstruction of energy infrastructure in the AI era. Theoretical Philosophy Report, 2025.
[3] Yucong Duan, Guo Zhendong, et al. Integration of consciousness relativity and BUG theory based on the mesh DIKWP model. Technical report, 2025.
[4] Yucong Duan, Guo Zhendong, et al. Research on artificial consciousness system based on DIKWP model. Technical report, 2025.
[5] Yucong Duan. Understanding the Essence of "BUG" in Consciousness. ResearchGate Preprint, 2024.

