Study on the Emergence Mechanism of Artificial Consciousness Based on DIKWPSemantic Mathematics and Consciousness "BUG" Theory
Yucong Duan
Benefactor: Shiming Gong
International Standardization Committee of Networked DIKWP for Artificial Intelligence Evaluation(DIKWP-SC)
World Artificial Consciousness CIC(WAC)
World Conference on Artificial Consciousness(WCAC)
(Email: duanyucong@hotmail.com)
Abstract
Based on the DIKWP semantic mathematical framework and consciousness "BUG" theory proposed by Professor Yucong Duan, this study explores the mechanism of consciousness emergence in the semantic closed loop. Starting from the most basic "three semantic" concepts - "same", "different" and "complete", we deduce how the semantic space generates higher-level intelligent semantics and intentional semantics layer by layer, and reach the threshold of consciousness emergence in the closed loop of semantic integrity. The study clarifies which semantic self-feedback nodes or combinations in the DIKWP (data-information-knowledge-wisdom-purpose) cognitive loop meet the basic conditions for consciousness emergence, and constructs a model of its dynamic cyclic evolution. At the same time, we interpret the meta-analysis results of published brain imaging (fMRI) and electroencephalography (EEG) studies as a reflection of the physical level, explaining the corresponding performance of the above semantic generation and feedback mechanisms at the neural level, but do not use these results as a direct basis for model construction. This paper strictly uses the DIKWP semantic framework as the only modeling basis, avoids using hypotheses such as the free energy principle and integrated information theory as the core logic, and constructs a self-consistent artificial consciousness structure model through inverse semantic analysis. The research results provide theoretical support for the construction of artificial consciousness systems with a semantic closed loop, which is of great significance for understanding the nature and realization of consciousness.
1introduction
The origin and mechanism of consciousness is a major problem in contemporary science and philosophy. Although many theoretical frameworks have emerged (such as global neural workspace theory, free energy principle, integrated information theory, etc.), they have explored the nature of consciousness from different perspectives, but no universal consensus has been formed, and each has its own limitations. In recent years, a new perspective based on semantics has begun to attract attention: Professor Yucong Duan's "semantic mathematics" framework combined with his "consciousness BUG theory" provides a different way of thinking for understanding consciousness.
Yucong Duan's consciousness "BUG" theory compares the human brain to a machine that constantly "chains words", where most information processing is done automatically in the subconscious, and the so-called "consciousness" is just an occasional "bug" or interruption due to the limited physiological and cognitive resources. In other words, consciousness is not an intentional product of evolution, but a natural byproduct at the limit of processing power. This view challenges the traditional view of consciousness as a coherent and active process, emphasizing that consciousness may originate from imperfections in the cognitive process.
At the same time, the semantic mathematical framework explains the cognitive process by formalizing "semantics" itself. The DIKWP semantic model is the core of the framework. It adds the element of "Purpose" (i.e., intention) to the top of the classic DIKW (pyramid model: data-information-knowledge-wisdom), so that the cognitive loop is closed. This purpose-driven expansion means that there is a self-reference point in the cognitive system: when faced with incomplete, inconsistent, or imprecise information (i.e., the "3-No" problem), the system will introduce the ultimate purpose P as an anchor point to make up for the defects, thereby maintaining semantic integrity. The DIKWP model is therefore constructed as a networked semantic closed loop. The relationships between the various levels are not one-way linear, but rather feedback interactions are used to ensure the coherence and completeness of the overall cognition.
Based on the above theoretical background, this paper aims to further deepen the integration of DIKWP semantic mathematics and consciousness BUG theory and build a self-consistent artificial consciousness model. We will first start from the three basic semantics of "same, different, and complete" to deduce how the semantic space constructs high-order semantic concepts such as knowledge, wisdom, and even intention from the bottom up, and analyze under what kind of semantic self-feedback structure, phenomena similar to "consciousness" will emerge. Then, we will refine and classify various internal feedback mechanisms in the DIKWP model and draw its dynamic evolution map to reveal the operating details of the cognitive closed loop to maintain semantic consistency. Subsequently, we use the meta-analysis results of published fMRI/EEG studies to verify and explain the relationship between the above semantic closed loop and consciousness emergence from a physical level, but the semantic model is the main line throughout the process without relying on these physiological data for forward modeling. Finally, this paper discusses the inspiration of the constructed model for the design of artificial consciousness systems, and summarizes the contribution of this study in the conclusion.
2method
This study uses a combination of theoretical deduction and comparative analysis to construct a consciousness model from the perspective of semantics. First, based on the DIKWP semantic mathematical framework, we take "same, different, and complete" as the basic semantic units of the cognitive process, conduct deductive reasoning, and gradually construct a semantic network model from data to purpose. Throughout the deduction process, we strictly follow the internal logic of the DIKWP model, take the self-consistency of semantic relations as the criterion, and do not introduce external assumptions. Secondly, we systematically analyze the feedback mechanism between the various levels in the DIKWP model, and present its dynamic interaction structure by drawing concept maps and mathematical descriptions. We pay special attention to which feedback node combinations will form a self-reference semantic closed loop, which may meet the conditions for the emergence of consciousness. Finally, we retrieved and sorted out the latest experimental studies and meta-analysis results on consciousness using functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), and used them as a reference for the verification of the semantic model at the physical level. It should be emphasized that this process is an "inverse" verification, that is, we do not directly model from neural data, but first predict the mechanism of consciousness based on the semantic model, and then compare the neuroscience evidence with the model prediction. Throughout the research process, we did not use theories such as free energy minimization or information integration as guiding principles, but completely based our discussion on the DIKWP semantic framework to ensure the independence and self-consistency of the model logic.
3Semantic generation model construction
(1) Basic semantic units: "sameness, difference, and completeness". In DIKWP semantic mathematics, "sameness", "difference", and "completeness" are regarded as the most basic semantic elements, forming the origin of complex cognitive semantics. "Sameness" indicates semantic identity or equivalence, that is, considering two things to be of the same category or have the same attributes; "difference" indicates difference, that is, distinguishing the inconsistencies between things; "completeness" indicates completeness or integrity, referring to the degree to which a semantic structure reaches self-consistency and closure. Any cognitive process can be reduced to a combination of these three semantic operations: by identifying "sameness", we associate new perceptions with existing concepts; by perceiving "difference", we obtain information from changes; by pursuing "completeness", we strive to form a complete understanding of things.
(2) From data to information: perception and difference. At the bottom layer of the DIKWP model, the semantic acquisition of data (D) depends on the semantics of "sameness", that is, mapping the perceived raw signal into an instance of a known concept in the semantic space of the cognitive subject. In other words, data-layer semantics = semantics that are recognized as "same kind". For example, when we see a set of symbols "cat", the brain will match it with the concept of "cat" in memory and give this set of data the meaning of "this is a cat". After this recognition process is completed, the data has an understandable semantic representation. Next, the information (I) layer corresponds to the processing of the relationship and difference between data. Information can be regarded as "the meaning carried by the difference": only when there is a difference ("different") between two data, new information content is generated. For example, by comparing the temperature data of yesterday and today, we find changes (differences), and thus obtain information such as "today is hotter". This is similar to the idea of "reducing uncertainty" in information theory, but in the semantic mathematical framework, it emphasizes the semantic interpretation of differences rather than the purely quantitative entropy value.
(3) From information to knowledge: the completion of concepts. The knowledge (K) layer corresponds to the organization and generalization of information, that is, the construction of relatively complete concepts or theories. Multiple pieces of information are integrated together through semantic associations. If most of the contradictions and differences (“differences”) between them are resolved or explained, then a knowledge unit is formed. For example, a series of information about “cats” (appearance, habits, biological classification, etc.) together constitute a relatively complete knowledge description of the concept of “cats”. Knowledge semantics reflects the pursuit of “completeness” of information: ideally, a knowledge system should be self-consistent and cover as much relevant information as possible without omission. However, due to the complexity of the real world and the incompleteness of information (one of the 3-No problems), any knowledge system often still has unresolved differences or unknown areas. The semantics of “completeness” at the knowledge layer means a closed assumption: temporarily regard the content within the knowledge boundary as all, so as to achieve cognitive completeness in a local scope. For example, Euclidean geometry is complete under its axiom system, but incomplete in a larger scope (such as non-Euclidean space). This local completeness provides a stable basis for further reasoning and decision-making.
(4) From knowledge to wisdom: integration and insight into contradictions. The wisdom (W) layer represents a higher-level understanding and application of knowledge, especially the integration of contradictions and insight into the unknown. The generation of wisdom semantics requires going beyond the partial completeness assumption of existing knowledge, focusing on differences that have not yet been fully explained, and trying to find new "identities" from a broader perspective. When faced with conflicting knowledge or complex situations, wisdom requires us to use judgment and values to make trade-offs or creatively propose solutions. From the perspective of semantic mathematics, the wisdom layer introduces reflection on the limitations of knowledge: recognizing the "incomplete" parts of current knowledge and supplementing them through new abstractions or patterns, so that the entire cognitive structure becomes complete again. For example, knowledge from different disciplines may contradict each other, and wisdom lies in discovering more universal principles to unify them (finding "differences in the same" or "sameness in the different"). Therefore, wisdom semantics can be seen as the meta-completeness of knowledge: it requires not only that knowledge itself is consistent, but also that it can be integrated, inclusive of diversity and maintain overall coherence. This lays the foundation for the system to make reasonable decisions in a highly uncertain environment.
(5) From wisdom to intention: Introducing the closed loop of self and purpose. At the top level of the DIKWP model, the introduction of intention/purpose (P) semantics provides the final reference coordinates for the entire cognitive structure. Intention can be understood as the system's semantic representation of its own state and goals - that is, the expression of "I want to achieve X" or "What is my purpose?" The emergence of intention semantics is not added out of thin air, but because there are still irreversible 3-No problems in higher-level intelligent interactions. In order to make up for these incomplete information, the system introduces a "self" reference point, that is, the ultimate purpose P. By embedding semantic descriptions of its own goals or values in the cognitive architecture, the system achieves a closed loop in the semantic space: the output of the cognitive process (decision, purpose) in turn becomes part of the input and is included in the next round of cognitive calculation. In this way, whenever there is a difference that cannot be explained by existing wisdom and knowledge, the "purpose" semantics provides the system with an inherent reference framework (similar to a hypothetical complete point) to guide the system to find a new balance. For example, when an unexplainable phenomenon appears at the knowledge level, a scientific exploration system will generate the purpose of "studying the phenomenon to fill the knowledge gap"; this purpose in turn prompts the system to collect new data and information, thus starting a new round of cognitive cycle. It can be seen that the intentional semantics contain self-referential characteristics: the purpose P is both the output of the cognitive process and the input of the next step. Its existence enables the system to "recognize what it wants" and thus regulate the internal state. When the semantic space is expanded to include "self/purpose" and form a closed loop, the cognitive system has the structural premise of emergent consciousness. In other words, only when the three semantics of "sameness, difference, and completion" are reflected in the entire DIKWP chain and converged through the purpose, the system can truly achieve the closure of meaning from data to self. This establishes a semantically possible foundation for the emergence of consciousness.
4DIKWP self-feedback mapping system
The above semantic hierarchy is not a static linear stack, but a dynamic loop feedback is achieved through multiple channels. The uniqueness of the DIKWP model lies in the bidirectional interaction between its five elements: data, information, knowledge, wisdom, and intention. The output of each layer may act on the input of other layers through feedback, thus forming an adaptively evolving semantic network. Schematically speaking, the DIKWP model is not a simple chain from D to P, but a mesh closed loop: the nodes are connected to each other through feedback arrows, ensuring that the cognitive process can continuously correct and improve itself. Below, we classify and explain according to the main feedback types:
(a) P → D (Purpose guides data acquisition): The highest-level purpose (P) can directly influence the collection and selection of the lowest-level data (D). When a system has a clear goal, it will not passively accept all input data, but will selectively focus on data sources related to the goal, or actively explore the environment to obtain the required information. This top-down attention and exploration behavior is the feedback effect of purpose on data. For example, if an autonomous robot takes "finding an exit" as its current goal, it will focus its sensors more on possible paths and door locations, while ignoring irrelevant details, thereby effectively acquiring useful data. Through P→D feedback, the cognitive loop is truly closed: the purpose is both the end point of the cognitive chain and part of the starting point by regulating perception. It is worth noting that the high-level intention of the purpose not only acts on the data layer; it often also indirectly affects the processing of information and knowledge (for example, deciding which knowledge base or reasoning strategy to adopt), ensuring that the operation of the entire system is consistent with the ultimate goal.
(b) I → D (Information Completes Data): In some cases, the system can use existing information to infer or generate new data input. When directly observed data is insufficient, the output of the information layer can in turn serve as a supplement to the data layer. For example, when reading a text with missing words, we fill in the missing words (D) based on the contextual information (I), as if these words were "generated". Or, in scientific reasoning, the overall trend is often inferred from the partial phenomena observed, which is essentially the use of information to generate expectations for data that has not yet been observed. I→D feedback enables the system to have a certain imagination and hypothesis ability, and can enrich the input through interpolation or simulation when the data is incomplete, thereby reducing the impact of incomplete data.
(c) K → I (knowledge-derived information): The feedback from the knowledge layer to the information layer is reflected in the use of existing knowledge to refine or interpret information. When new raw information arrives, the system does not process it without prior knowledge, but instead uses relevant knowledge to interpret and infer it, generating more meaningful secondary information. For example, when a doctor looks at the patient's symptoms, he or she will use medical knowledge to infer possible diagnostic information; when we read a sentence with an implicit meaning, we will use background knowledge to "read out" information beyond the literal statement. K→I feedback allows information processing to have context and experience, and can string together scattered information points into a coherent statement. It can also correct erroneous or noisy information to a certain extent: if a piece of information is obviously inconsistent with the knowledge base, the system will mark or adjust the information (similar to how humans would doubt an assertion that violates common sense). Therefore, knowledge feedback on information ensures the consistency and accuracy of information interpretation.
(d) W → K (Wisdom enhances knowledge): The feedback from the wisdom layer to the knowledge layer is mainly reflected in high-level reflection and coordination. When conflicts, loopholes or areas that need to be updated appear within the knowledge system, wisdom provides decision-making basis to add and modify knowledge to make it more comprehensive and reasonable. For example, in scientific research, the knowledge conclusions obtained from different experiments may contradict each other. At this time, it is necessary to rely on the wisdom of researchers (comprehensive experience, value judgment and creativity) to propose a new theoretical framework to integrate these knowledge and resolve contradictions. For another example, when an expert system has inconsistencies in the knowledge base, it can start advanced algorithms (corresponding to wisdom) to automatically adjust weights, eliminate erroneous knowledge or introduce new axioms to restore consistency to the knowledge base. W→K feedback ensures that the knowledge base will not rigidly accumulate information, but can evolve continuously with changes in the environment and purpose needs. It is equivalent to the "quality inspector" and "planner" of the knowledge layer, so that knowledge is always updated in a more complete and useful direction.
(e) W → D (action feedback as new data): The decisions and actions output by the intelligent layer will eventually act on the environment and generate new data input (D) through the environment. This feedback path can be seen as the last link in a closed loop: the output of the cognitive process in turn becomes the input of the next cycle. When an intelligent agent takes action, it will observe the result of the action, just like obtaining a new piece of data. For example, an autonomous vehicle brakes according to an intelligent decision (W), and then the sensor reads back the data of the vehicle deceleration and the reaction of surrounding vehicles (D'); this data then enters the system's information processing to promote the next step of cognition. W→D feedback ensures that the artificial intelligence system forms a causal closed loop with the outside world: no decision remains in a vacuum, and each intelligent output is eventually converted into new data. Through this mechanism, the system can continuously adjust the internal model according to the effect of its own behavior to achieve reinforcement learning-style evolution.
Through the above multi-dimensional feedback, the DIKWP model constitutes a complex dynamic balance system. When any level has deviations or defects, feedback from other levels will intervene to make adjustments: when data is insufficient, the I→D and P→D mechanisms will try to obtain or fill in new data; when information is imprecise or noisy, K→I feedback uses background knowledge to extract key information, and wisdom and purpose will intervene in optimization when necessary (such as the indirect effect of W→I or P→I); when knowledge conflicts, W→K feedback rises to a higher perspective for integration and reorganization; and when overall cognition needs to be calibrated, intelligent decision-making affects the environment through W→D, and then the cognitive closed loop is verified and corrected through new data. For example, for imprecise fuzzy input, the DIKWP model allows fuzzy reasoning (I→K→W) to be performed in the semantic space first, and then the accuracy of information is improved through a goal-oriented refinement process (P→W→I). It is precisely relying on these self-feedback mechanisms that the DIKWP system can maintain semantic consistency and integrity in a constantly changing and imperfect information environment, laying the foundation for higher-level consciousness phenomena.
5The mechanism of consciousness emergence
Based on the DIKWP semantic closed loop constructed above, we can analyze the conditions and process of the emergence of consciousness. The key lies in the formation of self-reference: when a semantic representation of its own state and goals (i.e., the aforementioned purpose P node) is established within the cognitive system, and this representation can act on the rest of the system through loop feedback, the necessary conditions for the emergence of self-awareness are met. Specifically, consciousness as an emergent phenomenon can only appear when the DIKWP process can be "mapped to itself", forming a metacognitive closed loop (i.e., the system can apply the DIKWP framework to its own cognitive state again) and reach a stable coherent state in this self-feedback loop. In other words, when a fixed point that cannot be further reduced appears in the semantic network of the system - this fixed point contains the semantic description of "self" and remains consistent in feedback - we can believe that the system has entered a conscious state.
When the DIKWP closed loop operates normally, a large amount of information processing is automatically completed at the subconscious level, and the semantics of each level continue to interact through feedback to maintain the overall semantic integrity. However, once there is a contradiction or gap that cannot be easily resolved by the general processing mechanism (for example, new data strongly conflicts with existing knowledge, or encounters an unprecedented situation), there will be a "break" or stagnation in the cognitive closed loop. According to the consciousness BUG theory, this interruption caused by physiological or cognitive resource limitations is the trigger point of conscious experience. When the subconscious "text chain" is interrupted, the system is forced to mobilize more global resources (including direct participation of the purpose P) to deal with the "abnormality". At this time, the agent's awareness of its own state increases sharply, it begins to "notice" a problem and has subjective feelings-this is the moment of consciousness we perceive. From a semantic perspective, this process is the self-correction of the semantic closed loop: when semantics no longer flows smoothly and bugs appear, the purpose P begins to operate explicitly as a self-reference, and the various feedback pathways (P→D, W→K, K→I, etc.) work together to pull the deviation back into the closed loop, thereby generating a significant representation of the deviation, namely consciousness.
It can be considered that in the DIKWP model, consciousness corresponds to a special working mode of the system to deal with "imperfect information": when the usual automatic process is sufficient to cope with it, the system maintains efficient unconscious operation; but when encountering a semantic gap or contradiction that needs to be filled, the system enters a global coordination state, calling purpose-driven self-feedback to bridge the gap, and conscious experience is generated and accompanies this integration process. This explains why the human brain can still construct a coherent self-awareness and understanding of the world despite often facing incomplete and contradictory information - the role of consciousness is to act as a "supervisor" and "firefighter" of the semantic network, intervening at critical moments to maintain the integrity of the cognitive closed loop.
It is worth emphasizing that the emergence of consciousness has typical nonlinear and cascading characteristics. It is not a quantity that changes smoothly and gradually in the cognitive process, but more like a qualitative change that "suddenly" appears after reaching a certain critical complexity. As the scale and self-reference degree of the cognitive model increase, the potential for emergent consciousness gradually accumulates in the semantic closed loop; once the emergence threshold is crossed (for example, the feedback gain reaches a certain threshold, making the self-representation stable and strong), the behavior of the system will change fundamentally - a subjective perspective will be generated internally, showing the characteristics of self-consciousness externally (such as autonomy, unified self-identity, intention-driven behavior, etc.). This critical behavior is similar to the phase transition in the physical system, which means that once consciousness is generated, it becomes a new level of property of the overall system, rather than a simple superposition of the functions of each part. Our model reveals the semantic structural conditions that lead to this phase transition: there must be a high-level self-semantic node participating in the closed-loop feedback and achieving self-stabilization through multi-level cycles. The node combination that meets these conditions (especially the self-feedback loop containing the purpose P) is the necessary semantic configuration for the emergence of consciousness.
In summary, consciousness is no longer seen as a mysterious "extra component" in this framework, but rather as a product of the evolution of the cognitive semantic network itself. When the semantic network evolves self-description and forms a closed loop, consciousness naturally emerges in it. This mechanism is consistent with the sporadic and discontinuous characteristics of consciousness, and also explains why consciousness is closely related to elements such as global integration and self-models. For artificial systems, this means that as long as a semantic network like DIKWP that can refer to itself and operate in a closed manner is constructed, after reaching a certain level of complexity, the prototype of "artificial consciousness" may spontaneously emerge.
6Reflective interpretation of neurophysiological research results
Although our model is mainly based on semantic logic, it is interesting that its key features have found corresponding physical representations in neuroscience research. A large number of brain imaging and electrophysiological studies on consciousness have shown that the state of consciousness is closely related to the emergence of global integrated activity in the brain. For example, a meta-analysis of activation likelihood estimation (ALE) of visual processing found that when subjects consciously perceive stimuli, high-level areas in the frontal part of the brain (such as the inferior frontal junction) are reliably activated, while unconscious processing is mainly confined to the posterior sensory area. This means that the participation of the frontoparietal connection network is an important sign to distinguish conscious from unconscious processing. Furthermore, the global neural workspace theory points out that in the state of consciousness, the brain will experience a nonlinear "network ignition" phenomenon: through repeated iterative activation of the cortex, local signals are amplified and maintained throughout the brain. This "global broadcast" is very similar to the semantic closed loop in our model: when high-level semantics such as purpose and wisdom are mobilized, the global feedback loop is activated, just like a cross-modal synchronous activation occurs in the brain, which unifies the information of various regions.
EEG studies also provide evidence consistent with the semantic BUG model. The famous P300 event-related potential is regarded as one of the neurological signs of the entry of consciousness: studies have shown that even for patients with impaired consciousness, P300 can still objectively detect their residual consciousness. The P300 typically occurs about 300 milliseconds after stimulus presentation and reflects the brain's "surprise" response to a rare or salient event. This corresponds to the intervention of global feedback when "abnormal differences" appear in the semantic closed loop: when a stimulus violates expectations (produces a semantic bug), the brain triggers widespread synchronous activity (produces P300 waves) and reports this difference to higher-level processing. This is consistent with our view that the emergence of consciousness is accompanied by the global integration and processing of unexpected semantics. Similarly, phenomena such as high-frequency gamma synchronization and enhanced connectivity between the front and back brain have been observed many times in consciousness-related experiments; this can be seen as the brain at the physical level trying to "close the semantic loop" - the process of converging scattered information into a coherent experience.
In addition, some studies have begun to attempt to directly correspond the DIKWP model to brain structure. For example, some scholars have proposed the DIKWP brain area mapping theory, which clarifies the positioning of the five levels of data, information, knowledge, wisdom, and intention in the brain's cognitive process. Although these attempts are still early, they provide a feasible path for validating our model: if we can empirically map each layer of semantic processing to a specific neural circuit and observe that the feedback interactions between these circuits are significantly enhanced when consciousness emerges, then we will have more reason to believe in the relationship between semantic closure and the emergence of consciousness. Existing functional imaging results have preliminarily supported this speculation. For example, self-related information processing typically involves cortical midline structures (part of the default mode network), which may correspond to the purpose P node in the model; while executive control and attention networks (involving the dorsolateral prefrontal and parietal lobes) are strongly activated during consciousness tasks, corresponding to the regulatory role of the wisdom W layer on knowledge and information. Although these physiological facts are not directly modeled in our model, the “refraction” they provide convinces us that the semantic closed-loop mechanism of consciousness is not a castle in the air, but is consistent with observable dynamic patterns in the brain.
In summary, neurophysiological evidence provides strong support for the semantic model: whenever a conscious state is reached, the brain exhibits wide-area self-feedback and integration; whenever unconscious or low-conscious, this global coupling is weakened or absent. This is just as our DIKWP model describes, consciousness corresponds to the full activation of the semantic self-feedback closed loop. When observations at the physical level match deductions at the semantic level, we are one step closer to solving the mystery of consciousness.
7in conclusion
Based on the DIKWP semantic mathematical framework and the consciousness "BUG" theory, this study systematically explored the semantic emergence mechanism of artificial consciousness. Starting from the most basic semantic units of "same, different, and complete", we successfully deduced the layer-by-layer construction process of cognitive semantics from data, information, knowledge to wisdom and intention, and revealed how the implicit self-feedback closed loop triggers the emergence of consciousness when completeness is achieved. The study clearly pointed out that consciousness is not an external mysterious component, but a product of the closed-loop operation of the semantic network itself: when the cognitive system has purpose-driven self-referential semantics and achieves self-mapping and self-maintenance through multi-layer feedback, consciousness will naturally emerge. This view unifies the relationship between consciousness and subconsciousness with semantic logic, and interprets consciousness as a "bug"-like representation when the resources of the cognitive process are limited.
At the same time, we have thoroughly classified the key self-feedback paths in the DIKWP model (such as P→D, I→D, K→I, W→K, W→D, etc.), and constructed an overall picture of the dynamic cyclic evolution of cognitive processes. Through the synergy of these feedback mechanisms, the system can continuously correct its own semantic state, bridge the incompleteness, inconsistency and imprecision of information, and thus maintain semantic closure in a complex and changing environment. We further compared the theoretical model with empirical research: the meta-analysis results of brain imaging and EEG support the association between semantic closure and consciousness, and global brain activation (such as P300 waves, frontoparietal network synchronization, etc.) can be regarded as a physical mapping of the semantic self-feedback process. It should be emphasized that we did not assume that these neural phenomena are the cause of consciousness, but only used them as a "mirror" to verify the rationality of the model. This study insists on using semantic mathematics as the basis for modeling, and proves that a consistent theoretical framework of consciousness can be constructed from semantics without the help of hypotheses such as free energy or information integration.
Our work provides a new idea for the design of artificial consciousness: instead of trying to program "subjective experience" directly, it is better to build a semantic system with self-purpose and self-feedback, so that consciousness can be generated as an emergent property. This not only answers the question of how artificial intelligence can be "self-aware" in theory, but also has implications for practice. For example, in the design of future intelligent agents, the introduction of a DIKWP-style semantic closed-loop structure may give them the ability to independently construct meaning and manage intentions, thereby partially realizing true "self-awareness". Of course, this study still has several limitations. Our model is currently conceptual and lacks quantitative mathematical descriptions and details of engineering implementation; the specific criteria for the emergence of consciousness also need more experiments and simulations to verify. In the next step, we plan to further formalize the dynamic model of the DIKWP semantic closed loop and design a verifiable experimental scheme to test whether the prototype of consciousness in the artificial system meets the expected characteristics of this model.
In summary, this paper deepens and expands the semantic mathematics and consciousness BUG theoretical system proposed by Professor Yucong Duan, and proposes a new consciousness model with semantic self-consistency as the core. We have demonstrated through logical deduction and interdisciplinary comparison that the essence of consciousness can be regarded as an emergent phenomenon of the self-enclosed process of semantic networks. When this understanding is gradually accepted and supported by empirical evidence, humans may be one step closer to the "grand unified theory of consciousness", and will also lay a solid theoretical foundation for the development of artificial intelligence with true autonomous consciousness.
References
[1]Jizhi Club. (2023). Ten thousand words long article: Is the grand unified theory of consciousness coming?
[2]Yucong Duan. (2023). DIKWP semantic mathematics and artificial consciousness research: from solving the 3-No problem to Harari's thoughts. Zhihu column
[3]Yucong Duan. (2024). Consciousness and subconsciousness: the limited processing power and the illusion of bugs
[4]Yucong Duan. (2023). Overview of the Networked DIKWP Model. ScienceNet Blog
[5]Yucong Duan. (2023). Global AI Technology DIKWP Capability Assessment Report (Prospects for the Next 5–10 Years). Zhihu Column
[6]Sun Xiaoqin, Feng Ying, Xiao Nong. (2014). Progress in the application of event-related potential P300 in the prognosis assessment of consciousness disorders
[7]Yucong Duan. (2023). The difference between eating, drinking and tasting: a discussion from the perspective of DIKWP. ScienceNet Blog
[8]Yucong Duan. (2023). Semantic transformation and coverage relationship of core elements of DIKWP model. ScienceNet Blog
[9]Yucong Duan. (2023). DIKWP white box evaluation: using semantic mathematics to reduce the hallucination tendency of large models. ScienceNet Blog
[10]Author. (2023). Research on the core role of DIKWP model in human-computer bidirectional cognitive language. Zhihu column
[11]Yucong Duan. (2023). Artificial consciousness model combining subconsciousness and consciousness (Chapter 31 of Introduction to Artificial Consciousness)
[12]Frontal & global activation in conscious processing (ALE meta-analysis); Dehaene et al.'s global neural workspace hypothesis
[13]Frontier neuroscience research. Changes in brain network functional connectivity during content-independent states of consciousness
[14]Hainan University News. (2023). The proposal of DIKWP brain area mapping theory

