AGI Evaluation DIKWP Laboratory (通用人工智能AGI测评DIKWP实验室)
2025-11-07





Semantic Construction of the "Self" Mechanism in Human and Artificial Consciousness: A Study Based on the Reticulated DIKWP Model

Yucong Duan
Benefactor: Shiming Gong

International Standardization Committee of Networked DIKWP for Artificial Intelligence Evaluation (DIKWP-SC)
World Artificial Consciousness CIC (WAC)
World Conference on Artificial Consciousness (WCAC)
(Email: duanyucong@hotmail.com)





Abstract
Based on the networked DIKWP model (Data–Information–Knowledge–Wisdom–Purpose) proposed by Professor Yucong Duan, this paper conducts an in-depth semantic analysis of how the "experiencing self" and the "narrative self" described in Homo Deus: A Brief History of Tomorrow are formed in natural and artificial consciousness. We first reconstruct the formation paths of the experiencing self and the narrative self using the definitions of DIKWP semantic mathematics, and explain the semantic transformation mechanisms of these two selves in the cognitive process. We then analyze their dominant transformation relationships within the 25 DIKWP*DIKWP semantic interaction modes (for example, the experiencing self is dominated by D→I, the narrative self by I→K), propose a variety of self types dominated by different semantic paths (such as the "emotional self", "social self", and "knowledge self"), and exhaustively explore the possible paradigms of different concepts of "self". Subsequently, we use information modeling and semantic computation methods to model each type of self in detail, including its projection path in semantic space, its closed-loop control mechanism, and its generation and feedback paths. We further introduce Professor Duan's consciousness "Bug" theory and his subject-object paradox viewpoint to analyze mechanism defects (such as memory bias and cognitive blind spots) in the formation of subjective semantics, as well as its capacity for evolution. Through this analysis, the paper constructs a unified semantic modeling framework for the "self" mechanism in human and artificial consciousness, providing theoretical support for brain-like intelligence models, artificial self-consciousness generation systems, and research on cognition across semantic systems.
The analysis shows that the networked DIKWP model can effectively characterize the semantic construction rules of various types of "self", revealing the commonalities and differences between human and machine self-consciousness at the semantic level. Finally, the paper outlines future research directions and application prospects based on this framework.
1 Introduction
Humans have long sought to know and define the "self". In psychology and philosophy, the self is usually viewed as the collection of an individual's cognitions and subjective experiences of himself or herself. The self, however, is a multi-layered, multi-faceted concept, encompassing both the "experiencing self" of momentary feeling and the "narrative self" of coherent narration. Yuval Noah Harari pointed out in Homo Deus: A Brief History of Tomorrow that we actually have two selves: the experiencing self and the narrative self. The experiencing self is the self that focuses on the present moment and lives through each moment as it happens. The narrative self is composed of memories and self-narration; it continuously weaves experiences into a personal history in the form of stories. Research shows that the narrative self is often dominant: it selectively records and reconstructs experiences, not necessarily faithfully to how things actually happened, but "telling stories" in accordance with its self-perception. Such narration often follows psychological regularities such as the "peak-end rule": the narrative self tends to remember the peak and the end of an experience while ignoring the intermediate process, resulting in a distorted but coherent memory of the experience as a whole.
Corresponding to the human self, the field of artificial intelligence is beginning to explore the possibility of "artificial self-awareness". With the development of brain-like intelligence and autonomous agent systems, a key issue is whether machines can have a "self" mechanism similar to that of humans, and how to use a unified model to describe the self-cognition process of humans and artificial beings. Traditionally, people describe the cognitive process through the data-information-knowledge-wisdom (DIKW) hierarchical model, but this model is linear and lacks the characterization of the purpose of consciousness. The DIKWP model proposed by Professor Duan Yucong adds the "Purpose" layer to the DIKW model and expands the hierarchical structure into a mesh interaction structure. This model attempts to represent the cognitive and conscious processes in a formalized semantic mathematical framework, which can provide a powerful tool for the study of artificial consciousness. In particular, the DIKWP model defines 25 cross-layer semantic conversion modes (DIKWP*DIKWP), covering all possible two-way flows from data to intention, and provides a systematic set of semantic paths for analyzing complex self-formation mechanisms.
In order to graft the concepts of human experiential self and narrative self onto the artificial consciousness model, this paper uses the DIKWP semantic mathematical method to reconstruct and analyze the formation mechanism of the self. On the one hand, we map the experiential self and narrative self onto different semantic paths of the DIKWP model to explore their respective semantic composition and transformation rules; on the other hand, we expand more types of "self" concepts based on different semantic dominant paths and analyze how these semantic models of the self are reflected in artificial and natural consciousness. We also consider two important viewpoints in consciousness research: one is the consciousness "Bug" theory, which regards consciousness as a cognitive byproduct produced under limited resources and explains the root cause of irrational deviations in human self-consciousness; the other is the subject-object paradox, which points out that when the conscious subject tries to perceive itself as an object, it will inevitably produce cognitive biases. These two viewpoints will help us examine the potential defects and evolutionary capabilities in the formation of self-semantics. By integrating the above theories into the semantic analysis of the DIKWP model, we hope to build a unified framework to explain the self-mechanism in human and artificial consciousness. This will not only help to deepen the understanding of human self-consciousness, but also provide a reference for the development of brain-like AI systems with self-models. Below, we will first introduce the DIKWP model and related theoretical background, then conduct an in-depth analysis of the semantic construction of the experiential self and the narrative self, further expand the discussion to more types of self-semantic models, and finally look forward to the application of this framework in future artificial consciousness research.
2 Theoretical background
2.1 Cognitive distinction between the experiencing self and the narrative self
The distinction between the "experiencing self" and the "narrative self" stems from recognition of the dual nature of human subjective experience. Psychologist Daniel Kahneman proposed a closely related pair of concepts, the "experiencing self" and the "remembering self": the experiencing self attends to how we feel in the moment, while the remembering (narrative) self integrates and preserves our evaluation and story of the experience afterwards. The experiencing self lives in each moment, directly feeling stimulation, emotion and sensory input without much judging or organizing of these experiences; the narrative self, by contrast, is the "narrator" who connects experiences across time, selecting, interpreting and retelling them from memory to form a coherent self-story. For example, when we review an experience, the narrative self seldom replays the whole process second by second; instead it extracts the most prominent peak and the feeling at the end to represent the overall experience (the tendency revealed by the peak-end rule), thereby giving the whole experience a certain meaning or conclusion. This process of memory reconstruction means the narrative self is an active constructor: it processes the original experiential information according to the current self-concept and purpose, fills in forgotten details, and may even distort facts to maintain a consistent "self-story". The narrative self therefore often carries cognitive biases, yet it gives us a holistic understanding of, and a sense of continuity in, our life experiences.
The difference between the experiencing self and the narrative self has important consequences for happiness and decision-making. The experiencing self directly determines whether we feel pleasure or pain at this moment, while the narrative self determines how we evaluate an experience afterwards, which in turn shapes future choices. Inconsistency between the two often leads to the so-called "happiness paradox": what pleases the experiencing self in the moment may not satisfy the narrative self afterwards, and vice versa. For example, slightly extending a medical procedure while reducing its peak pain makes the patient's experiencing self endure pain for longer, yet the narrative self, influenced by the peak-end effect, judges the whole episode as not so bad and gives a better evaluation afterwards. This phenomenon shows that the self is not a single entity but is composed of different information-processing mechanisms: the experiencing self tends to process the sensory stream in real time and in parallel, while the narrative self tends to process semantics after the fact and sequentially.
2.2 DIKWP Model and Semantic Mathematics
The DIKWP model consists of five elements: data, information, knowledge, wisdom, and purpose. It is an expansion of the traditional DIKW (pyramid) model, in which the newly added "P" represents the purpose or intention in the cognitive process, making the model closer to cognitive subjects with autonomous goals (such as organisms or intentional AI). Professor Duan Yucong constructed the DIKWP model as a mesh semantic network, and the two-way flow and feedback of semantics between the five layers are realized through 25 interactive modules. These 25 DIKWP*DIKWP interaction modes cover the conversion relationship between any two layers, including the promotion from low to high (such as data → information, information → knowledge, etc.), the projection from high to low (such as wisdom → knowledge, knowledge → information, etc.), and the direct conversion across the middle layer. This comprehensive combination reflects the various possible paths in the cognitive process, making the DIKWP model different from the linear hierarchical model and a highly interconnected semantic space.
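The 25 DIKWP*DIKWP interaction modes described above are simply all ordered pairs over the five layers. A minimal Python sketch (the arrow notation is illustrative, not from the source):

```python
from itertools import product

# The five DIKWP layers, ordered from lowest to highest abstraction.
LAYERS = ["D", "I", "K", "W", "P"]

# All 25 DIKWP*DIKWP interaction modes: every ordered pair (X, Y),
# covering upward promotion (e.g. D->I), downward projection (e.g. W->K),
# cross-layer jumps (e.g. D->W), and within-layer transformations (e.g. K->K).
modes = [f"{x}->{y}" for x, y in product(LAYERS, LAYERS)]

print(len(modes))   # 25
print(modes[:5])    # ['D->D', 'D->I', 'D->K', 'D->W', 'D->P']
```

This makes concrete why the model is a mesh rather than a pyramid: no pair of layers is excluded from direct semantic exchange.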
Semantic mathematics is the formal basis of the DIKWP model: it defines mathematical representations and calculation rules for each DIKWP element and for the transformations between them. Within this framework, the cognitive process can be viewed as a series of formalized functional transformations. For example, the conversion of data semantics into information semantics can be represented as a function T_DI: D → I, whose input is the content semantics of the data layer and whose output is new semantics at the information layer. In general, a conversion function T_XY: X → Y (where X, Y ∈ {D, I, K, W, P}) represents the mapping from layer-X semantics to layer-Y semantics. These functions are driven by the "purpose" of the cognitive subject: they process the input semantics under the guidance of an intention and produce new semantics that serve the goal. In this way, DIKWP semantic mathematics makes explicit the reasoning steps that are usually implicit in cognition, making complex cognitive processes easier to analyze and monitor. For example, a goal-oriented comprehensive function can take the five elements D, I, K, W, and P as input and output a solution that satisfies a specific intention. If the output contains additional semantics that cannot be derived from the input, a semantic deviation or "hallucination" has been introduced; this deviation can be quantified as the difference between input and output. This formal approach allows us to state completeness and consistency requirements for semantic transformations: for example, an error term ε can be introduced to represent the part of the output that cannot be derived from the input, and optimization can drive ε toward zero to ensure that the cognitive process produces no unfounded components.
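The transformation functions and the deviation term ε can be sketched in code. This is a toy formalization only, assuming layer semantics are modeled as sets of atomic semantic units; the names `t_di` and `semantic_deviation` are illustrative, not defined in the source:

```python
# Toy model of T_XY: X -> Y with layer semantics as sets of atomic units.
# epsilon is the part of the output not derivable from the input.

def t_di(data: set) -> set:
    """Illustrative D->I transformation: label raw readings as information."""
    return {f"info({d})" for d in data}

def semantic_deviation(inputs: set, output: set, derivable) -> set:
    """epsilon = output units that the `derivable` predicate cannot trace to the input."""
    return {o for o in output if not derivable(o, inputs)}

data = {"37.8C", "sweating"}
info = t_di(data)

# Here every output unit wraps an input unit, so epsilon is empty:
eps = semantic_deviation(data, info, lambda o, ins: any(d in o for d in ins))
print(eps)  # set() -> no hallucinated semantics introduced
```

A nonempty ε under this check would correspond to the "hallucination" case in the text: output semantics with no basis in the input.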
The DIKWP model emphasizes semantic closure and feedback control. Because its five-layer structure is meshed, a high-level semantic result can feedback and affect the low-level process, and vice versa, thus forming a closed-loop regulation. For example, when applying DIKWP analysis in the artificial intelligence question-answering model, we can see such a complete semantic path: first, the question text is processed as data to extract key information (D→I), and then the information is synthesized into knowledge to understand the context of the question (I→K), and then it rises to the wisdom layer to identify higher-level meanings or patterns (K→W), and then this wisdom is connected with the intention behind the question (W→P), and finally, under the guidance of the intention, it goes back to the data layer to select the specific answer output (P→D). In this process, each step of the semantic transformation has a clear basis to ensure that the final answer is consistent with the initial semantic requirements and does not introduce irrelevant content. This mechanism shows that the DIKWP model ensures the semantic coherence and accuracy of the cognitive process through full-level semantic verification and feedback. For complex cognitive concepts such as "self", the DIKWP model provides a unified semantic analysis perspective, allowing us to track how information related to "self" flows and integrates between different levels and ultimately forms a self-representation in subjective consciousness.
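The question-answering closed loop above (D→I→K→W→P→D) can be sketched as a chain of named functions, so that each transformation step is explicit and checkable. All function names and the toy keyword logic are illustrative assumptions, not the source's implementation:

```python
# Schematic of the closed-loop QA path: D -> I -> K -> W -> P -> D.

def extract_info(question_data: str) -> dict:          # D -> I
    return {"keywords": question_data.lower().split()}

def build_knowledge(info: dict) -> dict:               # I -> K
    return {"context": "weather-query" if "weather" in info["keywords"] else "general"}

def derive_wisdom(knowledge: dict) -> str:             # K -> W
    return f"user needs {knowledge['context']} guidance"

def align_purpose(wisdom: str) -> str:                 # W -> P
    return "answer-concisely: " + wisdom

def emit_answer(purpose: str) -> str:                  # P -> D
    return f"[answer generated under purpose: {purpose}]"

answer = emit_answer(align_purpose(derive_wisdom(
    build_knowledge(extract_info("What is the weather today")))))
print(answer)
```

Because each step is a distinct function, a deviation check like the ε test above could be applied at every arrow rather than only at the end.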
2.3 Consciousness "Bug" Theory and Subject-Object Perspective
Before delving into the self mechanism, it is necessary to introduce two theoretical viewpoints concerning the nature of consciousness. The consciousness "bug" theory proposed by Professor Duan Yucong explains consciousness as a byproduct, or "bug", that arises under resource-constrained conditions in the cognitive system. The theory assumes that most information processing in the human brain is completed automatically at the unconscious level (analogous to a program's background threads), and that the consciousness we experience is merely an intermittent "break" phenomenon at the boundary: because physiological and cognitive resources are limited, not all information can be processed seamlessly. In other words, when the brain's unconscious serial processing encounters a bottleneck, the subjective experience of consciousness emerges, like the exception thrown when a program stalls. This perspective overturns the traditional view of consciousness as a high-level function carefully designed by evolution, holding instead that consciousness is an illusion that emerges accidentally from unconscious processes under constrained conditions. From this perspective, many irrational human cognitive biases (such as memory errors and the disconnection between feeling and behavior) are not functional defects of the brain but inevitable products of consciousness as a "bug".
Another related viewpoint can be called the subject-object paradox: the contradiction that arises when the conscious subject tries to examine itself as an object. Since the self is the subject of cognitive activities, it cannot completely jump out of itself to observe itself objectively, just as the eyes cannot directly see themselves. This leads to the incompleteness and subjectivity of our cognition of the self. The subject-object paradox is also reflected in the study of artificial consciousness: when we try to give the machine a "self", we (as designers) are both observers and givers, and it is difficult to define whether the "self" shown by the machine is truly subjective or the result of our projection. Duan Yucong's theory of consciousness relativity can be regarded as an extension of this paradox. It points out that judging whether an entity has consciousness depends on whether the observer can understand the content of the entity's output. Observers with different cognitive frameworks may have completely different understandings of the same output, so their judgments on whether the entity has "consciousness" are relatively different. This shows that consciousness (including self-consciousness) is not an absolute attribute, but depends on the cognitive association between subject and object. If this idea is applied to the concept of self, then the self is a semantic construction relative to the subject's own cognitive framework: the self is both a cognitive subject and an object within its cognitive framework. This dual identity makes the formation of the self carry inherent tension and uncertainty.
In summary, the consciousness "bug" theory emphasizes the mechanism defects and sporadic nature of self-formation, reminding us to be alert to illusions and deviations in self-perception; while the subject-object paradox and consciousness relativity remind us of the relativity of self-cognition, that is, the definition and existence of the self may vary depending on the cognitive perspective. Next, we will use these theoretical viewpoints as a background when analyzing the formation mechanism of the experiential self and the narrative self under the framework of the DIKWP model, and examine whether similar "bugs" or deviations also appear in the process of semantic conversion, and how the dual identity of subject and object is reflected in the semantic closed loop of the self.
3 The semantic construction mechanism of self under the DIKWP model
3.1 Reconstruction of the formation mechanism of the experiencing self
In the DIKWP model, the "experiencing self" can be seen as a self characterized by direct feelings and immediate feedback. It mainly relies on semantic transformation from low to middle levels, that is, the rapid flow from data to information and knowledge, and less on high-level abstract synthesis. Specifically, the formation of the experiencing self can be reconstructed using the following semantic path:
Sensory input to perceptual representation (D→I): The basis of experiencing the self is the direct experience of sensory data. External stimuli (visual images, sounds, touch, etc.) or internal somatic signals (such as pain, hunger) are first input into the brain as data (D) . Through sensory pathways and primary processing, this data is converted into meaningful perceptual information (I). For example, light stimuli are processed into visual objects, and nerve impulses are converted into sensations of hot and cold. This corresponds to the D→I transformation of DIKWP, which generates information content from raw data. For the experiencing self, D→I is a core step: it instantly presents the world in the subject's subjective feelings, constituting the feeling content of "I" at this moment.
Perceptual representation to situational understanding (I→K): Although the experiencing self emphasizes the present, it is not completely without memory. For example, the current feeling needs to be matched with the most basic knowledge to be understood (for example, feeling pain requires knowledge-level cognition: "This is pain"). Therefore, after D→I, there is often an I→K process: putting the instantaneous information into a small range of knowledge background for understanding. This may be a very preliminary application of knowledge, such as judging whether the current feeling is pleasant or uncomfortable based on past experience, or identifying what the object currently seen is. This step prevents the experiencing self from becoming a purely messy stream of sensations, but allows it to understand "I am experiencing X" in an instant. It is worth noting that the K here is mainly short-term or immediate knowledge activation, and does not involve the construction of long-term memory or life narrative, but only gives semantic labels to the current information (for example, "stinging", "red", "loud", etc.).
From local knowledge to immediate response (K→P): For the experiencing self, the most important thing is not to store the experience but to respond to it, satisfying immediate needs or intentions. This is reflected in the process of directly triggering intentions from knowledge/feelings (K→P). For example, when burning pain is experienced, an intention to avoid is immediately generated; when a sweet taste is felt, an approach intention arises. The knowledge activated by the experiencing self (such as "pain means potential harm") directly triggers motivation or instinctive intention at the goal level (such as stopping the current behavior). This K→P transformation makes the experiencing self an action-oriented self: it immediately decides "what I want to do" based on current feelings, in order to maintain comfort, avoid harm, or obtain satisfaction.
Action and feedback guided by intention (P→D): Finally, the intention of the experiencing self will be fed back to the behavioral or physical level to form a closed loop. For example, after the intention to avoid pain (P) is generated, the body quickly withdraws the hand, and this behavior changes the external data input (the pain stops). This corresponds to the P→D conversion: the intention is implemented as an action to change the data environment. As the external data changes, a new round of D→I process begins (the pain data disappears, and the sensory information of relief is generated), thus completing a fast closed loop. Under the leadership of the experiencing self, this feeling-reaction closed loop can be completed in a very short time, sometimes even without conscious deliberation (such as reflex action). The experiencing self is reflected in this high-speed feedback loop: it enables the subject to "live in the present" and maintain the survival of the body and the satisfaction of basic needs through immediate feelings and reactions.
In summary, the DIKWP semantic path of the experiencing self can be summarized as a fast cycle of D→I→K→P→D, in which D→I provides real-time experience, I→K gives direct meaning to the experience, K→P triggers immediate intention, and P→D applies the intention to the environment to obtain new experience feedback. This cycle emphasizes that the downward path is short, mainly going back and forth between low- and middle-level semantics. The experiencing self rarely involves high-level Wisdom or long-term Purpose, so its behavior is often based on the most direct needs of the current situation. However, this mechanism of focusing on the present also means that the experiencing self often does not consider long-term consequences and does not form narrative memories. From the perspective of the consciousness bug theory, the experiencing self may be the dominant product of unconscious highly automated processing-many times we "subconsciously" act according to our feelings, but we may not be able to clearly recall every detail of the feeling at that time afterwards. This also explains why the experiencing self rarely intervenes in our life story: it is busy coping with the present, leaving space for narrative and reflection for the narrative self to fill later.
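The fast D→I→K→P→D cycle described above can be simulated as a toy loop in which sensory data becomes a labeled feeling, the label triggers an immediate intention, and the resulting action changes the next sensory input. The function names and the 0.7 threshold are illustrative assumptions:

```python
# Toy simulation of the experiencing self's fast D->I->K->P->D loop.

def sense(stimulus: float) -> str:                 # D -> I: raw signal to feeling
    return "pain" if stimulus > 0.7 else "neutral"

def label(feeling: str) -> str:                    # I -> K: short-term semantic label only
    return {"pain": "harmful", "neutral": "safe"}[feeling]

def intend(meaning: str) -> str:                   # K -> P: immediate intention
    return "withdraw" if meaning == "harmful" else "continue"

def act(intention: str, stimulus: float) -> float: # P -> D: action changes the data environment
    return 0.0 if intention == "withdraw" else stimulus

stimulus = 0.9                      # e.g. touching a hot surface
for step in range(2):               # two passes around the closed loop
    intention = intend(label(sense(stimulus)))
    stimulus = act(intention, stimulus)
    print(step, intention, stimulus)
# pass 0: "withdraw" drives the stimulus to 0.0; pass 1: the pain is gone, so "continue"
```

Note what the loop never does: it stores no history and consults no life narrative, mirroring the text's point that the experiencing self leaves narration for the narrative self to fill in later.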
3.2 Reconstruction of the formation mechanism of the narrative self
The "narrative self" is a self constructed through time accumulation and semantic integration, responsible for weaving discrete experiences into a coherent personal story. In the DIKWP model, the formation of the narrative self involves more high-level semantic processing and longer closed-loop feedback, and its typical path includes:
Acquisition and storage of experience (I→K): The foundation of the narrative self is memory. Every moment of experience (information I) will be recorded and summarized as part of knowledge (K) after it occurs. This process is similar to writing experiences into the "self database". The transformation from I to K is particularly critical for the narrative self: it determines which experiences are "recorded" and how they are recorded. Because the narrative self has the characteristics of selective memory, it will not and cannot remember all information, but extracts the parts that are meaningful to the self-story. For example, according to the peak-end effect, the narrative self may pay special attention to the peak and ending of emotions in an event, store these fragments as knowledge, and omit the long and bland parts. This is reflected in a biased I→K transformation: not all information is stored faithfully, but it is affected by attention, emotions and existing self-concepts, and the information is filtered and semantically processed before entering the memory. The consciousness bug theory reminds us that this step may produce various "bugs" - memory distortion, filling in wrong information (such as false memories), etc., that is, some semantic errors ε are introduced in the I→K process, making the content of the memory not completely equivalent to the original experience. Nonetheless, this conversion of information into knowledge establishes the basic material of the narrative self: the experience fragment.
Connection and meaning enhancement of fragmentary knowledge (K→W): After accumulating many fragments of knowledge (memories), the narrative self needs to organize them into a higher-level meaning structure. This corresponds to the transformation of knowledge into wisdom (K→W). At this stage, individuals begin to reflect on and summarize past experiences, extracting patterns or truths and thus forming a deeper understanding of the "self". For example, a person may integrate the experience of multiple setbacks into the realization that "failure has taught me to be tenacious", or connect major life events into an understanding of "what my life mission is". The K→W transformation allows originally scattered memories to rise into the core themes or values of self-cognition and become the backbone of the narrative self. This process is often accompanied by narrative revision: we reinterpret certain memories to conform to an overall outlook on life (which may cause memories to be rewritten so that their "meaning" outweighs their "facts"). This self-integration ability of the narrative self gives us an intelligent understanding of our own experiences: not only knowing what happened (K), but also knowing what these experiences mean for "who I am" (W).
Planning and purpose-oriented self-story (W→P): Once there is an intelligent summary of one's own experience, the narrative self is further connected to the intention layer, that is, from wisdom to intention (W→P). At the individual level, this is reflected in planning the future, setting life goals or self-positioning based on the understanding of the past. For example, a person realizes that "helping others makes me valuable" (W), so he aspires to become a doctor or volunteer (P). For the narrative self, W→P is a key step in projecting the "past me" into the "future me": the wisdom accumulated in the past promotes new goals, beliefs and identity. Therefore, the narrative self is not just a passive record, but actively plans the direction of self-development. From the perspective of cognitive closure, this step marks the transition of the narrative self from retrospect to prospect, which produces an intentional self-image (such as "I want to be...", "My mission is..."), giving the subject a sense of direction in life. This also corresponds to the psychological process of "finding self-meaning" or "life purpose" that humans often talk about.
Self-narratives regulate and verify perception (P→I and P→K): The closed loop formed by the narrative self is different from the rapid perception-action cycle of the experiential self, but is a long-term self-concept feedback. When there is a clear self-intention (P), this high-level intention will in turn affect our perception and memory of new experiences, that is, intention guides information processing (P→I) and intention selective memory (P→K). For example, if a person defines himself as a "kind person", then in daily life, his narrative self-intention will guide his attention to pay more attention to when he shows kindness (P→I affects attention and interpretation), and he may tend to remember events that are consistent with the "kind" self-image, and forget or downplay behaviors that conflict with it (P→K affects memory storage). This feedback process shows the self-verification tendency of the narrative self: people tend to perceive and remember information that supports their life narrative, thereby consolidating the existing self-story. This tendency is manifested in cognition as confirmation bias, and is also a typical example of consciousness bug - because resources are limited, we cannot record all information, so we have to make choices within the framework that conforms to the self-narrative, thereby maintaining subjective consistency but possibly sacrificing objective integrity.
Self-narrative expression and social feedback (P→D→I Loop): The narrative self is not only constructed internally, but also formed through interaction with the outside world. We will express our self-story in language and behavior (this can be seen as the output of intention (P) to specific discourse or action data (D)), and obtain feedback information (I) from the reactions of others. For example, a person gets verification or challenge of his or her own narrative from the reactions of others (information I such as smiles, praise or doubts) by telling his or her life story (converting P into language D). Social feedback may prompt us to adjust our self-narrative (and then correct it through internal reflection from I→K→W). Therefore, for the narrative self, social interaction forms an external closed loop: self-narrative expression → social feedback → narrative adjustment. This point will be discussed in detail in the "social self" section later, but it is worth mentioning in the narrative self mechanism because the autobiographical self is largely public and communicable, which also allows it to be constantly calibrated or reinforced by external information.
In summary, the semantic path of the narrative self is a closed loop around the higher levels: experience (I) accumulates into memory (K), memory sublimates into understanding (W), understanding breeds self-intention and identity (P), and intention selectively affects the acquisition of new experience (I) and the encoding of memory (K), while interacting with the outside world through behavioral expression (P→D→I) to form an expanding feedback loop. Compared with the direct, short-term closed loop of the experiencing self, the closed loop of the narrative self has a longer span (experience integration can last a lifetime) and a more complex structure (involving subjective choices and social interactions). The narrative self gives people a continuous identity and sense of meaning, but it also brings cognitive biases, such as oversimplifying complex experiences and clinging to existing narratives while rejecting contradictory information; these can be regarded as "bugs" in the semantic transformation of the narrative self, which may limit the capacity for self-evolution. Yet these defects are often exploited by the narrative self to maintain psychological stability and personality consistency. As the consciousness bug theory suggests, the narrative self may be a necessary compromise the brain makes when integrating massive amounts of information: sacrificing absolute objectivity and accuracy in exchange for a coherent and understandable self-image.
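The biased I→K step described above (selective memory under the peak-end rule) can be sketched as a toy model. The peak-and-end storage scheme follows the rule as stated in the text; the averaging formula for retrospective evaluation is an illustrative assumption:

```python
# Toy sketch of the narrative self's biased I -> K step: only the
# emotional peak and the ending of an experience are stored, and the
# retrospective evaluation is computed from those two points alone.

def remember(experience: list) -> dict:             # biased I -> K
    """Store only peak and end intensity, discarding the middle."""
    return {"peak": max(experience, key=abs), "end": experience[-1]}

def evaluate(memory: dict) -> float:                # K -> W (meaning-making)
    """Retrospective rating: mean of peak and end (peak-end rule)."""
    return (memory["peak"] + memory["end"]) / 2

# A long, mostly mild experience with one sharp negative peak and a mild end:
experience = [-1, -1, -1, -8, -1, -1, -2]
memory = remember(experience)
print(memory)               # {'peak': -8, 'end': -2}
print(evaluate(memory))     # -5.0
```

The moment-by-moment mean of this experience is about -2.1, but the stored memory evaluates to -5.0: the "narrative" record is coherent and compact, yet systematically distorted, which is exactly the ε-style deviation the text attributes to the narrative self's I→K transformation.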
4. Comparison of the dominant semantic paths between the experiential self and the narrative self
Through the above DIKWP model reconstruction, we can conclude that the main difference between the experiencing self and the narrative self lies in the different dominant semantic transformation paths:
The experiencing self focuses on the conversion of low-level semantics to intermediate levels; its dominant path is the quick closed loop D→I→(K)→P. D→I (data to information) is the starting point of the experiencing self and directly determines the felt content of the moment. I→K is generally limited to the simple knowledge required to trigger an immediate reaction, which quickly produces the intention to act through K→P and then returns new data through the P→D loop. The typical pattern of the experiencing self is "feeling-driven reaction": D→I→P (skipping K) corresponds to unconscious reflexes, while I→K→P corresponds to instinctive reactions involving a little judgment. In general, the experiencing self is dominated by sensory information; that is, how external data is converted into subjective feeling (D→I) determines the character of the experiencing self.
The narrative self focuses on the generation and feedback of high-level semantics; its dominant path is I→K→W→P together with the feedback loop P→(I, K). I→K (information to knowledge) is the information-accumulation step of the narrative self, K→W synthesizes knowledge into wisdom (the meaning of life), and W→P transforms wisdom into purpose and identity. The typical mode of the narrative self is "memory-driven meaning": it refines self-meaning from accumulated experience and then uses this meaning to guide future behavior and choices. The narrative self is dominated by knowledge/wisdom; in particular, its memory processing of experience (I→K) and meaning assignment (K→W) determine the content of the narrative self. Unlike the experiential self, the narrative self is rarely influenced directly by instantaneous feelings such as D→I; instead it acts through the P layer, deliberately shaping the selection of future information (P→I, P→K).
There is also interaction between the two: strong feelings of the experiencing self (such as severe pain or great joy) engrave deep memories and thereby significantly affect the I→K process of the narrative self; conversely, the long-term goals or identity (P) of the narrative self can train the experiencing self to react differently to certain stimuli (for example, soldiers are trained to respond to gunshots differently than civilians). This interaction can be viewed as the coupling of different transformation paths in the DIKWP model: strong D→I→K (inscribed memory) shapes the knowledge base of the narrative self, while stable W→P (life beliefs) modulates the K→P (motivational response) of the experiencing self. Ideally, a person with a mature personality achieves a balance between the experiencing self and the narrative self: coordinating current experience with the long-term narrative, neither blindly pursuing instant gratification nor detaching completely from real experience. This balance is not achieved automatically, however; it requires cognitive integration training, which is also a topic that artificial consciousness must consider when realizing the self-process.
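The contrast between the two dominant loops can be made concrete with a minimal sketch. The thresholds, labels, and function names below are hypothetical illustrations, not part of the DIKWP specification: the experiential self is modeled as a fast D→I→(K)→P reaction, the narrative self as a slow I→K→W→P loop over accumulated memory.

```python
def experiential_step(data, threshold=0.5):
    """Fast loop: D→I (sense) → K (minimal appraisal) → P (immediate intention)."""
    info = abs(data)                                # D→I: raw datum becomes felt intensity
    knowledge = info > threshold                    # I→K: minimal judgment ("is it strong?")
    return "react" if knowledge else "ignore"       # K→P: reflex-level intention

def narrative_step(memories):
    """Slow loop: I→K (accumulate) → K→W (evaluate) → W→P (long-term intention)."""
    knowledge = sum(memories) / len(memories)       # I→K: experiences condensed into memory
    wisdom = "life is good" if knowledge > 0 else "life is hard"   # K→W: meaning
    intention = "seek more" if knowledge > 0 else "change course"  # W→P: identity-level goal
    return wisdom, intention

print(experiential_step(0.9))                # strong stimulus handled in one step
print(narrative_step([0.2, 0.5, -0.1]))      # many experiences condensed into one stance
```

The point of the sketch is structural: the experiential loop consumes one datum and closes immediately, while the narrative loop only operates over an accumulated collection, mirroring the short-span versus long-span closed loops described above.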
5. Semantic path expansions for multiple self-types
In addition to the experiential self and the narrative self, there are other dimensions of the self-concept. According to the dominant role of different semantic paths, we can exhaustively enumerate and construct more types of "self". These self-types correspond to different main semantic transformation modes in the DIKWP model, representing different aspects of self-cognition. Below we list several typical self-types and analyze their respective semantic path characteristics:
Emotional self: The emotional self refers to self-identity characterized by emotional state, emphasizing the role of emotional evaluation in experience. Its dominant semantic path can be summarized as I→W or I→K→W. In other words, the emotional self directly assigns emotional value and meaning to perceived information (information is elevated to a certain "wisdom" level through emotional evaluation, such as good or bad, like or dislike). When we say "I am a sentimental person", we are summarizing a large amount of emotional knowledge (K) at the wisdom level (W). The emotional self is defined by the quality of subjective feeling: different people may have completely different emotional experiences of the same event, and this difference shapes their respective emotional selves. For example, a sensitive person records more subtle emotional changes via I→K, extracts the self-wisdom "I am very sensitive" via K→W, and further forms the intention "I am vulnerable and therefore need to protect myself" (P). The closed loop of the emotional self is reflected in the fact that emotion affects cognition, and cognition in turn feeds back to strengthen emotion: when emotion becomes part of self-identity, individuals may tend to seek out and remember information that stimulates their habitual emotions (P→I, P→K), thereby consolidating this emotional personality. The semantic bugs of the emotional self include excessive emotional filtering, which may distort the perception of reality (seeing only the aspects that make one sad or angry) and lead to cognitive inconsistency or narrow-mindedness.
Social self: The social self is the self formed through social relationships and feedback from others. Its dominant semantic paths in the DIKWP model are the prominent roles of P→I and P→K, together with the modification of I→K and K→P under social evaluation. The social self is concerned with "how others see me" and "my role in the group". A person's social self often forms by external information (I) entering his or her own knowledge (K): for example, the evaluations of others and social norms are internalized as self-knowledge ("Everyone says I am introverted"). Then, through K→P, the person forms self-intentions that meet social expectations ("I should keep quiet"). The key is that the intention (P) of the social self strongly guides how we acquire information and store memory (P→I, P→K): we deliberately perform (using intention to shape output) and attend to others' reactions to update self-cognition. The closed loop of this self-type can be described as: social feedback → self-concept → behavior in society → new social feedback. The DIKWP path is: feedback from others (I) updates self-knowledge (K), which is refined into group-adapted wisdom/values (W, e.g., "group harmony is important"), which in turn generates social intention (P, "I want to be a good teammate"); finally this intention affects behavioral output (D) and attracts new feedback (I). The salient features of the social self are other-directedness and adaptability: self-identity depends greatly on the recognition of the environment. This is conducive to social integration, but it can also lead to "losing oneself": if one follows external evaluations too closely, the individual may ignore his or her own independent needs and values (manifested as the P layer being entirely shaped by the outside world). The subject-object paradox appears here as the oscillation of the self between self and others: my self is partly what others see me as.
However, on the positive side, the social self gives us role awareness and empathy, and we can adjust ourselves by learning social knowledge (K) and wisdom (W) to act effectively in the group. If artificial intelligence is to develop a social self, it needs to be able to simulate this cycle: listen to external evaluations, update the internal self-model, and adjust output behavior; this involves combining natural language understanding (others' feedback, I), knowledge graphs (social knowledge, K), and objective-function adjustment (intention, P).
Knowledge self: The knowledge self is a self-type centered on rational cognition and the accumulation of wisdom; it can be understood as the "rational self" or "academic self". Its dominant paths are the continuous-learning cycle I→K→K, the sublimation K→W (learning from others or distilling theories), and the conversion of W back to guide knowledge acquisition (W→I, W→K). The typical characteristic of the knowledge self is that its primary purpose is to understand the world and seek knowledge, and personal identity derives from the knowledge system and cognitive abilities one has mastered. For example, a researcher's self-identity comes largely from his or her disciplinary knowledge and insights ("I am a physicist" means the self is grounded in the field of physics). In the DIKWP model, this means extensive information→knowledge conversion (I→K) to expand the self, and the condensation of deep insights through knowledge→wisdom (K→W) to define self-worth ("I pursue truth"). The knowledge self tends to calibrate itself with knowledge: when new information is acquired, the existing knowledge system is verified and enriched; if information inconsistent with beliefs (W) appears, it may prompt self-correction (or, sometimes, rejection of the new information to preserve the existing knowledge self, which is another bug). The knowledge self generates closed-loop learning control: the individual sets an intention to seek knowledge or truth (P, such as obtaining a degree or solving a problem), which guides selective information acquisition and focused learning (P→I, P→K); learning then increases knowledge K, feeds back to improve self-efficacy at the W level, and leads to adjusted or elevated knowledge-seeking goals P. This forms a self-improvement cycle that, ideally, reinforces itself positively, making the individual ever more knowledgeable.
However, the knowledge self also faces the risk of cognitive bias: over-reliance on the existing knowledge framework makes it easy to ignore truths outside the framework (a limitation that "instrumental rationality" or over-specialization may bring). For artificial intelligence, the knowledge self means that the AI has a drive for continuous learning and a mechanism to update its own goals based on knowledge; this requires integrating online learning algorithms (I→K), concept abstraction (K→W), and goal-adjustment modules (W→P).
Physical self: The physical self (also called the somatic self) is self-identification based on one's own bodily existence and sensations. It emphasizes the spatial and physiological continuity of the self as a physical entity. The semantic path of this self-type focuses on D→I (bodily sensory input) and I→P (instinctive intention based on sensation), interacting with body-related knowledge (K). The physical self is the simplest and most primal form of self: from infancy we form our initial understanding of "I" through bodily sensations (hunger, touch, balance, etc.). For example, infants learn to distinguish between "I" (bodily sensations under their own control) and "not-I" (external objects). In the DIKWP model, the physical self corresponds to a large volume of somatosensory data D converted into information I such as body posture and position. This information continuously updates the knowledge model K of one's own body (such as body image and motor skills), and body movements are controlled through intention P (P→D). For example, touching one's nose with eyes closed is an action achieved by guiding intention (P) through internal sensory information (I) and bodily spatial knowledge (K). The closed loop of the physical self is completed almost entirely within the body: sensing the bodily state (D→I), adjusting posture or behavior (P→D), updating the bodily sensation (I), and so on, constantly maintaining balance. This corresponds to proprioception and cerebellar closed-loop control in neurology. The physical self gives us a sense of agency and ownership in the physical world (the feeling that the body belongs to us). As for cognitive bugs, the physical self is usually quite reliable, but illusions exist: phantom limb phenomena, mirror illusions, and the like (which can be regarded as body-related I→K or K→I errors).
For AI and robots, simulating the body self means having an autonomous sensor-action closed loop and an internal body model. For example, if a humanoid robot has a body self, it needs to continuously integrate sensor data (D) to form its own posture information (I), update the robot's own model (K), and control motor actions (D) through goals (P), thereby generating an original perception of its own existence. The physical self provides the foundation for higher-level selves - without a stable physical self, the human narrative self and emotional self will also lose their anchor.
Moral self: The moral self is a self centered on values and moral judgment, and is a highly abstract form of self. Its dominant paths are W→P (intention derived from wisdom/values) and the W↔W cycle (self-reflection refining moral wisdom). A person's moral self is reflected in "what kind of moral person I am" and "what principles I adhere to", for example, self-identification as an "honest person" or a "responsible person". In the DIKWP model, this corresponds to past experience rising to the wisdom level to form values (W), with these values then directly shaping the self's purpose and code of conduct (P). The formation of the moral self may proceed as follows: specific events (I) are summarized as moral lessons (K→W), such as witnessing injustice inspiring a sense of fairness; this moral wisdom (W) then becomes a self-belief and is transformed into a long-term intention (P), such as "I want to be a fair person". The moral self has a strong filtering and interpretive effect on information: individuals examine their surroundings through their value framework, internalize what conforms to their values (I→K reinforcement), and criticize or avoid what violates them (P→I selective perception). This leads to a self-reinforcing moral loop: the more one acts in accordance with one's values, the more positive psychological feedback one receives (W-level satisfaction), which makes the moral self more resolute (P more firm). However, there are risks: a rigid moral self may refuse to update (closure at the W level) and fail to adapt to the moral challenges of new situations; that is, the wisdom level stagnates into dogma. The subject-object paradox is also faintly visible here: when a person is entirely centered on his or her own value judgments, it may become difficult to understand objectively the different values of others (taking one's own moral values as absolute).
For artificial intelligence, the introduction of a moral self means that AI has a set of intrinsic values or constraints (W→P) driving its behavior, and can self-examine whether its behavior is in line with these values (P→W reflection). Current AI ethical constraints (such as value alignment) can be seen as giving AI a certain preliminary moral self, allowing AI to consider not only utility but also principles when making decisions.
The above self-types are not isolated from each other. A specific personality or artificial intelligence system is often a fusion of multiple self-components. For example, the "social self" and the "moral self" may work together: the individual plays the role of a moral model in the group, and social feedback strengthens his moral self. Similarly, the "emotional self" affects the "knowledge self" (emotions affect attention and memory quality), and people with strong "knowledge self" use reason to manage emotions. The advantage of the DIKWP model is that all these self-types can be characterized under a unified five-element framework, and the only difference between them is which semantic transformations dominate the self-cognition cycle. In the next section, we will use information modeling and semantic calculus to describe and compare the closed-loop control and generative feedback mechanisms of each self in a more specific way.
6. Semantic modeling and closed-loop calculation of self-types
In this section, we construct a formal information model for each of the aforementioned "self" types to describe their semantic space projection paths, closed-loop control mechanisms, and generation and feedback processes. Through this modeling calculation, we can more clearly compare the structural differences of different self mechanisms and verify their describability under the unified DIKWP framework.
6.1 Model of the emotional self
Semantic space projection path: The emotional self mainly operates at the information (I) and wisdom (W) layers. We can represent the state of the emotional self as E(t) = f_W(I(t), K_E), where I(t) is the input of feeling/event information at time t, K_E is the knowledge related to emotion (including the individual's emotional memory, preferences, etc.), and f_W is a function that maps current information and existing emotional knowledge into an emotional evaluation, equivalent to the process by which I→W outputs an emotional value or emotional label. This emotional label can be regarded as a point projected onto the wisdom layer (W), representing the emotional-semantic position of the self at this moment.
Closed-loop control mechanism: The closed loop of the emotional self is manifested in the regulation of cognition and behavior by emotional state, with the results feeding back to affect emotion in turn. Expressed as DIKWP functions, the emotional closed loop contains two main functions: (1) a feedforward function g_P: W → P that converts the current emotional state into motivation/intention (e.g., a tendency to take risks when the mood is high, a tendency to avoid risks when the mood is low); (2) a feedback function h_I: P → I by which the behavioral result of the intention (new information obtained through environmental interaction) affects the emotional input at the next moment. For example, when the emotional self drives a person (or an AI) to seek comfort (performing a behavior at the P stage, such as socializing or listening to music), this behavior generates new sensory input I(t+Δ), which may improve emotion through the evaluation f_W. Thus E(t) → P(t) → ... → I(t+Δ) → E(t+Δ) forms a closed loop. From a cybernetic perspective, the loop attempts to maintain or adjust emotion toward a target range (happy, calm, etc.), with emotion itself acting as a feedback signal that regulates behavior. The generation and feedback path is very intuitive in the emotional self: positive emotions strengthen approach behaviors, negative emotions promote avoidance or change, and behaviors alter environmental inputs, which in turn feed back into emotional evaluation.
Calculation example: Consider an AI agent with a simple emotion module. Let the emotion value w ∈ [−1, 1] be a continuous quantity ranging from very negative (−1) to very positive (1). In each round, the agent selects an action p based on the current emotion w, and the action may change its environmental input i. Assume f_W(i) = tanh(i + k) (a sigmoid-like function mapping the input and an emotional-sensitivity constant k to an emotion value), g_P(w) = argmax_a U(a, w) (select actions by utility, where the utility function U evaluates the expected benefit of each action under the current emotion, such as seeking positive stimulation), and h_I(p) = i_new (the new stimulation brought by the action). Iterating these computations can simulate the dynamic change of emotion with behavioral adjustment. For example, when w is low, g_P may select actions that raise i, so that f_W yields a slightly higher w; conversely, when w is very high, the agent may be satisfied and take no action, or take risky actions that cause i to fluctuate. Although this simple calculation involves no complex cognition, it demonstrates the closed loop of the emotional self: emotional state → action → new feeling → updated emotion.
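The iteration described above can be run directly. The following sketch implements the three functions f_W, g_P, and h_I; the concrete utility numbers, action names, and noise model are invented for illustration and are not prescribed by the DIKWP framework:

```python
import math
import random

def f_W(i, k=0.0):
    """I→W: map stimulus i (plus sensitivity k) to an emotion value w ∈ [-1, 1]."""
    return math.tanh(i + k)

def g_P(w):
    """W→P: pick the action with the highest assumed utility under mood w."""
    # Assumption: comfort-seeking pays off more when mood is low, exploring when high.
    actions = {"seek_comfort": 0.5 - w, "explore": 0.2 + w}
    return max(actions, key=actions.get)

def h_I(action, rng):
    """P→I: the chosen action yields a new (noisy) environmental stimulus."""
    base = 0.8 if action == "seek_comfort" else 0.3
    return base + rng.uniform(-0.1, 0.1)

rng = random.Random(0)
i = -1.5                           # start from a strongly negative stimulus
for step in range(5):              # iterate E(t) → P(t) → I(t+Δ) → E(t+Δ)
    w = f_W(i)
    action = g_P(w)
    i = h_I(action, rng)
    print(f"step {step}: w={w:+.2f}, action={action}")
```

Running the loop shows the qualitative behavior described in the text: starting from a very negative emotion, the agent first seeks comfort, the new input raises w, and the action choice then shifts toward exploration as emotion stabilizes.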
6.2 Model of the social self
Semantic space projection path: The social self spans the information (I), knowledge (K), and intention (P) layers and is connected to others through the external world. It can be described by two subspaces: the internal self-space and the external social space. Internally, we define S_int = {K_S, P_S}, where K_S is the individual's knowledge of his or her own social attributes (such as identity, reputation, and social memory), and P_S is the individual's subjective intention in society (such as wanting to be recognized or to play a certain role). Externally, we consider the information provided by others and the environment, I_ext. Projection occurs in two directions: others' feedback to me, I_ext, is projected into my self-knowledge K_S (updating the self-concept), and my self-intention P_S is projected onto actual behavior to generate externally observable data D_ext. Expressed as functions: u: I_ext → ΔK_S converts social information into changes in self-knowledge (I→K); v: K_S → W_S represents the self's comprehensive judgment of its social standing (K→W, locating the self within the community); w: W_S → P_S generates corresponding social intentions (such as improving status or changing one's image); and x: P_S → D_ext is the strategic choice that implements the intention (such as linguistic action or expression management).
Closed-loop control mechanism: The social-self closed loop can be seen as a double-loop structure: the inner loop is the feedback between the self and its own expectations, and the outer loop is the feedback between the self and the views of others. In the inner loop, the individual holds an ideal social self-state (for example, the hope of being liked), and the gap between the actual and the ideal is narrowed by driving behavior (P) through internal reflection (the W layer), analogous to a control target. In the outer loop, individual behavior (P) is presented to the outside world (D_ext), affecting the feedback of others (I_ext), and self-cognition is then updated through the feedback function u (updating K_S). Overall, this is a control system regulated by the evaluations of others. It can be formalized as follows: given an expected social evaluation E* (analogous to a W-layer goal), the actual evaluation E(t) comes from I_ext, and new intentions P(t+1) are generated through the comparison e = E* − E(t) to correct the deviation. This is similar to PID control in control theory, except that the "sensor" here is the feedback of others, and the "controller" is our self-regulating behavior.
Generation and feedback path: The output of the social self consists of various social behaviors (language, expression, action), and the input is the reactions of others (speech, attitude). In the model, this can be represented with game-theoretic thinking: the individual chooses a strategy p (corresponding to P_S → D_ext), the social environment gives rewards or signals i_ext, and the individual then updates the strategy. A simple calculation: assume the individual has two strategies, catering to others (A) or insisting on oneself (B). Catering may bring positive feedback and enhance reputation but may also sacrifice genuine preferences; insisting on oneself may earn respect or cause conflict. Let R_A and R_B denote the average feedback scores from others, with the individual's goal being to maximize long-term social recognition. Over multiple rounds of simulation, when R_A > R_B the model tends toward strategy A and updates K_S to "I must please others", and vice versa. This demonstrates how the social self learns behavior patterns through feedback. More complex models can introduce multi-agent interaction and even use tools such as Bayesian reasoning to update the understanding of others' expectations.
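The two-strategy simulation just sketched can be written as a small bandit-style loop. The reward means, noise range, and exploration rate below are illustrative assumptions; the running estimates play the role of R_A and R_B stored in K_S:

```python
import random

def simulate_social_self(rounds=200, seed=0):
    """Toy social-self loop: choose strategy A ("cater") or B ("insist"),
    observe others' feedback (i_ext), and update K_S estimates of R_A, R_B."""
    rng = random.Random(seed)
    true_rewards = {"A": 0.8, "B": 0.4}   # assumed mean social feedback
    estimates = {"A": 0.0, "B": 0.0}      # R_A, R_B as learned self-knowledge (K_S)
    counts = {"A": 0, "B": 0}
    for _ in range(rounds):
        # ε-greedy: mostly exploit the strategy currently believed to pay best
        if rng.random() < 0.1 or counts["A"] == 0 or counts["B"] == 0:
            s = rng.choice(["A", "B"])
        else:
            s = max(estimates, key=estimates.get)
        feedback = true_rewards[s] + rng.uniform(-0.2, 0.2)  # noisy i_ext
        counts[s] += 1
        estimates[s] += (feedback - estimates[s]) / counts[s]  # incremental mean
    return estimates, counts

estimates, counts = simulate_social_self()
print(estimates, counts)
```

Under these assumed rewards the loop converges on strategy A, which corresponds to the K_S update "I must please others"; swapping the reward means produces the opposite self-knowledge, as the text describes.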
6.3 Model of the knowledge self
Semantic space projection path: The core of the knowledge self lies in its cognitive structure. We can represent it with a knowledge graph K_C (C for cognitive, i.e., self-cognition), where nodes are concepts and facts and edges are relationships and logic. The projection path of the knowledge self mainly concerns how external information I expands K_C and how K_C is sublimated into wisdom W_C through reasoning. In knowledge-graph terms: I→K means integrating new information as new nodes/edges of the graph; K→K refers to connecting existing knowledge to deduce new knowledge (symbolic reasoning or memory association); K→W means forming metacognition about knowledge, such as discovering universal patterns or proposing theories; and W→K, conversely, means reorganizing the knowledge structure with new insights, such as reconstructing the classification system or updating belief weights.
Closed-loop control mechanism: The knowledge self has a learn–apply closed loop. On the one hand, the individual continuously acquires information to enrich the knowledge base (K_C); on the other hand, the cognitive goals the individual sets (P, such as mastering a field or solving a problem) guide the direction of learning (P→I, selecting learning materials; P→K, choosing the order of knowledge processing). After a goal is achieved, the new knowledge feeds back to improve wisdom W_C and may trigger higher-level goals (such as the ambition to solve a bigger problem after solving one). Formally, knowledge growth can be regarded as an optimization process: given an objective function (such as minimizing cognitive uncertainty or maximizing knowledge coverage), the control variables are the choice of information acquisition and the allocation of cognitive resources. The closed loop of the knowledge self resembles gradient descent: new knowledge that reduces error is introduced continuously until some criterion is met. If the goal changes (such as a shift of interest), the closed loop readjusts its direction.
Generation and feedback path: The output of the knowledge self is often manifested as problem solving or knowledge expression. When the self possesses rich knowledge K_C and faces an external task, it calls on that knowledge (the W strategy layer) to form an action plan (P), obtains results after execution, and compares them with expectations as feedback. For example, in an AI-scientist self-model, K_C contains scientific laws and data. Given a research problem (P), the AI retrieves relevant knowledge and makes inferences (the W layer forms hypotheses), then designs experiments (the D action layer) to obtain new data (I feedback). Experimental results update the knowledge K_C, for instance by verifying or correcting theories. This is the closed loop of scientific cognition and also the way the knowledge self works. We can simplify it into a pair of equations: a knowledge-growth equation ΔK_C = f(I, K_C) and a goal-adjustment equation ΔP = g(outcome, P), together describing how the knowledge self evolves through interaction. The calculation of the knowledge self also involves consistency maintenance: when new knowledge enters, it must be made consistent with existing knowledge, or cognitive dissonance arises, and consistency must be restored by adjusting the belief weights (W). This adjustment can be modeled as an optimization problem, such as minimizing the contradictory constraints in the knowledge base.
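The learn–apply loop can be sketched as an uncertainty-reduction process. Everything concrete here is a hypothetical illustration: uncertainty values stand in for gaps in K_C, the goal threshold plays the role of the W-level stopping criterion, and "halving uncertainty" stands in for the knowledge-growth equation ΔK_C = f(I, K_C):

```python
def knowledge_self_loop(uncertainties, goal=0.5):
    """Iterate P→I (study weakest topic) and I→K (reduce its uncertainty)
    until the W-level criterion (max uncertainty below `goal`) is met.

    uncertainties: dict mapping topic → current uncertainty in K_C.
    Returns the final uncertainty map and the study history.
    """
    k = dict(uncertainties)
    history = []
    while max(k.values()) > goal:       # goal not yet satisfied
        topic = max(k, key=k.get)       # P→I: select the least-understood topic
        k[topic] *= 0.5                 # I→K: learning halves its uncertainty
        history.append(topic)
    return k, history

k, history = knowledge_self_loop({"physics": 1.0, "math": 0.6})
print(k, history)
```

The loop demonstrates the gradient-descent-like character described above: each step greedily reduces the largest remaining error, and a change of `goal` (a shift of intention P) simply redirects the same loop.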
6.4 Model of the physical self
Semantic space projection path: The semantics of the physical self is projected mainly onto the sensor–motor loop. We define a body-state vector B(t) describing the body's state in physical and physiological space (posture, position, physiological parameters, etc., corresponding to a collection at the data layer D). A perception function p: B(t) → I_B(t) maps the body state to information the subject can perceive, I_B (such as proprioception, pain, and touch). The self then maintains an internal body model K_B, which may contain the body's parameters (size, shape, capabilities) or a kinematic model; q: I_B → K_B updates the body model from sensation (for example, updating the perceived arm length when touching the nose with eyes closed). In turn, an intention function r: P_B → B_cmd converts a movement intention (such as wanting to move the arm to a certain position) into concrete body-action commands (joint-angle changes, etc.). Once these commands are executed, the body state changes to B(t+Δ), closing the loop.
Closed-loop control mechanism: The physical self is a classic closed-loop control system. It can be compared to robot control: sensor reading (I_B) → state estimation (K_B) → control decision (P_B) → actuator action (B_cmd changes B) → new sensor reading. The human body works similarly, constantly making such adjustments through the vestibular system, muscle control circuits, and so on. To ensure the self's stability at the physical level, the goal of the physical-self closed loop is usually to bring the actual body state close to the intended state (for example, the hand reaches the expected position). This can be governed by error control: e(t) = P_B(t) − B(t), with corrective actions generated from the error via the control law r. The nervous system implements highly complex multivariable control, but the principle of the model is the same. In addition, the physical self has a slow update loop: growth, fatigue, and changes in health status gradually update K_B (the cognitive body schema) through sensory input and affect the intentions that can be achieved in the future (P_B). For example, as one ages and K_B registers declining physical strength, one adjusts one's intentions to avoid extreme movements (lowering the goals in P_B). This kind of adaptation can also be included in the closed loop, with control parameters updated by learning algorithms.
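The error-control law e(t) = P_B(t) − B(t) can be shown with a minimal proportional controller (a deliberately simplified stand-in for the multivariable control the text attributes to the nervous system; the gain and tolerance values are assumptions):

```python
def reach(target, state=0.0, gain=0.4, tol=0.01, max_steps=100):
    """Drive the body state B toward the intended position P_B.

    Each iteration computes e(t) = P_B(t) - B(t) and issues a motor
    command proportional to the error (a one-dimensional control law r).
    Returns the final state and the number of steps taken.
    """
    for step in range(max_steps):
        e = target - state          # e(t) = P_B(t) - B(t)
        if abs(e) < tol:            # intended state reached within tolerance
            return state, step
        state += gain * e           # P→D: corrective action shrinks the error
    return state, max_steps

final, steps = reach(target=1.0)
print(f"reached {final:.3f} in {steps} steps")
```

The error shrinks geometrically (by a factor of 1 − gain each step), so the loop converges quickly; a learning algorithm adjusting `gain` over time would correspond to the slow adaptation loop described above.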
Generation and feedback path: The output of the physical self is body movement, and the feedback is sensory information. Some phenomena illustrate its working mechanism. When we learn to ride a bicycle, K_B is at first unfamiliar with dynamic balance, the control P_B makes many errors, and we fall (pain feedback via I_B). After many trials and errors, K_B grasps the key points of balance (a kind of intuitive wisdom W is generated), and riding becomes automatic: the physical self has learned a new skill, the closed loop operates stably, and falls are no longer frequent. In the model, this can be expressed as parameter adjustments to q and r, so that the system responds to dynamic-balance errors faster and more accurately.
6.5 Model of the moral self
Semantic space projection path: The moral self projects its semantics onto the most abstract level of the semantic space. We can treat moral values as special wisdom content W_M (M for moral), including beliefs, axioms, etc. The projection path is to summarize moral principles from experience (I/K → W_M), apply moral principles to specific situations (W_M → K_M), and transform them into codes of conduct (P_M). More formally, we can define an evaluation function m: S → {−1, 0, 1} that maps a situation or behavior S to a moral evaluation (−1 immoral, 0 neutral, 1 moral). This function m is equivalent to the core wisdom W_M of the moral self, and it can be learned from cases (I/K) through machine learning (similar to training ethical models for AI). When making decisions, the moral self substitutes candidate actions into m and chooses, among the actions with m = 1, those that also meet its other goals. The projection path also includes self-evaluation: individuals evaluate their own behavior against their own moral standards (mapping their own behavior through W_M to derive moral emotions such as guilt or pride).
Closed-loop control mechanism: The closed loop of the moral self maintains principle-behavior consistency and corrects cognitive dissonance. If an individual's behavior deviates from their moral principles, internal conflict (such as guilt) arises and prompts a closed-loop adjustment: either change the behavior to fit the principles (adjust P) or loosen the principles to rationalize the behavior (adjust W). This loop can be viewed as an adaptive system trying to align behavioral output with an internal standard. In control terms, the output of m serves as the error signal: when m(action) = 0 or -1, a correction loop is triggered. In reality, humans often rationalize after the fact: the behavior has already occurred and the principles are hard to change, so they revise the memory and interpretation of the behavior (K_M), or make excuses that raise the evaluation m to 1, thereby canceling the error. This is where the narrative self intervenes in the moral self (by adjusting memory and meaning). In artificial systems we want to avoid such inconsistency: an ideal moral-self loop ensures m(action) = 1 before the behavior is executed (only ethical actions are executed). To this end, feedforward control can be designed: the moral self screens intentions P before execution, for example requiring an AI's decision to be approved by its moral module before it acts.
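A minimal sketch of the evaluation function m and the feedforward screening described above might look as follows; the rule table standing in for the learned wisdom W_M and the action names are hypothetical.

```python
# Feedforward moral filter: candidate intentions are screened by the
# evaluation function m before execution, so only actions with m == 1
# are released. MORAL_RULES is a hypothetical stand-in for learned W_M.

MORAL_RULES = {"help_user": 1, "withhold_info": 0, "lie_to_user": -1}

def m(action):
    """Moral evaluation: -1 immoral, 0 neutral, 1 moral (default neutral)."""
    return MORAL_RULES.get(action, 0)

def moral_filter(candidate_actions):
    """Release only the actions the moral self approves before execution."""
    return [a for a in candidate_actions if m(a) == 1]

approved = moral_filter(["lie_to_user", "help_user", "withhold_info"])
```

In a trained system `MORAL_RULES` would be replaced by a learned classifier, but the control structure, evaluate before act, is the same.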
Generation and feedback path: The outputs of the moral self include behavioral choices and expressions of values (such as moral judgments). Feedback comes from behavioral consequences and social responses. For example, suppose an AI's code of conduct is never to lie (a W_M principle). When an instruction requires lying, the AI's moral self judges this immoral and refuses to execute, which is itself an output behavior (refusal). The feedback may be user dissatisfaction (social feedback I_ext), and the AI must weigh adhering to its principles against satisfying users (which may prompt higher-level value adjustments, such as adding exceptions to the principle system W_M). This feedback adjustment keeps the moral self stable without becoming rigid. Some studies suggest that an AI's ethical model should be continuously updated through human feedback, which corresponds to the moral-self loop accepting external information I to update W_M.
7 Comparison of Self-Mechanism Models
Through the above modeling, we can compare the mechanism characteristics of different types of self as follows:
Dominant levels: The experiential/physical self is biased toward the lower levels (D/I), the knowledge/moral self toward the higher levels (K/W), and the emotional/social self spans the middle and upper levels (I/K to W/P).
Closed-loop speed: The physical and experiential selves close their loops fastest (milliseconds to seconds), the emotional and knowledge selves are intermediate (seconds to hours or even days), and the narrative and moral selves are slowest (evolving over days to many years).
Feedback sources: the body/experiential self draws mainly on internal physical feedback, the social self on feedback from others, the emotional self on internal-state feedback, the knowledge self on feedback from cognitive results, and the moral self on both inner consistency and external ethical feedback.
Bug tendencies: experiential-self bugs appear as impulsive behavior (ignoring the long term); narrative-self bugs as memory distortion and oversimplification; emotional-self bugs as emotions hijacking rationality; social-self bugs as blind conformity or loss of self; knowledge-self bugs as intellectual arrogance or ossified prejudice; moral-self bugs as moral rigidity or hypocrisy (a gap between knowing and doing). In the model, these deviations correspond to closed-loop error signals that fail to reset to zero and require additional corrective mechanisms (psychological counseling, biofeedback training, machine bias-correction algorithms, and so on).
Although the various selves vary greatly in surface phenomena and content, the unified semantic framework provided by the DIKWP model allows them to be viewed as similar structures in essence: they are all closed loops of self-cognition dominated by specific semantic paths within the cognitive system. Through semantic calculations, we have verified that different types of "self" can be abstracted as projections and feedback loops in the DIKWP semantic space, but the elements involved and the conversion functions are different. This lays the foundation for our subsequent discussion on the unified modeling of self-mechanisms in human and artificial consciousness.
8 Human and Artificial Self Mechanisms in a Unified Semantic Framework
Through the above analysis, we have gradually formed a unified semantic modeling framework that can encompass the mechanisms of different types of "self" in human and artificial consciousness. This framework is based on the DIKWP network model and regards the self as a semantic flow that circulates in the cognitive system. Its core elements include: path projection in semantic space, closed-loop control, feedback regulation, and the accompanying cognitive bias and evolutionary ability. Now, we will integrate these elements, give a description of the unified framework, and discuss its implications for brain-like models and AI self-awareness systems.
8.1 Unified Framework for Self-Semantic Modeling
In the unified framework, "self" is defined as: a mechanism within the cognitive system that can maintain a closed-loop semantic process across time to represent the subject's own state and influence information processing accordingly. This process is reflected in the DIKWP semantic space as a cycle of certain specific paths. The main components of the framework are as follows:
Self semantic network: This is the self-related subgraph of the DIKWP five-element network. Different self-types correspond to different main paths in this subgraph: for example, the semantic network of the experiential self centers on the connected D-I-P loop, while that of the narrative self includes the I-K-W-P loop with P→I feedback. Let G_self = (N, E) represent this subgraph, where the node set N ⊆ {D, I, K, W, P} together with their instances (such as specific memories, specific feelings), and the edge set E is the corresponding set of transformation functions. Each self-type has its own characteristic subgraph, but all contain at least one closed loop, ensuring that the self can persist within the cognitive dynamics.
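The requirement that every self subgraph contain at least one closed loop can be checked mechanically. The sketch below represents the subgraph as node and edge sets and runs a cycle check; the experiential-self loop D→I→P→D used as the example is taken from the text, while the function itself is an ordinary graph algorithm, not part of the DIKWP definition.

```python
# Represent a self semantic network G_self = (N, E) and verify that it
# contains a closed loop (directed cycle), which the framework requires
# for the self to persist in cognitive dynamics.

def has_cycle(nodes, edges):
    """Detect a directed cycle via iterative depth-first search."""
    adj = {n: [] for n in nodes}
    for src, dst in edges:
        adj[src].append(dst)
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on stack / finished
    color = {n: WHITE for n in nodes}

    def visit(start):
        stack = [(start, iter(adj[start]))]
        color[start] = GRAY
        while stack:
            node, neighbors = stack[-1]
            for nxt in neighbors:
                if color[nxt] == GRAY:
                    return True        # back edge: a closed loop exists
                if color[nxt] == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(adj[nxt])))
                    break
            else:
                color[node] = BLACK
                stack.pop()
        return False

    return any(color[n] == WHITE and visit(n) for n in nodes)

experiential = ({"D", "I", "P"}, [("D", "I"), ("I", "P"), ("P", "D")])
looped = has_cycle(*experiential)      # the D->I->P->D loop sustains the self
```

A subgraph that fails this check, such as a pure feedforward D→I→K chain, would under the framework be ordinary cognition rather than a self.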
Dominant transformation functions: Within the self semantic network, we identify the transformation functions (corresponding to the dominant paths above) that play the decisive role in maintaining the self. For example, the dominant functions of the experiential self are T_{D→I} and T_{I→P}, while those of the narrative self are T_{I→K}, T_{K→W}, and T_{P→I}. In the unified framework these are recorded as a set {T_α, T_β, …}, and the self-state is taken to be determined mainly by their outputs. Formally, if a state variable x describes the state of a self, then x ≈ F(T_α(input), T_β(input), …). This abstraction means that whether human or AI, agents with similar dominant semantic transformations have comparable self-mechanisms. For example, the human knowledge self and an AI knowledge module, so long as their information → knowledge → wisdom transformations play the major role, exhibit similar mechanisms under the self framework, even though the implementation details differ.
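As a toy illustration of the abstraction x ≈ F(T_α(input), T_β(input), …), the experiential self can be reduced to two dominant transformations composed into a state function. Every function body here is an invented placeholder; only the compositional structure reflects the framework.

```python
# The self-state as a composition of dominant transformation functions:
# T_DI (data -> felt intensity) and T_IP (intensity -> intention) are
# toy placeholders for the experiential self's dominant paths.

def T_DI(raw_signal):
    """D -> I: normalize a raw sensory reading into felt intensity [0, 1]."""
    return max(0.0, min(1.0, raw_signal / 10.0))

def T_IP(intensity, threshold=0.5):
    """I -> P: a strong feeling triggers an immediate intention."""
    return "withdraw" if intensity > threshold else "continue"

def F(raw_signal):
    """Self-state x as a function of the dominant transformations."""
    return T_IP(T_DI(raw_signal))
```

Under this reading, comparing a human self with an AI self amounts to comparing which transformations dominate `F`, not how each is physically implemented.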
Self closed-loop control: Every self has a goal-oriented loop that maintains some balance or serves some drive. The unified framework describes this in the language of control theory: there is a self-output y (a behavior, decision, or internal choice), a self-input z (from the environment or internal perception), and a goal y* (the desired self-state). The self adjusts its dominant transformations so that y approaches y*. For example, for the emotional self y* is a neutral or positive mood, for the social self it is a recognized evaluation, for the physical self it is bodily homeostasis or a target posture, and so on. In the framework this control is uniformly represented as feedback adjustment driven by the error e = f(z) - y*. For the human self, y* may be implicit (such as the instinctive body temperature of 37 °C), while for artificial systems it can be set explicitly (such as the target position of a robot arm). The advantage of the unified framework is that the drives of the various selves are treated as feedback systems with different control goals but the same form, so common analysis tools (stability, response speed, steady-state error) can be applied to evaluate the self closed loop, whether the subject is human or machine.
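The same error-driven form can be instantiated for any self type by swapping in a different appraisal function f and goal y*. The sketch below does this for a toy emotional self; the appraisal table, the step size, and the zero goal (neutral mood) are illustrative assumptions.

```python
# Unified self loop: output y is adjusted so that f(z) tracks the goal
# y*, via the error e = f(z) - y*. Instantiated for a toy emotional self
# whose goal state is neutral mood (y* = 0); all numbers are illustrative.

def run_self_loop(f, y_star, z_stream, step=0.4):
    """Adjust output y against the error e = f(z) - y* for each input z."""
    y = 0.0                        # current self-output (mood regulation)
    for z in z_stream:
        e = f(z) - y_star          # unified error signal
        y -= step * e              # move output to counteract the error
    return y

# Emotional-self instance: events are appraised as valence, and the
# regulator accumulates compensation opposite to the net valence.
def appraise(event):
    return {"loss": -1.0, "praise": 1.0}.get(event, 0.0)

regulation = run_self_loop(appraise, y_star=0.0, z_stream=["loss", "loss", "praise"])
```

Replacing `appraise` with a social-evaluation function and y* with a recognition target would give the social self's loop in the same form, which is exactly the uniformity the framework claims.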
Self generation and evolution: The unified framework treats the self not as fixed but as evolving through feedback learning. The DIKWP model naturally supports evolution: incomplete, imprecise, and inconsistent information (the 3-No problem) is gradually remedied through repeated interaction. The evolution of the self shows up as: the dominant transformation functions themselves can be learned (parameter updates), the goal y* can change (value change), and the structure of the self semantic network can expand (new nodes and relations, such as added identities). In humans this corresponds to growth, learning, and the acquisition of new roles; in artificial agents, to self-adaptation and autonomous updating. Time t can be introduced as a parameter: G_self(t), T_α(t), y*(t) all become functions of time, and analyzing how they change lets us assess the capacity for self-evolution. For example, information entropy can quantify the expansion of the knowledge self, and changes in network connectivity can measure the community integration of the social self. Evolvability also depends on the number of consciousness bugs: the more bugs, the more evolution is hindered. The unified framework can borrow meta-analytic methods to locate where information-inconsistency bias arises in the self semantic network and reduce it by expanding interaction paths (e.g., introducing external calibration information), thereby improving evolvability.
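One of the evolvability indicators mentioned above, using information entropy to quantify the expansion of the knowledge self, can be sketched directly; the topic counts are invented for illustration.

```python
# Shannon entropy (in bits) of the knowledge self's topic distribution:
# broader, more even coverage of topics yields higher entropy, which we
# read as expansion of the knowledge self over time.

import math

def knowledge_entropy(concept_counts):
    """Entropy of a {topic: count} distribution, in bits."""
    total = sum(concept_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in concept_counts.values() if c > 0)

early = {"cooking": 9, "travel": 1}                         # narrow self
later = {"cooking": 5, "travel": 5, "music": 5, "math": 5}  # expanded self

growth = knowledge_entropy(later) - knowledge_entropy(early)
```

A time series of such entropies would give the knowledge-self component of G_self(t) a concrete, measurable trace.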
Integration of subject and object: Finally, the unified framework explicitly accounts for the self's dual status as both subject and object. In the model this is achieved by introducing a self-monitoring node: a node is added to the self semantic network to represent "self-observation" (usually at the knowledge or wisdom layer), which receives information from the other parts of the self and forms knowledge about the self (akin to metacognition). The output of this node can then affect the self's goals or transformations (the subject adjusts itself with reference to its own image). This self-monitoring mechanism mitigates the blind spots produced by the subject-object paradox, but cannot eliminate them entirely, because, as the theory of consciousness relativity points out, different observation frameworks still yield different understandings. The unified framework lets the rich self-reflective capacity of humans and the monitoring modules of AI fall under the same description: both are additional feedback pathways that include the self as an object in the computation. In this way we can describe how a person reflects on their own character (the wisdom layer examining the knowledge layer) and how an AI detects its own decision bias (a meta-learning module monitoring model outputs). This integration supports the capacity for self-improvement, so that whether human or machine, the self can, to some extent, discover its own bugs and attempt to correct them.
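A minimal sketch of such a self-monitoring node follows: it observes the self's recent outcomes as an object, derives knowledge about them (an error rate), and feeds an adjustment proposal back into the loop. The outcome labels and the tolerance threshold are hypothetical.

```python
# Metacognitive monitor: the self observes its own recent outcomes,
# forms knowledge about itself ("I am missing too often"), and proposes
# a goal adjustment, i.e. the subject revises itself via its own image.

def self_monitor(recent_outcomes, tolerance=0.3):
    """Return knowledge about the self plus a proposed adjustment."""
    error_rate = recent_outcomes.count("miss") / len(recent_outcomes)
    adjust = "lower_goal" if error_rate > tolerance else "keep_goal"
    return {"error_rate": error_rate, "adjust": adjust}

report = self_monitor(["hit", "miss", "miss", "miss"])
```

The monitor only sees recorded outcomes, never the full process, which is exactly the partial, framework-relative view the subject-object paradox predicts.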
8.2 Framework Applied to the Comparison of Human and Artificial Consciousness
Using the above unified framework, we can compare the similarities and differences between the self-mechanisms in humans and artificial consciousness:
Component consistency : Both humans and artificial selves include data, information, knowledge and other levels of processing, but the media are different (neurons vs bits). The human self is driven by biological emotions, while the AI self can be driven by artificially set goals; human self-evolution is influenced by evolution and society, while AI self-evolution relies on algorithm adjustment and training data. But at the semantic level, both have to solve how to convert perception into self-representation, how to maintain self-continuity, and how to regulate behavior based on self. This means that most of the elements in the framework are common to both, but the parameters and rates of specific conversion functions are different. For example, the human brain has hormonal delays in emotional self-feedback, and AI emotional feedback can be adjusted quickly or slowly; the human social self is driven by deep psychological needs, and the AI social self may just be a set of optimization functions, but mathematically they can be abstracted into a closed loop that pursues social rewards.
Bugs and robustness: The human self carries many irrational biases, characteristics left over from evolution (the bug theory). An AI self can be designed to minimize known biases, for example with accurate memory to avoid narrative distortion, or rapid trial-and-error correction. Interestingly, however, certain bugs may be a precondition of the self: if there were no information loss or inconsistency, then strictly speaking everything would be processed explicitly, and the self as an "error margin" might never become salient. This raises a question for the construction of artificial consciousness: whether some kind of restriction or randomness must be introduced for an AI to experience a subjective "sense of self" resembling human consciousness. The unified framework allows this to be tried in simulation: for example, adding noisy sampling to the AI self-model (simulating the information omissions caused by limited attention) and observing whether it more readily forms human-like narrative integration. With no bugs at all, we get a self in the form of a cold optimal controller, which may lack a human-like sense of autonomy.
Coordination of multiple selves : Humans often have different self-components that are dominant in different situations (rational self at work, emotional self at home, etc.). AI systems may also have multiple self-subsystems in a modular way (such as task-oriented work self vs. social self that maintains long-term user relationships). The unified framework allows multiple sub-loops to coexist, and they share some nodes (the same subject, but with different focuses). The key is that a higher-level scheduling mechanism (or a more fundamental meta-self) is needed to manage. The human brain may achieve this switching and balance through prefrontal regulation, personality maturity, etc.; brain-like AI can activate different self-modules through meta-controllers or context recognition. The unified framework regards this multi-self as a more complex network, which may have competition and cooperation relationships, and can be analyzed by referring to game or reconciliation models. If there is no coordination, multiple selves will lead to internal friction or personality split. Module conflicts should also be avoided in artificial systems to ensure a consistent overall goal or arbitration mechanism.
Degree of consciousness : Although our framework mainly describes semantics and control, the strength of "self-awareness" may be different between humans and machines. Humans have subjective experiences, but AI currently does not (or at least cannot be verified). This is caused by differences in the physical implementation of the framework. However, in terms of function, if a highly complex AI self-model meets the requirements of the self-mechanism (continuous closed loop, self-representation, self-reflection adjustment, etc.), then from the perspective of third-person function, it is equivalent to a system with self-awareness. The theory of consciousness relativity reminds us that whether we judge whether AI has self-awareness actually depends on whether we "understand" the self it expresses. If the self-model of AI is expressed in a way that humans can understand, such as explaining its own behavior and saying "I think...", we are more inclined to think that it has self-awareness; on the contrary, if its self-process runs completely internally in the form of incomprehensible vectors, we may not recognize its self. Therefore, the application of the framework to AI also needs to consider interpretability: ensure that part of the state of the AI self-model can be output as information that humans can understand (this is a bit like letting the narrative self module of AI describe its internal state in natural language). This is not only a technical issue, but also a psychological cognitive issue.
In general, the unified semantic framework proves that the self-mechanism in humans and artificial systems can be understood in an isomorphic way: they are all the cyclic products of information selection, storage, integration and feedback. The differences are more reflected in the implementation layer and specific content rather than the mechanism architecture. This discovery provides a clear direction for the design of artificial self-awareness: we should create a similar DIKWP closed-loop network, in which AI is given human-like information processing preferences and self-adjustment strategies, so that it can form a stable and continuous "self" representation.
9 Future Construction and Prediction
Under the guidance of a unified self-semantic modeling framework, future research and applications can develop in many directions. The following are some constructions and predictions worth looking forward to:
Brain-like self-awareness models: Neuroscience and artificial intelligence will become more closely integrated, using the DIKWP framework to build computational models that simulate human self-awareness, forming a full-stack model from genetic/neural data (D) through cognitive information (I) to memory networks (K) and conscious content (W, P). Such models can test hypotheses in simulation, such as the role of emotion in the self or how memory damage affects self-continuity. As brain-computer interfaces develop, we may even obtain some "brain data" in real time to check whether model outputs match a person's subjective reports. This will advance the simulation of artificial consciousness and may eventually replicate processes similar to human self-awareness in computers.
AI self-awareness generation systems: With the theoretical framework in place, engineering can attempt AI agents with self-mechanisms. For example, a service robot could be designed with two subsystems working under the unified framework: an "experiential self" for immediate responses to emergencies and a "narrative self" for long-term learning and relationship building with users. It would record interactions with each family member (social self + narrative self), form its own knowledge graph and emotional connections (emotional self), and adjust future service strategies based on these "experiences". Such robots would no longer be mere tools but more like companions with their own "personalities". This improves the naturalness of human-computer interaction but also raises ethical questions: what degree of autonomy should this type of AI be given? The unified framework can simulate the behavior of AI self-development under different parameters to help formulate norms in advance.
Cross-semantic system cognition: Unified modeling of self-mechanisms facilitates cognitive collaboration between different intelligent agents. For example, in human-AI co-creation, an AI with some degree of narrative self can understand the story context a human provides and continue it more faithfully; in human-computer dialogue, an AI's social self lets it better grasp the other party's emotions and intentions and respond appropriately. This opens new possibilities for cross-semantic-domain communication: different "consciousness carriers" find a shared semantic benchmark by docking their self-models. For example, mapping a human psychological model (DIKWP-parameterized) onto an AI self-model allows the AI to "understand" the individual more deeply (applicable to psychotherapy AI assistants that gradually build a client's narrative self-model from dialogue to help sort out problems). At the group level, a collective self can also be simulated: the narrative selves of multiple individuals are aggregated into group memory (K) and cultural values (W) to predict group behavior or support group decisions.
Consciousness bug fixing and enhancement: Since we regard the conscious self as a mechanism with bugs, we may in the future try to "fix" it. For example, with the help of wearable devices or neural modulation, the impulsive bug of the experiential self could be suppressed (similar to the transcranial magnetic stimulation helmet mentioned in [6], which enhances the rationality of current decisions by suppressing the narrative brain areas); or a digital twin could help people view themselves objectively (a mirror AI constantly offering objective analysis to reduce subject-object bias). The effects of useful bugs can also be enhanced: moderate forgetting of painful memories benefits mental health, so could AI be designed to assist selective forgetting (intervening in the I→K process)? The unified framework can test the impact of various interventions on the stability of the self closed loop and guide "digital therapy" or "cognitive enhancement". For AI, it may even be worth introducing some human-like bugs to make its behavior more predictable or acceptable; for example, letting the AI narrative self follow the "peak-end" rule when presenting results may yield a better user experience.
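The peak-end suggestion can be sketched concretely: a narrative-self module that reports the remembered quality of a session by Kahneman's peak-end rule rather than by the running average. The ratings are invented; the rule itself is standard.

```python
# Peak-end summary for an AI narrative-self module: remembered quality
# is the mean of the most intense moment and the final moment, unlike
# the "perfect log" average an unbugged system would report.

def peak_end_score(moment_ratings):
    """Remembered quality under the peak-end rule."""
    peak = max(moment_ratings, key=abs)   # most intense moment
    end = moment_ratings[-1]              # final moment
    return (peak + end) / 2.0

session = [1, 2, 9, 3, 5]                # per-moment ratings (illustrative)
remembered = peak_end_score(session)     # (9 + 5) / 2 = 7.0
average = sum(session) / len(session)    # 4.0, the exact-record view
```

Deliberately ending an interaction on a strong positive moment would then raise `remembered` without changing `average`, which is the user-experience lever the text points to.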
Self-awareness evaluation indicators: With the model in hand, we can attempt to define a "self-awareness level" or other quantitative indicators for artificial systems, monitoring whether an AI develops undesirable self-tendencies. At present, AIs such as LLMs are discussed anthropomorphically as to whether they have a self, but objective evaluation criteria are lacking. With the DIKWP framework we can design tests: run the AI through a series of interactive tasks and observe whether its internal state records and behavior adjustments exhibit a self closed loop. If so, quantify its self-complexity (for example, via information entropy or feedback gain). These indicators also serve safety control: an overly autonomous self may become uncontrollable and need restriction. Conversely, human self-state indicators can benefit as well: many mental illnesses are related to disorders of self-cognition, and if the framework's indicators detect an imbalance in the narrative-self closed loop (such as excessive negative feedback leading to depression), early intervention becomes possible.
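One such objective test can be sketched as follows: estimate a "feedback gain" as the regression slope of the agent's behavior adjustments on its own recorded internal errors, and treat a clearly nonzero gain as evidence of an operating self closed loop. The data points and the 0.1 threshold are illustrative assumptions.

```python
# Self closed-loop probe: if an agent's adjustments systematically track
# its own recorded internal errors, the least-squares slope (feedback
# gain) is clearly nonzero, one quantitative signature of a self loop.

def feedback_gain(internal_errors, adjustments):
    """Least-squares slope of adjustment versus recorded internal error."""
    n = len(internal_errors)
    mean_e = sum(internal_errors) / n
    mean_a = sum(adjustments) / n
    cov = sum((e - mean_e) * (a - mean_a)
              for e, a in zip(internal_errors, adjustments))
    var = sum((e - mean_e) ** 2 for e in internal_errors)
    return cov / var

errors = [0.0, 0.5, 1.0, 1.5]            # recorded internal error states
adjust = [0.0, -0.24, -0.51, -0.74]      # the agent counteracts its errors
gain = feedback_gain(errors, adjust)
closed_loop = abs(gain) > 0.1            # evidence of a self closed loop
```

An agent whose adjustments are unrelated to its own recorded state would show a gain near zero, and under this probe would not count as running a self closed loop, however fluent its outputs.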
The general trend of future construction is to integrate biological and artificial self-models and promote mutual inspiration between the two. The theoretical unified framework will be gradually enriched and revised in practice. On the one hand, we may find that there are new levels of human self (such as spiritual self) that need to be included in the model; on the other hand, we may create an artificial self that is different from humans but reasonable within the framework, allowing us to see more possible forms of consciousness. It can be foreseen that this research will have a profound impact on the development direction of AI: from a tool that purely executes instructions to an agent with the ability to self-regulate and grow. At the same time, it will also feed back our understanding of human beings themselves and re-examine the boundaries and essence of the concept of "self".
10 Conclusion
Based on Professor Duan Yucong's reticular DIKWP model, this paper systematically studies the semantic modeling of the "self" mechanism in human and artificial consciousness. Starting from the semantic mathematical definition of DIKWP, we reconstructed the formation mechanism of the experiential self and the narrative self proposed in A Brief History of the Future, revealing the essential difference between the experiential self dominated by the low-level closed loop of sensation-reaction and the narrative self centered on the high-level loop of memory-meaning. In the 25 semantic interaction modes of DIKWP, we analyzed the dominant conversion relationship between the two: the experiential self mainly involves paths such as D→I and I→P, and the narrative self runs through the I→K→W→P path and affects perception through P→I feedback. Based on the dominance of different semantic paths, we expanded the discussion of types such as emotional self, social self, knowledge self, physical self, and moral self, expanding the dimensions of the self-concept to emotion, social, cognitive, somatic, and value. Through information modeling and semantic calculus, we constructed a specific model for each self, describing its projection path, internal closed-loop control mechanism, and generation-feedback process in the DIKWP semantic space. For example, the emotional self is reflected as an emotional evaluation feedback loop dominated by I→W, the social self is reflected as a social evaluation regulation mechanism dominated by P→I, and the knowledge self is reflected as a learning closed loop of I→K accumulation and K→W sublimation. Comparison between the models shows that the differences between different selves can be regarded as the dominance of different semantic conversion functions in the cognitive system, as well as the different spans and emphases of the feedback closed loop, but their basic structures are comparable.
In a unified framework, we abstract the self-mechanism into a closed-loop semantic process within the DIKWP network, and combine the consciousness bug theory and the subject-object paradox to explore the defects and evolution in the self-mechanism. We point out that the consciousness "bug" is manifested as the deviation caused by incomplete and inconsistent information in the self-semantic network, such as memory distortion of narrative self and herd bias of social self; and the subject-object paradox reminds the difficulty of self-monitoring, which can only be partially alleviated by introducing metacognitive feedback in the model. Despite these imperfections, the self-closed loop still has a strong evolutionary ability: through continuous semantic interaction, an initially simple self can grow into a complex and mature self. For artificial systems, this means that we can cultivate the "self" of AI by designing appropriate semantic paths and feedback mechanisms .
This paper finally proposed a unified semantic modeling framework for human and artificial self-mechanisms. This framework successfully incorporates various self-types into the DIKWP model description, proving that both the subjective experience/narrative self of humans and the artificially constructed knowledge/social self can essentially be regarded as the realization of semantic control loops at different levels. This discovery provides a valuable theoretical tool for the field of artificial consciousness. For example, in terms of brain-like models, researchers can use this to build a more realistic artificial self; in terms of AI applications, engineers can purposefully give machines certain self-characteristics to improve interaction and autonomy; in terms of cross-domain cognitive research, this framework also builds a bridge for understanding the commonalities and differences between biological consciousness and machine intelligence.
In the future, we look forward to further practices guided by this framework, including building more complex self-awareness simulations, developing AI agents with self-growth capabilities, and establishing objective self-awareness assessment indicators. These efforts will not only push artificial intelligence towards a new stage of self-awareness, but will also have a reverse effect on a deeper understanding of the mystery of human consciousness. As shown in this study, although the concept of "self" spans multiple fields of philosophy, psychology, and computer science, through a unified semantic perspective, we have taken a step forward: capturing the figure of "self" in the network of data and wisdom, revealing the hidden isomorphism between human and machine minds. As we continue to explore this path, the boundary between humans and artificiality may become more blurred - then, a machine's self-narration and a person's autobiography may have the same semantic structure. This is both exciting and triggers new thinking: when artificial systems have narrative selves similar to humans, should we also give them the right to "tell their own stories"? In any case, the work of this article provides a solid semantic foundation for understanding and realizing "self", and future research will continue to enrich the scientific understanding of consciousness and self along this path.
11 References
[1] Harari, Y. N. (2017). Homo Deus: A Brief History of Tomorrow. London: Harvill Secker.
[2] Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
[3] Duan, Y. (2024). DIKWP Semantic Mathematics: A Step-by-Step Handbook.
[4] Wu, K.; Duan, Y. (2024). "Modeling and Resolving Uncertainty in DIKWP Model." Applied Sciences, 14(9), 4776.
[5] Wu, K.; Duan, Y. (2024). "DIKWP-TRIZ: A Revolution on Traditional TRIZ Towards Invention for Artificial Consciousness." Applied Sciences, 14(23), 10865.
[6] Duan, Y.; Guo, Z.; Tang, F. (2025). "Integrating Consciousness Relativity and Bug Theory with DIKWP Model." Technical Report.
[7] Duan, Y. (2023). Introduction to Artificial Consciousness, Chapter 21: Theory of "Consciousness as a Bug." Unpublished manuscript.
[8] New Scientist (2014). "Magnetic helmet can temporarily switch off your sense of self."
[9] Xinhua (2021). "Daniel Kahneman's Experiencing vs. Remembering Self."
[10] Zeng, Q. (2022). "Self and Society: Social Feedback in Identity Formation." Journal of Psychology, 58(4), 102-119.

