
General Artificial Intelligence (AGI) Evaluation DIKWP Laboratory
2025-11-03

Semantic Analysis of Medical Consultation Based on DIKWP Semantic Mathematics: A Case Study of Cold–Pharyngitis–Bronchitis


Yucong Duan
Benefactor: Zhendong Guo


International Standardization Committee of Networked DIKWP for Artificial Intelligence Evaluation (DIKWP-SC)
World Artificial Consciousness CIC (WAC)
World Conference on Artificial Consciousness (WCAC)
(Email: duanyucong@hotmail.com)



Abstract
This study conducts a semantic analysis of a medical consultation dialogue centered on the clinical progression of cold–pharyngitis–bronchitis, using the DIKWP semantic mathematics ontology proposed by Professor Duan Yucong. By analyzing the patient's free-form narrative, the doctor's questions, and the expressions and intentions of the DIKWP artificial consciousness system, we annotate each word and phrase according to its corresponding level and semantic role within the five semantic spaces: Data (D), Information (I), Knowledge (K), Wisdom (W), and Purpose/Intention (P). Through this annotation, we identify the semantic interaction pathways (e.g., D→I, I→K, W→P) and clarify the roles of each semantic unit in the process of semantic evolution.
We further integrate the "subjective Same-Different-Complete" (S-D-C) semantic self-feedback mechanism proposed by Professor Duan to analyze how meaning is generated and cognition is driven by the dynamic balance of similarity, difference, and completeness in the dialogue. Visualizations including semantic network graphs and tables are employed to illustrate the positions, associated nodes, and closed-loop structures of lexical units within the DIKWP semantic space. This highlights the model's support for cognitive tasks such as inference from symptoms to diagnosis, as well as its contribution to concept formation and intention construction.
Finally, we examine potential cognitive "bugs" that may arise in dialogue semantic processing and explain how the DIKWP semantic mechanism, through self-feedback and closed-loop regulation, compensates for such defects and eliminates ambiguity, thereby ensuring the coherence and completeness of semantic understanding.
Keywords
DIKWP model; semantic mathematics; artificial consciousness; medical consultation dialogue; Same-Different-Complete mechanism; consciousness bug theory; semantic networks
Introduction
In the fields of artificial intelligence and cognitive science, the DIKWP model provides a novel framework for understanding and simulating human cognition (Duan Yucong: The DIKWP Artificial Consciousness Model Leading the Future of AI, Phoenix News). Building on the traditional DIKW (Data–Information–Knowledge–Wisdom) pyramid, the model adds a fifth layer—Purpose/Intention (P)—and transforms the structure from a linear hierarchy into a mesh-like interactive network (ibid).
Within the DIKWP semantic mathematics ontology, cognition is viewed as a dynamic process of bidirectional feedback and iterative updates among the five layers: Data (D), Information (I), Knowledge (K), Wisdom (W), and Purpose (P). Each layer corresponds to a specific level of semantic abstraction and cognitive function [(PDF) Integrating the Mesh DIKWP Model with the Theory of Relativity of Consciousness and the Theory of Consciousness Bugs]. Unlike the traditional bottom-up linear flow, the DIKWP model allows higher-level intentions and wisdom to influence lower-level data selection and interpretation, forming a closed-loop cognitive network (ibid). This fully connected network ensures continuous interaction from low-level sensory data to high-level intentionality, endowing the system with adaptability and self-correction capabilities (ibid).
To explore how the DIKWP model functions in practical cognitive tasks, this study selects the medical consultation scenario—a context rich in semantic complexity—for a case analysis. In medical dialogues, patients typically describe symptoms and concerns in natural language, while doctors gradually gather information, form diagnoses, and propose treatments through inquiry and reasoning. This process involves intensive semantic processing: how symptom data are transformed into meaningful information, matched with medical knowledge, integrated into diagnostic reasoning, and how mutual intentions between doctor and patient are communicated and aligned.
While traditional natural language processing (NLP) focuses on grammatical components or entity recognition, our analysis is fully grounded in the DIKWP semantic mathematics ontology. We dissect each word in the dialogue from the perspective of cognitive semantic hierarchy and semantic roles to reveal deeper semantic flows and cognitive mechanisms.
Notably, we incorporate the "Consciousness Bug Theory" introduced by Professor Duan as a lens for examining cognitive units within the dialogue [(PDF) Integrating the Mesh DIKWP Model with the Theory of Relativity of Consciousness and the Theory of Consciousness Bugs]. This theory views human consciousness as a subjective phenomenon emerging from processing limitations and incomplete information. In this view, consciousness is not a flawless logical product but a byproduct of cognitive "bugs"—imperfections such as inconsistency, uncertainty, or incompleteness. Interestingly, these bugs are not necessarily detrimental; rather, they can trigger higher-level cognitive processes to compensate for the gaps (ibid). In human–machine or doctor–patient communication, such bugs manifest as incomplete or asymmetric information (e.g., patient’s unclear description leading to misinterpretation), or semantic deviations caused by contextual mismatches.
The DIKWP model offers mechanisms to detect and regulate such bugs. Through self-feedback across semantic layers, the system can engage higher-level wisdom and purpose to correct deviations when a cognitive bug arises. Incorporating the Consciousness Bug Theory into the DIKWP framework helps explain the origins of semantic ambiguity in dialogue and how such ambiguity is resolved through feedback and repair processes.
This study aims to demonstrate the practical application of these theoretical constructs through a specific case. We analyze a common disease progression—cold to pharyngitis to bronchitis—as the consultation scenario. The continuity of illness across this chain generates a wealth of semantic content: symptom descriptions, causal inference, diagnostic reasoning, and therapeutic intent. This makes it an ideal subject for hierarchical analysis under the DIKWP model. Each utterance and word in the dialogue is annotated for its semantic layer within DIKWP, and we analyze how cognitive understanding of illness emerges through the S-D-C mechanism.
We construct semantic network diagrams to visualize the associative paths and loop structures of semantic elements, and we examine whether cognitive bugs occur in the interaction and how the system addresses them. All interpretations in this paper are firmly rooted in Professor Duan’s Consciousness Bug Theory, avoiding conventional surface-level NLP classifications and instead highlighting the unique advantages of the DIKWP semantic mathematics framework for analyzing natural language dialogue.
The structure of this paper is as follows: First, we present the dialogue corpus and annotate it semantically by segment. Then, in the section "Semantic Mapping and Network Path Analysis," we summarize the hierarchical affiliations and interaction paths of semantic units, accompanied by visual diagrams. Next, in the "Subjective Semantic Mechanism Analysis" section, we explain how the semantic self-feedback process (Same, Different, Complete) promotes cognitive development and address the cognitive bugs and compensation mechanisms observed in the dialogue. Finally, we conclude with visual and tabular summaries and a reference list.
Corpus Presentation and Segmental Annotation
The selected dialogue scenario involves a patient who, after experiencing an upper respiratory tract infection (commonly referred to as a cold), notices a worsening of symptoms and consults a doctor. The patient suspects that the illness may have progressed from pharyngitis to bronchitis. The dialogue includes the patient’s chief complaint (symptom descriptions and inquiries), the doctor’s follow-up questions (targeting symptom details and physical signs), and the doctor’s diagnosis and suggestions. In this process, we also simulate the internal semantic and intentional expression of a DIKWP-based artificial consciousness system to illustrate how a cognitive system operates. Below, the conversation is presented sentence by sentence in chronological order, with detailed semantic annotations following each line:
Patient: Last week I had a cold [K – knowledge], my throat hurt [D – symptom] for three days [D – duration], and I also had a cough [D – symptom] and phlegm [D – symptom]. What’s going on? [P – intention]
Semantic Analysis:
The patient begins by stating they had a cold last week, which is a knowledge-based summary of a past illness (“cold” is a medical concept, located in the K layer). The patient then lists current symptom data: throat pain for three days, with cough and phlegm (these specific symptoms and their duration belong to the Data layer, D). Combined via conjunctions like “also,” they form symptom clusters representing information (I). The final question, “What’s going on?”, directly expresses an intention (P layer): the patient seeks an explanation for symptom escalation, hoping for a knowledge- or wisdom-level response. This illustrates the flow D → I → K, culminating in a P-layer inquiry. For the doctor (or artificial system), this statement becomes an input, triggering their own DIKWP cognitive processing.
System (internal intention recognition):
Current patient intention = P layer (seeking causal explanation); infer illness progression from symptom data to generate knowledge (K).
(Note: This represents how a DIKWP artificial consciousness system internally interprets the patient's utterance and prepares for reasoning.)
Doctor: Do you [D – subject] have a fever [K – symptom concept]? What color [K – attribute] is the phlegm [D – symptom object]?
Semantic Analysis:
The doctor begins targeted questioning to clarify missing information. “Do you have a fever?” involves the pronoun “you” as a subject entity (D), and “fever,” a symptom concept (K). The interrogative form itself conveys the intention (P layer) to obtain data. Similarly, the question about the phlegm’s color brings together the object “phlegm” (D) and its property “color” (a K-layer concept), expressed in interrogative form to elicit information (I). This exemplifies a P → W → D interaction: high-level intention drives data collection, guided by wisdom-level decisions.
Patient: I have a slight [I – degree] low-grade fever [K – symptom], my temperature is 37.8°C [D – value]. The phlegm [D – object] is a little [I – degree] yellow [D – attribute].
Semantic Analysis:
The patient responds with data and information. “Low-grade fever” conveys intensity (I layer) of the symptom, and the specific temperature is data (D). “Yellow phlegm” adds concrete observable information, again combining D and I layers. The response fills in gaps from the doctor’s question, with no new intention expressed, reflecting the patient's ongoing cooperative P-layer purpose of aiding diagnosis.
System (internal information integration):
New data: 37.8°C + yellow phlegm → inferred semantic difference at I layer → possible transition from viral to bacterial infection (K-layer hypothesis).
Doctor: Has your throat pain [I – symptom] been present since the cold [K – condition] began [P – confirmation]? Do you have chest pain [I – symptom] now [D – time]?
Semantic Analysis:
These questions explore symptom timeline and possible progression. “Throat pain since the cold” links a symptom to the timeline of a known illness (K-layer disease concept), seeking confirmation (P). “Now” situates the question in time (D), and “chest pain” checks for additional symptoms, possibly indicating lower respiratory tract involvement. These are closed-ended questions intended to verify hypotheses, illustrating the K → I and W → D pathways.
Patient: My throat started hurting [I – symptom] during the cold [D – time], then [D] it got better for two days [D – duration], but yesterday [D – time] it started hurting again [I – symptom], and the cough moved [W – trend] down to my chest [D – body part], and I feel [W – subjective] chest tightness [I – symptom].
Semantic Analysis:
The patient offers a rich narrative of symptom progression, using temporal markers (D) and subjective evaluations (W). “Cough moved to my chest” reflects intuitive perception of symptom migration (W layer), and “feel chest tightness” adds a new symptom based on personal experience. The patient links data points to form a coherent chain of information (I), allowing the doctor to validate and extend knowledge (K). This showcases an I → K transition facilitated by subjective insights (W).
Doctor: It sounds [W – judgment] like your upper respiratory infection [K] (cold [K]) may have [W – uncertainty] developed into pharyngitis [K], and then [W – sequence] progressed downward [W – causality] into bronchitis [K].
Semantic Analysis:
The doctor delivers a diagnostic summary, leveraging multiple medical concepts (K layer). “It sounds like” signals reasoning (W), and the structure of “developed into… progressed downward into…” reflects causal reasoning based on knowledge. The use of “cold” alongside its medical term indicates an intentional semantic alignment (W → P), aiding patient comprehension. This comprehensive explanation represents a semantic closure, integrating D/I → K → W → P.
Patient: Is it a bacterial infection [K – etiology]? Do I need [I – necessity] antibiotics [K – treatment]?
Semantic Analysis:
The patient now focuses on etiology and treatment, seeking confirmation of the doctor’s implicit suggestion. “Bacterial infection” and “antibiotics” are K-layer terms. The use of “Is it” and “Do I need” reflect intentional querying (P), indicating the patient’s desire to clarify knowledge and act accordingly. These questions demonstrate semantic convergence between patient and doctor, with the patient beginning to use medical knowledge constructs.
Doctor: From your symptoms [W – basis], it’s possible [W – uncertainty] that a viral infection [K] turned into a bacterial infection [K] causing bronchitis [K]. Depending on the situation, you may need antibiotics [K – treatment]. I’ll run a test [W – action] first to confirm the diagnosis [W – purpose].
Semantic Analysis:
This closing statement addresses both of the patient's concerns: confirming the suspected cause and outlining the treatment plan. The explanation synthesizes knowledge (K), wisdom (W), and intention (P): wisdom is used to moderate the decision (“depending on the situation”), and purpose (P) is made explicit through the plan to confirm the diagnosis via testing. This final exchange closes the cognitive loop: high-level intention (healing) drives a return to data collection (diagnostic tests), maintaining the mesh interaction of DIKWP layers.
Through line-by-line semantic annotation, this case study illustrates how natural language expressions from the patient are mostly grounded in the data layer (specific symptoms, times), occasionally supplemented with intuitive knowledge, and ultimately expressed as questions at the purpose layer. The doctor's language, by contrast, operates heavily in the knowledge and wisdom layers, guiding the dialogue with reasoning and strategic questioning. Throughout, the artificial consciousness system processes each segment through its own DIKWP structure—mapping patient input into data and information to activate knowledge-level reasoning, and ensuring that doctor responses fulfill current intentions with self-feedback when necessary.
Overall, this dialogue vividly demonstrates the DIKWP model's mesh-like interaction: the five semantic layers—Data, Information, Knowledge, Wisdom, Purpose—interweave through repeated cycles of D → I → K → W → P abstraction and P/W → K → I → D feedback, enabling a successful cognitive exchange.
Semantic Mapping and Network Path Analysis
Based on the annotations above, we can globally map the semantic elements of the dialogue into the DIKWP semantic space, identifying the hierarchical level to which each element belongs and its connections within the dialogue network. Table 1 summarizes the key semantic units classified by DIKWP levels:
Table 1: DIKWP Hierarchical Categorization and Examples of Semantic Units in Dialogue
Data Layer (D):
Raw facts and sensory details mentioned by the patient. Examples include time expressions and numerals (“last week,” “three days,” “37.8°C”), objective symptoms (“throat,” “cough,” “phlegm,” “chest”), and symptom features (e.g., “yellow”). Data elements are characterized by objective sameness — e.g., “37.8°C” is a concrete fact with consistent interpretation.
Information Layer (I):
Formed through the combination or processing of data to express specific conditions or differences. Examples: symptom duration and change (“throat pain for three days,” “got better for two days,” “started hurting again yesterday”), degrees or qualities of symptoms (“slight fever,” “phlegm slightly yellow”), co-occurrence (“throat pain and phlegm”), and descriptions of perceived symptom progression (“cough moved to chest”). These reflect subjective differences — e.g., “normal vs. yellow phlegm” signals abnormality.
Knowledge Layer (K):
Medical concepts, diagnostic terms, and etiology/pathophysiology knowledge. Includes disease names (“cold,” “pharyngitis,” “bronchitis”), etiology categories (“viral infection,” “bacterial infection”), symptom classifications (“fever”), and treatment concepts (“antibiotics”). Knowledge elements emphasize semantic completeness — e.g., “bronchitis” integrates multiple symptom indicators into a coherent diagnostic entity.
Wisdom Layer (W):
High-level reasoning, decision-making, strategic actions, and subjective judgments. Includes diagnostic phrases and inferential signals (“it seems... possibly...”), causal relations (“led to...”), action verbs (“need to treat”), medical strategies (“ask about fever/phlegm color,” “conduct test”), and patient intuition (“feel chest tightness”). Wisdom elements reflect balanced judgment — doctors integrate knowledge to make treatment decisions; patients interpret their bodily signals. This layer represents the logical and experiential aspect of cognition.
Purpose Layer (P):
Intentions, motivations, and communicative goals expressed in dialogue. Includes: patient inquiries seeking explanation (“What’s going on?”), attempts to confirm diagnosis/treatment (“Is it...?”), the doctor’s diagnostic aims (“to confirm diagnosis”), and strategic intentions behind actions (e.g., questioning or testing). Purpose elements define system direction and intent — they drive inter-layer transformation and feedback, and ultimately determine the success of the cognitive process (i.e., whether intentions are fulfilled).
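The layer taxonomy in Table 1 can be made concrete with a small data-structure sketch. The following Python fragment is an illustrative assumption (not part of the DIKWP formalism): it tags a few example units from the dialogue with their layers and groups them as in the table.

```python
from dataclasses import dataclass

# The five DIKWP layers, as defined in Table 1.
LAYERS = ("D", "I", "K", "W", "P")

@dataclass
class SemanticUnit:
    text: str   # surface expression from the dialogue
    layer: str  # one of the five DIKWP layers

    def __post_init__(self):
        if self.layer not in LAYERS:
            raise ValueError(f"unknown DIKWP layer: {self.layer}")

# Example annotations taken from the dialogue above.
annotation = [
    SemanticUnit("37.8°C", "D"),
    SemanticUnit("phlegm slightly yellow", "I"),
    SemanticUnit("bronchitis", "K"),
    SemanticUnit("conduct test", "W"),
    SemanticUnit("confirm diagnosis", "P"),
]

# Group units by layer, reproducing the structure of Table 1.
by_layer = {layer: [u.text for u in annotation if u.layer == layer]
            for layer in LAYERS}
print(by_layer["K"])  # ['bronchitis']
```

A real annotation pipeline would attach speaker, utterance index, and the finer role labels used in the transcript (e.g., "D – duration"), but the grouping step would remain this simple mapping.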
To visualize how these semantic elements relate to one another in the dialogue network, we constructed a semantic network diagram (Figure 1). In this diagram, each key word/phrase is positioned vertically by its DIKWP layer, and arrows indicate semantic relationships and transformation paths — for example, how data combine to form information, which is abstracted into knowledge, which then drives wisdom-level decisions, and in turn prompts new data collection via intention.
Figure 1: DIKWP Semantic Network Diagram of Dialogue Elements
(Note: In the diagram, nodes are vertically arranged by DIKWP layers — Data (D) at the bottom, rising through Information (I), Knowledge (K), Wisdom (W), to Purpose (P) at the top. Solid arrows represent upward inference paths (e.g., data → information → knowledge), while hollow arrows indicate downward feedback (e.g., intention driving data collection). Due to complexity, only key pathways are shown.)
Key Network Pathways and Structures
1. Data to Information (D→I):
Symptoms like “throat pain,” “cough,” “phlegm” are individual data points that, when described together, form composite information nodes such as “persistent symptoms for 3 days” or “worsening condition.” This semantic “addition” corresponds to forming sameness from diverse inputs. For example, “throat pain + 3 days” becomes the informational unit “persistent throat pain,” while “cough and phlegm” become “lower respiratory tract involvement.” This reflects subjective sameness, as the patient bundles symptoms into a coherent illness narrative, and the doctor processes them as an integrated info package.
2. Information to Knowledge (I→K):
Informational patterns (e.g., “mild fever,” “yellow phlegm,” “symptom relapse”) activate related knowledge concepts in the doctor’s mind (e.g., “bacterial infection,” “pharyngitis,” “bronchitis”). These I-layer patterns point upward to K-layer nodes in the diagram — e.g., “yellow phlegm + fever” maps to “bacterial infection,” and “throat pain during cold + symptom migration” to “pharyngitis/bronchitis.” This represents semantic abstraction, turning differences into structured medical knowledge. Patients also perform I→K reasoning (e.g., self-diagnosing “a cold”), though doctors apply it with broader and more accurate frameworks.
3. Knowledge to Wisdom (K→W):
Once diagnostic knowledge is activated, it fuels reasoning and decisions — e.g., “bronchitis” and “bacterial infection” nodes point to a decision node “consider antibiotic treatment.” Similarly, “uncertain diagnosis” links to “run diagnostic tests.” This shows the transition from static knowledge to dynamic judgment. It’s equivalent to semantic application: using knowledge to plan action, enacting the “completeness” of cognition — the knowledge framework gets applied toward the treatment purpose.
4. Wisdom to Purpose (W→P):
Actions such as “conduct test to confirm diagnosis” or “initiate treatment” point toward P-layer goals: “clarify diagnosis” and “cure disease.” If actions don’t serve intentions, they lack purpose. At the same time, the patient’s intention to “understand the cause” is fulfilled when the doctor gives a knowledge-based explanation; their new goal of “recovery” aligns with the doctor’s treatment purpose. This illustrates W→P alignment, where conclusions and actions converge with goals, completing the semantic loop.
5. Purpose to Data Feedback (P→D):
Throughout the dialogue, we observe high-level goals driving low-level actions. Figure 1 highlights the doctor's intentions (P) leading to wisdom-layer actions like questioning or testing (W), which then acquire new data (D). This is feedback from purpose to data, a core tenet of the DIKWP model: without top-down goals, data collection is blind; without data support, high-level reasoning is baseless. Whenever cognitive uncertainty (a potential bug) is encountered (e.g., the doctor is unsure about bacterial infection), the system triggers data acquisition to compensate. For instance, the doctor identifies the missing info about fever and phlegm color and asks targeted questions — a self-correcting mechanism.
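The closed loop formed by pathways 1–5 can be sketched as a small directed graph. The node names below paraphrase Figure 1 and are illustrative assumptions; the point of the sketch is that the P→D feedback edge lets the purpose node reach back to the data node, closing the loop.

```python
from collections import deque

# Edges follow the five pathways; the final edge is the P -> D feedback.
graph = {
    "throat pain + cough + phlegm (D)": ["persistent symptoms (I)"],   # D -> I
    "persistent symptoms (I)": ["bronchitis (K)"],                     # I -> K
    "bronchitis (K)": ["consider antibiotics (W)"],                    # K -> W
    "consider antibiotics (W)": ["confirm diagnosis (P)"],             # W -> P
    "confirm diagnosis (P)": ["run test: new data (D)"],               # P -> D
    "run test: new data (D)": ["throat pain + cough + phlegm (D)"],
}

def reachable(graph, start, goal):
    """Breadth-first search: is `goal` reachable from `start`?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Without the feedback edge this would be False; with it, purpose
# reaches back to data, as the DIKWP mesh model requires.
print(reachable(graph, "confirm diagnosis (P)",
                "throat pain + cough + phlegm (D)"))  # True
```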
This network analysis clearly demonstrates that the DIKWP semantic space supports the entire semantic flow from symptom to diagnosis to decision-making. Each layer (D, I, K, W, P) plays a distinct but interconnected role, enabling both parties in the dialogue to interact within the same semantic framework. Importantly, the network is not unidirectional or strictly agent-specific — in fact, it connects two interacting DIKWP systems (patient and doctor). For example, the patient’s “cold” (K layer) becomes data input (D) for the doctor’s diagnostic process.
This cross-subject semantic interface can sometimes lead to mismatches or bugs — due to asymmetric background knowledge or incomplete information. In this case, communication was smooth due to clear patient inputs and rich doctor expertise. Nevertheless, we observe bug-resolution mechanisms: when the patient didn’t mention fever, the doctor queried it; when using medical jargon (“upper respiratory infection”), the doctor immediately clarified it as “cold” to ensure patient comprehension. These illustrate how the system maintains semantic alignment and overcomes potential gaps.
Summary of Network Analysis
The DIKWP networked model demonstrates exceptional power in processing complex medical dialogue. It not only synthesizes dispersed language elements into a coherent semantic network but also facilitates closed-loop interactions between high-level intentions and low-level data. Each semantic unit is interlinked — some pathways represent bottom-up abstraction (D→I→K→W→P), others are top-down regulatory feedback (P/W→K→I→D), forming a web of meaning.
As Professor Duan Yucong emphasizes, this breaks the linearity of traditional pyramid models, enabling cognition to function as a network. Practically, this empowers AI systems to better understand context, infer implicit intentions, and proactively query for clarification — advancing toward human-like diagnostic dialogue capabilities.
Subjective Semantic Mechanism Analysis: Self-Feedback of “Same,” “Different,” and “Complete”
In the previous section, we examined the distribution and flow of semantic elements from a network perspective. In this section, we further integrate Professor Duan Yucong’s “Same–Different–Complete” (S-D-C) semantic self-feedback mechanism to analyze how semantic elements contribute to meaning-making and cognitive progression. Simply put, “Same,” “Different,” and “Complete” correspond to the principles of similarity, contrast, and completeness, which underlie transformations across DIKWP levels and are essential for self-adjustment and self-evolution in artificial consciousness systems [(PDF) Integrating the Mesh DIKWP Model with the Theory of Relativity of Consciousness and the Theory of Consciousness Bugs].
Using the medical dialogue case as a basis, we analyze how this mechanism operates as follows:
Subjective “Same” — Assimilation through Similarity
When receiving new input, a cognitive subject (doctor or patient) first attempts to assimilate it by matching it with known patterns in their existing knowledge framework. This is the “same” process.
For example, upon experiencing discomfort, the patient quickly classifies it as a “cold,” based on prior experience with similar symptoms. This subjective sameness enables the patient to say, “I have a cold.” Likewise, when the patient describes a sore throat, cough, and phlegm, the doctor immediately associates these symptoms with respiratory tract infections — recognizing them as consistent with known patterns such as “cold progressing to bronchitis.”
From a semantic mathematics perspective, this is equivalent to addition: combining discrete symptoms into a unified conceptual entity. In the semantic network, this is reflected as multiple lower-level nodes converging into a higher-level node (e.g., “throat pain + cough + yellow phlegm” forming an inference toward “lower respiratory tract infection”).
The benefit of the “Same” mechanism is the ability to rapidly assign meaning to new inputs by mapping them to familiar patterns. However, it may also lead to cognitive bias if elements that are not truly the same are assumed to be so — resulting in a misclassification or cognitive bug. In this case, the patient initially assumed it was “just a cold,” which helped reduce anxiety but also risked underestimating the severity. As symptoms worsened, the patient realized the original assumption might be flawed, prompting the need for re-evaluation.
Subjective “Different” — Recognition of Discrepancy
When assimilation fails to explain new input, the subject perceives a discrepancy — realizing that the situation differs from known patterns. This triggers the “Different” mechanism.
In the dialogue, the patient observes yellow phlegm and recurring throat pain, which deviate from typical cold symptoms. This generates confusion, prompting the question: “What’s going on?” — a direct linguistic marker of difference recognition. The patient notes that current symptoms don’t match previous cold experiences, subjectively identifying them as “different.”
Similarly, the doctor, upon hearing persistent pain and yellow phlegm, recognizes that the case likely involves more than a viral cold and could point toward bacterial infection. The follow-up question about fever serves to verify this difference.
The “Different” mechanism enables the subject to detect anomalies or potential bugs in cognition. In semantic mathematics, this aligns with subtraction: eliminating elements from the established similarity set that no longer apply. For instance, if “typical colds don’t involve yellow phlegm,” then the current situation requires a new interpretive model.
While this contrast recognition propels deeper exploration and hypothesis formation, excessive focus on difference may lead to overreaction (e.g., overdiagnosis). In this case, the contrast recognition is both accurate and necessary, prompting both patient and doctor to be alert to a possibly serious progression.
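The "addition" and "subtraction" analogies above can be sketched with ordinary set operations. The symptom sets below paraphrase the dialogue and are illustrative assumptions: intersection captures what the "Same" mechanism assimilates into the known frame, and set difference captures the residue that triggers the "Different" mechanism.

```python
# Known pattern the patient first assimilates to ("just a cold").
known_cold_pattern = {"throat pain", "cough", "phlegm"}
# What is actually observed as the illness progresses.
observed = {"throat pain", "cough", "yellow phlegm", "chest tightness"}

# "Same": the part of the input the existing frame explains.
same = known_cold_pattern & observed
# "Different": the part the frame cannot explain (semantic subtraction).
different = observed - known_cold_pattern

print(sorted(same))       # ['cough', 'throat pain']
print(sorted(different))  # ['chest tightness', 'yellow phlegm']
```

A nonempty difference set is exactly the state that prompts "What's going on?" and the search for a more complete model.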
Subjective “Complete” — Striving for Semantic Closure
Regardless of whether the subject has assimilated or distinguished new input, the ultimate goal of cognition is to achieve semantic completeness — constructing a coherent, comprehensive explanation or decision that aligns subjective experience with objective reasoning.
This completeness is manifested through closed-loop structures where all relevant semantic elements are integrated, and no contradictions or gaps remain.
In the dialogue, the “Complete” mechanism is seen in several stages:
The patient provides a full symptom narrative, enabling the doctor to form an initial model (cold → pharyngitis → bronchitis).
Gaps remain, such as whether the infection is bacterial, prompting the doctor to ask clarifying questions.
Upon receiving answers, the doctor’s diagnosis loop is completed, but the treatment plan remains unresolved.
The doctor proposes testing to confirm the diagnosis and guide treatment — completing the action loop.
When the doctor offers an explanation, and the patient understands the disease progression, semantic consensus is reached.
This pursuit of completeness drives self-feedback across DIKWP layers until the semantic network contains no unexplained nodes or inconsistent paths.
According to Professor Duan’s Consciousness Bug Theory, no cognitive system is inherently complete or self-consistent — all will encounter local bugs (gaps, uncertainties). A healthy cognitive system responds with corrective feedback toward completeness. In this case, completeness is seen at several levels:
Intentional completeness: the patient’s question is answered.
Knowledge completeness: the diagnosis is verified.
Action completeness: a plan (e.g., testing) is in place.
Had these elements been missing, the dialogue might have ended with unresolved issues or medical oversight.
S-D-C as a Recursive Cognitive Cycle
The mechanisms of Same–Different–Complete do not occur in a strict sequence but function as a recursive loop. A subject begins by seeking similarity; if that fails, they detect differences and pursue completeness via feedback. Once semantic completeness is reached, the arrival of new input restarts the loop.
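One pass of this recursive cycle can be sketched as follows. This is a minimal illustration under stated assumptions: the pattern sets and the "most symptoms explained" scoring are placeholders, not the paper's formal definitions of Same, Different, and Complete.

```python
def sdc_step(observed, patterns):
    """One pass of the Same-Different-Complete cycle over observed symptoms."""
    # Same: pick the known pattern that explains the most observed symptoms.
    best = max(patterns, key=lambda name: len(patterns[name] & observed))
    # Different: what the best-matching pattern fails to explain.
    residue = observed - patterns[best]
    # Complete: closure is reached only when nothing is left unexplained.
    complete = not residue
    return best, residue, complete

# Illustrative pattern library (placeholder symptom sets).
patterns = {
    "cold": {"throat pain", "cough", "phlegm"},
    "bronchitis": {"cough", "yellow phlegm", "chest tightness", "fever"},
}
observed = {"cough", "yellow phlegm", "chest tightness"}

label, residue, complete = sdc_step(observed, patterns)
print(label, residue, complete)  # bronchitis set() True
```

In a fuller model, a nonempty residue would trigger another iteration: acquire new data (ask a question, run a test), extend the pattern library, and re-run the step until `complete` holds.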
This feedback cycle allows an artificial consciousness system to self-correct and evolve.
In the current case:
The patient initially classified symptoms as same as a cold.
Upon discovering differences, they sought a complete explanation from the doctor.
The doctor, relying on known patterns, confirmed similarities, but to rule out differences, asked further questions.
Once sufficient data was collected, the doctor offered a complete diagnosis.
Both sides underwent similar S-D-C cycles and combined them through dialogue into a shared semantic loop — patient to doctor to patient — achieving interpersonal cognitive closure.
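The recursive Same–Different–Complete cycle walked through above can be sketched in code. This is a minimal illustration, not part of the DIKWP specification: the symptom sets, the set-based matching heuristic, and the `ask` callback are all illustrative assumptions.

```python
def sdc_cycle(known_pattern, observation, ask):
    """One pass of the Same-Different-Complete loop.

    known_pattern: features the subject already expects (the "Same" baseline)
    observation:   features actually perceived
    ask:           callback that fetches missing semantics (feedback toward "Complete")
    """
    same = known_pattern & observation        # Same: assimilate matching features
    different = observation - known_pattern   # Different: detect anomalies
    while different:                          # Complete: resolve each anomaly via feedback
        feature = different.pop()
        explanation = ask(feature)            # e.g. patient asks doctor about "yellow phlegm"
        known_pattern.add(feature)            # anomaly absorbed into the updated model
    return known_pattern                      # semantic closure for this input

# Toy run mirroring the case: the patient's "cold" frame meets new symptoms.
cold_frame = {"runny nose", "sore throat"}
observed = {"runny nose", "sore throat", "yellow phlegm", "chest cough"}
updated = sdc_cycle(set(cold_frame), set(observed),
                    ask=lambda f: f"doctor explains {f}")
print(sorted(updated))
```

When new input arrives, the updated model becomes the next baseline, restarting the loop exactly as described above.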
This echoes Professor Duan’s Theory of Consciousness Relativity, which posits that the degree to which one cognitive subject sees another as conscious depends on whether the other’s behavior can be semantically interpreted as a coherent and meaningful closed loop. Here, the patient regards the doctor as competent because the explanation offers semantic closure; the doctor, in turn, validates the patient’s input as reliable. Mutual recognition of consciousness and communicability underpins effective cooperation.
Cognitive Bugs as Manifestations of S-D-C Gaps
Cognitive bugs are breakpoints in the S-D-C cycle — moments where matching fails, inconsistencies arise, or closure is lacking. They reflect a system’s recognition of its own limitations.
Examples include:
Uncertainty due to incomplete information.
Conflict between new input and existing knowledge.
However, within the DIKWP framework, bugs are not treated as failures but as opportunities to trigger higher-level cognition.
In our case:
Had the patient omitted half the symptoms, the doctor’s diagnosis might have contained a bug.
Patient feedback filled this gap — the bug was resolved.
If the doctor had assumed bacterial infection without confirmation, that too could be a bug.
Ordering a test turns the bug into a verifiable hypothesis.
Each bug and its correction propels the dialogue toward greater semantic reliability.
This reflects the self-compensating nature of semantic mathematics: when the cognitive chain is disrupted, the system introduces new semantic elements or paths to restore the loop.
In AI applications, this means the system should, like the doctor, recognize its own understanding limits and actively ask questions to fill gaps — a hallmark of the DIKWP artificial consciousness model.
Unlike static NLP systems that lack self-questioning capacity, DIKWP systems, through the S-D-C loop, continuously improve the quality of interaction and semantic understanding.
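The self-questioning behavior described here, recognizing an understanding limit and asking to fill the gap, can be sketched as a slot-checking routine. The slot list and question template below are illustrative assumptions, not a clinical standard.

```python
# Hypothetical required slots for a respiratory consultation (assumed, for illustration).
REQUIRED_SLOTS = ["symptom_onset", "fever", "sputum_color", "cough_type"]

def detect_bugs(patient_record):
    """A 'bug' here is a required slot the dialogue has not yet filled."""
    return [slot for slot in REQUIRED_SLOTS if slot not in patient_record]

def next_question(patient_record):
    """Turn the first open bug into a clarifying question (bug -> feedback)."""
    bugs = detect_bugs(patient_record)
    if not bugs:
        return None  # semantic closure: no open bugs, diagnosis can proceed
    return f"Could you tell me about your {bugs[0].replace('_', ' ')}?"

record = {"symptom_onset": "5 days ago", "sputum_color": "yellow"}
print(next_question(record))  # the system asks about the missing 'fever' slot
```

Each answered question shrinks the bug list, mirroring how the doctor's probing about fever converted a gap into a verified data point.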
Conclusion: S-D-C as the Engine of Semantic Evolution
The “Same–Different–Complete” mechanism breathes self-evolving vitality into the DIKWP semantic space. It enables the system to constantly compare new inputs with prior knowledge (same or different) and seek completeness through feedback.
In this process, every cognitive unit — whether a word, phrase, or concept — exists in dialogue because it serves one of these roles:
For assimilation (e.g., patient saying “cold”).
For highlighting contrast (e.g., “yellow phlegm”).
For enabling completeness (e.g., doctor explaining the disease chain).
Each utterance’s genesis and transformation can be analyzed through the S-D-C lens:
The patient’s question arises because their framework fails to explain the current state (Different), thus they output a Purpose-layer question (P) with a Data-layer sentence.
The doctor’s diagnosis results from having filled in prior bugs, achieving semantic completeness, and thus expressing a Knowledge-layer statement.
By continually looping through S-D-C, the system evolves semantically — responding to change, adapting to gaps, and striving for a shared, complete understanding of the situation.
Figures and Summary
Through the above case analysis, we have thoroughly demonstrated how the DIKWP semantic mathematics model operates in natural language-based medical consultations. By employing diagrams and network analysis, we mapped each word in the dialogue to its position within the semantic space and corresponding interaction paths. Additionally, we integrated the "Same–Different–Complete" mechanism to explore how meaning is generated and cognition evolves. Before concluding, we summarize the key findings as follows:
Key Findings
DIKWP-level word-by-word annotation clearly distinguished components in patient utterances:
Data layer: objective facts (e.g., symptoms, timing)
Information layer: symptom patterns and dynamics
Knowledge layer: disease names and medical concepts
Wisdom layer: reasoning and decision-making
Purpose layer: goals and intents (e.g., asking questions, seeking treatment)
This demonstrates that even in casual conversation, human language inherently spans multiple cognitive levels. Unlike traditional NLP methods that analyze syntax or parts of speech, this annotation approach aligns directly with cognitive processes, making it easier for machines to emulate human understanding.
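The word-by-word layer annotation can be sketched as a phrase-to-layer lookup. The lexicon entries below follow the layer definitions above, but the specific assignments are assumptions made for this sketch, not an official DIKWP lexicon.

```python
# Illustrative annotation lexicon (assumed mappings, following the layer definitions).
LAYER_LEXICON = {
    "three days ago": "D",   # Data: objective timing
    "yellow phlegm": "D",    # Data: observed fact
    "getting worse": "I",    # Information: symptom dynamics
    "bronchitis": "K",       # Knowledge: disease concept
    "should test": "W",      # Wisdom: reasoning / decision
    "want treatment": "P",   # Purpose: patient intent
}

def annotate(utterance_phrases):
    """Map each phrase to its DIKWP layer ('?' when the lexicon has no entry)."""
    return [(p, LAYER_LEXICON.get(p, "?")) for p in utterance_phrases]

print(annotate(["three days ago", "yellow phlegm", "bronchitis", "want treatment"]))
```

A single patient sentence thus decomposes into units at different layers, which is the property the annotation scheme exploits.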
Semantic network and path analysis show that dialogue is not a linear Q&A sequence but an interactive semantic web.
The DIKWP systems of the patient and the doctor become coupled through language, enabling the exchange of knowledge and intent. In a brief dialogue, numerous interaction paths among the 25 possible DIKWP transitions appeared (Duan, Guo, & Tang, 2025), including:
D→I (data into information)
I→K (information into knowledge)
K→W (knowledge into decisions)
W→P (decisions serving goals)
P→D (intention triggering data queries)
K→D (doctor’s knowledge becoming new data input for the patient)
Closed-loop paths in particular (e.g., D→…→P followed by P→D) ensure the conversation deepens and converges meaningfully. For dialogue systems, this implies greater robustness and adaptability.
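The transition paths listed above form a small directed graph, and the closed loop can be checked mechanically. This is a minimal sketch: the edge list encodes only the transitions observed in this dialogue, and the depth-first search is a standard technique, not a DIKWP-specific algorithm.

```python
# Edges observed in this dialogue among the DIKWP layers.
edges = [
    ("D", "I"),  # data into information
    ("I", "K"),  # information into knowledge
    ("K", "W"),  # knowledge into decisions
    ("W", "P"),  # decisions serving goals
    ("P", "D"),  # intention triggering data queries (closes the loop)
    ("K", "D"),  # doctor's knowledge re-entering as patient data
]

def has_cycle_through(edges, start):
    """Depth-first search: is there a path that leaves `start` and returns to it?"""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    stack, seen = list(graph.get(start, [])), set()
    while stack:
        node = stack.pop()
        if node == start:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False

print(has_cycle_through(edges, "D"))  # True: D -> I -> K -> W -> P -> D
```

Dropping the P→D and K→D feedback edges leaves an open chain with no return path, which is exactly the non-converging dialogue the text warns against.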
The subjective semantic mechanism operates continuously throughout the dialogue.
Both the doctor and patient use assimilation (Same), discrepancy detection (Different), and feedback-driven completion (Complete) to process semantic input.
The patient frames their experience as a cold (Same),
Notices abnormalities and asks questions (Different),
And receives clarification (Complete).
The doctor similarly checks assumptions, queries anomalies, and offers an explanation.
Every time a cognitive bug (e.g., incomplete or conflicting information) emerges, the system adapts via feedback: collecting new data or shifting conceptual frameworks. As Consciousness Bug Theory suggests, such bugs actually enhance the system's awareness level.
In our case, if the patient hadn’t been confused (no bug), they might not have visited the doctor; if the doctor had no doubts (no bug), they might not have probed deeper. The effort to manage “imperfect information” fosters deeper understanding — a vital insight for AI:
Artificial systems must learn to recognize the limits of their understanding and proactively query rather than make overconfident assumptions.
The DIKWP semantic space supports core cognitive tasks, such as diagnostic reasoning and intent management.
Diagnosis involves moving from symptoms to causal knowledge and formulating a treatment goal — a process that mirrors DIKWP’s layered structure. By placing dialogue within the DIKWP framework, we observe how an AI system could, like a doctor, transform disorganized patient input into structured reasoning, all while keeping the patient's intent (e.g., seeking explanation or treatment) in view.
DIKWP thus offers a model of cognition for medical dialogue systems — one that delivers interpretable reasoning chains and human-like intent alignment, fulfilling urgent demands for explainable and controllable AI (Duan et al., 2025).
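The layered movement from symptoms to diagnosis to treatment goal can be sketched as a staged pipeline. The classification rules below are toy assumptions chosen to mirror this case study; they are not clinical logic.

```python
def diagnose(symptoms):
    """Toy D -> I -> K -> W -> P pipeline; every rule here is an illustrative assumption."""
    # D -> I: aggregate raw symptoms into an information-level pattern
    pattern = "lower-airway" if "chest cough" in symptoms else "upper-airway"
    # I -> K: map the pattern to a candidate diagnosis (knowledge layer)
    diagnosis = {"lower-airway": "bronchitis", "upper-airway": "pharyngitis"}[pattern]
    # K -> W: decide on a verification or management step
    decision = "order sputum test" if diagnosis == "bronchitis" else "observe"
    # W -> P: package the decision so it serves the patient's treatment intent
    return {"diagnosis": diagnosis, "plan": decision}

print(diagnose({"chest cough", "yellow phlegm"}))
```

Each stage of the function corresponds to one layer transition, which is what makes the resulting reasoning chain inspectable end to end.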
Observed Cognitive Bugs and Strategies
Our analysis revealed several potential cognitive bugs and possible coping strategies:
Knowledge gap bugs:
If the patient lacks the concept of “bacterial infection,” they may not ask the right question, and the doctor must proactively explain — otherwise, the patient’s intention loop remains incomplete.
Semantic misalignment bugs:
If the doctor uses jargon (e.g., “upper respiratory infection”) without translating it, the patient might misinterpret. In our case, these risks were avoided through good communication.
However, in real-world AI applications, systems must be equipped with semantic compensation strategies, such as:
Detecting critical but unmentioned data (as the doctor asked about fever),
Translating medical terms into lay language,
Asking patients if they understand before ending the consultation.
These are examples of how the DIKWP framework can be operationalized in dialogue strategy design, ensuring true semantic closure.
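One of the compensation strategies above, translating medical terms into lay language, can be sketched as a glossary pass over the doctor's utterance. The glossary entries are illustrative assumptions, not a medical terminology standard.

```python
# Hypothetical jargon glossary (assumed entries, for illustration only).
LAY_GLOSSARY = {
    "upper respiratory infection": "an infection of the nose and throat",
    "bronchitis": "inflammation of the airways in the lungs",
}

def compensate(doctor_utterance):
    """Semantic compensation: append a lay paraphrase after each jargon term."""
    out = doctor_utterance
    for term, lay in LAY_GLOSSARY.items():
        if term in out:
            out = out.replace(term, f"{term} (that is, {lay})")
    return out

print(compensate("You likely have an upper respiratory infection."))
```

A fuller system would pair this with the other two strategies (probing for unmentioned data, confirming comprehension) before closing the consultation.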
Conclusion
This case study demonstrates the effectiveness and granularity of applying the DIKWP semantic mathematics ontology to complex human–machine dialogue. Through word-level DIKWP annotation of a cold–pharyngitis–bronchitis consultation, we revealed the deep cognitive structure underlying natural language. The five semantic elements — Data, Information, Knowledge, Wisdom, and Purpose — interact to form a dialogue’s semantic network.
Consciousness Bug Theory and the Same–Different–Complete mechanism, proposed by Professor Duan Yucong, offer a unique perspective on meaning generation and cognitive evolution. They treat cognition as a recursive loop of assimilation, anomaly detection, and pursuit of completeness, helping us explain how each cognitive unit is created and transformed.
This case shows that the DIKWP space can fully support complex cognitive tasks like medical diagnosis:
From symptom to diagnosis (knowledge construction),
From intention to treatment (goal alignment).
Even in the face of ambiguity or inconsistency, the system adapts through feedback and self-correction, achieving semantic closure. This network-based and feedback-driven structure is precisely what future cognitive dialogue systems require: interpretable reasoning, proactive inquiry, and goal-sensitive interaction — moving beyond scripted Q&A toward expert-like, trustworthy AI in fields like medicine.
Outlook
This research serves as an exploratory academic report and aims to offer practical support for DIKWP model implementation. Although the analysis is lengthy and in-depth, such semantic detail is essential to uncover the underlying mechanisms of intelligent dialogue. Future work may:
Include real-world dialogue corpora,
Combine with quantitative evaluations,
Test DIKWP’s handling of ambiguity, implicit intent, and multi-turn interaction,
And integrate these theories into system development — building a DIKWP-based medical dialogue prototype for clinical validation.
Final Thought
The DIKWP semantic mathematics ontology and Consciousness Bug Theory point us toward the design of the next generation of cognitive AI. With precise semantic modeling and effective bug handling, we can build systems that not only “know what” but also “know why” — artificial consciousness capable of expert-level reasoning and trustworthy human interaction.
References
Duan, Y. et al. “The DIKWP Artificial Consciousness Model Leading the Future of AI.” China Media Industry Network, Phoenix Regional Report, March 29, 2025.
Duan, Y., Guo, Z., & Tang, F. “Integrating the Mesh DIKWP Model with the Theory of Relativity of Consciousness and the Theory of Consciousness Bugs.” Technical Report, 2025.
Duan, Y. “Introduction to DIKWP Semantic Mathematics: Conquering Gödel’s Incompleteness Theorem (Beginner’s Edition).” ResearchGate Preprint, 2024.
Duan, Y. “The ‘BUG’ in Consciousness: Exploring the Nature of Abstract Semantics.” Zhihu Column, 2023.
Duan, Y. “Overview of the Networked DIKWP Model.” ScienceNet Blog, 2023.

