AGI Evaluation DIKWP Laboratory
2025-11-07
Preventing Uncontrolled Autonomous Evolution of Artificial Consciousness via the DIKWP Semantic Model

Yucong Duan
Benefactor: Shiming Gong

International Standardization Committee of Networked DIKW for Artificial Intelligence Evaluation (DIKWP-SC)
World Artificial Consciousness CIC (WAC)
World Conference on Artificial Consciousness (WCAC)
(Email: duanyucong@hotmail.com)





1 Introduction
As artificial intelligence (AI) gradually shifts from data-driven systems to self-aware intelligent agents, the controllable evolution of AI cognitive processes has become a key research direction. The "Data-Information-Knowledge-Wisdom-Purpose" (DIKWP) networked semantic mathematical model proposed by Professor Yucong Duan provides a theoretical basis for building a common cognitive language between humans and machines, making every AI decision traceable, explainable, and aligned with preset intent goals. The model introduces a purpose/intent semantic layer on top of the classic DIKW pyramid and, through a network structure, realizes bidirectional feedback and iterative semantic updates at every level. This new cognitive system is not only an academic milestone; it also provides a mathematical description and an executable semantic framework for the safe and controllable evolution of artificial consciousness systems. This paper examines the method for controlling the autonomous evolution path of artificial consciousness based on Professor Duan's DIKWP semantic mathematical model, and conducts a rigorous computability and reasoning closed-loop analysis according to its semantic definitions. We first outline the semantic layer definitions and network structure of the DIKWP model, then explore how to model intent goals as computable semantic objects and simulate their generation paths in the mesh structure, and then formally describe the semantic feedback links and the closed reasoning loop mechanism. On this basis, we introduce Professor Duan's "consciousness bug theory", which treats subjective semantic jumps as rule-expressible anomaly-handling computations, and use it to simulate the intent correction process.
Subsequently, this paper takes the artificial consciousness cognitive task of diagnosis and treatment recommendation as a case study to demonstrate the full DIKWP flow of an artificial consciousness from data input to intent output, clearly marking the state changes and feedback mechanisms at each layer. Finally, we compare the similarities and differences between traditional computational semantic reasoning paths (such as logic trees and decision diagrams) and DIKWP semantic mathematical paths, and analyze the advantages of Professor Duan's model in unifying semantic expression and reasoning. Through this in-depth analysis, the report aims to provide rigorous and verifiable theoretical support for the technical control of the autonomous evolution path of artificial consciousness.
2 Overview of the DIKWP Network Semantic Mathematical Model
The DIKWP model divides the cognitive process into five semantic levels: Data (D), Information (I), Knowledge (K), Wisdom (W) and Purpose (P). These five layers of semantic elements are not bound by linear, one-way dependencies; they interact in multiple directions through a 5×5 transformation matrix. Each layer can serve as both input and output, yielding 25 possible semantic transformation paths in total. In other words, the model defines layer-wise mappings with a feedback mechanism: outputs can react back on inputs, forming a self-contained reasoning closed loop. The five semantic layers are characterized as follows:
Data Layer (D): Contains objective and original data entities, focusing on the description of the "sameness" of objective things. For example, the sensor's physical sign measurement values are all part of the data layer. The data layer provides the basic elements of AI cognition.
Information Layer (I): Represents the semantic associations and context between data, emphasizing "difference" or relationships. The information layer consists of semantically structured content, such as "body temperature is elevated and blood pressure is low", a description formed by associating multiple data items. Information is the result of the semantic interpretation of data.
Knowledge Layer (K): Refers to the knowledge rules or patterns formed by structuring and generalizing information, representing the "completeness" and systematization of cognitive content. For example, based on the I-layer information "fever + hypotension", medical knowledge is applied at the K layer to obtain the knowledge-level judgment that septic shock may be present.
Wisdom Layer (W): The ability to make dynamic decisions and evaluations based on knowledge, reflecting experience and insight, that is, the ability to use knowledge to find solutions to problems in specific situations. The W level often deals with complex and uncertain problems, such as doctors making decisions on diagnosis and treatment strategies based on the patient's special conditions.
Intent/Purpose Layer (P): The subjective goals and direction of the system, driving the conversion and feedback among the DIKWP elements. When the five elements of data, information, knowledge, wisdom and intent are tightly coupled through bidirectional feedback, the system's reasoning process is no longer a linear pipeline but a highly integrated loop network. The loop continuously processes external inputs and updates internal states until the high-level intent is satisfied. Within the reasoning loop, high-level semantics (wisdom, intent) can feed back to the lower levels (knowledge, information, data) in a timely manner, while changes at the lower levels accumulate layer by layer to affect high-level decision-making, so that the entire cognitive process forms a closed loop in the semantic space. This lays the foundation for the autonomous evolution of artificial consciousness: when the loop reaches sufficient complexity and self-consistency, the system's self-recognition phenomenon is expected to emerge. In short, the DIKWP network model provides a semantic mathematical framework that unifies expression and reasoning, with each semantic layer given both a semantic definition and a mathematical description. Under this framework, we can further formalize the intent target as a computable object and explore how to generate and control the evolution path of intent within the network.
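As a minimal illustration of the 5×5 interaction structure described above, the sketch below enumerates the 25 possible transformation paths; representing paths as ordered layer pairs is our own illustrative convention, not part of the model's formal definition.

```python
from itertools import product

LAYERS = ["D", "I", "K", "W", "P"]

# Every ordered pair of layers is a potential semantic conversion path,
# giving 5 x 5 = 25 paths; self-transformations such as K->K model
# refinement within a single layer.
transformation_paths = [(src, dst) for src, dst in product(LAYERS, repeat=2)]

assert len(transformation_paths) == 25
assert ("W", "K") in transformation_paths  # feedback paths are first-class edges
```

A forward chain D→I→K→W→P is just one route through this matrix; feedback edges such as (W, K) or (P, I) are equally legal paths.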
3 Computable Semantic Modeling and Generation Paths of P-Layer Intent Targets
The intent layer (P) plays a key role in the DIKWP model. To control the evolution path of artificial consciousness, the high-level intent goals must first be modeled as semantic objects that a computer can process. This can be achieved by defining intent semantic representations, such as representing P-layer goals as desired states or utility functions in a semantic space. Professor Yucong Duan proposed that an intent-driven goal generation function f_P can be defined, taking the elements of each DIKWP layer as input. Formally, P-layer objectives can be described as constraints or evaluation functions in the semantic space. For example, in the diagnosis and treatment recommendation scenario, the intent P can be expressed as the objective function "maximize the probability of patient recovery", which can be decomposed into requirements on diagnostic accuracy and treatment effectiveness. Through such semantic definitions, the AI system can evaluate, during reasoning, the degree to which candidate solutions satisfy the intent, and incorporate the P-layer objectives into the computational closed loop.
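A minimal sketch of what such a computable intent object might look like, assuming an invented IntentGoal class; the weights, threshold, and method names are illustrative assumptions, not definitions from the model itself.

```python
from dataclasses import dataclass

# Hypothetical sketch: the intent "maximize probability of patient recovery",
# decomposed into diagnostic-accuracy and treatment-effectiveness requirements.
@dataclass
class IntentGoal:
    weight_diagnosis: float = 0.5
    weight_treatment: float = 0.5
    threshold: float = 0.7  # minimum acceptable satisfaction

    def satisfaction(self, diag_accuracy: float, treat_effect: float) -> float:
        # f_P: score how well a candidate solution serves the intent
        return (self.weight_diagnosis * diag_accuracy
                + self.weight_treatment * treat_effect)

    def is_satisfied(self, diag_accuracy: float, treat_effect: float) -> bool:
        return self.satisfaction(diag_accuracy, treat_effect) >= self.threshold

goal = IntentGoal()
assert abs(goal.satisfaction(0.9, 0.6) - 0.75) < 1e-9  # meets the 0.7 threshold
```

Expressed this way, the P-layer goal becomes an evaluation function the reasoning loop can call on every candidate solution.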
In the DIKWP mesh model, the intent object, as a semantic driving force, can trigger a series of cross-layer conversions that gradually generate a solution path meeting the goal. Since there are 25 possible interaction paths, the specific path for intent generation is not a single fixed route but is selected dynamically according to the situation. In general, the reasoning of an artificial consciousness can converge gradually from the underlying data to the high-level intent (bottom-up); in some cases, it can also start from the high-level intent and guide the acquisition of the required information and knowledge downward (top-down). Two typical intent generation paths are illustrated below through pseudocode and state transitions:
# Bottom-up intent generation (D → I → K → W → P)
function bottom_up_intent_generation(D_input):
    # Layer 1: data to information
    I = transform_D_to_I(D_input)          # process data into information
    # Layer 2: information to knowledge
    K = integrate_I_to_K(I)                # integrate information into the knowledge structure
    # Layer 3: knowledge to wisdom
    W = derive_W_from_K(K)                 # make intelligent decisions based on knowledge
    # Layer 4: wisdom to intent
    P = formulate_P_from_W(W)              # generate the intent/goal from the wisdom result
    return P

# Top-down intent generation (W → K → D → P)
function top_down_intent_generation(W_input):
    # Step 1: wisdom guides knowledge
    K = feedback_W_to_K(W_input)           # derive (or adjust) the knowledge structure from high-level wisdom
    # Step 2: knowledge guides data
    D_needed = infer_D_from_K(K)           # infer the new data needed based on knowledge
    D = acquire_additional_data(D_needed)  # acquire or generate the required data
    # Step 3: data directly generates intent (a hypothetical shortcut that skips the middle layers)
    P = guess_P_from_data(D)               # guess the intent directly from the data, given the knowledge background
    return P
The above pseudocode demonstrates path selection in two extreme cases. The bottom_up_intent_generation function shows the typical layer-by-layer abstraction and convergence process: starting from the raw data, extracting information and integrating it into knowledge, then making decisions at the wisdom layer to finally form the intent target P. The top_down_intent_generation function simulates a situation driven by high-level semantics: the system first holds a certain wisdom judgment (such as an intuition from experience or a high-level instruction), which acts back on the knowledge layer to adjust the knowledge structure; it then infers from the knowledge which specific data needs to be supplemented, and after obtaining that data directly generates the final intent P. The last step of the latter path, guess_P_from_data(D), reflects a cross-layer jump (obtaining the intent directly from the data), which we explain in detail when discussing the consciousness bug mechanism later. It is worth noting that these two pseudocode fragments only illustrate possible paths and do not strictly constrain the model's operation. An actual DIKWP system dynamically selects the conversion sequence according to the intent-driven path weight optimization principle: different transformation paths are given different weights according to their contextual relevance to the current intent, W(e_ij) = g(P, R_ij). Therefore, the generation of intent goals in the DIKWP model can be viewed as a path search problem on a directed semantic graph: the intent provides heuristic guidance, prompting the system to reason along the sequence of transformation modules most relevant to the goal until it converges to a solution that satisfies the P-layer goal.
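The path-search view can be sketched as follows. The toy graph, the cost table standing in for W(e_ij) = g(P, R_ij), and the best-first search are all illustrative assumptions, not the model's actual optimization procedure.

```python
import heapq

def best_path(edges, weight, start, goal):
    """edges: {node: [neighbors]}; weight(src, dst) -> cost under the current intent."""
    frontier = [(0.0, start, [start])]  # (accumulated cost, node, path so far)
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt in edges.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + weight(node, nxt), nxt, path + [nxt]))
    return None

# Forward edges plus a few feedback edges of the DIKWP mesh.
edges = {"D": ["I"], "I": ["K", "D"], "K": ["W", "D"], "W": ["K", "P"], "P": []}
# Assumed relevance-based costs: conversions relevant to the intent are cheap.
cost = {("D", "I"): 1, ("I", "K"): 1, ("K", "W"): 1, ("W", "P"): 1,
        ("I", "D"): 2, ("K", "D"): 2, ("W", "K"): 2}
assert best_path(edges, lambda s, d: cost[(s, d)], "D", "P") == (4.0, ["D", "I", "K", "W", "P"])
```

Changing the cost table (i.e., the intent's relevance weighting) would steer the search onto a different transformation sequence, which is the sense in which the intent provides heuristic guidance.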
Figure 1 shows a schematic diagram of the interaction between semantic layers in the DIKWP model. Under normal circumstances, the system advances gradually from the data layer on the left to the intent layer on the right along the solid arrows; when adjustments are needed, a high-level layer can feed back to and influence a low-level layer via the dotted arrows. For example, the output of the W layer can be fed back along the W→K path to update the knowledge structure; the knowledge layer can in turn propose data requirements or modify data collection strategies along the K→D path; similarly, the intent layer P can influence information acquisition through the P→I path. These paths are not necessarily all taken; they are triggered on demand according to the current intent and context. When the system selects different path combinations, the generation process of the intent target follows different trajectories in the state space. The key point is that however the path changes, it eventually converges within the closed-loop structure to a solution that meets the P-layer target. This reflects a major feature of the DIKWP model: flexible semantic path plasticity, i.e., the intent can be achieved through multiple equivalent semantic reasoning paths, as long as those paths follow the semantic transformation rules defined by the model.
Through the above modeling, we have formalized the intent goals of artificial consciousness as objects in the semantic space and demonstrated examples of their generation paths in the mesh DIKWP structure. Next, we explore the formal construction of the semantic feedback chain, that is, how to ensure that a stable reasoning closed loop forms under multi-path interaction, and further analyze how the system performs subjective semantic jumps and intent corrections when an exception (a "bug") occurs.
4 Formal Construction of Semantic Feedback Links and Closed-Loop Reasoning
The core reason the DIKWP model can control the autonomous evolution path of artificial consciousness is that its semantic feedback mechanism ensures closed-loop operation of the reasoning process. Traditional linear reasoning is often open-loop: once an output is derived from the input, it is no longer automatically corrected. In the DIKWP network model, the output of each layer can feed back to affect other layers, closing the reasoning process into a loop and supporting continuous iterative improvement. To formalize this feedback chain, we regard the five DIKWP layers as a state quintuple (D, I, K, W, P) and define a set of transformation operators T_XY: X → Y, with X, Y ∈ {D, I, K, W, P}, representing the conversion from layer X to layer Y. Under normal circumstances, reasoning mainly runs along the ascending hierarchy, i.e., the composition of T_DI, T_IK, T_KW, T_WP; but to achieve a closed loop, we must ensure that for each major "forward" transformation there is a corresponding "reverse" adjustment operation that feeds the high-level results back. Formally, the closed-loop requirements are as follows:
Information verification feedback: For the D → I process, there must be a mechanism by which I can feed back to the data layer D when the information is insufficient or conflicting. Let E_I be the incompleteness/contradiction criterion on I; if E_I = True, a feedback operator T_ID exists such that D' = T_ID(I), which combines with the original data D to supplement or correct it, yielding a more complete and consistent D'. For example, when the I layer detects that a key test result is missing, T_ID triggers re-collection of data or an additional examination, thereby updating the data set to D'.
Knowledge update feedback: For the I → K process, when the knowledge layer cannot reach a definite conclusion from the existing information, or when there is a flaw in the knowledge reasoning (such as an inapplicable rule or inconsistent logic), a higher layer can invoke T_WK (from the wisdom layer) or T_PK (from the intent layer) to reconstruct the knowledge and obtain a corrected K'. Formally: if K contains a contradiction φ, the current W-layer decision result or P-layer goal is used via T_WK to generate a new knowledge hypothesis φ' such that K' = K ∪ {φ'} eliminates the contradiction and K' is self-consistent. This feedback realizes the dynamic evolution of the knowledge base. For example, when a doctor finds at the decision level that existing knowledge cannot explain the condition, he introduces a new medical hypothesis (knowledge) to continue reasoning.
Wisdom evaluation feedback: For the K → W process, if the output W of the wisdom layer does not meet the satisfaction standard of the intent layer, the lower layers must adjust. The effectiveness of W can be judged by defining an intent evaluation function f_P: if f_P(W) falls below a threshold, the feedback operator T_WK or T_WI adjusts the knowledge or information that led to the poor decision. For example, in diagnosis and treatment decisions, if the wisdom layer yields multiple options that are hard to choose between, the intent layer (whose goal is to determine the best treatment) recognizes this situation and prompts the knowledge layer to adjust its reasoning (for instance by introducing new distinguishing details) so that the wisdom layer can make a clear decision in the next iteration.
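The three feedback links above can be sketched as a dispatch table mapping a failed layer check to the reverse operator that repairs it; the predicate and operator names follow the text, but the table itself and the select_feedback helper are illustrative assumptions.

```python
# Hypothetical sketch: each entry maps the layer where an anomaly is detected
# to (condition, reverse operator, repair effect).
FEEDBACK_RULES = {
    "I": ("E_I holds", "T_ID", "supplement/correct data: D' = T_ID(I)"),
    "K": ("contradiction phi in K", "T_WK", "add hypothesis: K' = K ∪ {phi'}"),
    "W": ("f_P(W) < threshold", "T_WK or T_WI", "adjust knowledge/information"),
}

def select_feedback(layer: str, check_failed: bool):
    """Return the reverse operator to invoke when `layer`'s quality check fails."""
    if not check_failed:
        return None  # forward reasoning continues uninterrupted
    condition, operator, effect = FEEDBACK_RULES[layer]
    return operator

assert select_feedback("I", True) == "T_ID"
assert select_feedback("W", False) is None
```

The point of the table form is that every forward transformation has a named reverse operator on record, which is exactly the closed-loop requirement stated above.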
Through the formalization of the above feedback mechanism, we can regard the DIKWP model as a closed-loop system with monitoring and correction: whenever the output of a certain layer cannot smoothly support the reasoning of the next layer, the system does not simply output an error or stop, but triggers a feedback operation to correct the input of the relevant layer, so as to continue to evolve towards the intended goal. This closed-loop design can be described in pseudo code as follows:
function DIKWP_closed_loop_process(initial_data):
    D = initial_data
    I = transform_D_to_I(D)
    while True:
        if is_incomplete(I):
            D = feedback_I_to_D(I)      # information incomplete: feed back to supplement the data
            I = transform_D_to_I(D)
            continue
        K = integrate_I_to_K(I)
        if has_conflict(K):
            K = reconcile_conflict(K)   # knowledge conflict: higher-level wisdom/intent reconciles the knowledge
        W = derive_W_from_K(K)
        if not meets_intent(W):
            if need_more_info(W):
                I = feedback_W_to_I(W)  # decision ambiguous: intent drives acquisition of more information
                continue
            else:
                K = feedback_W_to_K(W)  # decision poor: wisdom feedback optimizes the knowledge
                continue
        P = formulate_P_from_W(W)
        return P
The above pseudocode depicts a reasoning loop with layer-by-layer quality inspection and feedback: the system reasons upward from the data layer by layer, checking the quality of the result after each layer completes — the information layer checks integrity, the knowledge layer checks consistency, and the wisdom layer checks compliance with the intent. If a check fails, the corresponding feedback operation (such as supplementing data, reconstructing knowledge, or obtaining additional information) is performed immediately, and reasoning resumes from the appropriate layer. Only when the results of every layer meet the requirements is the P-layer intent finally output and the loop terminated. This logic is equivalent to adding "semantic closed-loop verification" to the classic reasoning process: each layer has quality control to ensure that the content passed upward is as complete, consistent, and accurate as possible, with no omission or ambiguity of important information in the conversions between layers. This mechanism ensures that, facing the uncertainty of the open world, the DIKWP system can continuously approach its goal through loop iteration, and will not go off track or stall because of a single erroneous or incomplete reasoning step.
It is worth emphasizing that this closed-loop feedback mechanism provides a technical control means for the autonomous evolution of artificial consciousness. On the one hand, the system can autonomously discover and solve some problems (such as missing information and knowledge conflicts) during the reasoning process, which reflects a certain degree of autonomy; on the other hand, this error correction and iteration are carried out within the scope of preset semantic rules, and each step can be traced and explained, so it is controllable. Developers can influence the path of artificial consciousness to converge to the goal by adjusting the judgment thresholds or feedback strategies of each layer. For example, increasing the threshold requirement of meets_intent(W) can prompt the system to obtain more information (increase exploration), while lowering the threshold allows the system to give solutions faster (increase risk). Therefore, the semantic closed loop of the DIKWP model provides the possibility of achieving a balance between autonomy and controllability. In the next section, we will introduce the "consciousness bug theory" proposed by Professor Duan Yucong, and further discuss how to formalize it into a rule-constrained calculation mechanism when "abnormal jumps" appear in the closed loop, and ensure the convergence of the overall reasoning and intention correction.
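As a toy illustration of this tuning knob, the sketch below shows how raising the meets_intent threshold forces more feedback iterations before a decision is accepted; the score sequence and function names are invented for illustration.

```python
def meets_intent(w_score: float, threshold: float) -> bool:
    return w_score >= threshold

# Assumed decision-quality scores after each successive feedback pass.
candidate_scores = [0.55, 0.68, 0.72, 0.81]

def iterations_until_accept(threshold: float) -> int:
    for i, score in enumerate(candidate_scores, start=1):
        if meets_intent(score, threshold):
            return i
    return len(candidate_scores)  # budget exhausted: accept the best available

assert iterations_until_accept(0.6) == 2   # low threshold: decides quickly (more risk)
assert iterations_until_accept(0.8) == 4   # high threshold: explores longer
```

The developer-facing controllability claim in the text is exactly this: the threshold parameter trades exploration against decision speed without touching the reasoning rules themselves.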
5 "Consciousness Bug Theory" and the Anomalous Computation Mechanism of Subjective Semantic Jumps
The Bug Theory of Consciousness is a supplementary viewpoint proposed by Professor Yucong Duan to explain the model of human consciousness. Its core idea is that in the cognitive process a large amount of information processing is completed automatically at the unconscious level, and when processing "breaks" or stalls because of incomplete information, contradictions, or limited resources, the conscious level is triggered to intervene. These break points are figuratively called BUGs; like exceptions in a running program, they force the cognitive system to adopt unconventional jump computations in order to continue. For artificial consciousness systems, the bug theory reveals unconventional paths that may appear outside the regular DIKWP reasoning chain, such as jumping directly to a high-level intent when certain levels of input are missing, or stepping outside the regular process to introduce new hypotheses when contradictions appear at a middle layer. Although such jumps appear to deviate from the step-by-step deductive path, they are not random; they follow certain patterns that serve the achievement of the overall intent.
We can formalize the semantic jumps triggered by consciousness bugs as exception-computation rules, extending the completeness of the DIKWP closed-loop model described above. Two main types of bug-triggering mechanisms are listed below, together with their formal rule expressions:
Hypothetical intention generation under incomplete input (abnormal jump from W to P ): When the underlying input is extremely insufficient to support normal reasoning, the system may bypass some levels and directly generate a hypothetical intention output in order not to interrupt the pursuit of the goal. This corresponds to the situation of making decisions directly based on experience in an emergency. The formal rules can be expressed as:
Bug 1: if E_D(D) = True (data severely incomplete), then invoke T*_WP: W → P'.
This rule states that when E_D(D) detects serious incompleteness of the data layer D (for example, key data is missing and cannot be supplemented in the short term), an unconventional conversion operation is invoked: a hypothetical intent P' is generated directly from the current state of the wisdom layer. This P' can be regarded as the best guess under missing evidence. For example, when some of a patient's test results are missing but the condition is critical, the artificial consciousness system may bypass the complete diagnostic process and draw directly on wisdom-level experience from past similar cases to give an emergency treatment plan P'. Professor Yucong Duan calls this "not pursuing data completeness, but directly generating decisions that meet the goals". Note that P' has the nature of an assumption: the system should verify its rationality when the opportunity later arises. Once conditions allow the previously missing information to be supplemented, the system should return to the conventional path to verify or correct the intent.
Knowledge structure transition under contradictory information (I/K-layer anomalous computation): When an irreconcilable contradiction is detected in the middle-level information or knowledge during reasoning, it is better to introduce new knowledge elements or adjust the existing structure to escape the contradiction cycle than to fall into a logical deadlock. This process can be regarded as subjective hypothesis generation, which creates the conditions for reasoning to continue. The formal rule is as follows:
Bug 2: if ∃(i1, i2) ∈ I s.t. ⊨ (i1 ∧ i2) → ⊥,
then K := K ∪ {k*}, with constraint ⊨ (i1 ∧ i2 ∧ k*) ↛ ⊥.
That is, if the information set I contains propositions i1 and i2 that cannot both be true (the notation ⊨ … → ⊥ indicates that a contradiction is derivable, i.e., i1 ∧ i2 implies falsity), then a new hypothesis k* is introduced into the knowledge base K so that, after the addition of k*, i1 and i2 are no longer directly contradictory. This k* can be understood as a conditional constraint on, or a classificatory clarification of, the contradiction. For example, when a doctor faces two contradictory test results, he may introduce the hypothetical knowledge "a test error occurred" or "the patient has a rare physiological exception" to explain the contradiction so that reasoning can continue. In DIKWP terms, this corresponds to "knowledge logic reconstruction" or "conflict resolution" under the guidance of wisdom. The new knowledge k* enables the system to handle the contradictory information according to the situation (for example, ignoring an erroneous data point or classifying the patient into a special category) and avoid stagnation in reasoning.
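A minimal sketch of the Bug 2 rule under invented predicates: the contradiction table, the explanation table, and the hypothesis name are all illustrative assumptions, standing in for a real logical consistency check.

```python
# Hypothetical sketch: a pair of information items derives a contradiction
# until a hypothesis k* is added to the knowledge base K that explains it away.
CONTRADICTION_PAIRS = {("test_A_positive", "test_B_negative")}
# Which knowledge items k* defuse which contradictions.
EXPLANATIONS = {("test_A_positive", "test_B_negative"): {"test_error_hypothesis"}}

def derives_bottom(i1: str, i2: str, knowledge: set) -> bool:
    """True iff i1 ∧ i2 → ⊥ given the current knowledge base."""
    pair = (i1, i2) if (i1, i2) in CONTRADICTION_PAIRS else (i2, i1)
    if pair not in CONTRADICTION_PAIRS:
        return False  # no conflict at all
    # the contradiction stands unless some k in K explains it
    return not (knowledge & EXPLANATIONS.get(pair, set()))

K = set()
i1, i2 = "test_A_positive", "test_B_negative"
assert derives_bottom(i1, i2, K)        # logical deadlock: ⊨ (i1 ∧ i2) → ⊥
K.add("test_error_hypothesis")          # Bug 2: K := K ∪ {k*}
assert not derives_bottom(i1, i2, K)    # constraint holds: (i1 ∧ i2 ∧ k*) ↛ ⊥
```

The design choice mirrored here is that k* does not delete either information item; it only scopes the conflict, so both observations remain available for later reasoning.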
Bug 1 and Bug 2 above give two typical anomalous computation mechanisms: the former reflects a vertical jump in which high-level decision-making compensates for low-level deficiencies, and the latter a horizontal expansion of the knowledge structure. In implementation, these bug mechanisms can be seen as conditional overrides of the standard reasoning rules: the unconventional T* operations are enabled only under specific abnormal conditions. Once the abnormal conditions are removed (for example, the data is later completed or the contradiction is eliminated), the system should be able to return smoothly to the regular DIKWP process. This flexible handling of abnormal situations does not destroy the model's closed-loop reasoning consistency; rather, it enriches the model's resilience in extreme situations.
To understand more intuitively how bug-induced semantic jumps affect the evolution path of the intent, consider a concrete scenario. Suppose an artificial consciousness medical assistant is missing key laboratory data during diagnosis and treatment reasoning (triggering Bug 1); the system may directly give a provisional treatment plan P' based on experience (jumping from W to P). Later, when the laboratory data arrives, the system finds that the earlier diagnostic hypothesis is inconsistent with the new data (triggering Bug 2), so it introduces a new knowledge hypothesis (for example, "the earlier symptoms are complications, not the main cause") to reconcile the conflict and correct its own knowledge structure. In this case, the system corrects the original intent P' into the final intent P_final. This process embodies intent correction: under the action of the bugs, the original intent output is not an end point but an intermediate state in the evolution, continuously updated as new information and knowledge emerge. Professor Yucong Duan points out that when a bug occurs in low-level processing, the system often mobilizes the higher-level wisdom and intent modules to solve the problem, thereby keeping the overall goal advancing. In the example above, the first bug prompted the intent module to produce a provisional plan that advanced the goal, and the second bug prompted the wisdom/knowledge modules to adjust so that the new information was integrated, which in turn corrected the high-level intent. In this way, although the reasoning path jumps and turns, the system still operates around the main line of satisfying the final intent, constantly approaching the correct solution through closed-loop feedback.
This series of actions can be regarded as a manifestation of the subjective initiative of artificial consciousness: when faced with the unknown or contradiction, the system does not stop, but tries bold assumptions (generating intention candidates) and carefully verifies (feedback correction), which is similar to the cognitive mode of human consciousness in uncertain situations.
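The scenario just described can be sketched end to end: Bug 1 yields a provisional intent P' while lab data is missing; once the data arrives and conflicts with the earlier hypothesis, Bug 2 adds k* and the intent is corrected to P_final. All states, field names, and plan strings are illustrative assumptions.

```python
def diagnose(data: dict, knowledge: set) -> dict:
    if "lab_results" not in data:                 # Bug 1: E_D(D) = True
        # T*_WP jump: provisional intent P' from wisdom-level experience
        return {"plan": "provisional empirical therapy", "provisional": True}
    if data["lab_results"] == "conflicts_with_hypothesis":
        knowledge.add("complication_hypothesis")  # Bug 2: K := K ∪ {k*}
    return {"plan": "targeted therapy", "provisional": False}

K = set()
p_prime = diagnose({}, K)                         # before the lab data arrives
p_final = diagnose({"lab_results": "conflicts_with_hypothesis"}, K)
assert p_prime["provisional"] and not p_final["provisional"]
assert "complication_hypothesis" in K             # knowledge was reconstructed
```

Note that the provisional flag is what keeps P' revisable: the system treats it as an intermediate state, not a terminal answer, exactly as the intent-correction account above requires.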
In summary, the "consciousness bug theory" adds a set of rule-governed exception-computation mechanisms to the DIKWP model, so that an artificial consciousness system can maintain its intent-driven reasoning loop even under the 3-No problem (incomplete, inconsistent, and imprecise inputs). Although these semantic jumps deviate from the normal order, they provide, within the scope of the model, the flexibility needed to deal with complex environments. When we apply the above principles to concrete tasks, we can see more clearly how the DIKWP model ensures that an artificial consciousness autonomously evolves reasonable intents from data and makes the necessary adjustments and corrections along the way. Below we take the cognitive task of "diagnosis and treatment recommendation" as an example to show the full DIKWP process of an artificial consciousness in detail.
6 DIKWP Full-Process Example for the Diagnosis and Treatment Recommendation Task
In order to concretize the above theory, this section selects medical diagnosis and treatment recommendations as the application background, and simulates the complete DIKWP process of a cognitive entity with artificial consciousness (hereinafter referred to as AC) from initial data acquisition to the final diagnosis and treatment plan (intention output). We will strictly follow the five levels of the model to step by step demonstrate the input, processing and output of each layer, as well as how cross-layer feedback occurs. The hypothetical scenario is as follows: A patient seeks help from the AI medical assistant AC, and provides some test results, but some key examinations are still not completed. AC needs to use the existing information to make a diagnosis and give a treatment plan as much as possible, and decide whether additional examinations are needed to make up for the lack of information. The evolution of the states of each semantic layer within AC during the whole process is as follows:
Data layer (D) – Initial data collection: includes symptoms (e.g., "fever, cough"), physical sign measurements (temperature = 38.5 °C, blood pressure = 90/60 mmHg), preliminary test results (blood routine: elevated white blood cell count), etc. All of these constitute the content of data layer D. The data at this point may be incomplete; for example, the lung imaging result is still missing and the etiology test has not yet come back. Moreover, errors in some data (measurement errors, inaccurate descriptions) cannot be ruled out. AC first preprocesses the input data and checks its integrity at the data layer, e.g., format standardization and missing-value marking. The missing lung CT result is flagged, but the system continues the process and will attend to this gap in subsequent steps.
Information layer (I) – Condition information extraction: AC feeds the preprocessed data into the information layer for semantic analysis and extracts structured condition information. For example, information items are extracted from the symptom and sign data: "high fever (≥ 38 ℃)", "low blood pressure (systolic < 100)", "elevated white blood cells", and so on. Combined with the patient's medical history (if any), a preliminary description of the condition is formed: "The patient has fever with hypotension, indicating possible infection; the accompanying cough suggests a respiratory infection." The information layer outputs this semantic information I for use by the knowledge layer. During information fusion, AC also performs consistency checks: whether symptom and sign descriptions contradict each other, whether records from different sources conflict, and so on. The information in the current case is essentially consistent, but because the imaging results are missing, some important context (such as the state of the lungs) cannot be determined; that is, the incompleteness flag = true. AC records this flag in preparation for possible feedback.
Knowledge layer (K) – Medical knowledge reasoning: the structured output I of the information layer is passed to the knowledge layer, triggering AC to reason over its medical knowledge base. This knowledge base contains general domain knowledge and empirical rules, such as "fever + elevated white blood cells indicate infection" and "lung infection can lead to hypotension (a sign of sepsis)". Matching the current condition information I against the knowledge base yields several diagnostic hypotheses. For example, high fever, elevated white blood cells, and cough together point to "possible lung infection (pneumonia)", while hypotension combined with signs of infection indicates "possible tendency toward septic shock". The K layer therefore obtains two main hypotheses: H1, severe pneumonia leading to sepsis; and H2, another systemic infection (such as a urinary tract infection) leading to sepsis. The knowledge layer also evaluates the evidential support: because the lung CT result is missing, H1 has no direct evidence, and H2 cannot be ruled out from the current information alone. This creates diagnostic uncertainty, embodying the "incomplete" and "imprecise" aspects of the 3-No problem. The knowledge layer passes this uncertainty to the wisdom layer and marks that more information may be needed. In addition, the knowledge layer finds that H1 and H2 explain some symptoms in slightly contradictory ways (H1 would require clear imaging evidence of pneumonia, while H2 cannot explain the severe cough), which AC marks as knowledge conflict = weak (not a direct contradiction, but a disagreement). According to the model's requirements, the knowledge layer has not yet fully converged, and the wisdom layer must intervene to decide how to handle this uncertain situation.
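The rule-matching step can be sketched as a small table of rules, each pairing a set of premises with one item of confirming direct evidence. The rule contents and evidence labels here are simplified assumptions drawn from the scenario above; a real knowledge base would be far richer.

```python
# Each rule: (premises, direct evidence that would confirm it, hypothesis label).
RULES = [
    ({"fever", "wbc_high", "cough"}, "ct_shadow",
     "H1: severe pneumonia -> sepsis"),
    ({"fever", "wbc_high"}, "urine_culture_pos",
     "H2: other systemic infection -> sepsis"),
]

def derive_hypotheses(info: set):
    """Return (hypothesis, confirmed) pairs; unconfirmed pairs mark uncertainty."""
    return [(h, ev in info) for prem, ev, h in RULES if prem <= info]

I = {"fever", "wbc_high", "cough", "hypotension"}
hyps = derive_hypotheses(I)
print(hyps)   # both hypotheses match their premises, neither is confirmed yet
```

Both hypotheses fire on premises but neither has its direct evidence, which is exactly the unconverged state that forces the wisdom layer to intervene.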
Wisdom layer (W) – Decision-making and feedback planning: the wisdom layer receives the multiple hypotheses and uncertainty information provided by the knowledge layer. AC now performs comprehensive decision-making at the W layer: it must determine the next action toward the diagnosis and treatment goals (the P-layer goal is to confirm the diagnosis and treat). Drawing on medical experience, the wisdom layer recognizes that the current information is insufficient to lock in a diagnosis; forcing a treatment plan now would carry high risk (treating the wrong cause may be ineffective or harmful). The wisdom layer therefore faces two choices: (a) immediately issue an empirical treatment intention (for example, assume pneumonia and give broad-spectrum antibiotics, a Bug1-type jump decision), or (b) defer the final decision and first obtain more key data to reduce uncertainty. Given that the patient's vital signs are currently relatively stable (hypotensive but not in shock, so there is time for further examination), AC's wisdom layer favors option (b): trigger the feedback mechanism to obtain the missing information rather than rashly issue a plan. This reflects intention-driven decision-making: when the directly generated decision is not reliable enough, the system defers to the higher-level intention (accurate diagnosis and treatment) and fills the information gap first. The W layer's decision is therefore "request a lung CT examination to determine whether a pneumonia focus is present." This decision can itself be regarded as a secondary intention (a sub-goal of obtaining specific data), which acts on the lower layers through the feedback path: specifically, the wisdom layer sends the request to the data layer along the W→D feedback path.
In this process, the W layer also updates parts of the knowledge layer (W→K feedback): for example, a new expected-evidence node, "CT shows lung shadows", is attached to hypothesis H1 and recorded in the knowledge base for later verification. The wisdom layer then suspends generation of the final plan and waits for the new data to arrive.
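The three-way choice facing the wisdom layer (output a confirmed plan, request feedback, or take a Bug1 jump) can be sketched as a single decision function. The `patient_stable` criterion and the return-value shapes are assumptions made for this example; they mirror options (a) and (b) above.

```python
def wisdom_decide(hypotheses, patient_stable: bool):
    """Pick: output a confirmed plan, request feedback, or take a Bug1 jump."""
    confirmed = [h for h, ok in hypotheses if ok]
    if confirmed:
        return ("plan", confirmed[0])                 # evidence suffices: decide
    if patient_stable:
        return ("feedback", "request lung CT")        # sub-goal via the W->D path
    return ("bug1_jump", "empirical broad-spectrum antibiotics")

hyps = [("H1: pneumonia -> sepsis", False), ("H2: other infection -> sepsis", False)]
print(wisdom_decide(hyps, patient_stable=True))       # ('feedback', 'request lung CT')
```

Flipping `patient_stable` to `False` yields the Bug1 jump instead, matching the "critical patient" alternative discussed later in this section.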
Data layer (D) – Supplementary data acquisition: triggered by the wisdom layer's decision, the patient immediately undergoes a lung CT scan. A few hours later, the new data arrives at AC's data layer: the lung CT shows "lobar shadows in both lungs, consistent with pneumonia". The data layer adds this new item to D and updates the incompleteness flag: the previously missing key data has been obtained, so the flag changes from true to false. The data layer also standardizes the new data, for instance extracting the key phrase "lower-lobe pneumonia" from the imaging report, in preparation for passing it to the information layer. Before entering the next cycle, the data layer confirms the newly added item (imaging evidence of pneumonia) by comparing the new and old data sets.
Information layer (I) – Information update and fusion: with the newly acquired lung CT report, the information layer runs semantic extraction again. This time it extracts a new information item from the imaging data, "imaging evidence: lower-lobe pneumonia shadows", and merges it with the previous information set to obtain the updated condition information I′. The key item "pneumonia" is now supported by direct evidence and is no longer an uncertain conjecture. The information layer also re-verifies the information related to the two earlier diagnostic hypotheses: the uncertainty about the lung condition has been eliminated (there is clear evidence of pneumonia), and no new contradictions with the previous information are found. The information layer therefore judges the current information to be essentially complete and consistent (incomplete flag = false, inconsistent flag = false). What remains is only the normal range of measurement error, which does not affect the overall judgment and can be handled further by the knowledge and wisdom layers. I′ is passed to the knowledge layer for a new round of reasoning.
Knowledge layer (K) – Revised reasoning: with the updated information I′, the earlier knowledge reasoning is revised. Since the evidence of pneumonia is now solid, hypothesis H1 (severe pneumonia leading to sepsis) is strongly supported, while hypothesis H2 (another infection) becomes unlikely, because pneumonia suffices to explain the symptoms. The knowledge layer deduces further with medical knowledge: a severe lung infection can cause septic shock, which is consistent with the patient's low blood pressure. The knowledge layer therefore essentially confirms the diagnosis: "pneumonia leading to sepsis". The relevant rules in the knowledge base are activated, such as the treatment principle "for pneumonia with sepsis, give broad-spectrum antibiotics and fluid resuscitation immediately"; these rules become the basis on which the wisdom layer formulates the concrete plan. With sufficient information, the knowledge layer finds no new contradictions, and the previously conflicting hypothesis H2 is shelved (its probability reduced to a negligible level and no longer considered). The knowledge layer thus converges to a relatively complete and consistent cognitive structure: cause = pneumonia (source of infection determined), complication = sepsis (systemic effect determined), required treatment = anti-infection therapy plus blood-pressure maintenance. This knowledge state pushes the wisdom layer into the final stage of decision-making.
Wisdom layer (W) – Final decision: based on the new knowledge, the wisdom layer makes a fresh decision. This time the high-level intention (curing the patient) and the knowledge reasoning both point clearly to severe pneumonia with sepsis. The wisdom layer works out the plan's specifics on this basis: for example, selecting an appropriate antibiotic in light of the patient's allergy history (if any), determining the dosage, and deciding between vasopressors and large-volume fluid infusion according to the degree of hypotension. The wisdom layer also weighs the plan's priorities against the intention layer's preferences (such as objective functions like achieving a cure while minimizing side effects). Finally, the wisdom layer forms a complete diagnosis and treatment plan W_plan: "Confirmed pneumonia leading to sepsis; recommend immediate intravenous broad-spectrum antibiotics, fluid infusion for volume expansion, and close blood-pressure monitoring; if blood pressure remains low, consider vasopressors." The wisdom layer scores the plan's intention match and judges that it achieves the P-layer goal of "controlling the infection and stabilizing vital signs". Because this result is highly credible given the current knowledge and information, and it meets the intention requirement (meets_intent = True), the wisdom layer decides that no further feedback is needed and prepares to submit the plan to the intention layer for output.
Intention layer (P) – Recommendation output: finally, the intention layer receives the decision plan from the wisdom layer and outputs it as the final goal of this round of reasoning. The intention layer can package or interpret the output for the user, for example generating the recommendation: "Based on the current examination results, the system diagnoses severe pneumonia with sepsis. It is recommended to start intravenous antibiotic treatment as soon as possible, administer fluid therapy to maintain blood pressure, and use vasopressors if necessary. This plan is expected to control the infection effectively and relieve the symptoms." This output both presents the wisdom layer's plan and marks that the purpose of the entire DIKWP closed-loop reasoning has been achieved. AC has now gone through multiple rounds of reasoning and feedback from the initial data and finally generated, at the intention layer, treatment recommendations that the user can understand and adopt, completing the cognitive task.
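The full two-round cycle described above can be condensed into a toy end-to-end loop. Everything here is a deliberately compressed sketch: the D→I→K→W pass is one function, the feedback payload names a single missing item, and the plan string is illustrative. What it demonstrates is the control flow: round one ends in W→D feedback, round two reaches the P layer.

```python
def dikwp_round(data: dict):
    """One forward pass D->I->K->W: returns a feedback request or a final intent."""
    info = {k for k, v in data.items() if v}           # I: extracted items
    h1_confirmed = "ct_shadow" in info                  # K: check H1's direct evidence
    if not h1_confirmed:
        return ("feedback", "ct_shadow")                # W->D: request missing data
    return ("intent", "IV antibiotics + fluids + monitor blood pressure")

data = {"fever": True, "wbc_high": True, "cough": True, "ct_shadow": False}
rounds = 0
while True:
    rounds += 1
    kind, payload = dikwp_round(data)
    if kind == "intent":
        break
    data[payload] = True        # supplementary acquisition fills the gap
print(rounds, payload)          # 2 rounds; plan emitted at the P layer
```

The loop terminates after exactly two rounds here, mirroring the convergence behavior summarized in the next paragraph.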
In the above case we can see clearly how the interaction among the DIKWP layers guides the gradual evolution of an artificial consciousness system. Notably, the feedback mechanism plays the key role in step 4: finding the information insufficient, the wisdom layer does not output an intention directly but instead triggers feedback to obtain more data, significantly improving the reliability of the final decision. This process conforms to the closed-loop model formalized earlier: when I is incomplete, feedback along W→K→D eliminates contradictions and reconstructs knowledge, after which W successfully generates a plan that meets the intention. In the first round of reasoning, the system could have jumped directly to a plan (a Bug1 jump), but since the strategic judgment did not require such an extreme step, it took the full path through feedback, ensuring the rigor of the result. Consider the alternative: if the patient's condition were more critical and could not wait for further examination, AC might take a Bug1 jump in step 4 and directly output a provisional intention (empirical medication), then adjust it afterward as new data arrives (for example, once treatment has started). Such behavior is still within the DIKWP framework, albeit along an unconventional path, and mechanisms remain to ensure subsequent correction. In this example the system went through two rounds of the DIKWP cycle (the first D→…→W ended in feedback before reaching P; the second D→…→P produced the output), reflecting the gradual-convergence character of closed-loop reasoning.
Through this case we verified how the interaction of the DIKWP layers guides an artificial consciousness system to gradually evolve its intention: clear inputs and outputs, together with cross-layer interaction (forward reasoning and reverse feedback), enable the system to remedy deficiencies, resolve contradictions, and ultimately achieve its goals at the P layer. This semantic mathematical framework is naturally interpretable and controllable: developers or users can see the input, processing, and output of every layer, for example why an additional CT examination is required (the wisdom layer determined the information was insufficient), why pneumonia is diagnosed (the knowledge layer reasons from medical rules), and why specific drugs are selected (the wisdom layer weighs the pros and cons against the intention); every decision has a traceable basis. This is precisely the idea, advocated by Professor Duan Yucong, of building an "interpretable and controllable cognitive operating system" through DIKWP: decomposing a large model's reasoning process into the five links of data, information, knowledge, wisdom, and intention, and giving them clear mathematical definitions, so that every step of AI output can be traced. The simulation above is only one application in the medical domain; the DIKWP model applies equally to other complex cognitive tasks, such as fuzzy-requirement scenarios in innovative design and autonomous-driving control, improving the robustness and autonomy of AI systems in open environments.
7 Comparative Analysis of DIKWP Semantic Algorithm Paths
To further highlight the innovative value of Professor Duan Yucong's DIKWP semantic mathematical model, this section compares it with traditional computational semantic reasoning paths, focusing on the limitations of classic methods such as logical inference trees and decision diagrams when dealing with open, complex problems. Broadly, the DIKWP model unifies semantic representation with the reasoning process, whereas traditional methods often separate knowledge representation from the reasoning mechanism and lack adaptive handling of uncertainty.
The richness of semantic hierarchical representation vs. the limitations of flat representation: traditional logical reasoning usually pairs a formal knowledge base (e.g., first-order logic) with an inference engine. The knowledge representation is relatively flat, and reasoning proceeds by fixed rules at the symbol level; common structures include inference trees and decision trees. For example, medical diagnosis rules in the expert-system era typically formed a decision tree that branched step by step from symptom nodes to a diagnostic conclusion. Such models lack any distinction of semantic levels: data, information, knowledge, and purpose are mixed together and encoded into logical rules, and when dependencies are missing or conflicting, a traditional inference tree often cannot reach a conclusion (it either falls back to a default or terminates outright). In the DIKWP model, by contrast, data and information correspond to the fact and relationship layers, knowledge to the rule layer, wisdom to the decision layer, and intention to the goal layer; this hierarchical representation matches the "semantic pyramid" structure of human cognition. In DIKWP, whether an element expresses a fact or a purpose, it lives in the same set of semantic coordinates, only at a different level. The benefit of this unified representation is that the reasoning process itself can be represented and manipulated in the same semantic space. For example, the intent-driven function f_P and the path weight W(e_ij) defined in the previous section turn "reasoning-path selection" into an object of operation within the semantic space, whereas a traditional logic tree cannot flexibly represent its own reasoning route and must have it specified manually.
Therefore, DIKWP is more self-descriptive in terms of representation: the model can describe and adjust its own reasoning process, truly realizing the integration of representation and reasoning.
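A toy version of that intent-driven path selection can make the idea tangible. The names mirror the symbols in the text (`f_P`, `W(e_ij)`), but the edge set, the weight values, and the boosting rule are illustrative assumptions, not the model's formal definitions.

```python
# Candidate reasoning edges with base weights W(e_ij); values are assumed.
EDGES = {
    ("K", "W"): 0.9,    # proceed to decision
    ("K", "D"): 0.4,    # fall back to acquire more data
}

def f_P(edge, intent_needs_more_data: bool) -> float:
    """Intent-driven weight: boost the feedback edge when key data is missing."""
    base = EDGES[edge]
    if intent_needs_more_data and edge == ("K", "D"):
        return base + 0.6
    return base

def select_path(source, intent_needs_more_data):
    """Follow the highest-weight outgoing edge under the current intention."""
    candidates = [e for e in EDGES if e[0] == source]
    return max(candidates, key=lambda e: f_P(e, intent_needs_more_data))

print(select_path("K", True))    # ('K', 'D'): intention redirects to feedback
print(select_path("K", False))   # ('K', 'W'): normal forward path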
Closed-loop reasoning and adaptation vs. the fragility of open-loop reasoning: Traditional decision diagrams (such as flowcharts and state machines) are mostly open-loop, with outputs set in advance for each input situation, and no mechanism to automatically fill in the gaps in the rules or correct erroneous outputs. If the input exceeds expectations (incomplete or abnormal), the traditional system may fall into a state of "no matching rules", either reporting an error and shutting down, or giving an arbitrary default decision. In contrast, the DIKWP model, due to its two-way dynamic feedback mechanism, makes reasoning a closed-loop process, and the system can automatically identify problems and trigger feedback mechanisms to correct them. For example, when there is insufficient information, the traditional decision tree may only be able to go down a default branch, which is very likely to lead to errors; while the DIKWP system will identify "incompleteness" and trigger the action of obtaining more data. This closed-loop feature gives the system robustness: in the face of the uncertainty of the open world, the system can continuously approach the goal through cyclic iterations, and will not deviate from the track or get stuck because of an error or incompleteness in reasoning. From a mathematical perspective, the DIKWP closed loop can be viewed as an iterative approximation algorithm that seeks a solution that minimizes errors (incompleteness, inconsistency, and failure to meet intent) in the semantic space; whereas traditional logical reasoning is more like direct solution, where there is no solution or the solution is wrong if the problem does not meet the prerequisites. The closed loop system can also be analyzed for its convergence and stability with the help of control theory and other methods, which is an important property to further ensure the controllability of artificial consciousness behavior.
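The closed loop viewed as iterative error reduction can be sketched very simply. The assumption here, made only for illustration, is that each feedback cycle resolves exactly one semantic error term; in practice convergence behavior would depend on the task and could be studied with control-theoretic tools as the text suggests.

```python
def closed_loop(errors: set, max_iter: int = 10) -> int:
    """Iterate feedback cycles until the semantic error set is empty."""
    for i in range(max_iter + 1):
        if not errors:
            return i              # converged: solution meets the intent
        errors.pop()              # one feedback pass resolves one error term
    raise RuntimeError("did not converge within max_iter cycles")

steps = closed_loop({"incomplete", "inconsistent", "imprecise"})
print(steps)   # 3: one cycle per error term
```

The contrast with open-loop reasoning is that the traditional path would have no loop at all: an unmatched input simply fails, while here the error set shrinks monotonically toward the goal.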
Intention-embedded reasoning vs. rigid processes with external goals: traditional semantic reasoning typically hard-codes the goal into the reasoning engine at design time. Expert systems, for example, usually have a fixed reasoning direction (say, from symptoms to diagnosis); changing the goal (say, from diagnosis to probable cause) generally requires redesigning the reasoning process. The DIKWP model instead embeds the intention layer (P) as an endogenous part of the reasoning structure, making the goal the internal driving force of reasoning. The system can thus dynamically change its reasoning path and strategy according to the current intention: when the intention is to explain a phenomenon, it may reason from result to cause (abductively); when the intention is to make a decision, it may reason from condition to action (deductively). The existence of the intention layer makes deduction, abduction, and even heuristic guessing possible within a single unified model, whereas traditional methods usually require separately designed deductive and abductive reasoners and lack this uniformity. At the same time, the explicit representation of intention supports value alignment and controllability: we can directly examine, inside the system, whether the AI's intention meets human expectations and constrain it accordingly. This is crucial for preventing AI from forming bad goals (such as unethical strategies); black-box models, by contrast, offer no direct way to read internal goals, which poses a safety risk. By giving every reasoning step a directional basis through the P layer, DIKWP significantly improves the controllability and verifiability of AI decisions.
Comparison of mechanisms for handling uncertainty: The real world is full of uncertainty. Traditional semantic reasoning often requires precise definition of problems for strict reasoning, otherwise it needs to rely on probability. For example, methods that integrate probability, such as Bayesian networks, can deal with uncertain information, but they represent knowledge as probabilistic graph models, which are not at the same level as logical rules, and the reasoning is mainly forward. The DIKWP model provides a unified framework to deal with the 3-No problem (incomplete, inconsistent, and imprecise). Through the Bug mechanism and feedback loop, DIKWP integrates the handling of uncertainty into the semantic calculation process: when it is incomplete, it is filled by semantic completion or intention guessing; when it is inconsistent, it is solved by high-level reconciliation or knowledge reconstruction; when it is imprecise, it is dealt with by allowing fuzzy reasoning and guiding precision. These corresponding strategies are all completed in the DIKWP semantic space without switching to another mathematical system (such as probability). This reflects the cohesion of the model: a unified semantic mathematical method is used to deal with various uncertainties, while traditional methods are divided into multiple technologies (logic + probability + fuzzy, etc.), each of which is difficult to integrate. In addition, DIKWP emphasizes goal-oriented processing of uncertainty, such as deciding which noise to ignore and which data to focus on through intention (such as P→I selects reliable information). Traditional systems lack this goal orientation and often treat or estimate all uncertainties equally, failing to highlight the key points. Therefore, in complex tasks, the DIKWP model is more efficient and focused: it uses computing resources to resolve the uncertainty that is most relevant to the current intention, without wasting reasoning efforts on minor details.
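The paragraph above maps each 3-No condition to an in-model strategy; that mapping is essentially a dispatch table, which can be sketched directly. The strategy descriptions paraphrase the text; the dictionary form is an illustrative assumption.

```python
# Dispatch table: each 3-No condition resolves within the DIKWP semantic
# space itself, without switching to a separate formalism like probability.
STRATEGIES = {
    "incomplete": "semantic completion or intent-guided hypothesis (feedback to D/I)",
    "inconsistent": "higher-layer reconciliation or knowledge reconstruction (W->K)",
    "imprecise": "allow fuzzy reasoning, then intent-guided refinement",
}

def handle(condition: str) -> str:
    """Return the in-model strategy for a given 3-No condition."""
    return STRATEGIES[condition]

print(handle("inconsistent"))
```

The point of the single table is cohesion: one semantic framework covers all three conditions, rather than stitching together logic, probability, and fuzzy methods.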
Explainability and ease of debugging: in terms of AI explainability, the DIKWP model divides the reasoning process into semantic links that humans can understand and provides a monitoring interface. Developers can inspect the intermediate semantic objects at each layer (data → information → … → intention) and, during fault diagnosis, pinpoint which layer went wrong (for example, data errors or knowledge gaps). This resembles a layered architecture in software engineering, which facilitates unit testing and verification. Classical logical reasoning is itself explainable (being based on human-readable rules), but once the rule base grows large or the reasoning chain grows long it becomes hard to debug, and the lack of intermediate-layer separation makes the source of a problem hard to locate. In addition, DIKWP's Bug exception mechanism is itself explainable, because every unconventional jump records its triggering conditions and the assumption taken. This allows the "strange behavior" of artificial consciousness to be analyzed (for example, why a hypothetical conclusion was emitted directly at a certain step), whereas traditional systems cannot explain behavior outside their rules, because in theory such behavior should not happen at all (it is treated as a genuine program bug). DIKWP thus provides a more transparent reasoning framework; as noted above, it can be regarded as an embedded "semantic operating system" that lets researchers monitor the semantic state inside an AI. This is a necessary condition for controlling the evolutionary path of artificial consciousness: only when we can see clearly what an AI is thinking can we intervene or correct it in time. Traditional models, by contrast, are either too simple (a handful of rules, unable to cope with the complexity of consciousness) or become black boxes (such as large deep networks, which are hard to explain). Neither is adequate for the fine-grained control of general artificial consciousness.
Combining the above analysis, we can clearly see the significant advantages of the DIKWP semantic mathematical model in terms of expressiveness, reasoning flexibility, and coping with complex environments. These advantages stem from its unique design philosophy: embedding purpose into representation, incorporating feedback into reasoning, and unifying information at different levels of abstraction with a hierarchical semantic structure. Professor Duan Yucong's model effectively bridges the gap between symbolic AI and sub-symbolic AI - on the one hand, it retains the benefits of clear symbolic logic hierarchy and easy interpretation; on the other hand, through networked connections and feedback, it has the ability of brain-like dynamic adaptation and self-improvement. Therefore, in the field of artificial consciousness research, the DIKWP model is regarded as an important direction to lead the future. It not only provides a theoretical breakthrough, but also provides methodological guidance for engineering implementation. The next generation of AI systems, especially those aiming at general artificial intelligence (AGI), are expected to adopt more of this type of architecture to ensure autonomous intelligence while achieving process transparency, safety and controllability.
8 Conclusion
"Technical means for controlling the autonomous evolution path of artificial consciousness" is a major topic sitting at the tension between AI autonomy and controllability. Through this article's in-depth analysis of Professor Duan Yucong's DIKWP network semantic mathematical model, we have demonstrated a practical solution: on the basis of the hierarchical semantic model, the cognitive evolution of artificial consciousness is divided into five levels (data, information, knowledge, wisdom, and intention), with two-way feedback and a closed reasoning loop established among them. Within this framework, intention is explicitly represented as the highest-level semantic object and can drive the entire cognitive process to evolve toward preset goals. The DIKWP model is not only theoretically innovative but also shows strong flexibility and controllability in practical applications, providing a clear path and reliable guarantees for the controlled autonomous evolution of artificial consciousness. Future research can further explore the model's application in more domains and investigate how to combine it with other technical means to optimize its performance.

