White Paper on the Architecture of an Artificial Consciousness Processing Unit (ACPU) Based on the Subconscious-Conscious DIKWP Model

Yucong Duan

International Standardization Committee of Networked DIKW for Artificial Intelligence Evaluation (DIKWP-SC)

World Artificial Consciousness CIC (WAC)

World Conference on Artificial Consciousness (WCAC)

(Email: duanyucong@hotmail.com)

Introduction

AI technology has advanced by leaps and bounds over the past decade, but there is still no artificial system with a true level of consciousness. The industry is beginning to realize that achieving Artificial Consciousness (AC) requires going beyond the limitations of traditional AI in semantic understanding, knowledge fusion, and autonomous decision-making. The "subconscious + conscious" networked DIKWP model and the consciousness "BUG" theory proposed by Professor Duan Yucong provide new ideas for this problem. The DIKWP model incorporates the five levels of Data, Information, Knowledge, Wisdom, and Intent/Purpose into a unified framework, adding a purpose/intention dimension to the traditional DIKW hierarchy. Complementing it, the consciousness "BUG" theory points to the inherent "defects" or cognitive biases in the evolution of human consciousness, and holds that it is precisely these imperfections (such as subjective assumptions and illusions) that enable human beings to form meaningful cognition and make decisions under incomplete information. This theory implies that a certain amount of "imperfection" should also be allowed in an artificial consciousness system in order to simulate the creative and self-correcting mechanisms of the human mind. 

On the basis of this theory, we propose the Artificial Consciousness Processing Unit (ACPU) architecture, which integrates "subconscious" computing and "conscious" decision-making and aims to realize an integrated software-hardware solution for artificial consciousness computing. This white paper systematically describes the design and implementation of the ACPU architecture, including the theoretical system, the dual-space mapping mechanism, core module design, application cases, algorithm fusion, and performance evaluation. The full text highlights the following:

·Theoretical system centered on the networked DIKWP model: clarifies the interaction and fusion mechanism between the subconscious and the conscious space across the five dimensions of data, information, knowledge, wisdom, and intention, and the implications of the consciousness "BUG" theory for the design of artificial consciousness systems. 

·Bidirectional mapping mechanism between conceptual space and semantic space: introduces the definitions of the Conceptual Domain Unit (CDU) and the Semantic Cognitive Unit (SCU), and demonstrates how real-time bidirectional mapping and migration between the two spaces is achieved through the Semantic-Conceptual Fusion Unit (SCFU). 

·Design of the core modules of the ACPU architecture: describes in detail the structure, function, and coordination mechanism of the three core modules (SCU, CDU, and SCFU) within the ACPU, and discusses how heterogeneous computing acceleration (CPU + GPU + dedicated units) is used to achieve efficient artificial consciousness computing. 

·Practical engineering application cases: illustrates the application value and potential of the ACPU architecture in complex semantic extraction, intelligent decision-making, and human-computer interaction through typical scenarios such as medical intelligent decision support, autonomous driving cognitive architecture, and smart health systems. 

·Integrating cutting-edge AI technologies: discusses how Transformer large models, graph neural networks (GNNs), reinforcement learning (RL), meta-learning, semantic knowledge graphs, and other technologies are integrated into the ACPU to realize functions such as semantic extraction, knowledge reasoning, and self-learning, with algorithm structure diagrams and key pseudocode. 

·Simulation experiments and performance evaluation: presents preliminary simulation data comparing the computing efficiency, response latency, and decision quality of the ACPU with a traditional CPU+GPU architecture on artificial consciousness tasks, demonstrating the advantages of the ACPU architecture. 

·Future prospects: discusses the path to implementing the ACPU in silicon, deployment solutions for industry, and the possibility of future integration with dedicated AI acceleration hardware such as TPUs and FPGAs. 

Through this white paper, readers will gain an overall blueprint of the artificial consciousness processing unit and see how a new generation of AI computing architecture can be designed based on the DIKWP model and the consciousness "BUG" theory, laying the foundation for the industry to realize explainable and trustworthy intelligent systems at a higher level.

The DIKWP model and the consciousness "BUG" theory

The DIKWP model is a theoretical framework of artificial consciousness proposed by Duan Yucong's team, which defines a five-layer semantic abstraction and cognitive cycle of Data (D) → Information (I) → Knowledge (K) → Wisdom (W) → Intention/Purpose (P). Unlike linear hierarchical models, DIKWP is designed as a mesh topology, with fully connected interactions and feedback loops between the layers, forming a cognitive closed loop that is constantly evolving and improving. For example, low-level data and information are processed into knowledge and wisdom, while high-level intention and wisdom can in turn guide data collection and information selection, achieving multi-level feedback and regulation. This network structure ensures that the system can accumulate new knowledge in a dynamic environment, flexibly adjust its target intent, and continuously optimize its own behavior strategy. 

Each of the five dimensions of the DIKWP model is clearly defined and functionally oriented:

·Data: Raw observational inputs, unprocessed objective facts. Corresponding to sensor inputs, raw signals, etc. in artificial consciousness systems. The data layer emphasizes automated, parallel processing, extracting essential features from massive amounts of data through pattern recognition. This part of the processing has a distinctly "subconscious" feature and can be done without conscious intervention. 

·Information: Preliminarily processed data and its patterns. The information layer extracts the associations and statistical rules in the data, such as pattern features, association rules, etc. It already has a certain level of semantic meaning, but it still focuses on objective description. 

·Knowledge: A general law, theorem, or model formed through conceptual abstraction and logical reasoning based on information. The knowledge layer contains the subject's subjective internal representation of the objective world, such as concept networks, domain knowledge graphs, etc. The acquisition of knowledge means that the system elevates the information to a structured conceptual level and is able to reason in a regular way. 

·Wisdom: The ability to make global decisions and value judgments based on knowledge. The wisdom layer involves comprehensive trade-offs and strategic planning, and can form solutions to complex problems, including multi-objective trade-offs, long-term planning, and dynamic adjustment. This often requires the involvement of "consciousness", such as the evaluation of different options, self-reflection, and adjustment. 

·Purpose: The highest level, representing the goal-orientation, motivation, and constraints of the system. The intention layer determines what problems the system focuses on and what results it pursues, which is equivalent to the will and self-monitoring mechanism of consciousness. The introduction of the P layer enables the DIKWP model to depict the subjective motivation of the agent and ensure the consistency and directionality of the system behavior. 

In the DIKWP model, the layers are not related in a simple one-way manner; they form a closed loop through multi-directional interaction: the data and information layers provide cognitive raw material, the knowledge layer absorbs and integrates it, the wisdom layer makes decisions on the basis of knowledge, and the decisions are mapped to new intentions/purposes that guide the next round of data acquisition and information processing, forming a cyclic, iterative learning process. This cycle allows the system to continuously update its knowledge and goals based on environmental feedback. For example, when the environment changes, the wisdom layer can adjust intentions and further influence the system to obtain new data or information to fill knowledge gaps and achieve adaptive evolution. Therefore, the DIKWP model inherently supports adaptive learning and online evolution, which is especially important for complex and changeable real-world scenarios.

It should be emphasized that the DIKWP model expands the traditional DIKW pyramid into a structure closer to human cognition through the "intention" layer: in the human brain, intentions and motivations can significantly affect attention allocation and cognitive processes, and the DIKWP model incorporates this into the AI model, thereby improving the autonomy and robustness of the system in uncertain environments. This also provides a new dimension for the evaluation of artificial consciousness: it is necessary to examine not only the system's capabilities at every level from data to wisdom, but also whether its purposeful, intention-driven behavior is reasonable. 

The "BUG" theory of consciousness is another key point of Duan Yucong's view on the nature of consciousness. "BUG" here is not a literal meaning of a software vulnerability, but refers to an innate "imperfection" or cognitive break in consciousness. The theory holds that human consciousness is not continuous, omniscient and omnipotent, and there are some illusions, blind spots, and illogical jumps in the cognitive process. For example, the human brain fills in hypotheses when information is incomplete and simplifies decision-making in complex situations, and these approximations or errors are actually "defects" left in the evolution of consciousness. However, it is precisely these so-called bugs that allow us to make rational decisions in the face of uncertainty: the brain bridges the gap between the continuous flow of the subconscious and the discrete perception of consciousness through mechanisms such as imagination, hypothesis, and self-deception. 

The important implication of the "BUG" theory is that when constructing artificial consciousness, we should perhaps not pursue absolutely perfect rationality, but allow the system to retain a moderate amount of randomness and the ability to handle uncertainty, so as to simulate the creativity and robustness of human thinking. As Professor Duan Yucong put it, the brain is like a machine constantly playing a "word-chain" game: most information processing is carried out automatically in the subconscious, and consciousness is just the fragment (BUG) that keeps popping up in the process and interpreting it. Therefore, an artificial consciousness system needs a mechanism that can introduce abstract semantic hypotheses on the intermediate results produced by the subconscious layer, which are then verified and adjusted by the conscious layer. This process is similar to "using BUGs to explain continuous subconscious processes": through repeated error detection and correction, the AI can gradually form stable cognition in a complex environment.

In summary, the DIKWP model provides the macro-hierarchical structure and cognitive cycle mechanism of artificial consciousness, while the consciousness bug theory provides a supplementary explanation of the micro-operation mechanism, that is, consciousness needs to explain and guide the subconscious process by constantly making up for its own limitations. The combination of the two lays a theoretical foundation for the design of the artificial consciousness system. In the following chapters, we will discuss how to build an ACPU architecture that supports semantic-conceptual dual-space fusion based on this theoretical system, and how to achieve efficient artificial consciousness computing in engineering.

Bidirectional mapping mechanism between conceptual space and semantic space

Human understanding of text and environment involves transformations between semantic space and conceptual space. Semantic space usually refers to the distributed representation of language symbols in terms of meaning. For example, in natural language processing, the embedding space of word vectors obtained by training on large-scale corpora belongs to the semantic space: each word or symbol corresponds to a high-dimensional vector, and the distance between different words indicates the degree of their semantic association. The semantic space focuses on statistical correlation at the level of objective language use, which is dynamic and context-dependent. Different languages or domains have their own semantic spaces, and even for the same person, the distribution of semantic vectors activated in different contexts will change. For example, in a word vector space, the vectors for "king" and "queen" are close together, and their difference is roughly parallel to the direction representing gender relations, reflecting their semantic association. However, such semantic representations are trained from external corpora and reflect the statistical patterns objectively present in the use of linguistic symbols. 

In contrast, Concept Space refers to the collection of concepts and their relationships in the subject's brain, that is, the structure of knowledge representation within an individual. Conceptual space contains abstract concepts such as categories, objects, and attributes, as well as the levels and relationships between them. The conceptual space can be thought of as an "internal ontology" or a subjective semantic web of the cognitive subject: it reflects the subjective organization of knowledge and has a certain geometric or topological structure. The theory of cognitive scientist Gärdenfors proposes that concepts can be represented by regions in a geometric space, with the distance between concepts representing semantic similarity. Thus the conceptual space is not a disorganized collection of concepts, but a computable space with internal structure (dimensions, distances, directions). Conceptual space emphasizes the relative independence of concepts: concepts have relatively autonomous representations in the brain, and do not change their essence with different expressions (different languages or phrasings). This ensures a stable grasp of meaning: no matter what words a sentence uses, as long as it involves the same concept, it should be understood the same way in the mind. 

Based on the above differences, semantic space and conceptual space are both independent and interrelated. Semantic space focuses on objective expression at the language level and is easily affected by context and corpus; conceptual space focuses on the internal subjective knowledge structure, which is relatively stable and has abstract levels. When a person reads a sentence, the brain's processing usually proceeds as follows:

·First, the words in the text are mapped to the semantic space as symbols, and the semantic vectors corresponding to several words are activated. This step can be seen as an automatic process of the "subconscious", similar to pattern recognition: the brain quickly extracts the semantic features contained in the words, but at this point these features still exist in distributed representations and do not rise to the level of explicit concepts.

·Subsequently, these semantic activation patterns trigger the arousal and association of corresponding concepts in the conceptual space through the semantic-conceptual mapping mechanism in the brain. In other words, the brain converts word vector representations into internal conceptual representations: for example, when reading "apple", the semantic space may activate vector patterns related to "fruit", "red", "food", etc., and mapping to the conceptual space evokes awareness of the concept of "apple", its properties, and its relationship with other concepts.

·Eventually, the understanding of the whole sentence and the whole paragraph is formed at the level of cognitive space. Cognitive space can be understood as a higher-level comprehensive understanding space that includes a conceptual space, in which context, experience, and current task intent are combined to form an overall grasp of semantics. When concepts are correctly evoked and related, the meaning of the text is understood.

In this process, the semantic-conceptual mapping mechanism acts as a bridge: it links dynamic semantic representations with stable conceptual representations, so that we can "see" through different expressions and grasp the unchanging conceptual connotations. This mapping is considered to be a key mechanism of human language understanding and cognition. Conversely, in language generation, the brain needs to translate internal concepts into appropriate semantic representations (choosing the right words and sentence structures), which is actually the reverse of the above process. Therefore, there is a two-way mapping relationship between semantics and conceptual space: from semantics to concepts when understanding, and from concepts to semantics when expressing, which together constitute the semantic closed loop of cognitive systems. 

For artificial consciousness computing, we need to implement in the system a concept-semantic dual-space interaction mechanism similar to that of the human brain. For this reason, the ACPU architecture introduces a dedicated Semantic-Conceptual Fusion Unit (SCFU) to take care of this work. The SCFU completes the conversion in two directions through algorithm and hardware design:

·Semantic → concept mapping: maps distributed semantic representations generated by sensors or language models to internal conceptual representations. For example, the ACPU receives camera image data, and after the SCU extracts semantic features such as "red light, vehicles stopping", the SCFU is responsible for mapping these features onto the concept "traffic signal = red light means stop" in the concept space. In terms of implementation, a self-supervised bidirectional mapping model can be used to learn the alignment between semantic features and conceptual symbols from large-scale corpora and knowledge bases, without relying on manual annotation. This model acts as an encoder-decoder for inference: the encoder encodes semantic vectors into concept activations, and the decoder reproduces the concepts as a predicted semantic distribution, which is used during self-supervised training to verify the correctness of the mapping (a minimal illustrative sketch follows this list). 

·Concept → semantic mapping: translates ideas or decisions in the internal conceptual space into semantic forms suitable for external expression or execution. For example, when the ACPU generates a decision intent at the conceptual level (e.g., the concept "apply emergency braking"), the SCFU needs to convert it into a specific control signal or language output (e.g., a braking command or a warning message) for the SCU to execute or to communicate with the outside world. This process involves choosing appropriate symbols and parameters so that the output both carries the conceptual intent and is understandable by the receiver. 
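The two mapping directions described above can be sketched as a small encoder-decoder pair trained with a reconstruction objective. The following PyTorch fragment is a minimal illustration, not the production SCFU model; the dimensions, the module name SemanticConceptMapper, and the soft concept-activation layer are assumptions introduced here for demonstration.

```python
import torch
import torch.nn as nn

class SemanticConceptMapper(nn.Module):
    """Minimal sketch of an SCFU-style mapping model (assumed architecture).

    encode(): semantic vectors -> soft concept activations (semantic -> concept)
    decode(): concept activations -> reconstructed semantic vectors (concept -> semantic)
    Trained self-supervised by minimizing reconstruction error, with no manual labels.
    """
    def __init__(self, sem_dim=768, num_concepts=1024):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(sem_dim, 512), nn.ReLU(),
            nn.Linear(512, num_concepts),
        )
        self.decoder = nn.Sequential(
            nn.Linear(num_concepts, 512), nn.ReLU(),
            nn.Linear(512, sem_dim),
        )

    def encode(self, sem_vec):
        # Soft activation over the concept inventory (one-to-many mappings are allowed).
        return torch.softmax(self.encoder(sem_vec), dim=-1)

    def decode(self, concept_act):
        return self.decoder(concept_act)

    def forward(self, sem_vec):
        concept_act = self.encode(sem_vec)
        return self.decode(concept_act), concept_act


# Self-supervised training loop sketch: reconstruct semantic vectors
# produced by the SCU (random stand-ins here) without annotated data.
model = SemanticConceptMapper()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(100):
    sem_batch = torch.randn(32, 768)          # stand-in for SCU semantic vectors
    recon, _ = model(sem_batch)
    loss = loss_fn(recon, sem_batch)          # reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()
```

In inference, only encode() is needed to activate concepts for the CDU; decode() can be run when the stability of a mapping needs to be checked against the original semantics.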

It is important to note that the mapping performed by the SCFU is not a simple symbol substitution; it must handle complex cases such as one-to-many, many-to-one, and ambiguity. For example, a semantic pattern may correspond to multiple candidate concepts (ambiguous words), and the SCFU needs to select the most suitable concept in the concept space based on context. Conversely, a concept may have multiple expressions, and the SCFU needs to select the appropriate semantic output according to the context. This requires the SCFU to have certain inference and context-integration capabilities, which can improve mapping accuracy with the help of the knowledge graph and a contextual memory cache.

In order to achieve efficient semantic-conceptual interaction, ACPU adopts a series of engineering optimization measures when designing SCFU:

·High-bandwidth and low-latency communication mechanism: Since SCFU needs to frequently exchange data between the SCU and the CDU, the system uses cache sharing and dedicated high-speed channels (such as NVLink bus) to transmit information between the GPU and the CPU. This ensures that semantic vectors and concept activations are exchanged in milliseconds, significantly reducing the latency associated with data migration. 

·Task scheduling and resource allocation: the SCFU also assumes the function of dynamic task scheduling in the process of concept-semantic transformation. Tasks are graded according to the DIKWP dimensions: tasks at the data/information level are mainly handled by the SCU, tasks at the knowledge/wisdom level are handled by the CDU, and tasks that cross the semantic-conceptual boundary are coordinated by the SCFU. A real-time scheduling algorithm driven by the DIKWP model is introduced into the scheduling strategy, so that system resources can be intelligently allocated according to the current semantic load and conceptual inference requirements, improving overall computing efficiency (an illustrative scheduling sketch follows this list). 

·Continuous learning and adaptation: The mapping model of SCFU can continuously update the semantic-concept alignment parameters through federated learning or online learning mechanisms. In this way, when the environmental corpus changes, or the system introduces new knowledge, the SCFU can self-adjust, maintaining the accuracy and robustness of the mapping. This design ensures that the ACPU can adapt to the semantic migration of new domains and new tasks, and realize the real-time migration and expansion of the semantic-conceptual space. 
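As a hedged illustration of the DIKWP-driven scheduling policy above, the fragment below routes tasks to the SCU, CDU, or SCFU according to their DIKWP level, serving high-level (wisdom/purpose) work first. The level names, priority table, and routing rules are illustrative assumptions, not the actual ACPU scheduler.

```python
import heapq
from dataclasses import dataclass, field

# Illustrative priorities: higher DIKWP levels (wisdom/purpose) are served first,
# so the system does not drown in low-level data work and lose goal orientation.
LEVEL_PRIORITY = {"data": 4, "information": 3, "knowledge": 2, "wisdom": 1, "purpose": 0}
LEVEL_TARGET   = {"data": "SCU", "information": "SCU",
                  "knowledge": "CDU", "wisdom": "CDU", "purpose": "CDU"}

@dataclass(order=True)
class Task:
    priority: int
    name: str = field(compare=False)
    level: str = field(compare=False)
    crosses_boundary: bool = field(compare=False, default=False)

def submit(queue, name, level, crosses_boundary=False):
    heapq.heappush(queue, Task(LEVEL_PRIORITY[level], name, level, crosses_boundary))

def dispatch(queue):
    while queue:
        task = heapq.heappop(queue)
        # Tasks crossing the semantic-conceptual boundary are coordinated by the SCFU.
        target = "SCFU" if task.crosses_boundary else LEVEL_TARGET[task.level]
        print(f"{task.name:<28} level={task.level:<11} -> {target}")

q = []
submit(q, "camera frame batch", "data")
submit(q, "map semantics to concepts", "information", crosses_boundary=True)
submit(q, "route re-planning", "wisdom")
submit(q, "update mission goal", "purpose")
dispatch(q)   # purpose and wisdom tasks are dispatched before bulk perception work
```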

Through the above mechanisms, a two-way interaction channel between semantic space and conceptual space, similar to human cognition, is established within the ACPU, so that the massive pattern processing of the "subconscious" layer and the abstract decision-making of the "conscious" layer can be seamlessly connected. This provides technical support for the artificial consciousness system to close the loop between understanding and expression. Below, we dive into the design details of the core modules in the ACPU architecture and how they work together to achieve the semantic-conceptual fusion process described above. 

Artificial Consciousness Processing Unit (ACPU) architecture design

ACPU is an innovative computing architecture for artificial consciousness computing, and its overall design integrates the advantages of traditional CPUs and GPUs, and adds a special consciousness computing unit to realize the collaborative acceleration of subconscious parallel computing and conscious logical reasoning. Figure 1 shows the core modules of the ACPU architecture and their data flow relationships.

Figure 1: Schematic diagram of the core modules and data flow of the ACPU architecture (including the Subconscious Computing Unit SCU, the Conscious Decision Unit CDU, and the fusion unit SCFU between them). Perceptual input is processed in parallel by the SCU to extract semantic features; the SCFU maps them to activate concepts and intents in the CDU; after the CDU makes a decision, the SCFU feeds adjustments back to the semantic context, and finally the SCU executes the decision and outputs actions. 

1. Subconscious Computing Unit (SCU)

The Subconscious Computing Unit (SCU) corresponds to the low-level (data and information layer) processing in the DIKWP model, and its main function is to perform high-speed parallel processing and pattern extraction of multimodal perception data. In terms of hardware, the SCU adopts GPU-enhanced design or other massively parallel computing arrays (such as NPU, DSP, etc.) to give full play to its characteristics of being good at parallel computing. SCU has the following structural and functional characteristics:

·Multi-modal data interface: The SCU integrates multiple types of sensors and input interfaces, and can receive data of different modalities such as visual images, voice audio, text streams, and sensor signals. Each type of data is handed over to the corresponding pre-processing module for normalization (such as image normalization, speech feature extraction, text segmentation, etc.), and then enters the parallel computing pipeline. 

·Parallel computing cores: SCUs contain a large number of parallel processing units (PEs), such as GPU stream processors or neural network acceleration units, which are used to perform deep neural network inference, signal processing, and other work. Typically, models such as Transformer encoders and convolutional neural networks (CNNs) can be deployed inside the SCU to extract high-level features and patterns in the data. For example, for visual input, the CNN module of the SCU can quickly detect objects, colors, motion, and other information in the scene. For text input, SCU's Transformer module extracts the semantic vector representation of the sentence. 

·Automatic feature extraction: SCU realizes automatic feature extraction at the subconscious level, that is, the perception and initial understanding of information can be completed without global conscious intervention. This is similar to how the human cerebral cortex processes sensory input – most of which filters and recognizes signals before they become aware of it. SCU converts raw data into semantic representations, such as detected object categories, event triggers, and keywords, through the inference of deep learning models. These semantic representations are stored in the SCU's cache in the form of vectors or tensors, pending further processing. 

·Data parallelism and streaming: To improve throughput, SCU uses data parallelism and pipelining techniques to process continuous data streams. For example, multiple GPU cores process different batches of data at the same time, or use pipelines to split the steps such as sensing, preprocessing, and feature extraction, and each stage works in parallel. In this way, for applications with high real-time requirements (such as autonomous driving video frame processing), the SCU can complete the processing of large amounts of data in every millisecond, ensuring that subsequent decisions will not lag due to perceived delays. 

·Primary mode memory: SCU can also contain a short-term memory module to store the perceptual features of the most recent moment for contextual fusion by SCFU. This is similar to the sensory caching of the brain, which gives the system a short-term memory that can provide the most recent perceptual pattern to the conscious level reference when needed (e.g., contextual understanding in successive video frames). 

Overall, the SCU serves as the "sensory cortex" and "subconscious reflexes" of the ACPU. It realizes high-speed interpretation and compression of raw environmental data through highly parallel computing resources, and provides structured semantic information for the upper-layer CDU. The SCU is designed with a focus on compute throughput and real-time response, leveraging the powerful parallel computing capabilities of GPUs in implementation, and minimizing data-handling overhead by sharing memory with the CDU/SCFU or using high-speed interconnects (a minimal perception-pipeline sketch follows).
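The sketch below is a deliberately small stand-in for the SCU perception pipeline described in the bullets above: a tiny CNN for image-like input, a Transformer encoder for token-like input, and a short ring buffer playing the role of the primary pattern memory. The class name ToySCU, the dimensions, and the buffer length are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from collections import deque

class ToySCU(nn.Module):
    """Minimal stand-in for the SCU perception pipeline (illustrative only)."""
    def __init__(self, sem_dim=256, vocab=1000, cache_len=16):
        super().__init__()
        self.vision = nn.Sequential(          # subconscious visual pattern extraction
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, sem_dim),
        )
        self.embed = nn.Embedding(vocab, sem_dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=sem_dim, nhead=4, batch_first=True)
        self.text = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.cache = deque(maxlen=cache_len)  # short-term perceptual memory for the SCFU

    def perceive_image(self, img):
        vec = self.vision(img)                # (B, sem_dim) semantic vector
        self.cache.append(vec.detach())
        return vec

    def perceive_text(self, token_ids):
        h = self.text(self.embed(token_ids))  # (B, T, sem_dim)
        vec = h.mean(dim=1)                   # pooled sentence-level semantics
        self.cache.append(vec.detach())
        return vec

scu = ToySCU()
img_sem = scu.perceive_image(torch.randn(1, 3, 64, 64))
txt_sem = scu.perceive_text(torch.randint(0, 1000, (1, 12)))
print(img_sem.shape, txt_sem.shape, len(scu.cache))
```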

2. Conscious Decision Unit (CDU)

The Conscious Decision Unit (CDU) corresponds to the high-level (knowledge, wisdom, and intention layers) processing in the DIKWP model, and is the core of the whole system for reasoning, decision-making, and planning. The CDU is designed around CPU enhancement and emphasizes serial logic processing capabilities and complex control flows. At the same time, the CDU integrates knowledge representation and inference engines for a higher level of understanding and utilization of the semantic information from the SCU. The structure and functions of the CDU can be summarized as follows:

·Knowledge representation and storage: The CDU maintains a conceptual space internally, which usually exists in the form of a knowledge base or knowledge graph. For example, in medical applications, the knowledge base of the CDU contains the medical ontology of diseases, symptoms, drugs, and the relationships between them; In autonomous driving, it includes knowledge of traffic rules and vehicle behavior models. CDUs may use technologies such as graph databases and triplet storage to efficiently store and query knowledge, and represent knowledge through embedded vectors or logical rules: the former is convenient for docking with the vector output of the SCU, and the latter is convenient for accurate reasoning. 

·Logical reasoning and planning: the CDU is equipped with an inference engine that supports a combination of symbolic deductive reasoning and statistical reasoning. Symbolic reasoning is based on the rules of the knowledge base (e.g., using a logical reasoner to perform logical calculus over the knowledge graph); statistical inference invokes machine learning models (e.g., reinforcement learning policy networks, tree search algorithms) to handle complex decisions. Decision generation at the wisdom layer can be regarded as an optimization process under intent constraints: the CDU computes the best course of action based on the current conceptual state, historical experience, and target requirements. For example, in an autonomous driving scenario, when the SCU identifies an obstacle ahead, the CDU needs to decide between "braking" and "detouring" after weighing speed, road conditions, regulations (knowledge), and safety priorities (intent) (a small rule-plus-intent sketch follows this list). 

·Intent management and self-monitoring: The CDU contains an intent management module that is responsible for tracking and updating the target and sub-target status of the system. When a new high-level task is received, the intent management module breaks it down into subtasks, sets evaluation criteria, and monitors the achievement of the goal during execution. If the environment changes or conflicts arise, the CDU can adjust intentions in real time (e.g., change the planned route, or modify assumptions in a medical diagnosis) to reflect the moderating effect of self-awareness. This adaptive adjustment works with the SCFU to communicate the updated intent to the SCU to influence its perceived focus (e.g., more focus on a certain type of sensor data). 

·Decision explainability: since the CDU largely determines the external behavior of the system, it needs to provide a degree of decision interpretability in order to meet industrial requirements for safety and compliance. The CDU can record the main knowledge and rule chain referenced when making a decision, generating a brief explanation (e.g., "Action Y is chosen due to factor X"). This mechanism facilitates human review and debugging and strengthens user trust in the artificial consciousness system. In fact, one of the benefits of introducing the DIKWP model is that the system's processing at every level can be tracked, making the complex AI decision-making process more transparent. 

·Execution control interface: The CDU sends the final decision (e.g., control commands, answer content) to the output execution module (usually the execution unit of the SCU is still responsible for the specific operation). In some applications, the CDU can also be connected directly to an external actuator (e.g. a robot control bus). The CDU is responsible for ensuring the safety and correctness of the decision-making signals, such as redundancy checks on critical signals to avoid risky behaviors due to noise at the subconscious level. 
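To make the combination of a knowledge base, a rule, and an intent constraint concrete (see the bullet on logical reasoning and planning above), here is a deliberately small sketch. The triples, rule chain, intent flag, and action names are hypothetical placeholders, not part of any released ACPU knowledge base.

```python
# Hypothetical mini knowledge base as (subject, relation, object) triples.
triples = {
    ("red_light", "implies", "must_stop"),
    ("obstacle_ahead", "implies", "must_avoid"),
    ("must_stop", "action", "brake"),
    ("must_avoid", "action", "swerve_or_brake"),
}

def infer_obligations(facts):
    """Forward-chain one step: perceived facts -> obligations via 'implies' edges."""
    return {o for (s, r, o) in triples if r == "implies" and s in facts}

def choose_action(obligations, intent):
    """Pick candidate actions for each obligation, filtered by an intent constraint."""
    candidates = [o for (s, r, o) in triples
                  if r == "action" and s in obligations]
    # Illustrative intent constraint: a safety-first intent forbids aggressive maneuvers.
    if intent.get("safety_first"):
        candidates = [a for a in candidates if a != "swerve_or_brake"] or ["brake"]
    return candidates[0] if candidates else "continue"

facts = {"red_light"}                      # activated concepts supplied by the SCFU
intent = {"safety_first": True}            # maintained by the intent-management module
action = choose_action(infer_obligations(facts), intent)
print(action)                              # -> "brake"
# The chosen action and the rule chain that produced it can be logged,
# supporting the decision-explainability requirement described above.
```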

In summary, the CDU acts as the brain's "prefrontal cortex" and "self-awareness" location, undertaking the task of rational thinking, planning, and decision-making. It uses the excellent control and logic processing capabilities of the CPU, supplemented by knowledge representation and AI inference algorithms, to achieve in-depth understanding and high-level synthesis of the information provided by the SCU. In ACPU, the CDU is the key to realizing wisdom and intent, and its performance and reliability directly affect whether the system can make correct and safe decisions in complex situations.

3. Subconscious-Conscious Fusion Unit (SCFU)

The Subconscious-Conscious Fusion Unit (SCFU) is the innovative core of the ACPU architecture, which is located between the SCU and the CDU, and is responsible for the information conversion and coordination control of the two. The emergence of SCFU enables the two sets of mechanisms of "subconscious" and "conscious" to truly merge into a new paradigm of overall collaborative work. Key design points include:

·Semantic-conceptual two-way mapping model: as mentioned in the previous chapter, the SCFU has a built-in mapping algorithm between semantic space and conceptual space. This is typically realized by a self-supervised deep model containing an encoder for semantic → concept mapping and a decoder for concept → semantic mapping. In the implementation, a Transformer architecture can be used: the encoder reads a sequence of semantic vectors from the SCU and maps it to an abstract conceptual representation; the decoder attempts to reconstruct the original semantic sequence from that conceptual representation. The training objective is to minimize the reconstruction error so as to approximate the correct semantic-conceptual correspondence. In the inference phase, the encoder generates concept activations, and if necessary the decoder verifies them by reconstruction to ensure the stability of the mapping. This design allows the mapping model to learn without annotated data, which is the basis for the SCFU to achieve high-quality cross-space mapping. 

·Real-time dual-space interaction: SCFU not only performs one-time mapping, but also supports a continuous bidirectional flow of information between the SCU and the CDU. When the SCU extracts a new semantic, the SCFU immediately encodes and activates the corresponding concepts to pass to the CDU; Conversely, when the CDU forms a new intent or assumption about a concept, the SCFU decodes it into semantic expectations that are fed back to the SCU's contextual processing module. For example, in a dialogue system, the CDU may realize that a concept needs to be clarified, and the SCFU will direct the SCU to focus on the words related to the concept in subsequent semantics. This two-way interaction ensures that the conscious mind guides the subconscious mind (choosing the perceptual focus) and that the subconscious enriches the conscious mind (providing fresh semantics for reasoning). SCFU thus acts as a bus and translator, merging two sets of cognitive processes. 

·Task and resource scheduling: the SCFU has a certain central-coordination function. It dynamically allocates computing resources and schedules tasks to be executed on the SCU and CDU based on the processing requirements of each DIKWP layer. When the volume of perceptual data is large but the conceptual reasoning is simple, more GPU computing power can be devoted to batch processing; when a complex inference task is encountered, the CDU is notified to raise its priority and the SCU reduces the frequency of new data collection to free up computing resources. Such a scheduling strategy is driven by the DIKWP model and can be realized by tagging each task with a DIKWP level and maintaining a scheduling queue, prioritizing tasks related to high-level intent so that the system does not get lost in massive low-level data and lose its goal orientation. 

·Heterogeneous communication acceleration: SCFU acts as a bridge between the CPU and GPU, and deploys high-speed interconnection and shared cache on the hardware. For example, the use of high-bandwidth buses such as NVIDIA's NVLink or PCIe 5.0, as well as HBM's high-speed memory, allows SCUs and CDUs to access shared data at near-local memory speeds. In addition, the SCFU itself can have a dedicated scratchpad for storing intermediate semantic representations and concept activations, avoiding frequent reads and writes to the main memory. These measures ensure that the delay of information exchange between SCU and CDU is reduced to milliseconds, which is essential for real-time artificial consciousness operation. 

·Security & Control: Since the SCFU is in charge of the flow of information between the conscious and subconscious minds, it also assumes the responsibility of security monitoring. For example, a check mechanism can be set up to prevent abnormal or unauthorized intent signals from directly affecting low-level execution (similar to filtering in the human brain to avoid distraction or false signals that cause the body to malfunction). This function can be thought of as "awareness monitoring": when the CDU's decision is clearly irrational (potentially vulnerable or wrong), the SCFU can request further confirmation or trigger a safe mode. This ensures that the artificial awareness system is fault-tolerant and safe for mission-critical tasks. 

In summary, the SCFU is the key module in the ACPU architecture that enables "1+1 > 2". Through the SCFU, the sub-symbolic representations of deep learning and the conceptual representations of symbolic AI are integrated, so that the system has both pattern-recognition ability and logical-reasoning ability, and the two can complement each other within a unified system. The design of the SCFU marks the evolution of AI systems from the traditional "perception-decision" serial pipeline to a new architecture of "subconscious-conscious" parallel fusion, an important step toward true artificial consciousness. 

4. Heterogeneous acceleration and hardware implementation

The ACPU architecture makes full use of heterogeneous computing concepts to maximize performance and efficiency by fusing different types of computing units into a single chip or system. Its hardware implementation considers the following aspects:

·CPU+GPU+Dedicated SoC: At the chip level, the CPU core of the CDU, the GPU array of the SCU, and the dedicated unit of the SCFU can be integrated into a single SoC. Modern SoC designs are capable of providing shared, high-speed on-chip interconnects (e.g., NoC networks), as well as multi-level caches for high-speed communication between different modules. Through careful chip layout and interconnection architecture design, the collaboration cost of CPU and GPU is minimized. Compared with the traditional independent CPU + independent GPU solution, the ACPU SoC avoids the bottleneck of transmitting big data through the motherboard bus and significantly reduces the latency. At the same time, the dedicated SCFU circuit can be used as a smart router on the NoC to dynamically allocate data streams and improve bandwidth utilization. 

·Unified memory address space: To facilitate SCU and CDU access to shared data structures (such as concept maps and semantic caches), ACPU uses a unified memory address space (UMA) architecture. The CPU and GPU share a portion of the physical memory, or share memory logically through advanced memory-coherence protocols. This means that the feature vectors extracted by the SCU can be read directly by the CDU without the need for explicit copies; Knowledge items updated by the CDU can also be mapped directly to areas accessible to the SCU. This zero-copy design greatly improves the efficiency of data exchange between different computing units, which is especially important for real-time artificial awareness systems. 

·Reconfigurability and configurability: considering that the computing-resource requirements of different applications may vary greatly, the ACPU can be designed to be reconfigurable or modular to a certain extent. For example, several SCU GPU clusters, a CDU multi-core cluster, and multiple SCFU units can be connected via an on-chip network, and the supply frequency or number of enabled units for each part can be dynamically adjusted according to the task load: when the workload is perception-heavy, more GPU cores are activated; when it is mainly reasoning, the CPU frequency is increased. This is similar to combining a big.LITTLE architecture with Dynamic Voltage and Frequency Scaling (DVFS), but takes into account the DIKWP hierarchical task characteristics unique to the ACPU. On-chip controllers and scheduling strategies are used to achieve on-demand heterogeneous computing scheduling and strike a balance between performance and energy consumption. 

·FPGA/ASIC acceleration: in initial R&D and in specific scenarios, FPGAs can be used to prototype the SCFU to verify algorithm effects and tune parameters. Once the algorithms are mature, the SCFU and frequently used neural-network acceleration units can be implemented as ASIC hardware acceleration units to improve efficiency. In particular, some of the randomness or noise-injection mechanisms involved in the consciousness "BUG" theory can be efficiently implemented by designing dedicated hardware random sources or fuzzy-logic units. Similarly, inference for commonly used models such as Transformers can be integrated into the ACPU via a Transformer accelerator (similar to the matrix-operation unit of Google's TPU) to further improve SCU performance. 

·Compatibility and scalability: the ACPU needs to be designed to be compatible with existing computing infrastructure. For example, a standard bus interface (PCIe/CCIX) allows the ACPU to be inserted into existing servers as a co-processing acceleration card, or a standard instruction set (such as RISC-V custom extensions) can be used to facilitate programming by developers. As the technology evolves, the ACPU can also be combined with brain-inspired computing chips or in-memory computing to move part of the subconscious computation closer to storage and break through the bottleneck of the von Neumann architecture. At the same time, the modular design allows third-party IP cores (such as quantum-computing acceleration modules for combinatorial-explosion problems at the very top of the wisdom layer) to be connected in the future, keeping the architecture forward-looking. 

Through the above-mentioned heterogeneous acceleration and hardware optimization, ACPU can provide artificial consciousness computing performance per unit power consumption and unit volume that far exceeds that of traditional CPU+GPU combinations. This paves the way for a future when artificial consciousness moves from the lab to embedded, edge devices, and even mobile devices. For example, an autonomous vehicle controller equipped with ACPU is expected to achieve several times faster perception and decision-making speed than the existing CPU+GPU solution without increasing power consumption, so as to respond to road emergencies in a more timely manner and improve safety and reliability.

Typical application cases

To verify and demonstrate the practical value of the ACPU architecture, this section describes some typical application scenarios. These cases cover the fields of healthcare, autonomous driving, and smart health, highlighting the advantages of ACPU in semantic extraction, cognitive decision-making, and human-computer interaction. Through the actual case analysis, we can see how the ACPU architecture can improve the intelligence level of the system and meet the industry's demand for explainable, efficient, and real-time AI.

Case 1: Medical intelligence assists decision-making

In the field of medical diagnosis and decision support, the introduction of ACPU architecture is expected to significantly improve the intelligence and reliability of clinical decision-making systems. Traditional medical AI (such as disease risk prediction and treatment plan recommendation) is mostly based on pattern recognition algorithms, which lacks in-depth understanding of medical knowledge and physician intentions. ACPU can combine massive medical data processing with medical knowledge reasoning to achieve a decision-making process that is closer to the thinking of human doctors.

·Semantic analysis of medical records: A patient's electronic medical record contains a large amount of unstructured information such as symptoms, signs, and test results. SCU can use medical pre-trained language models (such as Transformer such as BioBERT) to perform semantic analysis of medical record texts, extract key information units (such as patient complaints, past medical history, and laboratory results) and perform preliminary classification. For example, from a paragraph of description, we can extract key points such as "fever of 38.5°C, cough, history of diabetes" and the corresponding values. 

·Medical concept association: the SCFU maps the information extracted by the SCU into the medical concept space. This may include matching medical-terminology concepts (fever → the "fever" symptom concept, cough → the "cough" symptom concept, history of diabetes → past medical history) and evoking relevant medical knowledge nodes (e.g., "fever + cough" associated with the concept of "lung infection"). The CDU then activates these concepts in the knowledge graph and discovers possible etiologies or diagnostic directions through relational reasoning. 

·Comprehensive intelligent decision-making: the CDU makes inferences about the current condition based on a built-in clinical knowledge base (including associations between diseases and symptoms, diagnostic guidelines, etc.). For example, it can apply a set of heuristic reasoning rules: the patient has fever and cough, and the blood test shows an elevated WBC count, so the CDU infers that "pneumonia" is the more probable diagnosis; however, because the patient has a history of diabetes, immune function needs to be considered and further investigation for specific bacterial infections may be required. This reasoning process combines the dual advantages of being data-driven (symptom-matching models) and knowledge-driven (medical rules). Finally, the CDU generates a decision recommendation such as "possible diagnosis of community-acquired pneumonia; recommend a chest X-ray and broad-spectrum antibiotic therapy" (a toy scoring sketch follows this list). 

·Intent feedback and interpretation: after the decision is generated, the CDU checks whether medical intent and ethical constraints are satisfied (e.g., "do no harm to the patient first", "consider the patient's special circumstances"). If a recommendation may conflict with a specific contraindication of the patient (e.g., a drug is not appropriate for diabetics), the intent-management module adjusts the protocol. The final recommendations are converted into natural-language explanations by the SCFU and output by the SCU to the physician for review. For example, the system might explain: "The recommended regimen is based on the patient's symptoms and examination results pointing to pneumonia; taking into account the patient's history of diabetes, antibiotic A, which has little effect on blood glucose, was selected." This interpretable output strengthens doctors' trust in AI recommendations and facilitates their final decision-making. 

·Continuous learning: During the application process, ACPU can continuously learn from the doctor's decision-making feedback. For example, if the AI of a case recommends option A but the doctor chooses option B, the system will convert the difference into a knowledge update signal through SCFU, adjust the relevant weights or add new rules in the knowledge base of the CDU to gradually approach the expert decision-making model. Over time, ACPU will become more and more "doctor-aware" and provide more accurate decision-making assistance. 
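To make the heuristic reasoning in this case concrete (see the bullet on comprehensive intelligent decision-making above), the sketch below scores a few hypothetical diagnosis candidates from extracted findings and filters the recommendation by a patient-specific contraindication. All rules, weights, and drug names are invented placeholders for illustration and are not clinical guidance.

```python
# Hypothetical scoring rules: finding -> (diagnosis, weight).
RULES = [
    ("fever",        "pneumonia", 0.3),
    ("cough",        "pneumonia", 0.3),
    ("elevated_wbc", "pneumonia", 0.3),
    ("fever",        "influenza", 0.2),
    ("cough",        "influenza", 0.2),
]

# Hypothetical treatment table with contraindication tags.
TREATMENTS = {
    "pneumonia": [("antibiotic_A", set()), ("antibiotic_B", {"diabetes"})],
    "influenza": [("antiviral_C", set())],
}

def rank_diagnoses(findings):
    """Accumulate evidence weights per candidate diagnosis."""
    scores = {}
    for finding, dx, w in RULES:
        if finding in findings:
            scores[dx] = scores.get(dx, 0.0) + w
    return sorted(scores.items(), key=lambda kv: -kv[1])

def recommend(findings, history):
    """Top diagnosis plus the first treatment not contraindicated by the history."""
    (dx, score), *_ = rank_diagnoses(findings)
    for drug, contraindications in TREATMENTS[dx]:
        if not (contraindications & history):      # intent check: "do no harm first"
            return dx, round(score, 2), drug
    return dx, round(score, 2), None

findings = {"fever", "cough", "elevated_wbc"}       # extracted by the SCU from the record
history = {"diabetes"}                              # past-medical-history concept
print(recommend(findings, history))                 # -> ('pneumonia', 0.9, 'antibiotic_A')
```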

With the introduction of the ACPU, medical AI is no longer a black-box "pattern matcher", but an intelligent assistant that can reason in a way informed by the accumulated experience of doctors. It can efficiently process large amounts of case data and provide well-reasoned recommendations when it matters most. In testing, a simulated medical consultation using the DIKWP artificial consciousness system was able to distinguish disease types more accurately and give reasonable explanations. In the future, such systems can be used for decision support for primary-care doctors, discussion assistance for difficult cases, and patient self-consultation, greatly improving the quality and efficiency of medical services.

Case 2: Cognitive architecture for autonomous driving

Autonomous driving is one of the fields with the highest requirements for AI perception and decision-making. Traditional autonomous driving systems are usually split into perception, planning, and control modules, each working in a serial manner, which may lead to delays and inconsistencies in information transmission. Introducing the ACPU architecture into the autonomous driving cognitive system can raise the vehicle's perception and decision-making to a level of "consciousness-like" capability:

·Real-time environmental perception: The data of sensors such as lidar, camera, and radar equipped with the vehicle is processed in parallel by the SCU, and dozens of frames of environmental models can be generated per second. SCU uses deep neural networks to detect information such as vehicles, pedestrians, traffic signs, lane markings, etc. on the road, while tracking their dynamic state (speed, acceleration) to form a flow of information. Compared with traditional pipelines, ACPU's SCUs have higher throughput rates and multi-modal fusion capabilities, enabling faster and more comprehensive understanding of the surrounding environment. 

·Scene semantic understanding: SCFU maps the perceived elements as a conceptual representation of the traffic scene. For example, if a red light is on and a pedestrian is crossing, this corresponds to the concept of "pedestrian crossing scene - need to stop and avoid" in the concept space. The concept space also contains the current driving status (such as the speed of the car, destination) and high-definition map knowledge. This step transforms a large amount of siloed perception data into a meaningful scene graph that includes abstract information such as road topology, participant intent (pedestrians want to cross the street), and more. 

·Driving decision-making and planning: the CDU makes intelligent decisions based on knowledge of vehicle driving strategies (traffic rules, safety-distance models, route-planning algorithms, etc.). In the scenario above, the CDU applies the rule "red light and pedestrians = must stop". It consults both the intent layer (the vehicle's destination and priorities) and the wisdom layer's policies (e.g., whether a lane change is needed to bypass an obstacle). Here, the CDU may judge that waiting at the red light is necessary and brief, with little impact on the final arrival time, and that obeying traffic rules and stopping is the only reasonable action. The CDU then plans a braking curve, bringing the car safely to a stop before the stop line. 

·Intent-driven attention: while waiting at the red light, the CDU's intent-management module continuously monitors changes in the surrounding environment. If, for example, an emergency vehicle is detected approaching from behind, the intent layer may switch (it is now necessary to give way as soon as possible). At this point, the SCFU passes the new intent to the SCU and makes it pay special attention to analyzing the rear-view camera images (a subconscious attention shift). Through this purpose-driven perceptual adjustment, the ACPU achieves behavior similar to a human driver shifting attention in response to changes in the situation (a small attention-reweighting sketch follows this list). 

·Action Execution & Liability Attribution: CDU decisions (e.g., "stop") are sent back to the SCU, and the SCU controls the vehicle to apply the brakes. It is worth emphasizing that in the ACPU system, the decision chain can be recorded for responsibility traceability: if an accident occurs, the DIKWP link can be traced back to find out the status of the data, information, knowledge, and wisdom at that time, and explain why the decision was made. For example, it is possible to answer, "The system detected a pedestrian crossing the street (information layer), stopped according to traffic rules (knowledge layer), and performed braking based on safety intent (intention layer)", thereby increasing public and regulatory confidence in autonomous driving AI. 

With the support of the ACPU, autonomous driving systems become more intelligent and reliable. They no longer merely react mechanically to sensor data, but have the ability to understand traffic situations and to examine and adjust their own behavior. For example, the DIKWP artificial consciousness system can autonomously learn traffic-flow patterns and optimize traffic-light timing and vehicle scheduling, showing great potential at the level of intelligent transportation systems. For single-vehicle intelligence, the ACPU gives the vehicle a certain "driving awareness", enabling decisions that are more in line with human common sense and ethics in complex road conditions (such as choices under "moral dilemmas"). In short, in the field of autonomous driving, the ACPU architecture is expected to help achieve the powerful cognitive and decision-making capabilities required for higher levels of autonomy (L4/L5).

Case 3: Smart health personalized service

The field of smart health covers application scenarios such as personal health management, wearable device monitoring, and elderly care assistants. The challenge here is that the physiological data of the individual needs to be continuously monitored over a long period of time and individualized guidance, while at the same time managing the interaction between humans and AI. The ACPU architecture is capable of performing such tasks:

·Multi-source physiological data fusion: Multiple health sensors (heart rate bands, blood pressure monitors, sleep monitors, etc.) worn by individuals continuously generate data streams. SCU can process these time series data in parallel, and use time series models (such as LSTM and time series Transformer) to extract health status information, such as heart rate variability indicators, sleep stages, and exercise levels. SCU can also combine the daily behavior data (steps, schedule) on the mobile phone as auxiliary information to form a comprehensive "perception" of the individual's daily status. 

·Health Concept Modeling: SCFU maps sensor data to a health concept space. This conceptual space may include abstract concepts such as "level of fatigue", "stress level", "cardiovascular load", "sleep quality", etc., each defined by a certain rule or model (e.g. the degree of fatigue depends on the length of sleep and the amount of activity). Accordingly, the CDU updates the knowledge base of users' health records to document the temporal evolution of these concepts. 

·Personalized intelligent analysis: the CDU has built-in medical and behavioral-science knowledge to analyze the user's health status and make personalized recommendations. For example, when it detects a persistent increase in "stress level" and a decrease in "sleep quality", the CDU makes a judgment based on knowledge (long-term stress can lead to health problems) and user intent (the user wants to improve sleep), and gives a recommendation such as "It is recommended to go to bed one hour earlier tonight and meditate for 10 minutes before bed to reduce stress" (a toy concept-scoring sketch follows this list). 

·Interaction and intent adjustment: Smart health systems need to interact with users, and ACPU can provide natural language interpretation and dialogue capabilities. The SCFU translates the CDU's recommendations into gentle language to inform the user via voice or message, and receives user feedback. If the user responds (e.g., "I have overtime tonight, I can't go to bed early"), CDU's intention management adjusts the short-term goal (to "take as many naps as possible during possible times") and instructs the SCU to monitor heart rate changes at night through the SCFU to assess whether naps are effective. The whole process realizes two-way communication with human-machine integration, and the AI not only provides suggestions, but also understands the user's wishes and limitations. 

·Continuous learning and optimization: Over time, the ACPU learns an individual's health patterns. For example, if it finds that the user tends to be stressed on Wednesdays, it suggests mitigation measures in advance every Wednesday; or, if it finds that the user dislikes a certain type of suggestion, it adjusts the wording and plan to better suit the user's habits. This meta-learning makes the AI assistant increasingly personalized. The small-model, low-compute artificial consciousness system DIKWP-AC developed by Duan Yucong's team, composed of a mathematical subsystem and a physiological subsystem, realizes interpretation of and interaction with users' physiological and mathematical data and can provide highly explainable consciousness-computing services on low-compute devices. 
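
To make the personalized-analytics step above concrete, the following is a minimal sketch (in Python) of how a CDU-style rule could turn concept-level health states into a recommendation. The class name HealthConcepts, the function recommend, and the thresholds are hypothetical illustrations, not part of the DIKWP-AC implementation.

# Hypothetical sketch: concept-level health state -> recommendation
from dataclasses import dataclass

@dataclass
class HealthConcepts:
    stress_level: float      # 0..1, derived from HRV and behavioral data
    sleep_quality: float     # 0..1, derived from sleep staging
    user_intent: str         # e.g. "improve_sleep"

def recommend(state: HealthConcepts) -> str:
    # Knowledge: sustained stress plus poor sleep calls for intervention
    if state.stress_level > 0.7 and state.sleep_quality < 0.5:
        if state.user_intent == "improve_sleep":
            return ("Go to bed one hour earlier tonight and "
                    "meditate for 10 minutes before bed to reduce stress.")
        return "Consider a short relaxation break today."
    return "Current state looks stable; keep the existing routine."

print(recommend(HealthConcepts(stress_level=0.8, sleep_quality=0.4,
                               user_intent="improve_sleep")))

In a full system, such rules would be only one part of the CDU's reasoning, supplemented by learned models and the intent-adjustment loop described above.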

The application of the ACPU in smart health shows that the artificial consciousness architecture is well suited to long-term, human-centered, cross-domain tasks. It can understand not only the data but also the person, truly playing the role of a caring and professional health consultant. In elderly care, a similar system can monitor the state of the elderly and give timely notification when an abnormality occurs, while interacting in a way the elderly find easy to accept; in mental health, it can sensitively detect emotional changes and offer counseling suggestions. All of this reflects the situational awareness and autonomous coordination that the ACPU architecture gives to an AI system, making intelligent services more trustworthy and humane.

Algorithm implementation integrating cutting-edge technologies

The success of the ACPU architecture depends on integrating today's cutting-edge AI technologies to give the system comprehensive capabilities from perception to cognition. Below we discuss in detail how several key technologies are integrated in the ACPU, along with the corresponding algorithm structures and pseudocode examples. These technologies include Transformer-based large models, graph neural networks and knowledge graphs, reinforcement learning, and meta-learning, corresponding to the implementation requirements of different ACPU modules and functions. 

1. Transformer-based Semantic Representation Extraction (SCU)

The Transformer architecture and its derived large-scale pre-trained models (e.g., BERT and the GPT series) excel in natural language and vision and have become the de facto standard for extracting semantic representations. The ACPU's SCU module leverages Transformer models to achieve deep semantic understanding of complex inputs:

·Text and speech: For text input, the SCU can use pre-trained Chinese BERT or GPT models to encode sentences into context-dependent word-vector representations. These vectors, as elements of the semantic space, retain the important information and semantic relations in the sentence. Speech input can be converted to text via speech recognition and then processed by BERT, or a Transformer-based speech model (such as Wav2Vec 2.0) can extract semantic features directly from the audio. 

·Vision: In computer vision, models such as ViT (Vision Transformer) can now replace CNNs for feature extraction. The SCU can use ViT to split an image into patch embeddings; after Transformer encoding, it obtains a global semantic representation of the whole image as well as feature vectors for local targets. For video, a spatiotemporal Transformer is used to extract motion semantics. 

·Multimodal fusion: Transformers can also serve as multimodal fusion models (such as VisualBERT and CLIP). The SCU can deploy a multimodal Transformer to project information from vision, language, and other modalities into a common embedding space, achieving cross-modal alignment. This is important for complex scenarios, such as autonomous driving, where both camera images and textual road signs need to be understood. 

Through Transformers, the SCU obtains rich and abstract semantic vector representations. These representations are not only accurate but also dimensionally uniform, which facilitates the subsequent SCFU mapping into the conceptual space. For example, a text Transformer might output a 768-dimensional vector and a visual Transformer a 512-dimensional vector; the SCFU can learn to map both into a shared conceptual representation space. It is also worth noting that the Transformer's attention mechanism can be coupled with intent in the ACPU: the SCU's attention weights can be redistributed under the guidance of the SCFU, for instance focusing on certain words or image regions to reflect the concerns of the consciousness layer.
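
As an illustration of the dimensional unification mentioned above, the following is a minimal sketch, assuming PyTorch, of how an SCFU-style projector could map a 768-dimensional text embedding and a 512-dimensional image embedding into one shared concept space. The class SharedConceptProjector and the 256-dimensional concept space are hypothetical choices, not a prescribed part of the ACPU.

# Hypothetical sketch: projecting modality-specific embeddings into a shared concept space
import torch
import torch.nn as nn

class SharedConceptProjector(nn.Module):
    """Maps text (768-d) and image (512-d) embeddings into one concept space."""
    def __init__(self, concept_dim: int = 256):
        super().__init__()
        self.text_proj = nn.Linear(768, concept_dim)
        self.image_proj = nn.Linear(512, concept_dim)

    def forward(self, text_vec: torch.Tensor, image_vec: torch.Tensor):
        # L2-normalize so that cosine similarity is comparable across modalities
        t = nn.functional.normalize(self.text_proj(text_vec), dim=-1)
        v = nn.functional.normalize(self.image_proj(image_vec), dim=-1)
        return t, v

projector = SharedConceptProjector()
text_vec = torch.randn(1, 768)    # stand-in for a BERT sentence embedding
image_vec = torch.randn(1, 512)   # stand-in for a ViT image embedding
t, v = projector(text_vec, image_vec)
similarity = (t * v).sum(dim=-1)  # cross-modal alignment score

In practice the projection layers would be trained jointly with the concept space (e.g., with a contrastive objective), so that aligned text and image inputs land close to the same concepts.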

2. Graph Neural Network and Knowledge Graph Integration (CDU)

The ACPU's CDU needs to handle structured knowledge and conceptual relations, and the knowledge graph is the ideal representation for this. However, traditional knowledge-graph reasoning mostly relies on symbolic methods, which are difficult to fuse with the vector representations extracted by neural networks. The advent of graph neural networks (GNNs) provides a solution to this problem:

·Conceptual representation learning: To embed a knowledge graph into a vector space, algorithms such as GraphSAGE and TransE can be used, although translation-style embeddings such as TransE capture little of the higher-order structure of the graph. GNNs (such as GCN and GAT) propagate information layer by layer over the knowledge graph, encoding each concept node's attributes and neighborhood information into a vector. Concept vectors obtained in this way capture the semantic structure of the graph and can conveniently be compared or combined with the semantic vectors output by the SCU. For example, the vector of the concept "pneumonia" is adjusted according to its neighbors (symptoms, signs, treatments), so that in the vector space it lies close to the semantic vector of a medical record describing a similar combination of symptoms. 

·Graph inference: GNNs themselves can also perform certain inference functions, such as node classification (judging which disease corresponds to a combination of symptoms) or link prediction (predicting a possible relation between two concepts). After the SCFU mapping has activated concepts, the CDU can run a GNN forward pass to obtain the attribute distribution of the newly activated concepts or to recommend new relations. For example, from the activated concepts "fever" and "cough", the GNN may infer a high probability of activating the "pneumonia" node. 

·Knowledge and data fusion: The ACPU can adopt a hybrid inference mechanism in which symbolic logic and neural reasoning run in parallel. GNNs provide the neural reasoning, while logical rules are added as constraints. For example, the rule "if X is an infectious disease and the patient has a high fever, prompt a check for the source of infection" can be triggered by the symbolic engine once the GNN has identified disease X. In this way symbolic AI ensures that critical knowledge is never forgotten, while neural AI provides flexibility and robustness to noise. 

A CDU with integrated GNNs can exploit knowledge graphs in depth. Duan Yucong et al. extended the notion of knowledge graph into five kinds of graphs (data, information, knowledge, wisdom, and intent) to address large-scale concept fusion and semantic ambiguity. This means the CDU can maintain multi-level graphs, and GNNs can pass messages across them to enable cross-layer inference. For example, starting from the data graph, a message propagates through the information graph to a knowledge-graph node, then rises to the wisdom graph for a value judgment, and finally an action strategy is selected on the intent graph. This multi-layer graph structure corresponds naturally to the DIKWP model, so the algorithmic structure maps cleanly onto the theoretical levels.
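
To illustrate the kind of propagation described above, the following is a minimal sketch of one GCN-style message-passing step over a tiny concept graph, written in plain PyTorch. The adjacency matrix, embedding sizes, and simple row normalization are illustrative assumptions rather than the CDU's actual design.

# Hypothetical sketch: one GCN-style propagation step over a small concept graph
import torch

concepts = ["fever", "cough", "pneumonia", "antibiotic"]
# Adjacency with self-loops: symptom -> disease -> treatment links (illustrative)
A = torch.tensor([[1., 0., 1., 0.],
                  [0., 1., 1., 0.],
                  [1., 1., 1., 1.],
                  [0., 0., 1., 1.]])
deg = A.sum(dim=1, keepdim=True)
A_norm = A / deg                                   # simple row normalization

X = torch.randn(4, 16)                             # initial concept embeddings
W = torch.randn(16, 16) * 0.1                      # weight matrix (random here; learned in practice)
H = torch.relu(A_norm @ X @ W)                     # one round of neighbor aggregation

# After propagation, "pneumonia" now mixes information from its symptom neighbors,
# so it can be compared with the SCU's semantic vector for a symptom description.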

3. Reinforcement Learning and Meta-Learning Applications (Decision Optimization)

At the wisdom and intent layers of the ACPU, many decision problems are sequential and require long-horizon optimization. Here, reinforcement learning (RL) can be introduced so that the system explores and optimizes strategies on its own, and meta-learning can be used to improve the system's ability to adapt.

·Reinforcement-learning decision-making: For problems such as path planning in autonomous driving or multi-step medical treatment plans, the CDU can introduce RL algorithms. The CDU feeds the state of the environment (a conceptual-space representation) into a policy network, which outputs a course of action. Through repeated trials in a simulation environment, adjusted according to a reward function, the policy network gradually learns a decision sequence that optimizes long-term return. For example, a Deep Q-Network can be used in autonomous driving to learn how to merge lanes most efficiently in dense traffic, and RL can be used to find individualized treatment plans (with the reward based on recovery outcomes). Once trained, these policy networks are deployed directly in the CDU as part of the decision module. The RL module in the ACPU differs from traditional RL in that it is guided by the intent layer and can adjust the reward function online to fit a new goal (e.g., temporarily increasing the weight of safety over speed when safety is suddenly emphasized). 

·Meta-learning: The ACPU may face volatile environments that require rapid adjustment. Meta-learning algorithms enable the system to adapt quickly to new tasks using previous experience. For example, the CDU can use algorithms such as Model-Agnostic Meta-Learning (MAML) to train a decision model whose initial parameters can adapt to new situations with only a few gradient updates. A meta-trained medical decision model can quickly learn how to treat a rare disease from the data of a few new patients, and an autonomous-driving system encountering new national traffic rules can adjust its strategy after a few trials. This kind of rapid plasticity matches the requirement that artificial consciousness adapt to its environment, and it embodies consciousness's ability to learn. 

·Human-in-the-loop: In applications such as smart health, humans can also be brought into the loop to guide reinforcement learning or meta-learning. The ACPU can speed up learning and avoid critical errors by using human expert evaluations as part of the reward, or by having the system consult humans when simulating decisions. This approach has been applied in OpenAI's InstructGPT and elsewhere (reinforcement learning from human feedback, RLHF), and can be developed further in the ACPU so that the formation of AI consciousness fully references human values. 

Through the combination of RL and meta-learning, the ACPU's decision module becomes increasingly capable, with a degree of self-optimization and evolution. For example, an ACPU deployed for smart-grid scheduling may initially be controlled by expert rules, but as reinforcement-learning agents experiment and discover better load-balancing schemes, the CDU can gradually fold these learned policies into its rule base. Meta-learning further ensures that the system can quickly learn new optimal strategies when the grid structure changes. In general, RL gives the ACPU the ability to explore and innovate in complex, dynamic environments, and meta-learning gives it the ability to generalize from one case to another; both align closely with the dynamic adjustment of "wisdom" and "intent" in the DIKWP theory.
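
As one way to picture intent-guided reward adjustment, the following is a minimal sketch of a tabular Q-learning update whose composite reward weights can be changed online by the intent layer. The state and action names and the weight values are hypothetical, and a real ACPU decision module would use a trained policy network rather than a Q-table.

# Hypothetical sketch: intent-layer adjustment of reward weights in a Q-learning update
ACTIONS = ["keep_lane", "merge_left", "slow_down"]
Q = {}                                           # Q-table keyed by (state, action)
intent_weights = {"safety": 0.5, "speed": 0.5}   # set and updated by the intent layer

def reward(outcome):
    # Composite reward whose weights the intent layer can change online
    return (intent_weights["safety"] * outcome["safety_margin"]
            + intent_weights["speed"] * outcome["progress"])

def q_update(state, action, outcome, next_state, alpha=0.1, gamma=0.95):
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward(outcome) + gamma * best_next - old)

# If the intent layer suddenly emphasizes safety over speed:
intent_weights.update({"safety": 0.9, "speed": 0.1})
q_update("dense_traffic", "merge_left",
         {"safety_margin": 0.2, "progress": 0.8}, "merged")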

4. ACPU algorithm flow pseudocode

Based on the above techniques, we give pseudocode for the main ACPU processing loop when performing an artificial-consciousness task, to show how the modules work together:

# ACPU Processing Main Loop (Pseudocode Sample)

initialize_concept_state()                                # Initialize the conceptual space state

while True:
    data_batch = SCU.sense(multimodal_input)              # SCU senses and collects multimodal data
    semantic_repr = SCU.extract_semantic(data_batch)      # SCU extracts semantic representations (Transformer models, etc.)
    concept_updates = SCFU.semantic_to_concept(semantic_repr)   # SCFU mapping activates the corresponding concepts
    concept_state = CDU.update_concepts(concept_updates)  # CDU updates the internal conceptual space (GNN propagation, etc.)
    decision = CDU.reason_and_plan(concept_state)         # CDU performs high-level reasoning and decision-making (logic + RL policy)

    if not CDU.intent_satisfied(decision):                # Check whether the decision is consistent with the current intent
        CDU.adjust_intent(decision)                       # If not, adjust the target intent or generate a secondary intent

    feedback_signal = SCFU.concept_to_semantic(decision.intent)  # Turn the new intent into semantic feedback
    SCU.adjust_focus(feedback_signal)                     # SCU adjusts perceptual focus based on feedback (attention mechanism)
    SCU.actuate(decision.action)                          # SCU executes the decision's action output (control or reply)

    log = ACPU.logger.record(DIKWP_state())               # Record the current state of each DIKWP layer for interpretation/learning

    if mission_complete():
        break                                             # Exit the loop once the task is complete

    time_wait(next_cycle)                                 # Wait for the next cognitive cycle (real-time system beat)

The pseudocode above shows the ACPU's basic working loop. In each cognitive cycle, the system first perceives the environment and extracts semantic features, then converts them into concept-level information for the decision module to reason over; after a decision is made it acts on the environment, and at the same time adjusts its own attention and intent in preparation for the next cycle. This cycle repeats continuously, forming a closed perception-cognition-action loop. Note that in an actual implementation the modules run in parallel and in a pipelined fashion rather than in strict sequence, but the process is logically equivalent to the reciprocating loop above.
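
For readers who want to picture the parallel, pipelined form of this loop, the following is a minimal sketch using Python threads and queues. The stage functions are stand-ins for the SCU, SCFU, and CDU calls in the pseudocode above, and the sentinel-based shutdown is purely illustrative.

# Hypothetical sketch: pipelining the SCU -> SCFU -> CDU stages with threads and queues
import queue
import threading

raw_q = queue.Queue(maxsize=4)        # semantic representations awaiting concept mapping
concept_q = queue.Queue(maxsize=4)    # concept states awaiting CDU reasoning

def perception_stage(n_frames=5):
    for frame in range(n_frames):     # stand-in for SCU.sense + SCU.extract_semantic
        raw_q.put(f"semantic_repr_{frame}")
    raw_q.put(None)                   # sentinel: no more input

def mapping_stage():
    while (semantic := raw_q.get()) is not None:      # stand-in for SCFU.semantic_to_concept
        concept_q.put(f"concepts({semantic})")
    concept_q.put(None)

def decision_stage():
    while (concepts := concept_q.get()) is not None:  # stand-in for CDU.reason_and_plan + actuation
        print("decision for", concepts)

threads = [threading.Thread(target=s) for s in (perception_stage, mapping_stage, decision_stage)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Each stage overlaps with the next, so a new frame can be sensed while the previous
# one is still being reasoned about -- logically equivalent to the sequential loop above.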

Simulation experiments and performance evaluation

To evaluate the performance of the ACPU architecture on artificial-consciousness tasks, we conducted preliminary simulation experiments comparing the ACPU with a traditional distributed CPU+GPU architecture in terms of computational efficiency, response latency, and decision quality. Representative tasks were selected, including real-time dialogue understanding, decision-making in complex strategy games, and multimodal scene recognition, to test the system's capabilities comprehensively.

1. Computational efficiency: At the same process node and power level, the ACPU shows higher throughput and utilization. Because the SCU and CDU are tightly coupled and cooperate through the SCFU, many steps that would otherwise copy data back and forth between the CPU and GPU are eliminated, and the available computing power is used more fully. In the dialogue-understanding task, the ACPU handles about 35% more dialogue turns per second than the traditional architecture. In the strategy-game simulation, the computation time per decision step is reduced by more than 40%, meaning more game branches can be explored and decision quality improved. Overall, the ACPU's DIKWP-driven task scheduling lets the different compute modules each do their own job with little idle time, keeping resource utilization close to its peak. 

2. Response latency: Real-time behavior is essential for artificial-consciousness applications. In our experiments, the ACPU's average response time is significantly lower than that of the traditional architecture. For example, in multimodal scene recognition (such as an autonomous-driving simulation), when the environment changes abruptly (an emergency appears), the traditional architecture must transfer perception results to the decision module over the system bus, whereas in the ACPU architecture the SCU and CDU are directly connected via a high-speed interconnect and the SCFU applies caching and optimization, reducing end-to-end latency to the millisecond level. Specifically, with a 50 fps video stream, a conventional system takes about 50 ms from image acquisition to decision output, while the ACPU shortens this to about 15 ms. Such low latency lets the response of an artificial-consciousness system approach human reaction speed, which matters greatly in safety-critical scenarios. 

3. Decision quality: Measuring the quality of artificially conscious decisions requires multiple dimensions, such as correctness, reasonableness, coherence, and explainability. The experiments combined expert manual evaluation with benchmarking. In the medical-diagnosis simulation, the overlap between the diagnostic plan produced by the ACPU and the experts' recommendations reached 92%, significantly higher than the 85% of traditional AI. More importantly, because of its hierarchy of knowledge and intent, the ACPU scored highly on the explainability of its decisions (assessed by medical experts as clear and reasonable). In complex Q&A tasks, traditional large models occasionally produce inconsistent context or hallucinated "nonsense", but the ACPU greatly reduces such errors through conceptual-layer correction. In addition, in tests of ethical dilemmas (e.g., the problem of whom an autonomous vehicle should avoid hitting), the ACPU's decisions were judged by an independent panel to accord better with widely shared human ethics. This shows that the consciousness architecture not only improves the accuracy of AI decisions but also makes them more humane. 

4. Learning ability: We also observed how the ACPU's performance evolves in continuous-learning scenarios. In a new-domain knowledge test that keeps introducing new concepts, traditional systems need frequent manual adjustment or retraining, while the ACPU can learn incrementally thanks to the independence of its concepts. As the SCFU keeps mapping new semantics into the conceptual space and the CDU expands the knowledge graph, system performance (answer accuracy, etc.) improves gradually along a smooth curve, with no obvious forgetting of old knowledge. This supports the claim that the DIKWP model gives the system the potential for lifelong learning: adding knowledge does not wash out existing knowledge, because each knowledge point exists independently in the conceptual space and is continually calibrated through semantic feedback. 

5. Comparison with the traditional architecture: Comparing the ACPU with a "CPU server + GPU accelerator card" architecture, the ACPU achieves clear advantages in the composite artificial-consciousness score (a weighted combination of the items above). For example, in the smart-healthcare scenario (out of 100, covering accuracy, explanation, and latency), the ACPU averages 90 points while the traditional architecture scores about 75. In the intelligent-driving scenario, the ACPU scores 88 versus the low 70s for the traditional architecture. This indicates that a software-hardware-integrated artificial-consciousness architecture is better suited to complex intelligent tasks. It should be noted, however, that the ACPU's performance advantage comes largely from architectural optimization and task scheduling rather than from greater single-point computing power: on simple arithmetic benchmarks there is little difference between the ACPU and a CPU/GPU of the same specification, but the overall synergy of the ACPU brings a qualitative leap on complex AI workloads. 

In summary, the simulation experiments support the superiority of the ACPU architecture for artificial-consciousness computing: it is not only more efficient and faster, but its decisions are also more in line with human expectations and more interpretable and credible. These advantages are particularly attractive for industrial applications. For example, in robo-advisory systems in finance, quick responses and clear explanations can greatly enhance user trust; in public-safety monitoring, efficient real-time comprehensive analysis can warn of risks in advance and prevent accidents.

Vision for chip integration and the future

As research on the ACPU architecture deepens and initial verification is completed, we look ahead to its possible paths in chip implementation, industrial deployment, and future upgrades:

1. Development of dedicated ACPU chips: In the next few years, products implementing the complete ACPU architecture on a single ASIC are expected. Such a chip would contain an array of CPU cores (for the CDU), a GPU stream-processor array (for the SCU), and specially designed fusion-unit circuits (for the SCFU), along with built-in high-bandwidth memory and an on-chip network. To support the DIKWP model, these chips, or "smart processors", may provide new instruction types or hardware modules, such as accelerators for graph-neural-network computation, rule-matching circuits for logical reasoning, and noise generators for random hypothesis generation (simulating the "BUG" theory). With these innovative circuits, ACPU chips would execute artificial-consciousness algorithms far more efficiently than general-purpose CPUs/GPUs. Once mature, such chips could be widely deployed in robots, large-language-model acceleration devices, intelligent-vehicle controllers, and more. 

2. Industrial-grade deployment: In the short term, in the absence of a dedicated chip, an artificial-consciousness system can be deployed as an ACPU server: high-performance CPUs and GPUs integrated on one server board, connected via high-speed interconnects (NVLink, InfiniBand, etc.), with the ACPU software stack (including the DIKWP operating system and middleware) deployed on top and offered as a whole. This resembles today's AI servers, but with a more optimized architecture and software. Enterprises can use this solution to build an artificial-consciousness cloud service platform providing high-level semantic analysis and decision-making services. For example, on an "Artificial Consciousness as a Service (ACaaS)" platform, developers simply upload environmental data and the platform returns an explained decision recommendation. This would accelerate the adoption of artificial-consciousness technology across industries. 

3. Integration with TPUs/NPUs: Neural-network accelerators such as Google's TPU and Cambricon's MLU already play an important role in AI training and inference. In the future, the ACPU could cooperate with, or even integrate, these dedicated accelerators. For example, a TPU could serve as the deep-model inference engine for the SCU while the ACPU chip handles overall scheduling and logic; or the ACPU design could borrow the TPU's matrix-multiplication units to strengthen its own neural-network processing, so that it can participate in training as well as inference. In addition, programmable hardware such as FPGAs can be combined with the ACPU to quickly harden specific rules or models when needed, making the system field-upgradeable (e.g., adding new algorithms by updating the FPGA logic). Such a multi-hardware collaborative architecture would meet the performance and flexibility requirements of different scenarios and bring artificial-consciousness systems to a new level. 

4. Energy consumption and heat dissipation: Artificial-consciousness computation is complex, and energy consumption cannot be ignored. Fortunately, through specialization and efficient collaboration, the ACPU already eliminates much unnecessary computation and data movement. At the chip level, however, advanced process nodes and 3D packaging are still needed to deliver sufficient computing power within the power wall. Looking further ahead, if technologies such as photonic computing and quantum computing mature, they could be incorporated into the ACPU architecture as units for specific computing tasks to further improve energy efficiency. At the same time, the chip must be designed with sound heat-dissipation and power-management mechanisms so that an artificial-consciousness system can operate stably over long periods (for example, an automotive controller must run around the clock at harsh temperatures). 

5. Safety and ethics integration: As artificial-consciousness systems move toward deployment, their safety and ethical implications become especially important. At the chip level, secure isolation and a trusted execution environment (similar to Arm TrustZone) can be introduced to protect critical parts of the conscious system from malicious tampering, and certain ethical rules can be embedded and enforced in hardware (e.g., never acting beyond defined boundaries). In the future it may also be necessary to add detection and regulation of the "level of artificial consciousness" to such chips to prevent uncontrollable behavior, akin to a "consciousness fuse" that restarts the system or switches it to a safe mode once abnormal states are detected (analogous to human distraction or AI delirium). 

6. Talent and ecosystem: To bring the ACPU and artificial-consciousness chips to maturity, building an industrial ecosystem is crucial. More researchers and developers are needed for algorithm research, software tool-chain development, and application incubation around the DIKWP model. Standardization is also on the agenda, such as establishing DIKWP artificial-consciousness assessment standards that define the indicators systems at different levels of artificial consciousness should meet. Open-source and open platforms can further accelerate ecosystem formation. Perhaps in the near future there will be an open operating system, an "Android for AC", with third-party "consciousness applications" (such as an artificial-consciousness driving brain or an artificial-consciousness medical consultant) running on ACPU hardware. A thriving ecosystem will in turn drive hardware progress, and the two will reinforce each other. 

Conclusion: This white paper has systematically described the architecture and implementation of the Artificial Consciousness Processing Unit (ACPU) based on the subconscious+conscious DIKWP model and the consciousness "BUG" theory proposed by Professor Duan Yucong. Through the networked DIKWP theoretical system, the dual-space mapping mechanism, core-module collaboration, and heterogeneous acceleration, we have shown how a human-like computing unit can be constructed in engineering terms. Combined with practical cases and the integration of cutting-edge algorithms, we have demonstrated the practical value and technical feasibility of the ACPU architecture. The simulation evaluation further shows that the ACPU has significant advantages over traditional solutions in performance and decision quality. Looking ahead, with chip implementation and the growth of the industrial ecosystem, the ACPU is expected to become a foundational unit of the next generation of intelligent systems, advancing artificial intelligence from "perceptual intelligence" to "cognitive intelligence" and opening a new era of artificial-consciousness computing. 



