Current Evolutionary Trends and Physical Foundations of Emerging Computational Theories



Yucong Duan


International Standardization Committee of Networked DIKWP for Artificial Intelligence Evaluation (DIKWP-SC)
World Academy for Artificial Consciousness (WAAC)
World Artificial Consciousness CIC (WAC)
World Conference on Artificial Consciousness (WCAC)
(Email: duanyucong@hotmail.com)


Abstract
This paper systematically reviews several emerging computational theoretical systems, including the "Natural Computing" thought (i.e., "Everything is Computation") proposed by Academician Jingnan Liu, the "Semantic Mathematics" and "Semantic Universe" theories proposed by Professor Yucong Duan, the thermodynamic computing paradigm based on thermodynamic principles and probabilistic bits (p-bits), as well as computational thermodynamics theory and computational models based on physical evolutionary processes (such as Boltzmann machines and Ising models).
First, the introduction clarifies the background and significance of breaking through the traditional Turing paradigm and exploring new computational concepts. Next, the theoretical review section introduces the origins, core content, and representative progress of each theoretical system: Natural Computing asserts that computation is a ubiquitous natural process and that everything can be viewed as computation; Semantic Mathematics attempts to explicitly introduce semantic information into mathematical axiom systems, establishing a multi-level semantic structure from data to "Wisdom/Purpose," and proposes an evolutionary model of the cosmic semantic network; the thermodynamic computing paradigm uses physical thermal fluctuations and energy-dissipation principles to network random bit devices (p-bits) so that probability distributions can be sampled directly; computational thermodynamics studies the energy and entropy constraints on computational processes (such as the Landauer limit); and computational systems based on physical evolution (Hopfield networks, Boltzmann machines, Ising models, etc.) draw on the self-organization principles of statistical physics to solve optimization problems or retrieve stored patterns through the gradual reduction of system energy.
Then, this paper deeply compares the similarities, differences, and intrinsic connections of the above theories across several key dimensions: examining whether their definitions of "computation" break through the classical Turing machine framework, whether they regard actual physical processes as computation rather than merely mathematical simulations, whether they reflect the relationship between semantic hierarchy and information entropy, whether they can provide a more natural or human-like approach to cognitive modeling, and their respective potential for real-world technological transformation. The comparative analysis finds that: the "Everything is Computation" thought expands the scope of computation to all things in nature, emphasizing interdisciplinary computational thinking; the Semantic Mathematics system goes beyond pure symbol manipulation, introducing formal characterization of meaning and providing new methods for evaluating the cognitive level of artificial intelligence; while thermodynamic computing and physical evolutionary computing paradigms break the limitations of the von Neumann architecture, utilizing physical parallel evolution to achieve efficient solutions, and are expected to significantly reduce computational energy consumption.
Finally, this paper looks forward to the role of thermodynamic computing in the future development of artificial intelligence: this type of new architecture has the potential to surpass the traditional von Neumann system, achieving parallel asynchronous computing closer to the brain, approaching thermodynamic limits in energy consumption, and potentially stimulating semantic emergence capabilities due to the introduction of noise and adaptive mechanisms. The article is accompanied by model diagrams [see Figure 1] and other auxiliary explanations, striving to provide a comprehensive and in-depth reference for researchers in related fields.
Introduction
Traditional computational theory is based on the Turing machine model, viewing computation as a deterministic operation on symbols. However, with the development of information technology and artificial intelligence, people have gradually realized that the classical paradigm faces many challenges, including computational performance improvements approaching physical limits, bottlenecks in the cognitive abilities of artificial intelligence, and the excessive power consumption of the von Neumann architecture. In recent years, researchers have begun to reflect on the meaning and implementation pathways of "computation" from an interdisciplinary perspective, proposing a series of new thoughts and paradigms that transcend the traditional Turing model. Among them, some advocate extending the definition of computation, viewing all physical evolution in nature as a computational process; some introduce semantic concepts, attempting to reflect levels of meaning in mathematical and computational models so as to come closer to human cognition; and others return to physical origins, utilizing thermodynamic principles or the evolution of physical systems themselves to perform calculations, in hopes of breaking through the efficiency and functional limitations of current architectures. These theories include the "Natural Computing" thought proposed by Academician Jingnan Liu, namely "Everything is Computation"; the "Semantic Mathematics" and "Semantic Universe" theories proposed by Professor Yucong Duan; the new "Thermodynamic Computing" paradigm based on thermodynamics and probabilistic bits; Computational Thermodynamics theory; and physical computational systems such as Boltzmann machines and Ising models. They reinterpret "computation" from the perspectives of philosophical worldviews, cognitive science, and physics, respectively.
The Purpose of this study is to provide a comprehensive review of the above different theoretical systems and to analyze their intrinsic connections and differences under a unified framework. Specifically, we will explore: how each theory defines "computation" and whether it breaks through the range of classical Turing machine computability; whether they treat actual physical processes themselves as computational behaviors rather than merely simulations of physical processes through algorithms; whether they reflect or discuss the relationship between semantic information hierarchies and information entropy; in terms of cognitive modeling, whether they provide pathways that are more natural or closer to human intelligence; and whether these theories possess potential technical application prospects in engineering implementation. Through such comparative analysis, we hope to outline the possible development directions of future computational paradigms.
The structure of the article is arranged as follows: Section 2, "Theoretical Review," sequentially introduces Natural Computing and the "Everything is Computation" thought, Semantic Mathematics and Semantic Universe theories, the Thermodynamic Computing Paradigm and the foundations of Computational Thermodynamics, and Computational Systems Based on Physical Evolution (such as Boltzmann machines and Ising models). Section 3, "Comparative Analysis," conducts an in-depth comparative discussion of the above theories across five aspects: definition of computation, role of physical processes, semantics and entropy, cognitive modeling, and technological transformation. Section 4 focuses on the potential role of thermodynamic computing in future artificial intelligence, including its advantages in architecture, energy consumption, brain-like intelligence, and semantic emergence, and provides deductive analysis and schematic diagrams. Section 5 summarizes the full text and looks forward to future research directions. Finally, references are listed.
Through the above content, it is expected to provide readers with a systematic understanding of emerging computational thoughts, helping to grasp the essence and interrelationships of different theories, and providing beneficial inspiration and reference for subsequent exploration of next-generation computational paradigms and breaking through artificial intelligence bottlenecks.
Theoretical Review
2.1 Natural Computing Thought: "Everything is Computation"
"Natural Computing" was proposed by Academician Jingnan Liu and others. Its core proposition is to break the traditional narrow understanding of computation and, from a more macroscopic worldview, view computation as a process ubiquitously present in all things in nature. This thought is often colloquially expressed as " Everything is Computation," meaning that all phenomena in the universe can essentially be attributed to some form of computation or information processing. Under this view, computation occurs not only in artificially designed computers but permeates the evolution of physical, chemical, biological, and even social systems.
The "Everything is Computation" thought is influenced by computational thinking concepts including those of Stephen Wolfram. In his works, Wolfram proposed that the universe and various disciplines can be re-examined from a computational perspective, believing that computation is not only a tool in the computer field but also an interdisciplinary mode of thinking that affects our cognition of the world. This view overturns the traditional worldview, emphasizing that various complex phenomena in nature (from physical laws to life evolution, to social behavior) can all be understood through some form of algorithms or computational models. In other words, almost everything we observe and experience can be considered to be evolving according to certain rules or procedures, i.e., performing computation.
The concept of Natural Computing also echoes the philosophical thoughts of Digital Physics or Pancomputationalism. For example, some scholars believe that the universe is essentially similar to a giant computer, where elementary particles and interactions can be viewed as information being calculated and advanced according to physical laws (algorithms). Every natural process (such as planetary motion, gene expression, ecological cycles, etc.) can be seen as a process of information conversion from input via rules to output. Therefore, physical laws are "algorithms," material evolution is "computation," and nature itself acts as a "computer" executing its inherent programs.
The background of Academician Jingnan Liu's proposal of the Natural Computing thought partly stems from his frontier thinking on spatiotemporal information and surveying and mapping science. Driven by the new generation of information technology (IoT, AI, etc.), humans can acquire and process massive amounts of spatiotemporal data, thereby "measuring" all things in nature with computational means. Jingnan Liu points out that the arrival of the intelligent era is leading a "computational society" era—computation is ubiquitous, deeply integrated with various fields, and social production and life increasingly rely on computational drive. This coincides with the concept of "Everything is Computation": it is both a philosophical perspective and technically heralds the arrival of the era of Ubiquitous Computing.
From an implementation perspective, the Natural Computing thought encourages the development of various new computational technologies utilizing natural principles. For example, some research views Natural Computing as a broad concept including evolutionary computation, molecular computation, quantum computation, etc. The ultimate goal of Natural Computing is summarized as: "to present a brand-new mode of thinking for humanity, bringing a true computational revolution," and to develop future molecular computers, biological computers, quantum computers, neural computers, and even 'natural computers'. It can be seen that this concept not only has philosophical implications but also proposes an outlook for the direction of computational technology development, that is, utilizing existing mechanisms in nature (such as DNA molecules, cellular neural networks, quantum states, etc.) to construct computational media.
In summary, Jingnan Liu's Natural Computing/Everything is Computation thought provides us with a worldview from a computational perspective: all laws and phenomena in the world can be understood and simulated from the perspective of computation, and humans should break through disciplinary barriers and use computational thinking to explore the mysteries of the universe. This concept theoretically expands the connotation of "computation," elevating it to a basic category alongside matter and energy; in practice, it inspires new computational paradigms and technologies (such as quantum computing, brain-like computing, as described later), promoting the deep cross-integration of computer science and natural science. It should be pointed out that although this generalized view of computation is grand, it also sparks some discussions: for example, whether all natural processes can be precisely characterized by computable models, and whether there are natural phenomena beyond Turing computability. These issues will be further addressed in the comparative analysis section.
2.2 Semantic Mathematics and Semantic Universe Theory
Semantic Mathematics is a unique theoretical system proposed by Professor Yucong Duan of Hainan University. Its original intention lies in bridging the gap between traditional mathematical symbol manipulation and actual semantic cognition. Traditional mathematics emphasizes formal axiomatization and symbolic deduction, where the meaning behind symbols is usually implicit; the core concept of Semantic Mathematics is to explicitly introduce semantic meanings and their hierarchical structures within the formal system of mathematics. Simply put, it allows mathematical symbols to carry "meaning" while operating, thereby combining symbolic logic with cognitive semantics.
To achieve this goal, Yucong Duan proposed a series of new conceptual frameworks and models. The most important of these is the extension of the classic Data-Information-Knowledge-Wisdom (DIKW) hierarchy model, adding "Purpose" (or Will) at the top layer, forming the DIKWP five-layer semantic model. DIKWP stands for Data, Information, Knowledge, Wisdom, and Purpose. This model assumes that any intelligent cognitive process involves five levels from bottom to top: underlying raw data is parsed into meaningful information, sublimated into knowledge systems, further condensed into wisdom for handling complex problems, and finally made to serve the subject's goal or Purpose. Yucong Duan believes that only by incorporating "Purpose" into the framework can the driving force of cognitive activities be fully characterized, thereby realizing a possible form of artificial consciousness.
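To make the layering concrete, the following minimal Python sketch encodes the five DIKWP levels as an ordered enumeration in which each layer feeds into the one above it. This is only an illustrative data structure (all names here are hypothetical), not Duan's formal axiomatization:

```python
from enum import IntEnum
from typing import Optional

class DIKWPLevel(IntEnum):
    """The five DIKWP layers, ordered from raw data up to purpose."""
    DATA = 1
    INFORMATION = 2
    KNOWLEDGE = 3
    WISDOM = 4
    PURPOSE = 5

def next_level(level: DIKWPLevel) -> Optional[DIKWPLevel]:
    """Return the layer the current layer feeds into, or None at the top."""
    return DIKWPLevel(level + 1) if level < DIKWPLevel.PURPOSE else None

# Walk the hierarchy bottom-up, mirroring how raw data is successively
# refined until it serves the subject's purpose.
layer: Optional[DIKWPLevel] = DIKWPLevel.DATA
while layer is not None:
    print(layer.name)
    layer = next_level(layer)
```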
In terms of theoretical construction, the Semantic Mathematics system includes sub-theoretical frameworks such as "Meaning Definition Theory," "Concept Construction Theory," and "Mathematical Logic-Language Hook Theory." It formally defines the correspondence between symbols and concepts through a set of axiom systems, ensuring that semantic elements (concepts, relations, superordinate/subordinate hierarchies, etc.) are explicitly represented in the mathematical system. For example, the axiomatic system of Semantic Mathematics specifies mapping rules from symbols to real-world concepts, enabling symbolic reasoning to directly affect (or reflect) changes in conceptual semantics. This contrasts with traditional axiom systems such as ZFC set theory: the latter is unconcerned with the real meaning referred to by symbols, while Semantic Mathematics focuses on the actual semantic content behind symbols. Through such design, Semantic Mathematics attempts to merge the rigor of formal logic with the richness of real semantics.
Based on the Semantic Mathematics framework, Yucong Duan's team further proposed the conceptual model of the " Semantic Universe " (also known as the Cosmic Semantic Network). This model assumes that the entire universe can be viewed as a multi-level semantic network, exhibiting a structural evolution of Data-Information-Knowledge-Wisdom-Purpose across different evolutionary stages. For example, in the chaotic state at the beginning of the universe, there was only meaningless data or random information; with the formation of structures and the emergence of life, patterns (Knowledge) and Wisdom gradually accumulated; finally, intelligent life with purpose appeared, endowing the universe with subjective Purpose. Yucong Duan et al. constructed a dynamic evolutionary model of the cosmic semantic network to deduce how information and knowledge structures evolve at different stages of the universe, and to analyze parameters such as information entropy changes and semantic compression degrees. The "semantic compression degree" mentioned here reflects the degree of compression and refinement of low-level information by high-level semantics—for example, a scientific law can condense a large amount of empirical data, which from an information theory perspective is equivalent to reducing entropy. By analyzing the relationship between semantic compression and information entropy, the Semantic Universe model attempts to explain that as the semantic hierarchy increases, the system's disorder (entropy) decreases and meaning density increases, which aligns with the process of the universe producing intelligent life.
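As a toy numerical illustration of this entropy argument (the data, the linear "law," and the histogram-based entropy estimate below are invented for the example and are not part of Duan's model), one can compare the entropy of raw observations with the entropy of what remains to be described once a compact law has been extracted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "empirical data": a simple law y = 2x + 1 plus small measurement noise.
x = np.linspace(0, 10, 1000)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=x.size)

def histogram_entropy(values, bin_width=0.05):
    """Shannon entropy (bits) of the values discretized with a fixed bin width."""
    edges = np.arange(values.min(), values.max() + bin_width, bin_width)
    counts, _ = np.histogram(values, bins=edges)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Describing the raw values directly requires many bits per sample ...
h_raw = histogram_entropy(y)

# ... but once the "law" (slope, intercept) is known, only the small residuals
# remain to be described: the higher-level semantic summary compresses the data.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
h_residual = histogram_entropy(residuals)

print(f"entropy of raw data:  {h_raw:.2f} bits/sample")
print(f"entropy of residuals: {h_residual:.2f} bits/sample")  # much smaller
```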
In addition to the macroscopic Semantic Universe hypothesis, Yucong Duan's team is also actively applying Semantic Mathematics to specific problems in artificial intelligence, especially in Artificial Consciousness and Large Model Cognitive Ability Evaluation. They established a White-box Evaluation System for artificial intelligence based on the DIKWP model, assessing whether the capabilities of AI systems are sound and balanced across the levels of Data, Information, Knowledge, Wisdom, and Purpose. Unlike traditional black-box evaluations that focus on task performance, DIKWP evaluation designs probe tasks to observe how models process information at different semantic levels to judge their "Cognitive Quotient" (cognitive intelligence level). For example, for Large Language Models (LLMs), they test the model's pattern recognition at the pure Data level (e.g., spelling), fact extraction at the Information level, reasoning at the Knowledge level, cross-domain synthesis at the Wisdom level, and whether it can manifest a certain active Purpose. This evaluation framework was released in a 2025 report and has been reported by media such as Science and Technology Daily.
Furthermore, in the research of Artificial Consciousness Systems, Yucong Duan proposed a framework: Artificial Consciousness = Subconscious System (such as LLM) + Conscious System (DIKWP). In other words, large pre-trained models can be viewed as a "subconscious" processing massive data, while a "conscious layer" based on DIKWP explicit semantic reasoning is grafted onto it; the combination of the two constitutes a complete artificial consciousness. This idea has been used in AI prototypes for medical consultation scenarios: they simulate the generation and flow of Data, Information, Knowledge, Wisdom, and Purpose in the brains of both doctors and patients, corresponding external dialogue with internal cognitive processes, and construct a Deep Cognitive Interaction Model. Through DIKWP brain region mapping theory, the status and role of the five concepts in the cognitive process are clarified, and semantic fusion technology is used to handle inconsistent or imprecise information in doctor-patient dialogues. Ultimately, a DIKWP physiological artificial consciousness prototype system was achieved, which can ensure to a certain extent that the AI's semantic understanding and behavior are consistent with humans, providing interpretability for fuzzy semantics in natural language. This proves the application potential of the Semantic Mathematics framework in cognitive modeling and AI interpretability.
In summary, Yucong Duan's Semantic Mathematics and Semantic Universe theories provide a new path for artificial intelligence and cognitive science: introducing semantic dimensions and hierarchies into formal computation, enabling machines to no longer be mere symbol manipulators but to understand the meaning behind data level by level, as humans do. This theory transcends classical computational theory's neglect of semantics, attempting to fundamentally improve AI's understanding of the world and its capacity for autonomous consciousness. Of course, this system is still in the development stage, and some concepts (such as the formal definitions of "Wisdom" and "Purpose") remain to be perfected. However, it provides a concrete framework for discussing the question "Can computation handle meaning?" In the comparative analysis section of this paper, we will further evaluate its characteristics.
2.3 Thermodynamic Computing Paradigm Based on Thermodynamics and Probabilistic Bits
The Thermodynamic Computing Paradigm is a new approach that fully utilizes physical thermodynamic principles (energy dissipation, fluctuations, etc.) to perform computation. The key concept introduced here is the probabilistic bit (p-bit). A p-bit is a computational unit between a classical deterministic bit and a quantum bit; its state is not a fixed 0 or 1, but randomly fluctuates between 0 and 1 with a certain probability. Simply put, a p-bit outputs a random stream of 0s and 1s, where the probability of taking 1 can be adjusted by an input bias. Therefore, a p-bit can be viewed as a controlled random number generator whose output satisfies a specific Bernoulli distribution.
This concept was proposed by Datta and other researchers around 2017, aiming to build a "probabilistic computer" (p-computer). Intuitively, transistors in traditional digital circuits are designed to avoid thermal noise as much as possible to stably represent 0 or 1; whereas in the probabilistic computing framework, we do the opposite—utilizing the thermal noise of components to make them jitter spontaneously and randomly, and then applying bias to make them tend towards a certain state, thereby obtaining a "temperature-controlled random bit." These p-bits can be realized in hardware through special devices, such as thermal fluctuations of magnetic tunnel junctions, random voltage sources in circuits, etc. Recent research has even reported p-bit implementations based on organic polymer memristors: random resistance fluctuations inside the material are converted into binary outputs via simple circuits, generating random bit streams conforming to Logistic probability distributions. This indicates that soft matter systems can also serve as tunable entropy sources to provide p-bits, thereby opening up thermodynamic computing pathways based on polymer materials.
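In the p-bit literature, the device is often idealized as a sign function applied to the sum of a tanh of the input bias and a uniform noise term, so the probability of outputting 1 follows a sigmoid-like curve in the bias. The sketch below is a pure software emulation of that idealization; real p-bits draw the noise term from device physics such as magnetic or resistive fluctuations:

```python
import numpy as np

rng = np.random.default_rng(42)

def p_bit(bias: float, n_samples: int = 10000) -> np.ndarray:
    """Idealized p-bit: each sample is +1/-1, with the probability of +1
    steered by the input bias via a tanh activation (thermal noise is
    modeled here as a uniform random variable on [-1, 1])."""
    noise = rng.uniform(-1.0, 1.0, size=n_samples)
    return np.sign(np.tanh(bias) + noise)

for bias in (-2.0, 0.0, 2.0):
    samples = p_bit(bias)
    print(f"bias={bias:+.1f}  P(+1) ~ {np.mean(samples > 0):.3f}")
# bias=-2.0 -> ~0.02 ; bias=0.0 -> ~0.50 ; bias=+2.0 -> ~0.98
```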
Although a single p-bit is just a controlled random bit, when we connect a large number of p-bits in a certain interconnection structure to form a network, it constitutes a powerful "thermodynamic computer" or Thermodynamic Sampling Unit (TSU). Extropic recently released details of its TSU architecture: the TSU consists of massive parallel p-bit sampling cores, where these p-bits are locally connected to each other, forming a probabilistic graphical model (e.g., equivalent to the energy network of a Boltzmann machine). Unlike traditional CPUs/GPUs executing a series of deterministic instructions, the TSU performs sampling of probability distributions. Its input is the parameter configuration of the energy model, and its output is random samples obeying the probability distribution defined by that energy function. In other words, the TSU hardware directly implements stochastic algorithms like Gibbs sampling at the physical level, corresponding to solving problems of a certain class of probabilistic graphical models. This is very suitable for tasks requiring sampling from complex distributions in generative AI. For example, modern large models need to constantly perform high-dimensional random sampling, which is usually obtained on GPUs via matrix multiplication followed by sampling; whereas TSU can complete this process directly based on the energy function, significantly reducing intermediate calculation steps.
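What a TSU performs in hardware can be emulated in software with a Gibbs sampler over an energy-based model. The sketch below (the network size and couplings are invented for illustration and do not describe Extropic's actual architecture) sequentially resamples binary units from their conditional Boltzmann distributions, the operation that a p-bit fabric carries out physically and in parallel:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny, symmetric, sparsely coupled network of 6 binary (+1/-1) units,
# standing in for the locally connected p-bit fabric described above.
n = 6
J = np.zeros((n, n))
for i in range(n - 1):               # nearest-neighbor chain couplings
    J[i, i + 1] = J[i + 1, i] = 1.0
h = np.zeros(n)                      # no external biases

def gibbs_sample(J, h, beta=1.0, n_sweeps=2000):
    """Sequentially resample each unit from its conditional Boltzmann
    distribution; the chain of states converges to samples from
    p(s) proportional to exp(-beta * E(s)) with E(s) = -1/2 s.J.s - h.s."""
    s = rng.choice([-1.0, 1.0], size=len(h))
    samples = []
    for _ in range(n_sweeps):
        for i in range(len(h)):
            field = J[i] @ s + h[i]          # local effective field on unit i
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
            s[i] = 1.0 if rng.random() < p_up else -1.0
        samples.append(s.copy())
    return np.array(samples)

samples = gibbs_sample(J, h)
print("mean nearest-neighbor correlation:",
      np.mean(samples[:, :-1] * samples[:, 1:]))  # positive: coupled spins align
```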
In such a p-bit-based thermodynamic computer, thermal fluctuations and random noise are no longer enemies but become computational resources. Whereas traditional digital circuits must expend energy suppressing noise to maintain reliability, probabilistic circuits work with the noise, using it to perform probabilistic operations, and thereby promise to greatly improve energy efficiency. On one hand, the TSU adopts a locally connected, compute-in-memory architecture, avoiding the high-power data movement of von Neumann architectures; on the other hand, the randomness of the p-bit itself is driven by environmental thermal motion without extra energy supply, and through clever design the energy consumed by each p-bit to generate a random bit can be far lower than that of traditional true random number generators. It is reported that Extropic's new p-bit design reduces energy consumption by several orders of magnitude compared to existing implementations. Theoretically, thermodynamic computers are expected to bring the energy consumption of each operation close to the physical entropy limit, i.e., the Landauer bound (about $k_B T \ln 2$), because such a machine essentially utilizes environmental thermal energy for computation rather than relying entirely on external work.
As validation, Extropic used simulations to demonstrate the efficiency of its TSU on generative AI tasks: when running a specially designed "Denoising Thermodynamic Model" (DTM, a generative model inspired by diffusion models), the TSU simulation achieved an energy efficiency improvement of about 1 to 4 orders of magnitude compared to GPUs for the same task. They predict a "Thermodynamic Machine Learning Revolution", where researchers develop models and algorithms specifically adapted to TSU, far exceeding existing deep learning in performance per watt. These advances indicate that the thermodynamic computing paradigm has huge potential in the AI field.
In summary, the thermodynamic computing paradigm based on p-bits is a disruptive innovation to traditional computing architectures: it does not aim for complete determinism but introduces controlled uncertainty, allowing computation to be completed through the interaction between the system and the thermal environment. From an information theory perspective, this means the calculation process is accompanied by the generation and exchange of entropy, rather than being a purely logical deduction flow. This paradigm is built on profound physical principles, and its rationality is also supported by computational thermodynamics theory (next section). In the later comparative analysis and outlook, we will discuss in detail how thermodynamic computing breaks through the von Neumann bottleneck, approaches biological intelligence, and reduces energy consumption.
2.4 Computational Thermodynamics: Principle Constraints of Computing and Entropy
Computational Thermodynamics (Thermodynamics of Computing) is a theoretical framework that studies energy dissipation and thermodynamic limits in computational processes. The foundation of this field stems from the exploration of the relationship between Information and Entropy, the most famous result being Landauer's Principle. Landauer pointed out in 1961 that any irreversible logic operation (such as erasing 1 bit of information) must generate heat: at ambient temperature $T$, it must dissipate at least the energy $k_B T \ln 2$, where $k_B$ is the Boltzmann constant and the factor $\ln 2$ comes from the entropy change of one bit of information. At room temperature (about 300 K), this minimum energy is about $2.9 \times 10^{-21}$ joules. Although such minute energy has been verified by experimental measurements, to this day the energy consumed by ordinary computers for each logic operation is still about 5 orders of magnitude above this limit. In Figure 1, the blue line represents the Landauer lower limit on energy consumption, and the red line represents the switching energy achieved by CMOS technology as the process scale decreases. It can be seen that there is still a huge gap between the two. This means that traditional computing hardware still has room for energy-efficiency improvement of tens of thousands of times, but it also implies that we are not far from the physical limit, and that continuing to improve computational performance will be constrained by thermal power bottlenecks.
[Figure 1 image; source: https://www.birentech.com/Research_nstitute_details/18087969.html]
Figure 1: Comparison of Classical Computing Energy Consumption and Landauer Thermodynamic Limit. The blue line represents the minimum dissipated energy for a single-bit operation given by Landauer's Principle (varies weakly with temperature); the red line represents the historical trajectory of CMOS transistor switching energy consumption with process development. It can be seen that despite continuous process improvements, switching energy consumption is still about 5 orders of magnitude higher than the physical lower limit. New paradigms such as thermodynamic computing aim to narrow this gap. (Redrawn based on Biren Technology report)
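The numbers quoted above are easy to verify directly. The snippet below computes the Landauer limit at room temperature and compares it with a representative CMOS switching energy; the 3e-16 J figure is only an assumed order of magnitude for illustration:

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K

landauer_limit = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer_limit:.2e} J per erased bit")
# ~2.87e-21 J, matching the ~2.9e-21 J quoted in the text

# For comparison: a representative switching energy for a modern CMOS gate.
# The value below is only an assumed order of magnitude (0.1-1 fJ range).
cmos_switching_energy = 3e-16   # J
print(f"gap to the limit: ~{cmos_switching_energy / landauer_limit:.0e}x")
# ~1e5, i.e. roughly the "5 orders of magnitude" gap noted in the text
```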
After Landauer's work, researchers expanded the theory of computational thermodynamics. For example, to resolve the famous Maxwell's Demon paradox, Brillouin pointed out in 1951 that the measurement process also consumes energy; in the 2010s, Sagawa & Ueda proposed the generalized Landauer boundary condition including measurement and manipulation processes. They proved that if the role of the observation system in acquiring information is considered, then the lower limit of total energy consumption for erasing 1 bit of information should add the thermodynamic cost paid for observation. These results further rigorously formalized the view that "information entropy reduction must be at the cost of environmental entropy increase," explaining that whether recording information, transmitting information, or erasing information, thermodynamic costs are unavoidable.
Another important aspect of computational thermodynamics is considering the impact of Thermal Fluctuations and Noise on computation. When the device scale enters the nanometer level, bit random flipping caused by thermal fluctuations is difficult to ignore. This is "noise interference" in traditional concepts, but under new thinking, it can be viewed as a useful random source (just like the p-bit idea described earlier). Computational thermodynamics helps us understand: when power consumption is extremely low approaching the Landauer limit, the system will inevitably be frequently affected by fluctuations, so computational schemes capable of tolerating and utilizing fluctuations must be designed. Otherwise, if one blindly attempts to eliminate fluctuations, it will lead to an inability to further reduce power consumption.
In summary, computational thermodynamics provides a principled benchmark for measuring the energy efficiency of any computational process, and points out two paths for designing new computers: one is Reversible Computing, which avoids entropy increase through completely reversible logic and theoretically achieves zero dissipation (in reality there are still losses due to finite speed, etc.); the other is Thermodynamic Computing, which uses, for example, fluctuation-assisted operations to reduce active energy consumption. In recent years there have been explorations in both directions, such as billiard-ball logic gates based on elastic collisions (a model of reversible computing hardware), and the p-bit networks we discussed (fluctuation-driven computing). Computational thermodynamics is thus not an empty theory; its conclusions are guiding engineering practice.
It is worth mentioning that the chip industry's concern about energy consumption has also spawned the concept of the "Thermal Power Wall." Modern high-performance CPUs/GPUs are close to the upper limit of heat dissipation capacity in power consumption, restricting frequency improvements and integration increases. Computational thermodynamics explains this phenomenon from principle and inspires people to look for innovation directions different from increasing clock frequency and instruction parallelism, such as In-Memory Computing, Analog Computing, Quantum Computing, etc., which are all schemes to bypass traditional power consumption obstacles. Thermodynamic computing, as a rookie among them, attempts to directly turn thermal energy into an ally of computation, walking a path that both conforms to physical laws and subverts traditional architectures. The Boltzmann machine and other models introduced in the next section are exactly one of the early results of the intersection of thermodynamics and computing.
2.5 Computational Systems Based on Physical Evolution: Boltzmann Machines and Ising Models
In the history of computational theory development, models inspired by physics have emerged one after another. Among them, the Ising Model, Hopfield Network, and Boltzmann Machine form a series of closely related computational systems. They utilize the tendency of physical systems to evolve spontaneously toward low-energy states to complete computational tasks, and can be collectively referred to as "Physically-Evolved Computing." These models hold an important position in fields such as artificial neural networks and optimization computing.
The Ising Model originates from the work of the physicists Lenz and Ising in the 1920s and was originally used to describe the interaction of spins in ferromagnetic materials. The model consists of a set of spin variables that can only take the values +1 or -1. These spins are arranged on a lattice; aligned neighboring spins lower the energy, while anti-aligned ones raise it. The total energy of the system can be expressed as $E = -\sum_{\langle i,j \rangle} J_{ij} s_i s_j - \sum_i h_i s_i$, where $J_{ij}$ is the coupling strength between spins $i$ and $j$, and $h_i$ is the external field. Naturally, the system tends to evolve toward the spin configuration with the lowest total energy (the ground state). When the temperature is not zero, thermal fluctuations cause the system to occasionally jump out of local low-energy states, giving it a certain probability of escaping local minima and finding the global minimum.
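The energy function above can be written down in a few lines. In the sketch below (a three-spin example with invented ferromagnetic couplings), the fully aligned configuration has the lowest energy, as expected:

```python
import numpy as np

def ising_energy(s, J, h):
    """Total energy E(s) = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i
    for spins s_i in {-1, +1}, symmetric couplings J, and external fields h."""
    return -0.5 * s @ J @ s - h @ s   # factor 1/2 corrects double counting over i, j

# A 3-spin example with ferromagnetic (positive) couplings and no external field:
J = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
h = np.zeros(3)

aligned = np.array([1.0, 1.0, 1.0])
mixed   = np.array([1.0, -1.0, 1.0])
print(ising_energy(aligned, J, h))   # -3.0: the aligned ground state
print(ising_energy(mixed, J, h))     # +1.0: frustrating one spin raises the energy
```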
Professor Hopfield noticed in 1982 that if the spin values ±1 are analogized to the two states of a neuron, and $J_{ij}$ to the connection weights between neurons, then the energy form of the Ising model coincides with the energy function of a simple symmetric associative-memory neural network. He thus proposed the Hopfield Network, a single-layer, fully connected neural network whose nodes take values +1/-1 (or 1/0). Its state update rule is to select a node at random at each moment and flip it in the direction that reduces the total system energy, until no flip can lower the energy further. The energy function of the Hopfield network is usually written as $E = -\frac{1}{2}\sum_{i \neq j} w_{ij} s_i s_j - \sum_i \theta_i s_i$, where the symmetric connections $w_{ij}$ correspond to the Ising model's $J_{ij}$, and $\theta_i$ corresponds to the external field bias. Since the network converges to some local energy minimum, it can be used as a memory: weights are set in advance so that the desired patterns become low-energy valleys; when a partial or corrupted pattern is input, the network dynamics evolve to fill in the missing parts, finally falling into the attractor state corresponding to a stored pattern, thereby realizing associative memory.
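A minimal Hopfield sketch, assuming the standard Hebbian storage rule (the 8-unit pattern below is invented for illustration), shows associative recall from a corrupted cue:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: w_ij = average over stored patterns of x_i * x_j (zero diagonal)."""
    W = patterns.T @ patterns / patterns.shape[0]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, n_steps=100, rng=np.random.default_rng(1)):
    """Asynchronous updates: set a randomly chosen unit to the sign of its
    local field, which never increases the energy E = -1/2 s.W.s."""
    s = state.copy()
    for _ in range(n_steps):
        i = rng.integers(len(s))
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

# Store one 8-unit pattern, then recover it from a corrupted version.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[:2] *= -1                      # corrupt two units
print(np.array_equal(recall(W, noisy), pattern))   # True: associative recall
```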
The Hopfield network demonstrated the possibility of using the energy-minimization principle for computation, but it easily falls into local minima. As shown in Figure 2, a simple Hopfield network might get stuck in a state with energy -4 (a local optimum) instead of reaching the global optimum with energy -5. To alleviate this problem, researchers introduced randomness, forming the Boltzmann Machine. The Boltzmann machine can be seen as a Hopfield network with added "noise": during state updates, each node does not mechanically take the value that minimizes the energy, but instead takes values with probabilities conforming to the Boltzmann distribution. Specifically, if flipping a node from state 0 to 1 changes the system energy by $\Delta E$, then the probability of that node taking the value 1 is $p = 1/[1 + \exp(\Delta E / kT)]$. This is equivalent to introducing a "temperature" parameter $T$ into the network. When $T$ is high, node updates are almost random; when $T$ is low, they tend toward the deterministic optimum. Through a simulated-annealing process of slowly lowering $T$, the Boltzmann machine can avoid freezing prematurely in suboptimal states, as the Hopfield network does, and has a chance of finding the globally lowest-energy state.
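Combining this probabilistic update rule with a slowly decreasing temperature gives simulated annealing. The sketch below applies Boltzmann updates with a geometric cooling schedule to a small random Ising instance; both the couplings and the schedule are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)

def boltzmann_anneal(J, h, T_start=5.0, T_end=0.05, n_steps=5000):
    """Stochastic (Boltzmann) updates with a slowly decreasing temperature:
    at each step a random unit adopts state +1 with probability
    1 / (1 + exp(-2*field/T)), allowing occasional uphill moves that
    let the system escape local minima."""
    n = len(h)
    s = rng.choice([-1.0, 1.0], size=n)
    for step in range(n_steps):
        T = T_start * (T_end / T_start) ** (step / n_steps)   # geometric cooling
        i = rng.integers(n)
        field = J[i] @ s + h[i]
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field / T))
        s[i] = 1.0 if rng.random() < p_up else -1.0
    return s

def energy(s, J, h):
    """Same Ising-style energy as in the previous sketches."""
    return -0.5 * s @ J @ s - h @ s

n = 10
J = rng.normal(0.0, 1.0, size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
h = rng.normal(0.0, 0.5, size=n)
s = boltzmann_anneal(J, h)
print("annealed energy:", energy(s, J, h))
```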
In fact, what the Hopfield network solves is a combinatorial optimization problem: given the connection weight matrix, find the spin configuration  s i  that minimizes energy  E . For example, the Traveling Salesman Problem (TSP) can be transformed into an Ising/Hopfield energy form, letting the network evolve to solve it. Simulated annealing is a classic algorithm, using random flipping with gradual cooling to simulate the physical annealing process, searching for the global optimal solution. The operating principle of the Boltzmann machine is exactly simulated annealing, making it a general stochastic optimization computational model.
Besides being used for optimization, Boltzmann machines also play an important role in machine learning. The Boltzmann machine learning algorithm was proposed by Ackley, Hinton, and Sejnowski in 1985; the Restricted Boltzmann Machine (RBM) restricts the architecture to a visible layer and a hidden layer with no intra-layer connections, which allows it to be trained efficiently and used for unsupervised learning to extract data features. The breakthrough of early deep learning (around 2006) was partly due to stacking RBMs into Deep Belief Networks (DBNs) to pre-train models, showing that physics-inspired Boltzmann principles provided powerful tools for AI.
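A compact sketch of RBM training with one step of contrastive divergence (CD-1, the approximation Hinton later introduced to make RBM training practical) is shown below; the toy binary data are random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_bernoulli(p):
    return (rng.random(p.shape) < p).astype(float)

def cd1_step(v0, W, b, c, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.
    v: visible units, h: hidden units; with no intra-layer connections,
    each layer can be sampled in a single parallel block-Gibbs step."""
    ph0 = sigmoid(v0 @ W + c)          # p(h=1 | v0)
    h0 = sample_bernoulli(ph0)
    pv1 = sigmoid(h0 @ W.T + b)        # reconstruction p(v=1 | h0)
    v1 = sample_bernoulli(pv1)
    ph1 = sigmoid(v1 @ W + c)
    # Approximate log-likelihood gradient: data term minus reconstruction term.
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / v0.shape[0]
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

n_visible, n_hidden = 6, 3
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
b = np.zeros(n_visible); c = np.zeros(n_hidden)
data = rng.integers(0, 2, size=(20, n_visible)).astype(float)   # toy binary data
for _ in range(100):
    W, b, c = cd1_step(data, W, b, c)
```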
It should be pointed out that although the aforementioned Boltzmann machines and the like introduce randomness, they essentially can still be simulated by software on Turing machines. However, their philosophy is more suitable for direct hardware implementation to leverage parallel advantages. The Quantum Annealing Computers (such as D-Wave systems) that have appeared in recent years are physically implemented Ising model solvers: utilizing quantum bit superposition and tunneling effects to efficiently find Ising energy minimum states, capable of solving large-scale combinatorial optimization problems. There are also experimental systems like Optical Coherent Ising Machines, using optical resonator networks to simulate spin interactions, also used for solving combinatorial optimization. These advances indicate that letting the physical process itself act as the calculation process (for example, letting a physical system automatically tend to the lowest energy state, thereby solving an optimization problem) is technically feasible and efficient.
In summary, the Hopfield network, Boltzmann machine, and Ising model introduced in this section represent the thought of Computation Based on Physical Evolution: the system completes computational tasks through self-consistent energy evolution, rather than strictly executing according to predetermined algorithm steps. They break through classical computation's reliance on serial logic, working in a parallel distributed manner, more similar to the brain's information processing method (the brain can be viewed as a complex energy-entropy balance system; there is a conjecture that brain neural activity follows the free energy minimization principle). In the subsequent comparative analysis, we will discuss them together with the aforementioned Natural Computing, Semantic Mathematics, and Thermodynamic Computing Paradigm to see how these theories individually expand the conceptual boundaries of "computation" and whether they echo each other in philosophy.
Comparative Analysis
After reviewing the various theoretical systems, we compare and discuss them from five aspects: 1) whether the definition of the concept of computation goes beyond the traditional Turing model; 2) whether physical processes themselves are viewed as computation; 3) whether the relationship between semantic hierarchy and information entropy is reflected; 4) whether a new path for human-like cognitive modeling is provided; and 5) what the potential for real-world technological transformation is. Through this comparison, we can more clearly understand the similarities, differences, and connections between these theories.
3.1 Scope of Computation Definition: Beyond the Turing Model?
The Traditional Turing Model defines computation as a symbol transformation process executable by a Turing machine, that is, any problem that can be described by a formal algorithm. Its scope is limited to discrete, deterministic calculations. However, most of our theories expand the boundaries of computation to varying degrees:
Natural Computing (Everything is Computation): This thought most radically proposes that computation is not limited to Turing machines or algorithms but is a ubiquitous process isomorphic to natural evolution. From this perspective, even continuous physical processes and random phenomena without clear algorithmic descriptions belong to "computation." For example, the growth of a plant or changes in Earth's climate can be viewed as "computing" forms or patterns according to intrinsic rules. Obviously, this goes far beyond the scope that Turing computation can directly simulate, because Turing machines require discretization and algorithmization, while Natural Computing believes that nature itself can be non-discrete or even possibly non-algorithmic (e.g., chaotic systems might correspond to incompressible complex calculations). Of course, strictly speaking, "Everything is Computation" is more of a philosophical declaration; it does not provide a standard for determining which processes are hyper-Turing. But at least, conceptually, it breaks the shackles of the Church-Turing thesis, allowing for the consideration of the existence of non-recursive computable processes (such as simulations in continuous space possibly having computational power beyond discrete Turing machines). Therefore, Natural Computing is the broadest in terms of computational definition: in the eyes of pancomputationalists, the entire universe is a running computer.
Semantic Mathematics and Semantic Universe: This theory does not directly discuss the issue of computability boundaries, but it expands the types of objects processed by computational models—from pure symbols to mixed objects of Symbols + Semantics. Traditional Turing machines are indifferent to the meaning of symbols, only manipulating syntactic tokens. Semantic Mathematics requires the computational system to understand the concepts represented by symbols and their hierarchical structures. This makes computation not only execute formal logical reasoning but also involve operations like knowledge acquisition and meaning interpretation. This itself does not violate Turing computational capabilities, because theoretically, a Turing machine can also be programmed to simulate some semantic reasoning process. But the problem lies in that when we ask a computer to truly "understand" meaning, classical algorithmic paradigms are often stretched—for example, conventional computers find it hard to flexibly grasp context and metaphors like humans. Does this imply a need for hyper-Turing models? Currently, Semantic Mathematics does not claim to break through Turing computability, but it enriches the working mode of Turing machines: moving from mechanically processing 0/1 bits to processing symbols and conceptual networks with referential meanings. This can be viewed as a " Vertical Expansion " of the definition of computation—introducing new levels and connotations on the same computational power (after all, the bottom layer can still be simulated by algorithms). Some might ask, if a system cognizes through the five layers of DIKWP, is its overall computational power equivalent to some Turing machine model? Strictly speaking, semantic hierarchy does not increase the set of computable problems (still a recursive computable set), but improves the expressive power and efficiency of the computational process. For example, humans solving problems through semantic understanding are far more efficient than pure exhaustion; this is an improvement in "intelligent computation." Although it does not exceed the Turing machine in computational power theory, it presents cognitive performance in actual effects that Turing models find hard to match. In summary, Semantic Mathematics does not pursue hyper-Turing computation, but pursues computational wisdom beyond traditional programming paradigms.
Thermodynamic Computing Paradigm (p-bit): regarding computability, computations using p-bit networks are theoretically still within the Turing simulatable range. Any probabilistic algorithm can be implemented by a Turing machine with random number resources. However, the thermodynamic computing paradigm makes us reflect on some assumptions of the Turing model: Turing machines assume stepwise, controllable operations, while thermal computing is continuous-time, parallel random evolution. Theoretical computer science has probabilistic Turing machines, parallel models, etc., which can correspond to this. But if simulation efficiency is considered, a Turing machine pays a high price to precisely simulate a large stochastic parallel system (exponential states), while a physical system updates countless variables simultaneously "for free." Therefore, there is a view that thermodynamic computing might surpass traditional computers in computational efficiency or computational capability. For example, some speculate that if certain NP-hard problems are mapped to physical simulated annealing, they might be solved faster than known Turing algorithms (though not rigorously proven). From the definition of computation, thermodynamic computing emphasizes computation as a special case of physical evolution, which is actually consistent with the spirit of Natural Computing. Thus, it implies that the Turing machine is not the only/optimal carrier of computation, and the computational potential inherent in physical processes has not been fully captured by classical models. In summary, thermodynamic computing theoretically does not step out of the Turing computability category, but provides a new mode different from discrete Turing machines in computational paradigms, which may approach the effects of certain "analog/continuous" computations, thereby touching upon areas not easily handled by Turing machines (e.g., absolutely precise simulation of continuous systems might encounter computability obstacles for Turing machines, but simulation of physical processes comes naturally).
Computational Thermodynamics: As a principled theory, it does not itself define a new computable class, but it constrains the energy consumption and physical feasibility of the computational process. For example, if solving a problem requires more entropy reduction than the available energy can pay for, it cannot be run at scale physically even if a Turing machine algorithm exists. The emphasis here is more on the concept of Physical Computability rather than logical computability. Therefore, computational thermodynamics extends another facet of the computational concept, namely treating the physical cost of computation as part of its definition. Some scholars even propose the concept of "Thermodynamic Computability," referring to the set of computations that can be completed under a limited energy supply. Generally speaking, computational thermodynamics is not about exceeding Turing capabilities, but about re-examining computation in the physical world.
Boltzmann Machine/Ising Computing: Theoretically, these models do not exceed Turing computable capabilities (in fact, Hopfield networks, etc., are equivalent to a class of restricted computational models). However, they provide new computational paths for simulating NP-hard problems. For example, the Ising model is formally equivalent to the NP-hard Max-Cut problem. Classical Turing machines require exponential time to solve it, but physical Ising devices can find approximate solutions relatively quickly through simulated annealing. Some conjecture based on this that nature might be performing some kind of "quasi-computation" to solve combinatorial difficulties. Although strict hyper-Turing capability has not been proven, these physical computational systems at least redefine the process of computation: from programming-execution to configuration-letting evolve. What changes is not "whether it can compute," but "how to compute."
In summary, in the scope of computational definition, "Everything is Computation" undoubtedly goes the furthest, almost redefining computation = all evolution of nature, transcending the artificial limitations of the Turing model conceptually. Semantic Mathematics broadens computational content but does not significantly change the computability boundary, only making computation more characteristic of human intelligence. Thermodynamic computing and physical evolutionary computing give computational models that are Turing-equivalent but different in style; they imply that there might be specific computational methods more efficient than Turing machines, but strictly speaking, they have not jumped out of the computable set, only possibly exceeding traditional upper bounds of certain complexity classes in efficiency. Computational thermodynamics reminds us to consider the physical reachability of computation, which is another "definition expansion." Therefore, from the narrow question of "whether it is hyper-Turing," these theories currently have not provided proof of strict hyper-Turing capabilities (true hyper-Turing would require solving undecidable problems or hypothetical oracles stronger than Turing machines). But they have each made extensions in computational views: either horizontally generalizing computation to non-artificial processes, or vertically endowing computation with semantic and physical attributes. These extensions are of great significance for the evolution of future computational paradigms.
3.2 Physical Processes as Computation vs. Mathematical Simulation
In traditional computational paradigms, we use abstract algorithms to simulate physical processes (e.g., using numerical integration algorithms to simulate planetary motion on a computer); the physical process itself is not regarded as the computer. Many of the theories we discuss directly equate or use physical processes as computational processes, which is a paradigm shift. Comparisons are as follows:
Natural Computing (Everything is Computation): It most explicitly asserts that physical processes themselves are a type of computation. For example, the movement of planets under gravity can be viewed as "computing" the consequences of the law of universal gravitation; DNA replication within cells is "computing" biochemical reaction rules. This is not merely a metaphor: the claim is that natural processes are literally performing information processing. Accordingly, if we want to exploit the principles of natural computing, we should construct computational devices whose operating mechanism is a natural process, rather than just using computers to numerically simulate natural processes. For example, one can design biological computers that let biochemical reactions directly solve certain problems (DNA computing is an example: a problem is encoded into DNA strands and chemical reactions directly yield the solution, rather than a program being written to solve it). Therefore, Natural Computing encourages "Using Physics as Computation" rather than "Using Computation to Simulate Physics." This distinction is crucial: the former treats the universe as a computer, the latter treats the computer as a tool for depicting the universe. "Everything is Computation" is obviously the former.
Semantic Mathematics/Semantic Universe: Yucong Duan's theory is mainly a Symbol-Semantic Level theory, not directly involving using physical processes for computation. However, in his Semantic Universe model involving the information field of cosmic evolution, it can be understood that the information flow/entropy change of the entire physical universe is also a semantic computational process. However, this is more of a macroscopic analogy rather than providing specific methods to let physical processes act as algorithms. The main implementation of Semantic Mathematics is at the logical reasoning and knowledge processing level, such as cognitive process modeling in dialogue systems. Those still run on computers (simulating human cognition). So strictly speaking, Semantic Mathematics does not incorporate external natural processes into its computational framework. It focuses on how meaning is represented and used within the computational system. This differs from paradigms that directly utilize physical evolution. Therefore, on this comparison point, Semantic Mathematics leans towards Mathematical Simulation/Abstract Level, not emphasizing physical carriers. Even if its artificial consciousness model needs neuroscience reference, it abstracts physical brain mechanisms into semantic processes, rather than really saying to use a biological organism to calculate numbers.
Thermodynamic Computing Paradigm (p-bit): It strongly emphasizes that Physics is Computation. A p-bit network is itself a physical simulator: it uses electronic noise and transistor circuits to realize probability-distribution sampling in real time. Previously, on classical computers, we wrote randomized algorithms to sample; on thermodynamic computing hardware, the electronic devices themselves fluctuate and interact directly, and this process is the computation. There is no need to simulate it again in software, because the calculation has already been completed at the physical layer. For example, using a burst of electronic noise in an analog circuit to solve a stochastic optimization problem is more direct than writing a Monte Carlo program. Therefore, thermodynamic computing fully integrates physical processes into computation, in effect running physical laws as algorithms. Extropic's TSU is typical: it does not program each step of Gibbs sampling, but lets thousands of p-bits fluctuate and interact in parallel in the circuit until they naturally reach an equilibrium distribution, thereby completing the sampling in one go. This is completely different from traditionally discretizing physical processes into instruction sequences. Simply put, thermodynamic computing realizes a dream: replacing theoretical algorithms with real physical processes. It should be noted that this does not exclude numerical simulation; rather, in actual operation you do not need to write lengthy code, you just set up the hardware and let it evolve toward the result itself. It is truly letting "nature calculate for you." Therefore, in this dimension, thermodynamic computing firmly belongs to the Physical Process = Computation camp.
Computational Thermodynamics: As a theory, it only tells us the constraints of physical processes on computation. However, computational thermodynamics also inspires an idea: utilizing physical mechanisms like Reversible Processes and Fluctuation Dissipation to design computation so that its energy consumption is lowest (this is consistent with thermal computing above). If traditional computation focuses on algorithmic correctness, computational thermodynamics makes us focus on the physical implementation of algorithms. This guides us to consider computation from the perspective of physical implementation rather than pure mathematical processes. So computational thermodynamics indirectly promotes the concept of "physical process-type computation." For example, to reduce energy consumption, one can adopt physical simulated annealing instead of complex software simulation, because physical annealing comes with an optimal dissipation path. In general, computational thermodynamics lets computation break away from pure mathematical models and return to physical level analysis, so it can be regarded as a bridge: it won't directly say physical processes are computation, but it emphasizes that any computation cannot be separated from physical processes. Therefore, to a large extent, computational thermodynamics supports the view "Computation = Specific Physical Process," only more from a restrictive angle (if you want to compute, you have to obey physical laws, so why not directly use physical laws to compute?).
Boltzmann Machine/Ising Model Computing: These began as mathematics simulating physics (e.g., Hopfield used algorithms to emulate Ising-style annealing). As noted, physical implementations developed later: D-Wave quantum annealers essentially let superconducting flux qubits carry out the energy relaxation of an Ising model, and optical Ising machines let optical devices realize spin coupling. In these systems, the physical process itself (quantum-state dynamics, laser resonance) takes over the computation of solving combinatorial optimization, marking a shift from "using code to simulate Ising models" to "building a real Ising system to optimize." Similarly, Hopfield networks were once built as electronic circuits in which capacitor charging and discharging corresponded to energy reduction. Physical-evolution computational systems can therefore be either algorithmically simulated or physically implemented, and the paradigm itself encourages looking for natural systems to take on computational tasks—for example, mapping a hard problem onto the ground-state problem of a magnetic material and then letting the material magnetize itself to the ground state to yield the solution—which coincides with the thermodynamic computing idea. One difference is that Boltzmann machines and their relatives appeared in the 1980s, when hardware limitations mostly kept them at the software-simulation level, whereas nanotechnology and quantum devices now allow genuinely physical Boltzmann/Ising calculators. These systems were thus designed from the start to let physical laws solve problems; only the implementation passed through two stages, from "mathematical simulation" to "physical isomorphism."
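As a concrete illustration of the "energy descent = computation" idea at the software-simulation stage, here is a minimal Hopfield associative-memory sketch in Python: patterns are stored with a Hebbian rule, and asynchronous updates, each of which cannot raise the network energy, recover a stored pattern from a corrupted cue. Pattern contents, network size and noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Store two illustrative +/-1 patterns via the Hebbian rule W = sum_p x_p x_p^T (zero diagonal).
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s

# Corrupt the first pattern and let asynchronous updates descend the energy landscape.
s = patterns[0].copy()
flip = rng.choice(n, size=2, replace=False)
s[flip] *= -1                                    # noisy recall cue

for sweep in range(5):
    for i in rng.permutation(n):                 # asynchronous update: each step cannot raise energy
        s[i] = 1 if W[i] @ s >= 0 else -1

print("energy:", energy(s), "recovered:", np.array_equal(s, patterns[0]))
```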
Comprehensive summary: Natural Computing and Thermodynamic Computing clearly belong to the theoretical frontier that treats physical processes themselves as computation. Natural Computing even views all natural evolution as generalized computation, which certainly includes physical, chemical, and biological processes acting directly as information processing. Thermodynamic computing goes further by proposing practical schemes such as p-bit networks, making physical noise part of the computational steps. Boltzmann machines, Ising models, and the like provide examples showing that letting a system spontaneously relax toward its lowest-energy state is equivalent to solving a computational problem: once a suitable physical system is built and allowed to evolve toward minimum energy, the computational task is complete. This is fundamentally different from iterative simulation on a computer: the former exploits genuinely parallel physics, while the latter relies on time-consuming serial simulation. It also explains why some NP-hard problems are hoped to be tackled by quantum-annealing and simulated-annealing hardware—such hardware effectively "borrows the hand of the universe" to process information.
Conversely, Semantic Mathematics is closer to traditional computational modes, only improving symbol structure; it does not provide methods to directly map physical processes (unless in the future brain-computer interfaces incorporate real neural activity into computation). Therefore, in this dimension, Semantic Mathematics represents Abstract Simulation, while Natural/Thermal Computing and Physical Evolutionary Computing represent Direct Physical Computation. Computational Thermodynamics provides theoretical basis and limiting conditions for physical computation.
3.3 Relationship between Semantic Hierarchy and Information Entropy
Information entropy is a physical quantity measuring uncertainty, while semantics is the meaning and structure carried by information. Different theories focus differently on the relationship between Meaning (or Ordered Information) and Entropy (Disorder):
Natural Computing (Everything is Computation): This line of thought does not explicitly discuss the relationship between semantics and entropy; it focuses on the universality of computation rather than on the hierarchy of information content. Broadly speaking, however, if everything in the universe is computation, then the principle of entropy increase can itself be read as part of natural computing: the universe's rising entropy means certain computational processes (such as randomization) are continuously under way. The emergence of life and intelligence, by contrast, is accompanied by local entropy reduction and growing informational structure, which can be viewed as the generation of "semantics" within the natural computing process—DNA sequences or human knowledge, for instance, reduce chaos and carry more highly organized information. Natural Computing itself does not elaborate these phenomena, but its framework can accommodate such discussions. "Everything is Computation" thus offers a perspective—the emergence of semantics may be a stage result of natural computing—yet it lacks the tools to analyze information entropy and semantic hierarchy quantitatively. Overall, Natural Computing does not explicitly address the semantics-entropy relationship.
Semantic Mathematics/Semantic Universe: This is a theory specifically introducing semantic hierarchies, naturally paying great attention to measures like information entropy. For example, when constructing the Cosmic Semantic Network model, Yucong Duan considered variables like Information Entropy and Semantic Compression Degree. His view can be summarized as: Higher levels of semantics correspond to greater information concentration (entropy reduction). For instance, massive discrete data (high entropy) can be summarized into a knowledge law (low entropy, high semantics); Wisdom and Purpose further highly order information to serve a certain goal, and entropy drops further. Semantic compression degree is precisely used to quantify this degree of "expressing more meaning with fewer symbols." On the other hand, information entropy can also be used to measure the uncertainty of an AI system at various DIKWP levels. For example, if the Data layer has high noise, entropy is high; if the Knowledge layer has strong regularity, entropy is low. If a system still exhibits high entropy random behavior at the Wisdom layer, it indicates a lack of true wisdom. In addition, Yucong Duan's team's DIKWP evaluation also involves indicators like Information Completeness, implicitly requiring models to reduce uncertainty at high levels and behave more consistently (which is also a reduction in entropy). Generally, Semantic Mathematics explicitly links entropy with semantics: Semantics is the structuring and compression of information, so the higher the semantic hierarchy, the greater the meaning per bit, and the lower the entropy value. This is also consistent with discussions on the concept of "semantic information" after Shannon: meaningful information in context has lower entropy than raw symbol sequences because meaning introduces constraints.
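As a toy numerical illustration of the claim that higher layers concentrate information (this is not part of Duan's formal axiom system, only an entropy calculation over assumed data), the sketch below compares the Shannon entropy of raw sensor-like readings with that of a coarse categorical summary of the same readings:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def shannon_entropy(symbols):
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# "Data" layer: 1000 noisy temperature readings, many distinct symbols -> high entropy.
data = np.round(20 + 5 * rng.standard_normal(1000), 1)

# "Information/Knowledge" layer: the same readings summarized into three labels -> low entropy.
def label(t):
    return "cold" if t < 17 else "warm" if t < 23 else "hot"
knowledge = [label(t) for t in data]

print("data-layer entropy (bits):     ", round(shannon_entropy(data.tolist()), 2))
print("knowledge-layer entropy (bits):", round(shannon_entropy(knowledge), 2))
```

The summarized layer expresses the same situation with far fewer bits per observation, which is the sense of "semantic compression" used above.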
Thermodynamic Computing Paradigm: This paradigm focuses on energy and fluctuations and usually deals with low-level probability distributions, not semantic concepts. Still, if we treat the frequency of visited states as an entropy measure, a p-bit network samples different states according to Boltzmann weights and thus reflects, to some extent, an entropy drive. To discuss semantics, one would need to ask whether thermodynamic computing can spontaneously generate high-level patterns. In principle, thermodynamic hardware can be used to infer model parameters, which involves some transformation from information to knowledge, but the system carries no explicit semantic markers. A possible connection is this: thermodynamic computing prizes the role of entropy, while semantic emergence is usually accompanied by local entropy reduction, so thermodynamic computing may need to be combined with semantic theory to introduce hierarchical structure and avoid the disorder that a pure entropy drive would produce. To date, the literature on thermodynamic computing rarely touches on semantic issues, so this relationship is not yet reflected within the paradigm.
Computational Thermodynamics: It discusses entropy directly, but at the physical level. It tells us that any reduction of information entropy must be paid for by an increase in environmental entropy, and it makes no distinction about semantics. For example, erasing one bit of information lowers the system's entropy by $\Delta S = k \ln 2$; whether that bit is meaningful data or random noise, the physical cost is the same. In other words, computational thermodynamics does not consider semantics, only entropy values. The lesson is that, from a physical perspective, semantic hierarchy is just one organizational form of information: no matter how high the level, reducing entropy by a given amount always incurs an energy cost, so a wisdom-generating process (a large entropy drop) tends to be accompanied by greater energy consumption to expel entropy. Brillouin further pointed out that measurement, i.e., acquiring information, also dissipates entropy, meaning the process of acquiring meaning has a cost. Computational thermodynamics therefore places a constraint on any semantic theory: forming a meaningful structure (reducing entropy) is never a free lunch; the entropy increase must be exported or traded off elsewhere. This may explain why highly organized life must constantly take in energy from its environment to expel entropy—maintaining semantic structures (DNA sequences, knowledge in the brain) requires energy to resist entropy increase. In summary, computational thermodynamics emphasizes entropy bookkeeping, while Semantic Mathematics emphasizes that entropy reduction corresponds to the refinement of meaning; combining the two reveals that the higher the semantic hierarchy, the farther the system is from thermal equilibrium and the more continuous energy input it needs to maintain itself.
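A worked instance of the bound mentioned above, at room temperature ($T \approx 300\,\mathrm{K}$):

$$
E_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\,\mathrm{J/K})\,(300\,\mathrm{K})\,(0.693) \approx 2.9\times10^{-21}\,\mathrm{J}\ \text{per erased bit},
\qquad
\Delta S = k_B \ln 2 \approx 9.6\times10^{-24}\,\mathrm{J/K},
$$

and this entropy must reappear, at minimum, in the environment regardless of whether the erased bit carried wisdom or noise.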
Boltzmann Machine/Ising Model: In these models the relationship between entropy and ordered patterns or information is used everywhere. The Boltzmann distribution $P \propto e^{-E/kT}$ already encodes entropy effects: the higher the temperature $T$, the flatter (higher-entropy, more disordered) the distribution; as $T$ falls, the distribution concentrates (lower entropy, more ordered). In simulated annealing we start at high temperature, letting the system explore freely in a high-entropy state, then cool gradually until the system settles into an ordered, low-entropy minimum-energy state—the solution. This procedure uses the transformation of entropy to steer the system toward "meaning," where meaning here is the solution of an optimization problem or a stored memory pattern, a state with more informational structure than the initial random configuration. A Hopfield network stores memories by embedding patterns in its weights (reducing entropy) and removes noise during recall to complete a pattern. These models therefore embody the correspondence between entropy and pattern/information: disorder gives way to order, whether by spontaneous evolution or with the help of an annealing schedule, and pattern extraction results. This parallels the idea in Semantic Mathematics, except that semantic theory deals with cognitive meaning while Boltzmann machines deal with statistical patterns; both, at bottom, obtain higher-level informational structure by reducing entropy. The difference is that Hopfield-model patterns do not yet involve semantics (the network simply remembers certain configurations), whereas Semantic Mathematics insists that those patterns be meaningful concepts. A Boltzmann machine can acquire a semantic flavor when applied to meaningful data: training an RBM to learn image features turns the learning process into an entropy-drop-equals-feature-extraction process. When the RBM's hidden layer learns features, the uncertainty (entropy) of the input data falls in the hidden representation, and those features can be seen as capturing a kind of "semantics." For handwritten digit images, for example, the hidden units might correspond to stroke thickness, shape, and similar features—lower in entropy than raw pixels and more "meaningful" to humans. In Boltzmann/Ising-class models, then, the interplay of entropy and ordered information drives the computation and partially realizes semantic extraction, though not yet at the level of linguistic meaning.
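A minimal simulated-annealing sketch of the disorder-to-order process just described, on a small ferromagnetic Ising ring (sizes, schedule and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Ferromagnetic 1D Ising ring: E(s) = -sum_i s_i s_{i+1}. Ordered (low-entropy) ground
# states are all-up or all-down; at high temperature the chain stays disordered.
n = 32
s = rng.choice([-1, 1], size=n)

def aligned_fraction(s):
    return float(np.mean(s == np.roll(s, 1)))          # order parameter: aligned neighbor pairs

temps = np.geomspace(5.0, 0.05, 40)                    # annealing schedule: slow cooling
for k, T in enumerate(temps):
    for _ in range(200):
        i = rng.integers(n)
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % n])    # energy change if spin i flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance rule
            s[i] *= -1
    if k == 0 or k == len(temps) - 1:
        print(f"T={T:5.2f}  aligned-neighbor fraction={aligned_fraction(s):.2f}")
```

At the start the aligned fraction hovers near 0.5 (disorder); after cooling it approaches 1.0, i.e., the low-entropy ordered configuration that plays the role of the "solution" here.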
Comprehensive comparison: Semantic Mathematics is the theory explicitly focusing on the Semantics-Entropy (information disorder and meaning structure) relationship. Although Natural Computing and Thermodynamic Computing do not discuss semantics themselves, the frameworks they provide allow us to view semantics as special information structures and analyze their evolution combined with entropy principles. Computational Thermodynamics constrains the entropy cost that must be paid for semantic formation from the perspective of physical necessity, which is fundamental to any semantic hierarchy model. Boltzmann machines and other models show from a technical level how to implement pattern (semantic) extraction by controlling entropy. It can be said that the Relationship between Semantic Hierarchy and Information Entropy is gradually becoming unified in these theories: from a physical perspective, high semantics = low entropy, requiring energy drive; from an information perspective, low entropy means concise and regular, which is exactly where meaning lies. Therefore, in the future, Yucong Duan's Semantic Mathematics might be combined with thermodynamic computing thoughts to study "semantic evolution with energy consumption constraints"—for example, how intelligent agents optimize their own knowledge (reduce entropy) efficiency under limited energy conditions, etc.
3.4 Implications for Cognitive Modeling: Natural or Human-like Path?
The theories differ significantly in whether they provide new paths for cognitive modeling (especially human-like intelligence modeling):
Natural Computing (Everything is Computation): It was not proposed specifically for cognitive science, but its implicit view is that mind, being part of nature, is likewise a kind of computation. This is consistent with computational theories in cognitive science (e.g., the brain as an information-processing system). However, "Everything is Computation" does not give a concrete path to a brain model; it mostly invites us to view the cognitive process as one link in natural processes. Extended further, Natural Computing encourages using natural complex systems to simulate or implement cognition. Swarm intelligence (ant-colony and bee-colony behavior), for instance, belongs to the Natural Computing family and also offers inspiration for human-like decision-making, with simple agents self-organizing into emergent intelligence. Overall, though, "Everything is Computation" supplies no specialized cognitive architecture, only a conceptual foundation: if even the universe computes, it is not strange that the brain computes, and perhaps we can treat the brain as a special computer or use other natural systems to emulate it. Its role in cognitive modeling is therefore philosophical support rather than a specific technical path.
Semantic Mathematics/Semantic Universe: This is a theory that directly serves artificial general intelligence and artificial consciousness. Yucong Duan explicitly proposes the DIKWP framework to characterize a five-layer cognitive process, which amounts to a model structure of human-like cognition: Perceptual Data → Extract Information → Form Knowledge → Apply Wisdom → Guide Purpose, with a mathematical formal representation for each step. This clearly mirrors the human progression from perception and memory to decision-making; when the brain processes information, it likewise abstracts concepts from low-level sensory signals and then uses them for reasoning and decisions. DIKWP attempts to formalize this, thereby providing a natural hierarchical framework for cognitive modeling (a schematic pipeline sketch follows below). In addition, his artificial-consciousness system framework treats LLMs and similar components as the subconscious and the DIKWP modules as explicit consciousness, which is almost a computational rendering of the Freudian psychological model and is remarkably human-like. The team's simulation of doctor-patient interaction to build a cognitive-interaction model is likewise an effort to model how information flows are integrated inside and outside the human brain. Semantic Mathematics thus contributes significantly to cognitive modeling: it introduces meaning into computational models, for the first time in this form, in order to align with human cognition. Where traditional AI's symbolism and connectionism each have their biases, Semantic Mathematics tries to merge the advantages of both, representing meaning with symbols while also connecting it to deductive reasoning, which gives it originality. On this dimension, Semantic Mathematics can be said to offer a brand-new path for human-like cognitive modeling: constructing AI that approaches human understanding through a formal semantic-hierarchy model. This is precisely its theoretical selling point.
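The skeleton below is only a schematic rendering of this layered flow in Python, not Duan's formal DIKWP operators; the stage functions, names and toy thermostat example are hypothetical placeholders meant to show how each layer consumes the output of the layer beneath it.

```python
from dataclasses import dataclass
from typing import Callable

# Schematic DIKWP-style pipeline: each stage transforms the previous layer's output.
# The stage functions are placeholders, not formal DIKWP operators.

@dataclass
class DIKWPPipeline:
    extract_information: Callable   # Data -> Information
    form_knowledge: Callable        # Information -> Knowledge
    apply_wisdom: Callable          # Knowledge -> Wisdom (decision options)
    guide_purpose: Callable         # Wisdom -> Purpose (chosen goal/action)

    def run(self, data):
        info = self.extract_information(data)
        knowledge = self.form_knowledge(info)
        wisdom = self.apply_wisdom(knowledge)
        return self.guide_purpose(wisdom)

# Toy instantiation: noisy temperature readings -> a heating decision.
pipeline = DIKWPPipeline(
    extract_information=lambda data: sum(data) / len(data),            # average reading
    form_knowledge=lambda avg: "cold" if avg < 18 else "comfortable",  # categorize
    apply_wisdom=lambda state: ["heat", "do nothing"] if state == "cold" else ["do nothing"],
    guide_purpose=lambda options: options[0],                          # pick goal-aligned action
)
print(pipeline.run([15.2, 16.8, 14.9]))   # -> "heat"
```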
Thermodynamic Computing Paradigm: On the surface this has little to do with cognition, since it targets computational hardware and energy consumption. Looked at more deeply, however, the brain itself is a thermodynamic computing machine: it consumes roughly 20 watts to produce consciousness and intelligence, an efficiency far beyond any computer. Neuronal firing is stochastic, synaptic-strength adjustment resembles annealing, and many researchers believe the brain exploits noise and dynamics to achieve efficient, random, parallel computation. This is similar to a p-bit network: neurons are either excited or inhibited and can be viewed as stochastically driven binary units, and synaptic learning may follow some minimum-energy or free-energy principle (such as Friston's Free Energy Principle). In this sense, thermodynamic computing is closer to the brain's way of processing information: compared with von Neumann sequential calculation, the parallel, asynchronous, random updating of a p-bit network looks much more like a neural network, and Hinton long ago drew the analogy between Boltzmann machines and unsupervised learning in the brain. New thermodynamic hardware is therefore likely to be used for brain-like computing. The TSU architecture, for instance, abandons the separation of memory and computation, echoing the brain's "memory is computation" (memories reside in connections, processing happens in the network), and its reliance on local communication to cut energy consumption resembles the brain's predominance of short-range synapses. Most critically, thermodynamic hardware naturally implements probabilistic reasoning, an essential aspect of brain-like cognition (the human brain excels at probabilistic inference in uncertain environments). One can therefore foresee p-bit networks being used for cognitive models or combined with neural networks (for example, by introducing stochastic neurons or softly constrained stochastic networks), potentially achieving better human-like performance than traditional ANNs in robustness and associative ability. In short, although thermodynamic computing was not proposed for simulating cognition, it offers a very "natural" road to brain-like intelligence, because it matches core characteristics of biological computation: parallel, distributed, random, adaptive. If the semantic hierarchy of the previous point can one day be combined with a thermodynamic implementation—say, a p-bit neural network organized along DIKWP levels—the result would possess both semantic understanding and physical efficiency, perhaps not far from the architecture of the human brain.
Computational Thermodynamics: Its direct contribution to cognitive modeling is small, but it offers one inspiration: the brain is a physical system, so its information processing is also subject to thermodynamics. The brain performs on the order of $10^{16}$ synaptic events per second yet consumes only about 20 W, suggesting it may operate as a highly efficient mechanism comparatively close to the Landauer limit. The challenge computational thermodynamics poses to cognitive science is this: either the brain has a very entropy-frugal coding strategy or it exploits something like reversible processes; otherwise so much computation could not be completed at such low energy cost. The theory thus motivates us to understand cognition from the perspective of energy consumption—for example, whether the brain cuts unnecessary computation through probabilistic prediction (saving entropy), or whether memory storage is energetically expensive while retrieval is cheap. Such analyses sit at the intersection of cognitive neuroscience and information theory and are still in their infancy.
Boltzmann Machine/Ising Model: They have certain historical significance in Human-like Intelligence Modeling. The Hopfield network was originally viewed as a human-like memory model: distributed storage, content-addressable extraction, which exactly fits certain properties of human memory in psychology (such as memory association and noise resistance). Boltzmann machines simulate neural networks through random activation, and were used by Hinton et al. to explain some hypotheses of cerebral cortex function. Although deep learning later took the non-probabilistic ReLU network path, recently some people have re-mentioned "there may be an energy minimization principle in the brain, related to the new version of Hopfield models." At the same time, Ising-class models are also used to explain the statistical properties of neuron population firing. Therefore, physical evolutionary computing models were inspired by Brain Cognition from the beginning and in turn inspired cognitive science: Hopfield proved that "memory can be a global network attractor state instead of locally stored," changing our view of memory storage; Boltzmann machines demonstrated possible mechanisms for unsupervised learning to extract features, which corresponds cognitively to how the human brain learns abstract concepts from perception. So these models provided prototypes for cognitive modeling, making people believe that perhaps cognition is large-scale parallel energy optimization. Today, the energy minimization thought is reflected in Friston's "Free Energy Principle" theory, believing that the brain perceives and acts by minimizing (in the Bayesian sense) free energy. This coincides with Boltzmann thought. So, it can be said that physical evolutionary computational systems have a profound impact on human-like cognitive modeling. Although specific models are not complex enough, they introduced concepts of nonlinear dynamic systems, energy/entropy to understand the brain, expanding the scope of classical symbolic AI.
Summary: Semantic Mathematics contributes most directly to cognitive modeling—proposing a systematic framework for human-like intelligence. Thermodynamic Computing is closer to the brain in implementation method, promising to be the hardware basis for building brain-like AI. Natural Computing provides philosophical background letting us accept cognition is also natural computing, thereby promoting interdisciplinary thinking. Boltzmann machines etc. provide model paradigms proving that some human-like functions can be realized through physical-style networks. It can be said that these paths lead us to Brain-like Intelligence from different sides: Semantic Mathematics solves the problem of "understanding meaning," Thermodynamic Computing solves the problem of "large-scale low-power computation," and physical network models solve the problem of "associative memory and autonomous learning." If these can be merged, building truly human-like cognitive machines in the future will be more promising.
3.5 Potential for Real-world Technological Transformation
Theories ultimately need to be applied. Their technical feasibility and prospects are also important aspects of evaluation:
Natural Computing (Everything is Computation): As a concept, its impact on technology is indirect. "Everything is Computation" has inspired many "unconventional computing" attempts, such as molecular computing, DNA computing, biological computing, and quantum computing. These technologies have indeed advanced over the past few decades: DNA computing can solve certain combinatorial problems, quantum computers have been prototyped at the scale of dozens of qubits, and biological-cell computing is used for complex sensing. All of these fall within the scope of Natural Computing (computing with natural media and mechanisms) and can be seen as products of the "Everything is Computation" idea. Outside such specific fields, however, "Everything is Computation" does not by itself yield any particular new device. It acts more like a compass—for example, helping academia and industry recognize that future computing may be ubiquitous and permeate every sector (the rise of the IoT is, in part, a manifestation of "everything networked/computing"). From Jingnan Liu's perspective, the BeiDou navigation and spatiotemporal big-data projects he has led also embody ubiquitous computing (combining sensor networks, communication networks, and computing services) in industry, though strictly speaking these are information-technology trends that happen to coincide with Natural Computing concepts rather than direct applications of a new theory. In terms of transformation potential, then, Natural Computing matters as a forward-looking idea but supplies no specific technology of its own; its "transformation" depends on the development of other technologies. When quantum computing is successfully commercialized, for instance, one may say this validates the foresight of Natural Computing. Overall, "Everything is Computation" has boundless potential but limited immediate implementation.
Semantic Mathematics/Semantic Universe: This set of theories has preliminary application exploration, mainly in Artificial Intelligence Software and Evaluation. For example, the DIKWP white-box evaluation system has been used to assess the "Cognitive Quotient" of large models; the team developed a medical dialogue artificial consciousness prototype. These can be considered prototypes of technological transformation, but are not yet mainstream. If Semantic Mathematics is to be widely applied, it may need to be integrated into existing AI frameworks, such as enhancing the semantic hierarchy modules of deep learning models. Currently, deep learning mostly stays in the Data -> Representation -> Task flow, without explicit Knowledge, Wisdom layer modules. If Semantic Mathematics matures, Hybrid Intelligent Systems may appear in the future: the bottom layer uses neural networks to process data and low-level information, the middle layer uses knowledge graphs/symbolic systems to process knowledge and wisdom, and the top layer has human guidance Purpose, forming a DIKWP structure overall. Such a system would be more controllable, interpretable, and functionally rich than pure deep learning. This can be seen as the long-term technical landscape of Semantic Mathematics. Additionally, Semantic Mathematics has guiding significance for Artificial General Intelligence (AGI), because AGI needs to understand and reason about knowledge in various fields, which is exactly what Semantic Mathematics is good at describing. Currently, some AI companies have begun to pay attention to the cognitive ability evaluation of large models, and it is foreseeable that semantic evaluation standards (similar to IQ but targeting AI's "Purpose, Wisdom" indicators) will be adopted. Therefore, Semantic Mathematics has great technical transformation potential, but high threshold: it requires breaking through existing machine learning paradigms and introducing new cognitive frameworks and training methods, which cannot be done overnight. Also, for the industry to accept semantic layer modeling, it needs to be proven that it brings performance improvements or safety enhancements. Currently, the black box problem of large models is prominent, and semantic white-box evaluation and improvement may have a market. In summary, the Semantic Mathematics system is expected to play a role in AI software, evaluation systems, and future human-like AI architectures, and its potential rises with increasing AI demand.
Thermodynamic Computing Paradigm (p-bit): This is a hardware direction with considerable practical prospects. The tension between the slowing of Moore's Law and the explosive demand for AI computing power is acute, and new computing architectures are urgently needed. Thermodynamic computing fits this need well, using analog hardware and non-traditional bits to cut power consumption and increase parallelism. Several prototypes have appeared in recent years: the organic-memristor p-bit devices mentioned earlier, spintronic p-bit chips (Purdue University demonstrated an 8-bit p-computer), and companies such as Extropic attempting to manufacture commercial TSU chips. Judging by this progress, p-bit technology is still less mature than quantum computing but more mature than brain-computer interfaces—a direction likely to show visible results in the medium term. Once chips with hundreds or thousands of p-bits are built, they may hold order-of-magnitude advantages over GPUs/CPUs on specific tasks (random sampling, combinatorial optimization, probabilistic reasoning). In application, p-bit hardware can first serve as an acceleration co-processor for classical computing: adding a p-bit array card to an existing server to execute heavy stochastic workloads such as Monte Carlo simulation and Bayesian-network inference, generating sample distributions for the host CPU to read, much as GPUs accelerate matrix operations (an illustrative sketch of such a workload follows below). In the long run, if the TSU can be scaled and generalized, it might even become a new category of computer (the p-computer), replacing von Neumann machines in some scenarios. IoT edge devices that must run simple AI algorithms at low power could use small p-bit network hardware for basic intelligent sensing at near-negligible energy cost; likewise, the sampling/decoding stage of large models is time- and energy-intensive, and a TSU could optimize it substantially. The thermodynamic computing paradigm therefore offers specialized acceleration in the short term and potentially disrupts general computing in the medium to long term. Its challenges lie mainly in manufacturing (integrating stochastic devices with CMOS) and in programming paradigms (new software tools are needed for developers to use p-bit hardware), but existing open-source tools such as Thrml, which simulates the TSU, show that the community is beginning to build this ecosystem. We therefore have reason to believe thermodynamic computing is technically feasible, with practical products expected within 5-10 years; among the theories compared here, it may be the closest to industrialization.
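The sketch below shows, on the CPU, the kind of stochastic workload such a co-processor would target: brute-force Bayesian inference by sampling. The toy data and sample counts are assumptions; on a p-bit array, the inner sampling would be carried out by hardware fluctuations, with only summary statistics returned to the host.

```python
import numpy as np

rng = np.random.default_rng(4)

# Approximate Bayesian inference by brute-force sampling (rejection sampling).
# Here the CPU draws every sample; a p-bit co-processor would replace this inner loop.
observed_heads, flips = 7, 10                            # toy data: 7 heads out of 10 flips

candidate_bias = rng.random(200_000)                     # samples from a uniform prior
simulated_heads = rng.binomial(flips, candidate_bias)    # simulate data for each candidate
posterior = candidate_bias[simulated_heads == observed_heads]   # keep candidates matching the data

print("posterior mean bias:", round(float(posterior.mean()), 3))
print("accepted samples:   ", posterior.size)
```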
Computational Thermodynamics: As a theory, it will not directly transform into a product, but it guides efficient computing design. For example, the rise of Near-Threshold Low-Power Circuits and Reversible Logic Gates research in recent years draws on computational thermodynamics thoughts. There are also energy-adaptive computing (adjusting algorithms under power limits), etc., which also have thermodynamic analysis behind them. Especially in quantum computer development, thermal noise and entropy are attached great importance, and computational thermodynamics is one of its basic theories. It can be said that computational thermodynamics invisibly permeates chip design and low-power computing fields. Its "products" are various Energy-saving Computing Technologies, such as sub-threshold voltage technology, thermal capacitance computing, etc. Further away, if a computer near the Landauer limit is truly realized, that would be the ultimate victory of computational thermodynamics, but that is still far away. In short, its contribution leans towards theoretical guidance and is not specifically unfolded.
Boltzmann Machine/Ising Model Computing: Technical transformation in this area is already happening. Quantum Annealing Computers (D-Wave series) offer cloud services commercially. Although there is controversy compared to general quantum computing, there are indeed clients using it to solve optimization problems. Digital Annealing (such as Fujitsu Digital Annealer) uses FPGA to simulate Ising models, and is also applied in fields like financial optimization. Optical Coherent Ising Machines once solved combinatorial optimization with tens of thousands of variables, beating traditional algorithms at amazing speeds (though on specific problems). In AI, Restricted Boltzmann Machines (RBM) were once widely used as deep learning modules. Although they have faded out in recent years, they have received attention again in scenarios like quantum machine learning. There is also new research integrating Hopfield network principles into Transformer models to improve associative memory capabilities, called Modern Hopfield Networks. These all indicate that the concepts of physical evolutionary computing are continuously transforming into practical tools. In the future, perhaps there will be specialized Ising Chips for combinatorial optimization in logistics scheduling, chip wiring, etc. RBMs etc. might find new life in Generative Models (because with the rise of diffusion models and energy models, people might use RBM to pre-train diffusion processes). Additionally, with the maturity of p-bit hardware, we can even implement Boltzmann machine networks directly at the hardware layer instead of simulating, which will significantly accelerate unsupervised learning etc. The foreseeable prospect is: Simulated annealing will serve as an algorithmic service, widely available on hardware and cloud, and large-scale combinatorial optimization problems will be handed over to these specialized platforms instead of being hard-solved by general CPUs. Physical evolutionary computing moving from the lab to commercial use already has some success stories, the aforementioned D-Wave, Fujitsu being milestones. Therefore, its technical transformation potential has been partially realized and is still growing.
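For readers unfamiliar with the "modern Hopfield" idea mentioned above, the following minimal sketch applies the attention-like retrieval update $\xi \leftarrow X\,\mathrm{softmax}(\beta X^{\top}\xi)$ reported in that line of work to recover a stored pattern from a noisy query; the dimensions, $\beta$ and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# One retrieval loop of a continuous "modern Hopfield" memory:
#   xi_new = X @ softmax(beta * X.T @ xi), with stored patterns as the columns of X.
d, num_patterns, beta = 16, 5, 4.0
X = rng.standard_normal((d, num_patterns))        # stored patterns (columns)

target = X[:, 2]
xi = target + 0.4 * rng.standard_normal(d)        # noisy query near pattern 2

for _ in range(3):                                # a few retrieval iterations
    xi = X @ softmax(beta * (X.T @ xi))

cosine = X.T @ xi / (np.linalg.norm(X, axis=0) * np.linalg.norm(xi))
print("retrieved pattern index:", int(np.argmax(cosine)))   # expected: 2
```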
In summary: Natural Computing is grand but loosely implemented; Semantic Mathematics is highly innovative but slow to show results in the short term; Thermodynamic Computing has high compatibility with technology and market, developing rapidly; Computational Thermodynamics acts as basic assurance; Physical Evolutionary Computing is already bearing fruit. We can see that some theories are more like long-term blueprints (Natural Computing, Semantic Universe), and some have spawned new paradigms (Thermodynamic, Ising Computing). Thermodynamic Computing and Physical Evolutionary Computing are currently the directions most hopeful to bring hardware performance revolutions, while Semantic Mathematics is expected to lead a new paradigm in AI software, paving the way for true strong artificial intelligence. Ideally, these three (and Natural Computing concepts) can combine: future intelligent systems will adopt thermodynamic and physical parallel computing architectures in hardware, and integrate semantic hierarchies in software algorithms, making machines possess both high energy efficiency and deep cognitive capabilities. This might be the key to breaking through the limitations of current artificial intelligence and achieving human-like intelligence.
Outlook on the Role of Thermodynamic Computing in Future Artificial Intelligence
Based on the above comparative analysis, and especially in view of the conceptual and technical advantages of the Thermodynamic Computing Paradigm, this paper offers a comprehensive judgment and outlook on its role in the future development of artificial intelligence (AI). We consider it along several dimensions—beyond-von-Neumann architecture and brain-like intelligence, energy-consumption optimization, and semantic emergence capabilities—and illustrate possible implementation forms with a model diagram.
4.1 Beyond von Neumann Architecture, Achieving Brain-like Intelligence
Since the birth of the modern computer, the von Neumann architecture has been dominant, but its separation of storage from processing and its sequential control are far from the way the human brain works. As AI demands ever more large-scale parallel computing and real-time learning, the von Neumann bottleneck has become increasingly obvious (e.g., the "memory wall" forces GPUs to spend a large share of their power on data movement). Thermodynamic computing offers a new path beyond the von Neumann structure: a distributed architecture in which storage and computation are fused. In the TSU (Thermodynamic Sampling Unit) there is no separate memory and CPU; information is stored probabilistically in the connection weights of the p-bit network, and computation is completed in parallel through the mutual influence of the p-bits. This architecture closely resembles the brain's neural network—synaptic strengths hold the memory, neuronal firing processes information and affects neighbors, and there is no centralized control clock.
Specifically, Brain-like Intelligence requires processing asynchronous, fault-tolerant, context-related cognitive tasks. The random parallel update of thermodynamic computing matches the brain's asynchronous neural firing, both requiring no global synchronization clock. p-bits only interact locally, somewhat like local connection clusters of neurons. Even better, the brain utilizing randomness (e.g., indefinite synaptic release probability) may be useful in exploring state spaces and preventing behavioral limitations, and thermodynamic computing naturally supports random exploration. It is conceivable that a sufficiently large p-bit network, if using a brain-like topology connection (e.g., hierarchical modules, small-world networks, etc.), can execute perception, memory, and decision tasks by adjusting connections (learning), similar to neural network functions. And because there is no redundant instruction control and storage movement in hardware, the efficiency of such a system would be far higher than simulating a neural network on a von Neumann machine.
Thermodynamic computing therefore surpasses the von Neumann design at the architectural level: it does not rely on faster clocks and wider buses for speed, but changes the computational paradigm to exploit parallelism and physical processes to the full, which is precisely the essence of brain-like computing. In fact, beyond p-bits, many neuromorphic chips (such as IBM TrueNorth and Intel Loihi) also fuse memory and computation, but they mainly imitate deterministic neural firing; p-bit networks add dynamic thermal noise, which is closer to biological reality and gives the system greater adaptability. Putting these together, we expect future AI hardware to take hybrid forms—CPUs/GPUs handling deterministic calculation while TSUs and other thermodynamic accelerators handle probabilistic reasoning and learning—combining into a new generation of computing platforms that support more advanced AI applications.
4.2 Energy Consumption Optimization and Performance Improvement
Energy consumption is one of the key constraints for large-scale AI application. Currently, training a large deep model consumes huge amounts of electricity, and data center cooling costs are high during deployment inference. The introduction of thermodynamic computing is expected to significantly reduce the energy consumption ratio of AI computing power. There are two reasons: first, thermodynamic computing reduces long-distance transmission of data signals through local communication (which is a major energy consumption item in digital chips); second, utilizing environmental thermal noise for computation to some extent "for free" borrows environmental energy, thus reducing the device's own energy consumption.
From the perspective of Landauer's principle, today's digital AI computation is nowhere near the thermophysical limit: the energy actually consumed by a 32-bit multiplication may be tens of thousands of times higher than the Landauer floor for handling those 64 bits of operands. Inside the TSU, by contrast, many computational steps are not implemented by logic-gate switching but realized in analog form (for example, a capacitor discharging past a threshold triggers a p-bit flip). Such analog processes can operate at low voltage and tolerate noise, with no need to drive hard transistor switching at every step, so the energy per state update can be very low. Extropic claims that its all-transistor p-bit design consumes orders of magnitude less energy per random bit than earlier designs, and that if an entire algorithm is executed on the TSU, overall energy efficiency can improve by three to four orders of magnitude compared with a GPU. That would mean tasks that currently demand heavy power draw could run on mobile devices, or that large models could be run within power budgets that today only support much smaller ones.
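Extending the earlier room-temperature figure, the Landauer floor for the 64 operand bits of such a multiplication is roughly

$$
64 \times k_B T \ln 2 \approx 64 \times 2.9\times10^{-21}\,\mathrm{J} \approx 1.9\times10^{-19}\,\mathrm{J},
$$

a fraction of an attojoule, while the picojoule-scale energies typical of today's digital arithmetic sit many orders of magnitude above this floor—headroom that the TSU's analog, noise-tolerant updates aim to exploit.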
Another benefit brought by energy consumption optimization is Heat Suppression, thereby allowing more computing units to be stacked in limited space (because heat dissipation is not a bottleneck). One of the ideal blueprints for brain-like computing is to imitate the brain's density: the human brain has about a hundred billion neurons and a quadrillion synapses in a volume of less than 1500 cubic centimeters, with a power consumption of only 20W. This puts human engineering to shame, but thermodynamic computing may narrow the gap. If p-bits and connection weight units are also manufactured at the nanometer scale, and do not require strong drive currents, density and energy consumption can be significantly improved. Perhaps in the future, a TSU chip the size of a fingernail will be equivalent to the computing power of a cabinet full of GPUs today, while power consumption is only at the level of an LED bulb. This is very attractive for Edge AI and Wearable AI.
In summary, Thermodynamic Computing is an important path to the "Green Revolution" of AI computing power. It brings us back to the most basic connection between computation and physical energy consumption, making AI computing power growth no longer simply dependent on Moore's Law but on physical innovation. It is foreseeable that in a future with increasingly strict requirements for carbon neutrality and energy efficiency, whoever masters thermodynamic computing technology will gain an advantage in AI infrastructure. This is also one of the motivations for major chip manufacturers to start investing in brain-inspired and new device computing. We believe that the promotion of the thermodynamic computing paradigm will make "Energy efficiency of quadrillions of operations/J" AI systems possible, thereby making AI implementation a reality in wider fields.
4.3 Semantic Emergence and Autonomous Intelligence
Semantic emergence refers to a complex system spontaneously generating meaningful structures or behaviors. The uniqueness of human intelligence lies in the ability to emerge abstract concepts and meanings from disordered perception. We are concerned whether the physical-driven computational system of thermodynamic computing has the potential for semantic emergence, thereby pushing AI towards more autonomous intelligence.
Although thermodynamic computing itself does not process high-level semantics, its characteristics—Randomness, Parallelism, Adaptability—are exactly the foundation for many self-organizing systems to produce emergent phenomena. For example, in evolutionary algorithms (genetic algorithms, etc.), random mutation and selection can emerge solutions adapted to the environment; in neural networks, adding noise training sometimes results in better generalization. Thermodynamic computing can be seen as implementing a dynamic of continuous perturbation and equilibrium at the hardware layer, which is very similar to natural self-organization processes, such as spatial patterns appearing in chemical reactions, or ecological systems producing synergistic behaviors.
More specifically, if we use thermodynamic computing to train generative models or reinforcement learning agents, we might observe richer behavior patterns. Extropic has already proposed Denoising Thermodynamic Models (DTM) as generative models. These models generate data starting from noise, somewhat like diffusion models. Unlike traditional deterministic networks, DTM runs on TSU, meaning physical randomness participates in every generation. The possible result is: The model will emerge diversified and creative outputs, because physical randomness provides a true random source rather than pseudo-randomness, allowing the system state space to be more comprehensively explored. At the same time, TSU hardware allows sampling of a large number of samples simultaneously during training, which may promote the model to discover implicit patterns (semantic concepts). For example, in unsupervised learning of images, perhaps TSU can cluster semantic features faster because the network can use fluctuations to jump out of bad local extrema, thereby finding globally better abstractions.
From another angle, in cognitive architecture, if upper-layer semantic modules (such as the DIKWP model) can reside on a thermodynamic computing foundation for execution, then the existence of Noise and Entropy might endow the system with a certain "spontaneity." Human thinking is not completely algorithmic; there are often non-deterministic processes like inspiration and creation, which may be related to the brain's chaotic dynamics. Brain-like thermal computing systems, due to inherent randomness, might similarly produce some unexpected new ideas. Of course, this is currently a conjecture and requires experimental observation based on prototype systems to determine. But at least, we see Hope: thermodynamic computing provides a mechanism for emerging complex structures from the bottom up for AI by simulating physical evolution. If combined with semantic hierarchy models, intelligent emergence might be achieved: underlying p-bit networks form stable attractors representing concepts, higher layers express reasoning through energy interaction between networks, all happening self-consistently under physical laws, then true autonomous intelligence is on the horizon.
4.4 Model Diagram: Blueprint for Future Brain-like AI Systems
To understand the above fusion more intuitively, we drew a Conceptual Model of a Future Brain-like AI System in Figure 2, which combines thermodynamic computing hardware with semantic hierarchical software to achieve high-efficiency autonomous intelligence.
[Model Schematic (Figure 2)] The upper part is the traditional computer module, and the lower part is the thermodynamic computer module. The two are connected via an interface. The traditional part (Host) is responsible for interaction with the outside world, human control, etc.; the thermodynamic part (Co-processor) connects directly to the environmental potential of the real world (sensor input) and a controllable heat bath (providing required fluctuation noise). Inside the thermodynamic computer, the DIKWP semantic structure is combined with the p-bit network: Data and Information layers are processed by perceptual units, the Knowledge layer forms a conceptual semantic graph by Boltzmann machine networks, the Wisdom layer makes decision optimization through Hopfield-like networks combined with knowledge, and the Purpose layer determines goals by a global energy evaluation module. All units are architected on TSU hardware, running in a locally connected manner; when necessary, the Host can apply external potential for constraints (equivalent to setting problem conditions).
When working, external sensor input (e.g., camera image) acts on the system as environmental potential, equivalent to prescribing bias for certain p-bits, adding data terms to the energy function. Under the action of heat bath noise, the system spontaneously fluctuates and evolves: in a short time scale, node states are quickly adjusted to adapt to input (corresponding to perception and initial reaction, node charge changes satisfy thermal fluctuation equilibrium principles); in a longer time scale, connection weight structures are slowly updated (corresponding to learning and memory formation, realizing knowledge accumulation through irreversible weight adjustment processes). Throughout the process, thermal fluctuations are no longer interference but one of the driving forces. When local thermal equilibrium is reached, the structure does not change, only states adjust; when locally far from equilibrium, structural adaptive changes are triggered. Cycling like this, the system undergoes a series of equilibrium-dissipation-re-equilibrium processes, gradually integrating environmental input into internal representation and minimizing free energy (energy). The result is: With energy dissipation, the entropy produced by the system tends to be minimized, so finally the system state stays in a relatively ordered (low entropy) configuration, corresponding to finding the best explanation for the input or the corresponding action.
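One way to emulate in software the two-timescale dynamics just described (fast state relaxation under clamped input, slow structural adaptation) is a Boltzmann-machine-style contrastive loop, sketched below. The network size, learning rate and toy "environmental inputs" are assumptions, and real TSU hardware would replace the inner Gibbs sweeps with physical fluctuation.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 6
W = np.zeros((n, n))                        # slow variables: connection weights ("structure/memory")
data = np.array([[ 1,  1,  1, -1, -1, -1],  # toy "environmental inputs" to internalize
                 [-1, -1, -1,  1,  1,  1]], dtype=float)

def gibbs_sweeps(s, W, sweeps, beta=1.0):
    """Fast timescale: stochastic state relaxation under the current weights."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            field = W[i] @ s
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
            s[i] = 1.0 if rng.random() < p_up else -1.0
    return s

lr = 0.05
for epoch in range(200):                    # slow timescale: weight (structure) adaptation
    # "Clamped" phase: states pinned to the environmental input (the external potential).
    clamped = np.mean([np.outer(d, d) for d in data], axis=0)
    # "Free" phase: the network fluctuates on its own from random starts.
    free_states = [gibbs_sweeps(rng.choice([-1.0, 1.0], size=n), W, sweeps=5) for _ in range(10)]
    free = np.mean([np.outer(f, f) for f in free_states], axis=0)
    W += lr * (clamped - free)              # contrastive Hebbian update
    np.fill_diagonal(W, 0.0)

# After adaptation, a free-running network should spend most of its time near the two inputs.
print(gibbs_sweeps(rng.choice([-1.0, 1.0], size=n), W, sweeps=20))
```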
For example, if the input is an unseen scene image, the system adjusts until the internal knowledge layer network finds several activated concepts matching image features, and the Wisdom layer triggers a possible action Purpose. At the same time, these concepts and their connection weights change slightly (learning the new scenario). All this is completed under the drive of physical laws, without line-by-line programming steps. Human cognition also happens unconsciously like this (guessing the use of an unfamiliar object upon seeing it, all completed by the subconscious doing massive calculations).
The Host plays the role of supervision and interface in this process: it can set external potentials to lock certain p-bit values (e.g., requiring finding a solution meeting certain constraints), or adjust heat bath temperature to affect random intensity (similar to adjusting exploration vs. exploitation). When the thermodynamic co-processor completes calculation, the Host reads the result (such as extracted semantics and decisions made), then executes or feeds back to the user. This human-machine collaboration ensures the system is controllable and usable.
In summary, Figure 2 describes a brain-like AI system blending thermodynamic computing, semantic hierarchy, and physical process computing into one: the Physical Layer implements energy-driven self-organizing computation using p-bit networks, the Cognitive Layer organizes information structure with Semantic Mathematics models, and the Interface Layer ensures human-machine interaction and stability with traditional computers. We believe this represents a direction for future AI development: Realizing intelligence in the physical world, rather than completely abstracting it in electronic computation. Such a system has the potential to possess powerful adaptability and learning capabilities, while energy consumption is far lower than traditional AI servers, and behavior possesses interpretable semantic levels.
Of course, realizing this blueprint requires overcoming numerous challenges, including large-scale p-bit manufacturing, DIKWP model engineering, controlling thermal noise to ensure system stability, etc. But existing theories and prototype results (such as Extropic TSU, Duan's Semantic AI experiments, etc.) have provided feasibility proof for us. We have reason to expect that with the convergence of progress in these fields, thermodynamic computing will play a Core Engine role in future AI, enabling AI to truly break through current framework limitations and move towards high-efficiency, strong-cognitive brain-like intelligence.
Conclusion and Outlook
This paper has systematically organized and analyzed the theoretical connotations, interrelationships and differences, and significance in the future of artificial intelligence of Jingnan Liu's "Natural Computing" (Everything is Computation), Yucong Duan's "Semantic Mathematics/Semantic Universe," the Thermodynamic Probabilistic Computing Paradigm, Computational Thermodynamics theory, and Boltzmann Machine/Ising physical computational systems.
Through the review, we see: Natural Computing thought redefines the scope of computation from a philosophical height, emphasizing interdisciplinary ubiquitous computing concepts, providing guidance for developing unconventional computing technologies. Semantic Mathematics theory fills the gap in traditional computing for semantic cognitive processing, introducing the DIKWP five-layer model, elevating symbolic computing to a level capable of representing and reasoning meaning, laying the foundation for artificial general intelligence architectures. Thermodynamic Computing Paradigm achieves breakthroughs in computing hardware and energy efficiency, utilizing p-bit devices and energy self-consistency principles to realize a brand-new computing mode, significantly reducing power consumption while meeting AI computing power demands, promising to create efficient parallel new computers. Computational Thermodynamics reveals the relationship between computation and physical entropy increase from principles, providing scientific basis for understanding computational limits and designing low-power computing. Boltzmann Machine/Ising Model and other physical computational systems use natural self-organization to solve computational problems, enriching computational methodology, and showing unique advantages in optimization, associative memory, etc.
Comparative analysis shows that these theories both differ and complement each other in multiple aspects. Natural Computing and Thermodynamic Computing challenge the central status of traditional Turing machines, emphasizing that computation can be directly completed by physical processes; Semantic Mathematics focuses on the fusion of Meaning and Information, endowing computation with cognitive level content; Boltzmann machines etc. reflect the dynamic balance between Entropy and Structure, coinciding with the essence of semantic extraction. In cognitive modeling, Semantic Mathematics provides a clear framework for human-like intelligence, while Thermodynamic Computing and physical networks bring possible paths for brain-like implementation; the combination of the three is expected to produce new autonomous intelligence systems. Technically, Thermodynamic Computing and Physical Evolutionary Computing have bright prospects, with some prototypes already showing disruptive potential for existing computing; Semantic Mathematics theory is gradually influencing AI evaluation and system design, potentially leading the next generation AI software revolution; Natural Computing concepts guide people in the long term to explore the computational capabilities of various natural media, such as quantum, DNA, biology, etc.
Specifically, we focused on the outlook of Thermodynamic Computing in Future AI. It is foreseeable that thermodynamic computing will become the key to breaking through von Neumann architecture bottlenecks and achieving brain-like energy efficiency and intelligence: it achieves high performance through physical parallelism, reduces energy consumption driven by thermal noise, and injects random exploration and emergence characteristics into the system, possibly spawning unprecedented autonomous intelligence capabilities. The conceptual model shown in Figure 2 depicts a brain-like AI blueprint fusing thermodynamic hardware and semantic software, which belongs to a vision at present but is not an unreachable fantasy, but a natural extension of many frontier results. If this direction succeeds, it will thoroughly change the trajectory of AI development: artificial intelligence will no longer be limited to silicon-based, von Neumann, high-consumption paths, but turn to existence closer to biological intelligence forms—low energy consumption, structurally adaptive, and more flexible and diverse behaviors.
Of course, we must also soberly recognize that these theories and visions face many unresolved challenges: how to quantify Natural Computing, how to integrate the axiom system of Semantic Mathematics with existing machine learning algorithms, how to ensure the scalability and stability of p-bit devices, and how to program and control thermodynamic computers. Addressing them will require sustained exploration and cooperation across computer science, physics, neuroscience, and cognitive science. Yet it is precisely such cross-disciplinary fusion that incubates new paradigms in computing. The next 10-20 years are likely to be a critical period in the evolution of computational paradigms: traditional Moore's Law is slowing while demand for intelligent computing surges, pushing us toward the "unconventional" paths discussed in this paper. When some of them become practical and mainstream, that will mark a profound paradigm shift in computing and artificial intelligence.
In summary, the emerging theories of Natural Computing, Semantic Mathematics, thermodynamic computing, and physical evolutionary computing jointly paint a grand picture of future computing: computation will become more natural, more semantic, and more physical. Everything can be computed and everything is computing; computing systems will be deeply integrated with the physical world and seamlessly connected with human cognition, becoming intelligent agents that are as efficient and capable as the brain. On this road we have taken initial but solid steps. Looking ahead, if academia and industry work together to break through the key technical bottlenecks, these theories will find their place in the next generation of artificial intelligence and computer systems, ushering humanity into a new era in which computation is ubiquitous and intelligence emerges naturally.