A Human Cognitive Mechanism Based on Conceptual Morphology and Semantic Space Updating: Proposed by Professor Yucong Duan

Yucong Duan
Benefactor: Shiming Gong

International Standardization Committee of Networked DIKW for Artificial Intelligence Evaluation (DIKWP-SC)
World Artificial Consciousness CIC (WAC)
World Conference on Artificial Consciousness (WCAC)
(Email: duanyucong@hotmail.com)



Abstract
The cognitive mechanism of the human brain in text comprehension is a core topic in cognitive science and artificial intelligence. Professor Yucong Duan proposes that the human brain's processing of textual cognitive content relies on concept morphology (the internal structure and representation of concepts), the dynamic updating of semantic space, and the independence mechanism of concept space. Building on this theory and on recent research in cognitive science, artificial intelligence, and linguistics, this report expands and validates the framework in depth. We explore the internal mechanisms of concept generation, semantic ambiguity resolution, cognitive updating, and semantic space restructuring, clarifying how humans form new concepts in language comprehension, how ambiguities are resolved in context, and how semantic cognitive structures are reorganized through the intake of new information. The report designs specific application cases, including ambiguity resolution in text comprehension, continuous learning and concept-drift adaptation in machine learning models, and dynamic updating of the concept layer and data layer in knowledge graph construction, to demonstrate the practical value of these theoretical mechanisms. Through meta-analysis, we compare important models and theories in related fields (such as the DIKWP model, conceptual space theory, semantic network models, prototype category theory, and distributional semantic models), together with experimental evidence from cognitive neuroscience and artificial intelligence, assessing their strengths, weaknesses, and scopes of applicability. The results show that the separation and interaction of concept morphology and semantic space are key to understanding human cognition and language processing: the relative independence of concept space ensures the cognitive subject's stable grasp of meaning, while the updating of semantic space reflects the adaptation of cognition to environment and context. Finally, the report summarizes the insights this integrated perspective offers for natural language processing and artificial intelligence models and looks ahead to future directions for interdisciplinary research.
Introduction
When reading and comprehending text, the human brain demonstrates remarkable cognitive abilities: We can not only quickly recognize the surface meanings of words and sentences but also connect context, invoke background knowledge, and form an understanding of the deeper meaning of the text. How does the brain process semantics during reading? How does it integrate new information into existing knowledge structures? These questions have long been at the core of research in cognitive science, linguistics, and artificial intelligence. Studies have pointed out that an individual's cognitive space (which comprises personal internal knowledge, beliefs, and experiences) interacts with the cognitive spaces of others through linguistic semantics, yet each person's cognitive space has a certain degree of closure, meaning that understanding is inherently subjective. This implies that different individuals may understand the same text differently, depending on the concepts and knowledge within their respective cognitive spaces. This relativity of cognitive space requires us to focus on the role of internal concept structures and semantic representations in the understanding process.
Professor Yucong Duan proposes a unique theoretical framework for "understanding," emphasizing that the understanding process is driven by the purpose of the cognitive subject. Through semantic association, probabilistic confirmation, and knowledge reasoning, new information from the text is integrated into its cognitive structure to form new knowledge representations. In this framework, three key elements are worth noting: concept morphology, semantic space update, and independence mechanism. Simply put, concept morphology refers to the form and internal structure of concepts, and the human brain may represent concepts in some abstract "morphological" way; semantic space update means that when the brain receives new textual information, the semantic representation (meaning space) will dynamically adjust to accommodate the new information and be consistent with existing knowledge; the independence mechanism refers to the autonomy and closure of concept space—that is, concepts have relatively independent representations in an individual's brain and will not essentially change due to different ways of expression, thereby ensuring the consistency of meaning understanding.
The above theory resonates with classic views in cognitive science and linguistics. As early as the end of the 20th century, psycholinguist Steven Pinker proposed that language is not the essential carrier of thought: thought employs a "mental semantics" representational system that is independent of natural language, which means that human conceptual thinking is somewhat independent of specific linguistic symbols. Similarly, cognitive linguist Ray Jackendoff's conceptual semantics theory argues that the meanings of words and sentences correspond to unconscious conceptual structures in the speaker's mind, which have universal properties beyond specific languages. These views all imply the independence of concept space: that is, there is a relatively autonomous conceptual representation layer inside the human brain that maps linguistic input into a unified meaning representation. Professor Yucong Duan's theory further systematizes this idea, emphasizing that in human text comprehension, the concept morphology within the cognitive subject is the foundation of understanding, while semantic processing must be continuously updated according to the situation while maintaining the stability and independence of the concept layer.
This report aims to conduct an in-depth expansion and validation of the theoretical framework proposed by Professor Yucong Duan and examine it in a broader academic context. First, we will construct the theoretical framework, clarifying the definitions and interrelationships of cognitive space, semantic space, and concept space, as well as the meanings of concept morphology dependence and independence mechanism. Next, we will analyze the mechanisms from a mechanistic level: exploring how humans generate new concepts, how they deal with the ambiguity of language, how they update their cognitive structures through learning, and how they reconstruct semantic space in the brain. Then, through several specific application scenarios and case analyses, we will illustrate the practical application potential of this theory, such as improving the accuracy of machine reading comprehension, helping machine learning models achieve continuous learning, and constructing dynamically evolving knowledge graphs. Subsequently, we will conduct a literature review and meta-analysis, synthesizing important models, theories, and experiments in related fields, and comparing and analyzing the research progress of concept-semantic cognition from a multidisciplinary perspective, including experimental findings in cognitive science, model performance in artificial intelligence, and theoretical explanations in linguistics. Finally, in the conclusion and outlook section, we will summarize the main findings, point out the significance of this research for deepening natural language processing and artificial intelligence system design, and propose future research directions.
Through the above structured exploration, we hope to answer the following key questions: (1) What role does concept morphology play in cognitive understanding? (2) How does the brain dynamically update its semantic space when understanding text? (3) How does the independence of concept space ensure our stable understanding of meaning? (4) How can these mechanisms be applied in artificial intelligence and language processing applications? The answers to these questions will not only help clarify the nature of human language cognition but also provide inspiration for building more intelligent AI systems that better conform to human cognitive mechanisms.
Theoretical Framework
In this section, we establish a theoretical framework that encompasses cognitive space, concept space, and semantic space, and elaborate on the meanings of concept morphology dependence and concept space independence. This framework integrates Professor Yucong Duan's theoretical views with related concepts in cognitive science and linguistics, laying the foundation for subsequent mechanism analysis and application research.
Cognitive Space, Concept Space, and Semantic Space
Cognitive Space can be defined as the internal psychological world in an individual's mind that carries their knowledge, beliefs, and experiences. It is highly individualized, with different content and structure for each person, and has a certain degree of closure: External information can only enter the cognitive space through channels such as sensation and language, and its interpretation depends on existing cognitive structures. The relativity of cognitive space means that different readers may have different understandings of the same text due to differences in knowledge background. This is reflected in linguistics by the Sapir-Whorf hypothesis, which emphasizes that different languages and cultural backgrounds can affect people's cognition of the same thing; however, modern views tend to believe that language influences but does not completely determine thinking. The concept of cognitive space emphasizes the subjective factor in the understanding process: The meaning of new information can only be truly understood when it is connected with the subject's existing knowledge.
Within the cognitive space, we further distinguish between Conceptual Space and Semantic Space. Conceptual Space refers to the collection of concepts in an individual's mind and their structural relationships. The term "concept" here broadly refers to the categories, objects, attributes, and other abstract units that people understand, as well as their hierarchies and associations. Conceptual space more reflects the subjective organization and structured form of knowledge and can be regarded as the cognitive subject's "internal ontology" or "semantic network." Many researchers have proposed that conceptual space has some geometric or topological structure: Swedish cognitive scientist Peter Gärdenfors, in his "Conceptual Space" theory, argues that concepts can be represented as regions in a multidimensional attribute space, with the distance between different concepts in space representing their semantic similarity. This theory endows conceptual space with the meaning of form (geometric structure): Concepts are not stored in a disorganized manner but have internal dimensions and structure, allowing for the calculation of distance and direction. This coincides with the concept morphology mentioned by Professor Yucong Duan: Concepts have their internal form, including definitions, features, and relationships with other concepts, and when understanding text, we are actually triggering the corresponding concept morphology based on words.
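To make this geometric picture concrete, the sketch below represents a few concepts as prototype points in a small hand-built property space and categorizes a new observation by its nearest prototype, in the spirit of Gärdenfors's proposal. The dimensions and all numeric values are invented for illustration.

```python
import numpy as np

# Toy conceptual space: each concept is summarized by a prototype point in a
# 3-D property space (sweetness, size_cm, crunchiness). Values are invented.
prototypes = {
    "apple":  np.array([0.6, 8.0, 0.8]),
    "banana": np.array([0.8, 18.0, 0.1]),
    "lemon":  np.array([0.1, 6.0, 0.3]),
}

def nearest_concept(observation: np.ndarray) -> str:
    """Categorize an observation by the nearest prototype (Euclidean distance),
    treating each concept as a region around its prototype."""
    return min(prototypes, key=lambda c: np.linalg.norm(observation - prototypes[c]))

# An unseen fruit: fairly sweet, smallish, quite crunchy -> lands in the "apple" region.
print(nearest_concept(np.array([0.5, 7.0, 0.7])))  # apple
```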
Semantic Space usually refers to the distributional representation of linguistic symbols (words, sentences, etc.) in terms of meaning. In natural language processing, semantic space models refer to high-dimensional vector spaces learned from large-scale corpora, where each word or symbol corresponds to a vector, and the distance or direction between vectors reflects the semantic associations and differences between words. For example, word embedding models map words to vectors, making synonyms closer together and antonyms farther apart, with taxonomic relationships represented as certain directional relationships in space. Semantic space more reflects the statistical associations in the objective use of language. For example, the words "king" and "queen" may be close to each other in vector space and parallel to the direction of "male-female," showing their semantic association. However, semantic space is a relatively external and data-driven representation, highly related to specific languages and corpora, and semantic spaces of different languages or different domains may have differences. Professor Yucong Duan points out that the differences in semantic space can be understood as follows: Different cognitive subjects, due to differences in language and knowledge background, form different semantic representation spaces. Even for the same person, the activated semantic space may change in different contexts or at different times. In short, semantic space represents the "semantic distribution of words in use," and it is dynamic, context-dependent, and corpus-dependent.
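The minimal sketch below illustrates this distributional idea with hand-built four-dimensional vectors; real semantic spaces are learned from corpora and have hundreds of dimensions. It reproduces the direction-parallelism mentioned above: "king" relates to "queen" as "man" relates to "woman."

```python
import numpy as np

# Hand-built toy embeddings; the dimensions (royalty, gender, ...) are invented.
emb = {
    "king":  np.array([0.9,  0.9, 0.1, 0.0]),
    "queen": np.array([0.9, -0.9, 0.1, 0.0]),
    "man":   np.array([0.1,  0.9, 0.0, 0.0]),
    "woman": np.array([0.1, -0.9, 0.0, 0.0]),
    "apple": np.array([0.0,  0.0, 0.9, 0.8]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy: king - man + woman should land nearest to queen.
target = emb["king"] - emb["man"] + emb["woman"]
print(max(emb, key=lambda w: cosine(emb[w], target)))  # queen
```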
Conceptual space and semantic space are both distinct and interconnected within the cognitive subject. We can regard conceptual space as the internal knowledge structure of the subject (subjective, stable, structured), while semantic space is directly related to language input and output (objective, dynamic, context-dependent). When a person reads text, the words (linguistic symbols) first map to certain positions in the semantic space (activating the semantic vectors of several words), and these activation patterns trigger the corresponding concepts in the conceptual space through the semantic-conceptual mapping mechanism, ultimately forming an understanding of the entire sentence or paragraph at the cognitive space level.
Concept Morphology Dependence and Concept Space Independence
Concept morphology dependence means that human grasp of textual meaning relies on the morphological structure of internal concepts. In other words, we do not directly compute based on the surface form of vocabulary when understanding a word or sentence, but rather convert it into an internal conceptual representation and then reason and understand based on the network of relationships between concepts. For example, when we read the sentence "The apple fell from the tree," the "apple" concept activated in our brain includes attributes such as its shape, the fact that it falls under the influence of gravity, and that it is an edible fruit. If the same sentence is in the historical context of Newtonian mechanics, "apple falling" is also associated with the more abstract concept of "gravity." It is evident that the meaning evoked by the text depends on the morphological structure of the concept: The richer the concept, the more dimensions of understanding it can provide. According to cognitive psychology research, children gradually enrich the connotations of concepts during language acquisition, moving from concrete representations (such as apples being red and spherical) to more abstract attributes (apples have mass and are affected by gravity) to form a complete conceptual structure. This gradual enrichment of concept morphology enables the improvement of understanding ability with cognitive development.
The role of concept morphology can also be explained by categorization and prototype theory. Prototype category theory (proposed by Rosch et al. in the 1970s) suggests that human concepts are mostly defined by "family resemblance," that is, a collection of meanings gathered around a prototype. For example, the concept of "bird" may have a prototype like a sparrow, which is a small bird that can fly, while a penguin, although belonging to the bird category, is farther from the prototype. When the text mentions "bird," we tend to rely on the prototype form of the concept to understand (assuming it can fly, etc.), unless the context indicates otherwise. Therefore, concept morphology (including prototypes and boundaries) affects our default interpretation of words. This is manifested as concept morphology dependence: Without the internal form of the concept, we cannot truly assign meaning to linguistic symbols.
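A small sketch of graded, prototype-based membership follows, with invented feature values: typicality decays with distance from the prototype, so a sparrow scores as a better "bird" than a penguin, while a bat scores lowest of all.

```python
import numpy as np

# Toy prototype for "bird" over invented features: [can_fly, has_feathers, lays_eggs, sings].
bird_prototype = np.array([1.0, 1.0, 1.0, 0.7])

exemplars = {
    "sparrow": np.array([1.0, 1.0, 1.0, 1.0]),
    "penguin": np.array([0.0, 1.0, 1.0, 0.0]),
    "bat":     np.array([1.0, 0.0, 0.0, 0.0]),
}

def typicality(x: np.ndarray) -> float:
    """Graded category membership: decays with distance from the prototype."""
    return float(np.exp(-np.linalg.norm(x - bird_prototype)))

for name, feats in exemplars.items():
    print(f"{name}: {typicality(feats):.2f}")
# sparrow ~0.74 (near-prototypical), penguin ~0.30 (atypical bird), bat ~0.21 (not a bird)
```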
The independence mechanism of concept space emphasizes that although the semantic space changes with corpora and context, the concept space within an individual has relative autonomy and independence. This independence is manifested at multiple levels:
Cross-linguistic independence: Vocabulary from different languages can map to the same concept space. For example, whether it is the English word "apple" or the Chinese word "苹果," for someone who knows both languages, they will point to the same "apple" concept in their mind. In other words, concept space provides a language-independent layer of meaning representation. Pinker referred to this idea as "mentalese," suggesting that we do not think in natural language but use internal conceptual symbols for reasoning. This indicates that concept space is relatively independent of specific linguistic symbol systems and has universality.
Concept stability: Even with diverse external expressions, a mature individual's concepts are stable over a short time scale. For example, when we read the word "COVID-19" today, whether it appears in a popular science article or a rumor post, the core concept of "COVID-19 disease" in our mind is relatively fixed and does not change because of different sentences. Of course, we may learn new knowledge about it (such as discovering new symptoms), which belongs to the enrichment of concept details, but the core definition of the concept (a respiratory infectious disease caused by a virus) remains unchanged. The independence of concept space ensures the sameness of the same concept in different contexts. Cognitive neuroscience research supports this point: There are some neurons in the brain that are highly selective for specific concepts (commonly referred to as "concept cells" or "grandmother cells"). For example, it has been found that neurons respond to different presentations of "Jennifer Aniston" (real photos, written names, etc.), showing that the brain has formed a unified concept representation of this person internally, which does not change with the presentation medium. These findings indicate that the brain indeed categorizes different perceptual inputs into the same concept, which is the embodiment of the independence of concept space.
Internally consistent reasoning: The independence of concept space allows the cognitive subject to perform consistent logical reasoning at the conceptual level without being entangled in interference from variant expressions. For example, we can infer "If all X are Y, and Z is X, then Z is Y." This logical reasoning takes place at the conceptual level, and once the premises are converted into conceptual relationships, the conclusion is not affected by the wording of the specific sentences. This is particularly important for understanding long texts: When reading an article, we often need to unify the expressions mentioned in different paragraphs into the same concept and then reason about the main idea or implicit conclusions of the article on that basis. Without the independence and closure of concept space, we would have to re-understand every time we encountered synonymous expressions, which would greatly reduce the efficiency of understanding. A minimal sketch of such a language-independent concept layer follows this list.
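In the sketch below, the lexicon and IS-A network are invented stand-ins: different surface forms resolve to one concept ID, and the syllogistic inference runs over concept IDs rather than over words, so it succeeds regardless of which language supplied the premises.

```python
# Toy concept layer: surface forms in different languages map to one
# language-independent concept ID (all mappings invented for illustration).
lexicon = {
    "apple": "APPLE", "苹果": "APPLE", "pomme": "APPLE",
    "fruit": "FRUIT", "水果": "FRUIT",
}

# Concept-level knowledge: IS-A links between concept IDs, not between words.
is_a = {"APPLE": "FRUIT", "FRUIT": "FOOD"}

def entails_is_a(word: str, category_word: str) -> bool:
    """Chain 'all X are Y' links over concept IDs: wording cannot affect the result."""
    concept, target = lexicon[word], lexicon[category_word]
    while concept in is_a:
        concept = is_a[concept]
        if concept == target:
            return True
    return False

print(entails_is_a("苹果", "fruit"))  # True: the same inference across languages
print(entails_is_a("pomme", "水果"))  # True
```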
The independence of concept space does not mean that concepts are immutable or that concepts are exactly the same among people, but rather emphasizes that in the understanding process, the concept layer has stable and autonomous characteristics relative to instantaneous semantic input. This characteristic ensures that semantic association and knowledge reasoning can take place within a relatively stable conceptual framework. Professor Yucong Duan pointed out in the "Theory of Understanding Relativity" that the generation of understanding requires probabilistic confirmation and knowledge reasoning, that is, matching the input information with existing conceptual knowledge and verifying its rationality. This process clearly depends on the stable representation of the conceptual layer—if the conceptual representation is unstable, it would be impossible to confirm the consistency of new information with the knowledge system. The independence of concept space ensures the consistency and stability of the conceptual layer, allowing new information to be accepted or to trigger cognitive conflict on this basis.
Summary of Theoretical Framework
Professor Yucong Duan's theoretical framework emphasizes such a picture: Text comprehension is the product of the interaction between the internal concept space and semantic space of the cognitive subject. The semantic space is responsible for extracting symbolic information from the text and making preliminary associations (such as word-to-word associations, contextual collocation probabilities, etc.), while the concept space provides a higher-level structure for interpreting the world knowledge and logical relationships corresponding to these symbol strings. Concept morphology, as the basic building block of concept space, determines the possible ways of meaning construction; the independence of concept space ensures that understanding has a consistent semantic basis within the subject and will not be disordered due to changes in external expression; at the same time, the relativity of cognitive space means that each person interprets the text based on their own concept space to form a subjective understanding. The comprehension process itself is manifested as the dynamic update of semantic space—cognitive subjects gradually adjust their semantic representation of the text as they read and generate new cognitive content through continuous interaction with the concept space. This framework is in line with many modern cognitive science findings, such as the existence of a distributed and integrated semantic system in the brain: The anterior temporal lobe (ATL) and other brain regions seem to act as a modality-free semantic "hub," integrating various sensory and linguistic information into abstract concepts. When a person needs to extract multiple meanings of a word, the activity of ATL is enhanced, and the prefrontal cortex and the posterior part of the middle temporal gyrus (pMTG, etc.) work together to control the selection of meanings suitable for the current context. These pieces of evidence can all be understood within the above framework: ATL, as the "hub" of concept space, ensures the independence and stability of concepts, while the prefrontal-temporal network executes context-dependent semantic updates and selection based on the information in the semantic space.
Mechanism Analysis
Understanding language text is a complex dynamic process involving mechanisms such as the generation of new concepts, ambiguity resolution, cognitive structure update, and semantic space reconstruction. In this section, we combine the latest research in cognitive science and artificial intelligence to conduct an in-depth analysis of these mechanisms, elucidating how the various elements in Professor Yucong Duan's theoretical framework play roles in the cognitive process.
Concept Generation Mechanism: Formation and Expansion of New Concepts
Humans are not born with all concepts; we continuously generate new concepts or enrich the connotations of existing concepts during growth and learning. Concept generation can refer to two situations: (1) forming entirely new concepts from scratch (for example, the invention of the concept of "black hole" by humans); (2) obtaining new concepts through the combination or differentiation of existing concepts (for example, subdividing the concept of "mobile phone" into "smartphone"). The importance of concept generation mechanisms for cognitive development goes without saying, as it involves various cognitive processes such as induction, analogy, and concept integration.
In children's cognitive development, concept generation is mostly achieved through continuous classification and generalization. Children initially learn concepts often starting from concrete examples. For example, by seeing various different dogs, children gradually form the concept of "dog." In this process, the brain needs to extract common features, ignore non-essential differences, and abstract conceptual representations from perceptual experience. Rogers and McClelland, among others, simulated this process through the Parallel Distributed Processing (PDP) model: They trained a neural network to represent animal categories with several perceptual features. As a result, the network gradually learned the concept of species, clustering similar animals into one category and forming corresponding abstract representations in the internal hidden layer. Dimensionality reduction analysis of the network's internal representation shows that different animals exhibit meaningful clustering and dimensions in the hidden space, such as differentiation by size and habitat. This suggests that the brain may use a similar mechanism to establish the structure of concept space through statistical learning and generalization.
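The toy simulation below is loosely in the spirit of such PDP work, with invented features and a far smaller network than the original: a small multilayer network is trained on animal feature vectors, and its hidden-layer activations are projected to two dimensions, where animals of the same category tend to cluster.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.decomposition import PCA

# Toy perceptual features (invented): [has_fur, has_feathers, lays_eggs, swims]
X = np.array([
    [1, 0, 0, 0],  # dog
    [1, 0, 0, 0],  # cat
    [1, 0, 0, 1],  # otter
    [0, 1, 1, 0],  # sparrow
    [0, 1, 1, 1],  # duck
    [0, 0, 1, 1],  # salmon
])
y = ["mammal", "mammal", "mammal", "bird", "bird", "fish"]

# A three-unit hidden layer stands in for the network's internal concept representation.
net = MLPClassifier(hidden_layer_sizes=(3,), max_iter=5000, random_state=0).fit(X, y)

# Recompute the hidden (ReLU) activations and reduce them to 2-D:
# similar animals should end up near each other in this learned space.
hidden = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])
print(PCA(n_components=2).fit_transform(hidden).round(2))
```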
However, human concept generation is not a purely statistical induction process. Analogical reasoning and concept blending are also important sources. Conceptual Blending Theory suggests that the brain can partially map and integrate the contents of two or more mental spaces to create a new concept space. For example, the word "network" originally referred to structures like spider webs, and "virus" referred to biological viruses. By analogically blending these two concepts, the new concept of "computer virus" was formed to explain the self-replicating phenomenon of computer programs. This process utilizes the morphological structure of our existing concepts, recombining them in new contexts to give birth to new concepts. Cognitive linguistics has extensively studied metaphor and concept integration processes, demonstrating that our conceptual system is highly creative: By projecting familiar concepts onto new domains, we can carve out new areas in concept space to accommodate new phenomena.
Neuroscience research has also begun to reveal the brain mechanisms of concept generation. The hippocampus and its connected areas (such as the entorhinal cortex) play a key role in forming new memories and concepts. The recently proposed cognitive map theory suggests that the hippocampal-entorhinal system is not only involved in spatial navigation but more generally constructs "cognitive maps" to represent various abstract relationships. Studies have simulated the formation of animal concept space: Stoewer et al. (2023) constructed a neural network that learned the "animal semantic space" cognitive map based on animal features. They found that the network could learn the similarity relationships between different animal species and automatically cluster 32 animals into biological categories (amphibians, mammals, insects, etc.). On this hierarchical cognitive map, higher-level abstract concepts (such as "mammals") naturally emerge, and the network can represent new animals it has never seen before by interpolating on the map. For example, given an incomplete feature of a new animal, the network can find the corresponding position in the existing map to represent it with an accuracy rate as high as 95%. This suggests that the cognitive map formed by structures such as the hippocampus may be a concept generation mechanism: By locating the relationship between new things and old knowledge in a continuous concept space, we can efficiently integrate new concepts. The emergence of new abstract concepts is believed to be related to this multi-scale cognitive map: On a coarse-grained map, concepts cluster to form categories, while on a fine-grained map, each instance is evenly distributed. By switching between different scales, the brain may achieve the function of abstracting concepts from concrete experiences.
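As a crude stand-in for this map-based completion (the cited model is a trained neural network, not the interpolation below), the sketch places a new animal on a toy feature "map" by distance-weighted interpolation from incomplete features; all animals and values are invented.

```python
import numpy as np

# Known animals on a toy cognitive map; features (invented):
# [has_fur, lays_eggs, leg_count/8, lives_in_water]
known = {
    "dog":    np.array([1.0, 0.0, 0.5, 0.0]),
    "frog":   np.array([0.0, 1.0, 0.5, 0.5]),
    "spider": np.array([0.0, 1.0, 1.0, 0.0]),
    "whale":  np.array([0.0, 0.0, 0.0, 1.0]),
}

def place_on_map(partial: dict) -> np.ndarray:
    """Locate a new animal from incomplete features ({dimension: value}) by
    distance-weighted interpolation between known positions on the map."""
    dims = list(partial)
    observed = np.array([partial[d] for d in dims])
    weights, positions = [], []
    for vec in known.values():
        dist = np.linalg.norm(vec[dims] - observed)
        weights.append(1.0 / (dist + 1e-6))
        positions.append(vec)
    w = np.array(weights)
    return (w / w.sum()) @ np.array(positions)

# New animal: lays eggs, semi-aquatic -> its inferred full profile lands near "frog".
print(place_on_map({1: 1.0, 3: 0.5}).round(2))
```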
Research on concept generation mechanisms also offers insights for the development of artificial intelligence. In machine learning, how to enable models to "create" new concepts or deal with new categories remains a continuous challenge. For example, unsupervised clustering and representation learning can be seen as the process of models automatically generating concepts; generative models (such as large language models like GPT) can to some extent combine existing knowledge to produce expressions of new concepts, but these "new concepts" are often still within the scope of the training corpus. True concept innovation requires models to possess analogical reasoning and abstraction capabilities. Current research attempts to combine symbolic logic with neural networks, allowing models to express new concepts by combining basic concepts or exploring concept space to discover new clusters. These all simulate some aspects of human concept generation. However, the richness and flexibility of human brain concept generation are far from being comparable to existing AI. What we can confirm is that concept morphology plays a central role in the generation process: Whether it is children's inductive learning or adults' analogical creation, it is an extension based on existing concept morphology. Concept morphology provides the "building materials" and "anchors" for new meanings, and the brain uses these to evaluate the similarities and differences between new and old concepts, thereby deciding how to place this new member in concept space.
Semantic Polysemy Resolution: Ambiguity Disambiguation in Context
Polysemy and synonymy are universally present in language. A word may have multiple related or unrelated meanings (polysemous words and homonyms), such as "bank" which can refer to a riverbank or a financial institution; different words may also express the same or similar concepts, such as "car" and "sedan." The human brain must correctly interpret the specific meaning of a word based on context when reading. This process is known as Word Sense Disambiguation (WSD). Under Professor Yucong Duan's theoretical framework, semantic ambiguity disambiguation reflects the synergistic effect of dynamic semantic space update and concept space independence: External ambiguities are resolved through the stability of internal concepts and context updates.
Resolving a polysemous word first requires identifying its possible meanings, which usually correspond to different concept nodes in concept space. For example, "apple" may refer to the fruit or Apple Inc. When we see the sentence "I ate an apple," we immediately select the meaning of the fruit; whereas in "Apple released a new phone," we choose the company meaning. How does the brain achieve this? Cognitive psychology experiments have shown that when encountering ambiguous words, the human brain often briefly activates the representations related to all common meanings of the word, and then quickly suppresses the irrelevant meanings based on context, retaining only the meaning consistent with the context. This is manifested in ERP studies as changes in the N400 component: If the information provided later in the sentence is inconsistent with the previously assumed meaning, it will trigger an enhancement of N400, indicating a semantic incongruity that requires re-interpretation. Functional imaging (fMRI) studies also provide evidence: When the same polysemous word is presented in different contexts, the activity patterns in the anterior temporal lobe (ATL) of the brain can distinguish the meaning of the word in different contexts. This suggests that there is a mechanism in the brain to differentiate semantics. The ATL is precisely the aforementioned "hub" region of concepts, indicating that it plays an important role in selecting concept meanings based on context. When there is a need to choose among several familiar meanings, ATL activity is enhanced; at the same time, regions such as the inferior frontal gyrus and the posterior part of the middle temporal gyrus (involved in the semantic control network) are also more active, used to suppress meanings irrelevant to the current situation. Therefore, the semantic control network in the brain dynamically updates the activated semantic space, retaining only the concepts consistent with the context.
The above process can be decomposed into: semantic space provides candidates, and concept space determines the affiliation. At the beginning, a polysemous word activates the vector representations of all its common usages in the semantic space, and these vectors are then mapped to different concept nodes in the concept space. Subsequently, the brain compares which concept node is more closely related to the other concepts activated by the context based on the additional activation information provided by the context, thereby selecting the correct concept meaning. Meanwhile, the unselected concept nodes are inhibited, and their activation decays rapidly, with the corresponding vector representations in the semantic space being adjusted (reduced weight). This process is extremely fast, so that we are often unaware that we have ever considered the wrong meaning—unless the author deliberately creates a garden path sentence that leads the reader to choose the wrong meaning and then overturns it. Under normal circumstances, this update of the semantic space is continuous, with each new word or sentence read adjusting the currently activated word meanings and topics in real time. In Professor Yucong Duan's framework, this belongs to the dynamic update of semantic space: The cognitive subject continuously maps the external text to the internal semantic representation and adjusts the semantic activation state based on new information to ensure overall coherence of meaning.
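A minimal, Lesk-style sketch of this two-stage picture follows, with invented sense signatures: both candidate senses are available up front, context overlap selects the winner, and the losing candidate is simply dropped, mimicking the decay of the suppressed meaning.

```python
# The semantic space proposes candidate senses; contextual overlap selects one.
# Sense signatures are invented for illustration.
SENSES = {
    "bank/finance": {"money", "deposit", "loan", "manager", "account"},
    "bank/river":   {"river", "water", "shore", "walked", "along"},
}

def disambiguate(context: set) -> str:
    """Pick the sense whose signature overlaps the context most; the
    unselected sense's 'activation' is discarded."""
    return max(SENSES, key=lambda s: len(SENSES[s] & context))

print(disambiguate({"he", "walked", "along", "the", "river"}))      # bank/river
print(disambiguate({"the", "manager", "approved", "the", "loan"}))  # bank/finance
```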
It is worth noting that humans have a significant advantage in dealing with polysemy because we have a vast amount of knowledge and common sense as support. This knowledge is stored in the concept space and can provide additional basis for ambiguity resolution. For example, in the sentence "The bank went bankrupt," even with only one sentence, we tend to think that "bank" refers to a financial institution rather than a riverbank, because we know that riverbanks do not "go bankrupt," while financial institutions do. This process of using common sense to eliminate ambiguity is actually the embodiment of concept space guiding semantic understanding: The knowledge in the concept space (a bank is a business, and businesses can go bankrupt) guides us to choose the correct meaning in the semantic space. This explains why models that rely solely on statistical correlations sometimes make common-sense errors—because they lack the deep knowledge of the concept space and can only rely on semantic space (word vector associations) to cover complex common sense. The current state-of-the-art language models (LLMs) have implicitly learned common-sense associations to some extent through massive training data, but they are essentially still operating at the semantic space level. When encountering new situations or needing to reason, they are easily misled by surface associations. In contrast, the human brain, with its concept space independence, can transcend the surface associations of words and determine the meaning based on its understanding of the world.
Research on polysemy resolution in artificial intelligence is mainly reflected in the Word Sense Disambiguation (WSD) task. Traditional methods include symbolic methods based on lexical knowledge bases (such as WordNet) and machine learning methods based on corpus statistics. The former is equivalent to endowing machines with explicit concept space, where artificially constructed lexical ontologies provide each sense and its relationships, and the selection is made by calculating the match degree between the context and the definitions of each sense; the latter is closer to the semantic space method, training a classification model to distinguish word senses based on contextual features. Modern methods often combine both, such as using pre-trained contextual embeddings (like BERT) to model the semantic space and then constraining possible word senses with knowledge graphs (concept space). Studies have shown that models integrating knowledge significantly outperform pure neural network models in low-resource or high-ambiguity environments. For example, in a WSD study of a low-resource African language, combining a bidirectional GRU neural network with a pre-trained model and introducing knowledge optimization increased the word sense disambiguation accuracy rate from around 70% to 84-85%. This indicates that the hybrid method of semantic space + conceptual knowledge also brings advantages in computation. This is consistent with the inspiration from the human brain: Relying purely on contextual word co-occurrence (similar to Transformer attention) has limitations, and adding knowledge constraints (similar to conceptual association) can better resolve ambiguities.
In addition to lexical ambiguity, humans also need to deal with syntactic ambiguity (such as "The old lady ate the cake in the kitchen" which may have different parsing trees) and pragmatic ambiguity (multiple interpretations of implied meanings). These require more complex analyses, but their common point is still to use context and background knowledge to select a reasonable interpretation. At the syntactic level, the brain also simultaneously activates multiple parsing trees, then selects one based on probability and semantic plausibility while suppressing the other; in pragmatics, humans rely on mental models (inference of the speaker's intention) and common sense to determine the meaning. These processes all belong to the broad category of semantic space update and concept space collaboration. Through these mechanisms, humans can quickly and accurately understand what the other party truly wants to express in most cases, even if the language itself contains ambiguity and uncertainty.
Cognitive Update and Semantic Space Restructuring: Integration and Reorganization of Knowledge
What changes occur in our brains when we acquire new knowledge through reading? Cognitive update refers to the process of integrating newly acquired information into existing cognitive structures. If the new information is consistent with existing knowledge, we may directly add it to the relevant concept's knowledge network; if a conflict arises, we may need to adjust the existing concept structure, which is semantic space restructuring or even concept space restructuring. Jean Piaget in his theory of cognitive development referred to this process as assimilation and accommodation: Assimilation is incorporating new information from the environment into existing schemas, while accommodation involves modifying existing schemas to adapt to new information. When a child first sees a whale, they might assimilate it into the concept of "fish" as a "big fish," but upon learning that whales breathe air and are mammals, they must adjust the concept boundaries between "fish" and "mammals" to classify the whale as a mammal. This is an example of concept space restructuring.
In Professor Yucong Duan's theoretical framework, the process of understanding text itself involves micro-level cognitive updates. He emphasizes that understanding is a process of knowledge structure reorganization: New semantics are embedded into the cognitive subject's existing knowledge network through probabilistic confirmation and knowledge reasoning, forming an updated cognitive structure. If we liken the brain to a knowledge graph, then reading is the continuous expansion and modification of this graph. For example, reading the sentence "Whales are not fish" requires adjusting the attributes and connections of the "whale" node. This dynamic update demands semantic space restructuring: When reading this sentence, the semantic space representation of "whale" needs to move from the area close to "fish" to the "mammal" area. This can be seen as a shift or remapping of the semantic space coordinates. After multiple such updates, the cognitive subject's concept space changes (the hierarchical relationship of the whale concept is altered).
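A toy sketch of such an update on a dictionary-based graph, with hand-coded triples: the IS-A edge of the "whale" node is rewired (accommodation) while the surrounding concepts and the node's other facts persist.

```python
# Initial (mistaken) schema plus some stable surrounding knowledge.
graph = {
    ("whale", "is_a"):     "fish",   # the naive classification
    ("whale", "lives_in"): "water",
    ("fish",  "is_a"):     "animal",
    ("mammal", "is_a"):    "animal",
}

def accommodate(subj: str, rel: str, new_obj: str) -> None:
    """Rewire one edge of an existing node; neighboring concepts are untouched."""
    old = graph.get((subj, rel))
    graph[(subj, rel)] = new_obj
    print(f"updated: {subj} {rel}: {old!r} -> {new_obj!r}")

accommodate("whale", "is_a", "mammal")  # "Whales are mammals, not fish"
print(graph[("whale", "lives_in")])     # water: other facts survive the restructuring
```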
Research in cognitive psychology and neuroscience provides some evidence regarding knowledge updates. The semantic network PDP model mentioned earlier can also simulate the phenomenon of knowledge updates: If a well-trained network receives new inputs that contradict previously learned knowledge, it will adjust connection weights driven by error, potentially leading to changes in the existing semantic structure. For example, in the Rogers and McClelland model, if a special case is introduced later, it is found that the model readjusts its internal representation, causing the overall classification boundary to shift. The brain may also achieve knowledge updates through similar gradient adjustments (though not strictly via the backpropagation algorithm, but possibly through synaptic plasticity and reactivation). However, the brain has a mechanism for gradually integrating new knowledge to avoid drastic changes to existing memories, sidestepping an analogue of the catastrophic forgetting problem in machine learning. The interaction between the hippocampus and neocortex is believed to support this: New memories (knowledge) are temporarily stored in the hippocampus and then gradually consolidated into the semantic network of the neocortex during sleep, thereby smoothly updating knowledge.
An interesting finding is that the semantic memory network becomes denser with age. A study by Cutler et al. (2025) using event-related potentials (ERP) found that semantic retrieval in the elderly shows different characteristics: The retrieval response to inconsistent features is slower, the amplitude of the N400 component is reduced, and there is increased sustained activity in the prefrontal cortex. Researchers explained that this is because the elderly have accumulated a large amount of knowledge over their lifetime, making the semantic representation space denser with more associated concepts. As a result, retrieving a concept requires traversing and comparing more features, which slows down the speed and requires more monitoring. In simple terms, the more knowledge one has, the more "crowded" the semantic space becomes, and the tighter the connections between concepts. This not only indicates that cognitive updates are continuously carried out and accumulate effects but also confirms semantic space restructuring: As knowledge increases, the distribution of concepts in the semantic space changes—many previously unrelated concepts may become connected due to new knowledge, making the overall space topology more complex. This is an important clue for understanding the organization of knowledge in the brain: We do not simply add nodes but continuously rewire connections to build a more tightly interwoven semantic network. This also explains why the elderly sometimes easily experience memory confusion or require more prompts to accurately recall—too many associations increase the screening cost. This study highlights the importance of concept space independence from the opposite perspective: Even if the semantic space becomes denser, concepts themselves still need to maintain a certain degree of independence and clarity; otherwise, the more knowledge one has, the more likely it is to lead to blurred concept boundaries. The brain may deal with dense networks through hierarchical and modular approaches, categorizing concepts to maintain local independence.
In the field of artificial intelligence, knowledge updates and concept drift are also important issues. In streaming learning or online learning scenarios, models need to continuously learn new data and concepts. If a model lacks a clear concept layer, each round of training on new data may impact the existing parameter distribution, leading to "catastrophic forgetting." To address this, some studies have introduced continuous learning algorithms, such as Elastic Weight Consolidation (EWC), to protect important old knowledge parameters. At the same time, knowledge graphs and other symbolic representations are used to assist model updates by adding new knowledge in the form of triples while keeping the existing ontology unchanged, thereby avoiding mismodifications to existing concepts. We can see that the approach of the human brain and the ideal AI solution have similarities: Using an independent concept layer as a buffer, new data mainly affects semantic associations, while the core definitions of concepts are not easily changed; when sufficient evidence accumulates to necessitate change, the concept layer is carefully adjusted (equivalent to the schema layer). The clear distinction between the schema layer (concept model) and the data layer (instances) in knowledge graph research follows a similar idea: The schema layer stores abstract concepts and relationships (ontology), while the instance layer stores specific facts. New facts usually only update the instance layer, and the schema layer (concept system) is only changed when it is found necessary to reclassify. For example, adding the fact "Whales are mammals, not fish" to a biological knowledge graph will prompt the ontology schema layer to update the category of whales, but the concepts of fish and mammals remain, with only the structure being adjusted. This dual-layer structure improves the stability and efficiency of knowledge updates and corresponds to the dual-layer organization of cognitive space (concepts) and semantic space (instances) in the human brain.
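For concreteness, here is a minimal numpy sketch of the EWC regularizer mentioned above; the parameter and importance values are made up, and in a real system this penalty is added to the task loss inside a training loop.

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Elastic Weight Consolidation: penalize drift of parameters that were
    important (high Fisher value) for earlier tasks."""
    return 0.5 * lam * float(np.sum(fisher * (theta - theta_old) ** 2))

theta_old = np.array([0.8, -0.3, 1.2])  # parameters after the old task
fisher    = np.array([5.0,  0.1, 3.0])  # per-parameter importance (made up)
theta_new = np.array([0.9, -1.0, 1.2])  # candidate parameters on new data

# Moving the unimportant 2nd parameter is cheap; nudging the important 1st is costly.
print(ewc_penalty(theta_new, theta_old, fisher))  # 0.0495
```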
Semantic space restructuring can be very evident in certain special cases. For example, when a person learns a completely new field of knowledge, a large number of concepts may need to be reorganized and associated in a short time to form a new sub-network; similarly, in language comprehension, if a reader is misled into forming an incorrect situational model and later discovers clues that overturn the previous understanding (such as the revelation of a hidden identity in a novel), they need to reconstruct the semantic interpretation of the entire article. This is manifested as the reader reviewing the previous text and reinterpreting some foreshadowing elements—equivalent to a large-scale semantic space restructuring operation. This phenomenon indicates that the semantic space is not a static accumulation of words but can be reinterpreted based on new cognitive drives. Professor Yucong Duan's statement that understanding is a "process of forming a new cognitive structure" is not just about producing a structure at the end but also involves reflection and adjustment of previous parts during the process. Metacognition and monitoring processes play an important role here, as the brain has the ability to detect inconsistencies or logical contradictions in understanding, thereby triggering a check and modification of the modeled semantics. Current artificial intelligence reading models still lack this large-scale restructuring capability, but some research is attempting to add memory retrieval-verification modules that provide feedback to the model for adjustment if the generated understanding is inconsistent with the knowledge base. This is similar to a simple form of restructuring.
In summary, cognitive updates and semantic space restructuring reflect the dynamic shaping and self-correcting mechanisms of human cognition. The brain is not a passive information storage device but actively maintains and optimizes its knowledge network during the understanding process. By maintaining the relative independence and stability of concept space, it can maintain the coherence of the cognitive system in the face of constantly changing information flows; by flexibly adjusting the representation of semantic space, it can efficiently absorb new knowledge and correct errors. This mechanism is the foundation of long-term learning and intelligent behavior. Next, we will apply the above understanding of mechanisms to specific cases to illustrate how these cognitive mechanisms can inspire improvements in text understanding, machine learning, and knowledge graph technologies.
Application Scenarios and Case Analysis
The theoretical exploration of cognitive mechanisms should ultimately serve practical applications. In this section, we design and analyze several specific scenarios to demonstrate the guiding value of the aforementioned concept morphology, semantic space update, and independence mechanisms for practical problems. These scenarios cover natural language text understanding, machine learning model design, and knowledge graph construction, among other fields. Each case will elaborate on its application methods and effects in combination with the theory.
Case One: Text Understanding and Semantic Ambiguity Resolution
Scenario Description: Imagine an intelligent reading system that needs to understand news articles and answer readers' questions. The system faces an article containing polysemous words and complex references, such as:
"Xiaoming was walking along the riverbank near the bank when he accidentally fell into the water. Fortunately, a police officer on the riverbank rescued him in time. At the same time, the bank manager happened to pass by and also participated in the rescue."
In this passage, the word "bank" appears with two different meanings (financial institution vs. riverbank) within just two sentences, and there is also a reference to resolve ("also participated in the rescue" refers to the manager). A typical statistics-based NLP system might confuse the meaning of "bank" or fail to understand why the manager was present. We hope to use the concept-semantic hierarchical approach to improve the understanding of such text.
Application Approach: We construct an internal knowledge graph/concept network for the system, which includes concept nodes such as "bank (financial institution)," "riverbank (geographical meaning of bank)," "manager," "police officer," etc., and relationships such as "manager - belongs to - bank (institution)," "bank (institution) - near - riverbank," etc. When reading the article, the system not only generates vector representations of words (semantic space representation) but also queries or constructs corresponding concept nodes (concept space representation). When encountering the ambiguous word "bank," the system simultaneously activates two concepts: "bank (institution)" and "riverbank." Next, through contextual semantic associations and knowledge graph relationships, the system performs disambiguation: The first sentence "walking along the riverbank near the bank" clearly indicates a geographical entity "near the riverbank" through syntactic analysis and common sense, and "near the bank" indicates that an institution is nearby, so "bank" here should refer to the institution; "walking along the riverbank" locates the specific place. Thus, the system generates a situational graph for the first sentence: Xiaoming - at - riverbank (location), riverbank - near - bank (institution). The second sentence mentions "the police officer on the riverbank," who is a rescue role associated with the riverbank situation; "at the same time, the bank manager happened to pass by," here "the bank manager" is determined to be the institutional meaning due to the grammatical limitation of the bank's ownership relationship. The manager passing by the riverbank indicates that the bank (institution) is geographically adjacent to the riverbank, which is consistent with the knowledge in the first sentence. At this point, the system has correctly mapped all instances of "bank" to the "institution" concept. When subsequently parsing "also participated in the rescue," it is necessary to know that the subject "the bank manager" is the manager mentioned in the previous sentence, which can be achieved through concept unification: the manager node in the concept network has been activated, and the system parses the pronoun (omitted subject) as this known concept. Thus, the entire event sequence is clear: falling into the riverbank - police rescue - manager also rescues. The system can answer questions based on this, such as "Who fell into the water? Who participated in the rescue? Where did the rescue take place?" This knowledge-driven understanding method is more effective than simply using sentence vectors or Transformer attention mechanisms in ensuring that polysemous words are not misunderstood and cross-sentence references are correctly linked.
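The fragment below sketches only the sense-selection step of this pipeline with a hand-coded concept network; the node names, attributes, and the resolve_bank helper are hypothetical stand-ins for a full knowledge-graph query.

```python
# Hand-coded concept morphology: only the institution sense can "have" a manager.
concepts = {
    "bank_institution": {"type": "organization", "roles": {"manager"}},
    "riverbank":        {"type": "location",     "roles": set()},
    "manager":          {"type": "person", "can": {"rescue"}},
    "police_officer":   {"type": "person", "can": {"rescue"}},
}

def resolve_bank(context_roles: set) -> str:
    """Select the 'bank' concept whose morphology licenses the context."""
    if context_roles and context_roles <= concepts["bank_institution"]["roles"]:
        return "bank_institution"
    return "riverbank"  # fall back to the geographic reading when nothing licenses the institution

print(resolve_bank({"manager"}))  # "the bank manager" -> bank_institution
print(resolve_bank(set()))        # no licensing role in context -> geographic reading
```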
Effects and Advantages: Text understanding using concept-semantic dual-layer representation has obvious advantages when dealing with ambiguous and reasoning-requiring texts. It uses concept morphology (for example, "bank (institution)" has the attribute of "having a manager," "manager" is a human concept with the ability to rescue people; "bank (riverbank)" is a geographical concept and does not have a manager) to filter out unreasonable interpretations. The representation of semantic space is used to capture sentence structure and word co-occurrence, but the true decision-making on meaning is based on the associative logic of the concept layer. In this way, even if a sentence has a complex structure, the system can first map the nouns and verbs to concepts and then search for possible relationships based on the concept network to understand the sentence meaning. This is similar to how humans use common sense networks to enhance understanding during reading. According to reports, this method is helpful in machine reading comprehension tasks (such as MRC question answering) to improve the accuracy of answers to questions requiring common sense reasoning. The only challenge is that the system needs a comprehensive knowledge graph to support it, which involves knowledge acquisition. However, even if the graph is not complete, the system can dynamically generate temporary concept nodes and relationships during reading (for example, new entity names can create nodes and connect to known concepts). This capability is equivalent to allowing the model to build a "situational model" during reading, which is one of the methods that psychology believes humans use for reading comprehension.
Supporting Research: In recent years, multimodal large models have also developed in this direction. For example, Meta's Segment Anything model segments objects in images conceptually and, when combined with language models, can exhibit similar concept-level reasoning effects in multimodal understanding. In the field of pure text, Microsoft Research's Dr. KB project attempts to interact in real-time between pre-trained language models and knowledge bases to improve the answers to implicit knowledge questions in question-answering tasks. These all prove that text understanding integrated with conceptual knowledge is more reliable than pure distributed representation. This is in line with Professor Yucong Duan's theory emphasizing concept dependence, which has achieved performance improvement and error rate reduction in practice.
Case Two: Continuous Learning of Concepts in Machine Learning
Scenario Description: An intelligent classification system is initially trained to identify spam and normal emails in emails. Over time, the forms of spam emails continue to evolve, with new vocabulary and new tricks emerging constantly, such as the increasing appearance of content related to cryptocurrency scams, which were not involved in the initial training set. Traditional machine learning models, if not updated, may not be able to recognize new types of spam emails; however, retraining directly with new data may lead to forgetting old knowledge (such as previous spam email features). We hope to design a continuous learning mechanism that allows the model to incrementally update its understanding of the concept of spam emails without forgetting the previously learned discrimination rules.
Application Approach: Drawing on the brain's concept space independence + semantic space update mechanism, we can build a two-layer structure for the classification model: the lower layer is the neural network used for text representation (equivalent to semantic space), and the upper layer is the interpretable rules or concept library (equivalent to concept space). During the initial training, the model not only learns an end-to-end classifier but also induces some conceptual features. For example, through the analysis of spam email corpora, several high-level semantic features are extracted: such as "mentioning monetary rewards," "containing suspicious links," "urgent tone requesting personal information," etc. These features can be obtained by clustering the activation patterns of the network's intermediate layers or defined by expert knowledge. We regard these high-level semantic features as nodes in the concept space, and the concept of spam email is connected to these feature nodes (indicating that spam emails often have these attributes), while the concept of normal email is connected to another set of features (such as "daily greeting phrases," "personalized addressing," etc.). During classification decision-making, the model, on the one hand, embeds the email through the neural network and, on the other hand, calculates the matching degree of these conceptual features to make a comprehensive judgment on the email category.
When a new type of email arrives (such as a cryptocurrency scam), the model may misclassify it at the neural-network level because of its novel vocabulary and phrasing. Through the concept layer, however, it can still recognize familiar patterns: "promising high returns" and "requesting transfers to a Bitcoin address" use new surface words but semantically match the existing conceptual features "monetary rewards" and "suspicious links." The model can thus assimilate the new email as a variant under the spam concept without immediately changing the concept space. In parallel, it can record the new keywords that recur in such email ("Bitcoin," "mining," etc.) and their associations, gradually consolidating them into new semantic features. Once enough such emails accumulate, the concept space can be expanded: a "cryptocurrency scam" subconcept is added as a subclass of spam, with the features of that subclass summarized. This mirrors how the brain learns a new topic: first locating it within a known category, then establishing finer concept nodes as understanding deepens.
Technically, this process can be implemented in a continual learning framework. Training proceeds in stages; when each batch of new data arrives, four steps are taken (a code sketch follows the list):
1. Freeze the concept features and classification decision boundaries learned in the previous stage, and adjust mainly the neural network layer (the semantic space). The network adapts its representation of the new data while recalibrating the scores of the output conceptual features. If new data activates none of the original spam features yet is still labeled as spam (per manual annotation), the network is pushed to learn new latent patterns that predict spam, introducing new candidate features.
2. Interpret the new latent features, manually or algorithmically, and assign them conceptual meanings (for example, if the new word with the highest attention weight is "Bitcoin," introduce "involves cryptocurrency" as a new feature node).
3. Add the new feature node to the concept space and connect it to the spam concept. If it proves strongly related to existing features, or several new features cluster together, a new subconcept may be formed (for example, grouping all investment-scam features into one category).
4. Update the classifier to decide using both old and new features. Because the concept-space structure retains old knowledge, the model keeps its earlier discrimination criteria even as network parameters change: feature nodes such as "suspicious links" and "urgent tone" are preserved even if the old keywords no longer appear frequently.
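The following sketch walks through the four steps with toy components. DummyEncoder, interpret_new_feature, and ConceptSpace are hypothetical stand-ins, not a real continual learning library; a production system would fine-tune an actual neural encoder in step 1.

```python
# Minimal, framework-agnostic sketch of the four-step staged update above.

from collections import Counter

class ConceptSpace:
    """Interpretable concept layer; existing nodes are never deleted."""
    def __init__(self):
        self.features = {"spam": {"monetary_reward", "suspicious_link", "urgent_tone"}}

    def add_feature(self, concept, feature):
        self.features[concept].add(feature)        # Step 3: expand, never overwrite

class DummyEncoder:
    """Stand-in for the neural semantic space (the only part we fine-tune)."""
    def fit(self, batch):                          # Step 1: adapt to new data
        self.vocab = Counter(w for text in batch for w in text.lower().split())

def interpret_new_feature(encoder):
    """Step 2: toy interpretation; name a feature after a salient novel keyword."""
    return "involves_cryptocurrency" if encoder.vocab.get("bitcoin") else None

def staged_update(encoder, concepts, new_batch):
    old_features = set(concepts.features["spam"])  # snapshot of protected knowledge
    encoder.fit(new_batch)                         # concept layer stays frozen here
    new_feature = interpret_new_feature(encoder)
    if new_feature:
        concepts.add_feature("spam", new_feature)
    # Step 4: old criteria survive alongside the new feature node.
    assert old_features <= concepts.features["spam"]
    return concepts.features["spam"]

concepts = ConceptSpace()
batch = ["Send 0.1 bitcoin to this wallet for guaranteed high returns"]
print(staged_update(DummyEncoder(), concepts, batch))
```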
Effects and Advantages: This continual learning scheme balances plasticity and stability well. The semantic space (the neural network) provides the flexibility to adapt to new data, while the concept space acts as a regularizer that keeps the model from forgetting old knowledge. This matters especially in spam detection, a highly adversarial domain where attackers frequently change the wording while keeping the same tricks, so conceptual feature extraction helps the model grasp the essentials. Studies have shown, for example, that concept-based interpretable AI models are not only easier to explain but also more robust on out-of-distribution data, because concepts are more abstract and essential and do not shift with surface variation. Professor Yucong Duan's emphasis on the independence of the concept space supplies exactly this robustness: even if the input distribution (the semantic space) shifts, the conceptual discrimination rules persist independently, so the model remains stable in the face of change.
Actual Case: Similar attempts exist in academia. The "Concept Bottleneck Model," for example, first predicts human-defined intermediate concepts in image recognition (such as a bird's color and shape features) and then predicts the category from those concepts. When a new bird species is introduced, the model can adapt quickly so long as its features are describable by existing concepts, without disturbing recognition of old categories. In NLP, some work decomposes sentence sentiment classification into conceptual dimensions (anger, irony, positivity, etc.); when fine-tuning on new corpora, the conceptual dimensions are locked and only the mapping is adjusted, which prevents catastrophic forgetting and lets people see why the model's decisions changed. As knowledge graphs and deep learning continue to merge (Neuro-Symbolic AI), such continual learning architectures can be expected in more machine learning tasks; in essence, they borrow the human brain's separation of concept and semantic processing.
Case Three: Knowledge Graph Construction and Dynamic Evolution
Scenario Description: Consider constructing and maintaining a large-scale medical knowledge graph. Initially, textbooks and dictionaries are collected to build an ontology (schema) covering concepts and relations such as diseases, symptoms, and drugs. New medical literature and clinical reports then arrive continuously, and this new knowledge must be added to the graph, for example a newly reported symptom of a disease or a new treatment method. A graph that is not updated in time becomes outdated. We need a mechanism that adds new knowledge quickly while keeping the graph globally consistent and free of conceptual confusion or contradiction.
Application Approach: Knowledge graphs naturally distinguish a concept layer (the schema layer) from an instance layer (the data layer). The schema layer resembles our concept space, defining concept types and relations; the instance layer resembles the semantic space, storing specific facts. Construction typically involves steps such as entity linking and relation extraction. From the new-literature sentence "XX syndrome can also lead to loss of taste," for example, we extract the entities "XX syndrome" (a disease) and "loss of taste" (a symptom) together with the relation "leads to." Since the ontology already defines the disease and symptom concepts and the relation type "disease - has symptom - symptom," the new fact can be added directly: "XX syndrome - has symptom - loss of taste." If an entirely new term such as "YYY therapy" appears, its concept type must first be determined: the context suggests it is a new "treatment method" entity, so it is added to the instance layer while we ensure the "treatment method" concept exists in the schema layer (if it does not, the ontology needs extending).
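A toy version of this schema-checked insertion is sketched below; the schema, the entities, and the add_fact helper are illustrative, not a real knowledge-graph API.

```python
# Sketch of schema-checked fact insertion: the instance layer grows,
# the schema (concept) layer stays untouched.

SCHEMA = {   # concept layer: types and typed relations
    "types": {"Disease", "Symptom", "Treatment"},
    "relations": {"has_symptom": ("Disease", "Symptom")},
}
instances = {"XX syndrome": "Disease", "loss of taste": "Symptom"}   # instance layer
facts = set()

def add_fact(head, relation, tail):
    """Admit a fact only if head/tail types match the relation's schema signature."""
    domain, rng = SCHEMA["relations"][relation]
    if instances.get(head) == domain and instances.get(tail) == rng:
        facts.add((head, relation, tail))   # instance layer grows; schema untouched
        return True
    return False                            # mismatch: escalate to ontology review

print(add_fact("XX syndrome", "has_symptom", "loss of taste"))   # True
instances["YYY therapy"] = "Treatment"   # new entity typed with an existing concept
print(facts)
```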
Independence Mechanism Guarantee: In the update of the knowledge graph, the independence of the concept layer is manifested as follows: modifications to the instance layer do not automatically change the concept definitions. For example, although "loss of taste" is added as a symptom of "XX syndrome," it does not change the definition of the concept "symptom" or its relationships with other concepts. Only when a large number of new instances show phenomena that cannot be covered by existing concepts will the concept layer be considered for modification. For example, if the literature frequently mentions a new type of medical entity that does not belong to any existing category, the schema layer needs to be expanded to introduce a new concept. In this case, medical experts will review the ontology changes to ensure the correct evolution of the concept system.
Semantic Space Update Mechanism: In practice, ontology learning algorithms can help discover emerging patterns in new data. Text mining might find, for example, that many sentences mention a new kind of therapy whose characteristics differ from existing "drugs" or "surgery," so the algorithm proposes introducing "gene therapy" as a subclass of treatment methods. Graph updates are also accompanied by consistency checks, such as verifying against the ontology's constraints whether newly added facts conflict with existing knowledge. Any conflict requires a human decision about whether to update the concept layer or the instance layer to resolve it. This amounts to the concept space constraining and correcting semantic-space updates.
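The consistency check can be illustrated as a constraint that vetoes updates violating concept-layer disjointness. The constraint set and the propose_type helper below are assumptions for demonstration only:

```python
# Toy consistency check: ontology constraints vetoing a semantic-space update.

DISJOINT = {frozenset({"Disease", "Symptom"}), frozenset({"Drug", "Surgery"})}

def propose_type(entity, new_type, instances):
    """Accept an instance-layer typing unless it violates a concept-layer constraint."""
    current = instances.get(entity)
    if current and current != new_type and frozenset({current, new_type}) in DISJOINT:
        return f"REJECT: {entity} cannot be both {current} and {new_type} (manual review)"
    instances[entity] = new_type
    return f"OK: {entity} typed as {new_type}"

instances = {"loss of taste": "Symptom"}
print(propose_type("gene therapy", "Treatment", instances))  # accepted at instance layer
print(propose_type("loss of taste", "Disease", instances))   # vetoed by the concept layer
```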
Specific Case: In knowledge graph construction there is a notion of knowledge evolution, concerned with keeping the graph current over time. In industry, Google's Knowledge Graph is updated daily from crawled web data using a similar division: the ontology layer is expert-maintained and relatively stable, while the data layer expands automatically from the crawl. In the high-stakes medical domain, a human-machine combination is typical: machines extract candidate new entities and relations from large document collections, and humans review them before they enter the knowledge base. During the COVID-19 pandemic, for example, large numbers of new papers appeared daily; researchers used text-mining systems to extract virus variants, new symptoms, and new drug-trial results into a COVID-19 knowledge graph that helped researchers track progress. Many previously nonexistent concepts (specific variant codes, new drug molecules) were introduced, yet the higher-level conceptual framework remained stable (new entities were still classified as "virus strains" or "antiviral drugs"). This exemplifies a stable concept layer with an updating instance layer.
Advantages: By separating concepts and instances, knowledge graphs can achieve scalable evolution. When users query the graph, they first locate the required category through the concept layer and then retrieve specific instances in the instance layer. This is similar to how the brain first thinks "I want to find information related to drugs" and then retrieves drug instances. When the graph is updated, this process remains unchanged, only the instances become richer. If a new concept is introduced in an update, the expansion of the concept layer will also notify the relevant modules to update the index. Overall, this architecture allows large-scale knowledge systems to have both a stable structure and the ability to grow.
Theoretical Connection: In his DIKWP model, Professor Yucong Duan notes that distinguishing the levels of data, information, knowledge, and wisdom/purpose allows content at different levels to be processed systematically, and in particular that decoupling the concept space from the semantic space permits a more systematic analysis of information processing. The schema-instance division in knowledge graphs matches this concept-data division and aligns with the DIKWP model's ideas. He further proposed, in the "Theory of Understanding Relativity," the importance of semantic association and concept confirmation, which in graph construction appears as follows: newly extracted semantic associations (entity relations) must be confirmed by concept verification (checking the ontology) before being admitted. This checking step ensures the graph's credibility and prevents semantic noise from polluting the conceptual system.
The above cases demonstrate the application potential of concept morphology, semantic updates, and independence mechanisms in practical systems. From text understanding to machine learning models and knowledge engineering, we see that these cognitive characteristics of the human brain provide valuable inspiration for artificial systems. In the next section, we will review and compare research and models in related fields more broadly to systematically analyze the embodiment of these concepts in different disciplines.
Literature Review and Comparative Analysis
To comprehensively evaluate and expand Professor Yucong Duan's proposed theory, this section will review and compare important models, theories, and experimental studies in related fields. We will start from the perspectives of cognitive science, artificial intelligence, and linguistics, respectively, summarize their insights on concept generation, semantic representation, cognitive updates, and other issues, and analyze the similarities and differences with Professor Duan's theoretical framework. Through this meta-analysis, we can clarify the mainstream consensus and points of divergence in current research, verify the rationality of Professor Duan's theory, and identify areas for further improvement and integration.
Cognitive Science Perspective: Mental Representation of Concepts and Semantics
The field of cognitive science has many classic theories on the representation of human concepts and semantics:
Prototype Category Theory: Proposed by Eleanor Rosch and colleagues, this theory holds that the psychological representation of a concept (category) is not a strict set of necessary and sufficient conditions but a fuzzy set organized around a "prototype." Whether an instance is judged to belong to a concept depends on its similarity to the prototype. The theory explains the elasticity of concept boundaries and semantic uncertainty, for example why we consider penguins birds, but atypical ones. This relates to the concept morphology Professor Duan emphasizes: concepts are not all-or-none but have internal structure (a prototype plus peripheral exceptions). Experimental evidence comes from classification tasks: typical instances elicit faster responses, while atypical instances are slower and more prone to disagreement. Professor Duan's theory is compatible with this, since concept morphology can include typical and atypical features as part of the internal structure.
Theory Theory: Another view in cognitive psychology holds that human concepts are organized like naive theories; children, for instance, have intuitive "physical theories" and "biological theories" that support concept classification, so concept acquisition amounts to learning micro-theories. When children grasp the concept "animal," they may build an intuitive theory such as "can move, can grow, is alive," thereby distinguishing animals from non-living things. This view highlights the structural and causal nature of the conceptual system. It resonates with Professor Duan's concept space: the concept space can be seen as the network of a cognitive subject's intuitive theories, emphasizing causal relations and logical constraints among concepts (a knowledge network) more than prototype theory does. The two views are not contradictory and can be integrated: the organization of concept space exhibits both prototype effects and theory-like rule constraints.
Semantic Network and Connectionist Models: Early on, Collins & Loftus's semantic network model represented concepts as nodes and semantic associations as edges, with activation spreading along the edges to explain phenomena such as semantic priming. Connectionist models followed, exemplified by Rogers & McClelland (2004), who used multi-layer neural networks to simulate semantic cognition and successfully reproduced semantic classification, attribute co-occurrence, and the symptoms of semantic dementia patients (specific concepts degrading before general ones). These models support distributed representation: a concept is not a single point but a pattern of activation across the network. Yet they also found that abstract representations, effectively concept vectors, can emerge inside the network. Connectionist models take the semantic-space perspective (emphasizing empirical statistics and continuous representation) while demonstrating that concept structure can form spontaneously. Professor Duan's theory is consistent with this: it holds that semantic-space updates can be accomplished through PDP-like mechanisms, with concept morphology corresponding to the structured representations in these models' hidden layers. The difference is that Professor Duan also stresses the independence and closure of concepts, which classical connectionist models discuss less (their representations being entirely distributed and shared). More recent PDP models with modular structure or regularization, however, attempt to introduce partitions in the hidden layers corresponding to the relative independence of concepts.
Neurocognitive Evidence: In cognitive neuroscience, the aforementioned "hub and spoke" model of semantic memory is currently among the most influential theories. It proposes that the brain's semantic system has a cross-modal integrative "hub" (mainly the bilateral anterior temporal lobe, ATL) and "spokes" distributed across sensory-motor areas that store each modality's perceptual representations of concepts. The ATL hub normalizes information from different sources into concept representations, so ATL damage (as in semantic dementia) impairs semantic generalization and concept discrimination. This directly supports the independence of concept space: the ATL lets concept representations stand apart from any specific perceptual form, yielding a unified "mentalese" representation. The brain also has a semantic control network in the prefrontal and parietal lobes that dynamically adjusts which meaning is retrieved, corresponding to the control mechanism of semantic-space updates. In brain imaging, ambiguous words recruit the ATL and control areas more heavily, and polysemy processing shows frontal-temporal coordination. These findings strongly support the biological reality of the separation and interaction between concepts and semantics in Professor Duan's framework: the brain itself has an architecture resembling a "concept space" (with the ATL at its core) and a "semantic space" (multimodal semantic spokes).
Comparison: Theories in cognitive science each focus on certain aspects of concepts. Professor Yucong Duan's theory focuses on concept morphology (corresponding to prototype structure, theoretical structure), semantic dynamics (corresponding to semantic network activation diffusion, control network regulation), and independence and closure (corresponding to the integrative role of the ATL hub). Overall, it is consistent with modern integrated views. For example, the hub-and-spoke model proposed by Lambon Ralph et al. is highly consistent with Professor Duan's ideas, only differing in wording. The "novelty" of Professor Duan's theory lies in integrating these key points and emphasizing their role in text understanding scenarios, while many cognitive theories study concept classification or lexical processing tasks in isolation. Professor Duan also introduces unique elements such as "purpose-driven" and "relativity of understanding," incorporating motivation and subjectivity into the understanding model, which is rarely covered by traditional cognitive theories and is closer to higher-level concepts of consciousness and wisdom (the higher-order part of the DIKWP framework). Therefore, his theory is a comprehensive and strengthened version from the perspective of cognitive science, supplementing the regulatory role of subjective purpose on concept-semantic processes and echoing the current focus on active prediction and the Bayesian brain.
Artificial Intelligence Perspective: Fusion of Symbols and Connectionism
In the field of AI, language understanding and knowledge representation have long existed in two paradigms: symbolism and connectionism. The recent development of neuro-symbolic integration and large models has also provided new insights.
Symbolic AI and Knowledge Bases: Early AI represented concepts and rules explicitly, via semantic networks, frames, and ontologies, effectively constructing an artificial concept space over which a reasoning engine operates. The great strength of symbolic methods is that concepts are explicit and independent, easy to interpret and to update manually (matching concept-space independence); the weakness is the high cost of acquiring and maintaining knowledge and the lack of automatic adaptation to data (insufficient semantic-space updating). Knowledge-based dialogue systems, for example, can answer common-sense questions logically but are at a loss when they encounter expressions absent from the knowledge base. In Professor Duan's framework, such systems have a surplus at the concept layer but a deficit at the semantic layer.
Connectionism and Distributed Semantics: Represented by neural networks, this approach uses no explicit conceptual symbols; it obtains vector representations of words and sentences by training on large amounts of data and handles language tasks through continuous-space computations (such as cosine similarity). These methods learn the semantic space automatically but lack directly human-readable concepts. The large language models (LLMs) that have swept NLP are the pinnacle of this lineage: models like GPT-3/4, trained on massive texts, achieve astonishing language generation and understanding. Yet they contain no explicit knowledge graph or concept library; all knowledge is stored implicitly in the parameters. This leads to problems such as hallucinations (fabricating non-existent facts) and poor controllability, because there is no independent concept-space mechanism: the models simply follow the probabilities of the training data and lack an independent structure for verifying factual truth. Researchers at OpenAI have likewise noted that current LLMs lack a persistent, independent knowledge representation module, so outputs can become inconsistent as the textual context shifts; this resembles being driven entirely by local semantics without the constraint of concept-space independence. Connectionist methods excel at the dynamics of semantic space (their essence is to activate different vectors dynamically according to the input), matching the semantic-update part of Professor Duan's theory, but they fall short on conceptual independence.
Neuro-Symbolic Integration: Recognizing the strengths and weaknesses of both approaches, some research combines symbolic knowledge with neural networks. Knowledge-enhanced neural networks, for example, inject knowledge graph information into pre-trained language models or constrain Transformer attention to associate concept nodes. Some methods feed knowledge-base retrieval results into the model alongside the original text to guide it toward answers more consistent with the facts. Symbolic-constraint learning adds logical constraint terms to the training loss so that outputs respect certain rules (such as entity type consistency). These methods have improved accuracy and consistency in question answering and dialogue, reducing unreasonable outputs. OpenAI's ChatGPT, for instance, incorporates some simplified rules or knowledge into certain answers (though most knowledge remains implicit in the parameters) and offers a plugin mechanism through which the model queries explicit databases. At the practical level, the latest large-model applications thus already reflect an integration of concept space (knowledge base) and semantic space (neural network), even if it is not yet fully unified into the training process.
Interpretable Concept Learning: The aforementioned concept bottleneck models and neuro-symbolic AI are exploring methods to identify human concepts within neural networks. For example, technologies such as ConceptSHAP and TCAV use post hoc analysis to identify high-level concepts that CNN visual models focus on and assign semantic labels to them, thereby explaining model decisions. This work helps to transform implicit distributed representations into partially symbolic concepts, essentially discovering concept space within connectionist models. In the future, it may also be possible to directly train the model to produce a layer of conceptual representation (for example, outputting 20 conceptual attribute values as intermediate results), thereby constraining the output. These directions will make the internal representations of AI closer to human conceptual representations, making it easier to integrate knowledge and achieve continuous learning.
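In the spirit of TCAV, one can fit a linear probe in a model's activation space to recover a "concept direction." The sketch below uses random activations as stand-ins for real hidden states; it shows the probing technique, not a specific published implementation.

```python
# Minimal TCAV-style probe: fit a linear direction separating activations of
# examples with vs. without a human concept (e.g. "striped").

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
with_concept    = rng.normal(loc=1.0, size=(100, 64))   # activations of concept examples
without_concept = rng.normal(loc=0.0, size=(100, 64))   # activations of counterexamples

X = np.vstack([with_concept, without_concept])
y = np.array([1] * 100 + [0] * 100)

probe = LogisticRegression(max_iter=1000).fit(X, y)
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])   # unit "concept activation vector"

# Sensitivity of a new activation to the concept = projection onto the CAV.
new_activation = rng.normal(loc=0.8, size=64)
print("alignment with concept:", float(new_activation @ cav))
```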
Continuous Learning and Memory Modules: Continual learning in machine learning addresses incremental model updating and the prevention of forgetting. Typical methods regularize important parameters, dynamically expand network structures, or interleave replay of old samples. Introducing an explicit knowledge-memory module that stays fixed during new-task learning can also prevent forgetting: Riemer et al., among others, studied replay-based continual learning, related "latent replay" methods rehearse old-task features in the latent space, and other work uses external memory networks (such as NNMemory) to store past knowledge. At a higher level, DeepMind's Differentiable Neural Computer (DNC) pairs a neural network with a readable and writable external memory, letting the network write intermediate results to memory during problem solving for later use. Applied to language models, this is equivalent to adding a dialogue state or knowledge pool that grows with the conversation. These structures are AI attempts at the cognitive update mechanisms in Professor Duan's theory. Although a gap remains relative to how the brain works, there are successes: GPT-4 plugins can search for information, effectively giving the model a non-forgetting knowledge base, and continual learning algorithms have sequentially learned dozens of object classes in visual classification with only slight degradation and no access to old data.
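As a concrete, simplified instance of the replay idea, the sketch below keeps a reservoir-sampled buffer of old examples and mixes them into each new training batch. The class and the replay ratio are illustrative choices, not a specific published method.

```python
# Minimal replay-buffer sketch: interleave stored old examples with new ones
# so that updates on new data do not overwrite old decision boundaries.

import random

class ReplayBuffer:
    """Reservoir-sampled memory of past examples (a crude 'non-forgetting' store)."""
    def __init__(self, capacity=1000):
        self.capacity, self.items, self.seen = capacity, [], 0

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)     # keep a uniform sample of the stream
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

def mixed_batch(new_examples, buffer, replay_ratio=0.5):
    """Build a training batch from new data plus replayed old data."""
    replayed = buffer.sample(int(len(new_examples) * replay_ratio))
    for ex in new_examples:
        buffer.add(ex)
    return list(new_examples) + replayed

buf = ReplayBuffer(capacity=3)
print(mixed_batch(["old spam 1", "old spam 2"], buf))
print(mixed_batch(["new crypto scam"], buf, replay_ratio=1.0))  # mixes old and new
```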
Comparison: AI methods each have their own focus, and Professor Duan's theory, in a sense, requires AI systems to have the advantages of both symbolism and connectionism, that is, to have an independent conceptual module while being able to flexibly update semantics. The current trend is moving in this direction: pure LLMs are beginning to show shortcomings in tasks that require stable knowledge, while models that integrate knowledge are on the rise. Our case analysis also confirms that integrated methods are more effective. It can be said that Professor Duan's theory and the direction of modern AI integration are in line with each other, both pointing towards Neuro-Symbolic Hybrid Intelligence. Compared to the brain, AI implementation is simpler and more direct, for example, knowledge graph updates rely on engineering management, while the brain may use complex sleep consolidation mechanisms. However, the principles at the abstract level are consistent: the core conceptual layer must be relatively stable and independent, while the perceptual semantic layer must be plastic and efficient.
Linguistic Perspective: Language, Polysemy, and Context
Linguistics provides rich theories about meaning and conceptual relationships, mapping out the concept-semantic issues from a more abstract level:
Structuralist Semantics: Since Saussure, linguistics has viewed meaning as determined by relationships within a symbolic system. The meaning of a word comes from its contrast with other words (semantic fields) and its combination (syntax) relationships. This is similar to concept space, only limited to the level of linguistic symbols. For example, dictionary definitions often use known concepts to explain the meanings of new words. In Professor Duan's theory, concept morphology has its own structure in humans, not entirely equivalent to linguistic symbol relationships, but language structure does influence concept classification (for example, lexical division affects cognitive classification, which involves the discussion of the relationship between language and thought).
Lexical Polysemy Classification: Linguistics distinguishes between polysemy and homonymy. The former has related meanings (such as "mouth" referring to an animal's mouth or a container's opening, related by metaphor), while the latter is purely a coincidence of form (such as "bank"). Linguistics believes that there is often a core concept chain connecting the meanings of polysemous words, for example, through semantic extensions such as metaphor and metonymy. This can be seen as a core concept in concept space giving rise to different semantic projections. Cognitive linguistics has extensively studied how metaphors and metonymies create new word meanings, considering this one of the mechanisms for expanding concept space. Our mechanism analysis involves concept blending, which is similar. Modern linguistics also has lexical semantic networks (such as WordNet), which divide word meanings into synsets, each corresponding to a concept. This is essentially constructing a concept space at the language level and using artificial means to eliminate ambiguity. Professor Duan's theory emphasizes that the brain itself achieves this and relies on concept independence. Therefore, WordNet and similar tools can be seen as an explicit approximation of the human brain's concept network.
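WordNet's synsets can be queried directly to see this explicit concept inventory. The snippet below uses NLTK's real WordNet interface; the corpus must be downloaded on first use.

```python
# Each WordNet synset is one explicit concept node; "bank" maps to several.
# Requires: pip install nltk

import nltk
nltk.download("wordnet", quiet=True)              # one-time corpus download
from nltk.corpus import wordnet as wn

for synset in wn.synsets("bank")[:4]:             # first few senses of "bank"
    print(synset.name(), "-", synset.definition())
# e.g. bank.n.01 - sloping land (especially the slope beside a body of water)
```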
Contextualism: In the philosophy of language and semantics, there is a view that word meanings are highly dependent on context, and discussing word meanings outside of specific usage examples has little significance (as Wittgenstein said, "The meaning of a word is in its use"). However, some argue that there is a minimal semantics or core meaning, and context only selectively activates meanings. The former emphasizes the dynamics of semantic space, while the latter corresponds to the stability of concept space. In fact, human language understanding clearly has both: there is a stable lexical semantic network in the lexicon, but it can also flexibly construct meanings on the spot (such as immediate metaphors and new interpretations). Professor Duan's theory corresponds to the stable component with concept space and semantic updates with contextual effects, unifying the two and can be seen as a compromise model for the semantic debate.
Pragmatics: For example, Grice proposed that conversational implicatures follow the cooperative principle, and the implied meaning needs to be inferred based on the speaker's intention. This is consistent with Professor Duan's "understanding driven by the specific purpose of the cognitive subject." That is to say, understanding depends not only on the content of the language but also on the speculation of the speaker's/author's purpose. This part is more reflected in the cognitive space/consciousness space level and is not just a mechanical mapping of concepts and semantics. Professor Duan's addition of the "Purpose" layer in the DIKWP model is to incorporate this aspect into the model. Therefore, his theory extends the traditional semantic model by considering pragmatic factors as an intrinsic component. This is especially crucial in the understanding of complex texts (irony, sarcasm, rhetoric).
Comparison: Linguistic theories provide descriptions closer to linguistic phenomena and can corroborate Professor Duan's theoretical framework. Tools like WordNet demonstrate that even in computational systems, introducing conceptual nodes (word meanings) can improve text understanding, as pure symbolic relationships are insufficient to deal with ambiguity. Research in cognitive linguistics (such as metaphor) shows the elasticity of the human concept space, which supports the mechanism of concept generation. Pragmatics reminds us that understanding is a product of social interaction, which corresponds to the subjectivity of cognitive space: understanding needs to be combined with the subject's experience and purpose. Professor Duan's theory already encompasses this idea (the relativity of purpose-driven understanding), making it more comprehensive than traditional semantic theories. It can be seen that from the linguistic perspective, Professor Duan's theory does not contradict basic linguistic facts but integrates views from semantics, context, and pragmatics, making the model closer to real language use scenarios.
Empirical and Application Verification
In addition to the above theories, we also pay attention to experimental research and application evaluations that directly verify the interaction between concepts and semantics:
Experimental Verification: Psychological experiments such as reaction-time and eye-tracking studies of lexical ambiguity resolution show that context rapidly influences word-meaning selection, capturing behavioral evidence of the moment-to-moment update of semantic space; ERP and fMRI provide neural-level verification. Other studies compare humans and AI on language tasks: in understanding polysemous idioms, the human brain often considers the literal meaning first and then shifts to the metaphorical one, whereas some models simply output the statistically most common meaning without this process. This indicates that the human brain engages conceptual knowledge in a deliberative process, while models tend toward direct matching. As model scale and training improve, however, their behavior increasingly approximates human intuition, suggesting that certain conceptual functions may already be implicitly embedded in the models (though not interpretably so). More cognitive comparison experiments are therefore needed, such as probing large models with novel ambiguity tests, or revealing sentences word by word to see whether the models' prediction changes resemble human eye-movement patterns. Work of this kind began in 2023, comparing the time course of semantic representations in the human brain (from fMRI) with Transformer layer activations and finding correspondences alongside differences. Overall, there is empirical support for the concept-semantic dual-layer model, but finer-grained data are needed to test specific sub-hypotheses of Professor Duan's theory (for example, can concept independence be measured in brain connectivity, and how do brain regions reorganize during semantic-space restructuring when new concepts are learned?). These are targets for future experimental design.
Application Evaluation: From an engineering perspective, it is possible to compare the differences between knowledge-enhanced AI models and conventional models in tasks. For example, in the OpenBookQA task (which requires common sense question answering from a knowledge base), systems integrated with knowledge graphs significantly outperform pure language models. This is equivalent to testing the contribution of concept space. Another example is continuous learning competitions, where various algorithms compete to see who forgets the least and learns the fastest. This corresponds to the evaluation of cognitive update mechanisms. From the existing results, most continuous learning algorithms still fall far short of the ideal and do not match human lifelong learning capabilities. However, those that introduce strategies similar to human sleep consolidation (reviewing experiences) or separate modules (preserving sub-networks for old tasks) perform slightly better. In the field of knowledge graphs, ontology matching and evolution evaluations also show that manually maintained ontology updates are the most reliable but come at a high cost; automatic methods are prone to introducing errors. Semi-automatic methods that use machine suggestions combined with human review have shown better results, indicating that a combination of human and machine, and a combination of symbolic and connectionist approaches, is a practical and feasible balance.
Overall, Professor Yucong Duan's theory on concept morphology, semantic space updates, and the independence of concept space has been supported by evidence from multiple disciplines. Cognitive science provides psychological and neural-level verification, artificial intelligence research offers computational implementations of similar ideas and proves their effectiveness, and linguistics provides rich qualitative theoretical mappings. These all corroborate the importance of the separation and collaborative mechanism between concepts and semantics from different angles. Of course, there are differences between different model theories that need further unification. The characteristic of Professor Duan's theory lies in integrating factors at different levels and particularly emphasizing purpose and subjectivity, which is still relatively weak in current technological models (most AI systems lack self-purpose).
Conclusion and Outlook
In summary, we have conducted a comprehensive and in-depth expansion study on Professor Yucong Duan's theory of human brain text cognitive processing—that is, the mechanism relying on concept morphology, dynamic updates of semantic space, and the independence of concept space. By integrating the latest advancements in cognitive science, artificial intelligence, and linguistics, we have verified the rationality and foresight of this theoretical framework and enriched its connotations in a broader context. Our main conclusions can be summarized as follows:
First, concept morphology is the foundation of cognitive understanding. Concepts in the human brain are not unstructured symbols; their internal forms (attribute features, prototype boundaries, relational networks) profoundly influence the construction of textual meaning. It is because concepts have structure that humans can map linguistic symbols into meaningful mental representations and engage in higher-level reasoning such as analogy and generalization. The latest cognitive science research and neural network models both support the existence of some geometric or network forms in conceptual representation. In applications, utilizing explicit conceptual structures (such as knowledge graphs and ontologies) can enhance a system's language understanding capabilities, which is consistent with human cognitive mechanisms.
Second, the dynamic update of semantic space ensures the flexibility of language. Textual meaning is not fixed but is formed by the continuous assimilation of new information into the context during reading. The brain exhibits real-time semantic adjustment functions: it quickly selects word meanings, resolves references, and adjusts the understanding of entire sentences based on context and common sense. When encountering contradictory information, it can reconstruct previous understandings to maintain overall consistency. This dynamic process corresponds to the update and reorganization of semantic space, which is key to understanding coherent discourse and implied meanings. The large language models in artificial intelligence are able to produce coherent conversations largely because they simulate this context-based semantic update mechanism (through attention and hidden states). However, humans are better at global consistency checks and contradiction resolution, indicating that our brains have more complex feedback regulation to ensure the coherence of semantic networks. Future AI needs to introduce similar mechanisms (such as a global monitoring module for dialogue) to further approach human levels.
Third, the independence of concept space ensures the stability and universality of cognition. Despite the ever-changing linguistic environment, the conceptual system within the brain remains relatively stable and has consistency across situations and languages. It is precisely because of this that we can transfer knowledge to new sentences and scenarios for understanding; people speaking different languages can also reach consensus through translation, as the deep conceptual correspondences can be established. Concept independence is also reflected in the cumulative effect of knowledge: new knowledge expands the connotations of concepts but does not overthrow their core, allowing us to maintain the continuity of cognitive structures while continuously learning. For AI systems, this property means that there needs to be a knowledge storage and reasoning module independent of training corpora. Current AI systems are lacking in this regard, which is also one of the reasons why models like GPT occasionally produce contradictory answers. Strengthening the modularity and explicit knowledge representation of AI, allowing models to have a "concept space" internally, will significantly improve their reliability and controllability.
Fourth, the hierarchical interaction between concepts and semantics is the key to achieving advanced intelligence. Whether from brain research or AI practice, relying solely on symbolic logic or purely on distributed statistics is insufficient to deal with complex cognition. The success of human intelligence lies in the simultaneous use of stable conceptual knowledge and dynamic semantic perception, and the coordination of the two in specific tasks. This insight is leading a new trend in the field of artificial intelligence: from deep learning to the integration of deep learning and knowledge, and from perceptual intelligence to cognitive intelligence. Professor Yucong Duan's theory has prospectively pointed out the direction of this fusion, which has important implications for building humanoid intelligent systems.
Through the analysis of this report, we have also identified some issues worthy of further exploration and future prospects:
Regarding the formation mechanism of conceptual independence: Where does the independence of the human conceptual space come from? Is it an innate categorical structure, or does it emerge entirely through self-organization in postnatal learning? It is likely a combination of both. On one hand, infants have a tendency to categorize the world, such as distinguishing between living and non-living things, and basic colors, suggesting that some conceptual prototypes may be evolutionary products. On the other hand, cultural language also shapes conceptual divisions, so the conceptual space has a plastic component. Future cognitive science needs to combine developmental studies, cross-cultural research, and brain imaging to answer this question. Similarly, in AI, this relates to whether we need to embed certain basic conceptual modules in the model, or whether it can abstract concepts on its own through interaction.
Conceptual Space and Consciousness: Professor Duan also introduced the concept of consciousness space in the DIKWP model, suggesting that there are intentions and self-monitoring at a higher level. The relationship between conceptual space independence and consciousness is also very interesting—some scholars argue that it is precisely because we can abstract concepts that we have reflection and consciousness (because we have an operable mental model). Conversely, conscious deep thinking can modify conceptual structures (for example, scientific revolutions reorganize concepts). Future research may explore the mechanisms of consciousness participation in conceptual updates, which at the neural level corresponds to how the default network and prefrontal cortex participate in knowledge reorganization.
Multimodal Semantic Space: Text is only one form of language; in understanding, people usually combine visual, auditory, and other information to form richer conceptual representations, and so-called "imagery" is also part of concept morphology. Emerging multimodal AI models (those that process images and text simultaneously) provide a platform for studying how conceptual representations from different modalities interact and whether a higher-level cross-modal concept space exists (possibly corresponding to the ATL hub in the brain). For example, OpenAI's CLIP model represents images and text in a common vector space, achieving cross-modal concept alignment; this hints at a higher-level abstract concept space that can associate textual descriptions with image content. Cognitive science has likewise found regions such as the insula and hippocampus participating in semantic processing across visual and language tasks, possibly supporting a cross-modal concept hub. Future work should therefore test concept-space independence and semantic-update mechanisms in visual-language integration, for example by studying how people fuse what they see with a textual narrative into one consistent understanding. This would extend the concept-semantic theory to fused language-vision understanding, further verify the applicability of Professor Duan's theory in multimodal contexts, and push AI systems toward a unified cognitive model.
In summary, the human brain's cognitive processing of text is not a simple pattern matching or symbolic calculation, but is based on the close collaboration of two levels: concepts and semantics. Professor Yucong Duan's theoretical framework of "concept morphology dependence, semantic space update, and concept space independence" has been tested and expanded with evidence from multiple disciplines. This framework not only deepens our understanding of the human language comprehension process but also provides important insights for the future development of artificial intelligence. By simulating the brain's concept-semantic dual-pathway mechanism and integrating symbolic knowledge with statistical learning, the next generation of intelligent systems is expected to achieve stronger context adaptation capabilities, more robust knowledge transfer, and understanding and reasoning levels closer to that of humans. Looking to the future, with the further integration of cognitive science, neuroscience, and artificial intelligence, we will continue to unravel the exquisite principles of human intelligence and build more interpretable, sustainable learning human-like intelligent models, allowing artificial intelligence to truly transcend the surface of language and achieve a profound grasp of the world of meaning.
References
[Cognitive Science and Neuroscience Literature]
[1] Gärdenfors, P. (2000). Conceptual Spaces: The Geometry of Thought. MIT Press. (The foundational work of conceptual space theory, proposing that concepts have geometric structural forms.)
[2] Lambon Ralph, M. A., Jefferies, E., Patterson, K., & Rogers, T. T. (2017). The neural and computational bases of semantic cognition. Nature Reviews Neuroscience, 18(1), 42–55. (The neural and computational basis of semantic cognition, proposing the "hub and spoke" model.)
[3] Rogers, T. T., & McClelland, J. L. (2004). Semantic Cognition: A Parallel Distributed Processing Approach. MIT Press. (The PDP model explaining semantic cognition and concept generation.)
[4] Cutler, R., et al. (2025). Semantic memory aging: Increased feature competition and monitoring demands. Cognitive Neuroscience Reports, 18(2), 193–203. (Research on the densification of semantic space and changes in retrieval in the elderly.)
[5] Binder, J. R., et al. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19(12), 2767–2796. (A meta-analysis of neuroimaging studies on the semantic system, summarizing the role of regions such as the anterior temporal lobe ATL.)
[Artificial Intelligence and Language Model Literature]
[6] Marcus, G. (2020). The next decade in AI: Four steps towards robust artificial intelligence. arXiv preprint arXiv:2002.06177. (The future of AI needs to combine symbols with neural networks.)
[7] Liu, P. J., et al. (2022). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys, 55(9), 1–35. (A technical summary of dynamic semantic adjustment in large models.)
[8] Riemer, M., et al. (2019). Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference. International Conference on Learning Representations (ICLR). (Continual learning and replay-based memory mechanisms.)
[9] Bosselut, A., et al. (2019). COMET: Commonsense Transformers for Automatic Knowledge Graph Construction. ACL 2019. (Knowledge-enhanced language models combining concepts and semantics.)
[10] Bisk, Y., et al. (2020). Experience Grounds Language. EMNLP 2020. (The proposal that language learning needs to be grounded in world experience, emphasizing the basis of concepts.)
[Cognitive Linguistics and Semantics Literature]
[11] Jackendoff, R. (2002). Foundations of Language: Brain, Meaning, Grammar, Evolution. Oxford University Press. (An important work on the relationship between language and conceptual structure.)
[12] Pinker, S. (2007). The Stuff of Thought: Language as a Window into Human Nature. Viking Press. (Exploring psychological evidence for the independence of concepts and language.)
[13] Rosch, E. (1978). Principles of categorization. In Cognition and Categorization, Eds. Rosch & Lloyd. Erlbaum. (A classic paper on prototype category theory.)
[14] Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. University of Chicago Press. (Research on metaphor as a mechanism for concept generation.)
[15] Croft, W., & Cruse, D. A. (2004). Cognitive Linguistics. Cambridge University Press. (An introduction to cognitive linguistics, emphasizing semantic generation and cognitive structure.)
[Artificial Intelligence Symbol and Neural Fusion Research]
[16] Chen, W., et al. (2021). Knowledge-Enhanced Language Model Pretraining: A Survey. IEEE Transactions on Knowledge and Data Engineering, 34(5), 2334–2352. (A review of knowledge-enhanced language model pretraining research.)
[17] Saxton, D., et al. (2019). Analysing mathematical reasoning abilities of neural models. International Conference on Learning Representations (ICLR). (Analysis of neural models' reasoning abilities, revealing differences between semantic and conceptual reasoning.)
[18] Hao, Y., et al. (2022). Concept Bottleneck Models: Decomposing Language Models into Interpretable Concepts. EMNLP Findings. (Concept bottleneck models to improve model interpretability and continuous learning capabilities.)
[19] Talmor, A., et al. (2020). Leap-Of-Thought: Teaching Pre-trained Models to Systematically Reason Over Implicit Knowledge. NeurIPS 2020. (How to introduce systematic reasoning and implicit knowledge processing in pre-trained models.)
[Theoretical Foundations and Related Literature by Professor Yucong Duan]
[20] Duan, Y. (2024). Exploring the Formation Mechanism of the DIKWP Model and Cognitive Semantic Network. ScienceNet Blog. (Professor Yucong Duan's theoretical article on the DIKWP framework, theory of understanding relativity, and concept-semantic relationship.)

