Research on Digital Space "Security" Reconstruction and Cognitive Manipulation Deception Mechanisms Based on the DIKWP×DIKWP Interaction Model
Yucong Duan
International Standardization Committee of Networked DIKWP for Artificial Intelligence Evaluation (DIKWP-SC)
World Artificial Consciousness CIC (WAC)
World Conference on Artificial Consciousness (WCAC)
(Email: duanyucong@hotmail.com)
Introduction
In an era of high digitalization and information overload, security threats are no longer confined to the technical aspects of network systems but have extended deep into the human cognitive and semantic layers. "Cognitive security" refers to protecting human cognitive processes from malicious influence and manipulation, safeguarding individuals and groups from misinformation, psychological manipulation, and other harms. With the rise of Artificial Intelligence Generated Content (AIGC) and the widespread use of social media, attackers can create "cognitive fog" at an unprecedented scale and speed, disrupting the public's judgment of facts. Large-scale disinformation and manipulation activities (such as fake news, deepfakes, social bots, etc.) can intentionally induce audiences to form cognitions and behaviors that deviate from their true intentions, thereby threatening social stability, democratic decision-making, and even national security. The traditional view of security primarily emphasizes protecting systems from intrusion and damage. In the new digital environment, we urgently need to introduce semantic and cognitive dimensions to reconstruct "security," expanding it to include the alignment and trust of multiple parties in their cognitive goals and practical intentions.
To systematically study this issue, this paper introduces an emerging semantic cognitive model—the DIKWP model, a five-layer model of Data, Information, Knowledge, Wisdom, and Purpose. The DIKWP model is an extension of the classic DIKW pyramid model, adding the "Purpose" dimension to more fully express the context and goal factors in decision-making. By adding the "P" element to the top of the DIKW architecture, the DIKWP framework emphasizes that the generation of any information or knowledge is embedded in a certain intention and practical context, thus compensating for the traditional DIKW model's lack of expression for the motive of "why this information is used." Based on the DIKWP concept, we further propose the DIKWP×DIKWP interaction model to describe the interaction and mutual influence of multiple agents (such as human-human, human-machine) across the five semantic dimensions. Simply put, the DIKWP×DIKWP model can be understood as a 5×5 mapping matrix, covering all possible transformation relationships from any one DIKWP element to another, totaling 25 forms of semantic transformation. For example, the "Knowledge" output by one agent may become the "Information" received by another, or one party's "Data" may be transformed into the other's "Information" through interpretation. This matrix-based interaction describes the mechanism of cross-layer transmission and transformation of semantic content between different cognitive agents and provides a formal tool for analyzing cognitive alignment and deviation.
This paper, centered around the DIKWP×DIKWP model, conducts an in-depth exploration of the semantic reconstruction of "security" in the digital space and the mechanisms of cognitive manipulation deception (i.e., deception achieved by influencing others' cognitive processes). First, in the first part, we detail the methodological framework of the DIKWP model and DIKWP×DIKWP semantic interaction modeling, explaining how to construct an interactive structural model of multiple agents at the levels of data, information, knowledge, wisdom, and purpose. In the second part, we redefine the concept of "security" in the digital environment—from a DIKWP×DIKWP perspective, security not only means preventing technical intrusions but also means that the cognitive states and practical intentions of all participants remain semantically consistent and aligned, thereby avoiding misunderstandings, biases, and conflicts. In the third part, we analyze the mechanism of deceptive behavior from a cognitive psychology perspective, establishing a model that describes how manipulators adjust their own DIKWP structure and output designed content to induce targets to produce cognitive deviations, even DIKWP structures contrary to their original intentions. In the fourth part, we propose a new indicator system for measuring cognitive deviation and semantic imbalance, including semantic entropy, cognitive distance, and DIKWP projection alignment, to quantitatively describe the degree and consequences of cognitive deviation. In the fifth part, drawing on the ideas of security economics, we discuss how to construct a security mechanism centered on cognitive autonomy (enhancing individual independent thinking ability), information authenticity verification, content traceability, and symmetric feedback, to make the attacker's cost greater than the potential benefits, thereby curbing cognitive manipulation deception from an economic motive. In the sixth part, we select typical scenarios such as AI content recommendation, generative media (deepfake), social platform information manipulation, and cognitive warfare for analysis, sorting out the paths and steps attackers use to influence the audience's DIKWP structure in these scenarios, and discussing corresponding prevention strategies. In the seventh part, we combine the above research to propose future-oriented governance and institutional recommendations, including clarifying the ethical boundaries of cognitive security, establishing responsibility mechanisms for all parties, and promoting cross-platform collaborative management. Through this multi-level analysis, this paper aims to provide a systematic and comprehensive theoretical framework and practical guidance for the challenges of cognitive security in the digital age.
The following sections will unfold according to the outline above, starting with the semantic interaction mechanism of the DIKWP×DIKWP model, and gradually delving into the core issues of cognitive security and deception.
1. DIKWP and DIKWP×DIKWP Semantic Interaction Modeling
DIKWP Model Overview
DIKWP represents the five levels of Data (D), Information (I), Knowledge (K), Wisdom (W), and Purpose (P). It is a semantic framework initially extended from the classic DIKW (pyramid) model by adding the "Purpose" dimension. In the traditional DIKW model, data is processed and refined into information, information is summarized and sublimated into knowledge, and knowledge accumulates through practice to form wisdom. However, the DIKW model does not explicitly include the key element of "purpose," which may lead to the "wisdom" obtained by machines or systems lacking the guidance of human intent. For example, a purely data-driven algorithm might provide a statistically optimal solution, but because it does not consider the decision-maker's true intentions or ethical context, the solution may not align with human values or needs. The DIKWP model, by adding "Purpose" at the top level, emphasizes clarifying "why we do it" and "for what purpose" in the process of knowledge generation and decision-making. The introduction of the Purpose dimension provides a complete chain of description for how raw data is transformed into actionable wisdom under the drive of specific goals. This extension makes DIKWP a more comprehensive cognitive semantic framework, more closely aligned with the human thought process in solving complex problems, and provides modeling support for integrating human intent into artificial intelligence systems.
Specifically, in the DIKWP model, the meaning of each level is as follows (a minimal code sketch of this structure follows the list):
·Data (D): Raw, discrete objective facts or observations that lack interpreted meaning. For example, sensor readings and log records are content at the data level.
·Information (I): Meaningful fragments obtained after processing and assigning meaning to data, i.e., "data about something." Information reveals the patterns or facts represented by the data, such as statistical reports or a piece of descriptive text.
·Knowledge (K): Generalized understanding formed from information through comprehension and induction, which is systematic and teachable. Knowledge reflects causal relationships, laws, and principles, such as scientific laws or empirical rules in an operating manual.
·Wisdom (W): High-level judgment or decision-making ability based on knowledge combined with experience and values. Wisdom includes insight into and trade-offs in complex situations and is a manifestation of a decision-maker's high-level cognition, such as expert diagnosis or comprehensive judgment in strategic planning.
·Purpose (P): The intention, goal, and value orientation behind a decision or action, as well as the execution of putting wisdom into practice. The "P" here has a dual meaning: on the one hand, it specifies the ultimate purpose served by the cognitive activity, and on the other hand, it refers to the practical action itself to achieve that purpose. The purpose layer clearly answers "what we hope to achieve." For example, in medical decision-making, "curing the patient" is the ultimate purpose served by the doctor's application of wisdom.
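As a concrete illustration (our own sketch, not part of the formal DIKWP specification; names such as DIKWPState are hypothetical), the five levels can be represented as a simple data structure holding one agent's content at each layer:

```python
from dataclasses import dataclass, field

@dataclass
class DIKWPState:
    """Illustrative container for one agent's content at each DIKWP level.

    Plain strings are a simplification; real implementations might use
    graphs, vectors, or logical forms at each layer.
    """
    data: list = field(default_factory=list)          # D: raw observations
    information: list = field(default_factory=list)   # I: interpreted facts
    knowledge: list = field(default_factory=list)     # K: generalized rules
    wisdom: list = field(default_factory=list)        # W: situational judgments
    purpose: list = field(default_factory=list)       # P: goals and intended practice

# Example: a doctor's state in a consultation scenario
doctor = DIKWPState(
    data=["temperature=38.7C", "heart_rate=96"],
    information=["patient reports a persistent cough"],
    knowledge=["fever plus cough may indicate a respiratory infection"],
    wisdom=["order a chest X-ray before prescribing antibiotics"],
    purpose=["reach an accurate diagnosis and cure the patient"],
)
```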
DIKWP×DIKWP Interaction Model
When two or more agents (which can be humans or AI systems) interact, the DIKWP content of each party may affect the other. This bilateral (or multilateral) semantic interaction can be formally described by the "DIKWP×DIKWP" model, which characterizes the process of cross-agent, cross-level information exchange and cognitive updating. The key to the DIKWP×DIKWP interaction is that the output of one party at a certain level can become the input of another party at the same or a different level. Therefore, we need to consider all possible mapping relationships. For example:
·Data → Information: Raw data provided by one party is transformed into information by another's interpretation (e.g., sensor data becomes meaningful information after being read by a person).
·Information → Knowledge: Specific information stated by one party is summarized into general knowledge by another (e.g., facts taught by a teacher are internalized into knowledge by a student).
·Knowledge → Wisdom: Methodologies or principles (knowledge) taught by one party help another to make a comprehensive judgment (wisdom) in a specific situation.
·Wisdom → Practice: A wise decision (wisdom) by one party guides the specific actions (practice) of another, such as expert advice influencing a decision-maker's action plan.
·Purpose → Data: The purpose of one party guides it to collect specific data and transmit it to the other, thereby initiating the subsequent information processing chain.
·... (and the other 20 possible cross-level mappings, for a total of 25)
Through this 5×5 mapping matrix, the DIKWP×DIKWP model can cover interactive transformations between any levels. For example, in a doctor-patient dialogue, the patient's description of their symptoms is an "Information" output, while the doctor, after comparing it with their medical knowledge base, generates a diagnostic "Wisdom," which is then fed back to the patient as new "Information." At the same time, the doctor's diagnostic purpose (e.g., the intention to rule out a certain disease) will drive them to ask questions to collect more "Data" or "Information," which in turn will affect the patient's cognition. It is evident that the DIKWP elements of both parties continuously interact and influence each other, forming a dynamic, cyclical, two-way semantic exchange process. As some research points out, DIKWP is not a static linear pyramid but an interactive circular structure, where the layers form a closed loop through feedback, making the cognitive process transparent and traceable.
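The 25 mapping relationships can be enumerated mechanically. The following sketch (illustrative only; the function record_interaction and its log format are our hypothetical constructions) treats each cross-agent exchange as an entry of the 5×5 matrix and logs the doctor-patient fragment above:

```python
from itertools import product

LAYERS = ["D", "I", "K", "W", "P"]

# All 25 cross-agent transformations: content emitted by agent A at one
# layer becomes input to agent B at the same or a different layer.
MAPPINGS = set(product(LAYERS, LAYERS))
assert len(MAPPINGS) == 25

def record_interaction(log, src_layer, dst_layer, content):
    """Append one cross-layer semantic transformation to an interaction log."""
    assert (src_layer, dst_layer) in MAPPINGS
    log.append({"from": src_layer, "to": dst_layer, "content": content})

# The doctor-patient fragment above, expressed as matrix entries
log = []
record_interaction(log, "I", "W", "symptom description interpreted into a diagnostic judgment")
record_interaction(log, "W", "I", "diagnosis fed back to the patient as new information")
record_interaction(log, "P", "D", "diagnostic intent drives targeted data collection")
```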
Content Model and Cognitive Model
Within the DIKWP×DIKWP framework, we can distinguish between the content-level DIKWP model and the cognitive-level DIKWP model. The content model focuses on the representation of the information itself being transmitted across the five levels, such as what data, explicit information, and implicit knowledge a sentence contains. The cognitive model, on the other hand, focuses on the agent's internal understanding and absorption of this content, i.e., what kind of DIKWP representation is formed in the brain (or the AI's internal state). By combining the content model and the cognitive model, we can "see" the entire semantic communication process between the interacting parties. For example, in a doctor-patient scenario, we can construct the DIKWP cognitive maps of the doctor and the patient respectively, as well as the DIKWP semantic map of their communication content. The external dialogue during the doctor's consultation can be seen as the content DIKWP map, while the doctor's judgment about the illness in their mind and the patient's level of trust in the doctor belong to their respective cognitive DIKWP maps. By comparing the semantic correspondence between the content and cognitive levels, we can identify potential differences in understanding (e.g., the patient expects the doctor to understand certain feelings, but the doctor does not actually perceive them). These differences can be marked in the form of a DIKWP difference map, thereby increasing interaction transparency. In fact, some scholars have already applied the DIKWP model to visualize the entire process of doctor-patient communication, revealing the sources of uncertainty and differences in communication, thus enhancing the transparency and interpretability of the medical process. This proves the potential of the DIKWP×DIKWP model to effectively represent semantic flow in complex interaction scenarios.
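As a rough sketch of such a DIKWP difference map (our simplification; real maps would be richer graph structures with typed relations), the following code compares two DIKWP maps layer by layer and marks shared and one-sided items:

```python
LAYERS = ["D", "I", "K", "W", "P"]

def difference_map(map_a, map_b):
    """Compare two DIKWP maps (layer -> set of semantic items) and record,
    per layer, what is shared and what only one side holds."""
    diff = {}
    for layer in LAYERS:
        a, b = set(map_a.get(layer, ())), set(map_b.get(layer, ()))
        diff[layer] = {
            "shared": a & b,
            "only_a": a - b,   # e.g. feelings the patient expects to be understood
            "only_b": b - a,   # e.g. clinical concerns the patient is unaware of
        }
    return diff

patient = {"I": {"the cough is getting worse", "worried it is something serious"},
           "P": {"wants reassurance"}}
doctor  = {"I": {"the cough is getting worse"},
           "P": {"rule out pneumonia"}}
print(difference_map(patient, doctor))
```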
Relationship Definition and Knowledge Fusion
The proposer of the DIKWP model further introduced the concept of "Relationship Defined Everything of Semantics" (RDXS), which aims to achieve cross-domain knowledge fusion by mapping mixed subjective and objective DIKWP resources through relationships. Simply put, it uses the hierarchical structure of DIKWP to uniformly map incomplete, inconsistent, and imprecise information into an associated DIKWP graph system. This helps to expand traditional knowledge graphs to include nodes of data, information, knowledge, wisdom, and purpose, as well as their interrelationships. In the fields of natural language processing and the semantic web, most methods assume that semantic content is objective and annotatable. However, in real-world environments, semantics are often an interweaving of subjective and objective elements. DIKWP provides a new approach by explicitly distinguishing between subjective and objective elements at different levels to handle cross-domain semantic uncertainty. For example, a news report simultaneously contains Data (facts), Information (narrative statements), Knowledge (background common sense), Wisdom (implicit evaluation), and Purpose (possible bias). Different readers may have different understandings of the same report due to their different knowledge backgrounds and judgments of purpose. Representing these elements separately at the DIKWP levels and analyzing the differences between the DIKWP structures of the reader and the author can help explain why cognitive biases arise. In other words, the DIKWP×DIKWP model can represent not only semantic content but also the cognitive state of the agent, thus providing a tool for analyzing problems of semantic alignment or misalignment.
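The following sketch is one possible, simplified reading of this idea (not the official RDXS implementation): a news report is decomposed into typed DIKWP nodes in a directed graph, subjective layers are flagged, and nodes are linked through explicit relationships, using the networkx library:

```python
import networkx as nx

G = nx.DiGraph()
# Typed DIKWP nodes extracted from one news report (illustrative labels)
G.add_node("casualty figures",      layer="D", subjective=False)
G.add_node("narrative of events",   layer="I", subjective=False)
G.add_node("historical background", layer="K", subjective=False)
G.add_node("implicit evaluation",   layer="W", subjective=True)
G.add_node("editorial slant",       layer="P", subjective=True)

# Relationships that bind the mixed subjective/objective resources together
G.add_edge("casualty figures", "narrative of events", relation="supports")
G.add_edge("historical background", "implicit evaluation", relation="frames")
G.add_edge("editorial slant", "narrative of events", relation="selects")

# A reader's graph built from the same report can be compared node by node
# to locate where the subjective layers diverge from the author's.
subjective_nodes = [n for n, d in G.nodes(data=True) if d["subjective"]]
print(subjective_nodes)
```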
Summary
This section has established the basic conceptual framework of the DIKWP model and its interactive extension. By adding the "Purpose" dimension, the DIKWP model completely characterizes the process from data to wisdom. Its networked hierarchical structure allows us to connect semantic content with cognitive intent. The DIKWP×DIKWP interaction model extends this framework to multi-agent scenarios, defining 25 potential cross-agent semantic transformation relationships. This model lays the semantic analysis foundation for the subsequent discussion of cognitive security and deception mechanisms. In the next section, we will re-examine the concept of "security" in the digital space based on this foundation, proposing a new definition from the perspective of semantic alignment.
2. Semantic Reconstruction of Digital Security: From Technical Robustness to Cognitive Alignment
"Security" in the digital age needs to transcend the traditional concept of technical robustness and ascend to the level of semantic and cognitive security. In the context of the DIKWP×DIKWP model, we redefine "security" as: a state where multiple participants maintain semantic understanding consistency and cognitive goal alignment across the levels of data, information, knowledge, wisdom, and purpose. In other words, the security of a system no longer just means resisting hacker attacks or system downtime, but more importantly, it means that the interacting parties will not produce significant deviations or conflicts due to semantic misunderstanding, false information, or purpose mismatch.
2.1 From Cybersecurity to Cognitive Security
Traditional cybersecurity focuses on protecting devices, networks, and data from unauthorized access, tampering, and destruction, such as firewalls blocking intrusions and encryption preventing data leakage. While these measures are certainly important, they are no longer sufficient to ensure comprehensive security as threats extend to the human mind. An increasing number of attackers do not directly paralyze systems but rather influence people's decisions and behaviors by manipulating information, thereby weakening the effectiveness and legitimacy of systems from within. The concept of cognitive security has thus emerged, with its focus on ensuring the security of the cognitive processes of human users. As one viewpoint puts it: "Cybersecurity focuses on protecting machines, while cognitive security focuses on protecting people." In the military and national security domains, this is reflected in the emerging situation of so-called "cognitive warfare": adversaries use network tools to attack and weaken human rationality, exploiting psychological weaknesses to disrupt decision-making. NATO and other organizations define it as a new form of warfare that goes beyond traditional information warfare, namely, systematically influencing cognition, exploiting cognitive biases, and inducing thought distortions through network means to produce adverse effects at the individual and collective levels. Cognitive security is therefore regarded as a new cornerstone of national security, covering responses to fake news, propaganda manipulation, psychological warfare, and many other aspects. The US military and corporate world have recognized the threat of malicious influence operations to cognitive security and have begun to organize cross-departmental response forces, as such attacks not only target military objectives but also involve the general public on social media.
In the context of cognitive security, the meaning of security has expanded: in addition to traditional elements like "system continuity" and "data non-leakage," it now includes "cognition not being misled" and "intentions not being betrayed." For example, a country's power infrastructure may be invulnerable in terms of cybersecurity, but if a hostile force spreads panic-inducing rumors on social media, causing the public to mistakenly believe that the power grid is about to collapse, leading to panic buying and riots, society will still fall into crisis. In this example, the power system itself is secure, but the public's information space and cognition have been breached. Clearly, truly comprehensive security must include both technical security and cognitive security. The latter requires us to ensure the authenticity of information, the clarity of semantics, and the consistency of multi-party cognitive intentions. This is highly consistent with the DIKWP model's emphasis that "context, interpretability, and purpose alignment are the core." As Yucong Duan and others have proposed, massive amounts of data alone are not enough to support intelligent decision-making; only by taking context and purpose into account can we avoid a disconnect between smart applications and real human needs. Correspondingly, in the security field, only when all parties have a consistent understanding of semantics and purpose can the risks brought by misunderstanding or deception be avoided.
2.2 Cognitive Alignment and Semantic Security
Based on the above ideas, we define semantic security as: a robust state in which interacting parties have a high degree of consistency in their semantic understanding of shared information, and their cognitive intentions remain aligned. In this state, messages are not misinterpreted or maliciously tampered with during transmission, and the knowledge formed and actions taken by each party based on the received information are consistent with the original intention of the information sender. This alignment includes:
·Data-level Alignment: Ensuring that the original data has not been maliciously tampered with and that all parties have access to the same data. For example, blockchain technology is used to guarantee data immutability and transparent auditability, thus achieving trust at the data level.
·Information-level Alignment: Ensuring that the semantics of messages are not distorted during transmission. All parties have the same understanding of the literal meaning and basic facts of a message, with no party distorting or omitting key information. For example, uniformly using verified fact-checks or attaching reliable source verification when disseminating content to reduce misunderstandings.
·Knowledge-level Alignment: The knowledge updated by each party based on the received information is consistent or compatible. For example, in scientific collaboration, researchers should reach consistent conclusions after sharing experimental results; if the conclusions are inconsistent, it is necessary to rule out whether one party was interfered with by false information.
·Wisdom-level Alignment: For complex decisions, all parties have similar judgments of the situation after full communication, at least without diametrically opposed understandings. This usually requires a transparent communication process and explanation so that people from different backgrounds can also understand the basis of the decision.
·Purpose-level Alignment: This is the highest level of alignment, referring to the participants reaching a consensus or compatibility on goals and intentions. Only when intentions converge can cooperation proceed smoothly. When the intentions of multiple parties are inconsistent, a mechanism is needed to manage conflicts. For example, in a multi-stakeholder system, it may be necessary to vote and negotiate on conflicting intentions through governance protocols to reach an agreement.
If all the above levels can be basically aligned, we consider the system to have reached a state of semantic security. In this state, it is difficult for one party to deceive another by distorting information, because any attempt to change semantics or conceal intentions will be promptly discovered and corrected. It can be said that semantic security requires a relationship of multi-party trust and transparency in communication, with the goal of establishing a "single source of truth" or at least a "compatible set of facts" cognitively, so that the decision-making basis of all parties is consistent.
Achieving semantic security requires a combination of technical and social means. On the one hand, technical measures must be used to ensure the authenticity and integrity of information, such as content digital signatures, blockchain traceability, etc., to ensure that the data/information layer is not silently tampered with. On the other hand, it also requires institutions and collaboration to ensure purpose alignment, such as establishing multi-party collaborative governance mechanisms to resolve goal conflicts. For example, some scholars have proposed introducing a Decentralized Autonomous Organization (DAO) framework on a semantic blockchain, so that when updating wisdom and purpose layer content, multi-signature voting or consensus protocols can be used to ensure that all relevant parties approve the changes. Another example is in international cooperation to respond to a pandemic, where countries need to share data and interpret it uniformly, aligning as much as possible from scientific knowledge to response strategies. This requires the establishment of a credible cross-national and cross-institutional cooperation mechanism (such as the scientific consensus formation process coordinated by the WHO).
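As a minimal illustration of the multi-signature idea (a toy approval gate rather than an actual DAO or blockchain implementation; all names are hypothetical), an update to wisdom- or purpose-layer content could be committed only when a threshold of stakeholders approves it:

```python
from dataclasses import dataclass

@dataclass
class ProposedUpdate:
    layer: str          # updates to "W" or "P" require collective approval
    description: str
    approvals: set      # identifiers of stakeholders who signed off

def can_commit(update, stakeholders, threshold=2/3):
    """Return True only if enough distinct stakeholders approved the change."""
    if update.layer not in {"W", "P"}:
        return True     # lower layers could rely on lighter integrity checks
    valid = update.approvals & stakeholders
    return len(valid) / len(stakeholders) >= threshold

update = ProposedUpdate(layer="P",
                        description="revise the shared response strategy",
                        approvals={"org_a", "org_b"})
print(can_commit(update, stakeholders={"org_a", "org_b", "org_c"}))  # True: 2 of 3 approved
```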
2.3 The Risks of Cognitive Divergence
When semantic security cannot be guaranteed, the risk of cognitive divergence among the parties increases sharply. Cognitive divergence refers to different agents forming significantly different or even conflicting cognitive structures (DIKWP maps) of the same objective reality. Its consequences may include:
·Misunderstanding and Conflict: Due to inconsistent semantic understanding, collaboration fails and conflicts escalate. For example, conflicts arise between doctors and patients due to information asymmetry and cognitive gaps—research shows that a large cognitive distance between doctors and patients can severely damage communication effectiveness. Fineschi et al. call this the "Game of Mirrors": there is a gap in the perception of the health condition between the doctor and the patient, and the images they have of each other in their minds are inconsistent, as if looking into a mirror but seeing different reflections. This cognitive mismatch can be seen as a security risk because it may trigger medical disputes and even violent conflicts.
·Decision Bias: Individuals form biased knowledge due to receiving erroneous or one-sided information, thus making irrational decisions. For example, investors are misled by false news and make wrong investments, or voters change their voting intentions due to fake news. In these cases, "insecurity" is manifested as a deviation at the cognitive level—the information on which the decision is based is not true or complete, causing the practice to deviate from the path it should have rationally taken.
·Collapse of Trust: Semantic insecurity will eventually erode trust. When people realize that the information environment is full of falsehoods and manipulation and lacks alignment mechanisms, they tend to be skeptical of all information, leading to a decline in the overall level of trust. For example, when false news is frequent and all parties stick to their own versions, the public may lose trust in both the government and the media. This in itself is a security issue, as the functioning of society depends on basic trust.
·Social Fragmentation: At the group level, cognitive divergence will cause information cocoons and group polarization. Different groups are immersed in their own cognitive worlds, unable to communicate semantically with each other. In extreme cases, "parallel realities" even appear—for example, for the same event, people with different political stances are guided by recommendation algorithms to obtain completely different information, thus forming completely opposite collective memories. Attackers can exploit this division, using a cognitive forking strategy to show different "truths" to different groups, creating hostility between them. This social fragmentation seriously threatens social security and stability.
·Psychological Harm: From an individual perspective, continuous information manipulation and cognitive imbalance will also bring psychological insecurity and stress. Faced with a large amount of contradictory information, individuals may experience cognitive dissonance, i.e., internal conflict and discomfort. Cognitive dissonance drives people to adopt various (not necessarily rational) ways to reduce the incoordination, such as selectively ignoring certain facts or blindly believing in a certain source. Manipulators can in turn use this psychological mechanism by carefully designing information to make the target fall into cognitive dissonance, thus making them more likely to accept the "solution" provided by the manipulator (usually some extreme idea), further exacerbating cognitive bias.
In summary, the reconstruction of security at the semantic and cognitive levels aims to prevent the above risks. By ensuring cross-agent semantic alignment, it reduces the occurrence of cognitive divergence. It can be said that the meaning of "security" has expanded from the mere robustness of technical systems to the robustness of cognitive semantic systems. The former targets technical threats such as hackers and viruses, while the latter targets cognitive threats such as fake news and psychological warfare. The two are complementary: technical security is the foundation, without which the authenticity of information cannot be guaranteed; cognitive security is the sublimation, without which even the most secure technology can be defeated by human vulnerabilities.
In the following chapters, we will further study the mechanism of cognitive manipulation deception (Part 3), clarifying how attackers use the misalignment of DIKWP structures to create cognitive divergence, and how to explain and simulate this through modeling.
3. Cognitive Explanation and Modeling of Deception
Cognitive manipulation deception refers to the act of an attacker (the manipulator) influencing the cognitive process of a target (the deceived party) by carefully designing and disseminating information, causing them to form cognitions and make decisions that are inconsistent with or even contrary to their original cognitive intentions, as desired by the attacker. Unlike traditional deception (such as simple lies or technical scams), cognitive manipulation deception is more covert and systematic. It exploits the weaknesses and biases of human cognition to gradually shape a distorted DIKWP structure in the target's mind to serve the manipulator's intentions.
From the perspective of the DIKWP×DIKWP model, the process of deception can be understood as the manipulator actively outputting specific DIKWP elements (one or a combination of data, information, knowledge, and wisdom), which, after being cognitively transformed by the target, leave a "false imprint" on the target's DIKWP structure. The effect of deception is that the target's DIKWP map is reconstructed—some data is misunderstood, some information is accepted as knowledge, false knowledge is used as wisdom to guide practice, and even the target's intentions are led astray. Below, we will analyze this process step by step and discuss the cognitive psychological mechanisms behind it.
3.1 Attacker's DIKWP Transformation Strategies
To deceive the target, the manipulator first formulates an output strategy based on the target's existing cognitive state. This is equivalent to the attacker constructing their own DIKWP output model and attempting to influence the target's DIKWP cognitive model. Common deception strategies include:
·D-level Induction (Feeding Fake Data): The attacker provides a large amount of seemingly objective fabricated data to confuse the target. To the target, this data may seem unremarkable (because the attacker will make the fake data's surface statistical features similar to normal data to avoid suspicion), but when aggregated, it produces misleading trends or conclusions at the knowledge level. This is similar to adversarial attacks in machine learning: by applying subtle perturbations to the input data, the model's output deviates. For humans, a barrage of data may create a so-called "information cocoon," where the target is inundated with data of a specific tendency and gradually comes to believe the conclusions pointed to by this data. For example, fake accounts on social media may continuously publish statistical charts or fragments of facts (many of which are not obviously false), but the accumulation of these "data points" will lead the audience to believe a preconceived narrative.
·I-level Disguise (Confusing Information Output): The attacker implants misleading information using carriers such as text, images, and videos. A typical practice is to splice and edit quotes to take them out of context, or to distort facts through exaggeration or concealment. This information is often deceptive: each sentence may be partially true when viewed in isolation, thus fooling the target's initial credibility judgment, but the overall meaning conveyed is distorted. This method works because of people's confirmation bias and heuristic cognition: people tend to accept information that conforms to their existing views and do not rigorously check each item. The attacker first understands the target's preferences (existing knowledge and beliefs in their DIKWP) and then provides specific information that caters to their liking to deepen the target's trust in the wrong viewpoint. For example, fake news often uses sensational headlines and emotional language to evoke resonance, causing readers to cognize with emotion, which lowers their critical thinking ability. Research has found that false news is often more novel and emotionally evocative (such as surprise, anger) than real news, and therefore spreads faster and wider on social networks. This is precisely the result of the attacker's careful design at the information level: by triggering strong emotions and catering to biases, they make the audience relax their rational scrutiny and be willing to share, thus achieving the viral spread of lies.
·K-level Infiltration (Manipulating the Knowledge System): This is a more advanced form of deception, where the attacker attempts to influence the target's existing knowledge structure. One way is through repetitive propaganda or stereotype implantation, making certain false propositions "solidify" into common sense in the target's mind. Psychological research shows that people are more likely to believe information they hear repeatedly (even if it was initially false), which is known as the familiarity effect. Attackers use this by repeatedly spreading the same rumor, and over time, the audience may accept it without question. In addition, attackers will shape a closed information environment, allowing the target to only be exposed to a single-tendency viewpoint, thus simplifying their knowledge system. The personalized recommendations of social media algorithms may inadvertently act as accomplices—the algorithms amplify the content people like, forming an echo chamber effect, and attackers strengthen this effect by mass-producing biased content. As a result, the target's knowledge map gradually tilts in the direction desired by the attacker. For example, in conspiracy theory communities, participants for a long time only believe the information spread among themselves, gradually forming a framework completely different from mainstream knowledge. This is a manifestation of the attacker's successful infiltration at the knowledge level.
·W-level Deception (Influencing Wisdom and Decision-making): At the wisdom level, the attacker's goal is to make the audience make judgments in key decisions that are favorable to themselves and unfavorable to the target. At this point, the attacker may have already changed the target's perception of facts and knowledge through the preceding steps, so they only need to "give a push." Common techniques include fear appeals (sensationalizing a certain crisis to force the target to make a hasty decision) and false dilemmas (making the target believe they can only choose between two options that are both unfavorable to them, without seeing a third, correct solution). These exploit the psychological biases of human decision-making, such as the tendency to accept shortcut reasoning and abandon comprehensive analysis under pressure. By setting traps at the wisdom level, the attacker induces the target to make short-sighted decisions that go against their long-term interests. For example, a cyber spy might spread news that a certain company's product is about to fail, inducing investors (the target) to panic-sell their stocks, thereby profiting from it. The investor's judgment (wisdom) at this moment has been swayed by fear and is no longer based on rational analysis; the attacker has successfully achieved deception at the wisdom level.
·P-level Manipulation (Reversing Practical Intentions): This is the final stage of deception—directly changing the target's intentions and practical behaviors to be used by the attacker. At this step, the target may already be deeply mired in the fallacies of the previous levels, their original values and intentions have been shaken, and they begin to identify with the attacker's narrative and even stance. The attacker replaces the target's original intentions by proclaiming a new "noble purpose" or "urgent mission." The recruitment by extremist organizations is a typical example: they first gradually influence the knowledge and beliefs of young people through ideological propaganda (K-level), then shape their overall judgment of the world (W-level), and finally make them believe that joining the organization and dedicating themselves to "jihad" is the true purpose of life. At this point, the target's practical intentions have been completely reversed, from resisting violence to actively participating in it, which means the deception has reached its peak. For cognitive warfare at the national level, getting people within the opponent's side to "voluntarily" act in the way the attacker hopes (e.g., spreading rumors, disrupting social order) is the greatest victory.
Not all deceptions require the implementation of all the above strategies. In actual operations, attackers will choose the entry level and combination based on the target's situation. For example, common network phishing emails mainly use I-level information disguise (impersonating a legitimate organization to gain trust) and W-level decision induction (creating a sense of urgency to force a hasty transfer of funds), while large-scale political propaganda tends to favor K-level infiltration and P-level manipulation (shaping long-term concepts and orientations). Regardless of the path, the essence is to change the target's DIKWP structure: the input and output are asymmetric, and what the target gets is the "pseudo-wisdom" intentionally designed by the attacker.
3.2 Exploitation of Human Cognitive Weaknesses
The deception strategies described above are effective because human cognition has various biases and limitations that can be exploited by attackers. The attacker is like a "cognitive hacker" who specializes in finding and exploiting these vulnerabilities. Below are some important cognitive mechanisms and biases, illustrating how attackers exploit them:
·Bounded Rationality and Heuristics: Human cognitive resources are limited, and when faced with massive amounts of information, we adopt heuristics to simplify decision-making. This makes us behave as "cognitive misers," preferring quick, effortless mental shortcuts. Attackers take advantage of this by providing simple, seemingly reasonable explanations or conclusions, so people don't delve deeper. For example, if a rumor seems self-consistent after a few seconds of simple reasoning, people are often too lazy to verify it further and believe it to be true. To make matters worse, algorithmic recommendations cater to this human weakness by automatically pushing content that is easy to understand and aligns with preferences, reinforcing biases.
·Confirmation Bias: People tend to search for and believe information that supports their pre-existing views while ignoring contradictory information. This gives attackers the opportunity to cater to their preferences. They will study the existing beliefs of the target group (e.g., conspiracy theorists believe the government is untrustworthy) and then provide new "evidence" that conforms to these beliefs, making the target more certain that they are right. Even if counter-evidence appears, the target may selectively ignore it due to confirmation bias, thus falling deeper into the trap of falsehood.
·The Role of Emotion and Motivation: Cognition is not cold, hard logical reasoning; emotion and motivation have a huge impact on it. Attackers often lower the threshold for rational thinking by inciting emotions. High-arousal emotions like anger and fear narrow our perspective, making us focus more on immediate feelings and ignore fact-checking. At the same time, attackers are adept at using cognitive dissonance: when people face contradictory information and feel discomfort, they will actively choose to change a belief or behavior to eliminate the discomfort. If the attacker provides an "out" (usually a radical but consistent viewpoint), people may accept it to feel psychologically comfortable. For example, a person caught up in conspiracy theories will feel dissonant and uneasy when faced with mainstream debunking information, and may explain it with a more extreme belief like "the mainstream media is controlled," thus rejecting the truth.
·Group Influence and Conformity: Humans are social animals, and group opinions have a powerful influence on individuals. In-group bias makes people tend to believe information consistent with the group they identify with, while being skeptical of information from out-groups. Attackers use social bots and shills to create the illusion that "most people think this way." For example, by using a large number of fake accounts to post a certain viewpoint and like each other's posts, they create the impression that the viewpoint is dominant in the group. Individuals are more likely to accept this viewpoint due to conformity, to avoid deviating from the group. This is why false information likes to use phrases like "everyone is saying." In cognitive warfare, there are also strategies to divide groups, using information manipulation to make different groups oppose each other. Individuals, to maintain their group identity, will automatically block information from the opposing camp, which is not conducive to the self-correction of rumors.
·Familiarity Effect: Simple repetition can increase trust. Even a lie, if repeated a thousand times, will give people a sense of familiarity and security. This is because the human brain uses familiarity as one of the cues to judge truth (the fluency heuristic). Attackers are well aware of this and therefore like to adopt a "spray and pray" strategy—continuously repeating the core fallacy, letting the target hear and see similar content every day through various channels and forms. Eventually, even if the audience was initially skeptical, they may feel "I've heard it many times, maybe it's true" under the influence of the familiarity effect. The algorithmic amplification of social platforms further fuels the repetition effect: popular fake news often dominates users' information feeds, causing users to be repeatedly exposed to the same tune. This lowers users' vigilance, and they gradually take lies as facts.
The above are just a few examples. There are dozens of known cognitive biases in human cognition (such as anchoring effect, scarcity effect, framing effect, etc.) that attackers can potentially exploit. The term cognitive hacking is used to describe such attacks that exploit psychological weaknesses, influencing decisions by manipulating perception and cognition, rather than directly targeting computer systems by breaking through firewalls. Some research has summarized four common techniques used by cognitive hackers: exploiting biases (like the aforementioned confirmation bias), spreading disinformation, manipulating digital channels (like controlling the public opinion field on social media), and destroying trust (like undermining public trust in institutions and the media). All of these can be integrated into our modeling of the deception process.
3.3 Modeling the Deception Process
To formalize the above discussion, we can try to establish a cognitive deception process model. With the help of the DIKWP×DIKWP framework, we can view deception as a dynamic interactive process, where:
1.The attacker has their own DIKWP state (including the data the attacker possesses, the information they want to convey, the knowledge they want to influence, and the intentions they hope to guide, etc.). The attacker designs their output based on their understanding of the target (which may be formed through intelligence gathering to create a preliminary model of the target's DIKWP).
2.The attacker outputs designed content, which can be represented as a certain DIKWP combination. For example, a text message may correspond to several pieces of information (I) and latent knowledge (K), as well as implied wisdom (W) or purpose (P).
3.After receiving the content, the target projects and interprets it according to their current cognitive structure. This process is the mapping of the target's DIKWP graph to the content's DIKWP graph. If the attack is successful, the mapping result "writes" a change consistent with the attacker's expectations onto the target's DIKWP graph. For example, the target adds a false knowledge link or adjusts their judgment of an event.
4.The change in the target's cognitive state may in turn be reflected in their practical behavior (P-level change), such as forwarding a rumor or making a certain decision. The attacker can observe these external behaviors to verify the effect of the deception and iteratively adjust their strategy (forming a feedback loop).
5.The entire process can be divided into multiple rounds over time. The attacker may need multiple interactions to fully achieve their purpose (especially for complex, long-term deceptions like brainwashing propaganda).
At the model level, some key parameters and indicators can be introduced (which will be discussed in detail in Chapter 4), for example:
·Semantic Entropy (H): Measures the degree of cognitive uncertainty or confusion a target has about a certain topic. The attacker hopes to increase the target's semantic entropy (making them more confused), and then input their own narrative to reduce the entropy, leading the target to the certainty shaped by the attacker. This process is similar to "first creating a problem, then providing a solution."
·Cognitive Distance (Δ): Measures the difference between the target's current DIKWP state and the attacker's expected state. The attacker's task is to gradually reduce Δ, that is, to make the target's cognition step-by-step closer to their desired position. This is similar to error reduction in an optimization problem.
·Alignment (A): Measures the projection alignment of the target's DIKWP and the attacker's DIKWP in key dimensions. Initially, A may be very low (the target's and the attacker's intentions are completely opposite). A sign of successful deception is a significant increase in A (the target begins to endorse the attacker's views and intentions). Alignment can be calculated by comparing the knowledge nodes, judgment conclusions, and even emotional attitudes of both sides on a specific topic. For example, if both sides use the same keywords to describe the same event and their emotional judgments are consistent, the alignment is high.
·Attacker's Cost (C) and Return (R): At each step, the resources invested by the attacker (operating fake accounts, content creation, etc.) and the effect achieved (the amount of cognitive shift in the target). Ideally, the attacker hopes to achieve the greatest reduction in Δ with the smallest C. If the security mechanism is effective, it should increase the attacker's C or decrease R (discussed in Chapter 5).
By introducing the above parameters, we can formalize cognitive deception as a dynamic system, which can be further analyzed using game theory or control theory methods. This is very valuable for developing defense strategies: if we see the attacker as one party and the defender (cognitive security mechanism) as the other, we can establish an adversarial model. The goal of the defense is to reduce the attacker's R/C ratio, making it unprofitable or having a very low success rate.
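As a toy sketch of such an adversarial dynamic (assumed linear dynamics and hypothetical parameter names, purely for illustration), the following simulation shows how a defense mechanism that neutralizes part of each attack round lowers the attacker's return-to-cost ratio R/C:

```python
def simulate_deception(delta0, rounds, attack_effect, cost_per_round, defense_factor=0.0):
    """Toy dynamic model of the deception loop described above.

    delta0         : initial cognitive distance between the target's state and
                     the attacker's desired state
    attack_effect  : fraction of the remaining distance closed per round
    defense_factor : fraction of the attacker's effect neutralized by the
                     security mechanism (fact-checking, traceability, ...)
    Returns the distance trajectory, total attacker cost C, and return R.
    """
    delta, cost, trajectory = delta0, 0.0, [delta0]
    for _ in range(rounds):
        effective = attack_effect * (1.0 - defense_factor)
        delta *= (1.0 - effective)      # target drifts toward the attacker's goal
        cost += cost_per_round          # resources spent on fake accounts, content, ...
        trajectory.append(delta)
    gain = delta0 - delta               # attacker's "return" R: total cognitive shift
    return trajectory, cost, gain

# Without defense the attacker closes most of the gap with a fixed budget;
# with defense the same budget buys far less cognitive shift.
_, c0, r0 = simulate_deception(1.0, 10, 0.3, 1.0, defense_factor=0.0)
_, c1, r1 = simulate_deception(1.0, 10, 0.3, 1.0, defense_factor=0.7)
print(round(r0 / c0, 3), round(r1 / c1, 3))   # R/C drops when defenses raise the attacker's cost
```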
It is worth noting that a complete simulation of the human cognitive process is extremely complex, but the above model at least provides us with a framework for understanding deception. In particular, the depiction of multi-level interaction in DIKWP allows us to distinguish between different levels of deception techniques, and thus design corresponding defenses. For example, for I-level information deception, we can strengthen fact-checking and source credibility scoring; for K-level knowledge deception, we need long-term education and diverse information channels; for W-level decision deception, we might use AI assistants to provide multi-angle analysis to aid decision-making; for P-level purpose manipulation, we need to resist the infiltration of extreme ideas through values education and psychological counseling.
In conclusion, cognitive manipulation deception is a multi-step, multi-level process in which attackers exploit numerous weaknesses in human cognition to gradually make the target form the DIKWP structure desired by the attacker. Through the analysis of the DIKWP×DIKWP model, we can not only understand the mechanism of deception more systematically, but also lay the foundation for developing measurement indicators and protection strategies in the next step. In the next chapter, we will propose quantitative measurement indicators for cognitive deviation and semantic imbalance to evaluate the impact of deception and the effectiveness of defense.
4. Design of Indicators for Cognitive Deviation and Semantic Imbalance
To effectively prevent and correct cognitive manipulation deception, we first need to be able to measure the degree of cognitive deviation and semantic imbalance. This is analogous to monitoring intrusion indicators or abnormal traffic in cybersecurity. In the field of cognitive security, we should introduce corresponding semantic metrics to describe and quantify the difference between a person's (or a group's) DIKWP state and a baseline state, as well as the health of the information environment. This section proposes and discusses several potential indicators, including semantic entropy, cognitive distance, and DIKWP projection alignment, and introduces some other related assessment methods. It should be emphasized that the proposal of these indicators aims to describe concepts and directions, and their specific implementation may require modeling and calculation using methods from artificial intelligence, cognitive science, and other fields.
4.1 Semantic Entropy: Characterizing Uncertainty
Semantic entropy borrows the concept of entropy from information theory to measure the uncertainty or chaos of semantic content in a cognitive system. Traditional information entropy, proposed by Shannon, is used to quantify the uncertainty of the outcome of a random variable: entropy is maximized when all possible events are equally probable, and zero when it is completely certain. Similarly, we can define how much uncertainty or divergence there is in a person's cognitive state on a certain topic or in a certain context.
For example, for a simple proposition (e.g., "climate change is real"), if a person's cognition is very certain (whether they believe it to be true or false), then the semantic entropy of that proposition in their cognition is low; if they are full of confusion and wavering, the entropy is high. A major strategy of attackers in the deception process is to create a high-entropy state—by providing a large amount of contradictory or ambiguous information to cause the target to have the greatest degree of confusion. As the metaphor "cognitive fog" suggests, the target cannot distinguish directions in a thick fog. High semantic entropy means that the target lacks a certain judgment of the truth. At this point, when the attacker throws out a clear position (often a wrong one), the target is more likely to accept it, because in comparison, this position provides "information gain" (reduces entropy).
Therefore, monitoring changes in semantic entropy can help us capture the clues of deception. If we can estimate the semantic entropy of a group on a hot topic through text analysis, surveys, and other means—such as the degree of dispersion of opinions, the ambiguity of mainstream consensus—then an abnormal increase in entropy may mean that someone is intentionally creating chaos. In addition, we can use semantic entropy as a trigger signal for security mechanism intervention: when the entropy is high, reduce uncertainty (lower the entropy) by releasing authoritative information and debunking rumors. It should be noted that computing an absolute entropy value is not straightforward in the cognitive-semantic setting, but proxy measures can be used. For example, by analyzing discussions on social media about a certain event, if the viewpoints are highly fragmented and emotionally polarized, it indicates a low degree of consensus and high entropy; conversely, if most credible sources point to a consistent conclusion, the entropy is low. A healthy information space often means a certain degree of consensus or reliable cognitive anchors. Excessively high semantic entropy is usually accompanied by the proliferation of rumors or false information.
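As a minimal sketch, assuming public opinions can be coarsely binned into discrete stances (an illustrative simplification), semantic entropy can be estimated with the standard Shannon formula H = -Σ p_i log2 p_i over the observed stance distribution:

```python
import math
from collections import Counter

def semantic_entropy(opinions):
    """Shannon entropy (in bits) of the distribution of distinct stances
    observed in a sample of posts or survey answers."""
    counts = Counter(opinions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Strong consensus -> low entropy (~0.47 bits)
print(semantic_entropy(["true"] * 9 + ["false"]))
# Fragmented, confused opinion field -> high entropy (~2.32 bits)
print(semantic_entropy(["true", "false", "hoax", "unsure", "partly true"]))
```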
4.2 Cognitive Distance: Quantifying the Difference in Cognitive Structures
Cognitive distance is used to measure the degree of difference between the DIKWP structures of two cognitive agents. This concept is similar to attitude difference or cultural distance in psychology, but it is broader—covering the cognition of facts, the understanding of concepts, value judgments, and even the differences in purpose orientation. If we can abstractly represent a person's DIKWP state as a certain vector or graph, we can define the distance between two vectors/graphs.
When the cognitive distance is small, it indicates that the two parties have consistent views on most basic facts and knowledge (high degree of semantic alignment); a large distance predicts that effective communication between them will be difficult, and misunderstandings and even conflicts are likely to arise. For example, a large cognitive distance between a doctor and a patient can lead to friction and even doctor-patient conflicts. At the social level, if the cognitive distance between different groups continues to widen, social fragmentation will deepen, forming "information islands."
There are various conceivable ways to quantify cognitive distance. For example:
·At the Data/Information level, compare the overlap of data sources and information content mastered by both parties, such as the Jaccard similarity coefficient of their news reading lists. A low overlap may mean that the information cocoon effect is severe, and the cognitive distance is large.
·At the Knowledge level, we can construct knowledge graphs or concept networks and calculate the difference between the two graphs (e.g., knowledge node coverage, consistency of key concepts). Semantic analysis techniques and Latent Semantic Analysis (LSA) can be used to represent text as semantic vectors, thereby measuring the semantic similarity between the viewpoints expressed by two people. Research has shown that semantic distance based on word vectors and co-occurrence can be an objective means of analyzing semantic differences.
·At the Wisdom/Purpose level, we can assess the fit of values and intentions through questionnaires or behavioral analysis. For example, in team collaboration, the difference in the ranking of the importance of the project's purpose among members can be an indicator of purpose distance. If a team member secretly pursues personal interests rather than the team's purpose, their purpose distance from the team is large, and the security risk is high (which may lead to an insider threat).
The cognitive distance indicator can play a role in defense: security mechanisms can continuously monitor the distance of key individuals or groups from the "truth." If a certain group deviates further and further on a scientific consensus issue (e.g., people who believe in the flat earth theory and their distance from mainstream scientific cognition), it should raise an alarm. Similarly, for individual users, if it is detected that the content they have recently browsed and shared has a sharply increased difference from previously reliable information sources, it indicates that they may have fallen into a trap of false information, and the system can push debunking information or risk warnings to them.
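A simple sketch of such a composite measure (the weights, representations, and helper names here are arbitrary choices for illustration) combines source overlap at the information level with vector similarity of stated views at the knowledge level:

```python
import math

def jaccard_distance(sources_a, sources_b):
    """1 - Jaccard similarity of the information sources two agents consume."""
    if not sources_a and not sources_b:
        return 0.0
    return 1.0 - len(sources_a & sources_b) / len(sources_a | sources_b)

def cosine_distance(vec_a, vec_b):
    """1 - cosine similarity between semantic vectors (e.g. embeddings of the
    two agents' stated views, produced by LSA or a modern text encoder)."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm = math.sqrt(sum(a * a for a in vec_a)) * math.sqrt(sum(b * b for b in vec_b))
    return 1.0 - dot / norm

def cognitive_distance(agent_a, agent_b, w_info=0.5, w_know=0.5):
    """Weighted combination of level-wise distances (weights are arbitrary here)."""
    return (w_info * jaccard_distance(agent_a["sources"], agent_b["sources"])
            + w_know * cosine_distance(agent_a["view_vec"], agent_b["view_vec"]))

a = {"sources": {"outlet1", "outlet2", "outlet3"}, "view_vec": [0.9, 0.1, 0.0]}
b = {"sources": {"outlet3", "outlet4"},            "view_vec": [0.2, 0.8, 0.1]}
print(round(cognitive_distance(a, b), 3))   # larger values indicate wider divergence
```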
4.3 DIKWP Projection Alignment: Cross-Level Semantic Consistency
The DIKWP projection alignment indicator aims to measure the degree of correspondence between two agents at each DIKWP level. The specific method is to "project" the DIKWP elements of one party onto the cognitive framework of the other and see what proportion is aligned. For example, we can ask: "Does Party B also acknowledge the key knowledge points that Party A considers important?", "Does Party B understand and agree with Party A's purpose?", and so on.
Alignment can be defined at each level:
·Data Alignment: Whether the data sources and datasets used by both parties are the same or mutually trusted. For example, in scientific collaboration, sharing raw data results in high alignment; if one party's data is not public, the alignment is low.
·Information Alignment: Whether the understanding and cognition of specific information and intelligence are consistent. For example, whether the main points described by two people after reading the same report are the same. If one emphasizes point A and the other emphasizes the opposite point B, the information alignment is low.
·Knowledge Alignment: The degree of consensus on concepts and laws. For example, in health knowledge, whether doctors and patients have a common understanding (e.g., both agree on a certain treatment principle). The Game of Mirrors study shows that there is a "mirror" deviation in the understanding of health concepts between doctors and patients—in this case, the knowledge alignment is very low.
·Wisdom Alignment: The consistency of comprehensive judgment of a situation. For example, in security analysis, whether different experts have consistent assessments of the threat level. If one person thinks it is highly dangerous and another thinks there is no need to worry, the wisdom alignment is clearly low.
·Purpose Alignment: The consistency of intentions and motivations. For example, whether all members in a team project are sincerely working towards a common purpose. If there are hidden agendas, the purpose alignment is low.
Overall, we can also define a weighted overall alignment A, representing the average degree of consistency across all important dimensions. Obviously, in a state of semantic security, the alignment A should be high. The significance of alignment is that it not only measures the current state but can also guide regulation: when it is found that the alignment at a certain level is low, targeted measures can be taken to improve it. For example, if a doctor finds that a patient's understanding of the treatment purpose (purpose level) is insufficient, they need to conduct health education to explain why the patient needs the treatment, thereby improving purpose alignment.
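As a minimal sketch of this weighted overall alignment A (the level scores and weights below are illustrative assumptions, not values prescribed by the DIKWP framework), per-level alignment scores in [0, 1] can be combined into a single indicator:

```python
def overall_alignment(level_scores, weights=None):
    """Weighted average alignment A over the five DIKWP levels.

    level_scores: dict mapping level name -> alignment score in [0, 1].
    weights: optional dict of per-level weights; defaults to equal weighting.
    """
    if weights is None:
        weights = {level: 1.0 for level in level_scores}
    total_weight = sum(weights[level] for level in level_scores)
    return sum(level_scores[level] * weights[level] for level in level_scores) / total_weight

# Hypothetical doctor-patient assessment: data and information are largely shared,
# but knowledge and especially purpose understanding lag behind.
scores = {"data": 0.9, "information": 0.8, "knowledge": 0.4, "wisdom": 0.6, "purpose": 0.3}
weights = {"data": 1, "information": 1, "knowledge": 2, "wisdom": 2, "purpose": 3}

print(overall_alignment(scores, weights))  # ~0.51: low purpose alignment drags A down
```

In this hypothetical doctor-patient case, weighting purpose more heavily makes the low purpose alignment visible in the overall score, which points directly to the health-education intervention mentioned above.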
4.4 Other Indicators and Assessment Methods
In addition to the main indicators mentioned above, some auxiliary methods can be considered to assess cognitive deviation and semantic imbalance:
·Semantic Network Analysis: Construct a semantic network structure of the public opinion field or cognitive content and analyze its topological features. For example, the degree of community division in a social media discussion network can reflect whether group cognition is polarized. A high degree of homophily means poor semantic security.
·Public Opinion Entropy: Similar to the concept of semantic entropy, this calculates the entropy of the entire information environment at a macro level. For example, by tracking the various arguments and their audience proportions on a certain topic over a period of time, a high entropy value indicates scattered opinions and a lot of noise, while a low entropy value indicates a strong consensus. This needs to be interpreted with caution: extremely low entropy (close to 0) may reflect information control or a single dominant voice, which is not necessarily healthy. Generally, a state with a clear mainstream consensus plus normal dissent is ideal.
·Cognitive Stability: By tracking an individual's viewpoints over a period of time, measure the stability or volatility of their cognitive state. If a person believes X today and changes to ¬X tomorrow, and then changes their story again a few days later, it indicates that they are greatly influenced by external information, have poor cognitive stability, and may have fallen into information chaos. Security systems can provide targeted help to such people (see the sketch after this list).
·Deviation Consequence Index: Quantify the actual impact caused by cognitive deviation. For example, economic losses, casualties, and the number of social conflict incidents caused by rumors. Although this is not a direct cognitive indicator, it can be used as an effectiveness evaluation of security. For example, the number of riots caused by fake news in a region can reflect the deterioration of the cognitive security situation in that region.
·User Trust Index: Measure the public's degree of trust in the information environment (e.g., surveys on trust in the media and government statements). If there is a general distrust of any information source, then even if there are few rumors, cognitive security has not been achieved, because people are in a state of suspicion and uncertainty. The literature suggests introducing soft indicators such as a "user trust index" and "explanation satisfaction" to evaluate the performance of semantic systems. These indicators, obtained through questionnaires or interactive feedback, can reflect the reliability of a cognitive system in the minds of users.
·Purpose Achievement Rate / Conflict Resolution Rate: In specific collaboration or dialogue scenarios, measure whether the participants have achieved a common purpose or resolved disagreements through semantic communication. For example, if an intelligent customer service system can efficiently solve user problems in a dialogue with a user, it indicates that the DIKWP alignment is good and it is secure and trustworthy. If frequent misunderstandings lead to task failure, the semantic security is insufficient.
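As a minimal sketch of the cognitive stability indicator mentioned above (the stance encoding, observation window, and any threshold for intervention are hypothetical), one can track how often an individual's expressed stance on a fixed claim flips over time:

```python
def stance_volatility(stance_history):
    """Fraction of consecutive observations in which the expressed stance flips.

    stance_history: stances over time, e.g. +1 (believes X), -1 (believes not-X),
    0 (undecided). Higher values indicate lower cognitive stability.
    """
    if len(stance_history) < 2:
        return 0.0
    flips = sum(1 for prev, cur in zip(stance_history, stance_history[1:]) if prev != cur)
    return flips / (len(stance_history) - 1)

# Hypothetical user who repeatedly reverses their position on the same claim.
history = [+1, -1, -1, +1, 0, -1]
print(stance_volatility(history))  # 0.8 -> a candidate for targeted support
```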
Scholars have recognized that establishing an evaluation indicator system for semantic cognitive models like DIKWP is a necessary research direction. For example, Mei et al. proposed in their work to build standard datasets and simulation environments to test the effectiveness of the DIKWP model, and to design user satisfaction surveys to measure interpretation performance and user trust. These efforts all indicate that we need a comprehensive measurement framework to objectively evaluate the state of semantic security and track improvements.
4.5 Using Indicators to Aid Defense
With the above indicators, we can use them for monitoring and early warning. For example:
·Real-time calculation of semantic entropy and community division indicators on social platforms. Once an abnormal increase in entropy or severe polarization of public opinion is found, it triggers a manual review of the topic to see if there is any malicious behavior at play.
·For important decision-makers or vulnerable groups, regularly assess their cognitive distance and alignment, which is equivalent to a psychological health check-up. If it is found that an expert's knowledge structure deviates greatly from that of their peers, discuss and correct it early to prevent them from being influenced by false information for a long time.
·In the process of information release and communication, dynamically adjust strategies guided by indicators. For example, when the government debunks rumors, first assess the current public semantic entropy. If the rumors have caused great confusion, it is necessary to release very clear information and expand coverage through multiple channels to reduce the entropy as soon as possible. If it is just a small group of people spreading false knowledge, then it is sufficient to improve the knowledge alignment for that circle without alarming the general public.
Of course, the use of indicators must be combined with a deep understanding of human behavior. Unlike traditional network indicators, cognitive semantic indicators have more uncertainty and subjectivity and need to be interpreted and used with caution. But just as in the field of cybersecurity we use various traffic indicators plus AI analysis to detect anomalies, in cognitive security, these indicators can also empower humans. Machines can help us discover hidden patterns and trends, while human experts make the final judgment and intervention decisions.
By establishing a complete cognitive security indicator system, we can more scientifically evaluate the impact of deception and objectively compare the effectiveness of different protection methods. The establishment of the indicators themselves also reflects the deepening of our understanding of cognitive security. For example, when we take "alignment" as the goal, it shows that we recognize that the key to security lies in communication consistency; when we use "entropy" for monitoring, it shows that we are concerned about uncertainty as a source of chaos. In short, indicators provide a bridge connecting theory and practice and are an important step for cognitive security to move from qualitative to quantitative.
5. Security Economics Mechanisms in Response
Cognitive security is not only a technical and psychological issue; there are also powerful economic motives driving malicious behavior. Therefore, from the perspective of security economics, mechanisms can be designed to weaken the attacker's motives, increase their attack costs, and reduce their potential benefits, thereby creating a deterrent effect. The basic principle of security economics is to tilt the cost-benefit balance against the attacker, so that the attack becomes economically unviable. For cognitive manipulation deception, we need to build a set of mechanisms that make the resources required to manipulate cognition far exceed the possible profits, thus making most attackers give up.
Based on the previous analysis, establishing economic mechanisms for cognitive security defense can be approached from the following aspects: enhancing the public's cognitive autonomy, strengthening information authenticity verification methods, implementing content traceability and responsibility tracking mechanisms, and introducing symmetric feedback to balance the information ecosystem. These measures are complementary and jointly act on the cost and benefit structure of attacks. They are discussed separately below.
5.1 Enhancing Cognitive Autonomy: Reducing Attack Benefits
Cognitive autonomy refers to an individual's ability to think independently, distinguish truth from falsehood, and resist psychological manipulation. This is equivalent to everyone having a set of "psychological firewalls." If the level of cognitive autonomy in the whole society is high, even if attackers invest a lot of resources to spread false information, the recipients will not believe it easily, the spread will be difficult to ferment, and their return (R) will be greatly reduced. On the contrary, if the public lacks media literacy and critical thinking, a rumor will spread like a virus once it is thrown out, and the attack benefit will be very high.
Therefore, enhancing public cognitive autonomy is a long-term plan. This requires strengthening education and literacy cultivation. Starting from school education, cultivate students' critical thinking, information retrieval, and verification skills, popularize common cognitive biases and deception techniques, and make people aware of their own cognitive weaknesses. For adults, improve "digital literacy" and "media literacy" through popular science publicity, public lectures, and media specials. Research shows that improving the public's media literacy and critical thinking helps to combat the influence of false information. One strategy is to implement so-called cognitive immunization or psychological inoculation: preemptively expose people to common deception tricks, and even let them experience small doses of false information and teach them how to identify it. In this way, when they encounter a real attack, they will have resistance, just like being vaccinated.
When the majority of the public can "not be deceived," attackers will find it difficult to profit even if they spread rumors. For example, a trained person will immediately see through a phishing email and will not leak information, making the attacker's efforts futile. Even for more complex political propaganda, if citizens have basic scientific common sense and a rational spirit, they will not be easily incited to blind anger or panic. This will greatly reduce the effectiveness of attack behaviors.
Of course, enhancing cognitive autonomy is a long-term process, and it is difficult to completely prevent the effects of attacks in the short term. But gradually improving the cognitive immunity of the whole society will require attackers to invest several or even tens of times more resources to achieve the same effect. For example, a single piece of fake news could deceive one hundred thousand people in the past, but now it may take hundreds of pieces and repeated dissemination through multiple channels. And when the defenders are also in action (e.g., authoritative organizations quickly debunk rumors, media push the truth, etc.), it is even harder for attackers to succeed. Therefore, cognitive autonomy is a multiplier in the entire security economics game: it amplifies the efficiency of all defense-side measures and discounts the attack benefits.
5.2 Information Authenticity Verification: Raising the Attack Threshold
Information authenticity verification methods aim to make false or tampered information less effective or to be exposed more quickly, thereby raising the threshold and cost for attackers to use false information. Specific measures include:
·Fact-checking Mechanisms: Establish and improve professional fact-checking teams and processes to quickly verify the content of public concern and promptly announce the results. Modern communication technology can assist in this process, for example, by using AI to pre-screen a large amount of content for suspected fake news, which is then reviewed and confirmed by experts. Although this also brings costs to the defenders, this cost is worthwhile compared to the losses caused by rumors, and the marginal cost can be reduced through scale effects and technological improvements.
·Credibility Labels and Reputation Systems: Add credibility ratings to information sources and content on social platforms or in browsers. Reliable official media and accounts with a long record of no fabrication can be marked as highly credible, while new anonymous accounts and accounts that forward a large amount of suspicious content can be marked as low credibility. This is similar to the seller reputation rating in e-commerce. Users can make judgments based on the labels and will not mistake rumors from low-credibility sources for reliable news. For attackers to increase the influence of their content, they have to disguise themselves as high-reputation, which requires long-term operation and careful deception, greatly increasing the difficulty and cost.
·Digital Signatures and Authentication: Use encryption technology to sign and verify key information. For example, important announcements issued by the government can use blockchain or digital signatures to prove that their source is authentic and that the content has not been tampered with. In this way, it is easier to expose forged announcements spread by attackers because they do not have the official signature. Many rumors like to impersonate authoritative sources. If every official message has a verifiable signature, the rumor will be exposed as soon as the signature is checked. Similarly, for media files such as images and videos, hash verification or watermarking technology can be introduced to verify authenticity, thereby preventing deepfakes from passing off as genuine (a minimal hash-check sketch follows this list). Currently, AI-generated fake videos are often used for smearing or fraud. If all circulating videos are accompanied by source certification, the public can discover abnormalities by checking the metadata. Although attackers may also try to forge these signatures or watermarks, that requires higher technical investment, greatly raising the threshold.
·Traceability and Accountability: With the help of content traceability technologies such as blockchain, the transmission path of information can be recorded. Once a message is confirmed to be a rumor, the original source and key nodes of the spread can be traced. This is similar to reconstructing an epidemiological chain of transmission. When attackers know that spreading rumors may be traced back to the source and they will bear legal/reputational consequences, the expected cost of their actions will increase, and the deterrent effect will be enhanced. For example, some countries have already legislated to hold rumor-mongers legally responsible. Combining this with technical means to lock down their identity and punish them will deter many potential attackers. Blockchain technology provides an idea: record the evolution of each piece of information (from the data layer to the wisdom layer) as a time-stamped chain. On the one hand, this ensures the complete transparency of the content on the ledger. On the other hand, once a certain link is proven to be problematic, it can be determined which node contributed the problematic information. This is equivalent to establishing a basis for accountability.
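As a minimal sketch of the hash-verification idea for media files mentioned above (the file name and the published digest are hypothetical placeholders, and the reference digest is assumed to be distributed through an authenticated channel), a SHA-256 digest lets any recipient check that a file has not been altered:

```python
import hashlib

def sha256_of_file(path, chunk_size=8192):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: the publisher announces the digest of the original video
# through an authenticated channel; any recipient recomputes it and compares.
published_digest = "3a7bd3e2...d4f1b"  # placeholder value obtained out of band
received_digest = sha256_of_file("announcement_video.mp4")  # hypothetical file name

if received_digest != published_digest:
    print("Warning: file does not match the published original; treat as untrusted.")
```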
Through authenticity verification, on the one hand, the probability of successful deception by false information is reduced. On the other hand, once the attack fails, the attacker may face high costs (reputation, legal consequences). It is particularly worth mentioning the application of the symmetric feedback mechanism in this field: allowing more "watchdogs" to participate in verification, and allowing the affected party to also provide feedback and challenges, thus forming a two-way interaction.
5.3 Content Traceability and Symmetric Feedback: Building a Fair Information Ecosystem
Content traceability has been mentioned earlier. It uses technical means to record the source and flow of content, which is equivalent to installing a "dash cam" for every piece of disseminated content. When a rumor occurs, the entire path can be replayed, which can not only quickly lock down the source of the truth but also find out who added the false elements in the middle. This is particularly effective for uncovering complex deceptions, because cognitive manipulation is sometimes not a single-sentence lie but a long chain of misleading information. With traceability, we have the opportunity to cut off the key links in the rumor chain, such as deleting or correcting forwards containing false information to prevent further spread.
Symmetric feedback refers to giving the receiving party a channel for feedback and correction in the information transmission chain, so that the flow of information is no longer one-way, but a two-way interaction. This mechanism helps to correct deviations and reduce attack benefits. A typical example is the Community Notes system on Twitter (now X). Community Notes allows ordinary users to add context and explanations to tweets on the platform. If someone finds that a tweet (which may be misleading) lacks context or is taken out of context, they can add a note to explain the full situation. These notes, after being reviewed by many people, are visible to all users. When misleading content is corrected by a note, its spread and credibility decrease significantly. Research has found that Community Notes has led many users to voluntarily delete the false tweets they posted, which shows that this symmetric feedback effectively reduces the impact of rumors. Compared to the platform's direct deletion of posts, Community Notes achieves a more gentle yet effective defense through crowd correction. For attackers, this greatly reduces the probability of rumors fermenting (return R decreases), and may also attract a concentration of "popular science corrections" from the community, further weakening the effect of subsequent attacks.
Symmetric feedback is also reflected in the establishment of a multi-party dialogue mechanism. When one party outputs information and the other has the right and channel to question or rebut, fallacies are harder to sustain. For example, when the government releases a major policy, it opens a Q&A session where the public and the media can directly question the details and basis. This forces the publisher to remain honest and transparent: a sincere publisher has nothing to fear, while one attempting a cover-up will find it difficult to keep deceiving. Similarly, within enterprises and organizations, encouraging subordinates to question superiors' instructions can prevent a decision from being based entirely on false information. Even if the attacker deceives the decision-maker, the fallacy still has a chance to be corrected if lower levels spot unreasonable points during execution and can provide feedback.
Encouraging public participation in supervision is another aspect of symmetric feedback. Let everyone become a "fact guardian" and increase the rewards and convenience of reporting rumors. Many platforms have already set up functions for users to report false information, but in the future, they can be more intelligent, such as identifying reports from high-credibility users more quickly through a reputation system. This forms "mass prevention and mass governance," greatly increasing the chance of attackers being exposed. Once a rumor appears and sharp-eyed people quickly point it out, the attack benefit is almost zero or even negative (reputation damage).
5.4 Cost-Benefit Analysis in the Security Economics Game
Combining the above measures, we can describe the changes in the attacker's costs and benefits under the intervention of security mechanisms:
·Pre-attack Costs: To design the deception, the attacker needs to invest in intelligence gathering, material preparation, etc. If the public has a high level of cognitive autonomy, the attacker must create more sophisticated information, otherwise it is easy to be seen through. This increases the planning cost.
·Attack Execution Costs: Authenticity verification and content traceability mechanisms will force attackers to use more complex technologies to bypass signatures and tracking, such as impersonating others' signatures or generating a large number of new accounts. Each bypass step is an additional cost. And symmetric feedback means that attackers may need to invest resources to continuously interfere with rebuttals, otherwise the rumor will be corrected. For example, in a Community Notes environment, attackers may need to prepare multiple sock puppet accounts to oppose the corrective content, trying to drown out the truth, but this further increases the operational difficulty and cost.
·Attack Benefits: Assuming there is no defense, the attacker's benefits may be high (e.g., successfully defrauding a large amount of money, or making a rumor cover millions of people). But with defense, the benefits are diluted or even become negative. Taking defrauding money as an example, if multiple authentications and signatures make their phishing emails almost universally ineffective, the benefit is almost zero. Taking spreading rumors as an example, traceability and feedback mechanisms may cause a rumor to be marked and debunked as soon as it is released, before it has caused any substantial impact, so the benefit is also zero. At the same time, the risk is high: once the source is traced, the attacker may face legal punishment or have all their accounts wiped out (negative benefit).
·Overall Game Situation: When the attacker's expected benefit/cost ratio falls below 1, the investment is not worthwhile and the motivation to attack naturally drops sharply. This is similar to a thief finding that the streets are full of cameras and police: the expected benefit of theft (the probability of success × the value of the property) is far lower than the expected cost of going to jail, so the thief gives up stealing. For cognitive manipulation, when a rumor-monger finds that rumors are quickly debunked by the public and their identity may be exposed, the benefit is minimal, and they may not bother (a simple expected-value sketch follows this list).
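As a minimal numerical sketch of this game (all probabilities, payoffs, and costs are hypothetical figures for illustration), the attacker's expected net payoff can be modeled as p*R - C - q*L, where p is the success probability, R the benefit, C the attack cost, q the probability of being traced, and L the resulting penalty:

```python
def expected_attack_payoff(p_success, benefit, cost, p_traced, penalty):
    """Attacker's expected net payoff: p*R - C - q*L."""
    return p_success * benefit - cost - p_traced * penalty

# Hypothetical scenario without defenses: rumors spread easily and tracing is rare.
no_defense = expected_attack_payoff(p_success=0.6, benefit=100, cost=10, p_traced=0.05, penalty=50)
print(no_defense)    # 47.5 -> attacking looks profitable

# With cognitive autonomy, verification, traceability, and symmetric feedback in place.
with_defense = expected_attack_payoff(p_success=0.1, benefit=100, cost=40, p_traced=0.5, penalty=200)
print(with_defense)  # -130.0 -> the rational attacker gives up
```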
Of course, not all attackers are rational and profit-driven. Some state-level or ideologically driven manipulators may not care about the cost. But even so, increasing the cost can also reduce the frequency and scale of their actions. What's more, many daily spreaders of false information (such as marketing accounts, self-media) are mainly driven by economic interests, and economic means are very effective against them.
It should be pointed out that security economics mechanisms do not exist in isolation, but should be parallel with legal and ethical measures. For example, the law clearly stipulates and punishes malicious manipulation in the cognitive space (e.g., spreading false news may be fined or imprisoned), which is equivalent to directly and substantially increasing the fixed cost of attacks. At the ethical level, strengthening the morality of practitioners (e.g., the media adhering to authenticity) reduces the possibility of "insiders" cooperating with attackers, which also reduces attack benefits.
In summary, by applying the ideas of security economics, we reverse the offense-defense benefit ratio by reducing attack benefits (the public is not easily deceived, rumors are difficult to spread) and increasing attack costs (technical barriers, legal risks). The ultimate goal is to make attackers rationally choose to give up cognitive attacks or greatly reduce the frequency of investment in such behaviors, thereby creating a safer cognitive space overall. This is similar to the idea of strengthening systems with technology: either build high walls or make the attack not worth it. In the cognitive field, it is difficult to build "walls," but by adjusting the interest drivers, we can make those who "attack the mind" unprofitable.
6. Application Scenario Analysis: DIKWP Manipulation Paths and Defense Strategies
Cognitive manipulation deception is not abstract; it manifests in reality through various application scenarios. This section selects a few representative scenarios: AI content recommendation systems, generative media (deepfakes), social platform information manipulation, and cognitive warfare, to analyze how attackers implement manipulation along the DIKWP path and the response strategies that can be adopted. In each scenario, we will combine the aforementioned theories to explain the DIKWP mapping of the attack chain and discuss how defense measures can be embedded in these chains to block deception.
6.1 AI Content Recommendation: Algorithm-Driven Cognitive Influence
Modern people's access to information largely relies on various recommendation algorithms—news feeds, social media timelines, video website recommendations, and so on. These AI-driven systems, based on users' past data (D) and preference information (I), form a model of the user's interests at the knowledge level (K) (the so-called user profile, which is equivalent to the system's knowledge inference about the user), and then use a specific strategy (W-level algorithmic wisdom) to select and push content, in order to achieve a certain purpose (P), usually to increase user stickiness or advertising revenue. The recommendation algorithm itself may not be malicious, but its optimization goals are often inconsistent with the user's long-term interests or social welfare, and may be exploited by malicious actors, thereby having a negative impact on the user's cognition.
Manipulation Path: Recommendation algorithms amplify human selective perception and confirmation bias along the D→I→K→W chain. When a user clicks on a certain type of content, the algorithm records this data (D), interprets it as a user preference (I), and updates the user profile (K). Then the algorithm will "wisely" push more similar content (W→returning more I to the user), hoping the user will continue to click (P: increasing engagement). This closed loop, if not intervened, will lead to an information cocoon: the user continuously receives information consistent with their existing preferences, without seeing other perspectives. Cognitively, this reinforces the user's original knowledge and beliefs, while reducing opportunities for correction. Attackers or malicious actors can exploit this by placing specific content or guiding the algorithm to be biased, thereby manipulating user cognition.
Specifically, a malicious content provider (such as an extremist organization's propagandist) may maintain a large number of accounts to publish videos/articles with specific extremist views. When a user accidentally comes into contact with a little of it and shows interest, the algorithm captures this signal and will recommend more and more content from that provider. Gradually, the user's information feed is dominated by this extremist content (the knowledge level is filled with a single viewpoint), and the user's wisdom and judgment also shift towards extremism. Research has found that some recommendation systems can significantly strengthen the cohesion and visibility of false or extremist networks, making their spread wider. This means that recommendation algorithms inadvertently become amplifiers of cognitive manipulation.
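As a minimal simulation of the D→I→K→W feedback loop described above (the two-topic setup, click probability, and additive update rule are hypothetical simplifications, not any real platform's algorithm), each click on a topic increases the estimated preference for it, which in turn raises the probability that the same topic is recommended again:

```python
import random

def simulate_feedback_loop(steps=200, learning_rate=0.1, seed=0):
    """Toy model of the loop: a click (D) is read as a preference (I), the user
    profile (K) is updated, and the next recommendation (W) follows the profile."""
    random.seed(seed)
    profile = {"mainstream": 0.5, "extreme": 0.5}  # the system's model of the user
    for _ in range(steps):
        topic = random.choices(list(profile), weights=list(profile.values()))[0]
        if random.random() < 0.6:            # assume the user clicks ~60% of shown items
            profile[topic] += learning_rate  # the click reinforces that topic's weight
    return profile

print(simulate_feedback_loop())
```

Because the update only ever reinforces whichever topic happens to be shown and clicked, an early random imbalance is amplified rather than corrected, which is a stylized picture of how the information cocoon forms.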
Defense Strategies: To address the manipulation risks of AI recommendations, multi-level measures can be taken:
·Increase Algorithm Transparency and Controllability: Require platforms to disclose the basic principles of their recommendation algorithms and allow users to adjust recommendation settings or choose different algorithm modes. This helps users to actively break out of the trap of a single preference. It has been mentioned that the X platform has made its community notes algorithm public, increasing transparency, which helps user trust and understanding. Similarly, if recommendation algorithms are transparent, users can know why they are recommended certain content and thus make more rational choices.
·Introduce Diversity Constraints: Incorporate information diversity weights into the algorithm's objectives to avoid excessive bias towards a single type of content. For example, platforms such as YouTube can require that a certain percentage of recommended content differ from the user's history, thereby "breaking the wall" of the filter bubble (a re-ranking sketch follows this list). This may reduce the short-term click-through rate, but in the long run it protects the user's cognitive health and reduces the probability of being captured by extremist content.
·Establish Content Quality and Credibility Factors: Incorporate content credibility into the recommendation reference. For sources of conspiracy theories and fake news, even if the user shows interest, they should be de-weighted and not be amplified without limit. This is similar to adding a "truthfulness penalty" to the algorithm, making false or low-quality information less likely to dominate the screen. Combined with the aforementioned reputation system, high-reputation information sources are more likely to be recommended, while low-reputation ones are suppressed.
·User Education and Interaction: Remind users to be aware of recommendation bias. When the system finds that a user has been confined to a certain type of content for a long time, it can send reminders or insert "probe" content with different viewpoints for testing. If the user consciously clicks on different viewpoints, the algorithm should recognize this as the user's active intention to correct bias (P) and cooperate in adjusting. More human-computer interaction, allowing users to participate in algorithm tuning, enhances their autonomy.
·External Regulatory Audits: Have independent institutions regularly review the impact of the recommendation algorithms of major content platforms on public opinion and cognition, such as whether there is systemic bias and whether they are being maliciously manipulated and exploited. The audit results should be made public to prompt platforms to optimize. Policies can also require platforms to be responsible for major rumor spreading incidents caused by recommendations, forcing them to invest in anti-bias improvements.
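As a minimal sketch of the diversity constraint mentioned in the list above (the scoring scheme, quota, and item names are hypothetical), a re-ranking step can reserve a fixed share of each recommendation batch for content outside the user's dominant interest category:

```python
def rerank_with_diversity(candidates, dominant_category, diversity_quota=0.3, batch_size=10):
    """Fill a recommendation batch, reserving part of it for out-of-category items.

    candidates: list of (item_id, category, relevance_score) tuples.
    """
    ranked = sorted(candidates, key=lambda c: c[2], reverse=True)
    in_cat = [c for c in ranked if c[1] == dominant_category]
    out_cat = [c for c in ranked if c[1] != dominant_category]
    n_diverse = max(1, int(batch_size * diversity_quota))      # guaranteed out-of-category slots
    batch = in_cat[: batch_size - n_diverse] + out_cat[:n_diverse]
    return sorted(batch, key=lambda c: c[2], reverse=True)

# Hypothetical candidates: the user's profile is dominated by "topic_x".
candidates = [
    ("v1", "topic_x", 0.95), ("v2", "topic_x", 0.93), ("v3", "topic_y", 0.70),
    ("v4", "topic_x", 0.92), ("v5", "topic_z", 0.65), ("v6", "topic_x", 0.90),
]
print(rerank_with_diversity(candidates, dominant_category="topic_x", batch_size=5))
```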
6.2 Generative Media: Deepfakes and False Perception
The development of generative media (such as AI-generated images, audio, and video, i.e., deepfakes) has brought deception into a new phase where "seeing is not necessarily believing." In the past, people usually believed that photos and videos were real records (evidence at the knowledge level), but now AI can generate realistic materials, for example, making a celebrity say something they never said in a video. The impact on cognition is profound: our senses (data acquisition) are fed with fabricated data, which then leads to direct deception at the information and knowledge levels.
Manipulation Path: In the DIKWP model, deepfakes are equivalent to an attacker mastering superb data forgery D→I technology, "creating" events that never happened and presenting them in a realistic form (I-level). Since humans are accustomed to having a high level of trust in visual/auditory evidence, this fabricated information can easily bypass cognitive scrutiny and be directly accepted as knowledge. For example, a forged video of a leader making an emergency announcement, once seen by the public, may be immediately regarded as real information (updating knowledge: "the leader said X"), and they will act accordingly (practice). Through such fake videos, attackers can create panic (e.g., forging a government announcement of a financial collapse, causing a stock market crash) or smear someone's image (e.g., forging a political opponent's inappropriate remarks to damage their reputation). Given that deepfakes can be very realistic and the cost of generation is getting lower and lower, cognitive security faces a huge challenge—the foundation of trust at the data level is being shaken.
Defense Strategies: To cope with generative media deception, a combination of technology and law is needed:
·Generative Content Detection: Develop and deploy AI models to detect whether content is generated by AI. For example, use digital watermarks or neural networks to identify synthetic traces in images/audio. There has been some progress, such as detecting unnatural facial blinking and audio spectrum abnormalities. Platforms can automatically scan the authenticity of videos when users upload them and mark suspected deepfake content. Although attackers are in an arms race with detection models, continuous improvement in detection is still key.
·Authenticating Real Content: Conversely, promote the authentication of real audio and video. For example, videos released by news organizations can be shot with certified equipment and signed and encapsulated to ensure they have not been modified. In the future, cameras may have built-in cryptographic signatures for each frame. Once this is the case, unsigned videos will be default-labeled as "untrusted," thereby reducing the deceptive power of deepfakes.
·Legal Prohibition and Deterrence: Many jurisdictions have begun to legislate against malicious deepfakes. For example, it is illegal to impersonate others' images without permission, especially for defamation, election interference, etc. Clear legal responsibilities (civil compensation, criminal penalties) increase the attacker's cost. For example, if someone who spreads a fake video of a politician is caught and jailed, it will naturally deter others.
·Public Awareness Enhancement: Educate the public not to easily believe what they see and hear, and to have basic identification common sense. For example, important speeches released by officials usually have multiple reports and authoritative channels. If you only see a sensational video on an anonymous social media account, you should be more careful and verify it. Cultivating the public's preventive mentality against deepfakes is a bit like cultivating the awareness to identify photoshopped pictures back in the day.
·Rapid Clarification Mechanism: Once a malicious deepfake is found to be spreading, the relevant agencies should react quickly to clarify. It is best to do so in an equally eye-catching form. For example, if a fake video goes viral online, the official or credible media should release a statement or even a video comparison as soon as possible to point out the flaws. Time is very critical, because the impact of fake videos is extremely strong. If delayed, the rumor will cause real damage.
·Technical Watermarking Standards: Promote the addition of transparent watermarks or metadata identifiers to AI-generated content. This requires industry self-discipline or legal requirements. For example, some countries require that AI-generated images must be labeled as "AI-generated." Similarly, legal requirements for social platforms can be considered: anything detected as a suspected AI forgery must be prompted to the user. Although this is not 100% foolproof, it at least increases user awareness and reduces the risk of being misled.
6.3 Information Manipulation on Social Platforms
Social platforms (microblogs, forums, instant messaging groups, etc.) have become the main channel for information dissemination and also a battleground for malicious actors. Information manipulation here includes a series of behaviors such as rumor spreading, public opinion guidance, shill boosting, and social bot proliferation, with the goal usually being to influence group cognition and the direction of public opinion.
Manipulation Path: Information manipulation on social platforms can be divided into two aspects: content-level manipulation and social relationship-level manipulation:
·At the content level, attackers will concentrate on publishing or forwarding specific information (I), strengthening its presence through repetition and multi-channel publication, with the intention of making it group knowledge (K). The characteristics of false information mentioned earlier (novelty, emotional appeal) are particularly suitable for spreading on social media. People are more likely to believe what their friends share, leading to the rapid integration of false information into the collective knowledge base. A 2018 study by Vosoughi et al. of over a decade of Twitter data showed that fake news spreads faster, wider, and deeper on social platforms than real news. The reasons behind this are, on the one hand, the human preference for novel and emotional content, and on the other hand, amplification by a large number of bots and fake accounts.
·At the social relationship level, manipulators may create a huge network of fake accounts (botnet), coordinating with each other to like and comment, creating the illusion that a certain viewpoint is "widely supported." In his research on Russian information warfare, Timothy Thomas pointed out that they "impose emotional cognition on the target through information operations to their own benefit." This is often accompanied by the infiltration of social networks, for example, by impersonating local opinion leaders to interact with the target group, building trust, and then spreading rumors. The topological characteristics of social networks are also exploited—such as concentrating firepower to attack key nodes (influential accounts being hacked or bought), so that manipulated information can be spread through these "super-spreaders," thereby triggering an information cascade throughout the entire network.
The information manipulation by attackers on social platforms can be understood as a large-scale DIKWP interaction of multiple agents in the DIKWP model: the I injected by the attacker's side triggers the D of countless targets, and then through human-to-human interaction, group K and W changes are formed. For example, in election interference, a hostile force fabricates a large number of persona accounts to go deep into the opponent's voter circles and spread rumors about a certain candidate. These rumors are taken as true by many people and become their knowledge (K), thus changing their choices in their voting wisdom and judgment (W). In the 2016 US election, Russia's "Internet Research Agency" operated in this way, which was detailed in the indictment.
Defense Strategies: The governance of information manipulation on social platforms requires the cooperation of platforms, users, and governments:
·Platform Identification and Crackdown on Fake Accounts: Strengthen the detection of bot accounts and shill behavior. Identify bots through abnormal behavior patterns (implausibly high posting frequency, homogeneous follower networks, etc.) and ban them in batches (a simple scoring sketch follows this list). Twitter and other platforms regularly clean up millions of suspicious accounts. Although attackers can continuously register new accounts, raising the threshold (such as real-name mobile-phone verification and CAPTCHA challenges) can significantly reduce the number of bots.
·Content Moderation and Fact-checking: Platforms need to improve their community rules and promptly review and delete obvious false or inflammatory information. For borderline content, introduce third-party fact-checking. When marked as untrue, reduce its visibility or add a warning label. Facebook once added labels and down-ranked content verified as fake news, and the spread rate of that content dropped significantly. This actually weakens the chance of rumors becoming K-level consensus at the I-level.
·Rapid Debunking and Information Intervention: When a rumor ferments on a platform, there should be a rapid response mechanism—official accounts or credible third parties immediately publish debunking content and ensure its dissemination (through pinning, wide pushing, etc.). Information spreads fast on social networks, and debunking must also outrun rumors. A successful case is when earthquake rumors were rampant in a certain place, the local government and media intensively clarified through the same channels within a few hours, calming the panic. This requires establishing a cooperation network and contingency plans in normal times.
·Distributed Peer Correction: As in the example of Community Notes mentioned earlier, encourage users to correct each other. The platform can provide more convenient reporting and annotation tools, and even reward users who effectively debunk rumors. This crowd intelligence is often more effective than simple machine algorithms in discovering rumors because humans are good at semantic understanding and context judgment. Studies have shown that crowd-sourced annotations can significantly improve the identification of misleading posts and user trust in corrections.
·Cross-platform Collaboration: Manipulators often operate across platforms. If deleted on one platform, they may continue on another. Therefore, cross-platform monitoring and linkage are needed. Europe's DSA (Digital Services Act) already requires large platforms to share data for research institutions to monitor the cross-platform spread of false information. Establish a cross-platform rumor database. Once a rumor is confirmed in one place, a warning is synchronized on all platforms. This is similar to computer virus intelligence sharing, where companies jointly defend.
·Legal Regulation: The government should improve legislation to give regulatory authorities the power to require platforms to rectify. For example, if a platform allows a large number of bots to manipulate public opinion, it can be fined. During key election periods, there should be laws prohibiting foreign interference and the dissemination of false election information. The shill industry chain should be cracked down on according to law (some countries have already arrested gangs that manipulate online shills), to deter the black market. The law can also protect whistleblowers and debunkers, so that those who spread the truth are not suppressed by SLAPP lawsuits and the like.
·Improve User Media Literacy: Ultimately, the fundamental solution is for users themselves to be able to identify shills and rumors. Strengthen publicity and education, for example, by publishing typical cases, teaching the public how to identify fake accounts, question sensational news, and verify sources. Many rumors are easy to see through, but users need to have this awareness. The government, media, and educational institutions should all participate in this, cultivating citizens' media literacy as an essential skill in the digital age.
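As a minimal sketch of the rule-based account screening mentioned at the top of this list (the features, thresholds, and equal weighting are hypothetical and far simpler than production bot detection), a heuristic score can flag accounts whose behavior resembles automation for human review:

```python
def bot_suspicion_score(posts_per_day, share_ratio, account_age_days, follower_following_ratio):
    """Heuristic score in [0, 1]; higher values mean more bot-like behavior.

    All features, thresholds, and the equal weighting are illustrative assumptions.
    """
    signals = [
        posts_per_day > 100,             # implausibly high posting frequency
        share_ratio > 0.9,               # almost never posts original content
        account_age_days < 30,           # freshly registered account
        follower_following_ratio < 0.1,  # follows many accounts but is followed by few
    ]
    return sum(signals) / len(signals)

score = bot_suspicion_score(posts_per_day=240, share_ratio=0.97,
                            account_age_days=12, follower_following_ratio=0.02)
print(score)  # 1.0 -> queue the account for human review or extra verification
```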
6.4 Cognitive Warfare and Cognitive Defense
Cognitive warfare refers to the systematic psychological and cognitive attacks conducted by a state or organization against a hostile camp using information means, in order to weaken its will and confuse its decision-making. It is an upgraded version of information warfare, with a greater emphasis on directly acting on the human brain. The Russian interference in the US election, ISIS online propaganda, etc., mentioned several times before, all belong to cognitive warfare. In military conflicts, cognitive warfare may even be on par with cyber warfare and public opinion warfare, becoming a contest for "control of the mind."
Manipulation Path: Cognitive warfare is usually planned by intelligence agencies/professional teams, with clear targets, sufficient resources, and a combination of various means. For example:
·Public Opinion Manipulation: Dispatching shills and media resources to infiltrate the enemy's society, creating division internally and demonizing the opponent externally. During the Cold War, the Soviet Union spread racial conflict rhetoric in the United States. Modern times are even more fueled by the internet, as in the IRA case mentioned earlier, using hundreds of fake accounts to impersonate Americans and stir up partisan opposition.
·False Flag Operations: Creating realistic fake intelligence and fake news to cause the other side's society to misjudge. For example, forging videos of leaders' speeches, forging official notices, etc., to make the public panic or the enemy troops confused. This is a typical use of deepfake and impersonation technology, directly attacking the knowledge and wisdom levels, so that the enemy's decisions are based on false information.
·Psychological Intimidation: Repeatedly spreading terrifying information or exaggerating battle results to strike at the enemy's morale. For example, spreading news on the internet that "our side is advancing like a hot knife through butter, the enemy is bound to be defeated," to make the opponent's soldiers and civilians lose confidence. Even if this information is not entirely true, the doubt and despair induced by repeated dissemination and emotional framing can still be effective.
·Psychological Operations for Surrender: Targeting enemy personnel with targeted messages of inducement or intimidation (e.g., a text message to a soldier: "You are surrounded, lay down your arms and you can live"). This practical-level manipulation, through highly aligned intelligence, makes the target believe that surrender is a wise move, thereby disintegrating their will to fight.
The effects of cognitive warfare are not easy to assess immediately, but some cases have shown its power: for example, in the 2014 Crimea incident, the Russian side quickly turned local public opinion in favor of the Russian army through information warfare, and the Ukrainian army was in chaos before the battle. This shows that cognitive warfare can change the intentions of the masses (P-level) and their perception of reality (W-level judgment). Therefore, all countries are paying more and more attention to cognitive defense.
Defense Strategies: Facing an organized cognitive warfare offensive, defense requires national-level comprehensive measures:
·Build a Cognitive Defense System: Establish a dedicated agency (such as a Center for Cyber and Cognitive Security) to continuously monitor and analyze the trends of enemy cognitive attacks. Use big data and AI to detect abnormal public opinion, traces of disinformation campaigns, and provide timely early warnings. For example, NATO has established a strategic communication and cognitive security alliance to deal with such threats.
·Strengthen Official Sources and Debunking: In cognitive warfare, public trust in official and mainstream media is key to resisting rumors. The government should maintain information transparency and promptly release accurate and authoritative information, so that the public has a reliable anchor and is not easily swayed by fake news. This is especially true in wartime: official channels must take the initiative to occupy the high ground of information release, for example through regular press conferences and dedicated rumor-debunking columns, to prevent the enemy's narrative from gaining the upper hand.
·Psychological Protection Training: Conduct anti-enemy psychological warfare training for soldiers, civil servants, and personnel in important positions. They need to be familiar with the common manipulation methods of the enemy and learn how to deal with them. For example, if they receive a surrender-inducing text message, they should not only be unmoved but also report it to their superiors. For the public, immunity can be improved through publicity, such as emphasizing "verify first when you encounter shocking news" and "don't believe or spread rumors."
·Technical Countermeasures: Use cybersecurity technology to block the enemy's information infiltration channels. For example, after discovering an enemy shill network, combine cyber warfare means to paralyze its servers or communication channels. For fake videos, develop identification algorithms for network-wide use. Require social platforms to cooperate in banning enemy propaganda accounts. If necessary, even close some communication channels (such as disconnecting the internet in a war zone) to prevent large-scale psychological offensives by the enemy, but this must be used with caution.
·International Cooperation and Public Opinion: Expose and publicize the enemy's cognitive warfare actions, condemn them internationally, and strive for public opinion support. Letting your own people understand the enemy's methods can also enhance vigilance. For example, the US Department of Justice's public indictment detailing how Russia interfered in the election is both a legal action and a popular science education.
·Values Education: In the long run, strengthening social cohesion and value consensus is the most fundamental defense. The enemy often uses internal contradictions as a breakthrough point. If our society is united and resistant to divisive narratives, cognitive warfare will find little to latch onto. This includes reducing polarization in peacetime and strengthening dialogue between different groups, so as to deny the adversary any internal fissures to exploit.
In short, the defense of cognitive warfare tests a country's comprehensive strength, requiring defense from technology to the human mind. The DIKWP model can be used here as a strategic analysis tool: by looking at which level the enemy is mainly attacking, we can strengthen it in a targeted manner. For example, if the enemy likes to fabricate fake news to attack our knowledge level, then we will focus on building a rapid debunking mechanism and improving media credibility; if the enemy uses group emotions, then we will strengthen psychological counseling and information transparency to stabilize the people's hearts.
Through the analysis of the above scenarios, we have seen the diverse forms of cognitive manipulation and the corresponding multi-dimensional defenses. Although the means are different, the principle is the same—it is a struggle between interference with and protection of the human DIKWP system. From AI recommendations and deepfakes to social media rumors and cognitive warfare, the underlying theoretical thread is consistent: attackers try to change the target's DIKWP cognition by converting their DIKWP output through various channels; defenders strive to maintain semantic security and alignment, protecting the correct DIKWP structure from being eroded.
7. Governance and Institutional Recommendations
Facing the increasingly severe challenges of cognitive security, in addition to technical and strategic responses, it is even more necessary to make forward-looking arrangements in governance and institutions. The security governance of the cognitive space involves multiple stakeholders (governments, platforms, media, citizens) and crosses the fields of law, ethics, and technology. To build a healthy and trustworthy digital cognitive ecosystem, we hereby propose the following recommendations at the governance and institutional levels:
1.Clarify Ethical Boundaries: Establish ethical guidelines for cognitive security in the digital age and draw a red line for information manipulation. For example, prohibit the intentional dissemination of false information, prohibit the use of AI to impersonate others to mislead the public, and prohibit the collection and use of others' cognitive data for manipulation without their knowledge. These ethical guidelines should be promoted at both the international and domestic levels. We can draw on bioethics and AI ethics frameworks, and form consensus documents through discussion by experts and the public. Once the ethical guidelines are established, they will help guide legislation and industry self-discipline, and also lay the foundation for cross-border cooperation.
2.Improve Laws and Regulations: Bring malicious behaviors in the cognitive space under legal supervision. It is recommended that legislation clarify the legal responsibility for acts such as disseminating rumors that affect public safety, producing and spreading deepfakes to mislead the public, and manipulating platform algorithms for large-scale cognitive interference. We can refer to and expand upon existing anti-fake news laws, personal information protection laws, etc. For example, stipulate that social platforms have an obligation to monitor and stop the spread of false information, otherwise they will be considered accomplices and can be fined. Another example is to treat unmarked deepfakes as fraud against the public, for which civil compensation and even criminal liability can be pursued. The law should also protect citizens' cognitive freedom and the right to freedom of thought from improper interference, defining extreme manipulation as an infringement of citizens' basic rights.
3.Platform Responsibility and Accountability: Clearly require large internet platforms to be responsible for the cognitive security on their platforms. This includes: establishing content review and debunking mechanisms, cooperating with regulatory authorities to provide data support for investigations, and regularly publishing transparency reports disclosing the handling of false information. For vulnerabilities that repeatedly lead to large-scale rumors, regulators can summon platforms for rectification or impose economic penalties. Platforms should also establish internal accountability systems. If recommendation algorithms or operational decisions lead to the proliferation of rumors, the relevant management personnel should be held responsible. This will help to change the past situation where platforms turned a blind eye to untrue content for the sake of traffic, and prompt them to include user cognitive security in their performance appraisals.
4.Cross-platform and Cross-national Collaboration: Cognitive security knows no borders, and a single country or platform can hardly stand alone. It is recommended to establish a cross-platform joint defense mechanism where major social media, search engines, etc., share false information intelligence and disposal experience. At the international level, a cognitive security convention or action plan can be promoted under frameworks such as UNESCO, especially to regulate state behavior and prohibit the use of information weapons for aggression. For example, similar to the Treaty on the Non-Proliferation of Nuclear Weapons, countries would pledge not to launch cognitive warfare against other countries. Once discovered, there would be international investigation and sanction mechanisms. Although implementation is difficult, it can at least form moral pressure and consensus. Regional organizations (such as the European Union) can also take the lead in establishing internal coordination. For example, the EU introduced a new version of the Code of Practice on Disinformation in 2022, requiring social platforms to cooperate in combating cross-border fake news and to have a united front externally.
5. Establish an Authoritative Cognitive Security Agency: Governments should establish a dedicated cognitive security agency, or assign new functions to existing cyberspace affairs departments, to coordinate national cognitive security work. Its responsibilities would include monitoring and analyzing public opinion and cognitive risks, coordinating departments in responding to major rumor incidents, formulating relevant standards and specifications, and organizing nationwide media literacy education. The agency could also serve as a communication bridge between the public and the government, promptly releasing debunking information and answering public questions. Crucially, the agency's credibility depends on openness, transparency, and scientific neutrality; only by winning public trust can it truly play a stabilizing role.
6. Media and Education Reform: Encourage mainstream media to transform themselves into credible information providers and active debunkers of rumors. Media outlets should cooperate closely with fact-checking organizations, run regular columns exposing current falsehoods, and become reliable channels through which the public obtains the truth. In parallel, the education system should make digital literacy and critical thinking compulsory content. Beginning in primary and secondary school, students should be trained to identify online rumors, verify information sources, and understand algorithmic bias. This is a future-oriented investment: a generation of digital citizens educated in this way will be harder to incite with rumors and will participate in public discussion more rationally.
7. Technology-empowered Governance: Vigorously support research and development of technical tools for cognitive security. For example, a national-level intelligent rumor identification and traceability platform could, once a piece of content goes viral, quickly analyze its authenticity and transmission path for decision-makers. Another example is a personal cognitive-assistant AI that helps users evaluate, in real time, the credibility of the information they see and reminds them of possible biases; a minimal illustrative sketch of such an assistant appears after this list. These tools could work like antivirus software, continuously guarding information health on users' devices. Of course, privacy protection must be ensured, and the tools themselves must be protected from attack.
8. Ethical Review and Responsibility Mechanisms: Before new technologies such as AI are applied to information dissemination, establish an ethical review process. For example, AI news anchors and intelligent customer-service systems answering sensitive questions should be assessed in advance for the risk of misleading users, and new products and features should undergo a cognitive security impact assessment (analogous to an environmental impact assessment). Accountability should also be established: if a defect in a technical product leads to the large-scale spread of false information, the manufacturer's joint liability should be clarified, prompting vendors to address security at the design stage.
9. Public Participation in Supervision: Give the public supervisory rights and smooth feedback channels. For example, establish a nationwide online reporting platform and reward mechanism, granting recognition or material rewards to those who effectively report major rumors, so that the whole of society participates in governance. A Cognitive Security Council composed of civil-society representatives, experts, and media could also be established to regularly hear public opinions and supervise the performance of government and platforms.
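To make the tooling in item 7 more concrete, the following is a minimal, illustrative sketch of how a personal cognitive assistant might combine a few coarse signals into a credibility advisory shown to the user. The `ContentItem` fields, the signal weights, and the 0.5 warning threshold are all hypothetical assumptions introduced for illustration only; a deployed system would need calibrated models, much richer evidence, and strict privacy safeguards.

```python
# Minimal sketch of a heuristic credibility scorer for a personal
# "cognitive assistant" (illustrative only; all signal names, weights,
# and thresholds are hypothetical assumptions, not a deployed design).

from dataclasses import dataclass


@dataclass
class ContentItem:
    text: str
    source_reputation: float      # 0..1, e.g. from a curated source list (assumed)
    corroborating_sources: int    # independent outlets reporting the same claim
    provenance_verified: bool     # e.g. a content-credentials/watermark check passed
    account_age_days: int         # age of the posting account


def credibility_score(item: ContentItem) -> float:
    """Combine simple signals into a 0..1 credibility estimate."""
    # Corroboration saturates: three or more independent sources count as strong.
    corroboration = min(item.corroborating_sources, 3) / 3.0
    provenance = 1.0 if item.provenance_verified else 0.0
    # Very new accounts are a weak negative signal (bot/sockpuppet heuristic).
    account_signal = min(item.account_age_days, 365) / 365.0

    # Hypothetical weights; in practice these would be learned or calibrated.
    return (0.35 * item.source_reputation
            + 0.30 * corroboration
            + 0.20 * provenance
            + 0.15 * account_signal)


def advise(item: ContentItem, threshold: float = 0.5) -> str:
    score = credibility_score(item)
    if score < threshold:
        return f"Caution ({score:.2f}): unverified claim, check original sources."
    return f"Likely credible ({score:.2f}), but provenance is still worth checking."


if __name__ == "__main__":
    viral_post = ContentItem(
        text="Breaking: city water supply contaminated!",
        source_reputation=0.2,
        corroborating_sources=0,
        provenance_verified=False,
        account_age_days=3,
    )
    print(advise(viral_post))   # expected: a "Caution" advisory
```

Consistent with the antivirus analogy above, such scoring would run locally on the user's device and produce an advisory rather than a block, preserving the user's cognitive autonomy while raising the cost of low-quality manipulation.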
Based on the above recommendations, we advocate a model of whole-of-society co-governance: government guidance, legal regulation, platform responsibility, technical support, media integrity, and public participation. Cognitive security concerns the mental well-being of every household as well as the long-term stability of state and society; it can only be maintained by pooling the strength of all parties.
A balance must be maintained: governance should not slide into excessive censorship or the suppression of speech. The purpose of cognitive security is to ensure the authenticity of information and the alignment of intentions, not to silence diverse voices. Institutional design must therefore be cautious, preventing infringement of freedom of expression in the name of security. Independent review and accountability mechanisms should be established to ensure that governance measures target false and malicious behavior rather than dissent itself. This is also why ethics and public supervision are emphasized: they keep governance operating in the open and prevent abuse of power.
Looking to the future, with the further development of AI and the deepening of human-machine integration, we will face new cognitive security challenges. But no matter how technology evolves, the "people-oriented" principle remains unchanged: to protect everyone's right to perceive the world freely and truthfully. This should be the fundamental starting point and ultimate goal of governance. We need to build one line of defense after another, with layers of protection from technology, institutions, and ethics, to guard the common beacon of human reason in the digital torrent.
Conclusion
Security in the digital age has expanded from the traditional boundaries of hardware and networks deep into the cognitive and semantic space of humanity. This paper, by introducing the DIKWP×DIKWP interaction model, has theoretically reconstructed the concept of "security" in the digital space, emphasizing that security is not only about protecting systems from damage but also about the alignment and mutual trust of multiple parties in their cognitive and practical intentions. We have deeply analyzed the principles of cognitive manipulation deception, clarifying how attackers exploit human cognitive weaknesses and, through multi-level information deployment, guide targets to form DIKWP structures contrary to their original intentions, infiltrating layer by layer from data-level lies to purpose-level induction. On this basis, we have proposed indicators such as semantic entropy, cognitive distance, and alignment to quantitatively describe the process of cognitive deviation, providing tools for detecting and evaluating the effectiveness of defenses. These indicators allow us to more objectively monitor the health of the information environment and the state of cognitive security, laying the foundation for the design of security mechanisms.
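As a minimal illustration of the entropy-style indicator (assuming, for this sketch only, that semantic entropy is instantiated as Shannon entropy over the distribution of mutually exclusive claims an audience holds about a single fact; this is one possible reading rather than a restatement of the exact definition given earlier), the quantity takes the standard form

H(p) = -\sum_{i=1}^{n} p_i \log_2 p_i

If belief about a fact splits evenly between the true claim and a fabricated rival (p = 0.5, 0.5), then H = 1 bit; if verification pushes the split to (0.9, 0.1), H drops to roughly 0.47 bits. A successful "cognitive fog" campaign works in the opposite direction, driving H upward.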
In terms of security countermeasures, we have drawn on concepts from security economics, emphasizing the need to build comprehensive mechanisms that make cognitive attacks unprofitable. Enhancing public cognitive autonomy, improving authenticity verification, and implementing content traceability and symmetric feedback can significantly raise the cost of manipulation and reduce its benefits, thereby curbing such attacks overall. In analyzing application scenarios such as AI recommendation, deepfakes, social platforms, and cognitive warfare, we have shown concretely how attack paths and defense strategies confront each other, and how a combination of measures can effectively weaken the harm of cognitive manipulation. For example, symmetric crowd-correction mechanisms such as Community Notes have been shown to help mitigate the impact of social media rumors; likewise, the combination of law and technology can greatly limit the spread of deepfakes.
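The security-economics argument can be compressed into one hedged deterrence condition (the notation is introduced here purely for illustration): an attacker who expects success probability p and payoff V at cost C is deterred when

C > p \cdot V

Cognitive autonomy and verification lower p, traceability and liability raise C, and debunking plus symmetric feedback shrink the effective V, so the combined measures attack all three terms of the inequality at once.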
Finally, at the governance and institutional level, we call for a transparent and collaborative cognitive security governance framework. By clarifying ethical boundaries, improving laws, consolidating platform responsibilities, promoting cross-domain cooperation, and encouraging public participation, whole-of-society co-governance can take shape. Only when government, platforms, media, and the public work together, adhering to the values of truth, rationality, and openness, can the dam of collective cognition hold in the turbulent torrent of information. At the same time, governance must strictly observe the bottom line of protecting citizens' cognitive freedom, maintaining normal information diversity and freedom of speech while cracking down on malicious manipulation. This requires both wisdom and balance.
In conclusion, maintaining "security" in the digital space is no longer just a technical issue but also a cognitive and social one. The DIKWP×DIKWP model and the analysis in this paper provide a panoramic perspective for understanding and responding to this complex problem. From data to wisdom, and from individuals to society, security must be considered in a coordinated manner. In the future, as the integration of humans and AI deepens, the DIKWP model may be further expanded with additional dimensions (such as "U" for understanding/unconscious). But however the model evolves, its core remains to help humanity better understand the world and avoid falling into deception and chaos. It is hoped that this research contributes theoretical innovation and practical inspiration to the field of cognitive security and helps build a more truthful, transparent, and trustworthy digital future.
References:
·Fineschi, D. et al. (2022). Game of Mirrors: Health Profiles in Patient and Physician Perceptions. Int. J. Environ. Res. Public Health, 19(3), 1201.
·Pierce, B. M. (2021). Protecting people from disinformation requires a cognitive security proving ground. C4ISRNet.
·Portnox. (n.d.). What is Cognitive Hacking?
·Huang, R. Y. et al. (2023). On challenges of AI to cognitive security and safety. Security and Safety, 2.
·Mei, Y. & Duan, Y. (2025). Comprehensive Review of DIKWP Model and Semantic Blockchain.
·Wikipedia. (n.d.). Entropy (information theory).
·CNA. (2021). The Psychology of (Dis)information: A Primer on Key Psychological Mechanisms.
·Romanishyn, A. et al. (2025). AI-driven disinformation: policy recommendations for democratic resilience. Frontiers in Artificial Intelligence.
·Gao, Y. et al. (2024). Can Crowdchecking Curb Misinformation? Evidence from Community Notes. SSRN.
·NATO ACT. Cognitive Warfare Concept.
·... (Other references omitted)
Citation Sources:
·Protecting people from disinformation requires a cognitive security proving ground: https://www.c4isrnet.com/opinion/2021/02/10/protecting-people-from-disinformation-requires-a-cognitive-security-proving-ground/
·On challenges of AI to cognitive security and safety | Security and Safety (S&S): https://sands.edpsciences.org/articles/sands/full_html/2023/01/sands20230010/sands20230010.html
·Cognitive Warfare Article: https://innovationhub-act.org/wp-content/uploads/2023/12/CW-article-Claverie-du-Cluzel-final_0.pdf
·A Comprehensive Review of DIKWP Model and Semantic Blockchain: Integrating Data-Information-Knowledge-Wisdom-Purpose into Knowledge Graphs and Semantic Web: https://www.researchgate.net/publication/392759351_DIKWP_moxingyuyuyiqukuailianjiangshuju-xinxi-zhishi-zhihui-yitu_zhenghejinzhishitupuyuyuyiwangdezonghezongshu
·The DIKWP (Data, Information, Knowledge, Wisdom, Purpose) Revolution: A New Horizon in Medical Dispute Resolution: https://www.mdpi.com/2076-3417/14/10/3994
·The Psychology of (Dis)information: A Primer on Key Psychological Mechanisms: https://www.cna.org/reports/2021/10/The%20Psychology-of-(Dis)information-A-Primer-on-Key-Psychological-Mechanisms.pdf
·What is Cognitive Hacking? - Portnox: https://www.portnox.com/cybersecurity-101/what-is-cognitive-hacking/
·AI-driven disinformation: policy recommendations for democratic resilience - PMC: https://pmc.ncbi.nlm.nih.gov/articles/PMC12351547/
·Human-algorithm interactions help explain the spread of ...: https://www.sciencedirect.com/science/article/abs/pii/S2352250X23002154
·The role of recommendation algorithms in the formation of ...: https://www.sciencedirect.com/science/article/pii/S0306457325001840
·Digital media and misinformation: An outlook on multidisciplinary ...: https://pmc.ncbi.nlm.nih.gov/articles/PMC8156576/
·The Science of Cognitive Hacking: Practical Defenses Against AI ...: https://kathrynshares.com/2025/05/02/the-science-of-cognitive-hacking-practical-defenses-against-ai-powered-manipulation/
·Entropy (information theory) - Wikipedia: https://en.wikipedia.org/wiki/Entropy_(information_theory)
·What can quantitative measures of semantic distance tell us about ...: https://www.sciencedirect.com/science/article/abs/pii/S2352154618301098
·Study: Community Notes on X could be key to curbing misinformation: https://giesbusiness.illinois.edu/news/2024/11/18/study--community-notes-on-x-could-be-key-to-curbing-misinformation
·Understanding the Contribution of Recommendation Algorithms on ...: https://scholarworks.boisestate.edu/cgi/viewcontent.cgi?article=1408&context=cs_facpubs
·Community notes increase trust in fact-checking on social media: https://pmc.ncbi.nlm.nih.gov/articles/PMC11212665/
·Incorporating Psychological Science Into Policy Making: The Case ...: https://pmc.ncbi.nlm.nih.gov/articles/PMC7615323/
·AI-driven disinformation: policy recommendations for democratic ...: https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1569115/epub
·Responding to Cognitive Security Challenges: https://stratcomcoe.org/cuploads/pfiles/web_Responing-to-Cognitive.pdf

