通用人工智能AGI测评DIKWP实验室 (AGI Evaluation DIKWP Laboratory)
2025-10-28

Design of Talent Growth Evaluation and Performance Assessment System Based on DIKWP Interaction Model



Shumei An, Yucong Duan, Shuaishuai Huang


International Standardization Committee of Networked DIKWP for Artificial Intelligence Evaluation (DIKWP-SC)
World Artificial Consciousness CIC (WAC)
World Conference on Artificial Consciousness (WCAC)
(Email: duanyucong@hotmail.com)
Abstract
The current era of artificial intelligence poses new requirements for talent cultivation and evaluation. Based on the DIKWP networked cognitive model (Data-Information-Knowledge-Wisdom-Purpose) theory proposed by Professor Yucong Duan, this paper designs a talent growth evaluation and performance assessment system covering education, enterprise, and policy aspects. First, we systematically introduce the semantic mathematics framework of the DIKWP model, clarifying its breakthrough compared to the traditional "Data-Information-Knowledge-Wisdom (DIKW)" hierarchical model by introducing the Purpose layer and two-way feedback mechanism, and its significance for evaluation. Next, based on DIKWP as the underlying semantic structure, we construct a talent growth process model, dividing learners' cognitive development into five stages. Based on this, we design a cognitive growth assessment system for the education sector, a job competency model and performance evaluation mechanism for the enterprise sector, and a general framework for industry talent growth for the policy sector, forming a trinity evaluation system. Then, we propose a "Knowledge Quotient (KQ)" layered white-box assessment method, using AI-assisted scoring and cognitive link tracking to quantitatively evaluate individual capabilities at each DIKWP layer. This assessment system emphasizes process evaluation and can generate capability radar charts, providing a basis for job matching and talent selection. Meanwhile, we integrate the concepts of "semantic sovereignty" and "sovereign AI" to explore how to ensure that the authority for setting talent evaluation standards, the ownership of assessment data, and the technological leadership of intelligent assessment platforms remain in local hands. 
Furthermore, the paper proposes a digital credit transfer and capability certification mechanism linked with the national credit bank and qualification framework, realizing mutual recognition and conversion of learning outcomes and supporting a lifelong learning ecosystem. Finally, through typical cases (such as intelligent manufacturing engineers, artificial intelligence trainers, etc.), we provide capability mapping tables and evaluation indicator systems for the DIKWP 5×5 interaction modules in career growth paths, verifying the feasibility and generalizability of the system. Together, the theoretical framework, evaluation system, and supporting mechanisms provide a reference for vocational education reform and talent evaluation innovation.
Keywords: DIKWP Model; Talent Growth Evaluation; Performance Assessment; Knowledge Quotient (KQ); Semantic Sovereignty; Credit Bank
1. Introduction
The vigorous development of Artificial Intelligence (AI) technology is profoundly changing the paradigm of talent cultivation and evaluation. Against the backdrop of industrial upgrading and digital transformation, there is a surge in societal demand for composite talents equipped with high-order cognitive abilities, innovative thinking, and clear goal orientation. However, traditional talent evaluation systems focus on the static mastery of knowledge and short-term performance, making it difficult to comprehensively measure an individual's learning growth and long-term development potential in complex environments. This contradiction has given rise to calls for a new type of talent growth evaluation system—one that can connect the three levels of education, career, and policy, dynamically track the entire development cycle of talent from knowledge acquisition to wisdom-based decision-making and Purpose-led guidance.
In recent years, the Chinese government has proposed the idea of "Great Vocational Education," emphasizing the need to break down the barriers between vocational education and general education, industrial demand, and lifelong learning, and to build a cross-departmental collaborative talent cultivation ecosystem. The State Council's "Outline for Building a Strong Education Nation (2024–2035)" clearly requires the construction of a lifelong learning system based on a national qualification framework and credit bank, facilitating the mutual recognition and conversion of different learning outcomes. The Ministry of Education and other departments are also promoting the construction of "Smart Education" and AI+ education pilot projects, launching digital infrastructures such as the National Vocational Education Smart Education Platform to support the digital transformation of skilled talent cultivation. Against this backdrop, talent evaluation systems need to keep pace with the times, introducing intelligent technology and new cognitive models to achieve process-oriented, full-link, and personalized evaluation, adapting to the requirements of high-quality talent cultivation in the new era.
The proposal of the DIKWP networked cognitive model offers a brand-new approach to this goal. This model extends the classic "Data-Information-Knowledge-Wisdom (DIKW)" cognitive hierarchy by adding the Purpose layer and tightly connects each layer in a dual-loop interactive form. This implies that in the cognitive process, not only can lower layers generate higher-level cognitive outputs, but higher-level wisdom and Purpose can also reversely regulate lower-level perception and learning behaviors, forming an adaptive closed loop. In contrast, the linear, unidirectional structure of the traditional DIKW model fails to reflect the "goal-driven" characteristic of cognition and struggles to evaluate learners' proactiveness and value orientation. In the context of AI, introducing the Purpose layer is particularly important for cultivating talents with autonomous planning and value judgment capabilities. Therefore, applying the DIKWP model to talent growth evaluation holds the promise of capturing the comprehensive capability development trajectory, from basic cognition to high-level decision-making and even life planning.
This paper aims to design a talent growth evaluation and performance assessment system based on the DIKWP interaction model to meet the needs of assessing comprehensive qualities of talent in the digital age. The article first elaborates on the theoretical basis of the DIKWP model and its modeling framework in semantic cognition and artificial intelligence, explaining its fundamental differences from traditional models and its unique advantages. Then, it explores how to construct a talent growth process model with DIKWP as the underlying semantic structure, and based on this, design evaluation mechanisms covering education, enterprise, and policy sectors to form a unified system connecting school education, enterprise training, and industry standards. Next, it proposes the "Knowledge Quotient (KQ)" layered white-box assessment system, combined with AI-assisted scoring and cognitive link tracking, to achieve in-depth analysis of learning and work processes, thereby supporting process evaluation of talent and job suitability analysis. Subsequently, by integrating the concepts of "semantic sovereignty" and "sovereign AI," we discuss how to keep evaluation standards under autonomous control, secure the ownership of assessment data, and maintain technological self-reliance in the assessment platform, so that the implementation of the evaluation system rests on reliable sovereign guarantees. To enhance the practicality of the system, this paper further proposes linking DIKWP assessment results with the national credit bank and qualification framework, constructing a digital credit transfer and capability certification mechanism to promote mutual recognition and connection between different education and training outcomes.
Finally, through typical case analyses, selecting emerging occupations such as intelligent manufacturing engineers and AI trainers, the article simulates their growth paths based on DIKWP, provides a mapping relationship table between the 25 DIKWP interaction modules and job capability requirements, and corresponding evaluation indicators, verifying the application value of this system and looking forward to future development directions.
In summary, this paper attempts to combine cutting-edge cognitive model theory with practical talent evaluation needs, creating a complete four-in-one solution of "theoretical framework + evaluation system + technical platform + policy guarantee," aiming to provide useful references for improving the quality of talent cultivation and evaluation reform in the new era.
2. Theoretical Basis: DIKWP Networked Cognitive Model
The DIKWP model, proposed by Professor Yucong Duan of Hainan University, is an extension and reconstruction of traditional cognitive hierarchy models. The traditional DIKW model divides the knowledge system into four levels: Data, Information, Knowledge, and Wisdom, implicitly assuming that the cognitive process proceeds unidirectionally from bottom to top. The DIKWP model, however, adds a higher level of Purpose above the four DIKW layers, forming a five-layer cognitive system, while emphasizing the networked, bidirectional interaction between the levels. This model essentially views the cognitive process as a closed-loop system that runs from perception to goal and then feeds back to perception.
In the DIKWP model, the meaning and function of each layer can be summarized as follows:
Data Layer (D): Corresponds to raw facts and perceptual input. Includes unfiltered raw data obtained by humans or intelligent agents through senses or sensors. E.g., students acquiring new characters and words from textbooks, temperature values collected by sensors.
Information Layer (I): Corresponds to the interpretation and organization of data, making it meaningful information. Useful messages are extracted by processing raw data through cleaning, classification, calculation, etc. E.g., calculating the average value from a pile of experimental data, extracting the main points from an article. This layer focuses on understanding: understanding the meaning represented by the data.
Knowledge Layer (K): Corresponds to the systematic integration and internalization of information, forming a knowledge system that can be transferred and applied. This includes establishing conceptual frameworks, theoretical models, and integrating scattered information. E.g., students forming a complete disciplinary knowledge structure from learned theorems and formulas; enterprise employees summarizing experience from different projects into industry Know-how. The knowledge layer emphasizes mastery and application: mastering principles and being able to apply them in different situations.
Wisdom Layer (W): Corresponds to the ability to analyze, judge, and creatively solve problems based on knowledge. Wisdom is not only about "knowing how to do," but also reflects "knowing why to do it and whether it should be done." At this level, individuals can perform global analysis of complex situations, weigh multiple factors, and make high-level decisions. E.g., engineers facing new technical challenges can propose innovative solutions by integrating existing knowledge and experience; students in project practice can flexibly apply what they have learned to solve real problems. The wisdom layer embodies high-order thinking: critical thinking, creativity, and comprehensive decision-making ability.
Purpose Layer (P): Corresponds to the cognitive subject's goals, motivations, values, and ultimate objectives. This is the key layer that distinguishes the DIKWP model from traditional models. The Purpose layer reflects "what goal I want to achieve" and the choices driven by value orientation. For individuals, the Purpose layer involves learning motivation, career ideals, moral values, etc.; for intelligent agents, it is the set objective function, behavioral guidelines, etc. The Purpose layer endows the cognitive process with directionality and self-drive: it provides goal traction for lower-level learning and action, and also adjusts the goals themselves through continuous feedback.
The DIKWP model emphasizes that the relationship between the layers is not linear and static but a dynamic bidirectional interaction. On the one hand, cognition can ascend level by level along the D→I→K→W→P direction: raw data is processed into information, which then forms knowledge to support wise decisions, ultimately serving to achieve certain purposes. On the other hand, and more importantly, higher levels act back on lower levels (P→W→K→I→D): purpose and wisdom guide which data we attend to and what information we acquire. For example, with a clear learning goal (P), students will collect materials in a targeted way (D) and filter out irrelevant information (I); experienced engineers, after establishing project goals, will re-evaluate the required data and knowledge reserves, thereby identifying and filling gaps. This top-down regulation enables the cognitive system to self-correct and self-improve, forming a feedback loop.
In contrast, the traditional DIKW model is regarded as a pyramid-like hierarchical structure, lacking explicit feedback paths between levels, especially lacking the description of "purposefulness." This leads to limitations in traditional models when simulating human cognition: they cannot reflect the motivation-driven and goal-oriented effects in human learning. The DIKWP model, by adding the Purpose layer and bidirectional mapping, becomes a network structure: changes in any layer will affect other layers through the network. For example, under the DIKWP framework, knowledge acquisition depends not only on existing information but also on the learner's purpose and wisdom judgment; conversely, new knowledge will affect the adjustment of their wisdom and goals. Such a structure is closer to the real human cognitive process and also provides an interpretable layered architecture for the design of artificial intelligence systems.
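The bidirectional interaction described above can be sketched as a minimal feedback loop: upward processing (D→I→K→W) produces a decision, while the Purpose layer filters what data is attended to in the first place (the P→D path) and is itself refined by the decision (W→P). All class, method, and variable names below are illustrative assumptions, not part of the formal DIKWP specification.

```python
# Minimal sketch of a DIKWP feedback loop (illustrative names only).

class DIKWPAgent:
    def __init__(self, purpose_keywords):
        # Purpose layer (P): goals expressed as keywords the agent cares about.
        self.purpose_keywords = set(purpose_keywords)
        self.knowledge = {}  # Knowledge layer (K): topic -> accumulated items

    def perceive(self, raw_items):
        # Data layer (D): top-down filtering -- Purpose decides which raw
        # data is worth attending to (the P -> D feedback path).
        return [x for x in raw_items if any(k in x for k in self.purpose_keywords)]

    def interpret(self, data):
        # Information layer (I): organize raw strings into (topic, content) pairs.
        return [(item.split(":")[0], item) for item in data if ":" in item]

    def integrate(self, info):
        # Knowledge layer (K): merge information into the knowledge store.
        for topic, content in info:
            self.knowledge.setdefault(topic, []).append(content)

    def decide(self):
        # Wisdom layer (W): choose the topic with the deepest knowledge;
        # the result also updates Purpose (the W -> P feedback path).
        if not self.knowledge:
            return None
        best = max(self.knowledge, key=lambda t: len(self.knowledge[t]))
        self.purpose_keywords.add(best)  # feedback: refine the goals
        return best

agent = DIKWPAgent(purpose_keywords=["robotics"])
raw = ["robotics: sensor basics", "cooking: recipes", "robotics: control loops"]
data = agent.perceive(raw)               # D, filtered by P
agent.integrate(agent.interpret(data))   # I -> K
print(agent.decide())                    # W, feeding back to P -> "robotics"
```

The point of the sketch is structural: removing the `purpose_keywords` filter collapses the model back to a unidirectional DIKW pipeline, which is exactly the limitation described above.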
It is worth noting that the DIKWP model also introduces the distinction between semantic space and conceptual space to formally describe the cognitive process. Research indicates that for any cognitive subject, there exists a relatively stable conceptual space internally (storing existing concepts and knowledge) and a dynamically changing semantic space (processing the semantic meaning of external input information). The DIKWP model, through the information flow of each layer, maps external data to the subject's internal semantic space, and then interacts with the conceptual space through the knowledge layer, continuously correcting the correspondence between concepts and semantics. For example, different raw data can have the same semantic meaning and be classified under the same concept ("different apples are classified under the same 'apple' concept"); when new data appears that does not conform to existing concepts, it prompts the adjustment or refinement of the concepts. This bidirectional interaction between semantics and concepts is considered key to understanding human cognition. It gives the DIKWP model ontological modeling capabilities, i.e., it can formally characterize knowledge units and their semantic connections, providing an operational framework for computers to implement cognitive processes.
The proposal of the DIKWP model not only helps explain human cognitive phenomena but also provides new ideas for the modeling of artificial intelligence and artificial consciousness (AC). Traditional AI systems are usually limited to the three layers of data, information, and knowledge, corresponding to capabilities such as pattern recognition, information retrieval, and inference calculation; they lack the characterization of higher-level wisdom and purpose. Therefore, many AI decisions are considered "unconscious" and difficult to explain (i.e., black boxes), because we cannot identify clear goal motivations or value orientations from them. Conversely, an artificial consciousness (AC) system should possess the full five-layer capabilities of DIKWP, including autonomous goal setting and value judgment. This means that when an AC system makes decisions, it considers not only whether the data and knowledge match, but also the legitimacy of the decision's purpose and its long-term impact, thus getting closer to human volitional behavior. The DIKWP model can therefore be seen as a structural blueprint for the evolution of AI to AC: by adding the dimension of "Purpose" to AI, its behavior is driven by internal goals, and self-adjustment is achieved through continuous feedback. For example, Professor Yucong Duan's team used the DIKWP model to conduct white-box evaluations of the "consciousness level" of mainstream large language models (LLMs), finding that current LLMs mostly stay at the knowledge reasoning layer, and their grasp of user purpose and self-purpose regulation is still insufficient. This confirms the importance of introducing the Purpose layer: only by elevating AI to the level of considering "why to do" rather than just "how to do" can it be called having preliminary consciousness.
From the above analysis, it can be seen that the DIKWP model has three fundamental differences compared to the traditional DIKW model, which are significant for talent evaluation:
Level Extension: Introduction of the Purpose Layer – DIKWP incorporates "Purpose" into the cognitive system, bringing subjective agency factors such as the evaluation subject's motivation, values, and goal planning into the assessment perspective. This compensates for the shortcoming of traditional evaluation which only focuses on results and neglects motivation. For example, in talent cultivation, it is necessary not only to evaluate how much knowledge a student has mastered (K layer) but also to see if they have clear learning/career goals, sense of responsibility, and social commitment (P layer). Purpose layer evaluation helps guide education to pay more attention to moral cultivation and professional literacy.
Structural Change: Top-Down Feedback – DIKWP establishes feedback loops for each layer, emphasizing the guiding role of higher-level cognition on lower-level behavior. In evaluation, this means we focus not only on the results but also on the cognitive process and strategies. For example, can an employee adjust methods based on the final goal when solving problems? Can students self-reflect and proactively identify and fill gaps during the learning process? These belong to the regulatory capabilities of higher levels over lower levels (feedback from W/P to D/I), which are often overlooked by traditional linear evaluation. Through the DIKWP framework, process-oriented and developmental evaluation mechanisms can be designed to capture the learner's thinking path and adjustment behavior in real time, thereby providing more comprehensive feedback.
Semantic Perspective: Interpretable Cognitive Chain – Since the DIKWP model subdivides the cognitive link and provides semantic-level definitions, it naturally supports white-box cognitive assessment. We can design tests for the capabilities of each layer, thus revealing, as through an X-ray, the strengths and weaknesses of the evaluated subject at each layer. This makes the evaluation interpretable: why does someone perform poorly at the decision-making level? It might be due to insufficient knowledge reserves at the knowledge layer, or lack of motivation at the Purpose layer. By contrast, traditional evaluation often gives a single score, lacking this link analysis capability. The DIKWP model lays the theoretical foundation for constructing a hierarchically clear and causally traceable evaluation indicator system. At the same time, it can also compare cognitive levels with specific task requirements, directly linking talent evaluation with job competency models, which will be fully utilized in the enterprise competency evaluation below.
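The per-layer diagnosis just described can be illustrated with a small scoring sketch. The layer names follow DIKWP; the pass threshold, score scale, and the heuristic that a weak lower layer likely explains weakness above it are assumptions for illustration, not a prescribed scoring rule.

```python
# Sketch of white-box layer diagnosis: given per-layer scores (0-100),
# report which DIKWP layers fall below a pass threshold and suggest a
# plausible causal reading. Threshold and messages are illustrative.

LAYERS = ["Data", "Information", "Knowledge", "Wisdom", "Purpose"]

def diagnose(scores, threshold=60):
    """scores: dict mapping layer name -> score. Returns (weak_layers, note)."""
    weak = [l for l in LAYERS if scores.get(l, 0) < threshold]
    if not weak:
        return weak, "All layers at or above threshold."
    # Heuristic: a deficit at a lower layer often explains weakness above it,
    # e.g. poor Wisdom-layer decisions may trace back to thin Knowledge.
    lowest = min(weak, key=LAYERS.index)
    return weak, f"Weak layers: {', '.join(weak)}; likely root cause at {lowest} layer."

scores = {"Data": 85, "Information": 78, "Knowledge": 55, "Wisdom": 52, "Purpose": 70}
weak, note = diagnose(scores)
print(note)  # Weak layers: Knowledge, Wisdom; likely root cause at Knowledge layer.
```

A single aggregate score would hide exactly the distinction this function surfaces: the same low Wisdom score reads differently depending on whether Knowledge is also weak.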
In summary, the DIKWP networked cognitive model provides a full-link, highly interpretable theoretical framework. It integrates the latest research achievements in the fields of "semantic mathematics" and "artificial consciousness" in artificial intelligence, breaking through the limitations of traditional cognitive models. Applying this model to talent growth evaluation can systematically characterize the comprehensive development trajectory of talent from basic abilities to high-order abilities and value orientation, which is significant for improving the quality of education and training and the scientific nature of talent selection. In the following text, we will use DIKWP as the cornerstone to explore how to construct a talent growth process model and design specific evaluation and assessment mechanisms.
3. Talent Growth Process Model and Evaluation System Design
Using the DIKWP model to model the talent growth process allows viewing individual learning and career development as a process evolving sequentially from primary perception to advanced purpose. Based on this, we propose a division of talent growth stages based on the five DIKWP layers, and design corresponding evaluation and performance measurement mechanisms for the education, enterprise, and policy sectors respectively, enabling the three to connect and integrate. The system design for these three sectors is elaborated below.
3.1 Education Sector: Student Cognitive Growth Assessment System
In the education field, we are committed to building a student cognitive growth assessment system that runs through the entire cycle of course teaching, process assessment, practical training, and graduation evaluation, ensuring that students achieve sequential cultivation and comprehensive development of DIKWP five-layer capabilities during their school years.
1. Curriculum System Design and Cognitive Map:
We plan curriculum content hierarchically based on the DIKWP model, cultivating students' cognitive abilities at different levels in stages. For example, the basic stage focuses on training the Data and Information layers, offering courses emphasizing fact memory and concept understanding to cultivate students' perception and understanding of basic knowledge points; the improvement stage focuses on the Knowledge layer, setting up problem-oriented courses and experiments to help students integrate scattered information, form systematized knowledge, and learn to apply what they have learned to solve typical problems; the advanced stage focuses on the Wisdom and Purpose layers, encouraging students to participate in research-based learning and project-based learning to cultivate their analytical evaluation ability and goal planning ability. For example, exploratory projects can be introduced, requiring students to start from posing questions (P layer motivation), collect data (D layer), analyze to form information (I layer), use knowledge to solve problems (W layer), and finally reflect on the project's significance (P layer) and refine new knowledge. This curriculum design follows the DIKWP cognitive link from "perception—understanding—application—innovation—reflection," ensuring the hierarchical progression of teaching content.
At the same time, drawing on the concept of the DIKWP knowledge map proposed by Professor Yucong Duan's team, we present the students' mastery of knowledge points in the form of a semantic network. The specific approach is: construct a knowledge graph for a certain course or professional field, connect the knowledge points according to affiliation and prerequisite relationships, and mark the current mastery status of each node for the student (known/unknown/degree of mastery). Through this knowledge map, teachers and students can intuitively understand "what content has been mastered, what still needs to be learned" at the Data, Information, and Knowledge levels. More importantly, the map can also incorporate information from the Wisdom and Purpose layers: for example, marking which high-order abilities (W layer) have not yet been reached, and what the current learning goals (P layer) are. This visual cognitive map helps teachers in personalized teaching design—updating teaching plans for each student (e.g., focusing on explaining unknown content), while guiding students to independently plan their learning paths (e.g., choosing relevant course modules based on their career intentions). After completing each learning unit, the knowledge map is updated, like exploring a map in an RPG game, continuously unlocking new knowledge areas. This not only enhances the fun and sense of achievement in learning but also achieves a dynamic match between teaching content and students' cognitive states.
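A minimal version of such a knowledge map can be sketched as a directed graph of knowledge points with prerequisite edges and per-node mastery flags, from which the system recommends the "unlocked" points a student is ready to learn. The topic names and graph here are invented for illustration and are not taken from the paper.

```python
# Sketch of a course knowledge map: nodes are knowledge points, edges are
# prerequisite relations, each node carries a mastery flag. recommend()
# lists points whose prerequisites are all mastered but which the student
# has not yet mastered. Topics are illustrative.

prereqs = {
    "variables": [],
    "loops": ["variables"],
    "functions": ["variables"],
    "recursion": ["functions", "loops"],
}
mastered = {"variables": True, "loops": True, "functions": False, "recursion": False}

def recommend(prereqs, mastered):
    """Return knowledge points that are unlocked but not yet mastered."""
    return sorted(
        node for node, reqs in prereqs.items()
        if not mastered[node] and all(mastered[r] for r in reqs)
    )

print(recommend(prereqs, mastered))  # ['functions']
```

As nodes flip to mastered after each learning unit, the recommendation set changes, which is the "unlocking new knowledge areas" behavior described above.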
2. Phased Assessment:
Multi-level regular assessments are set up during the teaching process to evaluate the development of students' abilities at each DIKWP layer. Different from traditional tests that purely examine knowledge point mastery, the test questions we design will cover various levels from basic cognition to high-order thinking, focusing on process and white-box evaluation. For example: quizzes or unit tests can include objective questions (multiple choice, true/false) to test understanding at the Information level, and short answer questions to evaluate application ability at the Knowledge level; mid-term and final exams include open-ended questions or case analysis questions to test students' analysis and synthesis (Wisdom layer) and reflection abilities—such as asking students to evaluate the pros and cons of a certain plan and propose improvements (reflecting W layer), and describe the social significance or personal gains of the plan (reflecting P layer). After each test, the teacher not only gives right/wrong and scores but also provides layered feedback based on the white-box assessment concept: for example, pointing out whether the student's memory at the Data layer is accurate, whether the understanding at the Information layer is thorough, whether the application at the Knowledge layer is correct, whether the reasoning at the Wisdom layer is rigorous, and whether the goal at the Purpose layer is clear. Such feedback can help students locate their weak links at which cognitive level. Subsequently, the system automatically updates the student's DIKWP knowledge map (e.g., marking wrongly answered knowledge points as "not mastered") and pushes personalized practice questions to the student, with consolidation exercises specifically targeting their weak levels.
For example, if the assessment finds that a student scores low in comprehensive application at the "Wisdom layer," the system might recommend some comprehensive case analysis questions or project tasks for training to enhance the ability at that layer. This process reflects a two-way feedback learning evaluation: each assessment not only tests learning outcomes (Information layer feedback on right/wrong) but also updates the model of the student's cognitive state (Knowledge layer), which in turn triggers targeted training (Wisdom layer creates questions) and adjusts learning goals (Purpose layer). Through a continuous cycle of assessment-feedback-improvement, students' abilities are enhanced layer by layer, and teachers' teaching is adjusted in a timely manner, truly achieving teaching according to aptitude.
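The assess → update map → push practice cycle above can be sketched in a few lines. The question bank, layer tags, and the rule "target follow-up practice at the layer with the most misses" are illustrative assumptions about how such a system might be wired, not the paper's prescribed implementation.

```python
# Sketch of the assessment-feedback cycle: wrong answers mark knowledge
# points as not mastered, and follow-up practice targets the DIKWP layer
# with the most misses. All data and rules are illustrative.

from collections import defaultdict

# Each answered question is tagged with a knowledge point and a DIKWP layer.
results = [
    {"point": "ohms_law", "layer": "Knowledge", "correct": False},
    {"point": "unit_conversion", "layer": "Information", "correct": True},
    {"point": "circuit_case_1", "layer": "Wisdom", "correct": False},
    {"point": "circuit_case_2", "layer": "Wisdom", "correct": False},
]

def update_map_and_recommend(results):
    mastery = {}
    misses_per_layer = defaultdict(int)
    for r in results:
        mastery[r["point"]] = r["correct"]  # update the knowledge map
        if not r["correct"]:
            misses_per_layer[r["layer"]] += 1
    # Direct follow-up practice at the layer with the most misses.
    weakest = max(misses_per_layer, key=misses_per_layer.get) if misses_per_layer else None
    return mastery, weakest

mastery, weakest = update_map_and_recommend(results)
print(weakest)  # Wisdom
```

In this toy run the student missed two Wisdom-layer case questions, so the system would queue comprehensive case-analysis practice, matching the example in the text.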
3. Comprehensive Practice and Project Training:
To cultivate students' high-level abilities, we introduce a large number of project-based learning and practical training sessions in the curriculum plan. These comprehensive practices aim to let students experience the complete DIKWP cognitive cycle, apply what they have learned in real or simulated situations, thereby exercising their wisdom in decision-making and purpose planning abilities. For example, interdisciplinary team projects or graduation designs can be arranged in senior years, requiring students to independently determine project goals (corresponding to Purpose layer P), collect relevant materials and data (D layer), analyze and extract valuable information (I layer), construct problem-solving plans and prototypes (K and W layers), and finally evaluate the effectiveness of the plan and reflect on the significance of the project (W layer evaluation and P layer reflection). In this process, students will play multiple roles from information collectors, knowledge integrators, plan formulators to goal drivers, tangibly experiencing the similarities, differences, and connections of cognitive activities at each layer. For example, in the "Intelligent Customer Service Robot Training" project (a typical task for AI trainers), students need to acquire raw data such as conversation logs (D), perform information extraction like intent classification (I), build dialogue models combining business knowledge (K), optimize model training strategies based on indicators (W), while keeping the predetermined performance goals (P) consistent throughout to drive decisions at each stage. These comprehensive practices are guided by mentors throughout the process, supplemented by students' self-recording and peer review mechanisms: students regularly report their ideas, encountered problems, and adjusted plans to mentors and group members. 
Mentors then provide targeted guidance based on DIKWP levels, for example, reminding them to pay attention to whether they have deviated from the project goals (P layer) or whether sufficient knowledge background (K layer) was considered in the plan design. Through immediate feedback and continuous improvement, students constantly calibrate their cognitive processes in practice. For example, when a project encounters a bottleneck, the mentor might guide students to re-examine whether the initial data was sufficient (D layer) or whether the goal setting was reasonable (P layer), thereby directing the students' attention back to the lower-level foundation or higher-level purpose. This training model effectively cultivates students' ability to integrate and apply cognition at all levels, enabling them to enhance their self-reflection and autonomous learning abilities while completing tasks.
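The full project cycle from the intelligent customer service example can be sketched as a staged pipeline in which a Purpose-layer goal (a target metric) governs the Wisdom-layer evaluation. The "model" here is a trivial keyword matcher; the logs, target accuracy, and all function names are illustrative assumptions, not a real training pipeline.

```python
# Sketch of the project cycle: D (logs) -> I (intent cues) -> K (apply cues)
# -> W (evaluate against goal) -> P (target metric). Entirely illustrative.

TARGET_ACCURACY = 0.75  # Purpose layer (P): the predetermined performance goal

logs = [  # Data layer (D): raw conversation logs with known intents
    ("where is my order", "shipping"),
    ("track my package", "shipping"),
    ("refund please", "refund"),
    ("i want my money back", "refund"),
]

def extract_cues(logs):
    # Information layer (I): derive intent -> keyword cues from the logs.
    cues = {}
    for text, intent in logs:
        cues.setdefault(intent, set()).update(text.split())
    return cues

def classify(text, cues):
    # Knowledge layer (K): apply the learned cues to an utterance.
    scores = {intent: len(set(text.split()) & kw) for intent, kw in cues.items()}
    return max(scores, key=scores.get)

def evaluate(logs, cues):
    # Wisdom layer (W): measure against the P-layer goal; a real project
    # would iterate on data and strategy until the goal is met.
    hits = sum(classify(text, cues) == intent for text, intent in logs)
    return hits / len(logs)

cues = extract_cues(logs)
acc = evaluate(logs, cues)
print(acc >= TARGET_ACCURACY)  # True on this toy training data
```

The design point mirrors the text: the `TARGET_ACCURACY` constant set at the start (P layer) is what every later stage is judged against, rather than being an afterthought.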
4. Graduation Evaluation and Capability Profile:
When students complete their studies, a comprehensive evaluation of their DIKWP capability development throughout the learning stage is needed to determine their graduation eligibility or grant corresponding capability certificates. Traditional graduation assessments often limit themselves to examining subject knowledge (such as written exams, theses) while neglecting high-order abilities and comprehensive qualities. We propose introducing DIKWP five-layer indicators into graduation evaluation for a holographic profile assessment of students. Specifically, thesis (or design) defense and comprehensive exams should cover: mastery of core professional knowledge and skills (Information layer I and Knowledge layer K), complex problem analysis and innovation ability (Wisdom layer W), career planning awareness and values (Purpose layer P), etc. For example, defense questions include not only asking students to explain key concepts and data sources in the thesis (testing their I layer, D layer abilities) but also requiring them to evaluate the pros and cons of the plan and improvement ideas (testing W layer ability), and can also inquire about their future plans or the significance of the research (testing P layer awareness). To improve the objectivity and efficiency of the evaluation, we can use AI tutoring/grading systems to assist in scoring: using natural language processing and semantic analysis technology to semantically parse students' theses and defense statements, mapping their answers onto the DIKWP knowledge graph for each layer, and scoring the performance at each layer. 
For example, an AI grading system can identify whether the data and information points involved in the student's thesis are accurate and complete (D/I layer), whether the knowledge application is correct (K layer), whether the argumentation process reflects logic and innovation (W layer), and whether the conclusion and outlook reflect the author's purpose awareness and social responsibility (P layer). Through this semantic scoring, the system generates a DIKWP capability report for each student, displaying the student's relative strengths and weaknesses across the five layers in the form of a radar chart or bar chart. At the same time, we combine this evaluation result with the credit system: converting the assessment results of different levels into corresponding credits or capability levels. For example, if a student performs outstandingly at the Wisdom and Purpose layers, additional innovation credits or honorary titles can be awarded; if a student has defects at the Knowledge layer, their graduation eligibility may require supplementary training or practice to compensate. The credit conversion rules can be based on the national credit bank framework, linking them with professional qualification standards (detailed in the policy section below). The final graduation evaluation report serves not only for degree recognition but also provides rich information to employers—it outlines the graduate's capability curve (DIKWP five dimensions), helping employers select talents that match the job requirements.
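The layer-by-layer scoring and radar-chart report described above can be sketched in a few lines. The keyword cues and the counting rule below are purely illustrative assumptions; a real AI grading system would rely on semantic parsing and a DIKWP knowledge graph rather than keyword counts.

```python
from typing import Dict, List

# Illustrative cue words per DIKWP layer (assumed for this sketch only).
LAYER_CUES: Dict[str, List[str]] = {
    "D": ["data", "dataset", "measurement"],
    "I": ["analys", "pattern", "interpret"],
    "K": ["method", "theory", "framework"],
    "W": ["trade-off", "decision", "innovation"],
    "P": ["goal", "impact", "responsibility"],
}

def score_dikwp(text: str, max_score: int = 5) -> Dict[str, int]:
    """Score a thesis or defense transcript per DIKWP layer by cue counts,
    capped at max_score, yielding radar-chart-ready values."""
    lowered = text.lower()
    return {
        layer: min(max_score, sum(lowered.count(cue) for cue in cues))
        for layer, cues in LAYER_CUES.items()
    }

report = score_dikwp(
    "The dataset was analysed; a trade-off between goal and method guided the decision."
)
print(report)  # {'D': 2, 'I': 1, 'K': 1, 'W': 2, 'P': 1}
```

The five resulting values map directly onto the radar chart's five axes; the same report could drive the bar-chart view mentioned above.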
Through the aforementioned education sector assessment system, we achieve a closed-loop integration of teaching and evaluation: teaching activities advance according to DIKWP levels, while assessment feedback continuously calibrates the teaching path and students' effort direction, ultimately ensuring students' development in data acquisition, information understanding, knowledge application, wisdom innovation, and purpose guidance. Research shows that this multi-level assessment method enables students to receive targeted cultivation and feedback at different stages, contributing to their comprehensive growth. More importantly, it cultivates students' metacognitive abilities (i.e., the ability to recognize and regulate their own cognitive processes): through frequent feedback and self-examination, students gradually learn to think about their learning (D/I/K) strategies and goals from a higher level (W/P), which is precisely the characteristic of lifelong learners needed in future society. In summary, the application of the DIKWP model in the education sector provides a feasible path to address the drawbacks of current exam-oriented education, promote quality education, and personalized cultivation.
3.2 Enterprise Sector: Job Competency Model and Performance Evaluation Mechanism
In the corporate organizational environment, the core of talent evaluation lies in the assessment of job competency and performance feedback. We introduce the DIKWP model into enterprise human resource management, constructing a job competency model and performance assessment loop based on DIKWP layered capabilities, achieving full-process management of employees from recruitment selection, training development, to assessment and promotion.
1. Competency Profile Based on DIKWP:
First, create capability profiles for key positions, clarifying the distribution of required capabilities across the five DIKWP layers for each position. Traditional competency models often include several dimensions (such as professional skills, communication skills, leadership, etc.), but lack a unified structure and internal connections. Using the DIKWP framework, any job requirement can be broken down and mapped to the five levels: Data, Information, Knowledge, Wisdom, and Purpose. For example, for the "Software Engineer" position, we can analyze it as follows:
Data Layer (D): Required basic knowledge and skills, such as mastery of programming language syntax, common algorithms and data structures, use of code debugging tools, etc. This reflects mastery of basic facts/tools.
Information Layer (I): Requires the ability to understand business requirements and analyze problems. For example, being able to read requirement documents to extract key information, understand user feedback and summarize problem phenomena. This reflects information acquisition and understanding ability.
Knowledge Layer (K): Corresponds to professional skills and knowledge application. Such as proficiently using design patterns for architecture design, applying algorithm knowledge to optimize program performance, etc. This is the core professional knowledge capability of the position.
Wisdom Layer (W): Reflects innovation, judgment, and decision-making ability. For example, weighing different technical solutions based on project constraints, comprehensive analysis ability to troubleshoot complex failures, and even foresight of new technology trends and team guidance ability. Belongs to complex problem-solving and decision-making ability.
Purpose Layer (P): Related to career goals, values, and sense of responsibility. Such as having product awareness (grasping the user value and business goals of the software), identifying with the company's mission and vision and using this to guide work input, having a career development plan (aspiring to become an architect, etc.). Reflects goal-driven and value orientation.
The capability requirements for each layer above can be further detailed and quantified based on the job level. For example, for a junior programmer, the emphasis might be on D/I layer capabilities (such as writing correct code and understanding simple requirements); a mid-level engineer needs strong K layer capabilities (such as independently completing module design and mastering system tuning knowledge); senior engineers and architects place more emphasis on W layer (complex system design decisions) and P layer (leading team direction) capabilities. Through such layered analysis, we can obtain a capability evolution path for a position from junior to senior. Taking a software engineer as an example, the growth path can be described as: junior engineers focus on coding and debugging (D/I), mid-level engineers design and optimize (K), and senior engineers plan architecture and strategy (W/P). This DIKWP-characterized path is not only structurally clear but also connects naturally with educational cultivation (the university stage cultivates D/I/K, while mid-to-senior roles train W/P). Professor Yucong Duan's team has conducted similar DIKWP capability breakdowns for more than ten typical positions, providing reference examples for enterprise competency model development. Enterprises can formulate their own DIKWP capability dictionaries for positions based on these analyses, breaking down each position's requirements into several measurable capability modules.
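As a minimal sketch, the junior-to-senior emphasis path above can be encoded as a small lookup table. The level names and layer pairings follow the software-engineer example in the text; the helper function is a hypothetical illustration of how a capability dictionary could answer "what should this person develop next."

```python
# Hypothetical capability dictionary for the software-engineer path:
# which DIKWP layers each job level emphasizes (insertion order matters).
LEVEL_EMPHASIS = {
    "junior engineer": ("D", "I"),     # coding and debugging
    "mid-level engineer": ("K",),      # design and optimization
    "senior engineer": ("W", "P"),     # architecture and strategy
}

def next_emphasis(level: str) -> tuple:
    """Return the layers an employee at `level` should develop next,
    i.e. the emphasis of the following level on the path."""
    order = list(LEVEL_EMPHASIS)
    i = order.index(level)
    return LEVEL_EMPHASIS[order[i + 1]] if i + 1 < len(order) else ()

print(next_emphasis("junior engineer"))  # ('K',)
```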
Furthermore, we introduce the DIKWP x DIKWP 25 interaction modules into competency model construction. DIKWP x DIKWP refers to the Cartesian product of two five-layer processes, generating 25 types of inter-layer interaction scenarios, each of which can be seen as a specific capability unit. For example, the "Data → Information (D→I)" module represents the ability to process data into useful information (e.g., data analysis, intelligence extraction); "Information → Knowledge (I→K)" represents the ability to integrate scattered information into a knowledge system (e.g., writing reports summarizing rules); "Knowledge → Wisdom (K→W)" corresponds to the ability to use knowledge to make wise decisions (e.g., solving complex problems); "Wisdom → Purpose (W→P)" corresponds to the ability to elevate successful experiences into strategic goals (e.g., formulating long-term plans); and "Purpose → Data (P→D)" represents the ability to collect and monitor data guided by goals. Taken together, the 25 modules cover almost all cognitive transitions that may occur at work. For instance, a marketing manager might need modules including I→K (summarizing strategies from market information), K→W (making decisions based on experience), W→P (elevating successful tactics into marketing strategy), and P→D (determining data monitoring indicators based on strategy). We can further associate each module with specific behavioral indicators, such as the "Information → Knowledge" module corresponding to "the ability to write high-quality analysis reports (indicators such as number of viewpoints in the report, amount of data cited, and number of effective conclusions drawn)." Through this modular approach, enterprises can establish a unified yet fine-grained capability indicator library to evaluate the performance of employees in different positions and at different levels.
These 25 DIKWP modules can be evaluated individually or combined according to job requirements, offering high flexibility and adaptability. In the case analysis section below, we will provide definitions of the DIKWP 25 modules and examples of their evaluation indicators, and show typical combinations of these modules for certain positions.
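The 25-module construction described above is literally a Cartesian product, so it can be sketched directly. The marketing-manager module combination is taken from the text; the code itself is only an illustrative sketch of how a unified module library could be generated and queried.

```python
from itertools import product

LAYERS = ["D", "I", "K", "W", "P"]

# DIKWP x DIKWP: the Cartesian product of two five-layer processes
# yields the 25 source→target interaction modules.
MODULES = [f"{src}→{dst}" for src, dst in product(LAYERS, LAYERS)]

# A position profile selects a subset of modules; this combination is the
# marketing-manager example from the text.
marketing_manager = ["I→K", "K→W", "W→P", "P→D"]
assert all(m in MODULES for m in marketing_manager)

print(len(MODULES))   # 25
print(MODULES[:5])    # ['D→D', 'D→I', 'D→K', 'D→W', 'D→P']
```

Because every module is a (source, target) pair over the same five layers, the library stays uniform across positions: only the selected subset and the attached behavioral indicators differ.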
2. Behavioral Indicator Design and Recruitment Assessment:
Based on the competency model, the next step is to translate the capability requirements of each layer into specific behavioral indicators and assessment tools for recruitment selection and performance appraisal. During the recruitment phase, we can design situational assessment methods targeting candidates' competencies at each DIKWP layer. For example:
Assessment of Data → Information Capability: Can be assessed through Behavioral Event Interviews (BEI) asking candidates how they extracted useful information from messy data in the past; or set written test questions requiring the organization of key points from raw materials. For technical positions, they can also be asked to analyze a data report on the spot and identify problems.
Assessment of Information → Knowledge Capability: Case studies can be designed, asking candidates to read a business scenario description and propose a plan with justification, to evaluate their ability to integrate information to form a plan (knowledge). At the same time, pay attention to whether they connect scattered information into systematic cognition, reflecting the level of knowledge structuring.
Assessment of Knowledge → Wisdom Capability: Situational interview questions can be used, such as "How would you handle complex problem X?" or "Discuss an open-ended challenge in a group," observing the candidate's process of using knowledge to solve new problems and make judgments. Their balancing of factors like risk and resources during decision-making can also be examined to judge Wisdom layer capability.
Assessment of Wisdom → Purpose Capability: Candidates can be asked about their career plans, how they set goals in past work, and how they adjusted goals when facing failures, to understand their goal-setting and self-driving abilities. For example, ask them to describe an experience with a long-term project, focusing on how they set phased goals and repositioned after setbacks.
Assessment of Purpose → Data Capability: Role-playing can be used, giving a project goal and asking the candidate to list the key data or indicators they think need attention, to evaluate their information sensitivity under goal orientation. For example: "If the goal is to improve customer satisfaction, what data would you focus on collecting?" From this, judge their strategic thinking and execution focus.
In addition, cross-layer soft skills can be assessed, such as cross-departmental collaboration (team knowledge sharing at the Information → Knowledge level) and innovation awareness (a composite Knowledge → Wisdom → Purpose capability). These can be probed through follow-up questions in structured interviews, situational simulations, and similar formats. In short, applying the DIKWP model in recruitment means moving beyond resume screening and single written tests to a comprehensive understanding of the candidate's potential and shortcomings at different cognitive levels through diverse situational assessments. This helps enterprises select the talents that best fit job requirements, rather than coarsely screening on education or experience alone.
Existing practice suggests that DIKWP layered interviews can improve the accuracy and fairness of talent selection. For example, a company added a "Wisdom layer" question when recruiting product managers: "Please describe an experience where you balanced user needs and development resources in a project." Many candidates with only superficial knowledge could not provide in-depth answers and were screened out, while those who could analyze trade-offs from multiple perspectives and showed clear product goal awareness stood out. Those hired through this process indeed performed better after joining. This shows that DIKWP layered assessment can effectively identify high-potential talents and avoid being misled by superficial experience. It also gives candidates more room to demonstrate their strengths: some technical personnel may have average verbal skills yet can still prove their strength by scoring highly in data analysis and coding tests (D→I, I→K modules), reducing the bias of traditional interviews toward extroverted personalities.
3. Cultivation and Performance Loop:
After talent enters the enterprise, the DIKWP model can further guide their cultivation, development, and performance management, forming a closed-loop system of selection-cultivation-assessment-feedback. Specifically:
During the onboarding training phase, personalized cultivation plans are formulated based on the new employee's DIKWP assessment results (obtained through entry tests or probationary period assessments). For example, for employees who are strong in the Information and Knowledge layers but lack in the Wisdom layer, arrange for them to participate in projects that require solving complex problems to exercise decision-making ability in practice; for newcomers lacking motivation in the Purpose layer, strengthen corporate culture and vision education, clarify job significance, and stimulate a sense of responsibility. This is similar to the personalized learning path in the education sector, but in the enterprise, it is jointly completed by the HR department and department supervisors, ensuring person-job fit and making the best use of talent.
In terms of job rotation and promotion, talent is selected and trained based on DIKWP capability requirements. For example, employees identified as having high-level potential (W/P) are included in management trainee or reserve cadre programs, assigned cross-departmental projects, and provided with mentorship to accelerate their growth at the Wisdom and Purpose levels. For skilled talents, their responsibilities are gradually increased according to the competency profile levels: reaching a certain level of knowledge and skills (K layer) allows promotion to senior engineer, possessing decision-making ability (W layer) and leadership willingness (P layer) makes them considered for department head. Thus, the promotion path is clear and evidence-based, helping employees understand organizational expectations and facilitating fair selection by the organization.
In performance appraisal, introducing DIKWP layered indicators makes the appraisal more comprehensive and balanced. Traditional KPIs often focus on short-term performance (usually corresponding to work results at the Knowledge layer K) and some behavioral evaluations (such as work attitude at the Information layer I), while ignoring innovation and long-term goals. Through the DIKWP framework, we can set different emphasis KPI combinations for employees at different levels. For example, the assessment focus for front-line execution personnel can be on data accuracy, task completion quality (D/I/K layer results); in addition to business indicators, middle managers should have indicators such as team knowledge sharing, cross-departmental collaboration (I→K), and problem-solving count (W layer); top managers should be assessed on strategic goal achievement rate, vision communication effectiveness (P layer), etc. At the same time, to encourage employees' long-term development, growth indicators can be added to performance, such as the number of new skills mastered within a year (I→K layer), the number of innovative suggestions adopted (K→W layer), the cost saved/value created for the company (W layer), and the number of successors cultivated in the team (W→P layer), etc. Although these indicators are not as intuitive as sales figures, they can guide employees to focus on long-term improvement and organizational value, consistent with corporate strategy.
In feedback and adjustment, a DIKWP capability monitoring system is used to achieve real-time tracking and guidance of employee growth. For example, we can establish a digital "capability file" for each employee, recording the change curve of their performance scores at each layer over time. During performance interviews, managers can present the employee's DIKWP radar chart, showing clearly which dimensions have improved compared to the last time and which have stagnated. This intuitive feedback helps employees recognize their shortcomings, thereby formulating improvement plans together with the manager (such as applying for relevant training programs). For the organization, summarizing all employees' DIKWP files can also reveal the overall structure of the talent team: for example, whether the Wisdom layer is generally weak, requiring strengthening of innovation culture cultivation; or certain departments score low on the Purpose layer (mission identity), warning of potential team cohesion issues. Based on these findings, the human resources department can adjust the allocation of training resources and talent introduction strategies to address the shortcomings.
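A minimal sketch of the digital "capability file" and the per-layer progress comparison used in performance interviews. The review dates and scores below are hypothetical; only the structure (timestamped DIKWP scores, per-layer deltas, stagnation flags) follows the text.

```python
from datetime import date

# Hypothetical capability file: timestamped DIKWP scores per review cycle.
capability_file = [
    (date(2024, 6, 30),  {"D": 70, "I": 65, "K": 60, "W": 40, "P": 45}),
    (date(2024, 12, 31), {"D": 75, "I": 70, "K": 68, "W": 42, "P": 45}),
]

def progress_report(history):
    """Per-layer delta between the two most recent reviews, feeding the
    radar-chart comparison shown in performance interviews."""
    (_, prev), (_, curr) = history[-2], history[-1]
    return {layer: curr[layer] - prev[layer] for layer in curr}

delta = progress_report(capability_file)
stagnant = [layer for layer, d in delta.items() if d == 0]
print(delta)     # {'D': 5, 'I': 5, 'K': 8, 'W': 2, 'P': 0}
print(stagnant)  # ['P'] -> flag for the joint improvement plan
```

Aggregating such files across employees gives the organization-level view mentioned above, e.g. a department whose average P-layer delta stays at zero over several cycles.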
Through the above measures, a talent development loop linked by DIKWP is formed within the enterprise: recruitment finds newcomers matching job requirements (entry quality), the cultivation process continuously enhances their capabilities at each layer (process value-added), performance evaluation considers both current output and capability growth (result-oriented), and feedback returns to talent strategy adjustment (closed loop). This system ensures equal emphasis on short-term performance and long-term talent growth. As research indicates, DIKWP-based talent management can help enterprises systematically understand talent growth paths, set standards and indicators consistent with capabilities at each layer, thereby balancing employees' current contributions and future potential.
A typical case is the change in an innovative technology enterprise after implementing this system: previously, the company's performance appraisal almost solely looked at sales figures and product development progress. Since introducing DIKWP indicators in departments like R&D and marketing, the number of employees proactively learning new knowledge and technologies (K layer) increased significantly, and cross-departmental knowledge sharing activities doubled (I→K indicator rose). More interestingly, some engineers began to proactively think about the user value of products and the company's strategic direction (P layer awareness enhanced), proposing many constructive suggestions. One engineer, due to demonstrating outstanding W/P layer capabilities, was exceptionally promoted to participate in strategic planning. This validates the role of the DIKWP evaluation system in identifying high-potential talent and motivating comprehensive development.
4. Industry Talent Map and Standard Connection:
Besides applying the DIKWP model internally within enterprises, we also encourage industry organizations and government departments to use it for macro-level talent planning. For example, constructing an industry talent map that classifies positions across the entire industry by DIKWP growth stage and maps talent supply and demand. Government and human resources departments can use such maps to identify talent gaps and cultivation priorities and provide policy support. For instance, if talent in an emerging industry is mostly stuck at the Knowledge layer (intermediate skills) and high-end Wisdom/Purpose-layer talent is lacking, the government can launch targeted advanced training programs and industry-university-research cooperation projects to raise the overall level of the industry.
Furthermore, DIKWP thinking can be integrated when formulating national occupational standards and qualification certifications. Currently, many vocational qualification exams are divided by level but lack a clear cognitive progression logic. We suggest aligning vocational qualification levels with DIKWP stages: junior qualifications mainly examine Data and Information layers (operational skills, basic knowledge), intermediate qualifications focus on Knowledge layer application (professional skills, case analysis), senior qualifications highlight Wisdom layer decision-making and Purpose layer professional literacy (comprehensive cases, strategic thinking, ethical responsibility, etc.). For example, the national occupational skill standard for "Artificial Intelligence Trainer," a new occupation released in 2021, already includes job requirements for different levels, progressing from data processing to model optimization. We can further refine the content of exams at each level: Level 1 (the highest) should, in addition to algorithm optimization, assess the ability to formulate AI training process specifications and train teams (W/P layers), to ensure that Level 1 certificate holders have the ability to lead AI training projects. This approach enhances the value of vocational qualification certification, making it truly reflect the comprehensive competency of the certificate holder.
Currently, although there are no cases of directly applying DIKWP comprehensively to industry standards domestically or internationally, related research has shown the potential of this direction. For example, some scholars have proposed that DIKWP analysis can be used as a structured tool for job descriptions and capability requirements, helping to compile more scientific job specifications; meanwhile, the DIKWP model is also being experimentally used as the standard basis for AI system evaluation. From this, it can be inferred that introducing the DIKWP framework at the policy-making level will help establish a unified language environment for talent evaluation, promoting the connection between education, training, and employment standards.
Overall, after adopting the DIKWP model, the talent evaluation and development system in the enterprise sector achieves precise alignment between individual growth paths and organizational needs. Each employee's development is no longer blind or fragmented, but mapped onto a clear track of cognitive progression; the organization's management of talent also shifts from extensive to intensive, from results to process. As industry insiders say: "Viewing employees through the lens of Data-Information-Knowledge-Wisdom-Purpose, you will find their potential is far more than the numbers on the performance sheet." The DIKWP model helps enterprises see the "part of the iceberg below the water" for talent, thereby making wiser decisions on hiring and cultivation.
3.3 Policy Sector: General Model for Industry Talent Growth and Evaluation Framework
At the macro policy level, introducing the DIKWP model helps establish a unified talent growth stage model and evaluation reference framework, guiding the alignment of talent cultivation and assessment standards across various industries and regions with national strategic needs.
1. DIKWP Talent Growth Stage Model:
We recommend that government and industry associations jointly propose a general talent growth stage model, dividing career development into five stages corresponding to the five DIKWP layers. Tentatively defined as:
Data Stage: Corresponding to the DIKWP Data layer, primarily characterized by mastering basic knowledge and skills. Talents at this stage are equivalent to entry-level, engaged in work performed according to procedures and specifications, emphasizing memory, imitation, and execution capabilities. For example, apprentices, junior technicians starting their jobs are mostly in this stage.
Information Stage: Corresponding to the Information layer, characterized by possessing certain understanding and analysis capabilities, able to classify work objects and provide basic explanations for phenomena. Talents at this stage have moved beyond purely physical or mechanical operations, beginning to extrapolate and identify problems. Such as staff with 1-3 years of experience who can understand job processes and identify areas for improvement.
Knowledge Stage: Corresponding to the Knowledge layer, manifested by accumulating rich professional knowledge and proficiently applying it in practice. Talents at this stage are typically the backbone of the industry, serving as key professionals or grassroots managers, capable of solving common problems and guiding others. For example, engineers, doctors, etc., with intermediate professional titles are mostly in the Knowledge stage.
Wisdom Stage: Corresponding to the Wisdom layer, characterized by possessing independent judgment, complex decision-making, and innovation capabilities. Talents entering this stage often hold senior management or technical leadership positions, capable of dealing with complex, unstructured problems and proposing innovative solutions. They have a comprehensive vision and systematic thinking, such as senior engineers in enterprises, university professors, department managers, etc., can be classified into this stage.
Purpose Stage: Corresponding to the Purpose layer, the highest stage, reflected by having strategic vision and leadership capabilities. Talents at this stage are not only exceptionally capable themselves but can also formulate visions, lead teams or industry development, focusing on long-term goals and social value. Such as entrepreneurs, industry-leading experts, academicians, etc. At the individual level, they have reached the realm of "unity of knowledge and action, driving knowledge with vision."
The above stage model provides a unified coordinate system for talent development. The government can position various talent programs accordingly. For example, the "Young Talents Program" mainly cultivates talents transitioning from the Knowledge stage to the Wisdom stage; the "Chief Expert Program" focuses on attracting or funding talents in the Wisdom and Purpose stages. Various industries can also refer to this model to design their own talent cultivation paths, providing a basis for defining talent levels across different organizations, avoiding fragmented and inconsistent standards. More importantly, in terms of public communication, clarifying these five stages helps the public understand: career development is a gradual process of continuously enhancing cognitive levels. This can guide the public to establish a correct view of talent development, neither belittling oneself nor being overly anxious for success, but taking each step steadily. For example, letting young people know that obtaining a diploma (Knowledge stage) is not the end goal; wisdom and purpose need to be honed through work practice to grow into industry leaders.
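Assuming the stages are cumulative (each builds on mastery of the previous layers, as the stage descriptions imply), a placement rule for the five-stage model can be sketched as follows. The threshold and sample scores are illustrative assumptions.

```python
STAGES = ["Data", "Information", "Knowledge", "Wisdom", "Purpose"]
LAYER_OF = {"Data": "D", "Information": "I", "Knowledge": "K",
            "Wisdom": "W", "Purpose": "P"}

def growth_stage(scores: dict, threshold: int = 60) -> str:
    """Assumed placement rule: the highest consecutive stage whose layer
    score meets the threshold; stops at the first layer that falls short.
    Defaults to the entry-level Data stage."""
    stage = STAGES[0]
    for name in STAGES:
        if scores.get(LAYER_OF[name], 0) >= threshold:
            stage = name
        else:
            break
    return stage

print(growth_stage({"D": 90, "I": 85, "K": 75, "W": 55, "P": 30}))  # Knowledge
```

Under this rule a strong Knowledge-stage professional with a weak Wisdom layer is not placed higher by a high Purpose score alone, matching the gradual-progression view the stage model promotes.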
2. Reference Framework for Talent Tiered Evaluation:
Building on the stage model, the policy level can further formulate a reference framework for talent capability grading, linking the above stages with existing talent evaluation systems. This includes professional title reviews, skill levels, vocational qualification certificates, etc. For example:
Professional Title Reviews: Consider aligning junior titles with the "Data/Information Stage," intermediate titles with the "Knowledge Stage," associate senior titles with the "Wisdom Stage," and senior titles with the "Purpose Stage." Review indicators adjust accordingly: junior titles focus on mastery of basic knowledge, intermediate titles require the ability to work independently and apply knowledge to guide others, senior titles must reflect innovative achievements and industry contributions (Wisdom layer), and the highest titles demand leadership roles in academic or industry development (Purpose layer). Such review orientation encourages professionals pursuing high titles to not only have papers and patents (knowledge contribution) but also achievements in industry standard setting, strategic planning, etc., thus guiding high-level talents to pay more attention to value creation at the Purpose layer.
Skill Levels and Vocational Qualifications: Exam content and certification standards can be redesigned according to the DIKWP framework. For instance, if a skill appraisal for a certain occupation is divided into five levels (junior worker, intermediate worker, senior worker, technician, senior technician), these can correspond directly to the five stages: lower levels focus on operational skills and knowledge points, intermediate and senior levels add assessment of plan design and troubleshooting (Wisdom layer), and the highest level requires management experience or technological breakthrough achievements (Purpose layer). The "Artificial Intelligence Trainer" standard discussed in the enterprise section above illustrates how each level's exam content can be refined along these lines.
Government and industry can also develop standardized question banks and assessment tools based on the DIKWP framework for skill competitions and talent evaluations in different occupations. For example, develop a comprehensive capability test for engineering technicians, including professional knowledge questions for D/I layers, case analysis questions for the K layer, decision simulation questions for the W layer, and questionnaires on professional ethics and planning for the P layer. This test can serve as a reference basis for selecting high-level talents (such as the Outstanding Engineer Award). Standardized assessment reduces subjective bias and improves the scientific rigor and credibility of selection. At the same time, test results can also be used for talent profiling, helping to formulate talent cultivation policies.
It should be emphasized that the DIKWP evaluation framework must be integrated with existing policy tools and phased in gradually. In the short term, policy departments can encourage pilot DIKWP evaluations in various regions and industries. For example, introduce DIKWP indicators into vocational skill competitions organized by human resources departments, providing multi-dimensional scores for contestants to demonstrate the value of this new evaluation method; or add an indicator to the Ministry of Education's talent cultivation quality assessment on whether schools adopt process-oriented, multi-level evaluation methods, thereby promoting exploration of DIKWP assessment in universities. In the long run, once the DIKWP framework proves effective and feasible, it can be elevated to national standards or guidelines, with authoritative institutions publishing evaluation guides that clarify the DIKWP elements and indicators to be referenced in various talent evaluations. This will have a profound impact on China's talent evaluation system: first, achieving "interoperability" between evaluation systems in different fields, since they all speak the same language of cognitive levels; second, promoting the reform of evaluation from "focusing on endpoints" to "focusing on processes," truly implementing the concept of lifelong learning; third, providing a standard basis for AI participation in talent evaluation, making human-machine collaborative assessment possible (detailed in the technical platform section below).
3. Safeguard and Support Measures:
To ensure the implementation of the above framework at the policy level, a series of supporting measures is needed. These include improving laws and regulations, incorporating concepts such as lifelong learning, industry-education integration, and mutual recognition of credits into the "Vocational Education Law," a "Lifelong Education Law," and similar legislation (indeed, the newly revised Vocational Education Law already reflects these principles). Coordinating bodies should also be established: specialized standards committees formed or authorized at the national level (e.g., the DIKWP Evaluation Standards Committee under the World Artificial Consciousness Association is already working), responsible for researching and formulating DIKWP-related standards and for coordinating implementation among education departments, human resources departments, and industry associations. The government should also increase investment to support the development of the technical platforms and tools needed for DIKWP assessment, making them part of digital education and digital governance; for example, building national-level question banks and capability map databases and promoting their use in institutions and enterprises. Finally, publicity and guidance matter: the merits of the DIKWP talent evaluation concept should be promoted through mainstream media and professional conferences to increase social acceptance of process-based, multi-dimensional evaluation. In particular, the distrust that some parents and employers feel toward new evaluation methods must be dispelled, establishing the conviction that "everyone can become talented, and growth is not measured by scores alone." In the long run, this helps create a favorable climate for reform, shifting China's talent evaluation away from "scores only" and "diplomas only" toward a more scientific and rational approach.
In summary, applying the DIKWP model at the policy end can, from a global perspective, "set the standards" and "pave the runway" for talent growth. It emphasizes not only horizontal comparison across individuals but also each individual's vertical development and improvement, which aligns closely with the "lifelong learning society" that China is currently promoting vigorously. It is foreseeable that, with policy guidance and safeguard measures in place, a new talent evaluation ecosystem jointly driven by government, industry, enterprises, and institutions will gradually form: the government sets the direction, industry issues standards, enterprises build platforms, and institutions carry out practice, together steadily raising the quality of the talent pool and providing an inexhaustible source of talent for economic and social development.
4. Knowledge Quotient (KQ) White-Box Assessment System
In the talent growth evaluation system constructed earlier, both the education and enterprise sectors require specific methods and tools to measure and provide feedback on individual DIKWP capabilities. To this end, we propose a white-box assessment system centered on the "Knowledge Quotient" (KQ), combined with artificial intelligence technology, to achieve in-depth analysis and quantitative evaluation of the cognitive process. This system aims to provide an indicator measuring an individual's cognitive ability level, similar to the Intelligence Quotient (IQ) test. However, KQ differs from traditional IQ in that its evaluation dimension is not a single numerical value, but a five-dimensional vector, corresponding to the capability performance at the five DIKWP levels.
4.1 Concept and Framework of KQ Assessment
Knowledge Quotient (KQ) can be understood as a comprehensive measure of a person's ability to "transform data into value." Here, "data" represents the lowest level of raw information input, and "value" represents the highest level of goal achievement and meaning output. The KQ assessment aims to answer the question: To what extent does a person possess the ability to process perceived fragmented information layer by layer, ultimately producing useful wisdom and achieving goals?
Specifically, KQ includes the following five sub-dimensions of "capability quotients":
KQ_D (Data Perception Quotient): Measures the individual's ability to acquire, discern, and memorize raw data — for example, accurately observing and recording phenomena, or memorizing basic facts. Testing methods can include speed-memory tests and measures of information retrieval speed and accuracy.
KQ_I (Information Processing Quotient): Measures the individual's ability to understand information, extract key points, and maintain semantic consistency. Tests can use reading comprehension, data analysis, and similar questions to assess the test-taker's ability to find key information in raw material and filter out noise.
KQ_K (Knowledge Reasoning Quotient): Measures the individual's ability to integrate information into knowledge and to reason by analogy. Tests include inductive and deductive reasoning questions and professional knowledge application questions — for example, given several scattered pieces of information, the test-taker must summarize a rule or conclusion.
KQ_W (Wisdom Application Quotient): Measures the individual's ability to use knowledge to solve problems and make decisions in complex situations. Tests can use situational simulations or case analysis questions, asking the test-taker to propose solutions that are then evaluated for innovativeness and effectiveness.
KQ_P (Purpose Recognition Quotient): Measures the individual's ability to understand others' purposes and to self-regulate goals. It can be assessed by checking how well the test-taker grasps a question's intent (e.g., accurately understanding the real requirement of the test question) and through case analyses of adjusting behavior according to goals.
The above five sub-items can be understood as different facets of a "Cognitive Quotient"; together they constitute a person's overall KQ profile. For example, a person's KQ result might be expressed as KQ_D = 120, KQ_I = 110, KQ_K = 130, KQ_W = 100, KQ_P = 95 (like IQ, using 100 as the mean; this is just an example). This profile reflects someone who is excellent in basic knowledge and reasoning but relatively weak in decision-making and goal grasping.
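To make the vector idea concrete, the five-dimensional KQ profile from the example above can be sketched as a small data structure. This is an illustrative sketch, not part of the DIKWP specification; the class name, methods, and the 100-mean convention (borrowed from IQ, as the text notes) are assumptions.

```python
from dataclasses import dataclass

DIMENSIONS = ("D", "I", "K", "W", "P")

@dataclass
class KQProfile:
    """A KQ result is a five-dimensional vector, not a single score."""
    D: float  # Data Perception Quotient
    I: float  # Information Processing Quotient
    K: float  # Knowledge Reasoning Quotient
    W: float  # Wisdom Application Quotient
    P: float  # Purpose Recognition Quotient

    def as_vector(self) -> list:
        return [getattr(self, d) for d in DIMENSIONS]

    def strengths(self, mean: float = 100.0) -> list:
        """Dimensions scoring above the population mean."""
        return [d for d in DIMENSIONS if getattr(self, d) > mean]

    def weaknesses(self, mean: float = 100.0) -> list:
        """Dimensions scoring below the population mean."""
        return [d for d in DIMENSIONS if getattr(self, d) < mean]

# The example profile from the text: strong D/I/K, weaker W/P.
profile = KQProfile(D=120, I=110, K=130, W=100, P=95)
print(profile.strengths())   # ['D', 'I', 'K']
print(profile.weaknesses())  # ['P']
```

Keeping the profile as a vector (rather than collapsing it to one number) is what later enables radar charts, weighted role composites, and the nine-box inventory discussed below.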
KQ assessment has two significant differences from traditional tests: first, it is white-box and process-oriented; second, it is multi-dimensional and combinatorial.
White-box means the assessment looks not only at the correctness of the final answer but also at the answering process and thinking, tracking the candidate's "cognitive link." For example, on a reading comprehension question, an interactive system records not only the final answer chosen but also which sentences the test-taker highlighted in the passage and which words they looked up; this process data helps judge their strategy and efficiency in extracting information (I layer). Similarly, when solving a math problem, if the candidate gives the derivation step by step, the system can score the correctness and logic of each step rather than only the final result. It can then be seen, for instance, that a candidate got the final answer wrong but erred mainly in the last step (perhaps through carelessness — a W-layer decision error) rather than through a failure to understand the knowledge (the K layer is still sound). This distinction matters greatly for subsequent teaching feedback.
Multi-dimensionality is reflected in the KQ output being a vector rather than a single score. This addresses a long-criticized weakness of traditional IQ tests: the complexity of intelligence is hard to capture in one number, and two people with similar total scores may differ enormously in particular aspects. For example, A is good at memorizing facts (high KQ_D) but weak in reasoning and innovation (low KQ_W), while B is the opposite; white-box KQ assessment makes this difference explicit. It is like a routine blood panel in a medical examination: many indicators are reported, and different conditions show up as abnormalities in different combinations of them. KQ is therefore more like a "physical examination report" of cognitive ability than a simple ranking.
Combinatorial means that different positions or learning goals call for attention to different combinations of KQ dimensions. KQ provides a general assessment framework, but employers and educational institutions can customize the weighting to their needs. For example, recruiting a data analyst might emphasize KQ_D and KQ_I, while selecting a project manager requires higher KQ_W and KQ_P. By adjusting the weights, or even the question composition, targeted sub-tests can be formed — for instance, an "Innovative Thinking Test" drawing mainly on the KQ_K and KQ_W components. In this way, the KQ system is highly adaptable, usable both for general ability diagnosis and as the basis for purpose-built assessment tools.
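The combinatorial weighting idea can be sketched as follows: an employer supplies a weight vector per role, and a role-specific composite is computed from the same underlying KQ profile. The role names and all weight values here are illustrative assumptions, not prescribed by the framework.

```python
def weighted_kq(profile: dict, weights: dict) -> float:
    """Weighted average of KQ sub-scores; weights need not sum to 1."""
    total_w = sum(weights.values())
    return sum(profile[d] * w for d, w in weights.items()) / total_w

# Example profile from the text.
profile = {"D": 120, "I": 110, "K": 130, "W": 100, "P": 95}

# A data-analyst role emphasizing D and I, versus a project-manager
# role emphasizing W and P (weights are invented for illustration).
analyst_weights = {"D": 0.35, "I": 0.35, "K": 0.20, "W": 0.05, "P": 0.05}
manager_weights = {"D": 0.05, "I": 0.10, "K": 0.15, "W": 0.35, "P": 0.35}

print(weighted_kq(profile, analyst_weights))  # higher: D/I are this person's strengths
print(weighted_kq(profile, manager_weights))  # lower: W/P are relatively weak
```

The same profile yields different composites under different role weightings, which is exactly why a single total score would hide the person-job fit information.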
4.2 KQ Assessment Implementation: AI Assistance and Cognitive Chain Tracking
To implement the KQ white-box assessment system, it is necessary to build an intelligent assessment platform using artificial intelligence technology to support the analysis and quantification of complex cognitive processes. Here, the KQ assessment platform we envision includes the following key functions:
1. Intelligent Paper Assembly and Adaptive Testing:
The platform possesses vast question resources covering various professional fields and cognitive levels, with each question tagged with its corresponding DIKWP module (e.g., a certain question mainly assesses knowledge integration at the I→K layer). At the beginning of the test, the system can initially generate a set of test papers based on the test-taker's background (age, major, etc.), including questions from different levels. During the test, the platform implements an adaptive strategy: if it finds the candidate performs particularly well or poorly at a certain level, it can dynamically adjust the difficulty and focus of subsequent questions. For example, if the candidate answers the first few data memory questions (D layer) correctly and quickly, fewer easy memory questions can be given later, shifting the focus to information understanding (I layer) and knowledge application (K layer) questions; conversely, if the candidate is unable to tackle two consecutive high-order reasoning questions (W layer), the system can temporarily lower the difficulty to avoid excessive frustration. This adaptive testing can more efficiently approach the candidate's ability boundary, obtain stable measurement results in a shorter time, while also improving the assessment experience, keeping people of different levels appropriately challenged and feeling accomplished.
2. AI Semantic Parsing and Scoring:
For subjective questions (such as essays, case analyses, programming questions, and other open-ended items), the platform uses Natural Language Processing (NLP) and knowledge graph technology for semantic parsing and scoring of answers. The "semantic mathematics" framework proposed by Professor Yucong Duan's team provides theoretical support for this: they represent knowledge and cognitive processes using formalized semantics, enabling machines to "understand" the logic and meaning in text. For example, if a candidate answers an essay question "Discuss your views on phenomenon X," the AI first uses a pre-trained large language model to understand the surface meaning of the answer, then constructs a semantic graph corresponding to the standard answer (or expert knowledge base). This graph shows the key concepts mentioned by the candidate and their logical relationships. The AI compares it with the semantic graph of the ideal answer: checking if the candidate mentioned the necessary information points (I layer), correctly applied relevant knowledge (K layer), whether the logical reasoning is rigorous (W layer), and whether the viewpoint fits the question's intent (P layer). For instance, on a history question, if the candidate only lists facts without evaluation, the AI judges their Wisdom layer contribution as insufficient; or if asked for countermeasures but the candidate only talks about significance, the AI judges they haven't grasped the question's Purpose layer. Through this semantic level alignment, the AI can provide scoring that is more accurate and layered than simple keyword matching. In practice, the world's first LLM "Knowledge Quotient" evaluation reported by Science and Technology Daily used DIKWP semantic analysis to score the large model's answers. We can apply similar technology to evaluate human candidates' answers, ensuring objectivity and consistency in scoring, while also saving manual grading costs.
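The layer-wise semantic alignment described above can be sketched as follows: each DIKWP layer of the ideal answer is modelled as a set of required concepts, and the candidate's answer is scored by coverage per layer. A production system would use NLP models and knowledge graphs to extract concepts; here extraction is faked with keyword matching, and the rubric contents are invented purely for illustration.

```python
def extract_concepts(answer: str, vocabulary: set) -> set:
    """Stand-in for NLP concept extraction: lowercase keyword matching."""
    words = {w.strip(".,").lower() for w in answer.split()}
    return words & vocabulary

def layer_scores(answer: str, rubric: dict) -> dict:
    """Coverage of required concepts, per DIKWP layer, each in [0, 1]."""
    vocab = set().union(*rubric.values())
    found = extract_concepts(answer, vocab)
    return {layer: len(found & req) / len(req) for layer, req in rubric.items()}

# Hypothetical rubric for an essay question (contents are assumptions):
rubric = {
    "I": {"facts", "timeline"},        # necessary information points
    "K": {"cause", "effect"},          # relevant knowledge applied
    "W": {"evaluation", "tradeoff"},   # reasoned judgement
    "P": {"countermeasures"},          # fits the question's intent
}

answer = "The timeline of facts shows a clear cause and effect, but no evaluation."
print(layer_scores(answer, rubric))
```

This mirrors the history-question example in the text: an answer that lists facts and mechanisms but offers little judgement and no countermeasures scores well on I/K, partially on W, and zero on P.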
3. Cognitive Link Tracking and Process Recording:
During the assessment process, the platform records a series of the candidate's operations and behavioral data to construct their problem-solving cognitive link. This specifically includes: time spent on answers, click paths, modification traces, thinking sketches, etc. For example, on a question requiring multiple steps (like a programming question), the platform saves the result of each code run, every modification; for complex case discussion questions, the candidate might first fill in several key points, then delete some, the platform also records this writing sequence. By mining this process data, we can reconstruct the candidate's thinking trajectory, and evaluate their problem-solving strategies and cognitive habits. Example: Two candidates have the correct final answer, but one revised it many times, indicating they might have been confused initially or used trial-and-error; the other finished smoothly, indicating clear and organized thinking. Another example, some candidates first draft an outline (logical framework) before filling in the content; others write as they think. These reflect different cognitive styles. Tracking the link also helps detect cheating and authenticity: if a person's answer has no modifications, is extremely fast but the answer is complex and perfect, the platform can judge it as suspected plagiarism or use of external aids, because normal people usually have pauses for thought and revision processes. Furthermore, when the candidate views reference materials or hints, the platform records which knowledge points they consulted, which can also help analyze their knowledge gaps. All in all, cognitive chain tracking provides much richer information than the final result, turning the assessment from a "black box" to a "white box," making the reason behind every score traceable.
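The cognitive-link tracking above can be sketched as a timestamped event log per item, from which process metrics (revision count, dwell time) and a simple plausibility flag are derived for answers that are long yet implausibly fast and revision-free. The event names and the seconds-per-character threshold are illustrative assumptions, not platform specifications.

```python
from dataclasses import dataclass, field

@dataclass
class ItemTrace:
    """Timestamped (seconds, event-kind) log for one test item."""
    events: list = field(default_factory=list)

    def log(self, t: float, kind: str) -> None:
        self.events.append((t, kind))

    def revisions(self) -> int:
        return sum(1 for _, k in self.events if k == "edit")

    def dwell(self) -> float:
        ts = [t for t, _ in self.events]
        return max(ts) - min(ts) if ts else 0.0

def plausibility_flag(trace: ItemTrace, answer_len: int,
                      min_secs_per_char: float = 0.2) -> bool:
    """Flag long answers produced with zero edits in implausibly little time
    (normal candidates pause to think and revise, as the text notes)."""
    return trace.revisions() == 0 and trace.dwell() < answer_len * min_secs_per_char

trace = ItemTrace()
trace.log(0.0, "open")
trace.log(4.0, "submit")
print(plausibility_flag(trace, answer_len=400))  # a 400-char essay in 4 seconds
```

The same event log also supports the richer analyses mentioned above (comparing trial-and-error solvers with straight-through solvers, or recording which reference materials were consulted) by adding more event kinds.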
4. Assessment Result Visualization and Reporting:
The platform automatically generates assessment reports, including KQ sub-scores, comprehensive evaluation, and suggestions. Results are usually presented combining graphics and text. Graphically, radar charts are commonly used to display the score distribution across the five dimensions, clearly showing where a person excels and where they lack. There can also be sub-item bar charts comparing the student with the average level of their peer group. In the text part, the platform provides analysis based on the scoring pattern, such as: "Your data perception and information processing abilities are excellent, indicating a solid foundation and quick understanding. However, you are relatively weak in wisdom-based decision-making and purpose planning. It is recommended to focus on strengthening training in complex problem-solving and goal management skills in the future." For students, the report can also list weak knowledge points and recommend learning resources, combined with their knowledge map; for employees, the report can provide a job competency assessment, indicating which levels of ability meet job requirements and which still need improvement. For example, the report might state: "Based on your KQ profile, you are currently very suitable for professional positions requiring rigorous analysis, but still need to improve strategic planning (P layer) ability to be competent for management positions." These personalized insights help the evaluated person improve targetedly. Reports are also provided to managers or mentors to formulate subsequent development plans.
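The textual part of such a report can be sketched as a comparison of each dimension against the peer-group mean, sorting dimensions into strengths and suggested focus areas in the spirit of the sample wording above. The labels and the five-point significance margin are assumptions made for illustration.

```python
LABELS = {
    "D": "data perception", "I": "information processing",
    "K": "knowledge reasoning", "W": "wisdom-based decision-making",
    "P": "purpose planning",
}

def report(scores: dict, peer_means: dict, margin: float = 5.0) -> str:
    """Plain-text report: dimensions clearly above/below the peer mean."""
    strong = [LABELS[d] for d in scores if scores[d] >= peer_means[d] + margin]
    weak = [LABELS[d] for d in scores if scores[d] <= peer_means[d] - margin]
    lines = []
    if strong:
        lines.append("Strengths: " + ", ".join(strong) + ".")
    if weak:
        lines.append("Suggested focus areas: " + ", ".join(weak) + ".")
    return "\n".join(lines)

scores = {"D": 120, "I": 110, "K": 130, "W": 100, "P": 95}
peers = {d: 100.0 for d in scores}
print(report(scores, peers))
```

The radar chart mentioned in the text would plot the same five scores on polar axes; the point here is that both the graphic and the narrative are generated from the one KQ vector.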
5. AI Tutoring and Training Feedback:
KQ assessment is not a one-time judgment result, but can also serve as the starting point for continuous training. The platform is often linked with learning systems to customize training plans for users based on assessment findings. For example, if a student's KQ report shows weakness in Knowledge reasoning (K layer), the system recommends relevant reasoning exercises, and even arranges AI tutors to discuss the problems they don't understand. If employee assessment reveals insufficient experience at the decision-making level, the system might suggest participating in simulated business decision games or team projects, recording performance during the process, and then AI coaches provide feedback. By combining assessment and training, the KQ system truly becomes a tool for enhancing abilities, not just labeling. This closed loop also aligns with Professor Yucong Duan's emphasized concept of "bidirectional self-supervision": assessment (Wisdom layer output) in turn influences learning strategy (Purpose layer adjustment), thereby improving future data, information, and knowledge acquisition processes, achieving continuous self-optimization.
6. Job Matching and Talent Selection:
When enough personnel have completed KQ assessments, the platform can also perform macroscopic talent allocation functions. Enterprise HR can view each employee's KQ profile for talent inventory: e.g., using a nine-box grid to classify employees by performance (K layer performance) and potential (W/P layer potential). Through this visual map, the enterprise identifies high-potential talents (average performance but high potential) for focused cultivation, and can also discover those with low potential early on to help adjust their direction. In recruitment, the company can set the ideal KQ combination for a certain position, then screen candidates whose assessment data matches, improving recruitment success rate and job suitability. For example, to hire a project manager requiring high W and P values, candidates who score excellently in these two items in the KQ assessment should be prioritized for interviews, rather than solely based on education and seniority. In this way, person-job matching enters the "data-driven" era, and the underlying data is precisely obtained from DIKWP white-box assessment, which is much more scientific than traditional judgments based on vague impressions.
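The nine-box inventory described above can be sketched by proxying performance with the K-layer score and potential with the mean of W and P, then bucketing each axis into low/medium/high. The choice of proxies follows the text; the cut-off values (95 and 110) are illustrative assumptions.

```python
def bucket(score: float, lo: float = 95.0, hi: float = 110.0) -> int:
    """Map a score to 0 (low), 1 (medium), or 2 (high)."""
    return 0 if score < lo else (2 if score >= hi else 1)

def nine_box(profile: dict) -> tuple:
    """Return (performance_bucket, potential_bucket), each in {0, 1, 2},
    giving the employee's cell in the nine-box grid."""
    performance = profile["K"]                      # K-layer performance, per the text
    potential = (profile["W"] + profile["P"]) / 2   # W/P-layer potential, per the text
    return bucket(performance), bucket(potential)

# An employee with average performance but high potential: the "focused
# cultivation" cell of the grid.
print(nine_box({"D": 100, "I": 100, "K": 100, "W": 115, "P": 112}))  # (1, 2)
```

For recruitment, the same profiles can be compared against a role's ideal KQ combination (for instance, with the weighted composite sketched earlier), so that screening is driven by assessment data rather than vague impressions.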
In summary, the Knowledge Quotient (KQ) white-box assessment system fully utilizes AI technology to achieve a panoramic scan and precise quantification of talent's cognitive abilities. It abandons the drawbacks of judging a lifetime by a single exam and one-sidedly focusing on scores. Through process data and multi-dimensional indicators, it reveals each person's unique cognitive ability structure. Applying KQ assessment in education can assist process evaluation and promote personalized learning; applying KQ in enterprises can assist talent selection and cultivation decisions; at the macro level, the accumulation of KQ data can even help analyze the quality of group talent and discover educational shortcomings. It can be said that KQ white-box assessment is the concrete implementation of the DIKWP model in practice, marking a key step towards achieving scientific talent evaluation.
Of course, the promotion of the KQ system also needs to pay attention to ethical and fairness issues. For example, the protection of private data, a cautious attitude towards assessment errors, and avoiding the restriction of diverse talent development with a single model. This requires us to continuously improve technology and standardize management in practical application. Overall, as long as we adhere to the original intention of "evaluation for better talent development," KQ white-box assessment will become a powerful aid in cultivating innovative talents and promoting the rational flow of talents.
5. Case Analysis: DIKWP Module Mapping and Career Growth Path
To further verify the practicality of the above theories and systems, we select three representative emerging occupations: Intelligent Manufacturing Optimization Engineer, Artificial Intelligence Trainer, and Digital Operations Talent, analyze their DIKWP capability requirements, and provide corresponding evaluation indicator designs. These cases will demonstrate how to map the 25 interaction modules of DIKWP to the capability requirements and growth paths of specific positions, and how to evaluate these capabilities through the assessment system.
Case 1: Intelligent Manufacturing Optimization Engineer
Job Description:
Intelligent Manufacturing Optimization Engineers are primarily responsible for improving the efficiency and quality of production lines in industrial manufacturing environments through data analysis and model optimization. For example, they need to monitor production equipment data, identify process bottlenecks, formulate optimization plans, and continuously improve the production system based on the enterprise's strategic goals.
DIKWP Capability Mapping:
Based on the job characteristics, we identify the key cognitive links involved for Intelligent Manufacturing Optimization Engineers, including:
D→I (Data to Information): Capability requirement is to collect raw data from the industrial site and transform it into meaningful information. Specific performance: acquiring production parameters from sensors, logs, extracting key operational intelligence through cleaning and statistical analysis, such as equipment failure rate, production line bottleneck reports, etc. Evaluation indicators can include: data collection coverage rate, information extraction accuracy, report generation timeliness, etc.
I→K (Information to Knowledge): Capability requirement is to synthesize multiple information points into executable knowledge or plans. Performance: summarizing general principles of optimization plans based on bottleneck information and manufacturing theory, forming a process improvement knowledge base. Indicators can include: the number and proportion of effective plans summarized from information; the quality of knowledge documents organized (expert evaluation).
K→W (Knowledge to Wisdom): Capability requirement is to formulate and implement wise decisions using mastered knowledge. I.e.: evaluating the pros and cons of different optimization plans, making the optimal decision considering site constraints (cost, quality, delivery time, etc.), and putting it into practice. Evaluation indicators: comprehensive benefit score of proposed decision plans, plan implementation success rate, time taken for decision, etc.
P→D (Purpose to Data): Capability requirement is to determine the data that needs attention and collection, guided by enterprise goals. For example, when the company's purpose is to "reduce production costs by 10%," optimization engineers will targetedly monitor energy consumption data and raw material loss data to obtain information related to cost control. Indicators: alignment of indicator system with goals (monitored data items cover key factors related to the goal), proactiveness of data collection (whether data needed for potential problems was considered in advance).
W→P (Wisdom to Purpose): (Relatively minor but also involved) Performance: After achieving optimization results, being able to elevate the experience to new goals to drive the next round of improvement. E.g., a successful optimization plan reduces emissions by 5%; the engineer suggests the enterprise incorporate "green manufacturing" into strategic goals based on this. Indicators: number of new goals or suggestions proposed based on optimization results, adoption rate.
The combination of these modules shows that the job capabilities of an Intelligent Manufacturing Optimization Engineer span the closed loop from bottom-level data perception to high-level strategic feedback. Particularly important are the main path modules D→I, I→K, K→W, corresponding to the three major skill domains of data analysis, knowledge integration, and decision application. On the other hand, P→D and W→P reflect the ability to integrate enterprise purpose into the optimization cycle, ensuring that optimization work always aligns with company goals.
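The module-to-indicator mapping above can be captured as a small machine-readable job profile, which is what an assessment platform would consume. The indicator names come from the text; the weights marking the "main path" modules (D→I, I→K, K→W) are illustrative assumptions.

```python
# Hypothetical machine-readable capability map for the role discussed above.
ROLE_MODULES = {
    "intelligent_manufacturing_optimization_engineer": {
        "D->I": {"indicators": ["data collection coverage",
                                "information extraction accuracy",
                                "report timeliness"], "weight": 0.25},
        "I->K": {"indicators": ["effective plans summarized from information",
                                "knowledge document quality"], "weight": 0.25},
        "K->W": {"indicators": ["decision benefit score",
                                "implementation success rate",
                                "decision latency"], "weight": 0.25},
        "P->D": {"indicators": ["goal-indicator alignment",
                                "proactive data collection"], "weight": 0.15},
        "W->P": {"indicators": ["new goals proposed",
                                "adoption rate"], "weight": 0.10},
    }
}

def main_path(role: str, threshold: float = 0.2) -> list:
    """Modules whose weight marks them as part of the role's main path."""
    return [m for m, spec in ROLE_MODULES[role].items()
            if spec["weight"] >= threshold]

print(main_path("intelligent_manufacturing_optimization_engineer"))
# ['D->I', 'I->K', 'K->W']
```

Encoding job profiles this way makes the mapping reusable: the same structure can hold any of the 25 DIKWP interaction modules for other roles, such as the AI trainer analyzed next.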
Growth Path and Evaluation:
Corresponding to the DIKWP stage model, the talent growth for this position can be divided into: junior level focusing on data and information analysis, intermediate level starting to accumulate knowledge plans, and senior level requiring wise decision-making and participation in strategy. Specifically:
Junior Optimization Engineer (corresponding to Data/Information stage): Capable of basic data monitoring and report generation, identifying obvious problem signs. Assessment focus: data processing accuracy, simple analysis ability. Can be evaluated through regular written tests (e.g., statistical analysis knowledge quiz) and work spot checks (report error rate).
Intermediate Optimization Engineer (corresponding to Knowledge stage): Able to propose improvement plans based on multi-source information, possessing certain systematic thinking. Assessment focus: quantity and quality of plans, knowledge application level. Evaluation can be through case interviews—given a set of production data and problem phenomena, require them to propose improvement ideas. Expert panel scores based on completeness, innovativeness.
Senior Optimization Engineer (corresponding to Wisdom stage): Can comprehensively consider cost, quality, efficiency, make optimization decisions in complex trade-offs, and guide implementation. Assessment focus: decision effectiveness (e.g., annual cost savings), cross-departmental collaboration ability. Evaluation methods include 360-degree assessment (colleague feedback on their decision communication effectiveness), and KPIs based on actual project results (e.g., number of optimization goals achieved).
Senior Optimization Expert/Manager (corresponding to Purpose stage): Not only optimizes existing processes but also proactively proposes new optimization directions based on company strategy, elevating local improvements to overall planning. Assessment focus: strategic contribution, team leadership. Evaluation can be combined with company strategic goal completion status, the impact of major optimization projects led by them on the company's long-term indicators.
Through a combination of process evaluation and result evaluation, we can comprehensively track the development of optimization engineers' capabilities across the five layers. Take engineer Xiao Zhang from enterprise A as an example: after one year on the job, a KQ assessment found his D/I capabilities outstanding (high scores in data analysis) but his W/P scores low (he lacked decisiveness in decision-making and goal awareness). The company therefore assigned him a mentor and, in his second year, put him in charge of a small optimization project to exercise his decision-making (W layer). Re-evaluation after the project showed significant progress in the W layer: he could now weigh multiple factors to reach a decision. His P layer also improved, as leading the project gave him a better understanding of the meaning of company goals. This combination of cultivation and assessment enabled Xiao Zhang to grow into an intermediate-level backbone within three years, with the potential to advance further to a senior position.
The case shows that DIKWP modules not only help characterize job requirements but also guide the pace of personnel cultivation and evaluation. For Intelligent Manufacturing Optimization Engineers, rigorous data/knowledge assessment ensures the foundation, progressive situational decision-making assessment cultivates wisdom, and goal assessment integrated with strategy hones purpose. Ultimately, individual growth and enterprise goals achieve unity.
Case 2: Artificial Intelligence Trainer
Job Description:
Artificial Intelligence Trainer is a newly emerging occupation in recent years, primarily responsible for training and optimizing models during AI product development, including data collection and annotation, training plan design, model effect evaluation, and continuous improvement. Simply put, they are "AI teachers who train AI."
DIKWP Capability Mapping:
Based on the national "Artificial Intelligence Trainer" occupational standard and related industry practices, we sort out the typical DIKWP interaction modules involved in this position:
D→I (Data to Information): Collect, clean training data, and annotate it, making it usable information for the model. Example: Extracting and annotating user intents (I) from customer conversation logs (D). Capability indicators: annotation accuracy, data processing volume per unit time, ability to filter data noise, etc.
I→K (Information to Knowledge): Integrate annotated information into the model's knowledge base or rule system. Such as designing label systems, updating intent recognition rules, transforming a large number of annotated samples into generalizable knowledge. Indicators: knowledge base coverage rate, rule correctness rate, model knowledge update frequency.
K→W (Knowledge to Wisdom): Use machine learning and parameter tuning experience (knowledge) to formulate model training strategies, solving model performance issues. For example, adjusting regularization schemes based on overfitting phenomena, optimizing model structure based on algorithm principles. Indicators: effectiveness of proposed optimization strategies (magnitude of model accuracy improvement), number of difficult problems solved.
W→P (Wisdom to Purpose): Adjust the training goals for the next stage based on the evaluation results of the model's performance. For instance, finding the model performs poorly on a certain type of user Q&A, the trainer sets a new goal based on this: "Increase accuracy for this type of Q&A by 10%," and plans resource investment. Indicators: rationality of goal setting (whether proposed based on problem analysis), goal achievement rate.
P→D/I (Purpose to Data/Information): This composite module refers to guiding data collection and feature selection in reverse, based on training goals. E.g., with the purpose of "improving model accuracy," decide to increase data for certain important scenarios and focus on specific features. Indicators: relevance of newly added data to the goal, effectiveness of feature engineering (whether it improves model performance).
The above modules basically depict the AI trainer's cyclical workflow of "continuously improving the model guided by goals." Specifically, they first receive project requirements (P layer, e.g., developing a customer service bot to better understand user intents), then determine what data is needed (P→D: e.g., collect various user questions), process the data into information for model use (D→I: annotate data), then extract knowledge from it to optimize the model (I→K: organize rules and features), then train the model to make wise decisions (K→W: model applied to decision making), evaluate model performance and adjust the next round of goals accordingly (W→P). In actual tasks, these steps often iterate in cycles until the model reaches the expected performance.
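The cyclical P→D→I→K→W→P workflow described above can be sketched as an iterate-until-target loop. The phase bodies are stubbed (each round simply nudges accuracy upward by a fixed gain, which is an assumption made to keep the sketch runnable); in reality each phase is a substantial piece of annotation, feature, and training work.

```python
def training_cycle(accuracy: float, target: float, gain: float = 0.05,
                   max_rounds: int = 20) -> int:
    """Iterate the DIKWP loop until the model meets the goal (or rounds run
    out); return the number of rounds used."""
    rounds = 0
    while accuracy < target and rounds < max_rounds:
        # P->D : decide what extra data the current goal requires (stubbed)
        # D->I : collect, clean, and annotate that data (stubbed)
        # I->K : fold annotations into rules/features/knowledge base (stubbed)
        # K->W : retrain and deploy the model (stubbed as a fixed gain)
        accuracy += gain
        # W->P : evaluate performance and reset the next round's goal,
        # which here is simply the loop's continuation condition.
        rounds += 1
    return rounds

# Starting at 80% accuracy with a 90% goal and +5% per round takes two rounds.
print(training_cycle(accuracy=0.80, target=0.90))
```

The loop condition is the W→P feedback step: evaluation results decide whether another iteration is launched, which is the "continuously improving the model guided by goals" cycle the text describes.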
Growth Path and Evaluation:
The career path of an AI Trainer can be divided into junior data annotator, intermediate model training engineer, senior AI training expert/product manager, etc., referring to the DIKWP levels. Correspondingly:
Junior (Data/Information stage): Responsible for basic data annotation, simple preprocessing, etc.; familiarity with data and task domain information is sufficient. Evaluation focus: data annotation quality, efficiency; basic tool usage ability. Can be assessed through regular spot checks of annotation sample correctness and speed ranking.
Intermediate (Knowledge stage): Able to independently run model training, adjust parameters, and solve common performance problems. Evaluation focus: understanding and application of algorithm knowledge, model tuning ability. Assessment methods: written or hands-on tests of mastery of common algorithm principles; simulating a training task to see whether they can tune parameters correctly to meet the performance requirements.
Senior (Wisdom stage): Able to formulate innovative training plans for complex, thorny problems, achieving breakthroughs in overall model effectiveness; also able to guide a team. Evaluation focus: number of successful innovative-plan cases, ability to solve difficult problems, teamwork and leadership skills. Assessment methods: case interviews, in which they recount how they overcame the most difficult training project they have experienced (interviewers focus on their analysis and decision process at the W layer); 360-degree feedback, in which team members evaluate their mentoring ability and goal leadership (P layer).
Expert (Purpose stage): Usually responsible for the training direction and strategy of AI products, controlling overall quality, and even participating in product planning. They are not only proficient in technology but also understand product goals and user needs. Evaluation focus: strategic vision, industry influence, product success rate. Assessment methods: comprehensive review—such as the achievement status of product performance indicators they participated in setting, the adoption rate of training standards they formulated by the industry; professional contribution—white papers published, talents trained, etc.
By applying KQ assessment combined with actual project performance, the growth of AI trainers' capabilities across the five layers can be tracked. For example, in an internet company, HR used KQ to conduct a capability inventory of two trainers, Bai Yan and Zhao Ming. The results showed that their professional performance (e.g., the magnitude of model accuracy improvement) was similar, but Bai Yan's KQ_P (Purpose-layer) score was significantly higher than Zhao Ming's. This matched observations: Bai Yan often proactively suggested product improvement directions, linking training work with user experience, while Zhao Ming focused more on the data itself. The company therefore decided to favor Bai Yan when promoting a team leader, because her goal awareness and leadership ability were stronger (a P-layer advantage), while Zhao Ming would continue to deepen his technical skills, paired with a product manager to broaden his horizons. In the following two years, Bai Yan grew into a project leader, playing a leading role in training a new category of AI assistants, while Zhao Ming's technique became more refined and he became the company's algorithm expert. This case validates the value of DIKWP assessment in job matching and talent-growth guidance: it helps discover potential differences behind similar apparent performance and puts each person on the track best suited to their strengths.
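The layered comparison HR performed reduces to a small computation over per-layer KQ scores. The profiles below are invented for illustration and are not real assessment data:

```python
LAYERS = ["D", "I", "K", "W", "P"]

def kq_gap(profile_a, profile_b):
    """Return per-layer score gaps (a minus b) to surface differences
    hidden behind similar overall performance."""
    return {L: profile_a[L] - profile_b[L] for L in LAYERS}

# Illustrative KQ profiles (0-100 per layer), not real assessment data
bai_yan = {"D": 82, "I": 80, "K": 78, "W": 76, "P": 88}
zhao_ming = {"D": 85, "I": 83, "K": 80, "W": 74, "P": 65}

gaps = kq_gap(bai_yan, zhao_ming)
standout = max(gaps, key=lambda L: abs(gaps[L]))
print(gaps)       # per-layer differences
print(standout)   # layer with the largest gap -> "P" in this example
```

With these numbers, the two trainers differ by only a few points on D/I/K/W, but the P-layer gap dominates, which is exactly the kind of signal the case uses to justify differentiated development tracks.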
Case 3: Digital Operations Talent
Job Description:
Digital Operations Talent broadly refers to composite talents in the internet and digital economy fields engaged in data-driven market operations, content operations, etc. They need to comprehensively use data analysis, marketing planning, product thinking, and business insights to achieve growth goals.
DIKWP Capability Mapping:
A typical scenario for digital operations work: improving the user retention rate of a certain App. This requires operations personnel to analyze problems from massive user data, refine operations knowledge, formulate operation plans, and continuously adjust goals. The corresponding DIKWP modules include:
D→I: User data analysis. There is a large amount of user behavior data daily (clicks, dwell time, etc.); operations need to transform it into information, such as "next-day retention rate," "characteristics of churned users." Capability indicators: proficiency in using data analysis tools, quality of analysis reports.
I→K: Operations knowledge accumulation. Integrate information from multiple activities or different channels to form a user operations methodology, such as identifying key churn reasons and retention improvement measures. Indicators: refined operations element library, reuse rate of successful experiences.
K→W: Plan execution. Design creative activities (such as push content, community activities) based on mastered operations knowledge and implement them, using wise means to improve retention. Indicators: innovation score of planned initiatives, execution effectiveness (magnitude of retention rate improvement).
W→P: Goal adjustment. Adjust the operational goals and strategies for the next stage based on the evaluation of each activity's results (feedback after wisdom application). E.g., when the achievement rate of the first activity was only 80%, the operator realized the goal had been set too high, so the next goal was corrected and resources were added. Indicators: rationality of goal adjustment, trend of the goal achievement rate across consecutive activities.
P→I/K: Data utilization under strategic guidance. Operations need to understand the company's strategic intent, refine it into operational focus indicators and knowledge. For example, if the company strategy emphasizes user activity (P), operations specifically analyze activity data (I) and learn methods to improve activity (K). Indicators: alignment of selected operational data with strategy, number of new operational methods learned per quarter.
Digital operations talent embodies the cycle of data-driven business decision-making in their work: set goals (P), check data (D→I), find solutions (I→K→W), then review results and adjust goals (W→P). In this process, user/market awareness (P layer) and data capability (D/I layer) are both crucial, while creativity and decision-making (W layer) determine the success or failure of the plan. Therefore, an excellent operations person needs to both "look down at the data" and "look up at the goals."
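The D→I step of this cycle, turning raw behavior events into a retention-rate indicator, can be sketched in a few lines; the event format and dates are hypothetical:

```python
from datetime import date

def next_day_retention(events):
    """D->I sketch: turn raw (user_id, activity_date) events into the
    'next-day retention rate' for each cohort date."""
    by_day = {}
    for user, day in events:
        by_day.setdefault(day, set()).add(user)
    rates = {}
    for day, users in by_day.items():
        nxt = by_day.get(date.fromordinal(day.toordinal() + 1), set())
        if users:
            rates[day] = len(users & nxt) / len(users)
    return rates

# Hypothetical activity log
events = [
    ("u1", date(2025, 1, 1)), ("u2", date(2025, 1, 1)),
    ("u1", date(2025, 1, 2)),                      # u1 retained, u2 churned
    ("u1", date(2025, 1, 2)), ("u3", date(2025, 1, 2)),
]
print(next_day_retention(events))
```

A real pipeline would add churn-feature extraction on top of this, but the core D→I transformation is exactly this aggregation from raw events to an interpretable indicator.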
Growth Path and Evaluation:
Digital operations talents usually start as data analysts or content operations specialists, gradually growing into operations managers independently responsible for projects, or even transitioning to product managers or business leaders. Corresponding to DIKWP:
Novice Period (Data/Information stage): Main work is at the execution level, such as organizing data reports, publishing content, etc. Assessment focuses on execution ability and basic analysis: timeliness of task completion, correctness of simple data analysis.
Growth Period (Knowledge stage): Begins to take charge of small operational activities, accumulating methodologies. Assessment focuses on method and knowledge accumulation: quality of activity completion, and summary and refinement after each activity (whether an operations manual or lessons-learned library is formed).
Mature Period (Wisdom stage): Able to independently plan large-scale activities, achieving improvement in key indicators. Assessment focuses on decision-making and innovation: ROI (Return on Investment) of activities, degree of innovation, adaptability when encountering problems (e.g., performance when temporarily adjusting plans due to poor user feedback during an activity).
Leadership Period (Purpose stage): Rises to become operations head, formulating annual operational strategies, translating business goals into specific operational indicators. Assessment focuses on strategic alignment and team leadership: overall goal completion rate, contribution to company strategy (e.g., whether operational strategies pushed user numbers to meet company requirements), team cultivation situation, etc.
During the evaluation process, simulated operations competitions can be used as an assessment method. For example, give multiple operations managers a virtual product and initial data, let them formulate a 3-month retention improvement plan, then simulate execution (the platform generates results based on the plan and random events), and finally compare indicators such as retention rate improvement and user satisfaction. This process can comprehensively assess their DIKWP capabilities: whether data analysis is thorough (D/I), whether strategy is based on insight (K), whether execution adjustments are timely (W/P). Similar competitions have been adopted in talent selection by some internet companies with good results—because it realistically reproduces the entire cognitive link rather than being fragmented like written tests.
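A minimal sketch of such a simulated-competition scorer, assuming a toy model in which budget allocation and seeded random "market events" drive retention (the coefficients are arbitrary illustrations, not a validated simulator):

```python
import random

def simulate_plan(plan, seed=42):
    """Toy competition scorer: a plan allocates budget units across
    activities; seeded random events perturb each month's retention gain."""
    rng = random.Random(seed)
    retention = 0.30
    for month in range(3):
        gain = 0.01 * plan.get("analysis", 0) + 0.02 * plan.get("campaigns", 0)
        shock = rng.uniform(-0.01, 0.01)   # random market event of the month
        retention = min(0.9, retention + gain + shock)
    return round(retention, 3)

plan_a = {"analysis": 1, "campaigns": 2}   # campaign-heavy plan
plan_b = {"analysis": 2, "campaigns": 0}   # analysis-only plan
print(simulate_plan(plan_a), simulate_plan(plan_b))
```

Fixing the seed makes runs reproducible so that competing plans are compared under identical random events, which is the fairness property a selection competition needs.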
Typical Growth Case: Xiao Liu, an operations staff member at a content community, was responsible for daily content publishing in the first year (strong D/I capability). In the second year, he began planning some online activities, but the first few had mediocre results (retention rate hardly improved). The team arranged targeted mentoring for him, helping him review and refine experiences after each activity (enhancing I→K capability). At the same time, he was allowed to participate in cross-departmental meetings to understand company strategy and product planning (enhancing P layer understanding). A year later, the activities planned by Xiao Liu showed significant improvement: they were more tailored to user needs and could achieve specific goals. He successfully increased the 7-day retention rate of a certain activity by 5% and was rated as excellent. Now in his 3rd year, he has been promoted to community operations supervisor, responsible for formulating quarterly operational strategies. This journey shows that through targeted evaluation feedback and cultivation (such as review meetings, strategy discussion meetings), Xiao Liu's capabilities gradually leaped from the data execution layer to the wisdom decision-making layer and purpose layer, achieving steady career growth.
The above three cases cover fields like manufacturing, IT, and internet operations, proving the universality and explanatory power of the DIKWP interaction model: regardless of whether the task object is machinery, AI models, or user groups, the corresponding work can be abstracted into a chain from data to purpose, and key capability modules can be extracted for cultivation and assessment. These cases also highlight the important value of white-box capability assessment: it makes implicit capability requirements explicit, and makes the dynamic growth process measurable, thereby providing clear improvement directions and communication language for individuals and organizations.
Below, we summarize the general definitions of the 25 DIKWP interaction modules and their evaluation indicators in Table 1. This table serves as both a reference template for designing specific assessment content and a systematic organization of the scattered indicators from the cases.
Table 1: DIKWP x DIKWP 25 Interaction Modules, Capability Definitions, and Example Evaluation Indicators
| Module (From → To Layer) | Capability Definition | Example Capability Performance | Typical Evaluation Indicators |
| --- | --- | --- | --- |
| D→D (Data to Data) | Ability to acquire, organize, and save data; organizing scattered raw data into usable datasets. | Effectively collecting and cleaning duplicate or erroneous data. | Data completeness rate; error-rate reduction after cleaning. |
| D→I (Data to Info) | Data analysis and information extraction; extracting meaningful information patterns from raw data. | Extracting key operating indicators from sensor logs; summarizing user preferences from survey results. | Information extraction accuracy; proportion of key information covered in reports. |
| D→K (Data to Know.) | Inducing general rules from large amounts of data, forming transferable knowledge. | Deriving business rules from big-data analysis and elevating them into guidance manuals. | Accuracy of models or rules; percentage of scenarios covered by the knowledge base. |
| D→W (Data to Wisd.) | Making judgments and decisions directly from data (without intermediate knowledge reasoning). | Monitoring data anomalies in real time and immediately deciding to shut down for inspection. | Decision response time; anomaly-handling success rate. |
| D→P (Data to Purp.) | Perceiving goals or purposes from data; using objective facts to inspire goal setting. | Identifying new opportunities through market-data insights, thereby formulating new business goals. | Number of data-driven strategy adjustments; adoption rate of bottom-up goal proposals. |
| I→D (Info to Data) | Inferring the need for further data collection from interpreted information. | Finding report information insufficient and deciding to collect additional relevant data. | Timeliness of identifying information gaps and supplementing data; degree of decision improvement from the added data. |
| I→I (Info to Info) | Information integration and transformation; synthesizing and converting multiple pieces of information into new information. | Compiling briefings from multi-channel information; converting text into charts. | Information integration degree (redundancy reduction rate); multi-source consistency. |
| I→K (Info to Know.) | Knowledge construction; inducing principles or experience from multiple pieces of information, elevating them into structured knowledge. | Summarizing success patterns or failure lessons from multiple case reports. | Number of rules refined; correct applicability rate of the knowledge. |
| I→W (Info to Wisd.) | Solving problems and making decisions directly from the information at hand. | Proposing solutions on the spot based on customer feedback. | Sufficiency of the information basis for decisions (correct-decision rate without a knowledge base); first-time resolution rate. |
| I→P (Info to Purp.) | Grasping others' purposes from obtained information, or forming one's own new purposes. | Understanding a leader's true intent conveyed between the lines; getting new startup ideas from industry reports. | Accuracy in understanding implicit purposes; number of effective new goals triggered by information. |
| K→D (Know. to Data) | Using existing knowledge to guide data collection and selection. | Knowing which data matters for the problem and prioritizing its acquisition. | Relevance of data selection; coverage rate of data items predicted as needed. |
| K→I (Know. to Info) | Using knowledge to interpret information; understanding and evaluating information within a knowledge context. | Interpreting test-report information with medical knowledge; understanding contract terms with legal knowledge. | Information interpretation accuracy; number of information queries or doubts raised (reflects depth of understanding). |
| K→K (Know. to Know.) | Knowledge transfer and extension; deriving new knowledge from existing knowledge, or applying it to new domains. | Applying algorithm knowledge innovatively to different problem scenarios; creating new theories through interdisciplinary fusion. | Knowledge-transfer success rate (application effect in different contexts); number of knowledge-innovation outcomes. |
| K→W (Know. to Wisd.) | Using knowledge for analysis, judgment, and creative problem-solving. | Designing innovative solutions to technical problems based on engineering principles. | Feasibility/innovation score of proposed solutions; number of major problems solved. |
| K→P (Know. to Purp.) | Supporting vision and decisions with knowledge; transforming what is known into long-term goals. | Experts formulating development plans from industry knowledge; teachers inspiring students' ambitions with profound knowledge. | Rationality of strategy or vision formulation (expert review); team acceptance of the vision. |
| W→D (Wisd. to Data) | Identifying and re-acquiring needed basic data from a high-level decision perspective. | After a decision, identifying the need for new data to verify a hypothesis and arranging further investigation. | Targetedness of data added after the decision; success rate of data verifying the hypothesis. |
| W→I (Wisd. to Info) | Concretizing decision ideas into communicable information. | Expressing an abstract strategy with specific indicators and plans. | Clarity of the information derived from strategy (subordinates' degree of understanding); communication efficiency. |
| W→K (Wisd. to Know.) | Elevating decision experience into general knowledge; accumulating crystallized wisdom. | Writing project decision experience into case-study materials; promoting solutions into industry standards. | Number of times refined experience is reused; number of standards or patents formed. |
| W→W (Wisd. to Wisd.) | Self-reflection and continuous optimization; learning from one decision to improve the next. | Reviewing decision failures and avoiding similar mistakes in the next decision. | Decision-improvement speed; decrease in the recurrence rate of similar problems. |
| W→P (Wisd. to Purp.) | Transforming decision outcomes into new, higher-level goals. | After completing the current goal, setting a more ambitious next-stage goal. | Challenge degree of the new goal (improvement over the previous goal); goal foresight. |
| P→D (Purp. to Data) | Proactively seeking relevant data based on goals. | Specifically monitoring energy-consumption data to achieve energy-saving goals. | Acquisition rate of goal-relevant data; effectiveness rate of data-based warnings. |
| P→I (Purp. to Info) | Filtering and focusing on information based on goals. | A sales manager closely watching daily sales reports and market dynamics against sales goals. | Relevance of the focused information to goals; number of times key information is overlooked (reverse indicator). |
| P→K (Purp. to Know.) | Learning new knowledge and updating the knowledge system, driven by goals. | Self-learning a new technology to achieve R&D goals and integrating it into the team knowledge base. | Quantity of new knowledge learned for the goal; application effect of the new knowledge. |
| P→W (Purp. to Wisd.) | Guiding high-level decisions and making decisive trade-offs based on ultimate goals. | An entrepreneur deciding on diversification trade-offs guided by vision; a leader resolutely taking extraordinary measures for a mission. | Consistency of decisions with the vision; decisiveness in major decisions (e.g., days of delay). |
| P→P (Purp. to Purp.) | Reflecting on and reshaping one's own purposes; adjusting goals as the environment and one's cognition change. | An entrepreneur adjusting the startup vision to realistic challenges to make it more feasible; an individual repositioning career goals. | Rationality of the goal revision (expert assessment); improvement in goal achievement after revision. |
Table 1 summarizes the meaning of each module and possible evaluation indicators. In practical application, indicators should be refined for the specific position and task, and indicator weights determined through research and analysis. Through this module table, we can conveniently construct the capability model for a given occupation; for example, combining the relevant modules in the table outlines the capability-requirement matrix for the aforementioned AI Trainer, Manufacturing Optimization Engineer, and Digital Operations positions. This structured method also helps identify capability shortcomings: for a given team, compare the distribution of members' indicators across these modules; if "W→P" is weak overall, the team lacks awareness of elevating successful experience into strategy, and should strengthen training in this aspect or introduce talents with this specialty.
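The team-level shortcoming analysis reduces to averaging members' module scores and flagging modules below a threshold. The team and scores below are invented for illustration:

```python
def weak_modules(team_scores, threshold=60):
    """Given each member's scores on DIKWP interaction modules (0-100),
    flag modules whose team average falls below a threshold."""
    modules = team_scores[0].keys()
    avg = {m: sum(p[m] for p in team_scores) / len(team_scores)
           for m in modules}
    return sorted(m for m, v in avg.items() if v < threshold)

# Illustrative team of three members, scored on a few of the 25 modules
team = [
    {"D→I": 78, "I→K": 70, "K→W": 66, "W→P": 52},
    {"D→I": 82, "I→K": 64, "K→W": 71, "W→P": 49},
    {"D→I": 75, "I→K": 68, "K→W": 60, "W→P": 58},
]
print(weak_modules(team))
```

Here only "W→P" averages below the threshold, which would prompt exactly the strategy-elevation training the text recommends.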
In summary, through the above cases and module mapping table, we have verified the powerful explanatory ability and practical value of the DIKWP model: it can not only theoretically describe the cognitive and growth process but also specifically guide job capability modeling and evaluation indicator design. For constantly emerging new occupations and positions, we can also use the same method for analysis, thereby timely updating talent cultivation and assessment systems, maintaining synchronization between education and industry. In the next section, we will discuss how to build the corresponding technical platform to truly implement these theories and methods.
6. Intelligent Assessment Platform Architecture and Implementation Suggestions
To put the aforementioned DIKWP talent evaluation system into practice, it is necessary to build an intelligent assessment platform integrating advanced technologies. This platform should support evaluation applications in multiple scenarios, including learning assessment in education, talent assessment within enterprises, and qualification certification at the industry level. At the same time, it must meet requirements for data security and technological autonomy, embodying the concepts of "semantic sovereignty" and "sovereign AI." Below we propose suggestions from the perspectives of platform functional architecture and implementation path.
6.1 Platform Core Functional Architecture
Based on the requirements described earlier, the intelligent assessment platform should possess the following core functional modules:
1. DIKWP Semantic Analysis Engine:
This is the foundation of the platform. It uses semantic understanding and knowledge graph technology to analyze and score users' answers or behaviors at the DIKWP levels. Specifically, it includes: Natural Language Processing (for parsing the semantics of text answers), knowledge graph matching (for comparing candidate answers with standard knowledge points for compliance), logical reasoning models (evaluating the correctness of reasoning chains), etc. The characteristic of this engine is its built-in DIKWP hierarchical rules, enabling it to output layered results during analysis, such as a score of 80 for the Data layer, 70 for the Information layer, etc., for a certain answer. Developing this engine requires a large amount of annotated data for training, while referring to the research results of Professor Yucong Duan's team in "semantic mathematics" to ensure the engine can better distinguish the characteristics of each cognitive layer. Due to the complexity of the Chinese language and culture, the semantic engine needs to be optimized for Chinese scenarios, supporting the parsing of professional corpora in different fields such as education, technology, and humanities. China has rich data and experience reserves in this regard, which can fully leverage local advantages to build semantic analysis models with independent intellectual property rights, thereby mastering the foundation of semantic sovereignty.
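A real engine of this kind requires trained NLP models and knowledge graphs; as a purely illustrative stand-in, layered output can be approximated by counting layer-typical cues in a free-text answer (the cue lists and 40-points-per-cue scale are arbitrary assumptions):

```python
# Assumed layer-evidence keywords; a real engine would use NLP models
LAYER_CUES = {
    "D": ["measured", "recorded", "raw"],
    "I": ["trend", "compared", "summarized"],
    "K": ["rule", "principle", "because"],
    "W": ["decided", "trade-off", "solution"],
    "P": ["goal", "in order to", "purpose"],
}

def layered_score(answer):
    """Toy stand-in for the semantic engine: score each DIKWP layer by
    how much layer-typical evidence appears in the answer text."""
    text = answer.lower()
    return {layer: min(100, 40 * sum(cue in text for cue in cues))
            for layer, cues in LAYER_CUES.items()}

ans = ("We recorded raw sensor data, summarized the trend, and decided "
       "on a solution in order to reach the quality goal.")
print(layered_score(ans))
```

The point of the sketch is the output shape: one score per layer rather than a single total, which is what makes the downstream radar charts and layer diagnostics possible.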
2. White-Box Behavior Recording and Link Modeling:
The platform should have built-in monitoring modules to record key behavioral information during user answering or training processes. As mentioned earlier, it tracks user operations such as clicks, inputs, interruptions, modifications, etc., and transforms these behavioral sequences into easily analyzable "cognitive link" models. To improve accuracy, the platform can draw on techniques from user experience research, such as mouse tracking, eye-tracking, etc., to capture user attention allocation. The link modeling module will map behavioral sequences to the DIKWP process, for example, identifying a pattern where a user first spends a lot of time reading data (long dwell time on D layer), then makes multiple attempts (repeated decisions on W layer), and finally modifies the initial goal (P layer adjustment). Through such analysis, one can gain a deeper understanding of the thinking process behind the user's answers. This link information can be used for individual feedback and also help researchers improve assessment questions (e.g., if most people take a detour on a certain question, it indicates the question stem might be ambiguous). This module needs to protect and anonymize user privacy, extracting only anonymized behavioral features meaningful for assessment, ensuring data sovereignty is on our side and personal privacy is not infringed.
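The mapping from a behavior sequence to per-layer dwell times can be sketched as follows, assuming a simplified log in which each event already carries the DIKWP layer the user is working in:

```python
def cognitive_link(events):
    """Map a timestamped behavior sequence to a visit order and per-layer
    dwell times, a minimal version of the link-modeling step."""
    dwell, order = {}, []
    for (layer, start), (_, end) in zip(events, events[1:]):
        dwell[layer] = dwell.get(layer, 0) + (end - start)
        if not order or order[-1] != layer:
            order.append(layer)
    return order, dwell

# Hypothetical log: (layer the user is working in, timestamp in seconds)
log = [("D", 0), ("D", 40), ("W", 90), ("W", 120), ("P", 150), ("P", 170)]
order, dwell = cognitive_link(log)
print(order)   # visit order of layers
print(dwell)   # seconds attributed to each layer
```

This is the structure the text describes: long D-layer dwell (reading data), repeated W-layer activity (attempts), then a late P-layer adjustment, all recoverable from anonymized event pairs.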
3. Adaptive Assessment and Generative Question Module:
To adapt to users of different levels and fields, the platform should implement adaptive assessment. This requires a built-in assessment algorithm that adjusts questions in real time based on user performance. Combined with large model technology, the platform can even generate personalized questions. For example, when detecting a student is weak in calculus knowledge (K layer), the platform can instantly use a GPT-like model to generate a targeted calculus problem for practice. Generative AI can also be used to produce questions of varying difficulty and different scenarios, increasing the diversity of the question bank. Additionally, the platform can recommend question combinations based on DIKWP modules, e.g., if enterprise HR wants to assess "Information → Knowledge" capability, it automatically assembles questions involving inductive summarization. Adaptive and generative modules ensure the flexibility and efficiency of assessment, also reducing the cost of manual question setting. However, when using generative AI, its output must be verified and reviewed to ensure questions are correct and unbiased. This requires tackling the controllable generation technology of large models, making them follow instructions and be safely usable on our sovereign platform.
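At its simplest, the adaptive logic is a staircase rule over question difficulty; real platforms would use IRT or similar psychometric models, and the step size and bounds below are arbitrary:

```python
def next_difficulty(current, correct, step=0.5, lo=1.0, hi=5.0):
    """Staircase rule for adaptive assessment: raise difficulty after a
    correct answer, lower it after a wrong one, clamped to [lo, hi]."""
    nxt = current + step if correct else current - step
    return max(lo, min(hi, nxt))

# Simulated answer pattern: right, right, wrong, right
level, path = 3.0, []
for ok in [True, True, False, True]:
    level = next_difficulty(level, ok)
    path.append(level)
print(path)   # difficulty trajectory across four questions
```

In a generative setting, the resulting difficulty level would parameterize the prompt sent to the question-generation model, which is how adaptivity and generation compose.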
4. Credit Transfer and Certificate Issuance Module:
The platform should connect with the national credit bank and qualification framework to realize the conversion of assessment results into credits and certificates. For example, if a student completes a certain vocational skill assessment on the platform and meets the standard, corresponding digital credits or skill badges are automatically generated and recorded in their credit bank account. After accumulating a certain amount of credits, they can apply to exchange them for nationally recognized qualification certificates or degree course credits. This requires data docking between the platform and the credit bank system, and ensuring the authenticity of the assessment process (e.g., using real-name + facial recognition anti-cheating measures). Additionally, blockchain technology can be introduced for evidence preservation, making the acquisition and use of each certificate traceable, preventing forgery and tampering. In design, the platform's credit transfer rules should be open and transparent, smoothly connecting with existing vocational qualification standards. For example, specify that completing the "Electrician Intermediate Skill Assessment" on the platform can offset a certain number of hours of offline practical assessment. This module reflects the social recognition of evaluation results, making talent evaluation no longer isolated, but integrated into the lifelong education system. The platform operator should cooperate closely with education and human resources departments, regularly updating conversion rules and adding connections for new industry certificates.
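The blockchain-style evidence preservation mentioned here can be illustrated with a minimal hash chain over certificate records (a sketch only, not a production ledger):

```python
import hashlib
import json

def append_record(chain, record):
    """Append a certificate event to a hash chain: each entry stores the
    hash of the previous one, so tampering breaks the chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    chain.append({"prev": prev, "record": record,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash; any altered record or link returns False."""
    for i, entry in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"prev": prev, "record": entry["record"]},
                          sort_keys=True)
        if (entry["prev"] != prev or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
    return True

chain = []
append_record(chain, {"cert": "Electrician-Intermediate", "holder": "u123"})
append_record(chain, {"cert": "Data-Annotation-Basic", "holder": "u456"})
print(verify(chain))   # altering any stored record makes this False
```

This gives exactly the traceability property described: each certificate's acquisition is verifiable, and any forgery or after-the-fact edit invalidates the chain from that point on.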
5. Visualized Decision Support Module:
Provide rich report displays and decision support functions for users (students, employees) and managers (teachers, HR, supervisors). Including individual DIKWP capability radar charts, progress curves; class or team talent matrices (e.g., using a nine-box grid to show the DIKWP level distribution of each member); and organizational level capability maps (e.g., overall level comparison of skilled talents at each layer in a certain region). These visual charts help different roles quickly understand assessment data and assist them in making decisions. For example, teachers can adjust teaching focus based on class radar charts; HR can use talent matrices to decide on training resource allocation; the government, seeing the industry capability map, identifies shortages of talents at certain levels, thereby adding relevant cultivation programs. To achieve these functions, the platform needs built-in data analysis and visualization tools, allowing users to customize queries and reports. For example, a principal might want to see the DIKWP growth trend of students in their school over the past 5 years, HR might want to filter employees in the company who rank in the top 10% on the W layer score, etc. Through a friendly interface and interaction, these data will be transformed into insights, helping managers formulate measures and individuals make learning plans.
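The nine-box grid mentioned above can be computed from two scores; the band cut-offs and cell labels below are one common convention, chosen here purely for illustration:

```python
def nine_box(performance, potential):
    """Place a person in a 9-box talent grid from a performance score and
    a potential score (e.g. a P-layer KQ score), each on a 0-100 scale."""
    band = lambda x: 0 if x < 40 else (1 if x < 70 else 2)
    labels = [["risk", "inconsistent", "high potential"],
              ["solid", "core", "growth"],
              ["workhorse", "high performer", "star"]]
    return labels[band(performance)][band(potential)]

print(nine_box(85, 90))   # high on both axes
print(nine_box(50, 30))   # mid performance, low potential
```

Aggregating these cell assignments over a class or team yields the talent matrix the module is meant to display.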
6.2 Platform Implementation and Sovereignty Guarantee
In the actual promotion of the assessment platform construction and application, attention needs to be paid to the following points to guarantee technological leadership and security controllability:
1. Autonomous and Controllable Core Technology:
Key technologies used by the platform such as natural language processing, big data analysis, knowledge graph construction, etc., should prioritize independent R&D or controllable open-source solutions, minimizing dependence on foreign closed technologies. This involves the realization of "sovereign AI." Specific practices include: using large language models trained independently in China to parse Chinese test questions and answers, ensuring high accuracy in the context of local language and culture, while eliminating potential value biases from foreign models; building domestic computing power platforms to train the aforementioned models, ensuring data does not leave the country; using domestic databases, middleware, and other basic software to reduce supply chain risks. In recent years, domestic research in knowledge computing and cognitive intelligence has achieved solid results, and some open-source frameworks (such as deep learning frameworks supporting knowledge graphs) can be directly used or secondarily developed. Therefore, achieving autonomous control of the core technology of the assessment platform is entirely possible and is also an inherent requirement for guaranteeing "semantic sovereignty." At the same time, autonomous technology is also conducive to customized optimization according to the characteristics of China's education and talent evaluation, avoiding the "one-size-fits-all" difficulty of adapting imported products.
2. Data Security and Privacy Protection:
The platform will accumulate a large amount of personal learning and assessment data, which is highly sensitive (including scores, capability weaknesses, and even behavioral trajectories). Strict data sovereignty and privacy protection mechanisms must be established. On the one hand, clarify data ownership rights, e.g., student assessment data belongs to the student and their school, and relevant departments need approval according to regulations to access it, prohibiting commercial profiteering. On the other hand, technically, methods like distributed storage, permission-based encryption, etc., can be used to prevent single-point leakage. Access to sensitive data should have anonymization strategies, e.g., when generating group statistical reports, only display proportions without involving individual identities. The state can issue corresponding laws and standards for this, requiring assessment platform operators to obtain qualifications and undergo regular audits to ensure their use of user data complies with regulations. This reflects one aspect of "semantic sovereignty" at the data level: talent assessment data is a national strategic resource, must be controlled by trusted local institutions, cannot be obtained by external forces to analyze China's talent structure, and certainly cannot be abused by commercial institutions, affecting fairness.
3. Autonomy of Values and Evaluation Standards:
When designing the assessment platform, attention must also be paid to the issue of evaluation orientation. This is precisely the embodiment of "semantic sovereignty" at the value dimension. We must ensure that what the platform evaluates and rewards aligns with China's talent concept and core values. For example, in the Purpose layer evaluation, affirmation of social responsibility and patriotism should be reflected; in the Wisdom layer evaluation, algorithm bias leading to disadvantages for certain thinking styles should be avoided. For instance, if a certain foreign large model is used to score essays without adjustment, it might favor Western discourse systems and give low scores to articles expressing patriotic sentiments. This situation must be prevented. Therefore, the models used by the platform must incorporate mainstream Chinese values. For example, by adding samples with positive value guidance during model training, manually reviewing and fine-tuning the model output results, etc. When unreasonable tendencies are found in assessment results (e.g., talents of a certain style systematically score low), the reasons should be analyzed promptly and the algorithm adjusted, preventing technological bias from imperceptibly affecting the direction of talent cultivation.
4. Infrastructure and Supply Chain Security:
The assessment platform is important infrastructure for digital education and governance, and should be deployed in a nationally trusted cloud environment or on local servers, avoiding reliance on overseas cloud services. Sensitive applications such as national exams and vocational qualification assessments in particular require independent physical and network environments, with security measures meeting classified-protection requirements to withstand cyber attacks. Emergency plans should also be refined to cope with contingencies such as system failures and large-scale cheating attempts, ensuring the continuity and fairness of assessment. China already has experience with online college entrance examination grading and computerized graded exams, whose security schemes can be referenced. In the supply chain, hardware, operating systems, and databases should use domestic products wherever possible to reduce the risk of implanted backdoors. Additionally, assessment sandbox environments can be built to test every new version of the assessment algorithms and all generative AI outputs, ensuring no security risks or inappropriate content exist before deployment. Together, these measures build a "national team" talent assessment platform that safeguards our education and talent strategy.
5. Step-by-Step Implementation and Pilot Verification:
The construction of such a huge platform cannot be achieved overnight; it is recommended to adopt a gradual, pilot-first strategy. First, gain successful experience in local scenarios, then gradually expand the scope, eventually forming a unified national network. Specifically, it can be divided into three steps:
Step 1: Select several universities and large and medium-sized enterprises for pilot projects. Universities focus on using the platform for course exams and graduation-design defenses; enterprises focus on recruitment assessment and on-the-job training evaluation. The pilots verify the effectiveness and user acceptance of the platform's core functions (semantic scoring, knowledge maps, white-box analysis) so that improvements can be made promptly. For example, a graduating class in one major could first use the platform for a comprehensive capability assessment, which is then compared with teachers' traditional evaluations to examine correlations and discrepancies and to adjust algorithm parameters accordingly.
Step 2: Promote the platform to the regional or industry level. For example, build a unified vocational-education assessment platform within a province and connect it to the provincial credit bank for capability certification of secondary and higher vocational students; or promote it within an IT industry association for talent assessment exchange among member companies. Regional and industry application further refines platform standards, especially data sharing and mutual recognition between institutions, solving the cross-institutional trust problem (possibly via blockchain or similar mechanisms). At this stage, corresponding industry standards or local regulations should also be issued, clarifying how assessment results are used in recruitment and personnel decisions and building an institutional basis for national rollout.
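The cross-institutional trust mechanism hinted at in Step 2 ("possibly via blockchain") can be illustrated, at its simplest, as a tamper-evident hash chain over credential records shared between institutions. This is a minimal sketch under assumed record fields, not a full distributed ledger:

```python
import hashlib
import json


def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a credential record together with the previous link's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


# Hypothetical mutual-recognition ledger shared by two institutions.
chain = []
prev = "0" * 64  # genesis value
for rec in [
    {"issuer": "Province-A Credit Bank", "holder": "stu-001", "credits": 4},
    {"issuer": "Vocational-College-B", "holder": "stu-001", "credits": 3},
]:
    prev = record_hash(rec, prev)
    chain.append({"record": rec, "hash": prev})


def verify(chain) -> bool:
    """Recompute every link; tampering with any record breaks all later hashes."""
    prev = "0" * 64
    for link in chain:
        if record_hash(link["record"], prev) != link["hash"]:
            return False
        prev = link["hash"]
    return True


print(verify(chain))  # True for the untampered chain
```

Because each hash covers its predecessor, no single institution can silently rewrite an earlier credential — which is the trust property cross-institutional mutual recognition needs.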
Step 3: Integrate the various parties at the national level to form a national talent evaluation service platform. This could be an independent subsystem of the national smart education platform, or an extension of that platform's existing functions. By then, it will serve not only education and internal enterprise needs but also provide assessment services to individuals in society (e.g., submitting one's work online for assessment to obtain a capability certificate). At the same time, national-level regulatory and support institutions will formally begin operating, such as a "National Intelligent Assessment Center" responsible for operation, maintenance, and continuous R&D. Once this step is complete, our talent evaluation will truly enter a new stage of digitalization and intelligence.
After the platform is fully implemented, the benefits will be enormous. In talent cultivation, teachers can understand students more accurately and teach according to their aptitude; students can attempt a "level" multiple times, growing through continuous feedback, which is far healthier and more scientific than the old high-pressure system of judging a lifetime by a single exam. In employment, enterprises reduce the inefficiency of blindly screening resumes and gain a more quantitative basis, truly achieving dynamic matching of "jobs finding people" and "people fitting jobs." At the macro level, decision-makers can draw insights into regional and industry talent structures and trends from assessment big data, providing quantitative support for policy formulation. This will greatly enhance the efficiency and accuracy of talent resource allocation in our country and improve the fairness of education and employment.
It is worth noting that as the data accumulated by the platform grows richer, we can also explore new ways for artificial intelligence to feed back into education and management. For example, assessment big data could train a "tutoring AI" that provides learners with real-time Q&A and strategy suggestions, and industry talent maps could train a "career planning AI" that offers intelligent suggestions for individual career paths. All of these possibilities depend on first doing the assessment work thoroughly, because scientific evaluation is both an endpoint and a starting point: the endpoint lies in witnessing the results of cultivation, and the starting point lies in guiding learning and development towards higher goals in the next stage.
7. Policy Path and Future Outlook
The DIKWP interaction model-driven talent growth evaluation system is a systematic project spanning education reform, enterprise employment, industry standards, and technological innovation. For this system to truly take effect, policy-level safeguards and a shift in social attitudes are needed in addition to the technical platform itself. Finally, we discuss this from the perspectives of policy measures and future outlook.
7.1 Policy Path
1. Top-Level Design and Institutional Guarantee:
The government should incorporate the DIKWP talent evaluation system into the national talent development strategy. Specifically, the "National Education Plan" and "National Human Resources Plan" can explicitly propose "exploring diversified evaluation mechanisms based on the DIKWP model," giving the direction explicit policy recognition. In terms of laws and regulations, the "Implementation Plan for Education Evaluation Reform," "Vocational Education Law," "Lifelong Education Regulations," etc., should be updated in due course with clauses supporting process evaluation and credit-bank mutual recognition. For example, the revised Vocational Education Law already emphasizes industry-education integration and lifelong learning; it can be further detailed so that learning outcomes obtained through intelligent assessment platforms carry validity equivalent to traditional certificates. Additionally, a cross-departmental coordination mechanism should be established, led by the Ministry of Education and the Ministry of Human Resources and Social Security together with departments such as Industry and Information Technology and Finance, forming a special working group to coordinate and promote related work. This ensures that when the new evaluation system is implemented, all links work together without shifting responsibility between departments.
2. Pilot Demonstration and Phased Promotion:
As noted in the implementation suggestions, regions and units willing to reform and innovate should be selected to conduct pilots first. The central government can set up special funds to support pilot units in purchasing equipment, developing software, and training personnel. Where pilot results are significant, the experience should be promoted nationwide through authoritative channels such as the National Education Examinations Authority and vocational skill testing authorities, using pilots to drive the overall rollout. Promotion can also proceed gradually by education stage and professional field: first in vocational and higher education, where demand for reform of skill and capability evaluation is strong and resistance relatively small, then extending into basic education to assist quality evaluation in primary and secondary schools (e.g., introducing DIKWP indicators into comprehensive quality portfolios). Likewise, implement it first in new occupations and emerging industries, where standards are sparse and new frameworks more readily accepted; after maturing, extend it to traditional fields, running in parallel with existing standards for a period before unifying at an opportune time.
3. Training Enhancement and Public Promotion:
The implementation of the new system requires a large number of professionals familiar with DIKWP concepts and intelligent assessment tools. The government should support universities in offering related courses or training programs to cultivate new types of talent such as "intelligent evaluators" and "data-analysis teachers." Continuing education should be conducted for existing teachers, assessors, and HR personnel so they can use the platform and interpret its reports. At the same time, publicity and guidance for the general public should be strengthened: mainstream media can report pilot results and expert interpretations of the reform's significance, increasing social recognition of process evaluation and diversified evaluation. In particular, dispel the distrust of some parents and employers towards new evaluation methods, establishing the idea that "everyone can become talented, and growth is not measured by scores alone." In the long run, this helps create a favorable atmosphere for reform, shifting China's talent evaluation from "score-only" and "diploma-only" towards a more scientific and rational approach.
4. Industry-Education Integration and Standard Co-construction:
Encourage industry enterprises to participate in formulating evaluation standards and promoting their application. For example, invite leading enterprises and industry associations to help develop assessment question banks, bringing cutting-edge practical cases into assessments so that evaluation content connects seamlessly with actual job requirements. When industry talent standards are published, release corresponding DIKWP assessment guidelines at the same time, guiding enterprises to use the new framework in recruitment and internal training. The government can build public service platforms to promote mutual recognition of evaluation results: for example, universities and enterprises can share students' capability assessment reports (with student consent) as recruitment references, reducing repetitive assessment. This promotes the socialization of talent evaluation standards, breaking down departmental barriers and making talent flow smoother. Ultimately, through co-construction and sharing by all stakeholders, evaluation reform can achieve an agglomeration effect, becoming a "public good" for improving human capital quality.
5. International Cooperation and Standard Output:
On the premise of mastering semantic sovereignty and technological leadership, we should also actively participate in international dialogue, publicize and promote the DIKWP evaluation concept, enhancing China's voice in the field of global education evaluation reform and skill standards. On the one hand, cooperate with UNESCO and other organizations to integrate the DIKWP framework into their talent capability framework initiatives, contributing Chinese wisdom. On the other hand, serve the talent cultivation needs of countries along the "Belt and Road," driving the export of China's education technology and artificial intelligence industries through outputting assessment platforms and standards. It should be noted that in international cooperation, we must still adhere to prioritizing our own interests: learn from advanced foreign experience, but do not copy Western evaluation models wholesale; instead, establish a Chinese characteristic assessment paradigm, letting the world see our concepts and achievements in cultivating innovative talents.
7.2 Future Outlook
Looking ahead, the talent growth evaluation and performance assessment system based on the DIKWP model will have profound impacts:
Firstly, in terms of improving talent quality, it will promote the continuous emergence of comprehensively developed talents. Since evaluation no longer solely focuses on temporary results but on long-term development and comprehensive capabilities, students and employees will be positively guided to strive to improve their qualities such as high-order thinking, innovation ability, and social responsibility. This means that talents entering the workforce in the future will be more adaptable and creative, capable of handling complex and changing work and undertaking greater missions. This provides solid talent support for China to accelerate the construction of an innovative nation and achieve high-quality development.
Secondly, in terms of transforming education models, the DIKWP assessment system will force reforms in teaching. Teachers will pay more attention to cultivating students' Wisdom- and Purpose-layer capabilities rather than merely transmitting inert knowledge; schools will gradually establish evaluation systems centered on process assessment and capability output; and classroom teaching will become more flexible, diverse, and practice-oriented. Future classrooms may well feature "AI teaching assistants" accompanying students' learning and providing real-time assessment feedback, while teachers teach according to aptitude and give differentiated guidance. This greatly improves the efficiency and precision of teaching, truly giving every child the opportunity to shine.
Thirdly, in terms of human resource management, enterprises will see upgrades in employment and training models. Through KQ assessment, enterprises can "let the data speak," objectively identifying and cultivating talent. Each employee will have a digital capability file, promotions will be evidence-based, and job matching rational. Employees themselves can clearly see their growth paths, stimulating intrinsic motivation. This will create a fairer, more transparent corporate culture that encourages progress and enhances overall organizational effectiveness. At the same time, talent decisions supported by big data will become more scientific, reducing the subjective bias of personal judgment and allowing genuinely high-potential talent to stand out, forming a healthy ecosystem in which everyone's talent is fully utilized.
Fourth, in terms of social fairness and mobility, the new evaluation system is expected to partially break the chronic problems of "prestigious-school-ism" and "diploma-ism," allowing diverse growth paths to be recognized and respected. For example, through credit banks and capability certification, a vocational-school graduate who continuously learns and improves at work can obtain the equivalent of a bachelor's or master's qualification based on accumulated credits and certificates, competing on a more equal footing with graduates of prestigious schools when seeking jobs. This will promote horizontal and vertical connections between education and employment, building a learning society. In such an environment, everyone has the opportunity to raise their "knowledge quotient" through continuous effort and achieve upward mobility, enhancing the innovation vitality and cohesion of society as a whole.
Finally, from the perspective of the relationship between artificial intelligence and human development, the implementation of the DIKWP evaluation system is also a new paradigm of human-machine collaboration. AI is no longer just a tool for scoring objective questions in exams, but deeply participates in capability evaluation and cultivation feedback, becoming an "efficiency booster" and "magnifying glass" for human wisdom. Humans better understand and develop themselves through AI, while AI also continuously improves through interaction with humans. The "artificial consciousness white-box assessment" proposed by Professor Yucong Duan for evaluating AI models is consistent in concept with our white-box assessment of humans; both hope to build an interpretable, sustainably optimized intelligent system. Perhaps in the future, we will see that evaluating a team or an organization requires assessing both the human wisdom of team members and the "artificial wisdom" level of the AI assistants they use; neither is dispensable. This will be a new characteristic of talent evaluation in the AI era: the evaluation subject expands from a single human to the overall intelligence of "human-machine coupling." It is conceivable that our exploration in the field of talent evaluation this time will also provide ideas for machine intelligence evaluation, achieving mutual promotion.
Of course, the promotion of the new system will also face some challenges and problems, requiring us to continuously research and improve. For example, how to ensure assessment results are not abused for utilitarian purposes? How to prevent unfairness caused by systematic biases in technology? How to maintain the openness of the evaluation system, accommodating diverse values and personalities? These all need to be treated cautiously in practice. But regardless, the general direction is clear: evaluation reform centered on people and development is imperative.
Conclusion: Based on the DIKWP interaction model, this paper has explored a construction path for a talent growth evaluation system for the new era. We believe that with deepening theory, advancing technology, and policy support, this system will turn from blueprint to reality in the near future. By then, we will see a welcome transformation: exams will no longer be shackles and evaluations no longer cold numbers, but ladders by which everyone can achieve a better self; enterprises and society will be full of vitality because the right talent is employed and cultivated. Standing at the new starting point where the "Two Centenary Goals" converge, we contribute to building a strong education nation and a strong talent nation, helping every person with dreams grow into a better version of themselves.
References
中国人工智能学会. 2024中国人工智能系列白皮书——人工智能基础选编[R/OL]. (2024)[2025-10-13]. https://www.scribd.com/document/892224156/2024-中国人工智能系列白皮书-人工智能基础选编.
段玉聪. 基于网状DIKWP模型的体验自我、叙事自我语义数学人工意识重构[EB/OL]. (2025)[2025-10-13]. https://www.scribd.com/document/922040880/.
段玉聪. DIKWP人工意识模型与相关理论分析报告[R/OL]. (2024)[2025-10-13]. https://www.researchgate.net/publication/393637609.
Wu, K.; Duan, Y. DIKWP-TRIZ: A Revolution on Traditional TRIZ Towards Invention for Artificial Consciousness[J]. Appl. Sci. 2024, 14, 10865. https://doi.org/10.3390/app142310865.
OECD. Beyond Academic Learning: First Results from the Survey of Social and Emotional Skills[R]. Paris: OECD Publishing, 2021.
段玉聪. 主权AI与语义主权:中国数字主权的…[EB/OL]. (2024)[2025-10-13]. https://listenhub.ai/episode/ugc-68e72e5704ea152dec35e39a.
科技日报. 全球首个大语言模型意识水平“识商”白盒DIKWP测评2025报告发布[N/OL]. (2025-02-19)[2025-10-13]. https://www.stdaily.com/web/gdxw/2025-02/19/content_298792.html.
段玉聪. DIKWP白盒测评LLM黑盒基准的能力映射元分析[EB/OL]. (2025)[2025-10-13]. https://zhuanlan.zhihu.com/p/26428209440.
Duan Y. Intelligent Educational Cognitive Service Platform Based on DIKWP Concept-Semantic Interaction[R/OL]. (2025)[2025-10-13]. https://www.researchgate.net/publication/392894636.
北森(BEISEN). 企业如何搭建岗位学习成长地图?[EB/OL]. (2023)[2025-10-13]. https://www.beisen.com/special/198.htm.
重庆科技报. “人工智能训练师”国家职业技能标准发布[EB/OL]. (2021-12-07)[2025-10-13]. https://epaper.cqrb.cn/kjb/2021-12/07/08/cqkjb2021120708.pdf.
第十三届全国人大常委会. 中华人民共和国职业教育法(2022年修订)[EB/OL]. (2022-04-20)[2025-10-13]. https://www.hcvt.cn/uploadfile/2022/0518/20220518111437373.pdf.
主权AI下的语义主权:大模型与高质量数据集[EB/OL]. (2024)[2025-10-13]. https://www.sdbdra.cn/newsinfo/8531201.html.
大语言模型白盒测评(DIKWP)与黑盒测评(LLM)对比:以DeepSeek与OpenAI等为例[EB/OL]. (2025)[2025-10-13]. https://www.researchgate.net/publication/389108581_dayuyanmoxingbaihecepingDIKWPyuheihecepingLLMduibiyiDeepSeekyuOpenAIdengweili.