Constructing Virtual Health Personas and Designing Multi-Dimensional Health Interface Systems in Proactive Medicine Architecture
Yucong Duan
International Standardization Committee of Networked DIKWP for Artificial Intelligence Evaluation (DIKWP-SC)
World Academy for Artificial Consciousness (WAAC)
World Artificial Consciousness CIC (WAC)
World Conference on Artificial Consciousness (WCAC)
(Email: duanyucong@hotmail.com)
Introduction
Contemporary healthcare is undergoing a paradigm shift from passive diagnosis and treatment to proactive health management. Under the traditional model, the medical system is disease-centered: patients seek treatment only after symptoms appear, and health data is mostly recorded passively in medical records. This Passive Medicine leads to lagging prevention and fragmented chronic disease management, resulting in large numbers of preventable complications. According to the World Economic Forum, about 90% of healthcare expenditures in the United States go to the treatment of chronic diseases and mental illnesses, yet many chronic diseases could be significantly mitigated through early intervention. The passive model, which relies solely on mending after the fact, is evidently not only costly but also of limited effectiveness. To cope with challenges such as the heavy burden of chronic disease and aging populations, countries have begun to advocate a "health-centered" approach, achieving Proactive Guardianship of individuals throughout their entire life cycle via digital technology. This has given rise to concepts such as the "digital health consciousness system," which uses wearable devices, IoT sensing, and artificial intelligence to perceive an individual's status in real time and provide decision support, thereby shifting the point of medical intervention earlier. For example, an AI model developed by DeepMind can continuously analyze electronic medical record data and predict critical conditions such as acute kidney injury up to 48 hours in advance, giving doctors valuable time for early intervention. Practice has shown that continuous data analysis and intelligent prediction can proactively identify health risks, issue warnings, and guide preventive interventions, greatly enhancing the foresight of the medical system.
Proactive Medicine is a concept born against this backdrop. Its core is to center on health throughout the entire life cycle, upgrading medicine from passive repair to proactive assurance in a Purpose-driven way. Proactive medicine emphasizes "activating" health data that was previously dormant in databases. Through the Data-Information-Knowledge-Wisdom-Purpose (DIKWP) intelligent closed loop, it transforms data into meaningful information and actionable wisdom decisions, directly serving an individual's health goals. In this architecture, the technology system seems to endow medicine with a "health consciousness," enabling it to predict risks before diseases occur and proactively take measures, shifting "treating disease" forward to "preventing disease." Proactive medicine is not only an innovation of technical means but also represents a transformation of medical philosophy: changing the person from a passive patient to an active manager of their own health, and shifting the medical system from a "firefighting mode" to a "fire prevention mode," pursuing a deeper understanding and more effective control over life and health.
Based on the theory of proactive medicine, especially the core content and structure of its fourth chapter, this article deeply explores the Construction of Virtual Health Personas and the Design of Multi-Dimensional Health Interface Systems within the Proactive Medicine Architecture. The article first clarifies the philosophy of the proactive medicine architecture and its cognitive framework, the DIKWP model. It then focuses on analyzing the semantic cognitive mechanisms, construction logic, and human-machine interaction mechanisms of the Virtual Health Persona, and analyzes the paths of individual health digital modeling and human-machine collaboration. Next, it elaborates on the design principles of the multi-dimensional health interface system, including key technologies such as data sensing mechanisms, personalized modeling algorithms, and feedback closed-loop structures, and then proposes an integrated architecture model of human-machine-system fusion. Finally, it looks forward to future development directions and summarizes the full text. Through systematic theoretical reconstruction and expansion, this article aims to present the systematicity, philosophical depth, and feasible technical connotations embodied in proactive medicine in the construction of virtual health subjects and their system interface design.
The Proactive Health Architecture Philosophy and the DIKWP Cognitive Model
The design philosophy of the Proactive Medicine Architecture originates from the systematic abstraction of human health and the cognitive processes of artificial intelligence. Its core lies in establishing a closed loop from data to Purpose, enabling AI to continuously perceive, analyze, and intervene in health states. This closed loop is clearly represented by the DIKWP model. DIKWP is an extension of the classic "Data-Information-Knowledge-Wisdom (DIKW)" model, adding a "Purpose" layer at the highest level, thus forming a five-layer structure: Data (D) → Information (I) → Knowledge (K) → Wisdom (W) → Purpose (P). In this architecture:
·Data Layer (D): Collects raw personal health-related data, such as physiological signals, behavioral records, environmental indicators, etc. This is the foundation for the system to perceive the world.
·Information Layer (I): Pre-processes and aggregates raw data to extract meaningful features and indicators. For example, converting ECG signals into heart rate variability indicators, or calculating the daytime fluctuation range from blood glucose monitoring values.
·Knowledge Layer (K): Combines medical domain knowledge to interpret and correlate the features extracted at the information layer, forming a preliminary cognition of the health state. For example, mapping "large daytime blood glucose fluctuations" to the possible medical meaning "pancreatic function fluctuations or medication adherence issues," providing a basis for further decision-making.
·Wisdom Layer (W): Makes comprehensive judgments and decisions based on knowledge. This layer is equivalent to the clinical doctor's decision-making process, synthesizing multi-faceted information (including medical guidelines, past experience, etc.) to make specific intervention recommendations or health strategies. The output of the wisdom layer is often a personalized plan, such as adjusting a certain drug dosage, formulating an exercise plan, etc.
·Purpose Layer (P): Represents the ultimate health goals and driving force of Purpose that the system pursues; it is the guiding direction of the entire closed loop. In proactive medicine, the Purpose is usually set by the individual or the medical team, such as "control blood pressure to the target level within six months" or "help the patient lose 5 kg." The Purpose layer ensures that wisdom-based decisions are aligned with the person's ultimate health goals and do not deviate from a valuable direction.
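The five-layer flow described above can be sketched as a minimal pipeline. This is an illustrative assumption only: the function names, the 4.0 mmol/L fluctuation cut-off, and the Purpose structure are invented for the example and are not part of any DIKWP reference implementation.

```python
# Minimal sketch of the five DIKWP layers for the blood-glucose example.
# All thresholds, labels, and the Purpose target are hypothetical.

def information_layer(glucose_readings_mmol):
    """D -> I: aggregate raw readings into a meaningful feature."""
    fluctuation = max(glucose_readings_mmol) - min(glucose_readings_mmol)
    return {"daytime_glucose_fluctuation": fluctuation}

def knowledge_layer(info):
    """I -> K: attach a medical interpretation to the extracted feature."""
    if info["daytime_glucose_fluctuation"] > 4.0:  # hypothetical cut-off
        return {"finding": "large daytime glucose fluctuation",
                "possible_meaning": "pancreatic function fluctuation or adherence issues"}
    return {"finding": "stable daytime glucose", "possible_meaning": None}

def wisdom_layer(knowledge, purpose):
    """K -> W: decide an intervention, constrained by the Purpose layer (P)."""
    if knowledge["possible_meaning"] and purpose["goal"] == "reduce_hba1c":
        return ["review medication timing", "increase monitoring frequency"]
    return ["maintain current plan"]

# Purpose set by the individual or medical team, e.g. "HbA1c < 7% in 6 months".
purpose = {"goal": "reduce_hba1c", "target": "< 7% within 6 months"}
readings = [5.2, 6.1, 9.8, 4.9, 8.7]  # mmol/L over one day (synthetic data)

info = information_layer(readings)
knowledge = knowledge_layer(info)
plan = wisdom_layer(knowledge, purpose)
print(plan)  # -> ['review medication timing', 'increase monitoring frequency']
```

The point of the sketch is the direction of constraint: the Wisdom layer never emits a recommendation that is not checked against the Purpose layer's goal.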
The introduction of the "Purpose" layer transforms AI from a purely passive reasoning tool into a proactive entity driven by goals. Traditional AI often lacks the constraint of high-level purpose and may only focus on optimizing a certain indicator while ignoring the overall interest. In the DIKWP model, a clear health Purpose provides value criteria for AI decision-making. For example, for a diabetic patient, if the Purpose layer sets "reduce HbA1c to below 7% within half a year," the system will adjust the work of each layer around this goal: the Information layer will focus more on blood glucose and related behavioral data, the Knowledge layer will retrieve knowledge related to diabetes management, the Wisdom layer will formulate medication and lifestyle intervention plans, and the Data layer will increase the frequency of blood glucose monitoring to verify the effect. In this way, every step from data collection to intervention action is consistent with the final health Purpose and does not operate in isolation. Studies have shown that embedding clear Purpose in AI helps to avoid uncontrolled optimization problems in the system, ensuring its behavior is always constrained within a range that is beneficial to the user. In fact, the DIKWP framework ensures that technical activities always serve human health goals, transforming passive data utilization into active Purpose-driven action.
It is worth emphasizing that the DIKWP layers are not in a linear pipeline relationship, but form a networked interaction structure through bidirectional semantic mapping. Information can be abstracted from the bottom up, level by level, or adjusted and fed back from the top down. This bottom-up and top-down combined semantic loop gives the system dynamic adaptive capabilities. For example, when the Data layer captures new changes in the patient's environment or behavior, the upper-level Information and Knowledge will be updated accordingly, and the Wisdom decision may adjust the intervention plan; conversely, when the Purpose layer changes the goal according to stage progress (e.g., after achieving the 5kg weight loss, upgrading to maintaining a normal BMI), the data collection focus of the lower layers will also change accordingly to obtain information related to the new Purpose. This two-way feedback between upper and lower levels ensures that all layers of the system form a semantic closed loop: each decision is checked for consistency in a multi-layer context, conforming to the patient's overall situation and final goals. From an information science perspective, this is equivalent to reducing system entropy and maintaining the synchronization of the semantic field and the conceptual field, which is a necessary condition for maintaining a healthy steady state. When the patient's true health state (semantic field) deviates from the AI's internal model (conceptual field), the feedback mechanism will prompt the two to realign, thereby maintaining the system's semantic self-consistency and stability. Through such closed-loop regulation, the DIKWP architecture endows the proactive medicine system with a self-calibration capability similar to that of a biological organism, enabling it to promptly detect and correct deviations and always operate around the patient's health Purpose.
In summary, the proactive medicine architecture, with the DIKWP model as its theoretical cornerstone, achieves a full-link penetration from raw data to health Purpose. It integrates the AI cognitive architecture and the medical decision-making process: the underlying 24/7 multi-source data perception provides the "sensory organs," the upper layer integrating medical knowledge and value orientation forms the "intelligent brain," and then the "action limbs" are commanded by a clear Purpose to implement interventions. Such an architecture has a strict hierarchical division of labor, yet maintains overall collaboration through semantic feedback, reflecting a high degree of techno-philosophical consistency. As advocated by the proactive medicine philosophy, it continuously injects "negative entropy" into the life system at the information level, transforms disordered data into orderly knowledge and action, and turns potential disease risks into opportunities for maintaining health. In this process, humans and AI are no longer separate subjects, but semantically collaborative partners, sharing health Purpose, co-shouldering decision-making logic, and gradually nurturing a "common self" that integrates human values and machine intelligence. This is precisely the higher-level vision of proactive medicine: technology is no longer just an external tool, but becomes a part of human health self-regulation, achieving true doctor-patient-machine integration and digital-rational symbiosis.
The Semantic Cognitive Mechanism of Virtual Health Personas
In the digital system of proactive medicine, the Virtual Health Persona plays the role of the patient's digital "mind" and "will." If the health digital twin is the patient's "body" and "organs" in the virtual world, mapping their physiological state and disease models, then the virtual health persona is the patient's digitized psychological and volitional proxy, reflecting the cognitive and decision-making abilities regarding the patient's health aspects. The virtual health persona is driven by AI, represents a specific patient, interacts with them in a personalized way, and provides health management support. It has certain personality traits, communication styles, and decision-making preferences, much like an experienced private health coach who understands the user's temperament. Unlike a simple chatbot, the virtual health persona has an underlying metacognitive mechanism—it can not only perform preset health tasks but also self-evaluate and adjust its own behavioral strategies to better adapt to user needs and health goals. In other words, it will continuously reflect during the service process: "Is the advice I'm providing effective? Do I need to change my approach?" This self-reflection capability allows the virtual persona to continuously optimize itself, evolving towards a higher level of "digital mind."
To realize a virtual health persona, it is necessary to integrate simulation methods for human self-awareness and personality modeling. The DIKWP model provides a feasible path for this: allowing AI to build its own semantic self-model internally, i.e., taking its own cognitive process as data and inputting it again for self-monitoring and adjustment. Specifically, the virtual persona will record the effect of each interaction with the user, such as whether the user followed the advice, what their emotional feedback was, and whether the corresponding health indicators improved. This information becomes the basis for AI to adjust its own strategies: if it finds that a user prefers humorous encouragement and dislikes strict supervision, AI will change its tone accordingly; if a certain type of suggestion is often ignored, AI will analyze the reason and try new proposal methods. This adaptive process based on semantic feedback makes the virtual persona "understand" the user more and more, and also become better at promoting positive change in the user. From a cognitive perspective, the virtual persona is equivalent to having a reinforcement learning-like ability. Each human-machine interaction is a round of experimentation, continuously updating strategies to maximize long-term health benefits.
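The "reinforcement learning-like" strategy adjustment mentioned above can be sketched as a simple multi-armed bandit over communication strategies: the persona reinforces whichever style the user actually responds to. The strategy names, the epsilon value, and the binary reward are all illustrative assumptions, not a prescribed design.

```python
import random

# Hypothetical sketch: each communication strategy is an arm of a bandit;
# the reward signal is whether the user followed the resulting advice.

STRATEGIES = ["humorous_encouragement", "strict_supervision", "data_explanation"]

class PersonaPolicy:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.value = {s: 0.0 for s in STRATEGIES}  # estimated effectiveness
        self.count = {s: 0 for s in STRATEGIES}

    def choose(self):
        if random.random() < self.epsilon:           # occasionally explore
            return random.choice(STRATEGIES)
        return max(self.value, key=self.value.get)   # otherwise exploit

    def feedback(self, strategy, followed_advice):
        """Metacognitive update: 'Is the advice I'm providing effective?'"""
        reward = 1.0 if followed_advice else 0.0
        self.count[strategy] += 1
        # incremental mean update of the strategy's estimated value
        self.value[strategy] += (reward - self.value[strategy]) / self.count[strategy]

policy = PersonaPolicy()
policy.feedback("humorous_encouragement", True)   # user acted on the advice
policy.feedback("strict_supervision", False)      # user ignored the advice
```

A real persona would of course condition on context and use long-horizon rewards (health-indicator improvement, not single-turn compliance), but the loop structure, act, observe, update, is the same.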
In addition to adaptive learning, the virtual health persona also needs to have controllable and transparent personality attributes at a higher semantic level. Researchers have proposed constructing personality templates through semantic tags and knowledge graphs, associating specific personality traits with corresponding language and behavior patterns for AI to reference and execute. For example, personality tags can be set in the AI system: "empathetic," "humorous," or "rigorous and serious," etc. If the tag is "humorous," the virtual persona will use a more relaxed tone and humorous wording when communicating; if set to "rigorous and serious," it will favor a serious expression of data and logic. By adjusting these semantic personality parameters, the system's dialogue style and behavioral tendencies become configurable and easy to understand. At the same time, the AI's personality is not static—it gradually cultivates a stable and unique style through continuous fine-tuning. When the style stabilizes, it will further influence the AI's future decision-making tendencies, becoming part of its "value self." Therefore, the virtual health persona is actually a result of "co-shaping" through long-term interaction between AI and the user: it has a pre-designed model framework, and also integrates the preferences and values that the user "taught" the AI during the interaction. Ideally, a successful digital health persona should establish a human-machine symbiotic partnership with the user: AI has medical rationality and data insight capabilities, the user provides personal preferences and final decision-making will, and the two trust and complement each other, jointly maintaining the established health Purpose. This also means that although the virtual persona has a certain degree of autonomy, it must always respect and serve the user's interests and wishes, implementing the "human-centric" principle.
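The personality-template idea above can be sketched as a mapping from semantic tags to concrete, executable style parameters. The tag names, parameter fields, and rendering rules here are invented for illustration; a production system would attach these to a dialogue model's decoding or prompting layer.

```python
# Hypothetical personality templates: semantic tags mapped to concrete
# language/behavior parameters that a dialogue layer can execute.

PERSONALITY_TEMPLATES = {
    "humorous":   {"tone": "light",  "evidence_detail": "low"},
    "rigorous":   {"tone": "formal", "evidence_detail": "high"},
    "empathetic": {"tone": "warm",   "evidence_detail": "medium"},
}

def render_advice(message, tags):
    """Apply the configured personality tags to a piece of health advice."""
    style = {}
    for tag in tags:                      # later tags override earlier ones
        style.update(PERSONALITY_TEMPLATES[tag])
    if style.get("tone") == "light":
        message += " You've got this!"
    if style.get("evidence_detail") == "high":
        message += " (Based on your last 7 days of readings.)"
    return message

print(render_advice("Try a 20-minute walk after dinner.", ["humorous"]))
# -> Try a 20-minute walk after dinner. You've got this!
```

Because the tags are explicit configuration rather than emergent model behavior, the persona's style stays inspectable and adjustable, which is the controllability and transparency property the text calls for.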
The development of modern artificial intelligence technology has laid the foundation for constructing highly realistic virtual health personas. On the one hand, large-scale pre-trained language models and generative dialogue technology enable AI to understand and generate human-like language, carrying out context-rich conversations. These models can not only answer health questions but also adjust their wording according to the user's emotions and reactions, achieving appropriate questioning and answers, and considerate behavior. Studies have shown that the performance of generative AI health assistants in some aspects is already close to or even exceeds that of ordinary humans. For example, a comparative study found that an AI consultation assistant was comparable to clinical doctors in the accuracy of classifying conditions and safety. Some studies even pointed out that a well-designed AI chatbot is sometimes more stable in empathetic expression than humans—machines can always respond patiently and empathetically, while human doctors, due to energy and emotional limitations, inevitably fluctuate in empathy. On the other hand, AI systems can capture rich state information of users through multimodal perception technology, such as speech tone reflecting emotions, cameras capturing facial expressions, and even real-time monitoring of physiological stress indicators through wearable devices. This information can be used to judge the user's psychological state, thus allowing the virtual persona to make appropriate emotional responses and guidance. For example, when it detects that the user's tone is low and heart rate variability has decreased, the digital persona may offer gentle comfort and psychological support, rather than just emphasizing medical tasks.
In fact, a mature virtual health persona should possess several key capabilities (as shown in the figure below). First, it can dynamically adjust its dialogue strategy based on patient feedback, similar to an experienced clinician probing deeper into the medical history through follow-up questioning, rather than rigidly following a consultation script. Second, it can communicate in a personalized way: choosing appropriate language complexity and communication style based on the user's cultural background, health literacy, and emotional state, so that different groups feel comfortable. Third, the virtual persona should integrate medical decision support capabilities, able to mobilize knowledge bases and clinical prediction models to provide users with scientifically reliable advice. When faced with problems beyond its capabilities, it should also be aware of its limitations and promptly suggest the user seek medical treatment or consult a professional to ensure safety. In addition, the virtual persona should be seamlessly integrated with the user's health data: this includes the continuous cumulative memory of personal historical data, as well as access to electronic medical records and the latest medical knowledge. By integrating multi-source information, AI can remember the user's past health events and naturally bring up previously discussed issues in dialogue, creating a sense of long-term companionship. Finally, since AI is not limited by physiological time, the virtual persona can be on standby 24/7, achieving continuous supervision and timely feedback for users. This always-on feature enables it to scale interactions across a wide user base—communicating with thousands of individuals simultaneously without compromising quality.
In short, the virtual health persona combines multi-dimensional capabilities such as communication, cognition, decision-making, memory, and scalability, and is a complex intelligent agent that connects technology and humanistic care.
Figure: Schematic diagram of the key capabilities of an AI health dialogue agent. A virtual health persona should possess capabilities such as adaptive questioning, personalized communication, clinical decision support, data integration and memory, and continuous availability, to achieve precise and considerate service for patients. The figure depicts the potential of generative AI health assistants to improve medical communication effectiveness through dynamic dialogue and multi-source data integration.
In summary, the virtual health persona is the bridge connecting artificial intelligence and humanistic care in the proactive medicine architecture. On the one hand, backed by AI's powerful computing and knowledge integration capabilities, it provides individuals with 24/7, personalized health guidance; on the other hand, it wins user trust with human-like interaction methods, transforming cold algorithms into warm companionship. Its semantic cognitive mechanism reflects AI's simulation of human self-awareness and personality traits: achieving self-evolution through metacognitive self-reflection, endowing value orientation through semantic tags, and showing empathy through context-awareness. It can be said that the construction of virtual health personas marks the leap of medical AI from the tool stage to the agent stage—it no longer just provides information, but begins to participate in human health behavior change in the form of a "digital persona," collaborating with humans to achieve health Purpose. In the vision of proactive medicine, everyone is expected to have such a digital health partner: it is both a rigorous medical consultant and an intimate life coach, accompanying us towards a more active and autonomous healthy future.
Individual Modeling and Human-Machine Collaboration Paths
Individualized health modeling is the basis for proactive medicine to achieve precision intervention. Facing massive, heterogeneous health data, how to transform it into a meaningful health semantic graph for a specific individual, and then support personalized decision-making, is a major challenge. The DIKWP model naturally provides a framework for this: multi-source signals collected at the data layer are processed at the information layer to extract meaningful feature patterns, the knowledge layer associates these features with medical knowledge, the wisdom layer comprehensively assesses the health state and proposes decision-making suggestions, and the entire process is dynamically calibrated under the guidance of individual health Purpose. The resulting individual DIKWP semantic graph is a knowledge network that comprehensively characterizes individual health, containing both low-level specific data points and high-level abstract semantic concepts, with the layers connected into a network through semantic relationships. For example, in a person's health graph, the data node "decrease in average daily steps" may point to the information node "insufficient exercise" through a relationship; this information node is further associated with knowledge nodes such as "decline in cardiopulmonary endurance" and "poor blood sugar control"; based on these knowledge nodes, the wisdom layer may generate a decision node such as "increase exercise," which corresponds back to the health Purpose nodes of "improve physical fitness/control blood sugar." Thus, AI has a panoramic understanding of the individual's health status: it sees both the trees (specific indicator changes) and the forest (the health meaning of multi-indicator associations).
This semantic graph actually organically links data that was traditionally scattered, such as vital signs, lifestyle, and psychological state, enabling AI to "understand" a person's overall health picture and make context-relevant judgments based on it.
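The graph fragment walked through above can be represented, at its simplest, as (source, relation, target) triples. This is a deliberately minimal sketch; the node names follow the example in the text, while the relation names and the triple store itself are illustrative assumptions (a real system would likely use a graph database or a library such as NetworkX).

```python
# Illustrative fragment of an individual DIKWP semantic graph, stored as
# (source, relation, target) triples. Relation names are hypothetical.

triples = [
    # D -> I
    ("decrease in average daily steps", "indicates", "insufficient exercise"),
    # I -> K
    ("insufficient exercise", "associated_with", "decline in cardiopulmonary endurance"),
    ("insufficient exercise", "associated_with", "poor blood sugar control"),
    # K -> W
    ("poor blood sugar control", "motivates", "increase exercise"),
    # W -> P
    ("increase exercise", "serves", "control blood sugar"),
]

def neighbors(node):
    """Follow outgoing semantic relations from one node."""
    return [(rel, dst) for src, rel, dst in triples if src == node]

# Tracing upward from a single raw-data change toward the Purpose it touches:
print(neighbors("insufficient exercise"))
```

Traversing these edges upward is exactly what lets the system connect a "twig" (one indicator change) to the "forest" (its health meaning and the Purpose it affects).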
Constructing an individual semantic graph is inseparable from the integration of multimodal semantic recognition and context-awareness technologies. Multimodal semantic recognition refers to AI extracting understandable semantic information from data of different modalities: for example, identifying emotional tendencies and symptom keywords from free-text health diaries, identifying daily activity patterns from wearable sensor data, and identifying organ function abnormalities from medical images. Early attempts, such as the symptom checker assistant developed by Babylon Health, used rule trees and Q&A forms, and their semantic understanding ability was limited, making it difficult to handle complex symptoms freely described by patients. In recent years, with the development of natural language processing and deep learning, AI can train large language models to understand medical information in unstructured text, and the accuracy of consultation and triage has approached that of doctors in some tests. For example, a study published in Frontiers in AI compared the performance of AI and human doctors in giving diagnostic suggestions based on case records, and the results showed that the AI system was comparable to ordinary doctors in clinical classification accuracy and safety. This shows that modern semantic AI is initially competent for complex language understanding and reasoning tasks in medical scenarios, laying the foundation for intelligent consultation and health consultants.
However, just understanding general medicine is not enough. Proactive health management requires AI to deeply grasp the individual's cognitive characteristics and behavioral patterns to achieve truly "personalized" intervention strategies for every kind of user. Different people show vastly different preferences and behaviors in health management: some are happy to accept detailed data explanations and scientific evidence, while others need emotional incentives to stay motivated. Some are highly self-disciplined and execute interventions according to plan, while others start enthusiastically but soon lapse, needing repeated reminders and guidance. In response to these differences, the system needs to establish a cognitive and behavioral model of the user: using historical interaction data to understand the user's preferred communication style, the psychological states they easily fall into, and typical adherence levels. For example, by analyzing the user's interaction logs with the system, it can be found whether they often interrupt a certain type of task and under what circumstances they are prone to low moods; likewise, personality traits and health beliefs can be understood through questionnaires and scales. This information helps AI to adjust intervention strategies for the individual: for users with poor adherence, the system can increase the frequency of reminders and introduce reward mechanisms to strengthen their motivation for behavior change; for those with anxiety tendencies, the system should provide more psychological comfort and explanation when giving health advice, to alleviate their concerns. This sensitivity to context and individual differences makes the proactive medicine system more intelligent. It not only "knows what," but also "knows why"—understanding why this person makes a certain choice in this situation, so as to provide targeted guidance and intervention.
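The adherence- and anxiety-driven adjustments described above can be sketched as a profile-to-plan mapping. The profile fields, plan keys, and adjustment rules here are hypothetical, invented to show the shape of the mechanism rather than any deployed system's logic.

```python
# Hypothetical user cognitive/behavioral profile driving strategy tailoring.
# Field names and adjustment rules are illustrative assumptions.

def tailor_intervention(base_plan, profile):
    plan = dict(base_plan)
    if profile.get("adherence") == "low":
        plan["reminder_frequency"] = "daily"   # reinforce motivation
        plan["rewards"] = True                 # introduce reward mechanisms
    if profile.get("anxiety") == "high":
        plan["include_reassurance"] = True     # add comfort alongside advice
        plan["explanation_detail"] = "high"    # explain why, not just what
    if profile.get("preferred_style") == "data":
        plan["show_charts"] = True             # detailed data explanations
    return plan

profile = {"adherence": "low", "anxiety": "high", "preferred_style": "data"}
plan = tailor_intervention({"reminder_frequency": "weekly"}, profile)
```

The same base medical plan thus renders differently for different users, which is the "knows why" layer: the medical content is shared, but the delivery is individual.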
In reality, the practice of integrating multimodal data and individual cognitive modeling has already begun to emerge. For example, in the "Dingbei" proactive health system jointly developed by Guangdong Second Provincial People's Hospital and Huawei, in addition to collecting users' physiological indicators, users' emotions and subjective feelings are also obtained through mobile app diaries. If the AI analysis finds that the user has recently mentioned keywords such as "insomnia" and "anxiety" multiple times, and objective indicators such as heart rate variability are abnormal, it will judge that their stress level has increased, and then add content on psychological stress reduction and sleep aid to the intervention suggestions. Another example, the Dingbei system introduces a traditional Chinese medicine (TCM) constitution questionnaire. After the user fills it out, AI judges their TCM constitution type (such as yin deficiency, qi deficiency), and integrates this as an additional dimension into the health semantic graph, thereby incorporating TCM health preservation concepts into the suggestions. This shows that the semantics of different knowledge systems can be integrated within AI: the risk models of Western medicine and the constitution models of TCM work together to generate a more comprehensive cognitive model for the individual. The resulting intervention suggestions not only have a scientific basis but also take into account the user's cultural background and personal beliefs, and are therefore more easily accepted and adhered to.
To effectively integrate symbolic knowledge, data learning, and human factors modeling, hybrid intelligence architectures are increasingly favored. On the one hand, symbolic AI (such as knowledge graphs, expert systems) is used to ensure the accuracy and explainability of medical knowledge and reasoning. On the other hand, machine learning (such as deep neural networks) is used to mine hidden patterns from massive data and improve prediction capabilities. The combination of the two can complement each other's strengths: symbolic methods provide the framework and priors, and machine learning provides flexibility and adaptability. For example, IBM Watson once tried to use a medical literature knowledge base for tumor treatment decision-making, understanding the semantics of medical records and literature through NLP technology, and proposing solutions through symbolic reasoning; Google DeepMind, etc., focused on deep learning, automatically learning complex predictive models from millions of EHRs, achieving breakthrough early prediction of acute kidney injury. Practice has also revealed the difficulties of integrating the two: although Watson has a huge knowledge base, it failed to meet expectations due to poor integration of multi-source data and the inability of models to coordinate effectively. In contrast, China, relying on platforms such as the National Health and Medical Big Data Center, has aggregated hundreds of millions of standardized case data, providing a unified and rich training basis for AI models. The "Dingbei" large model, jointly developed by Huawei and Guangdong Second Provincial People's Hospital, integrates millions of authoritative medical data and knowledge. Through retrieval enhancement algorithms, it achieves unified representation of multi-modal text, images, and videos, and is competent for full-scenario applications from interpreting physical examination reports, health Q&A to disease risk prediction. 
For example, the "Dingbei Report" sub-model can interpret the abnormal indicators of a physical examination report and generate a personalized health plan in one minute, with an accuracy rate reaching the level of a deputy chief physician; "Dingbei Kangkang" serves as a 24/7 online health consultant, semantically understanding user questions and giving professional answers. These explorations show that by integrating symbolic and connectionist AI, and combining medical and cognitive science, we are gradually approaching the goal of building an intelligent health agent that "understands both medicine and you." It has both medical professionalism and insight into the individual, and can truly play the role of a "bridge" between doctors and patients, achieving precision health management under human-machine collaboration.
In terms of human-machine collaboration, the information system of proactive medicine organically connects doctors, AI, and patients through closed-loop path generation and feedback, forming a collaborative health management path. After mastering the individual's digital twin model (body level) and virtual health persona (mind level), AI can play the role of a "navigator," automatically planning health intervention routes for patients, and implementing, monitoring, and continuously optimizing them in real life. This intelligent intervention path generation system can be analogized to GPS navigation: according to the target location (health Purpose), comprehensively considering road conditions (health data and environment) and map knowledge (medical knowledge and experience), planning several optional routes (intervention plans), then selecting the optimal path, and adjusting the route according to real-time conditions during travel. For example, for a diabetic patient, AI can simulate Path A: "adjust drug dosage + increase exercise," and Path B: "maintain current medication + strict low-carb diet," predict their respective blood sugar compliance rates and side effect risks after half a year, and select the solution with the highest comprehensive satisfaction in combination with the patient's preferences (e.g., whether they mind taking more drugs or prefer lifestyle intervention). Such path formulation is essentially an optimal decision-making problem under multiple constraints, involving a trade-off among multiple factors such as medical effectiveness, adherence, lifestyle, and economic burden. The DIKWP model clearly decomposes this process: the Data/Information layer provides the "map" of the current state and environment, the Knowledge/Wisdom layer acts as the "path algorithm" based on medical principles, and the Purpose layer sets the "destination" and priority of health improvement. 
With the help of high-precision health digital twin models, AI can preview the effects of different solutions in virtual space, which is equivalent to letting patients "try before they act". This is particularly crucial in personalized medicine—traditional treatment is mostly based on group average effects, while digital twins allow for simulation for specific individuals, thus finding the most suitable plan for this person.
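The multi-constraint path selection described above can be sketched as a weighted utility score over candidate intervention paths. The two candidates echo the diabetic-patient example in the text, but every number, weight, and field name below is an invented placeholder for illustration, not a clinical value or a prescribed scoring method.

```python
# Illustrative sketch: choosing between candidate intervention paths by a
# weighted utility score. All figures are made-up placeholders, not
# clinical recommendations.

PATHS = {
    "A: adjust dosage + more exercise": {
        "predicted_compliance": 0.78,   # simulated 6-month glycemic compliance rate
        "side_effect_risk": 0.15,       # simulated probability of side effects
        "lifestyle_burden": 0.60,       # subjective burden of the regimen (0-1)
    },
    "B: keep medication + strict low-carb diet": {
        "predicted_compliance": 0.70,
        "side_effect_risk": 0.05,
        "lifestyle_burden": 0.80,
    },
}

def path_utility(metrics, weights):
    """Higher is better: reward compliance, penalize risk and burden."""
    return (weights["compliance"] * metrics["predicted_compliance"]
            - weights["risk"] * metrics["side_effect_risk"]
            - weights["burden"] * metrics["lifestyle_burden"])

# Hypothetical patient preference: dislikes side effects more than
# lifestyle changes, so risk is weighted above burden.
prefs = {"compliance": 1.0, "risk": 0.8, "burden": 0.4}

best = max(PATHS, key=lambda name: path_utility(PATHS[name], prefs))
print(best)
```

In a real system the predicted metrics would come from digital twin simulation rather than hand-set constants, and the preference weights would be elicited from the patient through the interface.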
Once the path is selected, the system does not execute it rigidly, but achieves dynamic adjustment through closed-loop feedback. During the intervention, sensors and user reports continuously provide new data, and AI evaluates the intervention effect in real time: if a certain indicator has not improved as expected or a new abnormality appears, it corrects the route promptly. For example, if the patient's blood pressure remains high for several consecutive days, the system will prompt the doctor to consider adjusting the dosage or changing the drug; likewise, if it perceives that the user's activity is consistently below target, the virtual persona will offer encouragement at the right moment, increase the frequency of reminders, or suggest more suitable forms of exercise to improve feasibility. Doctors can also follow the patient's recent status through the summary report provided by the system and proactively intervene when needed, for example by making a follow-up call or scheduling a return visit. The patient, on their end, receives an easy-to-understand daily health summary, for example: "Walked 6000 steps today, goal reached! Blood sugar after dinner was 8.5, a bit high. Reduce staple food appropriately tomorrow." Such feedback helps the patient reflect on the day's experience and strengthen their self-management ability (gradually learning how to regulate their own health, much as a teacher guides a student through daily practice until the student masters the "wisdom of health management"); it also allows the AI to continuously refine its model of the patient's behavioral response patterns, providing a basis for subsequent path optimization.
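The escalation rules in this passage (persistently high blood pressure triggers a doctor notification, persistently low activity triggers a nudge) can be sketched as simple threshold checks over recent readings. The thresholds, window sizes, and action strings below are illustrative assumptions only.

```python
# Minimal sketch of closed-loop feedback rules: evaluate the last few
# days of readings against targets and decide whether to escalate.
# Thresholds and action labels are illustrative assumptions.

def review_blood_pressure(daily_systolic, target=135, window=3):
    """Flag for clinician review if systolic BP exceeds the target on
    each of the last `window` days; otherwise keep the current plan."""
    recent = daily_systolic[-window:]
    if len(recent) == window and all(v > target for v in recent):
        return "notify_doctor: consider adjusting medication"
    return "continue_current_plan"

def review_activity(daily_steps, goal=6000, window=5):
    """Nudge the user if average activity runs well below the goal."""
    recent = daily_steps[-window:]
    if recent and sum(recent) / len(recent) < 0.7 * goal:
        return "increase_reminders_and_suggest_easier_exercise"
    return "praise_and_keep_goal"

print(review_blood_pressure([138, 141, 144, 139]))  # high on the last 3 days
print(review_activity([3000, 3500, 4000, 3800, 3600]))
```

A production system would of course route these decisions through the doctor and learn the thresholds per patient; the point here is only the shape of the closed loop.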
Through interactive learning by both humans and machines, the intervention plan becomes more and more in line with the patient's actual situation, forming a positive cycle of AI guidance → patient practice → effect feedback → strategy update.
The path generation and feedback system driven by AI enables proactive medicine to achieve a new closed-loop paradigm of "equal emphasis on medicine and prevention." In the past, clinical treatment and daily prevention were often disconnected: doctors formulated plans in the hospital, but there was a lack of tracking on how patients executed them at home and what difficulties they encountered. The effect was only evaluated at the next follow-up visit. In the proactive medicine closed loop, the intervention plan extends to the patient's daily life, and is continuously optimized through data feedback and the intervention of professional medical opinions. In this way, treatment and prevention are integrated, and health management becomes a continuous process. It can be expected that with the further development of AI's planning capabilities and reinforcement learning technology, the future path generation system will become more intelligent and personalized. For example, AI agents may summarize the most effective intervention strategies from countless human-machine interactions through reinforcement learning; or use group digital twin models to optimize intervention paths by drawing on the successful plans of people similar to the current user. No matter how the specific technology evolves, its ultimate goal is to build a health escort system similar to "autonomous driving": under the macro-supervision of human experts (doctors), AI undertakes daily micro-adjustments and decision support, allowing everyone's health journey to reach the goal safely and efficiently.
Design Principles and Implementation of Multi-Dimensional Health Interface Systems
The implementation of proactive medicine is inseparable from a solid digital health infrastructure and good human-computer interaction design. The so-called Multi-Dimensional Health Interface System refers to the interactive hub connecting humans, AI, and various health-related devices and platforms. It covers multiple dimensions such as data collection, information presentation, and feedback control, and is associated with all aspects of the patient's physiology, psychology, and environment. Designing such a system needs to follow several key principles:
1. Seamless, Full-Time Data Perception: The system should be able to continuously acquire health-related data from multiple sources, including physiological signals, behavioral habits, environmental factors, and subjective feelings, forming 24/7 monitoring of the individual. This requires the establishment of a multimodal sensing ecosystem, with wearable, implantable, and even home environmental sensors working together, to upgrade previously scattered physical examination indicators into a continuous digital data stream. In recent years, wearable and embedded health devices have become increasingly abundant: smart bracelets and watches monitor steps, heart rate, and sleep; continuous glucose monitors record blood sugar changes; smart home blood pressure monitors upload readings at regular intervals; smartphone apps collect diet and emotion diaries. These devices cover physiological, behavioral, psychological, and other dimensions of data, making the patient's vital signs and behaviors "visible" and providing a steady stream of fuel for proactive medicine. Special attention must be paid to interoperability in the design of the perception layer: through unified data formats and interface standards, data from different devices and institutions can be integrated and shared. For example, a regional health information platform in Xiamen, China, has aggregated tens of millions of electronic medical records and broken down the data silos of 19 hospitals and community clinics through a unified format standard, supporting full life-cycle health monitoring for more than 15 years. It is precisely such standardized interfaces and interconnected architecture that lay the cornerstone of proactive health data perception. Therefore, the multi-dimensional health interface must achieve standard unification, real-time connectivity, and wide coverage in the data-perception dimension.
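The "unified data format" idea above can be sketched as a set of adapters that map each vendor's payload into one common observation record. The payload shapes and field names below are hypothetical, not an actual device API or a health-data standard.

```python
# Illustrative sketch of device interoperability: per-vendor adapters
# normalize heterogeneous payloads into one common record schema.
# Payload shapes and field names are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Observation:
    user_id: str
    metric: str      # e.g. "heart_rate", "glucose"
    value: float
    unit: str
    source: str
    timestamp: str   # ISO 8601 string

def from_wristband(payload):
    # Hypothetical wristband payload: {"uid", "hr", "ts"}
    return Observation(payload["uid"], "heart_rate", float(payload["hr"]),
                       "bpm", "wristband", payload["ts"])

def from_cgm(payload):
    # Hypothetical continuous glucose monitor payload: {"user", "mmol", "time"}
    return Observation(payload["user"], "glucose", float(payload["mmol"]),
                       "mmol/L", "cgm", payload["time"])

stream = [
    from_wristband({"uid": "u1", "hr": 72, "ts": "2025-01-01T08:00:00"}),
    from_cgm({"user": "u1", "mmol": 8.5, "time": "2025-01-01T20:30:00"}),
]
print([obs.metric for obs in stream])  # both devices now share one schema
```

In practice this normalization role is played by standards such as HL7 FHIR rather than ad-hoc dataclasses; the sketch only shows why a shared schema makes downstream analysis device-agnostic.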
2. Layered Architecture and Real-Time Response: To balance data timeliness and system load, the health sensing system generally adopts an Edge—Center layered architecture. The bottom layer consists of various front-end sensors, responsible for data collection and preliminary processing (such as filtering and simple statistics); the middle layer consists of personal terminals or home gateways (such as smartphones and home health hubs), which aggregate multi-source data and upload it in encrypted form; the top layer is the cloud health platform, which performs storage, deep analysis, and model reasoning. This architecture exploits edge computing capabilities, reduces network bandwidth pressure, and improves response speed. For example, a bracelet can locally calculate average daily steps and heart rate and upload the summary hourly, but once an abnormal heart rate is detected, it is reported in real time via the mobile phone and triggers an alarm. Through such layered buffering, the system can respond to key health changes in milliseconds while avoiding large volumes of redundant data occupying communication resources, achieving a balance between performance and timeliness. Therefore, the technical implementation of the multi-dimensional interface should fully consider edge intelligence deployment, keeping the "sensing-processing-action" closed loop as close as possible to the user, with only necessary data uploaded to the cloud, thereby improving overall efficiency and reliability.
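The edge-layer behavior just described (buffer locally, upload periodic summaries, escalate anomalies immediately) can be sketched as a small state machine. The thresholds and batch size are illustrative assumptions, not device specifications.

```python
# Sketch of an edge node in the layered architecture: readings are
# buffered locally and summarized in batches, but an out-of-range value
# is escalated at once. Thresholds and batch size are illustrative.

class EdgeHeartRateNode:
    def __init__(self, low=40, high=150, batch_size=60):
        self.low, self.high = low, high
        self.batch_size = batch_size   # e.g. one reading per minute
        self.buffer = []

    def ingest(self, bpm):
        """Return ("alert", ...) immediately for anomalies, ("summary", ...)
        when a batch completes, else None (nothing leaves the edge)."""
        if not (self.low <= bpm <= self.high):
            return ("alert", bpm)              # real-time escalation path
        self.buffer.append(bpm)
        if len(self.buffer) >= self.batch_size:
            avg = sum(self.buffer) / len(self.buffer)
            self.buffer.clear()
            return ("summary", round(avg, 1))  # periodic bulk upload
        return None

node = EdgeHeartRateNode(batch_size=3)
print(node.ingest(70))   # buffered, nothing uploaded
print(node.ingest(72))
print(node.ingest(74))   # batch full -> hourly-style summary
print(node.ingest(180))  # anomaly -> immediate alert
```

The key property is that normal data never hits the network individually, while the alert path bypasses batching entirely, which is exactly the performance/timeliness trade-off the paragraph describes.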
3. Non-invasiveness and Good User Experience: Proactive health monitoring requires sensing to be ubiquitous, but users will only keep using it if the technology is as "invisible" as possible, i.e., the physiological and psychological burden is minimized. To this end, device design must be comfortable, attractive, and easy to operate, minimizing interference with daily life. Many innovations have appeared in recent years, such as non-invasive blood glucose monitoring patches and optical blood pressure sensing: indicators that previously required blood draws or cuffs can now be measured by attaching a small sensor or wearing a watch. Another approach is integrating health sensors into everyday objects (watches, mattresses, toilets), so that users complete data collection almost without noticing. These are important means of improving user acceptance. In one typical case, a project deployed continuous glucose monitors, wearable heart rate monitors, and mobile health diaries with diabetic patients, successfully transforming information once scattered across paper manuals and recall-based consultations into a real-time digital stream; patients were almost unaware of the monitoring process, yet data integrity improved greatly. Therefore, the multi-dimensional health interface needs to achieve low intrusiveness at the human-body interface level: the hardware should be light and comfortable, and the interaction should follow natural habits, letting "monitoring" blend into life without being obtrusive. At the same time, attention must be paid to incentive mechanisms, enhancing users' willingness to cooperate through gamification, rewards, and similar means—after all, even the best sensor is useless if the user refuses to wear it.
4. Privacy, Security, and Ethical Review: The continuous collection and use of personal health data bring significant privacy and ethical challenges, which must be fully considered in the design of the interface system. On the one hand, adhere to the principles of data minimization and Purpose limitation: collect only data relevant to the health-management Purpose, and clearly inform users of that Purpose. Especially for information that can be inferred beyond its stated use (for example, a wearable's GPS can reveal a user's whereabouts, and microphones can capture ambient conversations), users must be given the right to an informed choice. On the other hand, strengthen data security measures: encrypt data from transmission through storage, de-identify key data, and implement strict access control. An excellent case is the "Dingbei" system in Guangdong, which adopts a local deployment strategy, placing AI models and databases on the hospital's intranet, avoiding transmission of sensitive health data over the public internet and effectively reducing the risk of leakage. In addition, the interface system needs built-in ethical review and user authorization mechanisms: adopt a dynamic informed consent model, promptly seeking user consent when new data types are collected or the Purpose changes, rather than treating a single authorization as "valid for life." A user-friendly privacy dashboard can be designed so users can view the authorization status of each data type at any time and switch it on or off freely. For example, when the system wants to use the user's gait data to train an AI model, or share some of their data with a doctor, the multi-dimensional interface should present a clear explanation for the user to confirm or reject, rather than defaulting to "on." Only by firmly placing the user at the center of data control can a long-term trust relationship be established.
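The dynamic-consent mechanism above can be sketched as a per-purpose grant registry that is checked before every data use, with unknown purposes denied by default until the user is asked. The data types and purpose names below are illustrative assumptions.

```python
# Sketch of dynamic informed consent: every data use is checked against
# per-purpose grants, and a new purpose requires a fresh grant rather
# than relying on a one-time "valid for life" authorization.
# Data-type and purpose names are illustrative.

class ConsentRegistry:
    def __init__(self):
        self.grants = {}   # (data_type, purpose) -> True / False

    def set_grant(self, data_type, purpose, allowed):
        self.grants[(data_type, purpose)] = allowed

    def may_use(self, data_type, purpose):
        # Default deny: an unknown purpose must prompt the user first.
        return self.grants.get((data_type, purpose), False)

consent = ConsentRegistry()
consent.set_grant("gait", "personal_health_report", True)

print(consent.may_use("gait", "personal_health_report"))  # granted
print(consent.may_use("gait", "model_training"))          # denied: ask first
consent.set_grant("gait", "model_training", True)         # user confirmed
print(consent.may_use("gait", "model_training"))          # now granted
```

The default-deny check is the essential design choice: a new Purpose can never silently inherit an old authorization.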
International guidelines on this are already clear: the World Health Organization's 2021 report stressed that medical AI must ensure transparency, explainability, and accountability, and underscored the importance of obtaining informed consent and protecting privacy. In short, the multi-dimensional health interface must have embedded privacy-protection and ethical-governance modules, carrying the "human-centric" concept through the technical implementation, so that user rights and interests are protected while intelligent services are provided.
5. Interconnection and Platform Openness: The proactive health system involves many stakeholders—individuals, hospitals, device manufacturers, data platforms, insurance companies, and more. If the interface system is closed, it will create new data silos and technological monopolies, undermining overall collaboration. Therefore, the design should adopt open standards wherever possible, encouraging the systems of different manufacturers and institutions to connect through standard APIs and data formats. Regulatory authorities can promote data portability policies to prevent any one platform from monopolizing user health data or erecting unreasonable migration barriers—for example, requiring health apps to provide convenient data export functions so users can transfer historical data to new platforms, and requiring hospital electronic medical record systems to follow national interoperability standards so doctors can obtain a complete medical history across institutions. Only when all systems "shake hands" to form an organic network can global proactive health management truly be realized; otherwise we risk repeating the fragmentation of past informatization efforts. In addition, platform governance must keep pace: platforms that provide health AI services should be subject to access and review mechanisms to avoid problems such as data misuse or algorithmic discrimination. Regulators can require platforms to accept algorithm audits and independent supervision, assessing the performance, fairness, and safety of their models in different populations. For example, the prediction bias of a cardiovascular risk AI across genders and ethnic groups could be measured regularly and the audit results published.
At the same time, emphasize result explainability: when AI provides suggestions to doctors and patients through the interface, it should, as much as possible, attach an explanation of the basis (such as abnormal indicators, relevant guideline clauses, etc.), so that users understand "why this is recommended." These measures can improve the transparency of AI decisions and enhance the confidence of doctors and patients in human-machine collaboration. In short, the multi-dimensional interface system is not only a technical connection, but also a carrier of governance connection: through open collaboration and audit mechanisms, the entire ecosystem develops healthily.
From a technical implementation perspective, the multi-dimensional health interface system can be understood as an integrated junction point of multiple subsystems. Downstream, it connects with various IoT devices to achieve data collection interfaces; upstream, it connects with cloud AI and knowledge bases to achieve analysis and decision-making interfaces; on the terminal side, it provides user interfaces (including patient app interfaces, doctor decision support interfaces, etc.) to achieve information feedback interfaces; horizontally, it connects the data and services of different institutions through standard platforms to achieve inter-system interfaces. Such multi-directional interactions converge to form a multi-dimensional interface system that runs through human-machine-environment. Each interface dimension has special design requirements: device interfaces require light weight, high efficiency, low power consumption, and stability; cloud interfaces require high throughput and security; user interfaces require ease of use and effective communication, etc. Taking the interaction interface between patients and AI as an example, in order to cater to the preferences of different groups, it can support multimodal human-computer interaction at the same time: for example, the elderly may prefer voice communication, and the system needs to have natural language understanding and speech synthesis functions; young people are accustomed to mobile app text or graphical interfaces, and the system provides interactive dashboards, chatbots, etc.; for specific situations, AR/VR visualization technology can even be introduced to intuitively superimpose health suggestions on real-world scenarios (e.g., AR glasses displaying a pill icon in the field of vision when it is time for the user to take medicine). 
Regardless of the form, the ultimate goal is to allow users to efficiently obtain and understand health information, while conveniently providing feedback and executing interventions. For example, when AI detects a rise in the user's blood pressure, it will not only push graphic and text suggestions on the mobile phone, but also give a voice reminder through a smart speaker, and ask about their feelings after the user takes measures (such as taking medicine or resting) to record the effect. This multi-channel fusion interface design ensures that users with different preferences can get timely and effective communication, truly achieving a patient-centric friendly experience.
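The multi-channel routing described above can be sketched as a small selection function that matches output channels to the user's preference profile and the message's urgency. The profile fields and channel names are illustrative assumptions, not a real notification API.

```python
# Sketch of modality selection at the user-interface layer: route the
# same health message through channels matching the user's preference
# profile. Profile fields and channel names are illustrative.

def pick_channels(profile, urgency):
    """Return the output channels for a message, most suitable first."""
    channels = []
    if profile.get("prefers_voice"):       # e.g. many elderly users
        channels.append("smart_speaker_voice")
    if profile.get("uses_app"):
        channels.append("app_card")
    if urgency == "high":
        channels += ["sms"]                # urgent alerts fan out wider
    return channels or ["sms"]             # fallback so nothing is lost

elderly = {"prefers_voice": True, "uses_app": False}
young = {"prefers_voice": False, "uses_app": True}
print(pick_channels(elderly, "normal"))   # voice first
print(pick_channels(young, "high"))       # app card plus SMS
```

A fuller version would also track which channel actually elicited a response and feed that back into the profile, closing the loop between delivery and effect.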
At the level of data analysis and decision-making algorithms, the multi-dimensional health interface system also needs to have embedded personalized modeling and closed-loop optimization algorithms. In terms of personalized modeling, the individual semantic graphs and cognitive models discussed in the previous section will be specifically implemented in the interface system: through backend big data processing and front-end real-time monitoring, a dynamically updated health model is maintained for each user. This usually involves online learning techniques in machine learning: the model will be continuously fine-tuned as new data arrives. For example, if the user's recent activity increases, the risk assessment model should lower their cardiovascular risk score; if a new symptom is detected, the diagnostic model will adjust the ranking of disease possibilities, etc. In terms of closed-loop optimization, methods such as control theory and reinforcement learning need to be implemented, so that the system can automatically correct intervention strategies based on feedback. For example, a reinforcement learning agent can be instantiated for each user, and the reward function set as the improvement of health indicators and user satisfaction, then the agent will gradually approach the optimal strategy through continuous trial and error. Or, through digital twin simulation, Monte Carlo simulations can be performed on different solutions, and the solution with the greatest expected utility can be selected for execution. The effects of all these algorithms are ultimately presented and act on the user through the interface system, so the collaboration with the interface must be considered in the design. 
For example, to make reinforcement learning more robust, artificial calibration of rewards can be obtained through user interface prompts (such as asking the user about the acceptability of a certain suggestion, and incorporating it into the reward); to make the simulation results more convincing, the expected effects of different solutions can be displayed in visual charts on the interface to help users understand the reasons for the choice. This reflects the integration of technical design and interaction design. Only when algorithms and interfaces work together—"calculate clearly" and "speak clearly"—can the multi-dimensional health interface truly play a role.
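The idea of blending objective improvement with interface-elicited acceptability into one reward can be sketched as a toy bandit-style agent over nudge strategies. The strategy names, reward weights, and epsilon value below are invented for illustration; a real system would need far more careful reward design and safety constraints.

```python
import random

# Toy sketch of reward calibration via user feedback: a bandit-style
# agent picks a nudge strategy, and the reward mixes objective indicator
# improvement with the user's stated acceptability gathered through the
# interface. Strategies and weights are illustrative assumptions.

STRATEGIES = ["gentle_reminder", "goal_challenge", "peer_comparison"]

class NudgeAgent:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.value = {s: 0.0 for s in STRATEGIES}   # running mean reward
        self.count = {s: 0 for s in STRATEGIES}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(STRATEGIES)        # explore
        return max(self.value, key=self.value.get)  # exploit

    def update(self, strategy, indicator_gain, user_acceptability):
        # Blend objective effect with interface-elicited acceptability.
        reward = 0.7 * indicator_gain + 0.3 * user_acceptability
        self.count[strategy] += 1
        n = self.count[strategy]
        self.value[strategy] += (reward - self.value[strategy]) / n

agent = NudgeAgent()
agent.update("gentle_reminder", indicator_gain=0.2, user_acceptability=0.9)
agent.update("goal_challenge", indicator_gain=0.5, user_acceptability=0.1)
print(max(agent.value, key=agent.value.get))
```

Note how the acceptability term lets a well-tolerated but modestly effective strategy outrank a more effective but disliked one, which is precisely the calibration effect the passage argues for.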
In summary, the multi-dimensional health interface system is the neural hub of proactive medicine, connecting every link of human-data-intelligence-action. Its design must coordinate technical performance (real-time, standardized, secure) with humanistic care (easy to use, private, personalized), ensuring smooth data flow where it cannot be seen and enhancing user experience where it can. In keeping with the vision of proactive medicine, this system aims to let technology blend into the medical ecosystem as quietly as a gentle rain that "moistens things silently": ubiquitous yet never obtrusive, weaving a net of intelligent health guardianship that leaves people feeling convenience and peace of mind rather than anxiety and burden.
Human—Machine—System Fusion Architecture Model
The ultimate form of proactive medicine is to build a highly integrated collaborative architecture among humans, artificial intelligence, and the medical system. In this Human—Machine—System Fusion Architecture Model, the individual (patient), the intelligent agent (AI), and the medical service system (including doctors, hospitals, public health institutions, etc.) are no longer separate elements, but become interconnected nodes in an organic whole, jointly forming a new, health-centered medical ecosystem.
This fusion architecture can be understood at both the micro and macro levels. At the micro level, it is the formation of a "human-machine symbiotic unit," that is, each individual has their own digital avatar on the proactive medicine platform, including a health digital twin and a virtual health persona, which respectively play the roles of their digital "body" and "mind." The digital twin continuously maps and updates the individual's physiological state, while the virtual persona understands and acts as an agent for the individual's health Purpose and decision-making preferences. In this way, the human (real self) and their AI agent (digital self) form a twin partnership. Within the human-machine symbiotic unit, the DIKWP architecture ensures that all behaviors of the AI are guided by the human's health goals, and the two maintain a high degree of semantic consistency. As mentioned earlier, when AI and humans share health Purpose and collaborate deeply at the semantic level, a "common self" that integrates human values and machine intelligence begins to emerge. This common self can be regarded as a virtual health subject: technology is no longer an external tool, but gradually becomes part of the human body's health self-regulation system. It not only extends human perception (obtaining, through sensors, information that was previously hard to detect), but also expands human cognition (gaining insight into complex associations and future trends through AI models), and even enhances humans' capacity to act (by automating the execution of some health management tasks). For example, a hypertensive patient's common self can "perceive" blood pressure fluctuations in real time and take measures, as if the patient had an intelligent assistant inside their body, balancing and regulating at all times to maintain a steady state.
This human-machine integrated health management model makes the individual a true proactive manager of their own health, but they are not fighting alone; they have the constant assistance and guardianship of AI.
At the macro level, tens of millions of human-machine symbiotic units are connected to the medical system through digital platforms, forming a regional or even national health wisdom network. In this network, multi-agent collaboration and information sharing will trigger profound changes in the medical model. First, patients, AI, and clinicians form a tripartite care team. Patients are no longer merely objects of treatment, but participate in decision-making and management with the assistance of AI; AI is no longer just a tool, but an enabler; doctors transform from primary executors into supervisors and solvers of complex problems. The three parties stay interconnected in real time, with the digital platform as the link: AI provides doctors with the latest dynamic summaries of patients and preliminary decision-making suggestions, doctors make high-level decisions and adjust AI strategies on that basis, and patients follow AI's guidance in daily life and feed back how they feel—jointly completing a diagnosis and treatment process that was previously driven one-way by doctors. This model improves the efficiency of medical resource utilization, allowing doctors' limited energy to be focused where it matters most (difficult cases and humanistic care), while handing repetitive monitoring and primary decision-making over to AI agents. Second, the fusion architecture makes group health management possible. By aggregating the data of all individual digital twins, public health departments can monitor group health trends in real time, promptly discover signs of epidemics or areas where major risk factors are concentrated, and deploy interventions in advance.
For example, if a city's digital network finds an abnormal increase in respiratory symptoms among a large number of citizens for several consecutive days, the system can automatically warn of the possibility of an influenza outbreak, prompting the health department to start response measures early. In normal times, big data analysis can also help optimize the allocation of medical resources, such as predicting the drug demand of a certain chronic disease patient group in the next month, and guiding the supply chain to prepare. Third, the fusion architecture brings opportunities for global learning and knowledge evolution. Each human-machine interaction instance, each piece of health data, can be used to train and improve AI models after authorization and de-identification. The model performance continuously improves as the number of users increases, which in turn better serves each user, achieving a virtuous cycle of "learning by using, using by learning." This is similar to collective intelligence: each node in the network contributes to the whole, and the whole feeds back to the individual, improving the health level of all. The academic community has already proposed the concept of "digital twins as global learning health and disease models," which is to connect data from different regions and different backgrounds through digital twins to achieve the sharing of disease prediction and prevention experience. The human-machine-system fusion architecture of proactive medicine is precisely the practical carrier of this concept.
Of course, to ensure the reliable operation of this fusion architecture, an ethical and governance framework is essential. While technology deeply penetrates medicine, it must be ensured by institutions for safety, fairness, and public interest. First is the establishment of digital health sovereignty: at the individual level, the law must clarify the individual's ownership of their own health data and digital persona. Users have the right to know on what basis AI's diagnosis and treatment suggestions for them are made, and can choose to intervene or opt out; at the national level, it is necessary to ensure that the country's health data assets and core algorithms are manageable and controllable, to prevent excessive external technical dependence or data outflow from threatening national security. China, through measures such as establishing national health and medical big data centers, has taken the lead in exploring centralized data management and utilization, achieving secure control and coordinated governance of massive health data. Second, the multi-stakeholder co-governance model needs to be implemented: patients, hospitals, enterprises, and regulatory agencies should jointly formulate and abide by the rules of the digital health platform. For example, set up recognized red lines and authorization processes for data sharing, and have independent third-party testing and evaluation of AI products for access to assess their safety and effectiveness. Third, continuous value embedding is also important: when AI participates in decision-making in the fusion architecture, medical ethics principles (such as non-maleficence, informed consent, fair distribution, etc.) must be transformed into AI's constraints and evaluation indicators. 
The WHO report particularly pointed out that AI systems must be transparent and explainable, and accountable to the professionals who use their suggestions—in other words, AI cannot become a black box for which no one can be held accountable. Its decision-making logic and basis should be reviewable, and if necessary, human doctors are responsible for the results rather than shifting the blame to the algorithm. To this end, Explainable AI (XAI) technology can be introduced to ensure that the suggestions given by AI can be traced back to certain medical knowledge or data pattern support. This is not only convenient for doctors to adopt, but also conducive to post-event analysis and correction.
When the human-machine-system fusion architecture operates smoothly, we will usher in a new medical ecosystem where technology and medicine are deeply integrated. Its landmark vision includes: medical services extending from within the hospital to being ubiquitous, health management covering from individuals to groups, and extending from the present to the future; the doctor-patient relationship expanding from one-to-one to multi-agent collaboration, with new types of trust built on data transparency and common goals; medical knowledge transforming from expert monopoly to being accessible to the public through AI, so that everyone can get professional advice to a certain extent. The ultimate significance of this transformation is: human health sovereignty is truly reflected—people are no longer bound by the information asymmetry of health, but have more initiative with the empowerment of AI; at the same time, the humanistic care of medicine is carried forward—technology undertakes tedious labor, allowing medical staff to have more time and energy to care about the emotional and humanistic needs of patients. As the advocates of proactive medicine say, this is a millennial leap in the medical paradigm: technological progress has not turned people into cold numbers, but through symbiotic integration, life ethics and digital wisdom are blended, reshaping a higher level of medical civilization.
Future Outlook and Summary
Looking ahead, the vision depicted by the proactive medicine architecture is gradually moving from ideal to reality. With the rapid development of artificial intelligence, biosensing, and digital communication, we have reason to believe that the virtual health personas and multi-dimensional health interface systems discussed in this article will become more mature and complete in the near future. At the technical level, within the next five years we may witness the emergence of more intelligent and humane health AI: systems that combine more powerful multimodal large models with deep medical knowledge and a better understanding of human emotion; that use paradigms such as federated learning to train jointly across millions of users without exposing personal data; and that are embedded in richer carriers such as smart homes and wearable patches, or appear as digital-human avatars in AR glasses. At the application level, digital health personas may expand into health scenarios beyond clinical care, such as adolescent psychological counseling robots in schools or digital companions for elderly care in communities, providing targeted support for different groups. The multi-dimensional health interface system will likewise keep extending its reach: more everyday devices will become health data sources, cities will deploy real-time health monitoring networks, and public policy will be able to intervene precisely in public health problems on the basis of big data. We can expect the information systems of proactive medicine to iterate and evolve continuously, becoming more intelligently interconnected and richer in human warmth, so that technology is truly woven into medical practice.
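The federated-learning paradigm mentioned above can be illustrated with a minimal, runnable FedAvg-style sketch: each client fits a shared model on data that never leaves the device, and only the resulting model weights are averaged centrally. Everything here (a one-dimensional linear model, three simulated clients, the learning rate and round count) is an illustrative assumption, not a production design.

```python
import random

def local_update(w, data, lr=0.1):
    """One local SGD pass on a client's private data (never shared).
    Model: simple linear regression y ~ w * x."""
    for x, y in data:
        grad = 2.0 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets, rounds=20):
    """FedAvg sketch: each round, clients train locally and only their
    updated weights (not their raw data) are averaged on the server."""
    for _ in range(rounds):
        client_ws = [local_update(global_w, d) for d in client_datasets]
        global_w = sum(client_ws) / len(client_ws)
    return global_w

random.seed(0)

def make_client(n=50):
    """Simulate one client's private samples of y = 3x + small noise."""
    data = []
    for _ in range(n):
        x = random.uniform(0.0, 1.0)
        data.append((x, 3.0 * x + random.gauss(0.0, 0.01)))
    return data

clients = [make_client() for _ in range(3)]
w = federated_average(0.0, clients)
print(f"learned slope w = {w:.3f} (true value 3.0)")
```

In a real deployment the exchanged updates would additionally be protected with secure aggregation or differential privacy, since raw weight updates can still leak information about the underlying data.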
Perhaps in the near future we will enter such an era: "Before disease arrives, health is already perceived; while risk is still budding, intervention is already in place." Everyone becomes an active guardian of their own health with the companionship of a digital health assistant, and the medical system becomes unprecedentedly agile, efficient, and patient-centered because it possesses a comprehensive, real-time health "neural network."
In summary, the construction of virtual health personas and the design of multi-dimensional health interface systems within the proactive medicine architecture provide a clear blueprint for the transformation of the 21st-century medical model. From 24/7 sensing and monitoring at the base to wisdom-guided decision-making at the top, the levels are tightly linked, feed back into one another, and coordinate multiple agents, together shaping a closed-loop ecosystem centered on health. The virtual health persona, as the digitized "health mind," gives AI genuinely human-like capacities for understanding and communication, guiding users toward better health behaviors with empathy and insight; the multi-dimensional health interface, as the bridge for human-machine interaction, breaks down the barriers between data, knowledge, and action, making continuous health management possible. In this blueprint we see both a solid foundation of existing technology (such as the popularization of wearable devices and substantial progress in artificial intelligence) and the absorption of recent research and practice (interdisciplinary advances such as digital twins, cognitive computing, and ethical AI). More importantly, the humanistic care and philosophical concepts running through it from beginning to end set it apart from previous technological innovations: proactive medicine insists on starting from human values and wishes, takes the reduction of life entropy and the promotion of life order as its measures, and integrates the development of medical technology with reflection on the meaning of life. This unity of technical feasibility, ethical desirability, and conceptual consistency makes proactive medicine not only an innovation in the medical model, but also a reconstruction of the relationship between health and life.
When the virtual health persona is integrated with each of us, and the multi-dimensional health system supports the healthy operation of society as pervasively as air and water, we will finally enter a "Proactive Era": an era in which everyone steers their own health journey more autonomously, society's medical resources are optimally allocated, and humanity's pursuit of health reaches unprecedented heights. This will be a fundamental transcendence of the traditional medical paradigm, and a shining chapter of technology empowering human well-being. We have reason to believe that, through continued exploration and practice, the vision of proactive medicine will gradually take root and benefit billions of lives.
References
· Tomašev, N., et al. (2019). A clinically applicable approach to continuous prediction of future acute kidney injury. Nature, 572(7767), 116-119.
· Ross, C. (2022). Once billed as a revolution in medicine, IBM’s Watson Health is sold off in parts. STAT News. Retrieved March 16, 2023.
· Tudor, B. H., et al. (2025). A scoping review of human digital twins in healthcare applications and usage patterns. NPJ Digital Medicine, 8(1), 587.
· World Health Organization. (2021). Ethics and governance of artificial intelligence for health.
· Hern, A. (2017). Royal Free breached UK data law in 1.6m patient deal with Google’s DeepMind. The Guardian.
· Coorey, G., et al. (2022). The health digital twin to tackle cardiovascular disease—a review of an emerging interdisciplinary field. NPJ Digital Medicine, 5(1), 126.
· Radanliev, P. (2025). Privacy, ethics, transparency, and accountability in AI systems for wearable devices. Frontiers in Digital Health, 7, 1431246.
· Azarm, M., et al. (2017). Breaking the healthcare interoperability barrier by empowering and engaging actors in the healthcare system. Procedia Computer Science, 113, 326-333.
· Baker, A., et al. (2020). A comparison of artificial intelligence and human doctors for the purpose of triage and diagnosis. Frontiers in Artificial Intelligence, 3, 543405.
· Mahajan, A., & Powell, D. (2025). Transforming healthcare delivery with conversational AI platforms. NPJ Digital Medicine, 8(1), 581.

