Navigating the AI Revolution in Pharmacovigilance: Insights from the CIOMS WG XIV Report

  • Post published: March 31, 2026

See the full video of José Ortiz's presentation to the Peruvian Society of Pharmacovigilance below (in Spanish).

On Saturday, 21 March 2026, José Ortiz was invited by the Peruvian Society of Pharmacovigilance (SOPEFAR) to talk about the CIOMS Working Group XIV Report. The video (in Spanish) appears below, followed by a summary in English.


Summary:

The integration of Artificial Intelligence (AI) into pharmacovigilance (PV) represents a transformative shift in how we monitor and ensure the safety of medicinal products. An AI system is defined as a machine-based system that, for explicit or implicit objectives, infers how to generate outputs such as predictions, recommendations, or decisions that can influence physical or virtual environments. Within the context of PV, these technologies aim to enhance drug safety monitoring, improve patient safety, and ensure regulatory compliance by optimizing the benefit-risk profile of treatments.

A Framework for Responsible Innovation

The rapid evolution of technologies, particularly Generative AI (GenAI) and Large Language Models (LLMs), has created an urgent need for a robust ethical and operational framework. To ensure that AI deployment does not compromise patient safety, the CIOMS Working Group XIV has established seven core guiding principles that serve as a “safety net” for the industry.

  1. Risk-Based Approach: The intensity of oversight and control must be proportionate to the risk identified in a specific use case. Risk is characterized by the gravity of the consequences for humans if an error occurs and the likelihood of that error materializing.
  2. Human Oversight: Humans must remain in control to protect human autonomy and prevent unintended effects. This includes mechanisms like Human-in-the-Loop (HITL), where humans intervene in every decision cycle, and Human-on-the-Loop (HOTL), where humans monitor the system’s operation and can intervene if performance deviates (see the sketch after this list).
  3. Validity and Robustness: AI solutions must achieve their intended purpose within acceptable parameters and reliably handle variations in real-world data, which in PV is often inconsistent or incomplete.
  4. Transparency and Explainability: Organizations must disclose when and how AI is used to build trust. Explainability refers to the degree to which humans can understand the logic behind an AI system’s specific output, moving away from “black box” models where possible.
  5. Data Privacy: Protecting personal health information is a fundamental right. The use of LLMs poses new challenges, such as the potential for patient re-identification even from de-identified datasets through data linkage.
  6. Fairness and Equity: AI must not propagate or amplify biases (such as those based on race, gender, or age) present in training data, which could leave some populations underserved or produce inaccurate safety signals for specific groups.
  7. Governance and Accountability: Governance refers to the management system used to direct AI use, while accountability ensures that legal and ethical responsibility remains with the human organization, as AI systems themselves cannot be held responsible.
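
To make the first two principles concrete, here is a minimal sketch of risk-proportionate oversight. The 1-to-5 scoring scale, the thresholds, and the use-case scores are illustrative assumptions, not values taken from the CIOMS report:

```python
# Hypothetical sketch: route an AI use case to an oversight mode in
# proportion to its risk (severity x likelihood). Scales and thresholds
# are illustrative assumptions, not values from the CIOMS report.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    severity: int    # gravity of consequences if the AI errs, 1 (low) to 5 (high)
    likelihood: int  # chance of such an error materializing, 1 (rare) to 5 (frequent)

def oversight_mode(case: UseCase) -> str:
    """Return a human-oversight mode proportionate to the use case's risk."""
    risk = case.severity * case.likelihood  # simple risk-matrix score, 1..25
    if risk >= 15:
        return "HITL: human reviews every decision cycle"
    if risk >= 6:
        return "HOTL: human monitors, intervenes on deviation"
    return "Periodic audit: sampled quality checks"

for case in [
    UseCase("MedDRA auto-coding suggestion", severity=2, likelihood=3),
    UseCase("Automated seriousness triage", severity=5, likelihood=3),
]:
    print(f"{case.name} -> {oversight_mode(case)}")
```

The specific numbers matter less than the mechanism: oversight intensity is computed from scored risk, so higher-stakes use cases automatically attract tighter human control.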

Practical Applications and Use Cases

AI is already being applied across the PV lifecycle to manage the increasing volume of safety reports and complex data sources.

  • ICSR Processing: AI models are used for duplicate detection, automated coding of adverse events to MedDRA terms, and extracting data from unstructured case narratives (a duplicate-detection sketch follows this list).
  • Efficiency Gains: Pilot studies have shown that using LLMs for data extraction from source documents can result in efficiency gains of 39%, saving approximately 20 minutes of manual work per case.
  • Signal Detection: Traditional statistical methods are being augmented by machine learning to identify novel patterns, adjust for confounding factors, and analyze non-traditional sources such as social media and scientific literature (a PRR sketch follows this list).
  • Clinical Support: In clinical settings, AI assists in the early diagnosis of adverse reactions, such as detecting hydroxychloroquine retinopathy years before traditional clinical diagnosis, enabling timely intervention.
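
To give a flavor of the duplicate-detection task mentioned above, here is a minimal sketch based on fuzzy string matching. The field names, weights, and threshold are invented for illustration; production systems rely on richer probabilistic record linkage or machine-learning models:

```python
# Illustrative sketch of ICSR duplicate detection via fuzzy matching.
# Field names, weights, and the 0.8 threshold are assumptions made for
# this demonstration, not a real system's configuration.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_probable_duplicate(case_a: dict, case_b: dict, threshold: float = 0.8) -> bool:
    """Flag two ICSRs as probable duplicates when the weighted
    similarity of their key fields exceeds the threshold."""
    weights = {"patient_initials": 0.2, "drug": 0.3, "event": 0.3, "onset_date": 0.2}
    score = sum(w * similarity(case_a[f], case_b[f]) for f, w in weights.items())
    return score >= threshold

case_1 = {"patient_initials": "JM", "drug": "amoxicillin",
          "event": "rash", "onset_date": "2026-01-12"}
case_2 = {"patient_initials": "J.M.", "drug": "Amoxicillin",
          "event": "skin rash", "onset_date": "2026-01-12"}
print(is_probable_duplicate(case_1, case_2))  # True: weighted score ~0.82
```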
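
And as a baseline for the traditional statistical methods that machine learning now augments, below is a sketch of disproportionality analysis using the proportional reporting ratio (PRR); the 2x2 counts are invented for illustration:

```python
# Minimal sketch of disproportionality-based signal detection with the
# proportional reporting ratio (PRR). The 2x2 counts are invented.
import math

def prr(a: int, b: int, c: int, d: int) -> tuple[float, float, float]:
    """PRR and its approximate 95% CI from a 2x2 contingency table:
    a = reports with the drug of interest AND the event of interest
    b = reports with the drug of interest, other events
    c = reports with other drugs AND the event of interest
    d = reports with other drugs, other events
    """
    ratio = (a / (a + b)) / (c / (c + d))
    # standard-error-based confidence interval on the log scale
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(ratio) - 1.96 * se)
    hi = math.exp(math.log(ratio) + 1.96 * se)
    return ratio, lo, hi

ratio, lo, hi = prr(a=25, b=975, c=200, d=98800)
print(f"PRR = {ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# A common screening heuristic flags a signal when PRR >= 2 with at
# least 3 cases and the lower CI bound above 1.
```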

The Future: From Reaction to Prevention

The long-term vision for AI in PV is a paradigm shift from a reactive discipline focused on reporting past harms to a proactive, real-time monitoring system. Future advancements envisioned by the CIOMS report include:

  • Predictive Safety: Shifting from “warm-start” (post-approval) to “cold-start” scenarios where AI predicts potential risks during early-stage drug development before a product reaches the mass market.
  • Autonomous Expert Systems: The development of AI capable of emulating refined medical and scientific judgment to handle ambiguity and “partial truths” in fragmented data.
  • Proactive Self-Vigilance: Technologies like smart implants and nanotechnology that detect and share safety information directly with the patient or their “internal self” to correct anomalies before symptoms arise.

Conclusion

While AI offers immense potential to revolutionize drug safety, its success depends on a shift in professional roles. PV experts must develop AI literacy—learning to guide and critically evaluate machine outputs rather than just performing manual tasks. As these technologies become more autonomous, the guiding principles of ethics, transparency, and meaningful human involvement will remain the essential foundation for safeguarding public health in the digital age.

Would you like to have more information about how to implement Artificial Intelligence solutions in your organization? Contact PVpharm for further information.