Exploring the FDA and EMA Principles of Good AI Practice
In January 2026, the FDA and EMA released a short but important document: “Guiding Principles of Good AI Practice in Drug Development.”
At just two pages, the guidance is concise—but it sends a clear signal about how regulators expect artificial intelligence to be developed and governed across the drug development lifecycle.
For an industry that is rapidly experimenting with AI, the document is both reassuring and clarifying. Let’s explore several of the key concepts they highlight.
Natural Synergies with the Status Quo
For experienced industry leaders, several elements of the principles should be encouraging:
Acceptance of the AI opportunity. If the FDA’s growing portfolio of AI-enabled medical devices and its 2025 AI guidance document were not enough to convince AI skeptics of regulatory readiness, this document reinforces the message. The agencies explicitly acknowledge that AI can help modernize the drug development paradigm, promoting innovation, reducing time-to-market, strengthening pharmacovigilance, and even reducing reliance on animal testing through improved prediction of toxicity and efficacy.
Alignment with risk-based practices. The principles strongly reflect the risk-based ethos already embedded in regulatory science. This mirrors the operating philosophy used across areas such as monitoring, validation, quality systems, and clinical oversight. The industry already understands these frameworks, so the challenge ahead is extending them into AI-driven capabilities.
Leveraging established validation approaches. One of the most common questions surrounding regulated AI is: how do you validate systems that are not fully deterministic? The answer suggested by the principles is reassuringly familiar. The agencies emphasize practices already central to regulated technology environments: robust design, data governance, engineering best practices, documentation, and quality management systems, among others. These expectations are fully consistent with how technology-driven solutions are already developed and validated in regulated environments; a minimal sketch of one such approach follows this list.
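As an illustration only, and not something prescribed by the principles, one familiar way to validate a non-deterministic component is to pre-specify an acceptance criterion and test it statistically on a locked evaluation set, just as we would for any other assay with variability. The threshold, sample size, and function names below are hypothetical.

```python
import math

def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a binomial proportion
    (z = 1.96 corresponds to roughly 95% confidence)."""
    if n == 0:
        return 0.0
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = p_hat + z**2 / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (center - margin) / denom

def passes_acceptance(correct: int, total: int, required_accuracy: float = 0.90) -> bool:
    """Hypothetical pre-specified acceptance criterion: the lower confidence
    bound on accuracy, measured on a locked validation set, must meet or
    exceed the required accuracy."""
    return wilson_lower_bound(correct, total) >= required_accuracy

# Example: 470 correct predictions out of 500 held-out cases.
print(passes_acceptance(470, 500))  # True here: the lower bound (~0.916) clears 0.90
```

The point is not the specific statistic; it is that acceptance criteria, sampling plans, and documented evidence are the same validation vocabulary the industry already uses.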
This is all good news, as it further confirms what many industry insiders have been expecting.
The Role of Standards
Despite its brevity, the document references “standards” five times. Historically, US and EU regulators have relied heavily on standards to streamline their complex and safety-critical activities. Standards create a shared language that simplifies compliance and inspection for both sponsors and regulators. The challenge is that standards require stability, and stability is not a near-term feature of today’s rapidly evolving AI landscape.
Both agencies are encouraging industry collaboration to further develop and implement these principles. Could we see the formation of an organization similar to CDISC, but focused on AI standards in drug development? Perhaps.
Having been involved in the formation of CDISC, I remember firsthand how complex that effort was. I would also argue that CDISC’s data domains, which are mainly limited to information structured in research protocols and statistical analysis plans, cover a far narrower data landscape than today’s AI models and use cases. Practical industry applications of AI routinely draw on electronic medical records, genomics and molecular data, patient registries, claims and prescription data, lab results, and other real-world data (RWD) and real-world evidence (RWE). Many of AI’s most transformative applications will depend on combining these heterogeneous data environments, which introduces significant challenges for standardization.
Industry Readiness
For many industry leaders, the biggest question raised by the document is simple: is our organization actually ready?
Many life sciences companies are still pursuing AI opportunistically through pilot projects, experimentation, and point-solution tools. While these activities can generate useful experience, they do not inherently cultivate the disciplined operating environments regulators are describing. Nor do they protect the organization from the risks that arise as AI permeates every system and vendor across the enterprise.
In practice, three capability gaps tend to emerge:
Lack of an Organizing Framework. The guidance repeatedly references the importance of “context of use”, including role, scope, model risk, and quality procedures. Establishing this clarity requires an organizational framework for AI governance aligned to the company’s quality system. Many organizations have not developed such a framework and lack strategies for ensuring adherence to it.
Compliance & Inspection Evidence. The principles closely mirror areas regulators traditionally examine during inspections, including data provenance, process adherence, audit trails, data protections, and documentation. Ironically, these are often the areas that receive less attention during rapid technological exploration. In AI environments, weaknesses in these areas do more than create regulatory risk — they frequently undermine the quality, reliability, and performance of the AI solutions themselves.
AI-specific Processes and Expertise. The guidance emphasizes the need for multidisciplinary expertise that spans both AI technology and the scientific or business context in which it is used. Many organizations have yet to create formal mechanisms that support this type of AI-informed collaboration. Equally important, AI introduces new lifecycle management challenges, including model monitoring, drift detection, retraining and tuning, migration, and evolving cybersecurity risks; a minimal drift-check sketch follows this list. In practice, many companies simply do not have enough internal AI expertise to cover competencies such as engineering best practices, interpretability, explainability, performance controls, transparency, generalizability, and robustness.
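To make the monitoring and drift-detection point concrete, here is a minimal sketch of one common drift check, the Population Stability Index, comparing a production feature distribution against its training-time reference. The data, bin count, and 0.2 alert threshold are illustrative conventions, not values drawn from the guidance.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference (training-time) sample and a current (production)
    sample of a single feature; larger values indicate greater distribution shift."""
    edges = np.histogram_bin_edges(reference, bins=bins)  # bins fixed from the reference
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    eps = 1e-6  # avoid division by zero and log(0) in sparse bins
    ref_pct = np.clip(ref_counts / ref_counts.sum(), eps, None)
    cur_pct = np.clip(cur_counts / cur_counts.sum(), eps, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Illustrative usage with simulated data: flag the feature for review when PSI
# exceeds a pre-specified threshold (0.2 is a common rule of thumb, not a regulatory value).
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)
current = rng.normal(loc=0.3, scale=1.1, size=5000)  # simulated shift in production
psi = population_stability_index(reference, current)
print(f"PSI = {psi:.3f}", "-> investigate" if psi > 0.2 else "-> stable")
```

A check like this only has value inside a documented monitoring plan: who reviews the metric, how often, and what happens when it trips.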
As AI adoption accelerates, these gaps should be a call to action.
Leaning into Next
The encouraging news is that these challenges are not insurmountable. Proven governance, quality, and technology management approaches can address them—when applied deliberately and supported by the right expertise.
For leadership teams, several strategic questions become increasingly important:
Who owns our AI readiness and what are the specific goals and outcomes we are pursuing?
How are we orchestrating change management, cross-functional alignment, and standards development around AI adoption?
How are we prioritizing improvements to our quality system, data environments, and development practices so that concepts like “context of use” are reflected in daily operations (a brief illustration follows this list)?
Where should we supplement internal teams with external AI expertise to ensure regulatory and safety considerations are properly managed?
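One way to make “context of use” operational rather than aspirational is to capture it as a structured record that travels with each model through the quality system. The fields and values below are hypothetical, intended only to illustrate the kind of information the principles point to (role, scope, model risk, and governing procedures); each organization would define its own schema.

```python
from dataclasses import dataclass, field

@dataclass
class ContextOfUse:
    """Illustrative context-of-use record for an AI model; all field names
    and example values are hypothetical."""
    model_name: str
    question_of_interest: str   # the decision or analysis the model informs
    role_in_decision: str       # e.g., supportive evidence vs. sole basis
    scope: str                  # population, data sources, and setting covered
    model_risk: str             # risk tier per the organization's own framework
    governing_sops: list[str] = field(default_factory=list)
    monitoring_plan: str = ""   # how drift and performance are tracked over time

record = ContextOfUse(
    model_name="adverse-event-triage-v2",
    question_of_interest="Prioritize incoming safety reports for medical review",
    role_in_decision="Supportive; every case still receives human review",
    scope="Post-marketing spontaneous reports from EU and US sources",
    model_risk="Medium: influences timing of review, not the final assessment",
    governing_sops=["SOP-AI-001 Model Lifecycle", "SOP-QA-014 Periodic Review"],
    monitoring_plan="Monthly drift and performance report reviewed by QA",
)
```

Whether this lives in code, a registry, or a validation document matters less than the fact that it exists, is versioned, and is inspectable.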
The answers to these questions will not emerge overnight. And AI capabilities will continue to evolve rapidly for the foreseeable future. What life sciences organizations need is not a one-time solution, but a sustainable way to bring structure and discipline to what often still feels like the “Wild West” of AI.
The organizations that succeed will be those that balance innovation with governance—moving quickly while building the operational discipline regulators clearly expect.
How prepared is your organization for that shift?