FDA Issues Draft Guidance for the Use of Artificial Intelligence (AI) in Medical Devices and Drug Development
Written by: Ningxi Sun
The Food and Drug Administration (FDA) has recently issued two draft guidance documents on the use of AI in medical devices and in drug and biological product development. The recommendations in these documents seek to support the safe, effective, and transparent use of AI-enabled technologies in healthcare. An accompanying press release is available on the FDA's website.
AI in Medical Devices:
The first draft guidance (Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations) introduces a Total Product Life Cycle (TPLC) approach that addresses how AI systems maintain their safety and performance over time. It provides recommendations for marketing submission content, as well as for the design, development, testing, and post-market monitoring of AI-enabled devices throughout the total product life cycle.
Key content recommendations for marketing submissions include:
- Device Description
- User Interface
- Labeling
- Risk Assessment
- Data Management
- Model Description and Development
- Performance Validation
- Device Performance Monitoring
- Cybersecurity
- Public Submission Summary
Additionally, the guidance emphasizes strategies to enhance transparency and bias control in AI-enabled devices. These recommendations aim to help developers design, develop, and maintain high-quality devices that prioritize safety and effectiveness.
According to the FDA, “[t]he guidance, if finalized, would be the first guidance to provide comprehensive recommendations for AI-enabled devices throughout the total product lifecycle, providing developers an accessible set of considerations that tie together design, development, maintenance and documentation recommendations to help ensure safety and effectiveness of AI-enabled devices.”
AI in Drug Development:
The second draft guidance (Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products) introduces a risk-based credibility assessment framework for AI models used in drug development. It emphasizes the importance of maintaining the credibility of AI model outputs throughout their lifecycle and outlines options for sponsors and other interested parties to engage with the Agency on issues related to AI model development.
The risk-based assessment framework includes seven steps:
- Step 1: Define the Question of Interest
- Step 2: Define the Context of Use for the AI Model
- Step 3: Assess the AI Model Risk
- Step 4: Develop a Plan to Establish AI Model Credibility Within the Context of Use
- Step 5: Execute the Plan
- Step 6: Document the Results of the Credibility Assessment Plan and Discuss Deviations From the Plan
- Step 7: Determine the Adequacy of the AI Model for the Context of Use
A detailed discussion of each step can be found in the corresponding section of the draft guidance.
The FDA is inviting public comments on these draft guidance documents, with a deadline of April 7, 2025.