Regulatory expert Andrei Ninu joins host María Mateos in this episode of Life Sciences – In Focus to discuss one of the most complex challenges facing the medical device industry today: navigating the dual requirements of the EU Medical Device Regulation (MDR) and the newly enacted EU AI Act.
Ninu, who specializes in functional and software safety, as well as AI applications in healthcare, offered a practical and grounded perspective on what these overlapping regulations mean for manufacturers, particularly those developing AI-powered medical technologies.
MDR Meets the AI Act
For companies developing AI-driven medical devices, the combined weight of MDR and the AI Act can appear daunting. Both frameworks demand thorough documentation, quality management systems, risk assessments, and post-market surveillance strategies. However, as Ninu explained, these frameworks are not entirely distinct from one another. They share several core principles.
While the MDR is tailored specifically to medical devices, the AI Act applies more broadly across industries. The result is a situation where AI-enabled medical devices must meet the requirements of both. Yet, according to Ninu, a technical file that complies with MDR already covers much of what is needed for AI Act conformity, perhaps as much as 80 percent. This overlap offers a foundation for integrated compliance strategies, reducing duplication and making regulatory alignment more achievable.
Training Data as a Raw Material
A compelling insight shared during the conversation was the comparison of AI training data to the raw materials used in physical medical devices. Just as traditional manufacturers must demonstrate traceability, safety, and quality for their physical inputs, AI developers must apply the same rigor to the data sets that shape their models.
From lifecycle management to post-market performance monitoring, many of the operational principles under the MDR are mirrored in the AI Act. Ninu emphasized the value of building a single, integrated quality system that spans both regulatory domains, rather than attempting to create two separate tracks for compliance.
From Regulatory Concern to Controlled Acceptance
One of the more nuanced topics addressed was the regulation of adaptive AI systems, meaning systems that can update their behavior over time. Previously viewed as a certification risk because of concerns around unpredictability, these systems are now being reconsidered within a framework of controlled change.
Ninu explained that both the MDR and the AI Act now allow for a level of model adaptability, as long as it remains within clearly defined limits and is thoroughly documented. The focus has shifted from static conformity to transparent risk management, recognizing that AI systems that fail to adapt may underperform in clinical settings over time.
Where Small and Mid-Sized Medtech Companies Should Focus
For smaller manufacturers, the regulatory landscape may seem especially burdensome. Ninu advised that the most effective starting point is investing in the right expertise, beginning with a regulatory team that understands both technical development and compliance requirements.
He also underscored the importance of communication across teams. Regulatory professionals must be able to engage meaningfully with engineers and data scientists to ensure alignment and effective collaboration. When in-house capacity is limited, partnerships with academic institutions or external specialists can help fill gaps, particularly in areas like data science and AI ethics.
Data Partitioning and the Importance of Independent Testing
Ninu offered a preview of an upcoming article titled Partitioning Data for Clinical Trust, which explores how improper data splitting can distort AI performance results. Drawing from a real audit case, he described how test datasets were inadvertently compromised when rare edge cases, already seen and labeled by clinicians, were reused in performance evaluations.
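By way of illustration (not drawn from the article itself), one common way to avoid this kind of leakage is to partition data by patient rather than by individual case, so that cases from the same patient never appear on both sides of the train/test boundary. The following Python sketch uses scikit-learn's GroupShuffleSplit with entirely hypothetical data and identifiers:

```python
# Illustrative sketch: splitting a labeled dataset by patient ID so that no
# patient's cases appear in both the training and the test partitions.
# The dataset, feature shapes, and split ratio are hypothetical.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(seed=0)
n_cases = 1000
X = rng.normal(size=(n_cases, 10))                 # feature vectors (placeholder)
y = rng.integers(0, 2, size=n_cases)               # binary labels (placeholder)
patient_ids = rng.integers(0, 200, size=n_cases)   # several cases per patient

# Hold out roughly 20% of patients (not 20% of cases) for testing, so cases
# from the same patient never leak across the train/test boundary.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

# No patient should appear in both partitions.
assert set(patient_ids[train_idx]).isdisjoint(set(patient_ids[test_idx]))
```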
The issue pointed to a frequently misunderstood requirement: AI testing must remain statistically valid and independent of the data used to build the model. In contrast to traditional device testing, which is often binary (pass/fail), AI performance must be assessed across multiple metrics, such as precision, recall, and F1 score. Misinterpreting this distinction can lead to overestimated effectiveness and potential patient safety risks.
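As a simple illustration of multi-metric reporting (using hypothetical labels and predictions, not figures from the episode), scikit-learn exposes these metrics directly:

```python
# Illustrative sketch: reporting several metrics rather than a single
# pass/fail result. Labels and predictions below are hypothetical.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # ground-truth labels
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]   # model predictions

print(f"precision: {precision_score(y_true, y_pred):.2f}")  # 0.75
print(f"recall:    {recall_score(y_true, y_pred):.2f}")     # 0.75
print(f"F1 score:  {f1_score(y_true, y_pred):.2f}")         # 0.75
```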
Be Specific, Be Measurable
Regulators expect medical device manufacturers to demonstrate not only technical performance but also clinical relevance. According to Ninu, specificity in clinical claims is essential. Overly broad claims are challenging to validate and often unsupported by adequate performance metrics.
Accuracy alone is rarely sufficient. If the underlying dataset is imbalanced, for example, with a large majority of positive cases, a high accuracy rate may be misleading. In such cases, additional metrics are needed to provide a more meaningful assessment of system behavior across different patient groups and conditions.
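A minimal, hypothetical sketch of this effect: with 95 positive cases out of 100, a model that always predicts "positive" scores 95 percent accuracy while detecting none of the negatives.

```python
# Illustrative sketch (hypothetical numbers): on a heavily imbalanced dataset,
# a trivial model that always predicts "positive" looks accurate while
# completely failing to identify the minority (negative) class.
from sklearn.metrics import accuracy_score, recall_score

y_true = [1] * 95 + [0] * 5   # 95 positives, 5 negatives
y_pred = [1] * 100            # model always predicts "positive"

print(accuracy_score(y_true, y_pred))             # 0.95 -- looks strong
print(recall_score(y_true, y_pred, pos_label=0))  # 0.0  -- misses every negative
```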
Engage Early with Notified Bodies
A final recommendation from Ninu focused on the importance of early engagement with notified bodies. Rather than waiting until the final stages of product development, manufacturers should consider using structured dialogues, such as stepwise reviews with defined milestones, that allow for feedback and course correction. This approach can prevent delays, reduce rework, and bring greater predictability to the conformity assessment process.
Looking Ahead
As AI technologies become increasingly embedded in healthcare, regulatory expectations will continue to evolve. But as Ninu’s insights made clear, these changes need not be viewed as obstacles. With thoughtful integration, cross-functional collaboration, and early regulatory engagement, companies can navigate these challenges and bring safer, smarter products to market.
