As someone who has followed the AI Act closely from the very beginning, particularly in the context of healthcare, the European Commission’s new health package immediately raised some questions. Concretely, the Commission proposes to move the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR) from Section A to Section B of Annex I of the Artificial Intelligence Act. The Commission justifies this change by pointing to concerns about overlaps between the AI Act and existing medical device law, arguing that parallel application could stifle innovation and delay access to new technologies (p. 26).
However, this explanation in terms of “double regulation”, “overlap”, and “parallel application” does not fully align with the original logic of the AI Act.
Under the AI Act as adopted in 2024, AI systems that are part of medical devices covered by the MDR/IVDR were automatically classified as high-risk (Annex I, Section A). This did not require a second, parallel certification track: the AI Act was designed to work within the MDR and IVDR conformity assessment system, adding AI-specific safeguards such as requirements on data quality, bias mitigation, human oversight, and post-market monitoring. These safeguards are not covered by medical device law (see my paper on AI biases in the AI Act v. MDR in the Journal of Law and the Biosciences).
Under the new proposal, AI systems that are part of medical devices covered by the MDR/IVDR would still fall under Annex I and thus be considered high-risk. However, only a small subset of the provisions for high-risk AI under Chapter III would apply. Article 2(2) of the AI Act states that for AI systems covered by the legislation listed in Section B of Annex I, only Article 6(1), Articles 102 to 109 and Article 112 apply. In practice, this means that most AI-specific obligations in Chapter III, such as requirements on risk management, data governance, human oversight, and post-market monitoring, would no longer apply to AI medical devices.
In other words, the proposal does not narrow the scope of the AI Act for healthcare, but it does narrow the obligations for providers and deployers of medical AI systems. This marks a shift away from treating healthcare as a structurally high-risk domain under the AI Act. The Commission prioritises regulatory simplification and innovation, but this choice has real consequences for AI-specific safeguards for patients.
This raises several important questions that the proposal itself does not (yet) answer:
1. Does the Commission intend to incorporate AI-specific safeguards, such as requirements on data quality, bias mitigation, and human oversight, into the MDR/IVDR framework?
The proposal is silent on bias, discrimination, and other AI-specific risks.
2. Does the Commission plan to list specific high-risk medical AI use cases elsewhere in the AI Act (for example in Annex III), so that they would still be subject to the full high-risk regime?
Again, the proposal does not explicitly address the risks associated with AI-enabled medical devices.
3. Will this proposal genuinely foster innovation?
While the proposal promises greater flexibility, it also introduces greater uncertainty. High-risk classifications for medical AI may increasingly depend on future delegated and implementing acts. This increases policy discretion for the Commission, but reduces legal predictability for patients, healthcare providers, and developers.
4. Is the strong emphasis on innovation and administrative efficiency the right regulatory choice for building a “healthier Europe”?
The proposal does not acknowledge that the MDR and IVDR were never designed to address algorithmic bias, discrimination, or population-level and systemic impacts of AI. The AI Act was meant to fill exactly that gap.
The proposal responds to real concerns about implementation of the MDR and IVDR. But it also represents a clear shift in how risks related to AI in healthcare are governed. By narrowing the application of AI-specific safeguards in a domain that directly affects patients’ health, bodies, and lives, the Commission is making a deliberate policy choice in favour of regulatory simplification. As this proposal moves into the legislative phase, the European Parliament has a crucial role to play in making sure that essential safeguards, such as protections against bias, discrimination, and unequal impacts, remain in place. Ensuring innovation in medical AI should not come at the expense of patients’ rights, health equity, and trust in Europe’s healthcare systems.
Please cite as: Hannah van Kolfschooten, ‘AI Medical Devices after the Health Package: Innovation or lack of safeguards for patients?’ (17 December 2025, hannahvankolfschooten.com).