Artificial intelligence (AI) is already being used to read scans, support diagnoses, and guide treatment decisions. Yet just as these systems are entering everyday clinical practice, Europe may weaken the rules meant to keep them safe. In March 2026, the European Parliament adopted its position on a “digital omnibus” reform that aims to simplify the AI Act and reduce burdens for industry. Among other changes, it proposes to delay the application of rules for high-risk AI systems and to make those rules less stringent where sector-specific legislation, such as medical device law, already applies. For medical AI, this means that systems remain labelled as “high-risk” under the AI Act while no longer being subject to many of the safeguards originally designed to accompany that label. What is presented as technical simplification is, in practice, a significant shift in how AI in European healthcare will be regulated, and this deregulation carries serious consequences for the health and rights of patients.
A key driver behind this reform is industry pressure to avoid what is often described as “double regulation” of AI medical devices. Medtech companies argue that complying with both the AI Act and existing medical device legislation creates duplication and slows innovation. It is true that Europe’s medical device system is already complex and burdensome, but this argument misstates the problem. The AI Act was never meant to create a separate approval track for medical AI, but was designed to work alongside the Medical Devices Regulation and the In Vitro Diagnostic Medical Devices Regulation, allowing requirements to be assessed through one integrated conformity process. The AI Act adds safeguards for risks that traditional product law does not capture: biased data, lack of transparency, weak human oversight, and harmful effects that only emerge in real clinical use. Removing those safeguards does not necessarily simplify the regulatory procedure for AI medical devices, but it does put patients at risk.
The timing of this reform makes it particularly concerning, as AI is no longer a future scenario in healthcare but part of everyday clinical practice. Hospitals across Europe are using AI systems to analyse medical images and support diagnostic and treatment decisions. Yet the proposed changes would delay or dilute the safeguards governing these systems, potentially pushing meaningful protections years into the future or making them dependent on technical standards that take time to develop. With hospitals under increasing pressure to digitalise and adopt AI tools, this creates a growing gap between the speed of technological uptake and the strength of legal protection. In healthcare, legal safeguards are a precondition for the safe use of technology.
Equally important is what happens after AI systems enter clinical practice. Many of the most significant risks do not arise at the moment of market approval, but in real-world use. A system that performs well in testing may lead to overreliance in hospital settings, or may systematically underperform for certain patient groups, for example because training data did not adequately represent them. Safety thus depends not only on how AI is designed but also on how it is used in practice: whether healthcare professionals are trained to interpret outputs critically, whether systems are monitored, and whether responsibility is clearly defined. The current AI Act recognises this by placing obligations not only on developers but also on deployers, including hospitals and healthcare providers. The proposed reform removes these obligations, leaving healthcare professionals without clear guidance and shifting responsibility into a fragmented landscape of national rules and professional standards. This will result in uneven protection across Europe and less clarity about accountability when harm occurs, undermining both patient safety and professional confidence.
Finally, the claim that these changes will simplify regulation and support innovation is not convincing. Much of the complexity in Europe’s medical device system stems from structural issues such as limited regulatory capacity and overlapping procedures, not from AI-specific safeguards. Removing those safeguards does not resolve these underlying problems; it merely shifts complexity elsewhere. Instead of a coherent EU-wide framework, safeguards for medical AI may be reintroduced piecemeal through fragmented guidance, delegated acts, and sector-specific rules, creating legal uncertainty for companies and healthcare providers alike. In a field as sensitive as healthcare, such uncertainty undermines trust and will slow the adoption of AI. A regulatory framework that labels medical AI as “high-risk” while stripping away many of the obligations attached to that label creates the appearance of safety without ensuring it. This is not simplification, but deregulation in one of the most sensitive areas of AI use. The better solution is not to strip out AI-specific safeguards, but to simplify procedures while keeping the substance intact: align documentation, improve guidance, strengthen the capacity of Notified Bodies, and preserve clear obligations for both developers and healthcare institutions.
As the proposal now moves into trilogue negotiations, the European Commission, Parliament and Council should reject any reform that weakens safeguards for medical AI. If Europe wants trustworthy medical AI, it should simplify procedures where needed, but preserve the safeguards on data quality, human oversight, monitoring and accountability that patients and healthcare professionals need.
[published in German in Der Tagesspiegel]
Please cite as: Hannah van Kolfschooten, Krankenhäuser sind auf die KI-Sicherheitsvorkehrungen angewiesen (Der Tagesspiegel, 22 April 2026) https://background.tagesspiegel.de/digitalisierung-und-ki/briefing/krankenhaeuser-sind-auf-die-ki-sicherheitsvorkehrungen-angewiesen

