Criticism of the European Union’s AI Act by the MedTech industry is often based on incorrect assumptions and conflicts with health and fundamental rights principles, write Hannah van Kolfschooten, Janneke van Oirschot, and Claudia Nicastro.
The European Union’s Artificial Intelligence (AI) Act is a world-first attempt to create comprehensive AI regulation, protecting both the internal market and fundamental rights. It also applies to AI medical devices. Defending its economic interests, the MedTech industry has fiercely criticised the “burdensome regulatory framework” for medical devices, including the rules in the AI Act proposal. Far from considering these rules burdensome, we argue that stronger regulation of medical AI is needed to protect health and fundamental rights, and we debunk five common MedTech myths about medical AI.
Myth 1: “We do not need the AI Act to regulate medical AI: the MDR offers enough protection for fundamental rights”
MedTech Europe argues that ‘AI-enabled medical technologies have been accessing the EU market safely for years, accelerated by the introduction of the Medical Devices Regulation.’ But MedTech Europe neglects the limited scope of application of the MDR, which uses a strict definition of medical devices and focuses on quality and safety.
Medical devices vary widely and can be anything from medical facemasks to defibrillators. However, the surge in digital health technologies, including those that use AI, is not well accounted for in the MDR. Indeed, the MDR lacks specific provisions for medical AI and does not have the explicit aim of protecting fundamental rights, which the AI Act does. Meanwhile, medical AI poses different risks to the health and fundamental rights of patients than other medical devices do. The AI Act has a broader scope of application than the MDR and sets requirements on data quality and transparency, which in turn provide stronger fundamental rights protection for patients using medical AI.
Myth 2: “More regulation will delay access to lifesaving medical devices”
Pharmaceutical companies are famous for arguing that more regulation will endanger access to “lifesaving medicines”. MedTech has often made the same claim about medical devices. MedTech Europe, for example, argues that “by bringing in another regulatory layer, the AI Act risks undermining a delicate and well-balanced regulatory environment.”
Medical AI can only be ‘lifesaving’ when it is effective and we are certain it meets high standards of quality and safety. Medical AI poses greater risks to the health and fundamental rights of patients than other medical devices, because doctors use it to make important decisions with a major impact on patients’ health. Its dependence on huge amounts of data poses risks for privacy and data protection rights, and biases in low-quality training data can lead to discrimination and injury. The AI Act’s requirements aim to guarantee better quality and safety of medical devices than the MDR regime does. Moreover, the regulatory regime for medicines is far stricter than the one for medical devices, even though medical AI may pose similar risks to health.
Myth 3: “Health and wellbeing apps do not hold risks for fundamental rights violations”
This is an assumption that underlies the new AI Act. Unfortunately, the Act misclassifies some types of medical AI: health and wellbeing apps fall under the minimal risk class, meaning they are not subject to specific requirements. This classification does not consider that such apps carry several privacy-related risks and can lead to discrimination. Take apps for post-traumatic stress symptoms, where low-quality data may lead to wrong treatment pathways and advice. Since these apps are used by vulnerable groups, wrong suggestions of this kind can be dangerous. This type of app also presents cybersecurity issues that can lead to violations of fundamental rights, for example when the app shares sensitive information with unidentified third-party marketers and advertisers who misuse the information at a later date.
Myth 4: “Safety of patients trumps protection of privacy”
The MedTech industry prioritises product safety over privacy. However, privacy is a fundamental right and should always be protected as a minimum requirement to guarantee people’s human dignity, especially for the most vulnerable groups such as older people. In elder care, for example, ‘computers are increasingly guiding decisions about elder care – and tracking everything from toilet visits to whether someone has bathed’. Since privacy is a key condition for trust in healthcare, it must remain a core value of any good healthcare system. In fact, constant surveillance changes patients’ behaviour: knowing they are being monitored, people tend to hide their real condition. Finally, people, especially older adults, are often afraid of technology, a fear that can worsen if their privacy is not guaranteed.
Myth 5: “An innovation-oriented regulatory framework will lead to high-quality healthcare”
The MedTech industry claims that focusing regulation on innovation and quality will foster better patient outcomes. What it seems to overlook is that we cannot have quality care without comprehensive protection of fundamental rights, and for that, we need the AI Act. Although there is no clear definition of quality of care, the WHO stresses the importance of protecting fundamental rights for high-quality healthcare. In particular, the right to health, understood as ‘the right to the highest attainable standard of health’, implies certain minimum human rights protection standards. It includes the right to control one’s health and body and to be free from interference, as well as the right to a system of health protection that gives everyone an equal opportunity to enjoy the highest attainable level of health. High-quality care is therefore a means to guarantee our rights, not an end in itself, and should be treated as such.
As explained above, these five myths rest on incorrect assumptions. It is clear that further regulation of medical AI is necessary to ensure better protection of patients’ rights.
By Hannah van Kolfschooten, Janneke van Oirschot and Claudia Nicastro – originally posted on HAIweb. Please cite as: Hannah van Kolfschooten, Janneke van Oirschot & Claudia Nicastro, ‘Five “Big MedTech” Myths about Medical AI Debunked’, 1 April 2023, Health Action International, available at: <https://haiweb.org/five-big-medtech-myths-about-medical-ai-debunked/>.