Health protection is non-negotiable in the AI Act negotiations

A health-centric approach to the Artificial Intelligence (AI) Act is essential for the protection of health and fundamental rights of European citizens, write Hannah van Kolfschooten and Janneke van Oirschot. 

The European Commission’s proposal for an Artificial Intelligence (AI) Act has been the topic of a heated debate since its publication in April 2021. Civil society organisations believe the proposal falls short on fundamental rights protection, industry is worried it will stifle innovation, and governments fear consequences for national security. We critique the AI Act for neglecting the risks health AI poses to patients’ health and fundamental rights.

The more than 3,000 amendments to the Act tabled by political groups in the European Parliament say a lot about how controversial the regulation of AI really is. This summer, the Parliament’s co-rapporteurs will start the negotiation process with compromise amendments. Our message to MEPs, who will need to vote on the amendments, is the following: make health non-negotiable. A health-centric approach to the AI Act is essential for the protection of the health and fundamental rights of European citizens, in particular the rights to access to healthcare, non-discrimination and privacy.

AI is the simulation of human intelligence by machines. AI systems are software-based technologies that use certain data-driven approaches to solve specific problems. What all AI systems have in common is that they recognise patterns in enormous amounts of data.

AI in the health sector is not like AI in any other sector and deserves special consideration because (1) people’s health is at stake, (2) people are in a vulnerable position when in need of healthcare, (3) the collection of health data has dramatically increased in recent times and (4) health data is historically littered with bias. Because of these characteristics, health AI faces unique risks that need to be specifically addressed in the AI Act. 

Take disease outbreak surveillance as an example. Many people with flu-like symptoms use Google for self-diagnosis. AI can use this data to monitor and predict the spread of infectious diseases. This can be useful for public health officials to make decisions about infectious disease control and how to distribute care resources.

But how accurate are these AI systems when the model is based on subjective user data? Limited regulation of the quality of health AI will lead to distrust in public health and healthcare, making people hesitant to seek care. What is more, increased use and sharing of health data threatens privacy and data protection rights.
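
As a purely illustrative sketch of the problem, not a description of any real system: the toy model below, with invented numbers in the spirit of search-based flu surveillance, fits weekly case counts against search volumes and then mispredicts as soon as searching behaviour, rather than actual illness, changes.

```python
# Illustrative sketch only (all numbers invented): a toy model that estimates
# weekly flu cases from search-query volumes, in the spirit of search-based
# disease surveillance systems.
import numpy as np

# Hypothetical historical data: weekly symptom searches vs. confirmed cases.
searches = np.array([120, 150, 200, 260, 340, 300, 220, 180], dtype=float)
cases    = np.array([ 30,  40,  55,  70,  95,  85,  60,  50], dtype=float)

# Fit a simple linear model: cases ≈ a * searches + b.
a, b = np.polyfit(searches, cases, deg=1)

# Prediction for a new week with 400 searches.
print(f"Predicted cases for 400 searches: {a * 400 + b:.0f}")

# The catch raised in the text: search volume reflects subjective user
# behaviour. A media scare can inflate searches without any change in actual
# infections, so the same model now badly overestimates the outbreak.
media_scare_searches = 400 * 2.5  # panic searching, not more illness
print(f"Prediction during a media scare: {a * media_scare_searches + b:.0f}")
```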

Another example is the use of AI for medical diagnostics. AI can be used to identify skin cancer in images of skin lesions, after being trained on thousands of images of “healthy” and cancerous skin lesions. But what happens when the image datasets are non-representative, incomplete or of low quality?

Biases in the training data can lead to discrimination and individual injury or even death. Racial bias in particular may lead to incorrect diagnoses and deepen existing socio-economic inequality, something that current regulation on medical technology does not take into account. Additionally, a lack of transparency and explainability threatens patients’ rights to information and informed consent to medical treatment.
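
A minimal sketch of this failure mode, using synthetic numbers rather than real clinical data: a simple classifier “trained” on a dataset in which one skin-tone group is barely represented ends up noticeably less accurate for that group.

```python
# Illustrative sketch only (synthetic data, invented group effects): how a
# non-representative training set can degrade diagnostic accuracy for an
# underrepresented group.
import numpy as np

rng = np.random.default_rng(1)

def make_group(n, offset):
    """Synthetic 'image feature' per lesion; malignant lesions score higher,
    but the baseline differs between skin-tone groups (the 'offset')."""
    labels = rng.integers(0, 2, n)                        # 0 = benign, 1 = malignant
    feature = labels * 2.0 + offset + rng.normal(0, 0.7, n)
    return feature, labels

# Training data: 95% group A, 5% group B (non-representative).
xa, ya = make_group(950, offset=0.0)
xb, yb = make_group(50,  offset=1.0)
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Train" the simplest possible classifier: pick the threshold that maximises
# accuracy on the training data (dominated by group A).
thresholds = np.linspace(x_train.min(), x_train.max(), 200)
accs = [((x_train > t).astype(int) == y_train).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accs))]

# Evaluate separately on fresh data from each group: accuracy for the
# underrepresented group is markedly lower.
for name, offset in [("group A", 0.0), ("group B", 1.0)]:
    x, y = make_group(5000, offset)
    acc = ((x > best_t).astype(int) == y).mean()
    print(f"Accuracy on {name}: {acc:.2f}")
```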

These are just a couple of illustrations of the risks of AI usage for health, one of the most popular sectors for AI deployment in the European Union. Yet the AI Act does not specifically address health AI and does not provide solutions for its key risks. It cannot be stressed enough that health must be prioritised when MEPs negotiate their amendments over the coming months; several of the tabled amendments deserve particular support.

First, given the extensive risks involved, important AI uses in health and healthcare should be classified as high-risk, which would ensure more stringent regulatory requirements.

Second, high-risk AI should undergo a fundamental rights impact assessment that takes into account risks to human health. The technical documentation of health AI should also include an assessment of its risks to health, safety and fundamental rights.

Finally, AI systems that disadvantage groups based on health status should be prohibited completely. 

At the same time, we call on MEPs to strongly oppose amendments that remove health AI from the current list of ‘high-risk AI uses’ or add extra requirements for AI systems to be marked high-risk.

It’s high time to adopt a health-centric approach to the AI Act. It’s worth reiterating: health protection is non-negotiable in the AI Act negotiations.

By Hannah van Kolfschooten and Janneke van Oirschot – originally posted on EURACTIV. Please cite as: Hannah van Kolfschooten & Janneke van Oirschot, ‘Health protection is non-negotiable in the AI Act negotiations’, 18 August 2022, EURACTIV, available at: <https://www.euractiv.com/section/digital/opinion/health-protection-is-non-negotiable-in-the-artificial-intelligence-act-negotiations/>

Call for Action: High Time to Prioritise Health in the EU Artificial Intelligence Act

Last year, the European Commission published its long-awaited legislative proposal on artificial intelligence (AI): the Artificial Intelligence Act (AIA). Since then, a heated debate has been going on about its contents, as many civil society organisations believe the proposal falls short on fundamental rights protection. Various committees within the European Parliament have proposed amendments to the Commission’s AI proposal (e.g. ENVI and JURI). After an initial report with amendments from the two rapporteurs, it was up to the LIBE/IMCO Committee members to table amendments, and well, they did. In total, MEPs tabled the astronomical number of 3,312 amendments. We were glad to find among them multiple amendments proposed by HAI. While this is a promising first step towards bringing health into the picture, it does not mean they will make it into the final Act, as multiple negotiation and voting processes lie ahead. It is essential that the AI Act takes a more health-centric approach and that the health amendments make it into the final Act. We call on MEPs to prioritise health when negotiating their amendments over the coming months. Read below why the health sector needs special attention.

WHAT DO WE TALK ABOUT WHEN WE TALK ABOUT HEALTH-RELATED AI?

AI is the simulation of human intelligence by machines. AI systems are software-based technologies that use certain data-driven approaches to solve specific problems. In the health sector, AI has various applications. One can think of the use of AI systems to predict diagnoses and the best course of treatment, but also for the allocation of hospital beds and the best placement of ambulances. Governments can use AI systems to predict the spread of infectious diseases, and health insurers can deploy AI systems for fraud detection. What all AI systems have in common is that they recognise patterns in enormous amounts of data. The outcome of the system, e.g. the prediction or decision, depends on the data that was used for development. It is often difficult to determine how the system reached a specific outcome.
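
To illustrate this data-dependence, here is a minimal sketch (the bed-planning scenario and all numbers are invented for illustration): the same learning procedure, applied to different historical data, produces very different predictions.

```python
# Minimal sketch (invented numbers, hypothetical scenario): the same learning
# procedure gives different outputs depending entirely on the data it was
# developed on, illustrating the data-dependence of AI outcomes noted above.
import numpy as np

def fit_and_predict(attendances, admissions, new_attendances):
    """Fit a one-variable linear model and predict admissions (e.g. to plan beds)."""
    slope, intercept = np.polyfit(attendances, admissions, deg=1)
    return slope * new_attendances + intercept

# Two hospitals with different historical patterns of emergency attendances
# versus admissions; both "train" the same kind of model.
hospital_1 = (np.array([100, 200, 300, 400]), np.array([25, 45, 70, 90]))
hospital_2 = (np.array([100, 200, 300, 400]), np.array([40, 48, 52, 55]))

for name, (att, adm) in [("hospital 1", hospital_1), ("hospital 2", hospital_2)]:
    predicted = fit_and_predict(att, adm, new_attendances=500)
    print(f"Model trained on {name} data predicts {predicted:.0f} admissions")
```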

WHAT ARE THE KEY RISKS OF HEALTH-RELATED AI FOR PATIENTS AND HEALTH?

• Bias in training data may lead to discrimination and individual injury or death (e.g. racial bias may lead to incorrect diagnoses) and deepen existing socio-economic inequalities;

• Technical system errors in AI could lead to mass patient injuries because of widespread use;

• Increased use and sharing of health data threatens privacy and data protection rights of patients;

• Lack of transparency and explainability threatens patients’ rights to information and to informed consent to medical treatment;

• Lack of clarity on accountability threatens patients’ rights to legal redress and compensation;

• Lack of regulation of health-related AI may lead to a lack of trust in healthcare, affecting access to healthcare;

• Issues with cybersecurity threaten patients’ health in the case of cyberattacks on, for example, insulin pumps and pacemakers.

HOW SHOULD WE ADDRESS THESE RISKS IN THE AI ACT PROPOSAL?

Health is one of the industries that will be most affected by AI and one of the most popular sectors for AI deployment in the EU. Yet, the AIA Proposal does not specifically address the health sector and does not provide solutions for the key risks of health-related AI for the health and rights of patients.

To address these risks, we propose the following main amendments to the AIA Proposal:

  • Signal risks of AI for health, patients’ rights and access to healthcare in the preamble.
  • Adhere to the 1948 WHO definition of health: “Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.”
  • Classify all health-related AI systems as ‘high risk’ in Annex III (including those used in public health, pharmaceuticals and wellbeing).
  • Include individual rights in the AIA and mirrored corresponding obligations for providers (e.g. the right to object to automated decision-making in healthcare, the right to an effective remedy, and an obligation to conduct a fundamental rights impact assessment).
  • Extend the prohibition on social scoring to include private actors (e.g. health insurance companies).
  • Extend the prohibition on remote biometric identification to all actors (including public health authorities).

By Hannah van Kolfschooten and Janneke van Oirschot – originally posted on haiweb.org. Please cite as: Hannah van Kolfschooten & Janneke van Oirschot, ‘Call for Action: High Time to Prioritise Health in the EU Artificial Intelligence Act’, 24 June 2022, Health Action International, available at: <https://haiweb.org/prioritise-health-in-the-artificial-intelligence-act/>.