Health protection is non-negotiable in the AI Act negotiations

A health-centric approach to the Artificial Intelligence (AI) Act is essential for the protection of health and fundamental rights of European citizens, write Hannah van Kolfschooten and Janneke van Oirschot. 

The European Commission’s proposal for an Artificial Intelligence (AI) Act has been the topic of heated debate since its publication in April 2021. Civil society organisations believe the proposal falls short on fundamental rights protection, industry is worried it will stifle innovation, and governments fear consequences for national security. We critique the AI Act for neglecting the risks health AI poses to patients’ health and fundamental rights.

The more than 3,000 amendments to the Act tabled by political groups in the European Parliament say a lot about how controversial the regulation of AI really is. This summer, the Parliament’s co-rapporteurs start the negotiation process with compromise amendments. Our message to MEPs, who will need to vote on the amendments, is the following: make health non-negotiable. A health-centric approach to the AI Act is essential for the protection of the health and fundamental rights of European citizens, in particular the rights to access to healthcare, non-discrimination and privacy.

AI is the simulation of human intelligence by machines. AI systems are software-based technologies that use certain data-driven approaches to solve specific problems. What all AI systems have in common is that they recognise patterns in enormous amounts of data.

AI in the health sector is not like AI in any other sector and deserves special consideration because (1) people’s health is at stake, (2) people are in a vulnerable position when in need of healthcare, (3) the collection of health data has dramatically increased in recent times and (4) health data is historically littered with bias. Because of these characteristics, health AI poses unique risks that need to be specifically addressed in the AI Act.

Take disease outbreak surveillance as an example. Many people with flu-like symptoms use Google for self-diagnosis. AI can use this data to monitor and predict the spread of infectious diseases. This can be useful for public health officials to make decisions about infectious disease control and how to distribute care resources.
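
To make this mechanism concrete, below is a minimal sketch of how such a prediction model could be built. The weekly search counts and case numbers are invented for illustration; real surveillance systems use far richer signals:

```python
# Minimal sketch: predicting weekly flu cases from search-query volume.
# All numbers below are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical weekly counts of flu-related searches and reported cases.
weekly_searches = np.array([[120], [340], [560], [910], [1500], [2300]])
reported_cases = np.array([8, 25, 40, 70, 115, 180])

model = LinearRegression().fit(weekly_searches, reported_cases)

# Forecast the case load for a new week with 3,000 flu-related searches.
# The estimate inherits every quirk of who searches, and why.
print(model.predict(np.array([[3000]])))
```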

But how accurate are these AI systems when the model is based on subjective user data? Limited regulation of the quality of health AI will breed distrust in public health and healthcare, making people hesitant to seek care. What is more, the increased use and sharing of health data threatens privacy and data protection rights.

Another example is the use of AI for medical diagnostics. AI can be used to identify skin cancer in images of skin lesions, after being trained on thousands of images of “healthy” and cancerous skin lesions. But what happens when the image datasets are non-representative, incomplete or of low quality?

Biases in the training data can lead to discrimination and individual injury or even death. Racial bias in particular may lead to incorrect diagnoses and deepen existing socio-economic inequality, something that current regulation of medical technology does not take into account. Additionally, a lack of transparency and explainability threatens patients’ rights to information and informed consent to medical treatment.
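
To make the training step concrete, here is a toy sketch in which random arrays stand in for real lesion images. It is purely illustrative; clinical systems use deep neural networks trained on curated datasets:

```python
# Toy sketch of training a binary skin-lesion classifier.
# Random arrays stand in for images; labels: 0 = healthy, 1 = cancerous.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
images = rng.random((1000, 64 * 64))    # 1,000 flattened 64x64 "images"
labels = rng.integers(0, 2, size=1000)  # hypothetical diagnoses

classifier = LogisticRegression(max_iter=1000).fit(images, labels)

# The classifier can only reproduce patterns present in its training data:
# skin tones or lesion types missing from `images` were never learned.
print(classifier.predict(images[:5]))
```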

These are just a couple of illustrations of the risks of AI for health, one of the most popular sectors for AI deployment in the European Union. Yet the AI Act does not specifically address health AI and does not provide solutions for its key risks. It cannot be stressed enough that health must be prioritised when MEPs negotiate their amendments over the coming months. Some tabled amendments deserve particular support.

First and foremost, given the extensive risks, important AI uses in health and healthcare should be classified as high-risk, which will ensure more stringent regulatory requirements.

Second, high-risk AI should undergo a fundamental rights impact assessment that takes into account risks to human health. The technical documentation of health AI should also include an assessment of its risks to health, safety and fundamental rights.

Finally, AI systems that disadvantage groups based on health status should be prohibited completely. 

Conversely, we call on MEPs to strongly oppose amendments that would remove health AI from the current list of ‘high-risk AI uses’ or add extra requirements for AI systems to be classified as high-risk.

It’s high time to adopt a health-centric approach to the AI Act. It’s worth reiterating: health protection is non-negotiable in the AI Act negotiations.

By Hannah van Kolfschooten and Janneke van Oirschot – originally posted on EURACTIV. Please cite as: Hannah van Kolfschooten & Janneke van Oirschot, ‘Health protection is non-negotiable in the AI Act negotiations’, 18 August 2022, EURACTIV, available at: <https://www.euractiv.com/section/digital/opinion/health-protection-is-non-negotiable-in-the-artificial-intelligence-act-negotiations/>

Call for Action: High Time to Prioritise Health in the EU Artificial Intelligence Act

Last year, the European Commission published its long-awaited legislative proposal on artificial intelligence (AI): the Artificial Intelligence Act (AIA). Since then, a heated debate has been going on about its contents, as many civil society organisations believe the proposal falls short on fundamental rights protection. Various committees within the European Parliament have proposed amendments to the Commission’s AI proposal (e.g. ENVI and JURI). After an initial report with amendments from the two rapporteurs, it was up to the LIBE/IMCO Committee members to table amendments, and well, they did. In total, MEPs tabled the astronomical number of 3,312 amendments. We were glad to find amongst them multiple amendments proposed by HAI. While this is a promising first step towards bringing health into the picture, it doesn’t mean that they will make the final cut, as multiple negotiation and voting processes lie ahead. It is essential that the AI Act takes a more health-centric approach and that the health amendments make it into the final Act. We call on MEPs to prioritise health when negotiating their amendments over the coming months. Read below why the health sector needs special attention.

WHAT DO WE TALK ABOUT WHEN WE TALK ABOUT HEALTH-RELATED AI?

AI is the simulation of human intelligence by machines. AI systems are software-based technologies that use certain data-driven approaches to solve specific problems. In the health sector, AI has various applications. One can think of the use of AI systems to predict diagnoses and the best course of treatment, but also for the allocation of hospital beds and the optimal placement of ambulances. Governments can use AI systems to predict the spread of infectious diseases, and health insurers can deploy AI systems for fraud detection. What all AI systems have in common is that they recognise patterns in enormous amounts of data. The outcome of the system – e.g. the prediction or decision – depends on the data that was used for development. It is often difficult to determine how the system reached a specific outcome.

WHAT ARE THE KEY RISKS OF HEALTH-RELATED AI FOR PATIENTS AND HEALTH?

• Bias in training data may lead to discrimination and individual injury or death (e.g., racial bias may lead to incorrect diagnoses) and deepen existing socio-economic inequalities;

• Technical system errors in AI could lead to mass patient injuries because of widespread use;

• Increased use and sharing of health data threatens privacy and data protection rights of patients;

• Lack of transparency and explainability threatens patients’ rights to information and to informed consent to medical treatment;

• Lack of clarity on accountability threatens patients’ rights to legal redress and compensation;

• Lack of regulation of health-related AI may lead to a lack of trust in healthcare, affecting access to healthcare;

• Issues with cybersecurity threaten patients’ health in the case of cyberattacks on, for example, insulin pumps and pacemakers.

HOW SHOULD WE ADDRESS THESE RISKS IN THE AI ACT PROPOSAL?

Health is one of the industries that will be most affected by AI and one of the most popular sectors for AI deployment in the EU. Yet, the AIA Proposal does not specifically address the health sector and does not provide solutions for the key risks of health-related AI for the health and rights of patients.

To address these risks, we propose the following main amendments to the AIA Proposal:

  • Signal risks of AI for health, patients’ rights and access to healthcare in the preamble.
  • Adhere to the 1948 WHO definition of health: “Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.”
  • Classify all health-related AI systems as ‘high risk’ in Annex III (including public health, pharmaceuticals and wellbeing).
  • Include individual rights in the AIA, with mirrored corresponding obligations for providers (e.g. the right to object to automated decision-making in healthcare, the right to an effective remedy, and an obligation to conduct a fundamental rights impact assessment).
  • Extend the prohibition on social scoring to include private actors (e.g. health insurance companies).
  • Extend the prohibition on remote biometric identification to all actors (e.g. public health authorities).

By Hannah van Kolfschooten and Janneke van Oirschot – originally posted on haiweb.org. Please cite as: Hannah van Kolfschooten & Janneke van Oirschot, ‘Call for Action: High Time to Prioritise Health in the EU Artificial Intelligence Act’, 24 June 2022, Health Action International, available at: <https://haiweb.org/prioritise-health-in-the-artificial-intelligence-act/>.

A Health-Oriented Approach to the Declaration on European Digital Rights and Principles

Hannah van Kolfschooten and Sofia Palmieri

Europe is in the midst of a digital revolution transforming many aspects of daily life. In the healthcare sector, digitalisation may change today’s values and rights. Health tech developments range from relatively simple digital solutions—such as online appointment systems and digital health records—to artificial intelligence (AI) applications—such as AI-powered automated breast cancer diagnosis systems and AI-assisted automatic seizure detection. Recognising both the potential and risks of extensive digitalisation, the European Union (EU) is pushing for a stronger and safer EU digital single market. To this end, the European Commission has proposed a Declaration on European Digital Rights and Principles. As health lawyers, we advocate for a greater focus on health protection in the EU’s digital single market strategy, especially regarding medical AI applications and automated decision-making.

In this post, we argue that while the Declaration on European Digital Rights and Principles has few direct legal effects on patients, its strong political power may better protect patients against the risks of medical technology. Given the special nature of digital patients’ rights, we propose a health-specific interpretation of the Declaration.

HOW THE DIGITAL TRANSFORMATION OF HEALTHCARE MAY NEGATIVELY AFFECT PATIENTS

In the healthcare sector, the impact of new technology is particularly profound. While there are many benefits—such as the ability to deliver tailored patient care in a timely and cost-effective manner—health tech can at the same time negatively affect health and wellbeing. The use of AI in healthcare poses several risks to patients’ health and integrity. If technical errors occur in AI software that is widely used, this could lead to mass patient injuries, for example when AI is used for medication calculations or assistive surgery robotics. Moreover, when unrepresentative or low-quality datasets are used to train AI models, the systems may reflect biases that lead to discrimination and inequality in healthcare, eventually causing health problems. For example, AI software used to predict individual cardiovascular risk may perform better for men than for women because women were underrepresented in the training dataset, which may lead to the underdiagnosis of women and an unequal distribution of care.
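
One way developers can surface such skew is to evaluate a model separately per subgroup. Below is a minimal sketch on synthetic data with invented variable names; the random labels will not show a real gap, the point is the per-group evaluation pattern:

```python
# Sketch: checking a risk model's accuracy separately for men and women.
# Synthetic data for illustration; a real audit would use clinical records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 2000
sex = rng.integers(0, 2, size=n)      # 0 = male, 1 = female
features = rng.random((n, 5))         # stand-ins for clinical variables
event = rng.integers(0, 2, size=n)    # hypothetical cardiovascular events

# Train mostly on male records to mimic an unrepresentative dataset.
train = (sex == 0) | (rng.random(n) < 0.2)
model = LogisticRegression(max_iter=1000).fit(features[train], event[train])

for code, name in [(0, "men"), (1, "women")]:
    mask = sex == code
    accuracy = accuracy_score(event[mask], model.predict(features[mask]))
    print(f"accuracy for {name}: {accuracy:.2f}")
```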

At the same time, medical AI may affect the protection of patients’ rights, such as medical privacy and informed consent. The digital transformation of health is leading to an increased exchange of data, from patients’ medical records to clinical trial results. This challenges privacy rights because of issues with the access, use and control of personal data. The need for large datasets may encourage tech companies to pressure individuals into providing personal data. The exact use and location of data may change over time, and data leaks and the possibility of re-identifying personal data may impact patients’ personal lives. Another issue arises from the opaque nature of many AI applications. It is difficult for users (health professionals) and end-users (patients) to understand how an application reached a certain conclusion. Opacity in an AI system’s decision-making process may put existing health practices and patients’ rights under pressure, such as the provision of information, informed consent and legal redress. In short, it is high time to bring health and patients’ rights into the digital age.

WHERE DOES THE EU STAND IN DIGITAL HEALTH RIGHTS?

In the field of healthcare, the EU is relatively powerless. The organisation of healthcare systems is an area of policy in which the Member States are still highly autonomous, and the EU’s power to regulate health is limited to the coordination or supplementation of Member States’ health policies and laws. At the same time, the EU’s open internal market does lead to a partially regulated but free flow of health services, patients and medicines. While the EU regulates some health products, there is no uniform patients’ rights instrument. This makes it more difficult for the EU to mitigate the risks patients face in the digital healthcare transformation. At the same time, the EU must ensure “a high level of human health protection (…) in the definition and implementation of all Union policies and activities” under Article 168 TFEU. Building on this so-called “health mainstreaming” obligation, we argue in the subsequent paragraphs that the new Declaration on European Digital Rights and Principles indirectly addresses some of the core issues patients face in the digital health environment. In that sense, the Declaration may be useful in protecting patients in the digital healthcare transition.

THE DECLARATION ON EUROPEAN DIGITAL RIGHTS AND PRINCIPLES

On 26 January 2022, the European Commission proposed that the European Parliament and the Council sign a Declaration on Digital Rights and Principles to guide the digital transformation in the EU. The Declaration, quoting European Commission President Von der Leyen, “is about who we want to be as Europeans” when it comes to digitalisation. The somewhat romantic content of the proposal on digital rights and principles perfectly matches the European Commission’s decision to opt for a non-binding “soft law” instrument like a declaration. It should be noted that declarations are not among the legal instruments listed in the EU Treaties and are rarely used. By means of a declaration, EU institutions convey their views on a specific issue, clarifying directions and desiderata.

The digital principles included in the Declaration are intended as essential concepts based on common European values and serve as guidance for a human-centred, secure, inclusive, and open digital environment where no one is left behind. Therefore, the Declaration aims to be a reference framework and serve as an interpretative guideline for pre-existing rights and values, reinterpreted in the light of digitalisation. The Commission clarifies that these principles do not affect the rights that already protect people online within the EU, nor do they affect the legitimate limits to the exercise of such legal rights.

In the next section, we propose to read the principles enshrined in the Declaration in light of the healthcare context, given the need for the protection of patients’ rights in a rapidly digitalising medical environment, especially when it comes to medical AI.

A HEALTH-ORIENTED APPROACH TO THE DECLARATION

The Declaration has a value-based aspiration, putting forward a human-centric approach to guide the EU digital transition. Drawing from the EU Treaties and the EU Charter of Fundamental Rights, the Declaration aims to be a first bulwark for rights and values in the digital sphere. The generic nature of these principle-based rights allows us to read them through the lens of the right to health and AI. Let us take a health-oriented look at the Declaration.

Starting from Chapter 1, titled “Putting people at the centre of the digital transformation”, the proposal reflects one of the main sentiments characterising the discussion on the introduction of AI in the field of health, namely that the patient should be at the centre of this transformation and should be the first to benefit from it. This starting point is thus in line with fundamental values in health, mainly human dignity. The following chapters detail principles and rights following from the first chapter, which are essential to realising the centrality of the person.

Chapter 2, by focusing on solidarity and inclusion, touches on some of the hottest topics in the ongoing reflection on health tech law: how do we make sure digital health helps rather than hurts patients—and benefits all patients? Digitalisation should be a way of increasing equality and quality of care, leaving no one behind. This includes ensuring a proper internet connection in remote areas that would benefit most from digital health, and sufficient digital skills to understand digital health tools. In this spirit, the European Commission commits to including those people who are most likely to be marginalised. To reach this goal, the Declaration recognises a sort of “right to be connected” to maximise the benefits of digital services, as well as a “right to support and continuous education” for those in need of developing digital skills.

Chapter 3 considers the individual’s freedom of choice. It centres on the statement that transparency is essential in relational dynamics with AI systems. In this sense, subjects must always be aware of whether they are interacting with an AI or a human being. In the medical sphere, this knowledge is certainly necessary to create a relationship of trust between patient and doctor, to put the patient at the centre of the therapeutic choice, and to ensure truly informed consent to medical treatment. While not made explicit in the Declaration, the transparency requirement suggests that the subject (in our case, the patient) is granted the right to opt for a fully human relationship from which AI is excluded. These rights correspond to basic patients’ rights, such as the right to informed consent to medical treatment.

While Chapters 4 (online participation) and 6 (sustainability) leave little room for a health-oriented reading, Chapter 5 is relevant for health as it deals with the principles of safety, security and empowerment. It focuses on security by design, creating a parallel with some of the requirements already seen in the AI Act that aim to protect digital services from cyberattacks. Considering the disastrous consequences of a hackable pacemaker for heart patients, one can imagine its importance for health. Even more important is the principle aimed at protecting privacy, which declares the protection of personal data and full control over the secondary use of data a fundamental principle. Medical data protection is also crucial in medical care, as it fosters trust in both the doctor and medical science.

Although this health-focused analysis of the Declaration is by no means exhaustive, it allows us to draw a preliminary conclusion: the fundamental principles the EU wants to put at the helm of its digitalisation policy are also essential in the debate on the digitalisation of health. Human-centredness, equity and privacy stand out as leading principles. These principles require an assessment of the accompanying legal framework to see whether the EU can provide adequate protection. For now, what is certain is the inspirational role of the Declaration, which, although not mentioning health explicitly, in fact sets a clear direction for digital health rights.

CONCLUSION

In sum, the strength of the Declaration is not its legal robustness—its legal powers and enforceability are limited, both because of its declaratory nature and because of the EU’s limited health competences. The power of the Declaration lies in the fact that it gives an authoritative voice to some of the core issues of digitalisation that were already highlighted in the literature but still lacked proper institutional recognition. This shows the EU’s commitment to a digital Europe guided by fundamental rights and values instead of pure economic value. Furthermore, the Declaration translates formerly abstract values into more tangible rights, which European courts could use to interpret existing fundamental rights instruments.

It is true that we cannot call the Declaration a legal revolution for healthcare, as it has few direct legal effects on patients. In line with previous EU instruments in the digital field, it does not introduce specific new patients’ rights or provide new angles to the ongoing debate on the safe integration of digital health and AI in society. However, when we analyse the Declaration’s content from a health perspective, we see that it addresses the same issues prevalent in the digital health debate. The Declaration defines a human-centric approach to guide the European digital transition—we urge that this be broadened into a health-centric approach. Following the EU’s health mainstreaming obligations, we believe that a health-centric approach to the Declaration could help equip patients with some much-needed rights in the digital health transformation. The actual effects on patients remain to be seen, but adopting a health-focused approach to the Declaration is a welcome step in the digitalisation of healthcare.

This blog was first published on The Digital Constitutionalist.

Please cite as: Hannah van Kolfschooten and Sofia Palmieri, ‘A Health-Oriented Approach to the Declaration on European Digital Rights and Principles’ (The Digital Constitutionalist, 25 May 2022). Available at <https://digi-con.org/a-health-oriented-approach-to-the-declaration-on-european-digital-rights-and-principles/>.

EU Regulation of Artificial Intelligence: Challenges for Patients’ Rights

In order to create a well-functioning internal market for Artificial Intelligence (AI) systems, the European Commission recently proposed the Artificial Intelligence Act. However, this legislative proposal pays limited attention to the health-specific risks the use of AI poses to patients’ rights. This article outlines how fundamental rights impacts associated with AI, such as discrimination, diminished privacy and opaque decision-making, are exacerbated in the context of health and may threaten the protection of foundational values and core patients’ rights. Yet, while the EU is facilitating and promoting the use and availability of AI in the health sector in Europe via the Digital Single Market, it is unclear whether it can provide the concomitant patients’ rights protection. This article theorizes the Europeanization of health AI by exploring legal challenges through a patients’ rights lens in order to determine whether the European regulatory approach to AI provides sufficient protection for patients’ rights.

In 2021, the European Commission proposed the Artificial Intelligence Act. In her recent article for Common Market Law Review, Hannah van Kolfschooten expresses concerns about this legislative proposal with regard to patients’ rights. The aim of the proposed act is to create a well-functioning internal market for Artificial Intelligence (AI) systems. However, Van Kolfschooten argues that limited attention is paid to the health-specific risks the use of AI poses to patients’ rights. She states that fundamental rights violations associated with AI are exacerbated in the context of health and may threaten the protection of patients’ rights. While the EU is facilitating and promoting the use and availability of AI in the European health sectors, it remains unclear whether it is simultaneously able to protect patients’ rights against the risks that the use of AI poses.

Please cite as: Hannah van Kolfschooten, ‘EU regulation of artificial intelligence: Challenges for patients’ rights’ (2022) 59 Common Market Law Review, Issue 1, pp. 81-112.

How the Parliament’s AIDA Report Misdiagnoses the Risks and Benefits of Health AI

In November 2021, the European Parliament’s Special Committee on AI in a Digital Age (AIDA Committee) put forward its draft report on artificial intelligence (AI) in a digital age. The report advocates for a very permissive approach to the regulation of AI to stimulate innovation and foster the competitiveness of the EU. With this, however, it understates the possible risks specific to the development and use of AI in the context of health and lacks actual solutions to translate potential into action. We identified five shortcomings and hereby appeal to the regulator to address them.

Jorge Félix Cardoso, M.D., M.A. is a Parliamentary Assistant at the European Parliament.

Hannah van Kolfschooten, LL.M. is a PhD Researcher at Law Centre for Health and Life, University of Amsterdam.

Diogo Nogueira Leite, M.Sc. is a PhD Researcher in Health Data Science at the Center for Health Technology and Services Research, University of Porto.

Tjaša Petročnik is a PhD Researcher at TILT/TILEC, Tilburg University.

Henrique Vasconcelos, M.D.

The authors write in their personal capacity.

Health is broader than healthcare

When talking about AI in or for health, we are not only talking about a cancer diagnosis or the personalisation of therapeutics in the clinic; the scope of health provision is expanding from formal healthcare systems into our smart devices, actively involving (pre-)patients and consumers. Think about increasingly AI-driven fitness apps, symptom checkers, disease management tools and so on, which have the potential to widen access to health-enhancing resources by cutting conventional healthcare ‘gatekeepers’ out of the equation.

In the report, however, these tools are seen as mere means to relieve pressure on healthcare systems, even though they might affect the health attitudes and behaviours of users or even cause serious harm if they do not perform optimally. Additionally, as health data is scarce and desirable, we can expect such consumer-facing AI tools to increasingly act as an avenue for obtaining such data. Misuse might result in privacy violations, discrimination based on health records, or increases in health inequality. AI regulation should therefore take into account that AI-based health practices also take place outside formal healthcare settings, and properly address this.

Exaggerated benefits of health AI

Without a doubt, AI applications in clinical practice and health research show significant promise. Yet, reading the AIDA report, one may get the impression that AI is already widely and successfully used in clinical settings. Although the fight against COVID-19 has indeed accelerated research into new health technologies, it has (so far) produced few robust, generalisable results. While some AI algorithms may perform with high accuracy on paper, even comparably to human specialists, they might not perform as well in real-world clinical practice. Expectations for AI in health appear inflated, as the promised transformative effects have so far mostly been confined to lab-controlled environments. Furthermore, the AIDA report seems to falsely equate more diagnoses with improved clinical outcomes. That is not always the case – overdiagnosis might even result in more harm than good. Ultimately, implementing AI technologies in health contexts requires not only accurate algorithms but also investments in care infrastructure, professionals, and resources.

Underdiagnosis of the risks for both individuals and society

While the benefits of AI appear over-diagnosed, the AIDA report seems to downplay the risks. It correctly acknowledges some of them, in particular harms to individuals’ wellbeing due to, for instance, misdiagnosis and related liability issues. But by focusing on individual risks, the report overlooks the broader societal risks and harms of AI. Unlike ‘human’ medical errors, the mistakes of AI systems can have a much larger impact: technical system errors in AI could lead to mass patient injuries because of widespread use. Additionally, AI might perform better on sub-populations that are better studied or better represented in the training datasets, reflecting existing societal biases. For example, AI systems used to aid health professionals in diagnosing skin cancers are frequently trained on freely available image databases that often do not include images of people of colour. As a result, these systems may underdiagnose people of colour. This would not only directly harm the health of individual members of (already) marginalised groups; it could also deepen existing socio-economic inequalities in terms of access to health and health outcomes. It thus seems that the report disregards these societal implications in favour of potential economic benefits.
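
A first line of defence against this failure mode is a simple demographic audit of the training data before any model is fitted. Here is a hedged sketch, using a hypothetical skin-tone annotation (`fitzpatrick_type`) that many real dermatology datasets lack:

```python
# Sketch: auditing a training set's composition before fitting a model.
# `fitzpatrick_type` is a hypothetical skin-tone annotation (types I-VI).
from collections import Counter

dataset = [
    {"image": "img001.png", "fitzpatrick_type": "II"},
    {"image": "img002.png", "fitzpatrick_type": "I"},
    {"image": "img003.png", "fitzpatrick_type": "II"},
    # ...thousands more records in a real dataset
]

counts = Counter(record["fitzpatrick_type"] for record in dataset)
total = sum(counts.values())
for tone in ["I", "II", "III", "IV", "V", "VI"]:
    n = counts.get(tone, 0)
    print(f"Fitzpatrick {tone}: {n} images ({n / total:.0%})")
# Types V and VI at 0% is exactly the red flag described above.
```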

Patients are not mere data points; they are humans with fundamental rights

The risks of AI in health go well beyond health risks. AI runs on data, but data is not just an input; it is “collected by people, from people and for people”. Data subjects and those affected by AI are human beings with fundamental rights such as dignity, non-discrimination and privacy. For instance, the report does not touch upon the black-box nature of many AI systems; a lack of understanding of how AI reaches its conclusions makes it difficult for clinicians to explain, and for patients to understand, AI’s advice, which can impair existing health practices of information provision and informed consent as an integral part of the right to health. Moreover, those seeking care are in a specific situation of vulnerability due to information asymmetry and risk aversion. Therefore, the EU’s approach to AI in health should not only ensure that health-specific fundamental rights are protected in the design, deployment and use of AI health tools, but also address citizens’ willingness to provide and curate health data and ways to govern it appropriately.

Development of AI in health requires a specific approach

Developing health solutions is different from conventional product development because it touches upon people’s bodies and lives in a way no other industry does. Introducing AI in health amplifies the need for multidisciplinary teams that bring together insights from (bio)medicine, data science, the behavioural sciences, and ethics and law. By failing to recognise this heterogeneity, and in particular the need for sector-specific regulation, the report fails to address the needs of patients, practitioners, and also developers and industry.

The AIDA report should therefore acknowledge the special nature of AI in the context of health and call for proper regulation of the risks to patients and consumers, as affected individuals, and to society. After all, health is everyone’s concern, and algorithms have already entered doctors’ offices and our living rooms. Let’s ensure that they are really here to make us healthier.

This blog was first published on Euractiv on 24 January 2022.

The First steps in EU Regulation of Artificial Intelligence: Algorithms and Patients’ Rights

The use of artificial intelligence (AI) in healthcare is becoming increasingly common. This development holds great promise for the quality and accessibility of healthcare, but it is not without risks. On the one hand, AI has its advantages: it facilitates the accurate diagnosis of patients and makes e-consultations possible. On the other hand, it brings risks for patients’ rights and privacy. Current European regulation insufficiently tackles these risks. The newly proposed Artificial Intelligence Act aims to better regulate AI in healthcare. However, the proposed Act lacks a human-centric approach: the proposal centres on companies rather than people. Although certainly not perfect, the proposed Act is a step forward towards better regulation of AI in healthcare. Given the ongoing digitalisation of healthcare and rapid developments in AI, it surely will not be the last. In her article, Hannah van Kolfschooten describes the legal developments regarding the use of AI in healthcare and the consequences for patients’ rights from a European perspective.

This article was published in the Tijdschrift voor Gezondheidsrecht in Dutch.

Please cite as Mr. H.B. van Kolfschooten, ‘Eerste stappen in Europese regulering van artificiële intelligentie: algoritmes en patiëntenrechten’, TvGR 2021, p. 381-389.

Conspicuous by its Absence: Health in the European Commission’s Artificial Intelligence Act

Earlier this year, the European Commission published its long-awaited legislative proposal on artificial intelligence (AI): the Artificial Intelligence Act. With the proposed Artificial Intelligence Act, the European Commission has taken the first steps towards uniform rules on AI in the EU. The act aims to take a balanced approach to regulating AI that ensures effective protection of fundamental rights without hindering AI’s socioeconomic benefits. The proposal fails, however, to address the health-specific challenges that AI presents.

AI technology, and particularly its machine learning techniques, can be deployed to predict the best course of action in a specific context because of its capability to recognise patterns in large datasets. AI has been heralded as holding the promise to save billions of lives by improving the quality of healthcare, reducing costs, increasing accessibility of healthcare, and anticipating health emergency threats. However, given the unfathomable “black box” nature of AI, it may also affect fundamental rights, such as the rights to non-discrimination, privacy, and access to justice. 

When AI is deployed in the context of health, patients are exposed to potential specific risks that could lead to physical or psychological harm—for instance, when racial bias in algorithms leads to incorrect diagnoses. The lack of transparency around how algorithms work also makes it difficult to provide patients with the relevant information they need to exercise their patients’ rights, such as informed consent. Plus, the dependence of AI on large amounts of (personal) data may affect medical data protection, because of patients’ limited control over use of their personal data and cyber security vulnerabilities in AI systems. All of this means that care should be taken when AI is applied in clinical or health settings, yet the proposal falls short in realising this caution.

The EU’s AI proposal takes a risk based approach to the regulation of AI: the higher the risk, the stricter the rule. Most of the requirements laid down in the act focus on “high risk” applications and include rules on transparency, accuracy, and data governance. The proposal labels AI systems used in specific areas, such as critical infrastructure, education, and law enforcement, as “high risk.” While the proposal stipulates that all devices falling under the Medical Devices Regulation (MDR) qualify as “high risk,” healthcare is nonetheless conspicuous in its absence from the list of high risk areas.

This is remarkable since healthcare forms one of the most popular sectors for AI deployment in the EU and is an inherently risky market because it deals with matters of the human body and life and death. The Commission seems to have assumed that all AI applications used in the context of health are covered by the MDR. This assumption is false: the MDR only covers medical devices and software with an intended medical purpose, such as the treatment of patients. It therefore excludes many AI applications used in the realm of health, like many fitness and health apps (for example, apps to track medication) and administrative AI systems used by doctors in a hospital or other healthcare setting. These applications may, however, still present new challenges and possible risks to people, because of their (in)direct effects on the human body or their use of sensitive health data. Mobile pregnancy apps, for example, offer AI-powered recommendations that will likely influence the reproductive health of users and process sensitive data on people’s health and life choices, yet they would not fall under the MDR and thus are not considered “high risk” under the proposed Artificial Intelligence Act.

This omission is foremost caused by the lack of a human-centric approach: the proposal centres on companies rather than people. The proposed act ignores the perspective of the “end users” or those affected by AI-powered decisions. It mainly sets rules for developers and allows companies to self-assess their conformity with the regulation, yet it does not provide “end users” with the resources to guard themselves against the detrimental effects of AI. This regulatory approach to AI disregards the vulnerability of humans exposed to AI algorithms. This is especially harmful in the health and clinical context, where people are particularly susceptible to the risks of AI because of the inherent dependency and information asymmetries in the patient-doctor relationship. In comparison, the EU’s General Data Protection Regulation does empower citizens to control how their personal information is used by granting them extensive rights.

It is true that the EU has limited legal powers to regulate in the area of healthcare, but this does not absolve the EU from its responsibility to protect the fundamental rights of people when it comes to their health. In order to adequately protect people’s rights in the context of health-related AI, the EU must empower those affected by AI systems with effective and enforceable rights. In addition, health and healthcare must be included in the list of “high risk” areas. This is the only way that Europe can fully reap the benefits of AI in health and medical science as a whole.

This blog was first published on BMJ Opinion.

Please cite: H.B. van Kolfschooten, ‘Conspicuous by its absence: health in the European Commission’s Artificial Intelligence Act’, BMJ Opinion, 30 July 2021, https://blogs.bmj.com/bmj/2021/07/30/conspicuous-by-its-absence-health-in-the-european-commissions-artificial-intelligence-act/.

Data Protection in Health Crises in the EU: Apps in the Battle against COVID-19

Mobile technology is increasingly being used to manage crisis situations, such as the corona pandemic. Apps process large quantities of personal data, which has consequences for the right to data protection. In the battle against COVID-19, millions of European citizens entrust Covid apps with their sensitive personal data. To what extent may the right to data protection be limited in light of a health crisis, and which role does the EU play in this matter? In her article, van Kolfschooten focuses on the role of data protection in crisis management in the EU and discusses to what extent the right to data protection can be limited in light of a health crisis. She argues that the right to data protection is not protected in the same way and to the same extent by all European Member States during health crises. A high and equal level of data protection for every EU citizen is therefore at stake.

[Paper published in Dutch]

In crisis situations – including the corona crisis – mobile technology is used more and more. Apps process large quantities of individuals’ personal data, with consequences for the fundamental right to data protection. This contribution addresses the role of data protection in crisis management and response in the EU and discusses to what extent the right to data protection may be limited in the context of a health crisis.

H.B. van Kolfschooten, ‘Gegevensbescherming in gezondheidscrises in de EU: apps in de strijd tegen COVID-19’, Ars Aequi 2021(70).

Privacy Harmonisation in Times of Crisis

Hannah van Kolfschooten & Bastiaan Wallage

On 25 May 2018, the General Data Protection Regulation (GDPR) became applicable. From then on, the lawfulness of processing personal data within the European Union has been harmonised. The right to personal data protection, an important part of the human right to privacy, is thus one of the few human rights that has also been elaborated in European law. The aim of the GDPR is to provide every Union citizen with a high and equivalent level of data protection. However, the plethora of mobile corona apps that have sprung up like mushrooms since March 2020 shows that European Member States do not always give the same meaning to the term ‘privacy’. The question is therefore whether the GDPR – even in times of corona crisis – can guarantee a consistent level of protection across Europe. We answer this question below.

The right to privacy and data protection is highly topical. For example, the District Court of The Hague recently ruled that the ‘fraud detection system’ known as SyRI violates the European Convention on Human Rights (ECHR), in which, among other things, the right to privacy is laid down. It is relevant, first of all, that it follows from case law that the GDPR must be interpreted in accordance with Article 8 ECHR. In line with the aforementioned judgment on SyRI, Member States would then also be expected to exercise restraint when relying on a justification or exception under the GDPR. In Covid-19 times, this does not seem to be the case so far, and the provisions of the GDPR are quickly brushed aside by Member States. Some Member States are even developing mobile applications that seriously infringe the privacy rights of their residents. These apps differ greatly from one Member State to another, and there is as yet no harmonised approach at the European level.

An example of a European corona app is the Polish home quarantine app ‘Kwarantanna domowa’. Polish citizens in compulsory home quarantine (14 days, after returning home from abroad or after contact with a corona patient) could choose between downloading this app and receiving unexpected police visits to check whether they were indeed staying at home. The app asks for a selfie at random moments. The selfie is shared with the authorities, which then use facial recognition and location data to check whether the user is indeed at home. If this is not the case, or if the user does not respond within twenty minutes, the police are alerted. It should be clear that this app has a major impact on users’ privacy.
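
For clarity, the check flow described above can be summarised in a short sketch. Everything here is a hypothetical reconstruction from the description in this post, not the app’s actual code:

```python
# Hypothetical reconstruction of the described check flow; not real app code.
from dataclasses import dataclass
from typing import Optional, Tuple

RESPONSE_WINDOW_MINUTES = 20  # time the user has to submit a selfie

@dataclass
class Selfie:
    face_id: str                   # result of facial recognition
    location: Tuple[float, float]  # (latitude, longitude)

def quarantine_check(user_face_id: str,
                     home: Tuple[float, float],
                     selfie: Optional[Selfie]) -> str:
    """Returns 'ok' or 'alert_police'; selfie is None if no response in time."""
    if selfie is None:
        return "alert_police"      # no selfie within the 20-minute window
    if selfie.face_id != user_face_id:
        return "alert_police"      # facial recognition mismatch
    if selfie.location != home:
        return "alert_police"      # user not at the registered address
    return "ok"

# Example: a compliant check.
print(quarantine_check("user42", (52.23, 21.01),
                       Selfie("user42", (52.23, 21.01))))
```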

It is doubtful whether the Polish app would stand the test of the GDPR. Although the GDPR provides an exception clause for situations such as the corona crisis, its data protection principles apply in full. Even in times of crisis, the requirements of proportionality and subsidiarity must be met. This means, among other things, that only those personal data may be processed that are necessary to achieve the objective, that the data may not be kept longer than necessary, and that security must be in order. Although the Polish app collects information for the purpose of enforcing quarantine, the personal data it requests for this purpose (such as selfies) probably go a step too far. In short, the app does not seem to meet the requirements of proportionality and subsidiarity.

Although in the heat of the corona battle there was talk in the Netherlands of using telecom data, and an ‘appathon’ was hastily organised, things have not yet gone that far there. After much criticism concerning security and the importance of privacy, the Dutch government is currently developing its own corona app, with the close involvement of the Dutch Data Protection Authority and the Netherlands Institute for Human Rights. This app is expected to be launched – on a voluntary basis – in the summer. Practices such as the Polish app have so far been meticulously avoided.

Now that the worst of the crisis has been averted, there also seems to be more interest in European cooperation. Where apps were mainly used to enforce restrictive measures at the beginning of the lockdown, the emphasis is now on ‘tracking apps’. These apps map the contacts of infected patients using Bluetooth, GPS or smartphone tracking. Several Member States have now released their own tracking apps. The European Data Protection Supervisor, however, pleads for a single European corona tracking app, pointing out the risks that the large differences between apps pose to privacy rights. Coordination at the EU level, with due regard for the rights under the GDPR and the ECHR, would lead to better privacy protection in Europe.

The corona crisis thus leads to different approaches to the right to privacy and data protection. The differences between national corona apps show that some Member States impose far-reaching restrictions on privacy. The question is whether these are proportionate and subsidiary in all cases. This is precisely what the GDPR was meant to prevent: it was supposed to harmonise this right across all Member States. A European approach to the development of corona tracking apps is therefore necessary. A crisis should not be a licence to ignore European human rights.

A Legal Perspective on Contact Tracing

Information for the early detection of health emergencies can save lives. In the COVID-19 crisis, we see that national governments in the European Union (EU) turn to widespread surveillance and contact tracing to gather information on the spread and risks of the virus. Contact tracing is the tracing of individuals and their contacts who have been exposed to a pathogen that can cause a serious (cross-border) threat to health and who are in danger of developing, or have developed, a disease. While these surveillance measures can focus on groups of people, contact tracing often directly affects individuals. In light of this conundrum, the European Commission has published a toolbox for contact tracing and interoperability between Member States. In this guidance, privacy is a key concern, including the use of contact tracing in a proportionate manner. Yet there is no guidance as to what proportionality might mean and how we should go about assessing it.

Contact tracing of individuals across the EU in the context of COVID-19 therefore raises the question of how privacy can be protected when a disease rises to the level of a threat to security. In 2013, the EU was given a larger role in pandemic responses, as formalized in the Health Threats Decision. As a consequence, contact tracing is no longer a primarily national responsibility, as information is exchanged at EU level. In the COVID-19 crisis, the practice of exchanging personal (health) data between the Member States for the purpose of contact tracing gains importance. However, in the field of EU health law there is only limited guidance as to how public health can be safeguarded in a proportionate manner.

In this article, we analyze EU case law in the field of security for guidance in determining the proportionality of limitations on privacy in light of a public health threat. Proportionality in this regard implies that the measures – in this case contact tracing – do not go further than what is needed in light of the public policy aims. In the case of COVID-19, the aim of protecting public health has risen to such a level of threat that it can also be seen as a risk to security. However, important differences are also noteworthy and caution against a simple one-to-one comparison between security and public health. In the field of security, policymakers often work within a friend-enemy dichotomy, whereas in the field of public health the patient is seen as vulnerable and in need of assistance rather than as the enemy.