Hannah van Kolfschooten and Carmel Shachar
Abstract
The Council of Europe, Europe’s most important human rights organization, is developing a legally binding instrument for the development, design, and application of AI systems. This “Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law” (AI Convention) aims to protect fundamental rights against the harms of AI and may become the first legally binding international treaty on AI. In this article, we highlight the implications of the proposed AI Convention for the health and human rights protection of patients. We praise the following characteristics: (1) global regulation of a technology that easily crosses jurisdictions; (2) the human rights-based approach, including a human rights impact assessment; (3) the actor-neutral, full-lifecycle approach; and (4) the creation of enforceable rights through the European Court of Human Rights. We signal the following challenges: (1) the sector-neutral approach; (2) the lack of reflection on new human rights; (3) definitional issues; and (4) the process of global negotiations. We conclude that it is important for the Council of Europe not to compromise on the wide scope of application and the rights-based character of the proposed AI Convention.
1. Introduction
In the medical field, physicians, patients, and tech developers are calling for regulation of Artificial Intelligence (AI). There are questions and concerns about doctors using ChatGPT,[1] the liability of increasingly autonomous surgical systems,[2] and the persistent racial biases exhibited by medical AI systems.[3] Worldwide, legislators are rushing to regulate AI, while new applications keep emerging at unprecedented speed.[4] On the European continent, multiple regional legislative instruments are being negotiated in parallel. In June 2023, the European Parliament adopted its negotiating position on the EU Artificial Intelligence Act (AI Act). Concurrently, the Council of Europe (CoE), Europe’s most important human rights organization, is developing a legally binding instrument for the development, design, and application of AI systems: the “Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law” (AI Convention).[5] The Convention will apply to the medical context.[5]
Unlike the AI Act – which applies only to the 27 EU Member States – the AI Convention has the potential to become the first legally binding international treaty on AI. We argue that, with its clear focus on fundamental rights protection, the AI Convention has the potential to fill the existing regulatory gaps in the protection of patients against the harms of medical AI. We first briefly outline the challenges posed by medical AI. Then we explain how the AI Convention differs from the AI Act and provide an overview of the current text of the AI Convention. Subsequently, we highlight the most important implications for the health and fundamental rights protection of patients. We conclude with recommendations on how to strengthen the protection of health in the ongoing legislative drafting of the AI Convention.
2. The need for AI governance in healthcare
AI has widely recognized potential for improving healthcare. However, it also poses significant risks to the protection of the health and human rights of patients. Medical AI depends on enormous amounts of high-quality data. Such data are difficult to obtain, and many datasets contain inaccuracies (e.g. errors in medical records) and biases (e.g. underrepresentation of certain patients in datasets). This can lead to discrimination and/or medical errors programmed into the algorithm, causing physical or mental harm, especially for already marginalized groups. The increasing use of AI in healthcare may challenge access to healthcare because of patients’ lack of trust in medical AI,[6] lack of digital literacy (e.g. among older patients), and lack of access to digital tools.[7] Medical AI also risks unlawful collection and use of sensitive personal data and may implicate cybersecurity, data protection, and privacy rights. “Black box” medical AI complicates existing informed consent procedures[8] because of its opacity and complexity for patients, affecting patient autonomy and physical integrity. At the same time, AI blurs the lines of responsibility and accountability, creating confusion about liability and compensation procedures in the event of harm.[9]
3. Background, purpose, and outline of the AI Convention
The CoE has been guarding the human rights of people on the European continent since 1949. It has 46 member states (including all 27 EU member states and countries such as the United Kingdom, Turkey, and Switzerland), 5 Council observers (the United States (US), Canada, Japan, Mexico, and the Holy See), and 3 assembly observers (the parliaments of Canada, Israel, and Mexico can participate in assembly meetings). In the last decade, the CoE has issued numerous non-binding legal instruments on AI (e.g. guidelines and recommendations). In 2019, an Ad Hoc Committee on Artificial Intelligence (CAHAI) was established to assess the need for a legally binding AI convention. Since January 2022, a new Committee on Artificial Intelligence (CAI), consisting of member states, observer states (e.g. the US and Japan), and representatives of the private sector and civil society, has been drafting the text of an AI Convention.[10,11] A consolidated working draft of the AI Convention – which was, in spite of critique, drafted behind closed doors[12] – was released in July 2023 and will serve as the basis for further negotiations within the CoE.[5] The final text is expected in 2024.
While the CoE’s main human rights instrument – the European Convention on Human Rights – applies to all areas of life, including technological developments, and its European Court of Human Rights can already rule on human rights issues concerning new technology, the CoE deems a new AI Convention necessary for three reasons: (1) to resolve difficulties in interpreting existing human rights in the context of AI; (2) to legally embed essential principles; and (3) to set international AI human rights norms to support trade.[10] The Convention therefore does not necessarily introduce new human rights but rather designs more specific provisions to apply existing CoE human rights standards in the AI context. The legal instrument will probably take the form of a framework convention (guiding principles and values) rather than a convention (binding rights and obligations).[13]
The AI Convention aims to regulate AI systems in their entirety during all phases of their lifecycle: from design to development, to application, to decommissioning (Article 1). It defines AI systems as “any algorithmic system or a combination of such systems that uses computational methods derived from statistics or other mathematical techniques and that generates text, sound, image or other content or either assists or replaces human decision-making” (Article 3). The Convention applies horizontally across all sectors (Article 4). The Convention stipulates general obligations for actors applying AI systems (Chapter II) and specific obligations for Member States implementing the Convention, such as ensuring non-discrimination (Article 17), providing for public consultation (Article 19), and investing in digital literacy and skills (Article 20). At the same time, it formalizes fundamental AI principles to underpin concrete individual rights, such as the principle of transparency and oversight (Article 7) and the principle of safe innovation (Article 12). To ensure accountability and redress, it requires States to implement suitable redress systems (Chapter IV). It also proposes a regime for the assessment of the impact of AI technologies on human rights based on the level of risk (Chapter VI) and requires States to establish effective oversight mechanisms (Article 25).
The AI Convention is not the first legal instrument to set rules for AI. The European Commission is preparing the EU AI Act, and the U.S. Federal Government has recently published a Blueprint for an AI Bill of Rights.[14] The AI Convention will not override these regulatory initiatives but instead aims to complement them. The AI Convention and the AI Act exist within different legal frameworks, although they regulate the same object (AI systems) and apply to overlapping actors (the 27 EU Member States). The AI Act focuses on streamlining products using AI within the EU internal market, while the AI Convention centers on protecting the fundamental rights of people affected by AI systems. The EU AI strategy has been critiqued for its limited applicability to health-specific issues and its lack of individual fundamental rights protection in the realm of healthcare.[15,16,17] The AI Convention has the potential to fill these legal gaps.
4. Promises of the Council of Europe’s AI Convention for health protection
4.1. First global AI treaty
Should the negotiations around the AI Convention be successful, it will be the first significant international treaty to apply to medical AI. Moreover, it would be the first treaty in this field with the primary aim of protecting citizens’ human rights against the harms of AI. The AI Convention is principle-based and will – unlike the AI Act and the AI Bill of Rights – introduce legally binding principles to ensure human rights, which will also apply to citizens of non-EU states.[11] This application to non-EU states and citizens means that the AI Convention will have a broader territorial scope of application than the AI Act, as the CoE has 46 member states (including all 27 EU Member States). Observer states such as Japan and the US are also involved in the drafting and can decide to become signatories to the AI Convention. Third countries also have the option to accede to the treaty. The European Union is likewise involved in the drafting of the Convention. Because of all the different parties involved, the AI Convention has the potential to become the first legally binding global treaty on AI.
Striving toward a global AI treaty is a laudable move. First, the potentially disruptive effects of AI technology transcend borders, especially in inherently international areas such as warfare, trade, and (cyber)surveillance. Second, international tech companies lead the “AI revolution”. This multi-level, cross-border digital ecosystem, consisting of private and public actors, cannot be effectively regulated by national regulators alone.[18] For the healthcare sector, a global AI treaty would mean that the human rights standards governing medical products and services using AI would be raised worldwide. Setting international human rights standards would also guarantee equal protection to patients seeking healthcare across borders. A global AI treaty, especially one that covers medical applications, would truly be a landmark achievement in AI governance and a step in the right direction toward effectively regulating technology that easily crosses jurisdictions.
4.2. A human rights-based approach to AI governance
In accordance with its aim to achieve greater unity between States, and out of concern for the potential detrimental effects of AI systems on the protection of human rights, the CoE proposes a human rights-based approach to AI governance. First and foremost, the AI Convention requires all AI systems to undergo a human rights assessment before deployment (Article 15). Using the proposed impact assessment model, States need to identify AI systems that present “unacceptable” or “significant” levels of risk to human rights. Unacceptable AI systems require States to impose a moratorium on their use, while systems posing significant risks require strict measures to prevent harm. Additionally, the AI Convention guarantees non-discrimination, privacy and data protection, transparency, and robust systems for redressing harm as key principles for AI.[19]
The human rights assessment has considerable potential for protecting patients against the possible harms of medical AI tools. Foremost, it would require a risk assessment against the health-related rights protected by the European Convention on Human Rights and the case law of its Court, such as the right to life and the right to access healthcare. The principle of non-discrimination, as guaranteed by the Convention, is important for health equity: fair opportunities to attain the highest level of health for all individuals. For example, when AI becomes common practice in healthcare, States must invest in internet access in remote areas to ensure sufficient access to standard-of-care digital technologies. The Convention also sets high requirements for data protection. Adequate data protection is an important goal in the healthcare sector because of the intimate nature of the data patients share with their doctors. By introducing robust frameworks for accountability and liability, the Convention adds a much-needed new layer of protection for patients, as current legal regimes do not equip patients with the means to remedy harm caused by medical AI.[17] The transparency principles put forward by the Convention protect patients from so-called “black box” algorithms and allow for informed decision-making about medical treatment, which is an important factor in the creation of a relationship of trust between patient and doctor.
Setting international human rights standards for AI is important, especially for medical AI. First, people in need of healthcare are in a more vulnerable position than “regular consumers”. Second, the protection of patients’ rights and the financing of the healthcare sector are typically regulated nationally, which means that patients’ rights and entitlements vary between States. Because of its global character and focus on human rights, the AI Convention, if enacted, can be an important tool to ensure that AI technology is implemented equitably and consistently for all, regardless of location.
4.3. All sectors, across the AI lifecycle
The definition of AI in the proposed AI Convention is rather broad and includes most medical AI applications, from ChatGPT-assisted diagnosis and fertility-tracking apps to AI-based fall detection in elderly care facilities. There is, however, no consensus yet on the exact scope of the Convention. While the Council of Europe advocates a broad scope, the U.S., the U.K., and Japan aim to limit the scope to the public sector.[20]
Even if the scope were limited to uses of AI in the public sector, a sizable number of medical AI applications would still be covered by the Convention: many public healthcare systems use AI, governments use AI for public health purposes (e.g. automated disease surveillance and Covid-19 apps), and many states deploy AI for the distribution of health-related social benefits. Another important feature of the AI Convention is that it applies to the entire lifecycle of AI. As some AI systems evolve over time (“adaptive AI systems”), this approach allows for regulation of the total lifecycle. In comparison, the EU’s AI Act does not take a lifecycle approach but focuses on regulating the development phase. On top of that, it only sets strict rules for “high-risk” AI medical devices. For patients, regulating the application phase could be beneficial, as this would also create obligations for physicians using medical AI.
4.4. Enforceable rights through the European Human Rights Court
The CoE has a strong history of upholding patients’ human rights and health, and the AI Convention has the potential to further contribute to this aim. In order to achieve improved human rights protections in the medical field, the CoE has produced numerous recommendations on medical-ethical concerns, such as health governance, organ donation, and medical data protection. The European Court of Human Rights has upheld patient protections against, among other things, coerced medical procedures, breaches of medical data, and health discrimination. Under the AI Convention, for example, the right to know that one is interacting with an AI system could be invoked by patients in national courts. In the past, the CoE has introduced strict obligations for states to proactively protect the health and patients’ rights of their citizens. Following court orders, many states have amended national health laws and policies.[21] The AI Convention has the potential to influence national practices regarding medical AI in the same manner.
5. Challenges of the Council of Europe’s AI Convention for health protection
An obvious challenge to the effectiveness of the proposed AI Convention in the medical context is its general nature. The Convention applies horizontally across all sectors and does not stipulate specific rules for the use of AI in healthcare. This means that it does not address health-specific challenges, such as medical informed consent procedures, and will need “translation” to the medical context.
A human rights-based approach to regulating AI has also been criticized: human rights have been accused of being too Western, too individualistic, too narrow, and too abstract.[22] For example, the choice of individual rights has been questioned, as AI often uses aggregated, anonymized data that may directly affect the human rights of a specific population group rather than an individual.
The CoE decided to “re-interpret” existing human rights to fit the context of AI. Some scholars have, however, argued that current rights are insufficient and call for a complementary set of human rights directed specifically against the harms arising from AI technologies. For example, a “neurorights” movement emerged in response to the development of AI in neurotechnology,[23] such as AI-powered brain implants to treat illnesses[24] or headphones that record brain activity to improve productivity.[25] In this context, scholars have proposed new human rights to mental self-determination and to cognitive liberty.[26] Other rights proposed by scholars are the right to be offline and the right to internet access.[27] One could also think of a right to a human doctor, or a right to a second AI opinion.
Civil society organizations have expressed concerns about the implications for human rights protection of the blanket exemption for national security included in a previous draft of the text (the Revised Zero Draft).[28] While this blanket exemption has disappeared from the current version of the text, the proposal still recalls existing exemptions for issues of national security. This type of exemption may also apply to medical applications of AI: AI is increasingly used at the intersection of public health and national security, for example in Covid-19 apps, fever screening tools in border control, and emotion recognition systems to assess the mental state of undocumented migrants.[29]
Other concerns are the broad definition of AI in the Convention and the lack of guidance on which unacceptable AI systems governments should prohibit.[28] An often-mentioned example of unacceptable AI is biometric mass surveillance, such as behavioral detection of mental disability in public spaces.[30] In healthcare, patients might benefit from a prohibition of solely automated refusals of health insurance applications.
Finally, the great ambitions of the AI Convention may challenge its eventual success. If the objective is to generate global consensus, there is a significant risk of watering down the text, both in relation to its scope and its obligations. To illustrate, the US, UK, Canada, and Israel have been pushing to limit the scope of the Convention to the public sector only, leaving out private companies using AI.[20] In light of the growing power of big tech companies in the realm of medical AI, excluding them would be disadvantageous for patients. Moreover, the focus on global applicability may result in guiding principles rather than rights and obligations, limiting the protection of patients. In the latest version of the text, the effect of the negotiations is clearly visible: compared to the previous draft (the Revised Zero Draft), there is more emphasis on national interpretation, domestic legal systems, and guiding principles instead of binding rights.
6. Conclusions
In sum, a human rights-based approach to AI is beneficial for the health and human rights of patients. When medical AI becomes standard in healthcare, the CoE’s AI Convention – the first legally enforceable fundamental rights instrument for AI established on such a large scale – has the potential to improve health and patients’ rights globally. The current draft text is a step in the right direction, but its current shortcomings may significantly diminish its positive impact. It is therefore important that the CoE does not compromise on the wide scope of application and the rights-based character of the proposed Convention. Moreover, the CoE could play an important role in translating the human rights-based approach to AI to the context of healthcare, for example by issuing a recommendation.
At the same time, the current political landscape may not allow the CoE to live up to its ambitions. In this light, the CoE has to consider which is more valuable: establishing internationally accepted human rights principles for AI governance or introducing enforceable rights for a smaller group of citizens. Compared to the Revised Zero Draft, the latest version of the text seems to focus more on principles. This may be explained by the exclusion of civil society organizations from the Drafting Committee meetings. In our view, an issue as important as a worldwide AI Convention requires a new public round of stakeholder consultation, as ultimately, AI affects the human rights of everyone.
References
- [1] G. Kolata. When Doctors Use a Chatbot to Improve Their Bedside Manner. The New York Times, 12 June 2023. Available from: https://www.nytimes.com/2023/06/12/health/doctors-chatgpt-artificial-intelligence.html
- [2] A.A.B. Jamjoom, A.M.A. Jamjoom, J.P. Thomas, P. Palmisciano, K. Kerr, J.W. Collins, et al. Autonomous surgical robotic systems and the liability dilemma. Front Surg 9 (2022). Available from: https://www.frontiersin.org/articles/10.3389/fsurg.2022.1015367
- [3] L.N. Guo, M.S. Lee, B. Kassamali, C. Mita, V.E. Nambudiri. Bias in, bias out: Underreporting and underrepresentation of diverse skin types in machine learning research for skin cancer detection—A scoping review. J Am Acad Dermatol 87(1) (2022), pp. 157–159. Available from: https://www.jaad.org/article/S0190-9622(21)02086-7/fulltext
- [4] N. Maslej, L. Fattorini, E. Brynjolfsson, J. Etchemendy, K. Ligett. The AI Index 2023 Annual Report. Stanford, CA: Institute for Human-Centered AI, Stanford University, April 2023. Available from: https://hai-annual-report.stanford.edu
- [5] Committee on Artificial Intelligence (CAI), Council of Europe. Consolidated working draft of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. Strasbourg; July 2023. Report No. CAI(2023)18.
- [6] J. Adler-Milstein, N. Aggarwal, M. Ahmed, J. Castner, B.J. Evans, A.A. Gonzalez, et al. Meeting the Moment: Addressing Barriers and Facilitating Clinical Adoption of Artificial Intelligence in Medical Diagnosis. NAM Perspectives (2022). Available from: https://nam.edu/meeting-the-moment-addressing-barriers-and-facilitating-clinical-adoption-of-artificial-intelligence-in-medical-diagnosis/
- [7] The Lancet Digital Health. Digital technologies: a new determinant of health. Lancet Digit Health 3 (2021), e684. Available from: https://www.thelancet.com/journals/landig/article/PIIS2589-7500(21)00238-7/fulltext
- [8] I.G. Cohen. Informed Consent and Medical Artificial Intelligence: What to Tell the Patient? The Georgetown Law Journal 108 (2020), pp. 1425–1469. Available from: https://www.law.georgetown.edu/georgetown-law-journal/in-print/volume-108/volume-108-issue-6-june-2020/informed-consent-and-medical-artificial-intelligence-what-to-tell-the-patient/
- [9] World Health Organization. WHO calls for safe and ethical AI for health. 16 May 2023. Available from: https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health
- [10] Ad Hoc Committee on Artificial Intelligence (CAHAI). Feasibility Study. Strasbourg; December 2020. Report No. CAHAI(2020)23. Available from: https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da
- [11] V. Hendrickx. The Council of Europe’s road towards an AI Convention: taking stock. CITIP blog (2023). Available from: https://www.law.kuleuven.be/citip/blog/the-council-of-europes-road-towards-an-ai-convention-taking-stock/
- [12] M. Hickok, M. Rotenberg, K. Caunes. The Council of Europe Creates a Black Box for AI Policy. Verfassungsblog, 24 January 2023. Available from: https://verfassungsblog.de/coe-black-box-ai/
- [13] D. Leslie, C. Burr, M. Aitken, J. Cowls, M. Katell, M. Briggs. Artificial intelligence, human rights, democracy, and the rule of law – A primer. Strasbourg: Council of Europe, 2021. Available from: https://edoc.coe.int/en/artificial-intelligence/10206-artificial-intelligence-human-rights-democracy-and-the-rule-of-law-a-primer.html
- [14] Globalpolicy.AI. Available from: https://globalpolicy.ai/en/
- [15] I.G. Cohen, T. Evgeniou, S. Gerke, T. Minssen. The European artificial intelligence strategy: implications and challenges for digital health. Lancet Digit Health 2(7) (2020), pp. e376–e379.
- [16] H. van Kolfschooten. EU regulation of artificial intelligence: Challenges for patients’ rights. Common Market Law Review 59(1) (2022). Available from: https://kluwerlawonline.com/journalarticle/Common+Market+Law+Review/59.1/COLA2022005
- [17] M.N. Duffourc, S. Gerke. The proposed EU Directives for AI liability leave worrying gaps likely to impact medical AI. npj Digital Medicine 6(1) (2023), pp. 1–6. Available from: https://www.nature.com/articles/s41746-023-00823-w
- [18] L. Schmitt. Mapping global AI governance: a nascent regime in a fragmented landscape. AI and Ethics 2(2) (2022), pp. 303–314. https://doi.org/10.1007/s43681-021-00083-y
- [19] Council of Europe. Algorithms and human rights: Study on the human rights dimensions of automated data processing techniques and possible regulatory implications. Strasbourg; March 2018. Report No. DGI(2017)12.
- [20] Euractiv. EU’s AI ambitions at risk as US pushes to water down international treaty. 2023. Available from: https://www.euractiv.com/section/artificial-intelligence/news/eus-ai-ambitions-at-risk-as-us-push-to-water-down-international-treaty/
- [21] Council of Europe. Human rights and health – Impact of the European Convention on Human Rights. Available from: https://www.coe.int/en/web/impact-convention-human-rights/human-rights-and-health
- [22] N.A. Smuha. Beyond a Human Rights-Based Approach to AI Governance: Promise, Pitfalls, Plea. Philosophy & Technology 34 (2021), pp. 91–104. https://doi.org/10.1007/s13347-020-00403-w
- [23] M. Ienca. On Neurorights. Frontiers in Human Neuroscience 15 (2021). Available from: https://www.frontiersin.org/articles/10.3389/fnhum.2021.701258
- [24] O. Whang. Brain Implants Allow Paralyzed Man to Walk Using His Thoughts. The New York Times, 24 May 2023. Available from: https://www.nytimes.com/2023/05/24/science/paralysis-brain-implants-ai.html
- [25] Euronews. These neurohacking headphones track brain signals to keep you focused. 14 June 2023. Available from: https://www.euronews.com/next/2023/06/14/these-neurohacking-headphones-use-ai-to-track-your-brain-signals-to-help-you-stay-producti
- [26] N. Hertz. Neurorights – Do we Need New Human Rights? A Reconsideration of the Right to Freedom of Thought. Neuroethics 16(1) (2022), p. 5. https://doi.org/10.1007/s12152-022-09511-0
- [27] B. Custers. New digital rights: Imagining additional fundamental rights for the digital era. Computer Law & Security Review 44 (2022), Article 105636. Available from: https://www.sciencedirect.com/science/article/pii/S0267364921001096
- [28] European Center for Not-for-Profit Law (ECNL). Council of Europe must not water down their human rights standards in convention on AI. 2023. Available from: https://ecnl.org/news/council-europe-must-not-water-down-their-human-rights-standards-convention-ai
- [29] Statewatch. A clear and present danger: Missing safeguards on migration and asylum in the EU’s AI Act. Available from: https://www.statewatch.org/publications/reports-and-books/a-clear-and-present-danger-missing-safeguards-on-migration-and-asylum-in-the-eu-s-ai-act/
- [30] European Parliament Think Tank. Biometric Recognition and Behavioural Detection: Assessing the ethical aspects of biometric recognition and behavioural detection techniques with a focus on their current and future use in public spaces. Available from: https://www.europarl.europa.eu/thinktank/en/document/IPOL_STU(2021)696968
Please cite as:
Van Kolfschooten & Shachar, ‘The Council of Europe’s AI Convention (2023-2024): Promises and Pitfalls for Health Protection’, Health Policy 2023, https://doi.org/10.1016/j.healthpol.2023.104935.

