Accounting for EU external effects: from clinical trials to data colonialism to AI ethics dumping


Against a backdrop of rapidly expanding health artificial intelligence (AI) development, this paper examines how the European Union’s (EU) stringent digital regulations may incentivise the outsourcing of personal health data collection to low- and middle-income countries (LMICs), fuelling a new form of AI ethics dumping. Drawing on parallels with the historical offshoring of clinical trials, we argue that current EU instruments, such as the General Data Protection Regulation (GDPR), Artificial Intelligence Act (AI Act) and Medical Devices Regulation, impose robust internal safeguards but do not prevent the use of health data collected unethically beyond EU borders. This regulatory gap enables data colonialism, whereby commercial actors exploit weaker legal environments abroad without equitable benefit-sharing. Building on earlier EU responses to ethics dumping in clinical trials, we propose legal and policy pathways to prevent similar harms in the context of AI.

Please cite as: van Kolfschooten H, Parwani P, Perehudoff K. Accounting for EU external effects: from clinical trials to data colonialism to AI ethics dumping. International Journal of Law in Context. Published online 2026:1-15. doi:10.1017/S1744552325100323

Introduction

The use of artificial intelligence (AI) in health care is said to hold great potential for improving human health worldwide (World Health Organization 2021). AI systems can – with a certain degree of autonomy – quickly analyse enormous datasets and generate recommendations, decisions, predictions and new information.[1] The use of AI could improve the quality, effectiveness, efficiency and accessibility of health care (Davenport and Kalakota 2019). There are many promising examples of AI uses in low-resource health-care settings, such as mobile apps for rural areas using automated audio analysis of coughing sounds to determine whether a person has tuberculosis (Pahar et al. 2022); clinical decision support tools for hyper-specialised diagnostics and treatment planning in low-resource hospitals (Hosny and Aerts 2019); and AI-powered chatbots providing personalised health advice, for example for maternal health (Phiri and Munoriyarwa 2023). There is, however, an evident global ‘AI divide’, where the social and economic benefits of AI technologies are disproportionately distributed between high-income countries (HICs) and low- and middle-income countries (LMICs) (Carter et al. 2020; Wang et al. 2024).

In the European Union (EU), the annual private investment in AI technology in the medical and health-care sector increased by 108.15 per cent between 2017 (USD 339.88 million) and 2022 (USD 707.46 million). This investment makes medical technology one of the top fields to deploy AI, and thus an interesting market opportunity for Big Tech companies (AI Index Report 2023).[2] Against this backdrop, in recent years, the EU has increasingly regulated its ‘Digital Single Market’ through the General Data Protection Regulation (GDPR),[3] and since August 2024, the Artificial Intelligence Act (AI Act).[4] Both establish strict compliance requirements for data processing and high-risk AI systems.
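As a quick arithmetic check (ours, not a figure reported by the AI Index Report), the quoted growth rate follows directly from the two investment totals:

\[
\frac{707.46 - 339.88}{339.88} \times 100 \approx 108.15\ \text{per cent}
\]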

This paper focuses on the potential dangers of the EU’s strict digital regulations incentivising AI developers to offshore data collection activities to LMICs, particularly in sub-Saharan Africa. The strict regulation of personal data collection, processing and sharing in the EU has given rise to increased practices of ‘digital ethics dumping’: the exporting to LMICs – where strong legal frameworks and ethics compliance mechanisms may be lacking – of data collection practices that would be ethically unacceptable in Europe, for use in HICs (Floridi 2019). Indeed, as the demand for medical AI technology grows and legal frameworks in the EU intensify, the collection of personal health data in LMICs for the training, testing and development of AI systems may give rise to a new reality of ‘AI ethics dumping’. Numerous scholars have conceptualised these practices as a form of ‘data colonialism’: Big Tech companies exploit their powers to extort personal data from individuals, imposing burdens on the local community without sharing the benefits (Magalhães and Couldry 2021).

We caution that history may be repeating itself in digital form. Just as with the EU’s past regulation of clinical trials testing new medicines, stringent EU regulation of the Digital Single Market may unintentionally exacerbate ethical issues in LMICs. Ethics dumping and data colonialism are not new phenomena: there are many examples of medical researchers escaping restrictive regulatory regimes in some parts of the world by exporting otherwise prohibited research practices to LMICs. For instance, the International Agency for Research on Cancer (IARC), in collaboration with the US National Institutes of Health and the Gates Foundation, conducted clinical trials for cervical cancer screening in India. Despite the availability of proven effective screening methods, the women in the control arm of the trials did not receive any screening, and informed consent was missing (Bagcchi 2014; Suba et al. 2018). Although this violated well-established ethical principles, compliance with ethical guidelines for clinical trials was not required by law and the trials were not subject to legal oversight (Srinivasan et al. 2017). In response, India updated its clinical trial guidelines in 2019 (Pillamarapu et al. 2019). Another example is AstraZeneca’s antipsychotic drug Seroquel (approved in 2007): placebo-controlled trials with patients suffering from schizophrenia or acute mania were conducted almost exclusively in LMICs, including India, Malaysia, South Africa and the Philippines, due to strict laws in the EU and USA (Schipper and Weyzig 2008). Research already suggests that the adoption of strict EU data protection rules affects the export of clinical trials to LMICs (Calzati 2022).

The EU has responded in several ways to limit ethics dumping of clinical trials in LMICs, such as the European Medicines Agency (EMA) guidelines for clinical trials conducted outside the EU and stricter conditions for research funding. However, there are currently no strong legal instruments to limit AI ethics dumping. While the EU has strong internal data protection rules (e.g. the GDPR), it lacks an enforceable legal framework for the extra-territorial collection of personal data from LMICs for use in training AI systems. The Declaration of Helsinki, which stipulates the ethical principles for medical research involving human participants, has been critiqued for focusing on pharmaceutical drugs and excluding AI tools (Haidar and Gooshki 2024).

This paper is structured as follows. Section 1 examines how legal, technical and commercial barriers within the EU incentivise the outsourcing of health data collection to LMICs, fuelling practices of digital ethics dumping and data colonialism. Section 2 draws parallels with past clinical trial outsourcing, showing how the EU has previously addressed ethics dumping through regulatory interventions. Section 3 proposes regulatory pathways to prevent similar harms in the AI context. Our core message is that, as the EU moves forward with regulating data-driven technology, those regulations will have ripple effects in LMICs. Now is the time to consider the potential global effects of the EU’s Digital Single Market regulation.

1. EU external effects: outsourcing collection of personal data

1.1 Health AI is lucrative – but data are hard to get

AI systems, particularly those developed for health-related applications, rely on vast quantities of high-quality personal health data. These data are essential for training, testing and validating AI models. The need for such data has only intensified with the global push toward digital health solutions such as AI-powered diagnostic tools and medical chatbots. Health AI is also a lucrative field, and both public institutions and private companies are investing heavily in its development. However, access to such data is highly constrained within the EU due to technical, commercial and legal barriers.

First, technical barriers include the fragmentation and lack of interoperability between health data systems in EU Member States. Electronic health records (EHRs) are often held in different formats and maintained by a range of public and private actors with inconsistent standards and varying access policies (Stegemann and Gersch 2019). The European Commission has attempted to remedy this through the European Health Data Space (EHDS), but progress remains slow and stakeholders are sceptical about its technical feasibility (Marelli et al. 2023). Second, commercial barriers arise because health data are a valuable asset. Obtaining data directly from medical records or medical questionnaires is time-consuming and costly. Large health datasets are often held by hospitals, insurers or private companies that are reluctant to share them without high compensation. The scarcity of freely available, high-quality health datasets contributes to rising costs for data acquisition (Raab et al. 2023).

Third, and most importantly, there are legal constraints on the collection of personal health data. The GDPR, applicable since 2018, plays a central role. It applies to all types of personal data, regardless of sector, and is fully binding and directly applicable across the EU (Articles 1–3). The GDPR requires every instance of personal data processing to be underpinned by a legal basis (Article 6), and provides individuals with several rights, such as the right to information, the right of access and the right to withdraw consent (Articles 7(3) and 12–15). Similarly, it imposes duties on data processors and controllers, and confers rights on data subjects to increase their control (Chapter III). Notably, the GDPR treats health data as ‘sensitive’ personal data; Article 4(15) defines data concerning health as ‘personal data related to the physical or mental health of a natural person, including the provision of healthcare services, which reveal information about his or her health status’. This definition is explicitly broader than personal data obtained in the medical sphere.[5] The GDPR stipulates a general prohibition on the processing of health data, while providing for limited derogations (Article 9). In short, in the EU, the collection, processing and sharing of personal data to develop AI technology are bound to strict conditions (e.g. based on informed consent or in the public interest) and safeguards (e.g. protection of data subjects’ rights). Therefore, not all actors in search of personal data to develop AI systems are able or willing to obtain enough high-quality data under these conditions.

These constraints also intersect with the EU AI Act, which entered into force in August 2024. The AI Act lays down the legal framework for the development, placing on the market, putting into service and use of AI systems in the EU. Its objective is to foster innovation by harmonising the market, while simultaneously offering protection to fundamental rights (van Kolfschooten 2022a). The AI Act imposes further requirements on the development of high-risk AI systems, which include most medical AI. For instance, developers must ensure the use of high-quality datasets that are representative, free of errors and biases and relevant to the intended use. Finally, the EU Medical Devices Regulation (MDR) sets strict requirements for the quality and safety of AI-based medical devices (van Kolfschooten 2022b, 2023). As a result, companies increasingly look to LMICs to meet their data needs. While the AI Act includes provisions to prevent circumvention of EU rules – applying extra-territorially to providers and deployers of AI systems established outside the Union where the output is intended to be used in the EU (Recitals 21–22; Article 2(1)) – these safeguards do not prevent the use of personal data collected under weak or absent protections abroad.

1.2 The price of AI: from regulatory avoidance to data colonialism

Together, these technical, commercial and legal hurdles may create incentives for AI developers to seek data elsewhere. Indeed, evidence is growing that Big Tech companies are turning to LMICs, particularly in sub-Saharan Africa, where large and diverse populations offer valuable data sources, and data protection regimes are often less stringent or poorly enforced. Africa has even been described as a ‘data goldmine’. This section outlines three interrelated activities that demonstrate how these incentives play out in practice. While the third activity – large-scale extraction of personal data – most directly supports the argument that strict EU regulation pushes health data collection for AI system development abroad, the first two illustrate broader patterns of technological dependency and epistemic injustice in relation to data collection.

First, Big Tech companies are building digital infrastructures in LMICs, often framed as philanthropic initiatives to expand Internet access and digital inclusion. In reality, these projects serve as entry points for datafication and establish long-term dependencies (Magalhães and Couldry 2021). In 2013, Mark Zuckerberg published a paper titled ‘Is connectivity a human right?’ – presenting Facebook’s philanthropic mission to ensure worldwide Internet access (Zuckerberg 2013). Indeed, in 2016, Facebook launched its ‘Free Basics’ platform in over twenty African countries, in partnership with local telecoms operators. This platform provided users with free access to a list of websites (Nothias 2020). In the same vein, in 2018, Google collaborated with a local cable company in Lagos, Nigeria, to launch a network of free Wi-Fi hotspots (Oyedemi 2021). Both Google and Meta are working on undersea cables to further connect the African continent to the Internet (Browne 2020). However, it has been argued that these datafication projects in LMICs do not actually do social good, but are mainly undertaken in the interest of profit (Magalhães and Couldry 2021). At the same time, there has been a recent surge in the building of data centres on the African continent, often with the help of international loans (Soulé 2024). The establishment of digital infrastructures in Africa by foreign actors can create dependency and may limit local innovation and control over digital infrastructure (Birhane 2020). While this infrastructuring may not result directly from EU regulation, it lays the groundwork for the exploitative data practices discussed below.

Second, there is a growing trend of AI technologies developed in HICs being exported to LMICs, often without adequate contextual adaptation or local innovation capacity. For example, empirical research shows that even relatively technology-advanced states, such as Nigeria, mainly import their technological products from HICs. This is problematic for numerous reasons. First, these AI models do not necessarily address the critical issues experienced on the African continent, as the context in which the tools are deployed matters (Arun 2020). Second, if designed to work for people living in HICs in the Global North, these tools do not necessarily function as well for populations in LMICs. Research shows that patterns of health inequality and discrimination are deeply embedded in most AI systems (Leslie et al. 2021). Most of the time, medical AI is developed for high-resource settings in HICs and does not take into account the different deployment contexts in LMICs, for example regarding varying population groups, diseases, available resources or digital education. For example, research on breast cancer detection shows that the same screening technology was less effective in sub-Saharan Africa than in HICs (Black and Richmond 2019); African AI researchers often critique the lack of ‘African context’ in research on AI (Cisse 2018; Hassan 2023; Wairegi et al. 2021). On top of that, AI systems often mimic existing stereotypes, prejudices and discriminatory practices linked with ethnicity, gender and socio-economic status (van Kolfschooten 2025). These biases in the choices made in the development, design and deployment of medical AI eventually augment global health inequity (Owoyemi et al. 2020). Finally, importing AI tools hinders the development of local tech products, and thus obstructs innovation (Birhane 2020; Ruttkamp-Bloem 2023). Although not a direct consequence of EU digital regulation, these practices reveal how structural inequities in AI development and deployment align with, and are reinforced by, the data collection strategies addressed in the next section.

Third, and most directly relevant to the EU’s regulatory impact, are practices of large-scale personal data extraction in LMICs, which Floridi (2019) terms ‘digital ethics dumping’. These involve transferring data collection activities that would not be ethically permissible under the GDPR to jurisdictions with minimal legal oversight. International corporations and foreign governments harvest vast amounts of data from – mainly – African countries without informed consent, adequate compensation or benefit sharing. Indeed, Africans do not actually receive Internet services for free from Big Tech companies: the price they pay is their personal data, and thus their right to privacy. Tech companies collect personal data through online behavioural tracking technology – especially in African countries with limited data protection legislation and/or enforcement – and count on the relative lack of digital literacy and privacy awareness (Nhemachena et al. 2020). Personal data are sold for health analytics or targeted advertisement. Foreign tech companies also provide governments in Africa with free access to AI and other technology solutions, in exchange for access to local citizens’ data (Calzati 2022; Odero 2024; Soulé 2024). Given AI’s need for large amounts of data, its growing use may lead to further exploitation of populations in vulnerable positions where oversight and regulation are lacking.

Practices surrounding the collection, sharing and analysis of data from LMICs by Big Tech companies have been problematised by numerous scholars (Coleman 2018). Comparable to traditional patterns of colonialism, Big Tech companies operating in Africa exploit their powers by extorting personal data from individuals (Abebe et al. 2021). Applied to data collection, the concept of data colonialism critiques how data and digital technologies are deployed in ways that mimic colonial practices, often extracting value from the continent and its peoples without equitable return or respect for their sovereignty and rights. In many cases, the local community lacks the capacity to access their own personal data and thus misses out on potential economic benefits. At the same time, ‘global’ AI ethics instruments, such as the OECD’s AI Principles, the UNESCO Recommendation on the Ethics of AI and the G7 Hiroshima Process International Guiding Principles for Advanced AI Systems, do not consider the African context, imposing ‘Western’ values on the African continent, for example in relation to privacy and data protection (Corrêa et al. 2023; Gwagwa et al. 2022). In turn, this lack of fair distribution of benefits amplifies global health inequity. It has been argued that this extraction of personal data is similar to historical colonial extractions of resources (Coleman 2018).

In the domain of health AI development, there is growing concern that personal data collected in LMICs – often without individuals’ knowledge, informed consent or fair compensation – may be used to train algorithms commercialised in HICs. While the full scope of these practices remains difficult to trace due to the opacity of AI development pipelines and proprietary data-sharing agreements, research consistently shows large-scale health data collection in LMICs (Abebe et al. 2021; Coleman 2018; Nhemachena et al. 2020). These dynamics raise particular concern in the context of AI, where large and diverse datasets are needed to develop systems for diagnosis, triage or health monitoring. As demand for such data increases – and as EU regulations like the GDPR and the AI Act place stricter limits on data collection within Europe – the risk of digital ethics dumping is likely to grow. The current EU legal framework does not directly constrain these practices.

1.3 Limited restraints on obtaining personal data abroad

While Sections 1.1 and 1.2 have shown how regulatory, technical and commercial constraints within the EU incentivise the outsourcing of data collection to countries in the Global South, this section turns to the legal frameworks that fail to restrain such outsourcing. Specifically, the GDPR, AI Act and MDR – while strict in their application within the EU – provide no meaningful safeguards for how personal data are sourced from LMICs for the development of AI systems.

First, the GDPR applies primarily to personal data processing in the context of the activities of an establishment in the EU (Article 3(1)), and to data processing related to the offering of goods or services to, or the monitoring of, individuals in the EU (Article 3(2)). It does not extend to the processing of personal data relating to individuals outside the EU, unless those individuals are being specifically targeted. As a result, the GDPR’s strong safeguards, including lawful grounds for processing (Article 6), the prohibition on processing special categories of data including health data (Article 9(1)) and the requirement for explicit consent (Article 9(2)(a)), do not apply to the collection and use of health data from LMICs when EU residents are not involved. This legal gap allows EU-based developers to train AI models using personal data from non-EU contexts without being bound by the GDPR’s protections, as long as their processing activities do not fall within the Regulation’s territorial scope.

Second, the AI Act imposes detailed obligations on developers of AI systems that are placed on the EU market or used in the EU (Article 2(1)), including those developed outside the EU. High-risk AI systems – which include many health-related applications (see Annex III and, in the case of medical devices, Article 6(1); van Kolfschooten 2022a) – must meet strict requirements regarding risk management (Article 9), data governance and data quality (Article 10), transparency (Article 13) and human oversight (Article 14). Notably, Article 10(3) requires that training, validation and testing data sets be ‘relevant, representative, free of errors and complete’, but it does not address how the data must be obtained or whether it must be collected in an ethically sound or legally compliant manner abroad. Nor does the AI Act require disclosure of data origins or mechanisms for informed consent or benefit-sharing with non-EU data subjects. Although Recitals 21 and 22 and Article 2(1) of the AI Act extend the Regulation’s scope to providers and deployers established in third countries, this applies only when the AI output is intended to be used within the Union. The provision aims to prevent circumvention of EU rules and protect individuals located in the EU. However, it does not impose ethical standards for how data used to train the AI was sourced, nor does it extend protections to individuals outside the Union whose data may be used in training. Moreover, non-medical health-related AI systems (such as menstruation apps, symptom trackers and assisted living sensors) do not have to comply with the AI Act standards (van Kolfschooten 2022b, 2023).

Third, the MDR[6] governs medical devices placed on the EU market, including software using AI for medical purposes (Article 2(1) and Annex VIII, Rule 11). The MDR requires manufacturers to perform a clinical evaluation to demonstrate conformity with general safety and performance requirements (Articles 5(3), 10(3) and 61). Where clinical investigations are used, they must comply with the ethical principles of the Declaration of Helsinki (Article 62(4)(h)), and with Good Clinical Practice as detailed in Annex XV. However, these requirements are limited to clinical investigations (which not all AI medical devices require), and do not apply to personal data used outside a trial setting – such as data collected through digital health apps, third-party partnerships or observational studies. While medical AI is increasingly used as a health intervention, this regime is not nearly as strict as the EU’s legal framework regulating medicinal products (Albuquerque de Almeida and Ricardo 2023). The MDR does not require access to the full training dataset used to develop an AI-based medical device – only a summary of safety and clinical performance (Article 32).

In sum, the EU legal framework does not directly prevent AI ethics dumping. As will be shown below, this is consistent with previous patterns in the conduct of clinical trials – where stringent regulations within the EU led to a practice of ethics dumping by EU pharmaceutical companies, which conducted clinical trials in LMICs under often comparatively limited (or at times absent) regulation.

2. New tools, old tricks: EU regulation of clinical trials, ethics dumping and global norm setting

Ethics dumping and data colonialism are not new phenomena. For decades, cases have surfaced of medical researchers evading strict regulatory regimes by exploiting weaker protections abroad – often in LMICs – where vulnerable populations become subjects of unethical experiments (Schroeder et al. 2017, 2019; Novoa-Heckel et al. 2017). Although the Declaration of Helsinki is widely recognised as the global standard on ethical trial conduct, some subjects in clinical trials in LMICs experience fewer systemic (legal) protections, lax compliance by research entities (e.g. pharmaceutical companies and their contract research organisations (CROs)) and weak respect for and enforcement of ethical standards and Good Clinical Practice. For example, some trials have used placebos and denied patients in the control arm access to known, effective therapies for their disease. In other trials, access to experimental medication is provided to participants during the study, but not after it ends, even when the medication is proven safe and effective (Bagcchi 2014; Suba et al. 2018). Finally, some participants may be enrolled in trials testing new medicines with neither their knowledge nor informed consent (Schroeder et al. 2020).

To appreciate the origin of the EU’s regulatory regime for clinical trials, we must first look back to the formulation of the World Medical Association (WMA)’s Declaration of Helsinki, from which many of the principles for trial conduct originate. Established as a physicians’ professional association, the WMA adopted a set of ethical principles in the Declaration of Helsinki to protect human subjects involved in medical research. The ethical principles enshrined therein include informing participants about, and handling, possible risks, burdens and benefits arising from the research (including informed consent); involving and protecting vulnerable subjects; and protecting subjects’ privacy and confidentiality. The norms in this global policy document are universal and are to be respected by every clinical researcher everywhere. As such, they form the basis for modern ethical rules and regulations adopted by domestic lawmakers. However, in some places, the conduct of some clinical trials strays from these norms. Such breaches are all the more troubling when they occur at non-European trial sites, for medicines destined for the European market.

The EU has responded in several ways to limit the possibilities for ethics dumping in clinical trials in LMICs. The sections below introduce the EU’s hard law and soft law approaches.

2.1 Territorial extension of EU ethical standards through hard law

Joanne Scott’s conception of territorial extension offers a helpful lens to understand how EU legal and ethical standards shape conduct beyond the EU’s borders. Where EU regulations make compliance (and consequently, market entry) ‘conditional upon conduct or circumstances abroad’, this can be understood as giving rise to territorial extension of EU law outside the EU. For territorial extension to apply, two conditions must be met: first, the application of the regulation in question must be triggered by a territorial link, such as the import of pharmaceutical products into the EU; and second, assessing compliance with the regulation must require an evaluation of conduct undertaken abroad to determine whether EU (or EU-equivalent) standards have been met (Scott 2014, 2020). This differs from extra-territorial application, in which EU law explicitly applies to conduct outside the EU regardless of where the product or service is used. It also differs from the Brussels effect, where non-EU actors voluntarily adopt EU standards (Bradford 2020). EU regulations governing the conduct of clinical trials in non-EU countries are acknowledged as instances of territorial extension (Flear 2014).

In 2001, the European Parliament and Council adopted Directive 2001/20/EC regarding the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use. Founded on the protection of human rights and human dignity codified in the Declaration of Helsinki (1996), Directive 2001/20/EC sought to protect trial participants in the European Community and third countries from ‘obsolete or repetitive tests’ by harmonising the technical requirements for product development in global fora, namely the International Conference on Harmonisation.[7] In other words, the European Community’s first binding law regarding the conduct of clinical trials flagged the need to regulate the ethical conduct of trials within and outside the Community; it then delegated the task of harmonised regulation to international fora, rather than taking it up in the binding law of the Community.

Only three years later did EU lawmakers use territorial extension in binding EU law to attempt to regulate the ethical conduct of some trials taking place outside the Union. EU lawmakers introduced the explicit requirement in Regulation 726/2004 (establishing the EMA) that any trial conducted outside the EU supporting an application for market approval of a new medicine for use within the Union must include a statement that the ethical requirements of Directive 2001/20/EC have been met.[8] Moreover, Regulation 726/2004 requires the EMA’s Committee for Medicinal Products for Human Use (responsible for evaluating most market approval applications) to ‘seek guidance on important questions of a general scientific or ethical nature’.[9]

For this purpose, the EMA adopted a guideline of ‘points to consider’ for Agency committees evaluating medicines tested outside Europe and seeking EU market approval. First, any evaluation of an EMA application ‘should ensure that these Good Clinical Practice (GCP) principles have been applied to all clinical trials submitted in the dossier’, regardless of where the trials were conducted (EMA Working Group on Clinical Trials conducted outside of the EU/EEA 2012, p. 34). Other points for consideration include the acceptability of ‘foreign data’ from a scientific perspective; ‘the conduct of the study and data reliability’; and the study design (e.g. use of placebos, comparators, informed consent).

There is a dearth of data regarding the global impact of the EU’s territorial extension of its binding norms for ethical trial conduct. EMA data on the authorisation of medicines for sale in the EU between 2005 and 2011 reveal that 27.8 per cent of clinical trials supporting EU market approval applications were conducted outside the EU/EEA/EFTA area and outside North America (EMA 2013, p. 9). Included in this group are 23,165 patients enrolled at 1,024 different investigator sites throughout the continent of Africa in so-called ‘pivotal trials’ submitted to the EMA as part of a market approval application (EMA 2013, pp. 10, 12, 14). Therefore, it is reasonable to assume that the EU’s ethical requirements in Regulation 726/2004 have added a layer of rights protection for these trials’ participants outside the EU (Perehudoff 2024).

2.2 EU soft law framing global research norms

Later, European lawmakers took a complementary soft policy approach to introduce safeguards against ‘ethics dumping’, a term first coined by the European Commission in 2013 to describe the emerging practice of outsourcing or offshoring drug development, and specifically clinical trials, to LMICs to avoid strict regulatory regimes. The European Commission explains that research activities conducted outside the EU are particularly vulnerable:

‘Due to the progressive globalisation of research activities, the risk is higher that research with sensitive ethical issues is conducted by European organisations outside the EU in a way that would not be accepted in Europe from an ethical point of view. This exportation of these non-compliant research practices is called ethics dumping’ (European Commission 2014).

More recently, the European Commission adopted a new Code of Conduct as a mandatory reference document for its Horizon 2020 projects, which includes clear guidelines to aid researchers in identifying and mitigating the sometimes unintentional and obscure instances of ethics violations (TRUST 2018).

As a rising global health actor, the EU expanded its soft policy approach in the field of research policy to avoid undermining the protection of participants in the offshoring of clinical trials (Petryna 2007). In 2018, the European Commission also proposed a global code of conduct for international research partnerships to counter ethics dumping. The Global Code of Conduct for Research in Resource-Poor Settings (later renamed ‘for Equitable Research Partnerships’) provides clear guidelines for research conduct between partners from the so-called Global North and Global South. The code was developed in collaboration with UNESCO, European academics and representatives from LMICs, as part of an EU Horizon 2020 project. DG Research and Innovation, the department of the European Commission responsible for EU policy on research, science and innovation, committed to proposing the Code as a reference document for EU research funding applications (TRUST Equitable Research Partnerships 2021).[10]

With the rise of digital technology to tackle health issues comes the need to govern the ethical development of such technologies. The development of AI for health purposes is particularly susceptible to the scourge of ethics dumping. Steps to prevent it must be taken at an early stage – and for this, the EU’s regulation of the clinical development of medicines offers a source of inspiration.

3. From EU clinical trials to AI ethics dumping: the path forward

There are several options available to the EU, as a global health actor, to target AI ethics dumping. Here, we present a menu of regulatory options, at both the EU and international levels and with different degrees of binding force. We also highlight the feasibility of each option, as well as the associated advantages and disadvantages, with the idea that one or a combination of these may be tailored to tackle AI ethics dumping.

Although the possibilities sketched out below focus on EU-driven law and regulation, we caution that such rules may exacerbate the problem – or, at least, contribute to the inappropriate phenomenon of so-called ‘Western-style’ norms being applied to LMICs as the global standard (Corrêa et al. 2023; Gwagwa et al. 2022; Mahamadou et al. 2024). While these EU regulatory risks are ever-present, we recognise that the EU will inevitably take some form of action to regulate the collection of personal data for AI, and that action will have a global reach. For example, the EU’s Global Health Strategy explicitly mentions the EU as a pioneer in the regulation of health data, and foresees a role for the Union in:

‘shaping the digital health ecosystem globally (rules, norms, standards, interoperability), using European examples and best practice’ as well as ‘supporting international rules that are compatible with the EU framework while facilitating person-centred health data governance and protection’.[11]

In this context, the Commission positions EU rules, standards and examples as key reference sources for establishing the boundaries and norms of a global digital health ecosystem.

Two further cautionary notes are warranted. First, (often transnational) private corporations are driving the AI revolution, and their conduct cannot always be controlled by laws alone but rather requires the concomitant shifting of social norms and, often, economic incentives. Second, law as an intervention to regulate company conduct is a rather slow device to prevent rights violations or elicit desired corporate behaviour; law will always lag behind the pace of technology development. Below, we present regulatory options available at the EU level, as well as in international fora, where the EU acts as a single actor.

At the EU level, one option would be to use existing non-binding guidelines to attempt to prevent AI ethics dumping. These include the Ethics Guidelines for Trustworthy AI proposed by the High-Level Expert Group on Artificial Intelligence, set up by the European Commission, and the Living Guidelines on the Responsible Use of Generative AI, published by the European Research Area Forum. While non-binding in nature, these guidelines could be applied to EU-funded international research projects to ensure that EU funding is only provided to projects in which data collection outside the EU is also conducted in line with them.

Such an application of EU guidelines to extra-territorial company conduct can be seen as a form of territorial extension of EU (soft) rules outside the EU (Cremona and Scott 2019). As discussed above, a similar approach has been used in the past in the context of the regulation of clinical trials. Regulation 726/2004 requires that any marketing authorisation application relying on clinical trials conducted outside the EU, for drugs sold within it, include a statement confirming that the ethical requirements of the preceding EU law (Directive 2001/20/EC) have been fulfilled (Perehudoff 2024).[12] Thus, GCP principles must be applied to all clinical trials submitted in the dossier, regardless of their location (EMA Working Group on Clinical Trials conducted outside the EU/EEA 2012). A similar approach can be taken with respect to AI data collection, with the approval of the development, marketing and use of AI systems on the European market made conditional upon ethical data collection throughout the project, regardless of where the data are collected.

The Court of Justice of the European Union (CJEU) has previously allowed territorial extension of EU rules in the context of carbon emissions in international aviation,[13] and in the context of animal welfare outside EU territory,[14] but territorial extension has often been met with strong resistance on normative grounds, usually rooted in concerns about weak jurisdictional links impinging on third states’ sovereignty and in a preference for multilateral action over unilateral imperialism (Krisch 2022; Lichuma 2021).

Others, however, have noted that territorial extension is simply a consequence of European actors taking responsibility for the harms they may cause abroad (Scott 2013). Some such criticism may be addressed with a parallel or alternative focus on multilateral avenues, rather than unilateral action. There is currently no binding international legal instrument that governs AI data gathering. Soft law instruments do exist, such as the UNESCO Recommendation on the Ethics of AI, which emphasises a human rights-based approach. Although not binding, soft law norms can nonetheless have a significant impact, establishing path dependence for future international, regional and national regulation (Bello y Villarino 2023).

Beyond soft law, creating a new internationally binding legal instrument also remains an option. So far, the only legally binding international instrument on AI and human rights is the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, finalised in March 2024. However, it remains to be seen whether many States will ratify the convention: achieving international consensus for a globally binding instrument on AI still seems to be a far-off prospect. Moreover, the current wording of the text does not impose many stringent requirements upon States (van Kolfschooten and Shachar 2023).

If and when such an instrument is created, it is imperative that it equitably reflects the interests of all relevant stakeholders, including different communities from both the Global North and the Global South. An ‘access and benefit-sharing’ system, although imperfect, offers a useful framework for regulating AI data collection globally. This framework, already applied to non-human genetic resources through the Convention on Biological Diversity and its supplementary Nagoya Protocol,[15] offers a way to tackle AI ethics dumping and data colonialism by centring prior informed consent and benefit-sharing arrangements (Ambler et al. 2020; Ho 2022). In the Pandemic Treaty currently being negotiated, several countries have called for the inclusion of access and benefit-sharing frameworks for pathogen data. However, this remains a contentious point among negotiating States, with no consensus. Therefore, it remains to be seen if, and in what form, this takes shape in the final treaty. A system of tiered consent, as proposed by the African Academy of Sciences in its Recommendations for Data and Biospecimen Governance in Africa, also provides a valuable way of thinking about AI data regulation, with research participants able to select from a list of options regarding the use of their data.

Another option could be to integrate benefit-sharing mechanisms with countries in the Global South in the context of the recently adopted EHDS Regulation. The EHDS aims to facilitate the exchange and use of health data within the EU, primarily to support health care delivery, research and policy-making (Marelli et al. 2023). While the final text of the EHDS does not explicitly provide for international benefit-sharing mechanisms, its implementation phase offers opportunities to promote more equitable data governance beyond EU borders. For example, the European Commission could use implementing acts, delegated acts or guidance documents to encourage transparent data-sharing agreements with third countries, including provisions for compensatory measures, technology transfer or capacity-building initiatives. EU-funded health research projects using data sourced from the Global South could, as a condition of funding, be required to include clauses ensuring the involvement of local researchers and fair access to research outputs or other contextually meaningful benefits for data subjects (Anane-Sarpong et al. 2020). It is, however, essential to acknowledge that what counts as a ‘benefit’ may differ between the EU and LMICs; for instance, access to algorithms may be of limited value in settings lacking the digital infrastructure to use them.

4. Concluding remarks

The parallels between the offshoring of clinical trials and the current outsourcing of data collection for AI development reveal more than regulatory gaps – they expose persistent patterns of global inequality in knowledge production and technological innovation. While the EU has taken steps in the past to curb ethics dumping in pharmaceutical research, no equivalent safeguards currently exist to prevent the exploitation of personal data collected in LMICs for the development of AI systems marketed in Europe. This absence is not merely a regulatory oversight but a normative choice with global consequences. By failing to extend its ethical standards beyond its borders, the EU risks reinforcing a model of digital extractivism that disproportionately burdens vulnerable populations while concentrating the benefits of innovation within high-income countries.

If the EU wishes to live up to its self-declared role as a global standard-setter in digital and health governance, it must confront the external effects of its own regulation. This means developing binding rules that ensure fair, transparent and accountable data practices globally – including mechanisms for prior informed consent, equitable benefit-sharing and meaningful inclusion of Global South perspectives in AI governance. Learning from the regulatory trajectory of clinical trials, the EU now has an opportunity to lead the way in setting ethical standards for AI data sourcing that reflect global justice.

Funding statement

PP is funded by the INDIGO Project on Affordable Flu Vaccines, financed by the European Commission’s Horizon 2020 programme (Grant no. 874653). KP is funded by the ‘Global access to medicines through EU law and policy’ project of the Veni talent research programme, which is financed by the Dutch Research Council (NWO). Open access funding provided by University of Amsterdam. The research of HvK, PP and KP was supported by Wellcome Grant Reference No 224856/Z: There is no App for this! Regulating the migration of health apps in Sub Saharan Africa.


1 We use the definition of ‘AI system’ as stipulated in Regulation (EU) 2024/1689 (Artificial Intelligence Act), Article 3(1): ‘“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.

2 AI Index Report (2023).

3 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance) OJ L 119, 4.5.2016, pp. 1–88.

4 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance) PE/24/2024/REV/1 OJ L 2024/1689, 12.7.2024.

5 CJEU, Case C-307/22 (FT v DW), 26 October 2023, ECLI:EU:C:2023:811, §33; CJEU, C-101/01 (Bodil Lindqvist), 6 November 2003, ECLI:EU:C:2003:596, §50.

6 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (Text with EEA relevance) OJ L 117, 5.5.2017, pp. 1–175.

7 Directive 2001/20/EC, Recitals 2 and 6.

8 Article 6(1) of Regulation 726/2004.

9 Article 56(4) of Regulation 726/2004.

10 Stated on the last page of the Global Code of Conduct for Research in Resource-Poor Settings. Available at: https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/global-code-of-conduct-for-research-in-resource-poor-settings_he_en.pdf

11 EU Global Health Strategy, p. 12. Guiding Principle 4 relates to fostering digitalisation as a fundamental enabler.

12 Article 6(1) of Regulation 726/2004.

13 Case C-366/10, Air Transport Association of America and others v Secretary of State for Energy and Climate Change, 21 December 2011, ECLI:EU:C:2011:864.

14 Case C-424/13, Zuchtvieh-Export GmbH v Stadt Kempten, 23 April 2015, ECLI:EU:C:2015:259.

15 The Convention and Protocol do not apply to human genetic resources; see Second Ordinary Meeting of the Conference of the Parties to the Convention on Biological Diversity, Decision COP II/11 at para 2; Conference Of The Parties To The Convention On Biological Diversity, Decision Adopted By The Conference Of The Parties To The Convention On Biological Diversity At Its Tenth Meeting, UNEP/CBD/COP/DEC/X/1 (2010) at I.5.


References

Abebe, R et al. (2021) Narratives and counternarratives on data sharing in Africa. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 329–41. https://doi.org/10.1145/3442188.3445897.CrossRefGoogle Scholar

Albuquerque de Almeida, F and Ricardo, M (2023) Different regulatory framework for medical devices and drugs in the European Union: Impact on clinical research and health technology assessments. The International Journal of Health Planning and Management 38, 1420–34. https://doi.org/10.1002/hpm.3671.CrossRefGoogle ScholarPubMed

Ambler, J et al. (2020) Including digital sequence data in the Nagoya protocol can promote data sharing. Trends in Biotechnology 39, 116–25. https://doi.org/10.1016/j.tibtech.2020.06.009.CrossRefGoogle ScholarPubMed

Anane-Sarpong, E, Wangmo, T and Tanner, M (2020) Ethical principles for promoting health research data sharing with sub-Saharan Africa. Developing World Bioethics 20, 86–95. https://doi.org/10.1111/dewb.12233.CrossRefGoogle ScholarPubMed

Arun, C (2020) AI and the global south: Designing for other worlds. In Dubber, MD, Pasquale, F, and Das, S (ed.), The Oxford Handbook of Ethics of AI. Oxford University Press, pp. 588–606. https://doi.org/10.1093/oxfordhb/9780190067397.013.38.CrossRefGoogle Scholar

Bagcchi, S (2014) Cervical cancer screening trials in India spark controversy. BMJ 348, g3038. https://doi.org/10.1136/bmj.g3038.CrossRefGoogle ScholarPubMed

Bello y Villarino, J. M (2023) Global standard-setting for Artificial Intelligence: Para-regulating international law for AI? The Australian Year Book of International Law Online 1, 157–81. Available at https://doi.org/10.1163/26660229-04101018.CrossRefGoogle Scholar

Birhane, A (2020) Algorithmic colonization of Africa. SCRIPTed 17, 389–409. https://doi.org/10.2966/scrip.170220.389.CrossRefGoogle Scholar

Black, E and Richmond, R (2019) Improving early detection of breast cancer in sub-Saharan Africa: Why mammography may not be the way forward. Globalization and Health 15, 3. https://doi.org/10.1186/s12992-018-0446-6.CrossRefGoogle Scholar

Bradford, A (2020) The Brussels Effect: How the European Union Rules the World. Oxford: Oxford University Press.CrossRefGoogle Scholar

Browne, R (2020) Facebook is Building a Huge Undersea Cable Around Africa to Boost Internet Access in the Continent. Available at https://www.cnbc.com/2020/05/14/facebook-building-undersea-cable-in-africa-to-boost-internet-access.html (accessed 21 July 2023).Google Scholar

Calzati, S (2022) ‘Data sovereignty’ or ‘data colonialism’? Exploring the Chinese involvement in Africa’s ICTs: A document review on Kenya. Journal of Contemporary African Studies 40, 270–85. https://doi.org/10.1080/02589001.2022.2027351.CrossRefGoogle Scholar

Carter, L, Liu, D and Cantrell, C (2020) Exploring the intersection of the digital divide and artificial intelligence: A hermeneutic literature review. AIS Transactions on Human-Computer Interaction 12, 253–75. https://doi.org/10.17705/1thci.00138.CrossRefGoogle Scholar

Cisse, M (2018) Look to Africa to advance artificial intelligence. Nature 562, 461–461. https://doi.org/10.1038/d41586-018-07104-7.CrossRefGoogle ScholarPubMed

Coleman, D (2018) Digital colonialism: The 21st century scramble for Africa through the extraction and control of user data and the limitations of data protection laws note. Michigan Journal of Race & Law 24, 417–40. https://heinonline.org/HOL/P?h=hein.journals/mjrl24&i=429 (accessed 2 November 2022).Google Scholar

Corrêa, NK et al. (2023) Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns 4, 100857. https://doi.org/10.1016/j.patter.2023.100857.CrossRefGoogle ScholarPubMed

Cremona, M and Scott, J (2019) Introduction: EU law beyond EU borders. In Cremona, M and Scott, J (eds), EU Law Beyond EU Borders: The Extraterritorial Reach of EU Law. Oxford University Press, 0. https://doi.org/10.1093/oso/9780198842170.003.0001.CrossRefGoogle Scholar

Davenport, T and Kalakota, R (2019) The potential for artificial intelligence in healthcare. Future Healthcare Journal 6, 94–8. https://doi.org/10.7861/futurehosp.6-2-94.CrossRefGoogle ScholarPubMed

European Commission (2014) Reducing the Risk of Exporting Non Ethical Practices to Third Countries (GARRI-6-2014). Horizon 2020 Work Programme 2014–2015. Available at https://cordis.europa.eu/programme/id/H2020_GARRI-6-2014 (accessed 1 June 2024).Google Scholar

European Medicines Agency (2013) Clinical Trials Submitted in Marketing-Authorisation Applications to the European Medicines Agency (No. EMA/INS/GCP/676319/2012). London, UK: EMA.Google Scholar

European Medicines Agency Working Group on Clinical Trials conducted outside of the EU/EEA (2012) Reflection Paper on Ethical and GCP Aspects of Clinical Trials of Medicinal Products for Human use Conducted Outside of the EU/EEA and Submitted in Marketing Authorisation Applications to the EU Regulatory Authorities (No. EMA/121340/2011). London, UK: EMA.Google Scholar

Flear, ML. (2014) Clinical trials Abroad: The marketable ethics, weak protections and vulnerable subjects of EU law. Cambridge Yearbook of European Legal Studies 16, 75–107. https://doi.org/10.1017/S1528887000002561.CrossRefGoogle Scholar

Floridi, L (2019) Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology 32, 185–93. https://doi.org/10.1007/s13347-019-00354-x.CrossRefGoogle Scholar

Gwagwa, A, Kazim, E and Hilliard, A (2022) The role of the African value of Ubuntu in global AI inclusion discourse: A normative ethics perspective. Patterns 3, 100462. https://doi.org/10.1016/j.patter.2022.100462.CrossRefGoogle ScholarPubMed

Haidar, H and Gooshki, ES (2024) Trial-centrism in the declaration of Helsinki: A challenge for post-study access and preventing data colonialism in AI-driven health research. Canadian Journal of Bioethics 7, 138–9. https://doi.org/10.7202/1114971ar.CrossRefGoogle Scholar

Hassan, Y (2023) Governing algorithms from the South: A case study of AI development in Africa. AI & Society 38, 1429–42. https://doi.org/10.1007/s00146-022-01527-7.

Ho, CW-L (2022) Operationalizing “one health” as “one digital health” through a global framework that emphasizes fair and equitable sharing of benefits from the use of artificial intelligence and related digital technologies. Frontiers in Public Health 10. https://doi.org/10.3389/fpubh.2022.768977.

Hosny, A and Aerts, HJWL (2019) Artificial intelligence for global health. Science 366, 955–6. https://doi.org/10.1126/science.aay5189.

AI Index Report (2023) ‘Annual private investment in artificial intelligence, by focus area’ (data source: NetBase Quid). Available at www.ourworldindata.org (accessed 27 August 2024).

Krisch, N (2022) Jurisdiction unbound: (Extra)territorial regulation as global governance. European Journal of International Law 33, 481–514. https://doi.org/10.1093/ejil/chac028.

Leslie, D et al. (2021) Does “AI” stand for augmenting inequality in the era of covid-19 healthcare? BMJ 372, n304. https://doi.org/10.1136/bmj.n304.

Lichuma, CO (2021) (Laws) made in the ‘first world’: A TWAIL critique of the use of domestic legislation to extraterritorially regulate global value chains. ZaöRV: Zeitschrift für Ausländisches Öffentliches Recht und Völkerrecht 81, 497–532. https://doi.org/10.17104/0044-2348-2021-2-497.

Magalhães, JC and Couldry, N (2021) Giving by taking away: Big tech, data colonialism, and the reconfiguration of social good. International Journal of Communication 15, 20. https://ijoc.org/index.php/ijoc/article/view/15995 (accessed 5 April 2024).

Mahamadou, AJ, Ochasi, A and Altman, RB (2024) Data ethics in the era of healthcare artificial intelligence in Africa: An Ubuntu philosophy perspective. arXiv preprint. https://doi.org/10.48550/arXiv.2406.10121.

Marelli, L et al. (2023) The European health data space: Too big to succeed? Health Policy 135, 104861. https://doi.org/10.1016/j.healthpol.2023.104861.

Nhemachena, A, Hlabangane, N and Kaundjua, MB (2020) Relationality or hospitality in twenty-first century research? Big data, internet of things, and the resilience of coloniality on Africa. Modern Africa: Politics, History and Society 8, 105–39. https://doi.org/10.26806/modafr.v8i1.278.

Nothias, T (2020) Access granted: Facebook’s free basics in Africa. Media, Culture & Society 42, 329–48. https://doi.org/10.1177/0163443719890530.

Novoa-Heckel, G, Bernabe, R and Linares, J (2017) Exportation of unethical practices to low and middle income countries in biomedical research. Revista de Bioética y Derecho 40, 167–77. http://scielo.isciii.es/scielo.php?script=sci_arttext&pid=S1886-58872017000200013&lng=es&nrm=iso&tlng=en (accessed 29 April 2024).

Odero, C (2024) Here’s Proof Big Brother Is Watching. Available at https://cioafrica.co/heres-proof-big-brother-is-watching/ (accessed 31 May 2025).

Owoyemi, A, Owoyemi, J, Osiyemi, A and Boyd, A (2020) Artificial intelligence for healthcare in Africa. Frontiers in Digital Health 2. https://doi.org/10.3389/fdgth.2020.00006.

Oyedemi, TD (2021) Digital coloniality and ‘next billion users’: The political economy of Google Station in Nigeria. Information, Communication & Society 24, 329–43. https://doi.org/10.1080/1369118X.2020.1804982.

Pahar, M, Theron, G and Niesler, T (2022) Automatic Tuberculosis detection in cough patterns using NLP-style cough embeddings. In 2022 International Conference on Engineering and Emerging Technologies (ICEET), pp. 1–6. https://doi.org/10.1109/ICEET56468.2022.10007261.

Perehudoff, SK (2024) From Brussels to the world: The diffusion of EU pharmaceutical legislation towards developing economies. European Journal of Risk Regulation 16, 1–17. https://doi.org/10.1017/err.2024.89.

Petryna, A (2007) Clinical trials offshored: On private sector science and public health. BioSocieties 2, 21–40. https://doi.org/10.1017/S1745855207005030.

Phiri, M and Munoriyarwa, A (2023) Health chatbots in Africa: Scoping review. Journal of Medical Internet Research 25, e35573. https://doi.org/10.2196/35573.

Pillamarapu, M, Mohan, A and Saberwal, G (2019) An analysis of deficiencies in the data of interventional drug trials registered with Clinical Trials Registry – India. Trials 20, 535. https://doi.org/10.1186/s13063-019-3592-0.

Raab, R et al. (2023) Federated electronic health records for the European health data space. The Lancet Digital Health 5, e840–47. https://doi.org/10.1016/S2589-7500(23)00156-5.

Ruttkamp-Bloem, E (2023) Epistemic just and dynamic AI ethics in Africa. In Eke, DO, Wakunuma, K and Akintoye, S (eds), Responsible AI in Africa: Challenges and Opportunities. Cham: Springer International Publishing, pp. 13–34. https://doi.org/10.1007/978-3-031-08215-3_2.

Schipper, I and Weyzig, F (2008) Ethics for Drug Testing in Low and Middle Income Countries: Considerations for European Market Authorisation. SSRN Scholarly Paper, Rochester, NY. Available at https://papers.ssrn.com/abstract=1660433 (accessed 26 August 2024).

Schroeder, D, Cook, J and Hirsch, F (2020) Ethics Dumping: Case Studies from North-South Research Collaborations. Saint Philip Street Press.

Schroeder, D et al. (eds) (2017) Ethics Dumping: Case Studies from North-South Research Collaborations. New York: Springer.

Schroeder, D et al. (2019) Ethics dumping and the need for a global code of conduct. In Schroeder, D, Chatfield, K, Singh, M, Chennells, R and Herissone-Kelly, P (eds), Equitable Research Partnerships: A Global Code of Conduct to Counter Ethics Dumping. Cham: Springer International Publishing, pp. 1–4. https://doi.org/10.1007/978-3-030-15745-6_1.

Scott, J (2013) Territorial sovereignty and territorial extension in an inter-connected world. In Rawlings, R, Leyland, P and Young, A (eds), Sovereignty and the Law: Domestic, European and International Perspectives. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199684069.003.0015.

Soulé, F (2024) Digital Sovereignty in Africa: Moving beyond Local Data Ownership (Policy Brief No. 185). Centre for International Governance Innovation. Available at https://www.cigionline.org/publications/digital-sovereignty-in-africa-moving-beyond-local-data-ownership/.

Srinivasan, S, Johari, V and Jesani, A (2017) Cervical cancer screening in India. In Schroeder, D, Cook, J, Hirsch, F, Fenet, S and Muthuswamy, V (eds), Ethics Dumping: Case Studies from North-South Research Collaborations. Springer, pp. 33–48.

Stegemann, L and Gersch, M (2019) Interoperability – Technical or economic challenge? it – Information Technology 61, 243–52. https://doi.org/10.1515/itit-2019-0027.

Suba, E, Ortega, RE and Mutch, D (2018) Unethical US government-funded cervical screening study in India: US Freedom of Information Act disclosures. Journal of Healthcare, Science and the Humanities. https://www.semanticscholar.org/paper/Unethical-US-Government-Funded-Cervical-Screening-Suba-Ortega/4da557e7b5b0354c27ee5e2a7511c023ac91d343 (accessed 26 August 2024).

TRUST (2018) The TRUST Code – A Global Code of Conduct for Equitable Research Partnerships. https://doi.org/10.48508/GCC/2018.05.

Trust Equitable Research Partnerships (2021) Global Code of Conduct for Research in Resource-Poor Settings. Available at https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/global-code-of-conduct-for-research-in-resource-poor-settings_he_en.pdf (accessed 26 August 2024).

van Kolfschooten, H (2022a) EU regulation of artificial intelligence: Challenges for patients’ rights. Common Market Law Review 59, 81–112. https://doi.org/10.54648/COLA2022005.

van Kolfschooten, H (2022b) The mHealth power paradox: Improving data protection in health apps through self-regulation in the European Union. In Cohen, IG, Minssen, T, Price, WN II, Robertson, C and Shachar, C (eds), The Future of Medical Device Regulation: Innovation and Protection. Cambridge: Cambridge University Press, pp. 63–76. https://doi.org/10.1017/9781108975452.006.

van Kolfschooten, H (2023) The AI cycle of health inequity and digital ageism: Mitigating biases through the EU regulatory framework on medical devices. Journal of Law and the Biosciences 10, lsad031. https://doi.org/10.1093/jlb/lsad031.

van Kolfschooten, H (2025) Towards an EU charter of digital patients’ rights in the age of artificial intelligence. Digital Society 4, 6. https://doi.org/10.1007/s44206-025-00159-w.

van Kolfschooten, H and Shachar, C (2023) The Council of Europe’s AI convention (2023–2024): Promises and pitfalls for health protection. Health Policy 138, 104935. https://doi.org/10.1016/j.healthpol.2023.104935.

Wairegi, A, Omino, M and Rutenberg, I (2021) AI in Africa: Framing AI through an African Lens. Communication, Technologies et Développement 10. https://doi.org/10.4000/ctd.4775.

Wang, C et al. (2024) The artificial intelligence divide: Who is the most vulnerable? New Media & Society 27. https://doi.org/10.1177/14614448241232345.

World Health Organization (2021) Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva: World Health Organization.

Zuckerberg, M (2013) Is Connectivity a Human Right? Available at https://www.facebook.com/isconnectivityahumanright (accessed 5 April 2024).