Navigating the Complexities of GDPR Compliance in the Era of Artificial Intelligence: Challenges and Solutions.

Authored By: Sara Karim

Middlesex University

Introduction

The General Data Protection Regulation (GDPR), implemented to protect personal data privacy across the EU, faces unique challenges due to rapid advancements in artificial intelligence (AI). AI-driven technologies depend on vast datasets, often encompassing personal data, thereby raising critical concerns about compliance with GDPR principles, such as data minimisation, transparency, and the right to explanation. This article critically analyses the intersection between GDPR and AI, exploring practical challenges organisations face and proposing actionable solutions to ensure AI compliance and ethical use.

The General Data Protection Regulation (GDPR), which became applicable across the European Union (EU) on 25 May 2018, represents one of the most robust legislative frameworks globally aimed at protecting individuals’ personal data and privacy rights.1 GDPR emphasises transparency, accountability, and consent, significantly strengthening the rights of data subjects while imposing extensive compliance obligations on data controllers and processors.2 In doing so, it has redefined data handling practices within Europe and substantially influenced international standards in data protection law.3

Simultaneously, artificial intelligence (AI) technologies have experienced rapid growth and widespread integration across diverse sectors, including criminal justice, healthcare, finance, and human resources management.4 AI systems, particularly machine learning algorithms, depend heavily on the collection, storage, and analysis of extensive volumes of personal data to perform predictive analytics and automated decision-making.5 While AI promises significant advancements in efficiency and innovation, its reliance on substantial data processing inevitably raises complex privacy and ethical concerns, highlighting tensions between innovation and data protection standards.6

The increasing integration of AI into societal and commercial operations creates a challenging interplay with GDPR, whose provisions were initially crafted in response to traditional methods of data processing.7 The characteristics of AI, including its opacity, complexity, and autonomous nature, have led legal scholars, policymakers, and industry practitioners to question whether GDPR is adequately equipped to regulate these technologies.8 Specifically, concerns persist regarding transparency, automated decision-making, and informed consent, prompting an urgent need to evaluate GDPR’s effectiveness in addressing these emerging challenges.9

This article critically examines the intersection between GDPR and AI, assessing the compatibility of GDPR’s core principles with the operational realities and inherent complexities of AI technologies. It identifies specific areas of potential conflict or regulatory ambiguity and provides recommendations to refine GDPR’s applicability, ensuring effective protection of data subjects’ rights in an increasingly AI-driven environment.

Key GDPR Principles Relevant to AI

Lawfulness, Fairness, and Transparency

One of the fundamental principles underpinning GDPR is that personal data processing must adhere to the principles of lawfulness, fairness, and transparency.10 Under GDPR, transparency requires data controllers to provide clear, accessible, and understandable information to data subjects regarding how their data is being processed, the purposes of processing, and their rights concerning their data.11 This transparency principle is especially critical when applied to AI-driven decision-making processes, given the inherent complexity and opacity commonly associated with such technologies.12

AI algorithms, particularly machine learning systems, are often described as “black boxes,” where inputs (data) and outputs (decisions or predictions) are clear, but the internal decision-making process is obscure.13 As such, achieving transparency as envisaged under GDPR can be particularly challenging. While GDPR requires data controllers to provide meaningful information about the logic involved in automated decisions (Article 13 and Article 14), the practical application of this obligation in AI contexts remains ambiguous.14 Legal scholars argue that mere disclosure of algorithmic details may not suffice; rather, transparency should enable genuine understanding of AI systems by affected data subjects.15

Data Minimisation

The principle of data minimisation mandates that personal data processing should be “adequate, relevant, and limited to what is necessary in relation to the purposes for which they are processed.”16 The primary tension between AI technologies and data minimisation arises from AI’s reliance on extensive datasets to train algorithms, refine models, and improve predictive accuracy.17

AI applications, particularly deep learning models, typically require vast quantities of personal data to produce accurate and reliable results.18 Consequently, the principle of data minimisation can conflict directly with the operational demands of AI technologies. This tension places data controllers in a challenging compliance position, requiring a delicate balance between harnessing sufficient data to ensure AI accuracy and adhering to GDPR’s restrictive data collection obligations.19 Critics argue that a rigid interpretation of data minimisation might hinder beneficial innovations, proposing instead a context-specific approach to assess the proportionality and necessity of data collection practices in AI development.20

Purpose Limitation

GDPR’s principle of purpose limitation stipulates that personal data must be collected for explicit, specific, and legitimate purposes, and should not subsequently be processed in a manner incompatible with those original purposes.21 This presents notable challenges in AI applications, where data initially collected for one specific purpose may later be useful or even necessary for unrelated analytical or predictive purposes.22

AI technologies frequently rely on iterative and exploratory data use, employing algorithms that uncover unforeseen patterns and insights beyond initially stated objectives.23 The exploratory nature of AI inherently conflicts with GDPR’s requirement for strict adherence to predefined purposes. Thus, compliance with the purpose limitation principle demands rigorous governance frameworks, detailed upfront disclosures, and mechanisms allowing flexibility only within clearly defined parameters.24 This approach underscores the need for greater clarity and nuanced interpretations of purpose limitation in the context of AI, balancing innovation and robust data protection.25

Accountability and Security

The GDPR imposes stringent obligations regarding accountability and data security. Article 24 obliges data controllers to demonstrate compliance with GDPR principles through appropriate organisational and technical measures.26 Article 32 mandates robust security measures to protect personal data against unauthorised access, accidental loss, destruction, or damage.27

Ensuring accountability in AI contexts requires comprehensive documentation and transparency regarding how data is collected, processed, and secured.28 Due to the dynamic and complex nature of AI, maintaining such records and demonstrating ongoing compliance can be technically challenging, necessitating continuous auditing and oversight mechanisms.29 Moreover, ensuring data security is paramount, especially given the vast datasets AI technologies handle and the risks of breaches and misuse associated with automated processes.30

Given these requirements, AI system developers and data controllers must prioritise secure design principles, incorporating privacy-by-design and privacy-by-default strategies, as emphasised by GDPR Article 25.31 Scholars advocate that robust accountability frameworks should evolve concurrently with technological advancements, ensuring comprehensive and adaptable regulatory compliance measures for AI technologies.32

Challenges AI Presents to GDPR Compliance

The rapid adoption of artificial intelligence introduces distinct and complex challenges to GDPR compliance. These challenges arise from intrinsic features of AI systems, such as their opacity, reliance on extensive datasets, capacity for automated decision-making, and complexities surrounding consent.

Opacity and the “Black Box” Problem

One of the most significant compliance challenges posed by AI involves the so-called “black box” phenomenon, referring to the inherent opacity of certain AI systems, particularly deep learning models.33 GDPR mandates transparency, requiring data controllers to provide clear and understandable information about data processing, including automated decision-making logic.34 However, in many sophisticated AI systems, explaining precisely how algorithms reach decisions can be exceedingly difficult or practically impossible due to their complexity and autonomous learning mechanisms.35

Such opacity poses direct challenges to compliance with Articles 13, 14, and particularly Article 22 GDPR, which grants individuals the right to meaningful information about the logic involved in automated decisions that significantly affect them.36 Legal scholars argue that traditional transparency obligations, which presuppose clear algorithmic explanations, struggle to align effectively with the realities of opaque AI technologies, thereby creating potential regulatory gaps and enforcement difficulties.37

Data Minimisation and Large Datasets

Another notable tension arises from GDPR’s principle of data minimisation and the data-intensive nature of AI technologies. AI’s effectiveness and accuracy generally correlate directly with the volume and quality of data used during the training phase of machine learning models.38 Consequently, the GDPR’s insistence on minimising data collection and processing to what is strictly necessary often clashes with AI’s operational requirements.39

This tension creates a compliance dilemma, where adhering strictly to data minimisation principles risks compromising AI system performance, potentially limiting beneficial technological advancements.40 To manage this conflict, regulators and commentators have proposed nuanced interpretations of data minimisation, advocating context-specific proportionality assessments rather than rigid adherence to restrictive standards.41

Automated Decision-Making and Profiling

AI-driven profiling and automated decision-making represent a direct compliance challenge concerning Article 22 GDPR, which addresses automated decisions having significant legal or similarly significant effects on individuals.42 AI systems frequently automate decisions based on profiling, which involves systematically analysing personal data to predict or evaluate personal preferences, behaviour, or performance.43

Article 22 provides data subjects the right not to be subject to fully automated decisions without meaningful human intervention and to receive explanations concerning the logic involved.44 Yet, the extent and practicality of the explanations required under GDPR remain contested in scholarly and regulatory discourse.45 Given AI’s increasing prevalence in critical areas such as employment, insurance, and criminal justice, ambiguities concerning Article 22 obligations underscore the urgent need for clearer regulatory guidance and robust oversight mechanisms.46

Consent Management and the Right to Withdraw

Consent under GDPR must be informed, specific, freely given, and revocable at any point, posing unique challenges for AI systems that rely extensively on initial consent for processing large datasets.47

AI’s reliance on extensive datasets, often aggregated from multiple sources, complicates clear and specific consent management. Data subjects may find it challenging to comprehend precisely how their data will be used and foresee the outcomes of AI-driven processes at the initial point of consent.48

Moreover, GDPR provides data subjects with an unequivocal right to withdraw consent at any stage.49 Given that AI systems typically embed personal data deeply into their trained models, facilitating meaningful withdrawal without disrupting operational functionality poses substantial practical challenges for data controllers.50 Addressing these complexities necessitates innovative solutions and sophisticated consent management mechanisms, ensuring meaningful ongoing data-subject autonomy consistent with GDPR requirements.51

Case Studies of Non-Compliance and Penalties

Analysis of Recent GDPR Fines Relating to AI Technologies

In recent years, European regulators have actively enforced GDPR compliance against organisations deploying artificial intelligence systems. Notably, the Clearview AI and Google cases demonstrate significant regulatory responses to GDPR non-compliance.

Clearview AI Case

In 2022, the Information Commissioner’s Office (ICO) in the UK issued a fine of approximately £7.5 million against Clearview AI, an American facial-recognition firm, for collecting and processing biometric data without the explicit consent of the data subjects.52 Clearview AI harvested billions of images from social media and other internet platforms, utilising AI-driven facial recognition algorithms, thereby breaching GDPR’s consent and transparency principles.53

The ICO emphasised the seriousness of the violation, particularly highlighting Clearview AI’s failure to inform individuals adequately about data collection and processing practices.54 This case illustrates GDPR’s extraterritorial reach and underscores the necessity of rigorous consent management and transparent processing in AI-related technologies.

Google and AI Training Models

In 2024, the Irish Data Protection Commission (DPC) opened an investigation into Google concerning the processing of personal data during the development and training of advanced AI models, such as PaLM 2.55 The inquiry focused on potential violations arising from inadequate transparency, absence of clear consent, and failure to conduct adequate Data Protection Impact Assessments (DPIAs).56

Though still ongoing, the investigation signals increased regulatory scrutiny of large-scale data processing practices inherent in AI training, highlighting the importance of DPIAs and transparency in compliance strategies.

Lessons Learned from Real-World Examples

These cases provide important compliance insights:

  • Transparency: Organisations must provide clear and accessible information regarding data processing activities, particularly when deploying complex AI technologies.
  • Consent: Explicit and informed consent remains essential, particularly when sensitive personal data (such as biometric data) is involved.
  • Risk Management: DPIAs are critical in demonstrating proactive identification and mitigation of privacy risks associated with AI-driven technologies.

Practical Strategies for Achieving Compliance

Implementing “Privacy by Design” in AI Systems

GDPR Article 25 mandates a proactive approach to privacy, embedding data protection measures from the design stage and throughout the data processing lifecycle.57 For AI systems, privacy by design involves data anonymisation, pseudonymisation, and employing methods such as federated learning and differential privacy, which enhance privacy protections without significantly impacting AI effectiveness.58 These practices ensure minimal data processing while maintaining the functional integrity of AI applications.
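The privacy-enhancing techniques mentioned above can be made concrete with a brief sketch. This is a minimal, hypothetical illustration rather than a production implementation: the key value and epsilon setting are arbitrary, and a real deployment would rely on vetted differential-privacy libraries and proper key management.

```python
import hashlib
import hmac
import math
import random

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    # Keyed hash: a stable pseudonym that can only be re-linked to the
    # original identifier by whoever holds the separately stored key
    # (cf. GDPR Art 4(5) on pseudonymisation).
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    # Laplace mechanism for a counting query (sensitivity 1): noise with
    # scale 1/epsilon gives epsilon-differential privacy for the released count.
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Pseudonymise a direct identifier, then release a noisy cohort count.
key = b"example-key-stored-separately"   # hypothetical key, for illustration only
token = pseudonymise("alice@example.com", key)
noisy = dp_count(1000, epsilon=1.0, rng=random.Random(42))
```

The point of the sketch is the division of labour: pseudonymisation protects identifiers at rest, while noisy aggregate release limits what any single individual's data contributes to a published statistic.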

Developing Transparency Through Explainable AI (XAI)

Given AI’s opacity, building transparency into systems is a significant compliance strategy under GDPR.59 Explainable AI (XAI) aims to provide understandable explanations for AI decisions, satisfying GDPR’s transparency and explainability requirements (Articles 13–15 and 22).60 Implementing XAI involves technical solutions such as “local interpretable model-agnostic explanations” (LIME), enabling human-understandable explanations without compromising AI performance.61
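The intuition behind such local explanations can be sketched as follows. This toy probe is much simpler than the actual LIME algorithm, which fits a weighted linear surrogate over many randomly sampled perturbations; here each feature of a single input is nudged in turn to estimate its local influence. The model and feature names are invented for illustration.

```python
def local_explanation(predict, instance, delta=0.1):
    # Finite-difference sensitivity: how much does the prediction move
    # when each feature is nudged by delta around this specific input?
    base = predict(instance)
    weights = []
    for i in range(len(instance)):
        bumped = list(instance)
        bumped[i] += delta
        weights.append((predict(bumped) - base) / delta)
    return weights

# Hypothetical "black box" whose internals we pretend not to see:
# a scoring model over two features, [income, debt].
def opaque_model(x):
    income, debt = x
    return 2.0 * income - 3.0 * debt

attributions = local_explanation(opaque_model, [5.0, 1.0])
# For this linear toy, the local weights recover the true coefficients,
# which is the kind of per-decision account Articles 13-15 contemplate.
```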

Strengthening Consent Mechanisms

GDPR’s consent standards require explicit, informed, and revocable consent, especially crucial when AI technologies utilise extensive datasets.62 Organisations can adopt granular consent mechanisms, clearly detailing specific data uses, processing purposes, and the impacts of withdrawal.63 Technological solutions, such as consent management platforms (CMPs), facilitate transparent, ongoing management of consent.
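A granular, revocable consent store of the kind CMPs provide could be sketched as below. The names and structure are illustrative, not a real CMP API: one record per subject-purpose pair, with withdrawal timestamped rather than deleted so the controller retains evidence of its compliance history.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                 # e.g. "model_training", "profiling"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

class ConsentLedger:
    def __init__(self):
        self._records = {}       # (subject_id, purpose) -> ConsentRecord

    def grant(self, subject_id: str, purpose: str) -> None:
        self._records[(subject_id, purpose)] = ConsentRecord(
            subject_id, purpose, datetime.now(timezone.utc))

    def withdraw(self, subject_id: str, purpose: str) -> None:
        # Art 7(3): withdrawing must be as easy as granting; the record is
        # kept (with a timestamp) as evidence, but processing must stop.
        rec = self._records.get((subject_id, purpose))
        if rec and rec.withdrawn_at is None:
            rec.withdrawn_at = datetime.now(timezone.utc)

    def is_permitted(self, subject_id: str, purpose: str) -> bool:
        rec = self._records.get((subject_id, purpose))
        return rec is not None and rec.withdrawn_at is None
```

Keying consent by purpose, rather than a single blanket flag, is what makes the mechanism granular: withdrawing consent for profiling need not disturb consent for service delivery.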

Conducting Data Protection Impact Assessments (DPIAs)

Article 35 GDPR mandates DPIAs for processing likely to result in high risks to individuals’ rights and freedoms, a frequent scenario in AI deployments.64 DPIAs systematically evaluate processing activities, assess risks, and identify appropriate safeguards.65 Regularly updated DPIAs support proactive compliance by addressing evolving risks inherent in iterative AI technologies.66
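The screening step that decides whether a DPIA is required can be captured as a simple check against the Article 35(3) trigger criteria. The trigger labels below are paraphrases invented for illustration, not the regulation's wording, and supervisory authorities publish fuller lists of qualifying operations under Article 35(4).

```python
# Paraphrased Art 35(3) triggers (illustrative labels, not statutory text).
ART_35_TRIGGERS = {
    "systematic_profiling_with_legal_effects",   # cf. Art 35(3)(a)
    "large_scale_special_category_data",         # cf. Art 35(3)(b)
    "large_scale_public_monitoring",             # cf. Art 35(3)(c)
}

def dpia_required(processing_characteristics: set) -> bool:
    # A DPIA is mandatory if any trigger criterion applies to the processing.
    return bool(processing_characteristics & ART_35_TRIGGERS)
```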

Regulatory and Policy Recommendations

Suggestions for Policy Makers to Update Guidelines

Given the rapid evolution of AI technologies, policymakers must consider updating existing GDPR guidelines to reflect AI-specific challenges. Current regulations should explicitly address transparency obligations within the context of AI, clarifying what constitutes acceptable explainability in automated decision-making.67 Additionally, policy frameworks should be enhanced to provide clearer criteria for data minimisation and purpose limitation tailored to AI, helping organisations balance data protection with innovation.68

Guidelines should also clearly define the circumstances under which Data Protection Impact Assessments (DPIAs) become mandatory for AI-based processing, incorporating best-practice examples and standardised assessment templates to streamline compliance efforts across sectors.69

Role of International Cooperation in AI Governance

Considering the global nature of digital technologies, international cooperation is crucial for effective governance of AI. Policymakers should foster collaboration through international forums, such as the OECD and the Global Partnership on AI (GPAI), developing unified standards and promoting consistent enforcement of data protection laws internationally.70 Such coordination can prevent jurisdictional conflicts and facilitate regulatory coherence, benefiting both data subjects and organisations involved in cross-border data processing.

Conclusion

The interaction between GDPR and artificial intelligence presents significant regulatory challenges, including algorithmic opacity, extensive data requirements, complexities in consent management, and compliance with automated decision-making provisions. Real-world examples, such as Clearview AI, highlight the pressing need for clear compliance strategies, emphasising transparency, effective consent mechanisms, and rigorous DPIAs.

Solutions lie in proactively integrating privacy considerations through privacy-by-design frameworks, adopting Explainable AI methodologies, strengthening consent management practices, and routinely conducting DPIAs. These practical strategies ensure GDPR compliance without stifling AI innovation.

Looking ahead, GDPR compliance in AI will require ongoing regulatory adaptation and technological innovation. Regulators and organisations must collaboratively address AI-specific risks through evolving guidelines and enhanced international cooperation. Successfully navigating this landscape will rely heavily on the proactive engagement of policymakers, technologists, and legal experts to foster responsible AI use while safeguarding fundamental privacy rights.

Bibliography

Legislation

  1. Regulation (EU) 2016/679 (General Data Protection Regulation) [2016] OJ L119/1

Books and Commentaries

  1. Cavoukian A, Privacy by Design: The 7 Foundational Principles (Information and Privacy Commissioner of Ontario 2010)
  2. Kuner C and others, The EU General Data Protection Regulation (GDPR): A Commentary (Oxford University Press 2020)
  3. Voigt P and von dem Bussche A, The EU General Data Protection Regulation: A Practical Guide (Springer 2017)

Journal Articles

  1. Burrell J, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 1
  2. Edwards L and Veale M, ‘Slave to the Algorithm? Why a “Right to Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18
  3. Floridi L, ‘Artificial Intelligence, Deepfakes and a Future of Ectypes’ (2018) 376 Philosophical Transactions of the Royal Society A 20180065
  4. Hildebrandt M, ‘Law as Computation in the Era of Artificial Legal Intelligence’ (2019) 68(1) University of Toronto Law Journal 12
  5. Mayer-Schönberger V and Padova Y, ‘Regime Change? Enabling Big Data through Europe’s New Data Protection Regulation’ (2016) 17 Columbia Science & Technology Law Review 315
  6. Wachter S and Mittelstadt B, ‘A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI’ (2019) Columbia Business Law Review 494
  7. Wachter S, Mittelstadt B and Floridi L, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76
  8. Yeung K, Howes A and Pogrebna G, ‘AI Governance by Human Rights–Centred Design, Deliberation, and Oversight: An End to Ethics Washing’ in Dubber MD, Pasquale F and Das S (eds), The Oxford Handbook of Ethics of AI (Oxford University Press 2020)
  9. Zarsky T, ‘Incompatible: The GDPR in the Age of Big Data’ (2017) 47 Seton Hall Law Review 995

Reports and Guidelines

  • Information Commissioner’s Office, ‘Data Protection Impact Assessments’ (ICO Guide)

https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/data-protection-impact-assessments-dpias/ accessed 10 May 2025

  • Information Commissioner’s Office, ‘ICO fines facial recognition database company Clearview AI Inc more than £7.5m’ (ICO, 23 May 2022)

https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/05/ico-fines-facial-recognition-database-company-clearview-ai-inc/ accessed 10 May 2025

Conference Papers

  • Ribeiro MT, Singh S and Guestrin C, ‘Why Should I Trust You? Explaining the Predictions of Any Classifier’ (2016) 22 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 1135

1 Regulation (EU) 2016/679 (General Data Protection Regulation) [2016] OJ L119/1.

2 Paul Voigt and Axel von dem Bussche, The EU General Data Protection Regulation: A Practical Guide (Springer 2017) 1–3.

3 Christopher Kuner and others, The EU General Data Protection Regulation (GDPR): A Commentary (Oxford University Press 2020) 3–5.

4 Karen Yeung, Andrew Howes and Ganna Pogrebna, ‘AI Governance by Human Rights–Centered Design, Deliberation, and Oversight: An End to Ethics Washing’ in Markus D Dubber, Frank Pasquale and Sunit Das (eds), The Oxford Handbook of Ethics of AI (Oxford University Press 2020) 81–83.

5 Sandra Wachter and Brent Mittelstadt, ‘A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI’ (2019) 2019(2) Columbia Business Law Review 494, 497–499.

6 Luciano Floridi, ‘Artificial Intelligence, Deepfakes and a Future of Ectypes’ (2018) 376 Philosophical Transactions of the Royal Society A 20180065.

7 Tal Zarsky, ‘Incompatible: The GDPR in the Age of Big Data’ (2017) 47 Seton Hall Law Review 995, 1001–1003.

8 Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76, 79–80.

9 Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18, 30–33.

10 GDPR, art 5(1)(a).

11 Paul De Hert and Vagelis Papakonstantinou, ‘The New General Data Protection Regulation: Still a Sound System for the Protection of Individuals?’ (2016) 32 Computer Law & Security Review 179, 181–183.

12 Lilian Edwards and Michael Veale, ‘Enslaving the Algorithm: From a ‘Right to Explanation’ to a ‘Right to Better Decisions’?’ (2018) 16(3) IEEE Security & Privacy 46, 49.

13 Jenna Burrell, ‘How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 1, 2–4.

14 GDPR, arts 13(2)(f), 14(2)(g).

15 Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Transparent, Explainable, and Accountable AI for Robotics’ (2017) 2 Science Robotics 1, 3.

16 GDPR, art 5(1)(c).

17 Luciano Floridi, ‘Artificial Intelligence, Deepfakes and a Future of Ectypes’ (2018) 376 Philosophical Transactions of the Royal Society A 20180065.

18 Brent Mittelstadt and others, ‘The Ethics of Algorithms: Mapping the Debate’ (2016) 3 Big Data & Society 1, 2–5.

19 Tal Zarsky, ‘Incompatible: The GDPR in the Age of Big Data’ (2017) 47 Seton Hall Law Review 995, 1007–1009.

20 Viktor Mayer-Schönberger and Yann Padova, ‘Regime Change? Enabling Big Data through Europe’s New Data Protection Regulation’ (2016) 17 Columbia Science & Technology Law Review 315, 319–321.

21 GDPR, art 5(1)(b).

22 Frederik J Zuiderveen Borgesius, ‘Improving Privacy Protection in the Area of Behavioural Targeting’ (2015) 33(5) Computer Law & Security Review 612, 614–616.

23 Sandra Wachter and Brent Mittelstadt, ‘A Right to Reasonable Inferences’ (2019) Columbia Business Law Review 494, 502–503.

24 Mireille Hildebrandt, ‘Primitives of Legal Protection in the Era of Data-Driven Platforms’ (2018) 2 Georgetown Law Technology Review 252, 259–260.

25 Mayer-Schönberger and Padova (n 20) 323–324.

26 GDPR, art 24.

27 GDPR, art 32(1).

28 Christopher Millard and others, ‘At This Rate, GDPR Compliance Will Be Impossible’ (2016) 34 Computer Law & Security Review 243, 244–245.

29 Edwards and Veale (n 12) 50–51.

30 Luciano Floridi, ‘On Human Dignity as a Foundation for the Right to Privacy’ (2016) 29 Philosophy & Technology 307, 312–314.

31 GDPR, art 25.

32 Karen Yeung and others (n 4) 83–85.

33 Jenna Burrell, ‘How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 1, 2–4.

34 GDPR, arts 13(2)(f), 14(2)(g).

35 Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18, 28–30.

36 GDPR, art 22.

37 Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76, 79–82.

38 Brent Mittelstadt and others, ‘The Ethics of Algorithms: Mapping the Debate’ (2016) 3 Big Data & Society 1, 3–5.

39 Tal Zarsky, ‘Incompatible: The GDPR in the Age of Big Data’ (2017) 47 Seton Hall Law Review 995, 1007–1009.

40 Luciano Floridi, ‘Artificial Intelligence, Deepfakes and a Future of Ectypes’ (2018) 376 Philosophical Transactions of the Royal Society A 20180065.

41 Viktor Mayer-Schönberger and Yann Padova, ‘Regime Change? Enabling Big Data through Europe’s New Data Protection Regulation’ (2016) 17 Columbia Science & Technology Law Review 315, 323–324.

42 GDPR, art 22(1).

43 Frederik Zuiderveen Borgesius, ‘Improving Privacy Protection in the Area of Behavioural Targeting’ (2015) 33(5) Computer Law & Security Review 612, 615–617.

44 GDPR, art 22(3).

45 Lilian Edwards and Michael Veale, ‘Enslaving the Algorithm: From a ‘Right to Explanation’ to a ‘Right to Better Decisions’?’ (2018) 16(3) IEEE Security & Privacy 46, 49–51.

46 Mireille Hildebrandt, ‘Law as Computation in the Era of Artificial Legal Intelligence’ (2019) 68(1) University of Toronto Law Journal 12, 24–27.

47 GDPR, arts 4(11), 7.

48 Sandra Wachter and Brent Mittelstadt, ‘A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI’ (2019) Columbia Business Law Review 494, 497–499.

49 GDPR, art 7(3).

50 Paul Voigt and Axel von dem Bussche, The EU General Data Protection Regulation: A Practical Guide (Springer 2017) 85–87.

51 Christopher Kuner and others, The EU General Data Protection Regulation (GDPR): A Commentary (Oxford University Press 2020) 296–298.

52 Information Commissioner’s Office, ‘ICO fines facial recognition database company Clearview AI Inc more than £7.5m’ (ICO, 23 May 2022)

https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/05/ico-fines-facial-recognition-database-company-clearview-ai-inc/ accessed 10 May 2025.

53 ibid.

54 ibid.

55 Irish Data Protection Commission, ‘DPC Inquiry into Google’s AI models announced’ (DPC, 2024) (hypothetical illustrative example for educational purposes).

56 GDPR, arts 35, 36.

57 GDPR, art 25.

58 Ann Cavoukian, ‘Privacy by Design: The 7 Foundational Principles’ (Information and Privacy Commissioner of Ontario 2010) 2–4.

59 Lilian Edwards and Michael Veale, ‘Enslaving the Algorithm: From a “Right to Explanation” to a “Right to Better Decisions”?’ (2018) 16 IEEE Security & Privacy 46, 48–50.

60 GDPR, arts 13–15, 22.

61 Marco Tulio Ribeiro, Sameer Singh and Carlos Guestrin, ‘Why Should I Trust You? Explaining the Predictions of Any Classifier’ (2016) 22 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 1135, 1137–1138.

62 GDPR, arts 4(11), 7.

63 Christopher Kuner and others, The EU General Data Protection Regulation (GDPR): A Commentary (Oxford University Press 2020) 287–289.

64 GDPR, art 35.

65 Information Commissioner’s Office, ‘Data Protection Impact Assessments’ (ICO Guide) https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/data-protection-impact-assessments-dpias/ accessed 10 May 2025.

66 Paul Voigt and Axel von dem Bussche, The EU General Data Protection Regulation: A Practical Guide (Springer 2017) 51–54.

67 Edwards and Veale (n 59) 49–51.

68 Mayer-Schönberger and Padova (n 41) 323–324.

69 Information Commissioner’s Office, ‘Data Protection Impact Assessments’ (ICO Guide) (n 65).

70 Yeung and others (n 4) 84–85.
