ARTIFICIAL INTELLIGENCE AND THE LAW: CHALLENGES TO ACCOUNTABILITY, PRIVACY AND HUMAN RIGHTS

Authored By: Anju S

Government Law College, Chengalpattu

INTRODUCTION

Artificial Intelligence (AI) has emerged as one of the most transformative technologies of the twenty-first century, reshaping industries, governance systems, and social interactions. From automated medical diagnoses to algorithm-driven law enforcement tools, AI has increasingly influenced decision-making processes that were traditionally performed by humans. Its capacity to process vast quantities of data and learn from experience has allowed for unprecedented efficiency and innovation. However, alongside these benefits, AI poses significant legal and ethical challenges that existing legal frameworks struggle to address.[1]

The law has historically evolved in response to human behaviour, assuming the presence of intent, foreseeability, and personal responsibility. AI systems, by contrast, operate autonomously, often producing outcomes beyond the direct control of their creators or users. This shift raises fundamental questions regarding liability, transparency, and protection of individual rights. Moreover, AI’s heavy reliance on personal data threatens the right to privacy, while algorithmic bias risks reinforcing social discrimination.[2]

This article critically examines the legal implications of artificial intelligence with a particular focus on accountability, privacy, and human rights. It also analyzes the limitations of current legal frameworks, including the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023 in India, as well as the European Union’s General Data Protection Regulation (GDPR), to highlight the urgent need for comprehensive AI-specific regulation.

ARTIFICIAL INTELLIGENCE IN CONTEMPORARY GOVERNANCE AND SOCIETY

Artificial Intelligence refers to computational systems capable of performing tasks that ordinarily require human intelligence, such as learning, reasoning, pattern recognition, and predictive analysis. Modern AI relies largely on machine learning and deep learning techniques that process enormous datasets to generate insights and automate decisions.[3]

Governments and private institutions increasingly deploy AI across various sectors. In healthcare, AI assists in diagnosing illnesses and predicting disease outbreaks. Financial institutions use algorithms to assess creditworthiness and detect fraud. Law enforcement agencies employ predictive policing tools to identify crime hotspots, while courts in some jurisdictions utilize algorithmic risk assessments in bail and sentencing decisions. Corporations rely on AI-driven recruitment software to screen job applicants and determine employee performance.[4]

While these applications promise efficiency and objectivity, they also concentrate significant power in technological systems that lack transparency. The complexity of machine-learning models often makes it difficult to understand how particular decisions are reached. This “black box” nature of AI undermines legal principles of accountability and due process, especially when such systems influence rights and freedoms.[5]

ACCOUNTABILITY AND LIABILITY FOR AI-INDUCED HARM

One of the most complex legal challenges posed by artificial intelligence concerns the attribution of responsibility when harm occurs. Traditional liability regimes are grounded in human conduct and fault-based standards such as negligence or intentional wrongdoing. AI systems, however, function autonomously and may evolve unpredictably through continuous learning processes.[6]

For instance, when an autonomous vehicle causes a collision, determining fault becomes problematic. The accident could stem from flawed programming, insufficient training data, hardware malfunction, or unforeseen environmental conditions. Assigning responsibility among software developers, manufacturers, system operators, and users presents substantial legal uncertainty.

Product liability law provides limited remedies, as it typically requires proof that a product was defective at the time of sale. AI systems, by contrast, change their behaviour over time through algorithmic learning, complicating the identification of defects.[7]

Moreover, some scholars have suggested granting legal personhood to AI systems to address liability gaps. However, such proposals remain controversial and risk diverting accountability away from human actors who design and deploy these technologies. Granting legal status to machines could undermine victim compensation and weaken deterrence.[8]

The absence of clear liability frameworks not only leaves victims without effective remedies but also creates regulatory uncertainty for businesses, potentially stifling innovation or enabling irresponsible deployment of harmful technologies.

ARTIFICIAL INTELLIGENCE AND THE RIGHT TO PRIVACY

Privacy concerns constitute one of the most significant legal challenges associated with AI. Most AI systems depend on extensive data collection, including personal, biometric, and behavioural information. Facial recognition technologies, smart surveillance systems, and predictive analytics tools continuously monitor individuals, often without their explicit knowledge or consent.[9]

The right to privacy is recognized as a fundamental human right under international law.[10] AI-driven surveillance capabilities, however, have dramatically expanded both state and corporate capacity to track individuals’ movements, preferences, and interactions. This erosion of privacy threatens personal autonomy and democratic freedoms.

Constitutional courts worldwide have reinforced this protection. In Justice K.S. Puttaswamy v Union of India[11], the Supreme Court of India affirmed privacy as a fundamental right intrinsic to human dignity and personal liberty. This landmark judgment has significant implications for AI-driven data processing and mass surveillance practices.

In India, the Information Technology Act, 2000 provides a basic framework for regulating electronic data and cyber activities. While it addresses issues such as unauthorized access and data breaches, it was enacted long before the emergence of sophisticated AI systems and does not regulate automated data profiling or algorithmic surveillance.

The Digital Personal Data Protection Act, 2023 seeks to strengthen data protection by emphasizing consent, lawful processing, and data security. However, it remains limited in addressing AI-specific concerns such as automated decision-making, large-scale data analytics, and predictive behavioural profiling.

Internationally, the European Union’s General Data Protection Regulation (GDPR) represents one of the most comprehensive data protection regimes. It includes provisions on transparency, purpose limitation, and data subject rights. Notably, it grants individuals the right not to be subject to decisions based solely on automated processing that significantly affect them.[12] Despite these protections, enforcing GDPR principles against complex AI systems remains challenging.[13]

The growing scale and sophistication of AI-driven data processing demand stronger regulatory safeguards that directly address algorithmic operations rather than relying solely on traditional data protection norms.

ALGORITHMIC BIAS AND HUMAN RIGHTS IMPLICATIONS

Another critical concern surrounding AI is algorithmic bias. AI systems learn from historical data, which often reflects entrenched social inequalities and discriminatory practices. When biased data is fed into machine-learning models, the resulting algorithms may perpetuate or even amplify these disparities.[14]

Research has demonstrated that facial recognition systems frequently misidentify individuals from minority ethnic groups at significantly higher rates than white individuals.[15] Similarly, recruitment algorithms have disadvantaged women by favouring male-dominated employment histories, while predictive policing tools disproportionately target marginalized communities.[16]

These outcomes undermine fundamental human rights principles, including equality before the law, non-discrimination, and access to justice. When AI influences employment opportunities, law enforcement decisions, or welfare allocations, biased outcomes can have profound and lasting consequences.

The opacity of AI decision-making exacerbates these harms. Individuals affected by algorithmic decisions often lack access to meaningful explanations or legal remedies. Without transparency, it becomes nearly impossible to challenge discriminatory outcomes or hold responsible parties accountable.

Thus, AI threatens not only individual rights but also broader social trust in legal and governmental institutions.

INADEQUACY OF EXISTING LEGAL FRAMEWORKS

Current legal systems rely largely on traditional technology laws, data protection statutes, and general liability principles to govern AI-related activities. While these frameworks offer partial protection, they were not designed to regulate autonomous, self-learning systems.

The Information Technology Act, 2000 focuses primarily on cybercrime and electronic transactions rather than automated decision-making or algorithmic accountability. Similarly, the Digital Personal Data Protection Act, 2023 emphasizes data security and consent but does not sufficiently address systemic AI risks such as mass surveillance or predictive profiling.

Even advanced frameworks like the GDPR struggle with enforcement complexities due to the technical opacity of AI systems and cross-border data flows.

International human rights law provides broad protections but lacks specialized mechanisms for regulating technological harms.[17]

These gaps create a fragmented regulatory environment where harmful AI practices may escape effective oversight, particularly in private sector deployments.

THE CASE FOR AI-SPECIFIC REGULATION

Given the unique challenges posed by artificial intelligence, there is an urgent need for dedicated legal frameworks tailored to AI technologies. Effective AI regulation should incorporate the following principles:

  1. Clear Accountability Mechanisms

Legislation must define liability among developers, manufacturers, deployers, and users to ensure victims can access remedies.

  2. Transparency and Explainability

AI systems, particularly those used in public decision-making, should be required to provide understandable explanations for outcomes.

  3. Anti-Discrimination Safeguards

Mandatory bias audits and fairness testing should be implemented to prevent discriminatory impacts.

  4. Enhanced Privacy Protections

AI data usage should comply with strict consent standards and limitations on surveillance practices.

  5. Human Oversight

Critical decisions affecting fundamental rights should always involve meaningful human review.

Several countries and international organizations have begun proposing AI governance frameworks, yet global cooperation remains essential due to the transnational nature of technology.

CONCLUSION

Artificial Intelligence has the potential to revolutionize governance, industry, and daily life, offering unparalleled efficiency and innovation. However, its rapid expansion has exposed serious legal vulnerabilities concerning accountability, privacy, and human rights.

Existing laws such as the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023 in India, along with international frameworks like the GDPR, provide important foundations but remain insufficient to regulate the complexities of AI-driven systems.

Without comprehensive AI-specific regulation, societies risk enabling unchecked surveillance, discriminatory decision-making, and accountability gaps that undermine justice and democratic values.

To ensure that AI serves as a tool for societal progress rather than a source of harm, lawmakers must adopt proactive regulatory approaches emphasizing transparency, fairness, and human-centered governance. Only through robust legal reform can technological innovation coexist with the protection of fundamental rights and the rule of law.

BIBLIOGRAPHY

Books

  1. Pasquale F, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015)
  2. Russell S and Norvig P, Artificial Intelligence: A Modern Approach (4th edn, Pearson 2021)
  3. Cohen J, Between Truth and Power: The Legal Constructions of Informational Capitalism (Oxford University Press 2019)

 Journal Articles

  1. Abbott R, ‘Strict Liability for Artificial Intelligence’ (2020) 59 Harvard Journal of Law & Technology 1
  2. Barocas S and Selbst A, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671
  3. Brayne S, ‘Big Data Surveillance: The Case of Policing’ (2017) 51 American Sociological Review 977
  4. Buolamwini J and Gebru T, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’ (2018) 81 Proceedings of Machine Learning Research 1
  5. Lynskey O, ‘Aligning Data Protection Rights with Competition Law Remedies’ (2017) 42 European Law Review 130
  6. Yeung K, ‘Algorithmic Regulation: A Critical Interrogation’ (2017) 12 Regulation & Governance 505

Cases

  1. Justice K.S. Puttaswamy v Union of India (2017) 10 SCC 1

Legislation

  1. Information Technology Act 2000 (India)
  2. Digital Personal Data Protection Act 2023 (India)
  3. Regulation (EU) 2016/679 (General Data Protection Regulation)

International Instruments

  1. Universal Declaration of Human Rights 1948

[1] Frank Pasquale, The Black Box Society (Harvard University Press 2015).

[2] Karen Yeung, ‘Algorithmic Regulation’ (2017) 12 Regulation & Governance 505.

[3] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (Pearson 2021).

[4] Karen Yeung (n 2).

[5] Frank Pasquale (n 1).

[6] Ryan Abbott, ‘Strict Liability for Artificial Intelligence’ (2020) 59 Harvard Journal of Law & Technology 1.

[7] Gary Marchant and Rachel Lindor, ‘The Coming Collision Between Autonomous Vehicles and the Liability System’ (2012) 52 Santa Clara Law Review 1321.

[8] John Dewey, ‘The Historic Background of Corporate Legal Personality’ (1926) 35 Yale Law Journal 655.

[9] Sarah Brayne, ‘Big Data Surveillance’ (2017) 51 American Sociological Review 977.

[10] Universal Declaration of Human Rights (1948) art 12.

[11] Justice K.S. Puttaswamy v Union of India (2017) 10 SCC 1.

[12] Regulation (EU) 2016/679 (GDPR) art 22.

[13] Orla Lynskey, ‘Aligning Data Protection Rights with Competition Law Remedies’ (2017) 42 European Law Review 130.

[14] Solon Barocas and Andrew Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671.

[15] Joy Buolamwini and Timnit Gebru, ‘Gender Shades’ (2018) 81 Proceedings of Machine Learning Research 1.

[16] Sarah Brayne (n 9).

[17] Julie Cohen, Between Truth and Power (Oxford University Press 2019).
