
Artificial Intelligence and Data Privacy in India: Emerging Challenges and Legal Perspectives

Authored By: Samriddha Ray

St. Xavier’s University, Kolkata

Abstract

The proliferation of Artificial Intelligence (AI) has transformed sectors such as healthcare, banking, and governance, bringing unprecedented efficiencies. Yet, this technological leap raises profound concerns about data privacy, especially in a jurisdiction like India, where digital infrastructure is advancing rapidly but legal safeguards are still evolving. This article critically analyses the interplay between AI and data privacy under Indian law, evaluates the efficacy of the Digital Personal Data Protection Act, 2023, and suggests reforms for a more accountable AI ecosystem.

Introduction

Artificial Intelligence (AI) is no longer a futuristic concept; it has become an integral part of everyday life. From automated language translation to predictive healthcare, AI-powered systems process enormous volumes of personal data to provide personalised services, detect fraud, and even assist in predictive policing. Yet this data-driven evolution poses substantial privacy risks, including profiling, surveillance, and algorithmic discrimination.

In the Indian context, these challenges are compounded by the country’s unique socio-legal landscape: a large and diverse population, limited digital literacy among significant sections, and a legal framework still adapting to the realities of pervasive digital technologies. While the Supreme Court’s landmark decision in Justice K.S. Puttaswamy (Retd.) v. Union of India enshrined the right to privacy as a fundamental right, translating this constitutional promise into concrete protections in the age of AI remains an unfinished task.

The recently enacted Digital Personal Data Protection Act, 2023 (“DPDP Act, 2023”) marks India’s first dedicated step toward regulating data privacy, introducing consent requirements, data fiduciary duties, and penalties for the misuse of data. However, AI-driven applications present distinct and more complex threats, ranging from algorithmic bias and lack of explainability to large-scale profiling and surveillance, which existing legal norms are ill-equipped to fully address.

This article critically explores these emerging challenges and evaluates the adequacy of current legal responses in India. It seeks to highlight gaps, draw insights from comparative frameworks, and suggest pathways for harmonising technological progress with the core constitutional values of privacy, fairness, and human dignity.

AI and Its Dependence on Data

AI relies heavily on large datasets to train machine learning models. The more data an algorithm processes, the more accurate its predictions tend to be. In sectors like health-tech, fintech, and e-commerce, AI systems use sensitive personal data, including financial information, biometric data, and health records.

This dependency raises unique concerns:

  1. Opacity (“Black Box” Problem): AI algorithms often operate without meaningful explainability, making it hard to identify privacy breaches.
  2. Profiling and Discrimination: AI systems may inadvertently entrench biases present in the training data, leading to unfair treatment of individuals.
  3. Surveillance: AI-powered facial recognition and tracking systems pose significant threats to the right to privacy.

Legal Framework in India: An Overview

India has historically relied on piecemeal provisions under the Information Technology Act, 2000 and related rules for data protection. The landmark judgment in Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1[1] recognised the right to privacy as a fundamental right under Article 21 of the Constitution.

In August 2023, the DPDP Act, 2023 was enacted to establish comprehensive safeguards. It applies to the processing of digital personal data, imposes duties on data fiduciaries, and recognises rights of data principals, including the right to access, correction, and grievance redressal.

However, the Act is largely technology-neutral and does not contain AI-specific provisions.

Critical Analysis of the DPDP Act, 2023[2] and AI

While the DPDP Act brings much-needed reform, several gaps emerge in the AI context:

  1. Lack of Algorithmic Accountability

The Act mandates fair and reasonable processing but does not require AI developers or deployers to:

  • Explain automated decisions,
  • Disclose algorithmic logic, or
  • Assess and mitigate bias.

Given AI’s opacity, this omission makes it difficult for individuals to challenge unfair profiling or discrimination.

  2. Absence of Impact Assessments

Modern AI regulations, such as the EU’s proposed AI Act,[3] mandate pre-deployment impact assessments for high-risk AI systems, akin to Algorithmic Impact Assessments (AIAs). The DPDP Act lacks a similar requirement, missing an opportunity to ensure pre-deployment risk evaluation.

  3. Cross-Border Data Flow

AI training often involves transferring datasets across jurisdictions. The DPDP Act empowers the government to notify countries to which data can be transferred, but lacks clarity on AI-specific safeguards to prevent misuse once data leaves India.

  4. Automated Decision-Making

Unlike the EU’s GDPR, which grants individuals the right not to be subject to decisions based solely on automated processing, the DPDP Act has no explicit right against fully automated decision-making.

Comparative Perspective: EU’s GDPR and AI Act

The General Data Protection Regulation (GDPR),[4] effective since 2018, addresses AI challenges more directly:

  • Provides transparency rights over automated decisions, often described as a “right to explanation”,
  • Recognises data protection impact assessments (DPIAs),
  • Regulates profiling.

Further, the proposed EU AI Act classifies AI systems into risk categories and imposes stricter obligations on high-risk systems, including requirements for:

  • Human oversight,
  • Transparency,
  • Robust risk management.

India’s DPDP Act, though significant, does not yet reflect this nuanced approach.

Judicial Trends and AI

While Indian courts have not directly ruled on AI-related data privacy violations, jurisprudence on privacy and data protection principles sets a relevant backdrop:

In Puttaswamy, the Supreme Court underscored proportionality, necessity, and fairness as central to data processing.

The Court in Anuradha Bhasin v. Union of India, (2020) 3 SCC 637[5] highlighted the need for reasoned decision-making when restricting fundamental rights in digital contexts.

Applying these principles, future litigation could question opaque AI systems that affect individual rights without transparency or accountability.

Suggestions and the Way Forward

To make India’s legal framework more responsive to AI challenges, the following reforms may be considered:

  1. Mandate Algorithmic Impact Assessments for AI systems handling sensitive data.
  2. Introduce a right to explanation for decisions made solely by AI.
  3. Set sector-specific AI standards, especially for high-risk applications like health and law enforcement.
  4. Encourage self-regulation through AI ethics boards and internal audits.
  5. Facilitate cross-border data transfers through binding corporate rules and data-sharing agreements with AI-specific safeguards.

Conclusion

AI offers immense potential to enhance India’s digital transformation journey, yet its rapid growth simultaneously exposes profound tensions between technological advancement and the right to privacy. As AI systems become embedded in everyday life, whether in governance, healthcare, finance, or social media, they inevitably draw upon and process vast quantities of personal data, raising the risks of unchecked surveillance, algorithmic discrimination, opaque automated decisions, and the erosion of individual consent.

The DPDP Act, 2023 is a vital first step. It introduces crucial principles of transparency, accountability, and data minimisation, but it does not fully engage with the specific challenges posed by AI systems, such as the need for explainability and safeguards against bias in machine learning models. Current Indian law thus lays the groundwork while stopping short of addressing the broader consequences of AI-driven data processing.

To bridge this gap, India must look beyond traditional data protection frameworks. A forward-thinking legal response would involve dedicated guidelines for AI ethics, mandatory impact assessments for high-risk AI systems, and clear disclosure requirements when automated tools significantly affect individuals. Equally important is fostering interdisciplinary dialogue among policymakers, technologists, jurists, and civil society, so that the law remains adaptive to fast-evolving AI technologies.

Ultimately, the challenge lies not in choosing between innovation and privacy, but in crafting a balanced legal ecosystem, grounded in constitutional values and inspired by international best practices, that upholds both. India’s constitutional commitment to dignity and autonomy must guide the deployment of AI, ensuring that these technologies serve society rather than overshadow individual rights. If India aspires to lead not just in AI adoption but also in AI governance, the task ahead is clear: to embrace innovation responsibly, guided by legal safeguards that protect the privacy and trust of every citizen.

Bibliography

  1. Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.
  2. Anuradha Bhasin v. Union of India, (2020) 3 SCC 637.
  3. Digital Personal Data Protection Act, 2023.
  4. General Data Protection Regulation (GDPR), Regulation (EU) 2016/679.
  5. European Commission, Proposal for a Regulation on Artificial Intelligence (AI Act), 2021.

[1] Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.

[2] Digital Personal Data Protection Act, 2023.

[3] European Commission, Proposal for a Regulation on Artificial Intelligence (AI Act), 2021.

[4] General Data Protection Regulation (GDPR), Regulation (EU) 2016/679.

[5] Anuradha Bhasin v. Union of India, (2020) 3 SCC 637.
