Authored By: Pratyaksh Sharma
Christ (Deemed to be University), Delhi NCR
Abstract
Justice K.S. Puttaswamy v Union of India1 established the right to privacy as a fundamental right and laid a new constitutional foundation for data protection in India. The rapid spread of artificial intelligence (AI) systems, from facial recognition and predictive policing to algorithmic profiling, presents a distinct challenge to this right. AI technologies rely on personal and sensitive data, often collected at scale, frequently without prior informed consent, and sometimes with dubious legal authority. This article examines India’s privacy legislation, how the courts have interpreted it, the interplay between legislative provisions and judicial rulings, and relevant international developments. It assesses the gaps and limitations that remain since Puttaswamy, considers the implications of the Digital Personal Data Protection Act, 20232, addresses the ethical questions posed by AI, and concludes with policy recommendations for a privacy-protective yet innovation-friendly AI regulatory environment.
Introduction
The constitutional right to privacy in India, established in Justice K.S. Puttaswamy v Union of India3, laid the legal groundwork for safeguarding personal autonomy in the digital age. However, the rise of artificial intelligence (AI) raises fresh concerns about the adequacy of existing privacy protections4. AI combined with big data enables profiling, surveillance, and behavioural prediction on an entirely new scale. For example, India’s proposed deployment of AI-driven facial recognition in public spaces could enable mass surveillance without sufficient checks and balances. Similar debates are unfolding internationally, as seen in the EU’s proposed AI Act5 and the United States’ Blueprint for an AI Bill of Rights6.
This article argues that, although Indian jurisprudence under Article 21 of the Constitution recognizes the intrinsic relationship between privacy, dignity, and liberty, the law as it stands is ill-suited to the risks created by AI. The discussion proceeds through the legal framework, judicial developments, key gaps in the law, and recent developments, before offering concrete proposals for reform.
Research Methodology
The paper follows a doctrinal and analytical methodology7. Primary sources include constitutional provisions, statutes such as the Information Technology Act, 20008 and the Digital Personal Data Protection Act9, and key judgments of the Supreme Court of India. Secondary sources include books, journal articles, and policy documents. A comparative lens is applied through the EU’s GDPR and AI Act and the regulatory debates in the United States. The analysis focuses on the intersection of AI technologies and privacy law, specifically the gaps that exist and how they might be reformed.
Legal Framework
The right to privacy in India is grounded in Article 21 of the Constitution10, the guarantee of the right to life and personal liberty. The Supreme Court has held that privacy is bound up with dignity and liberty, with Puttaswamy describing privacy as “an inviolable right”. This constitutional right is supplemented by statutory protection under the Information Technology Act, 2000, whose Sections 43A and 72A11 govern data security and confidentiality.
The Digital Personal Data Protection Act, 2023 (DPDP Act)12 introduces a rights-based framework for personal data processing, establishing obligations for data fiduciaries and individual rights of access, correction, and erasure. As it currently stands, however, the DPDP Act does not directly address AI-specific risks such as algorithmic profiling and automated decision-making; it is deliberately technology-neutral.
Internationally, the EU’s General Data Protection Regulation (GDPR)13 offers a more explicit regime, granting individuals a “right not to be subject to a decision based solely on automated processing” (Article 22). Similarly, the California Consumer Privacy Act (CCPA)14 addresses transparency in automated decision-making.
Judicial Interpretation
The Supreme Court of India has played a key role in defining the contours of the right to privacy. In Justice K.S. Puttaswamy v Union of India15, a nine-judge bench unanimously held that privacy is a fundamental right under Article 21, paying specific attention to informational privacy at a time when individuals can readily be identified and profiled from their digital footprints. Although AI was not the direct subject-matter of Puttaswamy, the majority judgment acknowledged the opportunities and challenges associated with big data analytics, search and seizure, and algorithmic decision-making.
Selvi v State of Karnataka16 grounded the right to privacy in personal autonomy, holding that involuntary narcoanalysis, polygraph, and brain-mapping tests violate personal autonomy under Article 21 and the right against self-incrimination, reflecting persistent judicial distrust of technological intrusions into privacy. In Shreya Singhal v Union of India17, the Supreme Court struck down Section 66A of the IT Act as vague and over-broad, underscoring judicial scrutiny of sweeping restrictions in the digital sphere. In a comparable vein, the Court of Justice of the European Union struck down the Data Retention Directive in Digital Rights Ireland Ltd v Minister for Communications18 on the ground that its provisions disproportionately interfered with the right to privacy.
Critical Analysis
Despite constitutional recognition of privacy, India’s regulatory framework does not account for the particular risks posed by AI. The Digital Personal Data Protection Act, 202319 is technology-agnostic for the sake of flexibility, but it contains no specific provisions to oversee AI or to address automated profiling, predictive analytics, or algorithmic decision-making. Nor does the Act mandate an “opt-out” right for AI-based automated decisions of the kind provided under Article 22 of the GDPR. The opacity of artificial intelligence aggravates the problem. AI models, particularly deep-learning models, operate as “black boxes”, leaving individuals unable to meaningfully contest decisions or to ascertain what data was used and how it was processed. This “black box” character of AI systems is a central concern for informed consent and data protection law.20
AI trained on biased data risks normalising and reinforcing discrimination, implicating core rights, particularly the guarantee of equal protection under Article 14. Several predictive policing initiatives have been criticised for disproportionately and unfairly targeting marginalised communities, raising serious questions of constitutional law and of the rights of individuals within those communities.21
Furthermore, the absence of substantive regulation of cross-border data flows leaves the data of Indian citizens exposed to foreign surveillance regimes. Because much AI infrastructure is hosted on globally distributed cloud platforms, data sovereignty becomes an especially pressing concern.
Recent Developments
India’s Digital Personal Data Protection Act, 202322 is a significant step towards a comprehensive privacy regime, although it remains at an early stage of implementation. Notably, the Act introduces no specific accountability obligations for AI or data equity, nor does it require impact assessments for high-risk AI applications.
NITI Aayog’s Responsible AI for All (2021)23 sets out ethical principles for AI in India, including accountability, transparency, and equity, yet these commitments are non-binding.
The EU AI Act24, whose provisions are due to apply in phases through 2026, establishes a risk-based regulatory framework with specific requirements for “high-risk” AI systems, including algorithmic transparency, human oversight, and penalties for non-compliance. By contrast, the US Blueprint for an AI Bill of Rights (2022)25 sets out non-binding principles of privacy, fairness, and transparency for automated systems.
These global developments underscore the need for India to frame AI-specific legislative safeguards rather than continuing to rely on a generic data protection regime.
Conclusion
The right to privacy in India faces unprecedented challenges in the age of artificial intelligence. While the DPDP Act, 202326 is a strong step forward, it does not mitigate AI’s particular risks of algorithmic opacity, algorithmic bias, and mass surveillance. Effective regulation is achievable, and international developments such as the European Union’s AI Act27 and the US Blueprint for an AI Bill of Rights offer instructive pathways for continuing to innovate while protecting human rights.
India urgently needs an AI governance framework that explicitly protects dignity, autonomy, and democratic freedoms, ensuring that trust in AI and allied technologies is built for present and future generations rather than purchased at the expense of these rights.
References:
1 Justice K.S. Puttaswamy (Retd) v Union of India (2017) 10 SCC 1.
2 Digital Personal Data Protection Act 2023.
3 Justice K.S. Puttaswamy (Retd) v Union of India (2017) 10 SCC 1.
4 Selvi v State of Karnataka (2010) 7 SCC 263.
5 EU, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) COM(2021) 206 final.
6 White House Office of Science and Technology Policy, Blueprint for an AI Bill of Rights (2022).
7 Ian Dobinson and Francis Johns, Legal Research Methodology (Lawbook Co 2007).
8 Information Technology Act 2000.
9 Digital Personal Data Protection Act 2023.
10 Constitution of India, art 21.
11 Information Technology Act 2000, ss 43A, 72A.
12 Digital Personal Data Protection Act 2023.
13 Regulation (EU) 2016/679 (General Data Protection Regulation), art 22.
14 California Consumer Privacy Act 2018, s 1798.185(a)(16).
15 Justice K.S. Puttaswamy (Retd) v Union of India (2017) 10 SCC 1.
16 Selvi v State of Karnataka (2010) 7 SCC 263.
17 Shreya Singhal v Union of India (2015) 5 SCC 1.
18 Case C-293/12 Digital Rights Ireland Ltd v Minister for Communications EU:C:2014:238.
19 Digital Personal Data Protection Act 2023.
20 Rashmi Dyal-Chand, ‘Bias in AI: A Legal Perspective’ (2020) 55(3) Wake Forest L Rev 857.
21 Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI’ (2021) 41(2) Computer Law & Security Review 105567.
22 Digital Personal Data Protection Act 2023.
23 NITI Aayog, Responsible AI for All: Strategy for India (2021).
24 EU, Artificial Intelligence Act COM(2021) 206 final.
25 White House OSTP, Blueprint for an AI Bill of Rights (2022).
26 Digital Personal Data Protection Act 2023.
27 EU, Artificial Intelligence Act COM(2021) 206 final.