Authored By: Alan Alex
Christ Academy Institute of Law
Abstract
The rapid deployment of Artificial Intelligence (AI) technologies across governance, law enforcement, welfare administration, and digital platforms has generated profound constitutional questions in India. While AI promises efficiency, predictive capacity, and enhanced public service delivery, its integration into state functions poses significant risks to equality, privacy, due process, and democratic accountability. This article argues for the development of a rights-centric regulatory framework grounded in the constitutional guarantees of Articles 14, 19, and 21 of the Constitution of India. Drawing upon the Supreme Court’s jurisprudence, particularly the proportionality standard articulated in Justice K.S. Puttaswamy v. Union of India, the article examines how algorithmic decision-making, biometric surveillance, and automated governance systems implicate informational privacy, substantive equality, and procedural fairness. It critically evaluates India’s fragmented regulatory landscape, including the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023, highlighting the absence of comprehensive AI-specific safeguards. The article proposes a risk-based regulatory architecture incorporating transparency mandates, algorithmic audits, human oversight mechanisms, independent institutional supervision, and accessible grievance redressal systems. It contends that embedding constitutional safeguards within AI governance does not hinder innovation but strengthens democratic legitimacy and public trust. By situating AI regulation within India’s transformative constitutional framework, the article advances a normative model for balancing technological advancement with fundamental rights protection in the digital age.
Keywords
Artificial Intelligence Regulation; Informational Privacy; Proportionality Doctrine; Algorithmic Accountability; Constitutional Governance
Constitutionalism in the Age of Algorithms: Locating AI within India’s Fundamental Rights Framework
The governance of Artificial Intelligence (AI) in India must be anchored in constitutional principles rather than administrative convenience or market imperatives. While the Constitution of India does not expressly contemplate algorithmic systems, its transformative character permits adaptation to emerging technological realities. The triadic framework of Articles 14, 19, and 21 provides the normative bedrock for evaluating AI deployment by the State and private actors performing public functions.
Article 14’s guarantee of equality before the law and equal protection of the laws prohibits arbitrariness in State action. The Supreme Court has clarified that arbitrariness is antithetical to equality and violates constitutional discipline.1 Algorithmic systems deployed in welfare distribution, predictive policing, or digital governance often operate through opaque models trained on historical datasets. These datasets may reproduce systemic biases — caste, gender, religion, or socio-economic disadvantage — thus converting technological neutrality into structural discrimination.
Article 19(1)(a) safeguards freedom of speech and expression. AI-driven content moderation, automated takedown mechanisms, and algorithmic ranking systems increasingly shape public discourse. Although reasonable restrictions under Article 19(2) are constitutionally permissible, automated systems lack contextual sensitivity and may result in disproportionate censorship. The chilling effect generated by opaque algorithmic regulation of speech demands constitutional vigilance.
Most significantly, Article 21 has evolved into a repository of substantive due process. In Justice K.S. Puttaswamy v. Union of India, the Supreme Court recognised informational privacy as intrinsic to dignity and autonomy.2 The Court affirmed that technological advancements cannot dilute constitutional protections. AI systems that process biometric identifiers, behavioural data, and predictive analytics directly implicate informational self-determination. Thus, AI governance must comply with the proportionality doctrine articulated in Puttaswamy.
Constitutionalism in the algorithmic age therefore demands that technological governance remain subordinate to fundamental rights. Efficiency cannot override dignity; automation cannot displace accountability.
Fragmented Governance and Regulatory Vacuum: Assessing India’s Existing AI Policy Landscape
India presently regulates AI through a patchwork of sectoral statutes and executive policy documents rather than a consolidated legislative framework. This fragmented approach creates normative uncertainty and enforcement gaps.
The Information Technology Act, 2000 primarily addresses cyber offences and intermediary liability.3 Section 79 provides safe harbour protections to intermediaries, subject to due diligence obligations. However, the statute does not impose affirmative duties of algorithmic transparency or explainability. Consequently, AI-driven decision-making systems deployed by digital platforms operate with limited regulatory oversight.
The Digital Personal Data Protection Act, 2023 establishes a consent-based data processing regime and recognises rights such as correction and erasure. Yet it does not explicitly regulate automated decision-making or mandate algorithmic audits. Unlike the EU’s General Data Protection Regulation (GDPR), whose Article 22 confers a right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, Indian law offers no comparable guarantee.
Policy initiatives such as NITI Aayog’s National Strategy for Artificial Intelligence emphasise innovation and economic competitiveness.4 While commendable in ambition, these policy documents lack statutory force and do not establish independent oversight institutions. The absence of comprehensive AI legislation risks regulatory arbitrage and uneven accountability. A rights-centric framework would require legislative clarity, defined liability standards, and institutional mechanisms capable of balancing innovation with constitutional safeguards.
Algorithmic Decision-Making and the Equality Mandate under Article 14
Algorithmic governance often rests on the presumption of neutrality. However, neutrality in design does not guarantee neutrality in outcome. Machine learning systems trained on historically biased datasets may perpetuate discriminatory patterns in employment screening, credit allocation, predictive policing, and welfare eligibility assessments.
The Supreme Court’s jurisprudence under Article 14 has evolved from formal classification doctrine to a broader principle prohibiting arbitrariness. In E.P. Royappa v. State of Tamil Nadu, the Court recognised that arbitrariness is inherently unequal.5 When algorithmic outputs remain opaque, affected individuals are deprived of any meaningful opportunity to challenge discriminatory decisions, thereby insulating State action from judicial review.
Moreover, substantive equality jurisprudence recognises the need to address structural disadvantage rather than merely formal neutrality. AI systems deployed without bias audits or fairness testing risk reinforcing systemic inequities. A constitutionally compliant AI framework must therefore incorporate:
- Mandatory algorithmic impact assessments prior to public deployment
- Periodic third-party bias audits
- Disclosure obligations ensuring explainability
- Accessible grievance redressal mechanisms
Without these safeguards, algorithmic governance risks entrenching discrimination under the veneer of technological objectivity.
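By way of illustration only, the following sketch shows one quantitative check a periodic bias audit might run: the familiar "four-fifths" disparate-impact comparison of favourable-outcome rates across groups. The column names ("group", "outcome") and the 0.8 threshold are assumptions made for this sketch, not standards prescribed by any Indian statute or regulator.

```python
# Minimal sketch of a disparate-impact check that a bias audit might run.
# Field names and the 0.8 threshold are illustrative assumptions only.
from collections import defaultdict

def selection_rates(records, group_key="group", outcome_key="outcome"):
    """Compute the favourable-outcome rate for each group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        favourable[r[group_key]] += 1 if r[outcome_key] else 0
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact(records, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the best-off group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    if best == 0:  # no group received a favourable outcome; nothing to compare
        return {g: {"rate": 0.0, "ratio": None, "flagged": False} for g in rates}
    return {g: {"rate": round(rate, 3),
                "ratio": round(rate / best, 3),
                "flagged": rate / best < threshold}
            for g, rate in rates.items()}

if __name__ == "__main__":
    sample = [
        {"group": "A", "outcome": True}, {"group": "A", "outcome": True},
        {"group": "A", "outcome": False}, {"group": "B", "outcome": True},
        {"group": "B", "outcome": False}, {"group": "B", "outcome": False},
    ]
    print(disparate_impact(sample))  # group B falls below the illustrative threshold
```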
Informational Privacy, Surveillance, and the Proportionality Standard after Puttaswamy
AI-driven surveillance technologies represent one of the most constitutionally sensitive applications of artificial intelligence in contemporary governance. Facial recognition systems (FRS), predictive policing tools, automated number plate recognition systems, and biometric authentication infrastructures facilitate unprecedented aggregation, analysis, and retention of personal data. Unlike traditional surveillance mechanisms, AI systems enable real-time identification, behavioural profiling, and predictive inference at scale, thereby transforming the architecture of the digital State.
The constitutional foundation for evaluating such technologies lies in the landmark decision of the Supreme Court in Justice K.S. Puttaswamy v. Union of India, which unequivocally recognised the right to privacy as intrinsic to life and personal liberty under Article 21 of the Constitution.6 The Court held that privacy encompasses informational self-determination, decisional autonomy, and protection against arbitrary State intrusion. Any infringement of privacy must therefore satisfy the proportionality standard, which proceeds through four cumulative requirements: legality, legitimate aim, necessity, and balancing.
The legality requirement mandates the existence of a valid law authorising surveillance measures. Executive circulars, administrative guidelines, or internal police manuals cannot substitute for parliamentary legislation when fundamental rights are implicated. Surveillance regimes grounded solely in executive discretion risk violating the rule of law and separation of powers principles. Therefore, any deployment of AI-enabled facial recognition or predictive analytics by State authorities must derive legitimacy from a clear statutory framework that defines scope, safeguards, and limits.
The second limb — legitimate aim — permits surveillance in pursuit of objectives such as national security, public order, or crime prevention. However, the articulation of a legitimate aim does not conclude constitutional scrutiny. The necessity requirement demands that the measure adopted be the least restrictive alternative available to achieve the intended objective. Mass facial recognition systems operating in public spaces without individualised suspicion raise serious concerns under this limb. Blanket data collection may be constitutionally disproportionate if narrower, targeted investigative methods could suffice.
The final balancing stage requires courts to weigh the extent of rights infringement against the importance of the State objective. AI surveillance technologies amplify the intensity of intrusion by enabling continuous monitoring and predictive profiling. Such pervasive monitoring risks chilling effects on freedom of speech, association, and movement. When individuals are aware that their movements and interactions are algorithmically tracked, democratic participation may be indirectly constrained.
The Supreme Court’s reasoning in Anuradha Bhasin v. Union of India further reinforces that restrictions on fundamental rights must be proportionate, temporary, and subject to judicial review.7 The Court emphasised transparency and the publication of orders affecting civil liberties. Applying similar logic, AI surveillance regimes must incorporate procedural safeguards ensuring reviewability and accountability.
Beyond proportionality, AI surveillance implicates the doctrine of procedural due process. Automated systems that generate risk scores or flag individuals for investigation may influence law enforcement action without providing affected persons notice or opportunity to contest algorithmic outputs. Such opacity undermines principles of natural justice. Meaningful procedural safeguards — notice, explanation, and review — are therefore indispensable.
The constitutional concern extends to data retention and secondary use. AI systems frequently rely on large datasets retained indefinitely, enabling function creep where data collected for one purpose is repurposed for another without informed consent. The principles of data minimisation and purpose limitation, recognised in modern data protection regimes, are essential to prevent disproportionate interference with privacy.8 Storage limitation safeguards must also ensure that surveillance data is not preserved beyond necessity.
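To make purpose limitation and storage limitation concrete, the hedged sketch below checks every access to retained surveillance data against the purpose recorded at collection and a retention deadline. The record fields, the 90-day window, and the pseudonymised identifier are assumptions chosen for illustration; they are not requirements of the Digital Personal Data Protection Act, 2023.

```python
# Illustrative enforcement of purpose limitation and storage limitation.
# The 90-day retention window and record fields are assumptions for this sketch.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # hypothetical maximum retention period

def may_access(record, requested_purpose, now=None):
    """Permit access only for the original purpose and within the retention window."""
    now = now or datetime.now(timezone.utc)
    within_retention = now - record["collected_at"] <= RETENTION
    same_purpose = requested_purpose == record["purpose"]
    return within_retention and same_purpose

record = {
    "subject_id": "pseudonymised-0001",        # pseudonymised identifier
    "purpose": "traffic-violation-detection",  # purpose recorded at collection
    "collected_at": datetime(2024, 1, 1, tzinfo=timezone.utc),
}

# Secondary use for a different purpose is refused even within the window.
print(may_access(record, "traffic-violation-detection",
                 now=datetime(2024, 2, 1, tzinfo=timezone.utc)))  # True
print(may_access(record, "protest-monitoring",
                 now=datetime(2024, 2, 1, tzinfo=timezone.utc)))  # False
```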
Another emerging challenge is the integration of biometric identification systems into public service delivery mechanisms. When access to welfare schemes or essential services becomes contingent upon biometric authentication, technological failures or inaccuracies may result in exclusion. Such exclusion may violate substantive due process and the right to dignity. Therefore, fallback mechanisms and alternative identification processes must be embedded within regulatory frameworks.
Unchecked AI surveillance threatens to normalise a culture of constant monitoring incompatible with constitutional democracy. The shift from reactive investigation to predictive governance alters the relationship between citizen and State, potentially transforming the presumption of innocence into algorithmic suspicion. Democratic constitutionalism demands that surveillance powers remain exceptional rather than ubiquitous.
Ultimately, the proportionality doctrine articulated in Puttaswamy provides a normative compass for navigating technological governance. It affirms that constitutional rights do not recede in the face of innovation. Instead, technological advancement must operate within a framework that preserves human dignity, liberty, and accountability. Embedding these safeguards within AI regulation ensures that security imperatives do not eclipse the foundational values of a free and democratic society.
Designing a Rights-Centric AI Architecture: Accountability, Transparency, and Institutional Oversight
The future of AI governance in India must reconcile technological innovation with constitutional fidelity. A rights-centric AI statute should adopt a risk-based approach while embedding robust accountability mechanisms capable of withstanding constitutional scrutiny. In a democratic polity governed by the rule of law, the legitimacy of algorithmic governance depends not merely on efficiency gains but on its alignment with principles of equality, due process, transparency, and human dignity.
A risk-based regulatory architecture provides a principled starting point. AI systems should be categorised according to the degree of impact they exert on fundamental rights and public interests. High-risk systems — particularly those deployed in policing, criminal justice, welfare distribution, healthcare diagnostics, biometric identification, credit scoring, and electoral processes — must be subject to heightened compliance obligations. Such obligations may include mandatory pre-deployment impact assessments, periodic independent audits, bias mitigation testing, cybersecurity certification, and continuous monitoring requirements. Prohibited categories may also be contemplated, such as AI systems that enable indiscriminate mass surveillance or manipulative behavioural profiling inconsistent with constitutional morality.
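A minimal sketch of how such a tiered classification might be encoded follows. The domain names, tiers, and attached obligations are assumptions made purely for illustration and are not drawn from any draft Indian legislation.

```python
# Illustrative risk-tier mapping for a risk-based AI statute.
# Domains, tiers, and obligations are assumptions made for this sketch.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"

DOMAIN_TIERS = {
    "indiscriminate_mass_surveillance": RiskTier.PROHIBITED,
    "predictive_policing": RiskTier.HIGH,
    "welfare_eligibility": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "product_recommendation": RiskTier.LIMITED,
}

TIER_OBLIGATIONS = {
    RiskTier.PROHIBITED: ["deployment not permitted"],
    RiskTier.HIGH: ["pre-deployment impact assessment", "independent bias audit",
                    "human oversight", "registration with the regulator"],
    RiskTier.LIMITED: ["transparency notice to affected persons"],
}

def obligations_for(domain):
    """Return the compliance obligations attached to a deployment domain."""
    tier = DOMAIN_TIERS.get(domain, RiskTier.LIMITED)  # default tier is an assumption
    return tier, TIER_OBLIGATIONS[tier]

print(obligations_for("welfare_eligibility"))
```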
Transparency must extend beyond superficial disclosure of terms of service. Meaningful transparency requires explainability. Individuals affected by automated decisions must be informed that an AI system has been used, the nature of data relied upon, the logic of decision-making in intelligible form, and the factors that materially influenced the outcome. Explainability is essential not only for fairness but also for judicial review. Without intelligible reasoning, courts cannot effectively assess arbitrariness under Article 14 or proportionality under Article 21. Therefore, statutory mandates should require documentation of training datasets, model architecture, risk assessments, and decision-making criteria, subject to limited exceptions for legitimate trade secrets balanced against public interest.
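As one illustration of the kind of intelligible record an explainability mandate might require, the sketch below defines a hypothetical "automated decision notice" capturing the elements identified above: that an AI system was used, the categories of data relied upon, and the factors that materially influenced the outcome. Every field name and value is an assumption for this sketch, not a codified schema.

```python
# Hypothetical "automated decision notice": the record an explainability mandate
# might require a deployer to furnish to an affected person (illustrative only).
from dataclasses import dataclass, field

@dataclass
class DecisionNotice:
    decision_id: str
    system_name: str                    # identifies the AI system that was used
    outcome: str                        # e.g. "application referred for verification"
    data_relied_upon: list[str]         # categories of personal data processed
    material_factors: dict[str, float] = field(default_factory=dict)  # factor -> weight
    human_reviewer: str | None = None   # official who confirmed the output, if any
    review_channel: str = "unspecified" # where the person may contest the decision

notice = DecisionNotice(
    decision_id="2025-000123",
    system_name="EligibilityScreen-v2",          # illustrative system name
    outcome="application referred for manual verification",
    data_relied_upon=["income declaration", "land records"],
    material_factors={"declared_income": 0.62, "asset_mismatch": 0.31},
    human_reviewer="Block Development Officer",
    review_channel="district grievance portal",
)
print(notice)
```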
Human oversight must remain central, particularly in contexts affecting liberty, livelihood, and reputation. AI systems should operate as decision-support tools rather than autonomous decision-makers in sensitive domains. The principle of “human-in-the-loop” ensures that automated outputs are subject to contextual evaluation, ethical judgment, and discretionary reasoning. In criminal justice, for instance, predictive risk assessments must not substitute individualised judicial determination. Similarly, in welfare administration, automated eligibility determinations should be reviewable by trained officials capable of correcting systemic or contextual errors.
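The "human-in-the-loop" principle can be sketched as a simple gate: in sensitive domains the automated output is treated as a recommendation, and no decision takes effect until a named official records the final determination. The domain list, function names, and record fields below are illustrative assumptions.

```python
# Sketch of a human-in-the-loop gate: automated scores only recommend; a named
# official must record the final determination in sensitive domains.
# The domain list and field names are assumptions for illustration.

SENSITIVE_DOMAINS = {"criminal_justice", "welfare", "employment"}

def finalise_decision(domain, model_recommendation, human_determination=None):
    """Return the binding decision; in sensitive domains human review is mandatory."""
    if domain in SENSITIVE_DOMAINS:
        if human_determination is None:
            raise ValueError("human review required before this decision can take effect")
        return {"decision": human_determination["decision"],
                "decided_by": human_determination["official"],
                "model_recommendation": model_recommendation}
    return {"decision": model_recommendation, "decided_by": "automated"}

# The automated output alone cannot conclude a welfare determination.
print(finalise_decision(
    "welfare", "reject",
    human_determination={"decision": "approve", "official": "Tehsildar, Ward 4"},
))
```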
Institutionally, India may consider establishing an independent Artificial Intelligence Regulatory Authority endowed with investigative, supervisory, and quasi-judicial powers. The Authority should possess technical expertise, independence from executive control, and clear statutory mandates. Its functions could include registration of high-risk AI systems, issuance of binding standards, imposition of civil penalties, coordination with sectoral regulators, and publication of transparency reports. Parliamentary oversight mechanisms should complement regulatory supervision to ensure democratic accountability.
Equally critical is the creation of accessible grievance redressal frameworks. Individuals adversely affected by automated decisions must have the right to seek review, demand correction of erroneous data, and obtain compensation where harm is established. Procedural safeguards — notice, opportunity to be heard, and reasoned decisions — must be codified to prevent algorithmic opacity from undermining natural justice principles. Collective redress mechanisms may also be necessary in cases involving systemic bias affecting large groups.
Liability regimes require careful articulation. The diffusion of responsibility among developers, deployers, data processors, and end-users complicates traditional fault-based models. A calibrated approach combining strict liability for high-risk deployments with negligence standards for lower-risk systems may provide balance. Clear allocation of responsibility ensures that victims of algorithmic harm are not left remediless due to technological complexity.
Furthermore, regulatory sandboxes may be introduced to foster innovation while maintaining oversight. Controlled experimentation under regulatory supervision allows technological development without compromising constitutional safeguards. Such mechanisms ensure that innovation is not stifled but guided by rights-conscious principles. A rights-centric AI framework also demands periodic legislative review. Given the rapid evolution of machine learning technologies, sunset clauses and adaptive regulatory mechanisms should be incorporated to prevent obsolescence. Continuous consultation with civil society, technical experts, industry stakeholders, and constitutional scholars will strengthen regulatory legitimacy.
Importantly, embedding constitutional safeguards within AI governance enhances, rather than impedes, innovation. Predictable regulatory standards foster investor confidence and public trust. When citizens are assured that algorithmic systems respect privacy, equality, and due process, technological adoption becomes socially sustainable. Trust functions as an economic asset in digital societies.
Ultimately, designing a rights-centric AI architecture requires recognising that technology is not normatively neutral. Algorithms encode values, priorities, and assumptions. Democratic constitutionalism demands that these embedded values reflect principles of dignity, liberty, and equality. By institutionalising accountability, transparency, and effective oversight, India can craft a regulatory model that harmonises technological progress with constitutional morality. In doing so, it may not only regulate AI responsibly but also shape a distinctly democratic vision of algorithmic governance for the Global South.
Bibliography
Case Law
Anuradha Bhasin v. Union of India, (2020) 3 S.C.C. 637 (India).
E.P. Royappa v. State of Tamil Nadu, A.I.R. 1974 S.C. 555 (India).
Justice K.S. Puttaswamy v. Union of India, (2017) 10 S.C.C. 1 (India).
Maneka Gandhi v. Union of India, (1978) 1 S.C.C. 248 (India).
State of West Bengal v. Anwar Ali Sarkar, A.I.R. 1952 S.C. 75 (India).
Legislation
Constitution of India, 1950.
Digital Personal Data Protection Act, No. 22 of 2023, India Code (2023).
Information Technology Act, No. 21 of 2000, India Code (2000).
Policy Documents
NITI Aayog, National Strategy for Artificial Intelligence #AIforAll (2018).
International Instruments
General Data Protection Regulation, Regulation (EU) 2016/679, Art. 22.
Footnotes
1. Maneka Gandhi v. Union of India, (1978) 1 S.C.C. 248 (India). See also E.P. Royappa v. State of Tamil Nadu, A.I.R. 1974 S.C. 555 (India) for the proposition that arbitrariness is antithetical to equality.
2. Justice K.S. Puttaswamy v. Union of India, (2017) 10 S.C.C. 1 (India).
3. Information Technology Act, No. 21 of 2000, India Code (2000).
4. NITI Aayog, National Strategy for Artificial Intelligence #AIforAll (2018).
5. E.P. Royappa v. State of Tamil Nadu, A.I.R. 1974 S.C. 555 (India).
6. Justice K.S. Puttaswamy v. Union of India, (2017) 10 S.C.C. 1 (India).
7. Anuradha Bhasin v. Union of India, (2020) 3 S.C.C. 637 (India).
8. Digital Personal Data Protection Act, No. 22 of 2023, India Code (2023).





