Authored By: Laxita Raju Hawelikar
ILS Law College Pune
Abstract
The advent of artificial intelligence has revolutionized surveillance capabilities, creating unprecedented challenges for the constitutional right to privacy. This article examines the legal framework governing privacy rights in India, analyzes the implications of AI-powered surveillance systems, and evaluates the adequacy of existing legal protections. Through an examination of landmark judicial pronouncements, statutory provisions, and emerging technologies such as facial recognition and predictive policing, this article argues that current legal mechanisms are insufficient to address the invasive nature of AI surveillance. The article concludes with recommendations for comprehensive reform, including the urgent need for robust data protection legislation and judicial oversight mechanisms to preserve the delicate balance between national security, technological advancement, and fundamental rights.
Introduction
In August 2023, the Delhi High Court raised concerns over the deployment of facial recognition technology (FRT) by law enforcement agencies without an adequate legal framework or privacy safeguards. This development epitomizes the growing tension between technological advancement and constitutional protections in contemporary India. As artificial intelligence becomes increasingly integrated into surveillance infrastructure—from automated facial recognition systems in public spaces to predictive policing algorithms—the fundamental right to privacy faces threats of a scale and sophistication unprecedented in human history.
The right to privacy, recognized as a fundamental right under Article 21 of the Indian Constitution in the landmark Justice K.S. Puttaswamy (Retd.) v. Union of India (2017), now confronts challenges that the framers of the Constitution could scarcely have imagined. AI surveillance systems possess capabilities that extend far beyond traditional monitoring: they can identify individuals in crowds, predict behavior patterns, track movements across cities, and create comprehensive digital profiles without human intervention or oversight.
This issue assumes critical importance in the current legal scenario for several reasons. First, AI surveillance technologies are being rapidly deployed by government agencies and private entities alike, often without transparent policies or accountability mechanisms. Second, although the Digital Personal Data Protection Act, 2023 has now been enacted, its implementing rules remain pending, leaving a regulatory gap in which citizens stay vulnerable to privacy violations. Third, the COVID-19 pandemic accelerated the adoption of digital surveillance tools, normalizing intrusive monitoring practices that persist beyond the public health emergency.
This article seeks to examine the constitutional and statutory framework governing privacy rights in India, analyze the specific challenges posed by AI surveillance technologies, evaluate judicial responses to these emerging threats, and propose reforms necessary to protect privacy in an age of ubiquitous algorithmic monitoring.
Legal Framework
Constitutional Provisions
The foundation of privacy rights in India rests upon Article 21 of the Constitution, which guarantees the right to life and personal liberty. In Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1, a nine-judge bench of the Supreme Court unanimously declared that the right to privacy is a fundamental right intrinsic to Article 21 and Part III of the Constitution. The Court held that privacy includes three essential elements: decisional autonomy, informational self-determination, and the right to be left alone.
Article 19(1)(a), guaranteeing freedom of speech and expression, and Article 19(1)(d), protecting freedom of movement, also have significant privacy dimensions. Surveillance, particularly AI-enabled surveillance, can create chilling effects on these freedoms by monitoring and potentially deterring lawful expression and movement.
Statutory Framework
India’s statutory framework for privacy protection remains fragmented and inadequate in addressing AI surveillance challenges. The Information Technology Act, 2000 (IT Act) governs electronic data but was enacted before the proliferation of AI technologies. Section 43A of the IT Act requires body corporates to implement reasonable security practices for sensitive personal data, while Section 72A criminalizes unauthorized disclosure of personal information. However, these provisions contain significant exemptions for government agencies and lack specific provisions addressing AI-based processing.
The Digital Personal Data Protection Act, 2023 (DPDP Act), which received Presidential assent in August 2023, represents India’s first comprehensive data protection legislation. The Act establishes principles of consent, purpose limitation, and data minimization. However, critics argue that its broad exemptions for government agencies under Section 17, particularly concerning “sovereignty and integrity of India” and “security of the State,” create loopholes that could legitimize extensive AI surveillance without adequate safeguards.
Sector-specific regulations, such as the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016, govern biometric data collection but have themselves been subjects of privacy litigation. The Telegraph Act, 1885, and the Indian Telegraph Rules, 1951, regulate telecommunication interception but are antiquated and ill-equipped to address modern AI surveillance capabilities.
Notably absent is legislation specifically regulating artificial intelligence or algorithmic decision-making. The absence of AI-specific regulation creates uncertainty regarding liability, transparency, and accountability when AI systems infringe upon privacy rights.
Judicial Interpretation
Landmark Judgments
The Indian judiciary has progressively developed privacy jurisprudence, though it continues to grapple with technology-specific challenges.
Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1 stands as the cornerstone of privacy rights in India. The Supreme Court established that any infringement of privacy must satisfy a three-pronged test: (i) legality, requiring a law that authorizes the invasion; (ii) legitimate state aim, serving a legitimate goal; and (iii) proportionality, ensuring the extent of interference is proportionate to the need. This test, derived from European human rights jurisprudence, provides the framework for evaluating AI surveillance measures.
In Justice K.S. Puttaswamy v. Union of India (Aadhaar Judgment) (2019) 1 SCC 1, the Supreme Court upheld the constitutional validity of Aadhaar while reading down several provisions. Significantly, the Court emphasized data minimization, prohibited private entities from mandating Aadhaar authentication, and required robust data protection measures. The judgment recognized that biometric data collection creates unique privacy risks, particularly when combined with extensive databases—a concern directly applicable to AI surveillance systems.
People’s Union for Civil Liberties v. Union of India (1997) 1 SCC 301 established that telephone tapping must comply with procedural safeguards, including authorized cause and limited duration. While predating AI surveillance, the judgment’s emphasis on procedural protections remains relevant. However, modern AI systems that process publicly available data or conduct surveillance without traditional “tapping” may fall outside this judgment’s scope.
In Shreya Singhal v. Union of India (2015) 5 SCC 1, while primarily addressing free speech, the Supreme Court struck down Section 66A of the IT Act for vagueness. This judgment reinforces the principle that laws enabling surveillance or restricting rights must be precise and not overbroad—a standard many AI surveillance practices currently fail to meet.
Recent lower court interventions have directly addressed AI surveillance. In Manohar Lal v. Union of India (2023), the Delhi High Court questioned the deployment of facial recognition technology by Delhi Police, noting the absence of a legal framework, the lack of public consultation, and the potential for mass surveillance. The Court directed the government to explain the legal basis for FRT deployment and to assess its proportionality under the Puttaswamy test.
Emerging Concerns in Case Law
Judicial pronouncements reveal several recurring concerns: the need for explicit legislative authorization for surveillance; the importance of procedural safeguards and oversight mechanisms; the principle of purpose limitation; and the requirement that surveillance measures be the least intrusive means available. However, courts have struggled to apply these principles to AI systems that operate continuously, autonomously, and at scale.
Critical Analysis
Loopholes and Challenges
The intersection of AI surveillance and privacy rights reveals critical gaps in India’s legal framework that demand urgent attention.
Absence of AI-Specific Regulation: India lacks comprehensive legislation governing artificial intelligence development and deployment. Unlike the European Union’s proposed AI Act, which categorizes AI applications by risk level and imposes corresponding obligations, Indian law does not distinguish between AI-powered surveillance and traditional monitoring. This creates legal uncertainty and allows deployment of high-risk AI systems without adequate scrutiny.
Inadequate Transparency and Accountability: AI surveillance systems often operate as “black boxes,” with proprietary algorithms that resist external scrutiny. Citizens subjected to AI surveillance typically remain unaware of: what data is collected, how algorithms process this data, what decisions are automated, and whether human review occurs. The DPDP Act’s transparency requirements are undermined by national security exemptions that permit secretive surveillance programs.
Algorithmic Bias and Discrimination: Studies document that facial recognition technology exhibits significantly higher error rates for women and individuals with darker skin tones. When deployed in law enforcement contexts, these biases translate into discriminatory targeting and wrongful identifications. Indian law currently lacks mechanisms to audit AI systems for bias or require impact assessments before deployment in sensitive contexts.
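The bias audit that Indian law currently fails to mandate is, computationally, straightforward. The following sketch (illustrative Python only; the group labels and figures are hypothetical and not drawn from any real deployment) shows the core calculation such an audit would perform: the false-match rate of a face-matching system for each demographic group, and the worst-case disparity between groups.

```python
def false_match_rate(results):
    """results: list of (predicted_match, actual_match) booleans.

    The false-match rate is the share of genuine non-matches that the
    system wrongly flagged as matches (false positives / actual negatives).
    """
    non_matches = [r for r in results if not r[1]]
    if not non_matches:
        return 0.0
    return sum(1 for r in non_matches if r[0]) / len(non_matches)

def audit_by_group(results_by_group):
    """Compute per-group false-match rates and the worst-case disparity.

    A disparity ratio well above 1 indicates the system errs far more
    often for one group than another -- the pattern documented in FRT
    studies and the kind of finding a mandated audit would surface.
    """
    rates = {g: false_match_rate(r) for g, r in results_by_group.items()}
    nonzero = [v for v in rates.values() if v > 0]
    disparity = (max(nonzero) / min(nonzero)) if len(nonzero) >= 2 else 1.0
    return rates, disparity
```

For example, if a hypothetical group "A" suffers 2 false matches per 100 non-matches while group "B" suffers 10, the audit reports rates of 0.02 and 0.10 and a disparity ratio of 5: the system misidentifies members of group B five times as often. A legal requirement for pre-deployment audits would oblige operators to compute and disclose exactly such figures.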
Scope Creep and Function Creep: Surveillance technologies deployed for specific purposes frequently expand beyond their original scope. Contact tracing applications introduced during COVID-19 provide a cautionary example: data collected for public health purposes could potentially be repurposed for law enforcement or other governmental functions. The DPDP Act’s purpose limitation principle lacks robust enforcement mechanisms to prevent such creep.
Inadequate Remedies: When AI surveillance systems violate privacy rights, existing remedies prove insufficient. Affected individuals may not know they have been surveilled, lack standing to challenge secretive programs, or find courts reluctant to intervene in national security matters. The DPDP Act establishes a Data Protection Board, but its independence and powers remain to be tested.
Public-Private Surveillance Networks: AI surveillance increasingly involves partnerships between government agencies and private technology companies. This blurs lines of accountability, with each entity disclaiming responsibility for privacy violations. Private companies collecting vast amounts of data through commercial services create surveillance infrastructure that governments can access through purchase, subpoena, or partnership—circumventing restrictions on direct government collection.
Comparative Analysis
International jurisdictions offer instructive comparisons. The European Union’s General Data Protection Regulation (GDPR) provides robust protections including requirements for data protection impact assessments, restrictions on automated decision-making, and meaningful consent requirements. Several EU member states have imposed moratoria on facial recognition technology in public spaces pending regulatory clarity.
The United States presents a more fragmented approach, with sector-specific federal laws and varying state regulations. Cities including San Francisco, Boston, and Portland have banned government use of facial recognition technology, recognizing its risks to privacy and civil liberties. However, the absence of comprehensive federal data protection legislation leaves significant gaps.
China’s extensive AI surveillance infrastructure, including social credit systems and ubiquitous facial recognition, represents a cautionary counterexample where technological capability has overwhelmed privacy protections, resulting in pervasive state monitoring.
Recent Developments
Several recent developments highlight the evolving landscape of AI surveillance and privacy law in India.
Digital Personal Data Protection Act, 2023: The passage of India’s first comprehensive data protection law marks a significant milestone. However, civil liberties organizations have criticized its broad exemptions for government agencies, limited individual rights, and absence of restrictions on government surveillance. The Act’s implementation through rules—yet to be finalized—will determine its effectiveness in restraining AI surveillance.
National Intelligence Grid (NATGRID): NATGRID, operational since 2022, integrates databases from multiple agencies to create comprehensive profiles of individuals. While proponents argue it enhances security, critics warn it enables unprecedented surveillance without adequate safeguards. The system’s use of AI analytics to identify patterns and predict threats exemplifies the privacy concerns surrounding automated mass surveillance.
Facial Recognition Technology Projects: Multiple states have deployed or proposed FRT systems. The National Crime Records Bureau’s Automated Facial Recognition System (AFRS) aggregates databases of photographs from various sources. Civil liberties groups have challenged these deployments, arguing they lack legislative authorization and fail the proportionality test established in Puttaswamy.
Draft Digital India Act: The government has announced plans to replace the IT Act with a comprehensive Digital India Act. Early consultations suggest the legislation may address AI governance, platform accountability, and online safety. Privacy advocates urge that any new framework must include specific provisions regulating AI surveillance, mandating algorithmic transparency, and prohibiting discriminatory automated decision-making.
Judicial Developments: Courts continue to develop privacy jurisprudence. Recent cases address issues including workplace surveillance, COVID-19 contact tracing data retention, and law enforcement access to personal devices. These cases collectively suggest growing judicial recognition that digital-age privacy requires robust protections extending beyond traditional restrictions on physical searches and seizures.
Suggestions and Way Forward
Protecting privacy in the age of AI surveillance requires comprehensive legal reform, institutional innovation, and cultural change. The following recommendations provide a roadmap for balancing technological advancement with constitutional rights.
Enact AI-Specific Legislation: India urgently needs comprehensive legislation governing artificial intelligence. Such legislation should: classify AI applications by risk level; prohibit high-risk applications such as real-time biometric identification in public spaces absent compelling justification; require algorithmic impact assessments before deploying AI in sensitive contexts; mandate transparency regarding AI system capabilities and limitations; and establish liability frameworks for AI-caused harms.
Strengthen Data Protection Framework: The DPDP Act must be strengthened through rules and amendments that: narrow exemptions for government surveillance; require judicial authorization for surveillance except in genuine emergencies; mandate periodic review of surveillance authorizations; prohibit purpose creep through strict enforcement of data minimization; and establish meaningful penalties for violations.
Establish Independent Oversight Mechanisms: Effective privacy protection requires institutional independence. India should: ensure the Data Protection Board possesses genuine independence from government interference; establish a specialized AI oversight body with technical expertise; create mechanisms for civil society participation in surveillance policy decisions; and mandate regular transparency reports detailing surveillance activities.
Mandate Algorithmic Audits: AI systems used in surveillance should undergo: pre-deployment impact assessments evaluating privacy risks; regular audits for bias and accuracy; third-party testing where feasible; and public disclosure of audit results except where genuine security concerns require confidentiality.
Strengthen Judicial Capacity: Courts must develop institutional capacity to adjudicate technology cases effectively. This requires: specialized benches with technical expertise; court-appointed experts to assist in evaluating AI systems; streamlined procedures for challenging secretive surveillance programs; and protective orders allowing classified evidence review without public disclosure.
Promote Privacy-Enhancing Technologies: Government and industry should invest in privacy-preserving alternatives including: differential privacy techniques that enable data analysis while protecting individual privacy; federated learning approaches that train AI models without centralizing sensitive data; and homomorphic encryption enabling computation on encrypted data.
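Of the techniques listed above, differential privacy is the most readily illustrated. The sketch below (a minimal Python illustration, not any agency's actual implementation) shows the classic Laplace mechanism: a counting query over a database is released with calibrated noise, so that the published figure is statistically useful in aggregate while the presence or absence of any single individual's record is mathematically obscured.

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponential variables with rate
    # 1/scale is distributed as Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon):
    """Release a counting-query result with epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon suffices.
    Smaller epsilon means stronger privacy but noisier answers.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

In practice, an agency could publish, say, the number of persons matching a profile in a district via `dp_count` rather than the exact figure: repeated queries average out near the true count, yet no single release reveals whether a particular individual is in the database. Federated learning and homomorphic encryption pursue the same end, keeping raw personal data out of central hands, by architectural rather than statistical means.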
Enhance Public Awareness: Citizens cannot exercise privacy rights they do not understand. Government, civil society, and educational institutions must: conduct public education campaigns about AI surveillance risks; promote digital literacy; and empower individuals to make informed decisions about data sharing.
International Cooperation: Given AI’s global nature, India should: engage with international standard-setting bodies; consider adequacy determinations to facilitate responsible data transfers; and participate in multilateral discussions on AI governance while preserving regulatory sovereignty.
Conclusion
The right to privacy stands at a critical juncture in India. The constitutional guarantee established in Puttaswamy provides a robust foundation, but AI surveillance technologies threaten to render this protection hollow without corresponding legal and institutional reforms. The capabilities of modern surveillance systems—continuous monitoring, automated analysis, predictive profiling, and persistent data retention—create risks to privacy, autonomy, and democratic freedoms that demand urgent attention.
Current legal frameworks, developed for analog-era surveillance, prove inadequate for algorithmic monitoring. The DPDP Act represents progress but contains exemptions that permit extensive government surveillance without sufficient safeguards. Judicial doctrine has established important principles but struggles to address the scale and sophistication of AI systems.
The path forward requires recognizing that privacy and security need not be mutually exclusive. Well-designed legal frameworks can enable legitimate security measures while preserving constitutional rights. However, achieving this balance demands political will, technical expertise, and genuine commitment to democratic values.
As India positions itself as a global technology leader, the choices made today regarding AI surveillance will define not only the privacy rights of current citizens but the nature of Indian democracy itself. Will India chart a course that harnesses AI’s benefits while safeguarding fundamental freedoms? Or will technological capability outpace constitutional values, normalizing surveillance that would have been unthinkable a generation ago?
The answer lies in immediate, comprehensive reform. The right to privacy in the age of AI surveillance is not merely a legal question—it is a defining challenge for constitutional democracy in the twenty-first century.
Reference(S):
Cases
- Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1
- Justice K.S. Puttaswamy v. Union of India (Aadhaar Judgment), (2019) 1 SCC 1
- People’s Union for Civil Liberties v. Union of India, (1997) 1 SCC 301
- Shreya Singhal v. Union of India, (2015) 5 SCC 1
- Manohar Lal v. Union of India, W.P.(C) 6576/2020 (Delhi High Court, 2023)
Statutes
- The Constitution of India, 1950
- Information Technology Act, 2000
- Digital Personal Data Protection Act, 2023
- Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016
- Indian Telegraph Act, 1885
Books and Journals
- Basu, D.D., Commentary on the Constitution of India (LexisNexis, 12th ed., 2021)
- Datta, Pratap Bhanu, “The Future of Privacy in India,” Indian Law Review, Vol. 7, Issue 2 (2023)
- Khera, Reetika, Dissent on Aadhaar: Big Data Meets Big Brother (Orient Blackswan, 2019)
- Reddy, G.V., “Artificial Intelligence and Privacy Rights: An Indian Perspective,” Journal of Constitutional Law, Vol. 15, Issue 4 (2024)
- Sinha, Rishab Bailey & Srikrishna, B.N., Privacy and the State: The Story of India’s Data Protection Framework (Cambridge University Press, 2022)
Official Documents and Reports
- Data Protection Board of India, Annual Report 2023-24
- Law Commission of India, Report No. 287 on Privacy, Data Protection and Surveillance Reforms (2024)
- Ministry of Electronics and Information Technology, Draft Digital India Bill Consultation Paper (2023)
- NITI Aayog, National Strategy for Artificial Intelligence (2018)
News Reports and Online Resources
- “Delhi HC Questions Facial Recognition Use by Police,” The Hindu, August 15, 2023
- Internet Freedom Foundation, “The State of Surveillance in India 2023,” https://internetfreedom.in
- Software Freedom Law Centre, “Facial Recognition Technology in India: Privacy and Civil Liberties Concerns” (2023)
- “NATGRID Goes Live: What It Means for Privacy,” LiveLaw, January 10, 2022