Authored By: Resha Aanabh
Aligarh Muslim University
Abstract
In recent years, AI technologies have risen rapidly, from facial recognition and predictive algorithms to the expanding Internet of Things. While these developments have facilitated research and assisted everyday life, they also undermine the right to privacy, a right understood to be at the heart of all rights. This paper evaluates how AI affects privacy in a continuously evolving digital landscape driven by surveillance systems, data collection, and automated decisions that undermine autonomy and informed consent. It also evaluates existing legal regimes, such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), as they influence the adoption of AI in society, and offers pragmatic solutions that balance technology and human rights.
Introduction
In the digital age, Artificial Intelligence (AI) has emerged as a transformative force, reshaping not only the way we think but also the way we work. Many industries, including law, healthcare, and finance, have been impacted by AI. According to Encyclopædia Britannica, AI refers to the ability of computers to perform tasks usually done by intelligent beings, such as problem solving and research, tasks often associated with the use of reasoning power. Most of the functions performed by AI rest on vast datasets that are sometimes collected without the explicit consent of users, raising a profound concern about their privacy.[1]
Article 12 of the Universal Declaration of Human Rights (UDHR) declares the Right to Privacy as a Fundamental Right and states that, “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks.”[2] The Supreme Court of India, in the case of K.S. Puttaswamy v. Union of India[3], held that Right to Privacy is an essential part of Article 21 of the Constitution of India[4] and is intrinsic to the entire constitutional scheme.
When AI systems enable unprecedented data surveillance, data mining, and behavioral profiling, they automatically challenge the principles of autonomy, consent, and anonymity, which ultimately leads to the violation of the privacy of the user. High-profile cases like China’s social credit system and Clearview AI’s large-scale scraping of social media images show the worldwide extent of these threats. Marginalized communities often suffer the most from intrusive technologies.
Although legislation such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) exists, it has only limited applicability to the challenges posed by AI. AI, though very useful, poses serious threats to users, and this paper aims to examine those threats. It also highlights the legislation that deals with AI.
AI Technologies Impacting Privacy
Artificial Intelligence (AI) technologies have transformed how data is collected, processed, and utilized. But their constant use has raised the significant question of how privacy can be maintained under the constant threat of unauthorized data breaches. Below are some of the AI features that disrupt the privacy of users:
- Facial Recognition System –
Functionality
It is a system used to identify an individual’s face by means of biometrics. Facial recognition is something we use every day, sometimes without knowing it, whether entering biometric data while boarding a flight or simply looking into a camera while paying for groceries. The feature works by first capturing an image of the individual’s face. Once the image is captured, the system detects the face within it and analyzes its features. After comparing those features against stored templates, it decides on an identification.[5]
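The capture-analyze-compare pipeline described above can be illustrated with a minimal sketch. This is not how any production system is implemented; real systems derive face embeddings with deep neural networks, whereas the hand-made vectors, the `identify` function, and the 0.9 threshold below are all invented for illustration.

```python
# Illustrative sketch only: real facial recognition systems derive
# embeddings from images with neural networks; here the "embeddings"
# are hand-made vectors and the gallery names are hypothetical.
import math

def cosine_similarity(a, b):
    """Compare two face embeddings; values near 1.0 mean a close match."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, gallery, threshold=0.9):
    """Return the enrolled identity whose stored template best matches
    the probe embedding, or None if no match clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical enrolled templates.
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob": [0.2, 0.8, 0.5],
}
print(identify([0.88, 0.12, 0.31], gallery))  # prints "alice"
```

The privacy concern follows directly from this structure: the gallery of stored templates is exactly the kind of biometric database that, if held unencrypted, cannot be "reset" the way a password can.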
Privacy Risks
- In FRT, the consent of the user is not taken, which ultimately leads to a breach of privacy. Technologies used for surveillance and for building databases collect data without obtaining any consent from their users. Although data privacy laws require that an organization obtain a user’s consent before taking such information, FRT deployments routinely show non-compliance with this principle.
- A major problem FRT faces is that of unencrypted faces, where an individual’s facial data can be accessed without that person knowing; unlike passwords and credit card information, it cannot be changed, making it a prime target for cybercriminals. In 2019, India’s Aadhaar system faced allegations of vulnerabilities allowing hackers to access the unencrypted biometric data of over 1 billion citizens.[6]
- Data Mining
Functionality
Data mining involves examining data to reveal concealed insights; this is also referred to as knowledge discovery. Data mining involves examining vast quantities of typically unprocessed data and applying various techniques to uncover patterns and insights that can assist companies in making decisions, reducing risks, streamlining operations, improving fund allocation, and much more.[7]
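The pattern-discovery idea described above can be sketched with one classic data-mining task, frequent co-occurrence counting over transaction records. The transactions and the `min_support` parameter below are invented for illustration; real pipelines run similar logic over far larger datasets, which is where the consent concerns arise.

```python
# Illustrative sketch: counting frequently co-occurring items in
# transaction records, the core idea behind market-basket data mining.
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support=2):
    """Return item pairs that appear together in at least
    min_support transactions."""
    counts = Counter()
    for items in transactions:
        for pair in combinations(sorted(set(items)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Hypothetical purchase records.
transactions = [
    ["bread", "milk", "eggs"],
    ["bread", "milk"],
    ["milk", "eggs"],
    ["bread", "butter"],
]
print(frequent_pairs(transactions))
# prints {('bread', 'milk'): 2, ('eggs', 'milk'): 2}
```

The same counting logic applied to browsing histories or location traces is what turns innocuous-looking records into behavioral profiles.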
Privacy Risks
- Data mining collects user information without explicit consent, which raises serious concerns about privacy. Gathering data from various sources, like website cookies and social media applications, poses a real threat to users, especially when sensitive information is involved. Apps like Facebook ask for permission to access contacts and real-time location without expressly stating the reason for obtaining such information.
- There is a risk that data collected with the user’s consent for one purpose will be used for other purposes. Although this is termed secondary use, it still raises serious issues that are prevalent in data mining.
- AI can access sensitive information through data mining, and that information can be shared with third parties, which is a very serious issue. In 2018, it was revealed that Cambridge Analytica had harvested data from millions of Facebook users to build psychological profiles for political campaigns. The incident highlighted how data mining can affect democratic processes.[8]
- Internet of Things (IoT) Devices
Functionality
The Internet of Things (IoT) refers to a system of interconnected devices equipped with sensors, software, and various technologies that gather and transmit data over the internet. These devices encompass a range of items such as household appliances, wearable technology, industrial equipment, and vehicles. By engaging with their surroundings and other devices, IoT devices generate vast amounts of data. [9]
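The data-gathering described above can be made concrete with a sketch of the kind of telemetry record a smart home device might transmit. The field names and the `build_reading` helper are invented for illustration, not taken from any real device's protocol.

```python
# Illustrative sketch: a telemetry payload a hypothetical smart
# thermostat might upload. Field names are invented for illustration.
import json
import time

def build_reading(device_id, temperature_c, occupancy):
    """Package one sensor reading as a JSON payload for upload."""
    return json.dumps({
        "device_id": device_id,
        "timestamp": int(time.time()),
        "temperature_c": temperature_c,
        # A seemingly mundane field that reveals when residents are home.
        "occupancy_detected": occupancy,
    })

payload = build_reading("thermostat-42", 21.5, True)
print(payload)
```

Even this tiny record illustrates the privacy stakes: aggregated over weeks, occupancy and timestamp fields alone reconstruct a household's daily routine.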
Privacy Risks
- Some IoT devices are built on poor security policies and, as a consequence, are prone to hacking. In 2019, Ring home security cameras were hacked, leading to the exposure of private footage.
- Many IoT devices don’t have a specified privacy policy, and users are often unaware of what data is collected or how it is used. Instances like these breach the trust of people who use such devices.
Laws on Artificial Intelligence
Many countries have recognized the hazardous impact of AI on the privacy rights of users, and as a result, some of them have formulated legislation that limits AI. Below are some of the approaches different nations have adopted:
- European Union (EU)
The EU’s Artificial Intelligence Act is the world’s first comprehensive regulation of AI, and the first time the European Commission has legislated on the technology as such. The Act was proposed in April 2021 and entered into force on 1 August 2024. It primarily addresses privacy issues arising from the use of AI, particularly concerning biometric data (e.g., facial recognition). The law prohibits manipulative AI techniques that exploit vulnerable people, and it bans unauthorized real-time biometric identification in public spaces. The Act takes a risk-based approach: an AI system that poses a threat to a person’s safety or fundamental rights must undergo a risk assessment, and affected persons have scope for legal recourse. Violators can face penalties of up to €35 million or 7% of global annual turnover, whichever is higher.[10]
- United States
The decentralized approach to regulating artificial intelligence in the United States mirrors its overall governance style. Most regulatory practices and policies target individual sectors, and the AI domain is no different. Consequently, there is no extensive federal regulatory structure specific to artificial intelligence. Nevertheless, the US has various sector-focused agencies and organizations that tackle challenges linked to the advancement of AI. For instance, with respect to AI applications, the Federal Trade Commission (FTC) focuses on consumer protection and aims to ensure fair and transparent business practices. The National Highway Traffic Safety Administration (NHTSA) oversees the safety of AI-driven technologies, particularly their application in self-driving vehicles. Several states have enacted their own regulations. For instance, the CCPA established rigorous obligations for businesses handling consumer data, obligations that apply equally to those employing AI technologies. In general, although AI regulation in the United States is not centralized, it is balanced by a comprehensive sectoral approach.[11]
- India
At present, India does not have any specific law regulating AI. But existing statutes like the Information Technology Act, 2000[12], and the Digital Personal Data Protection Act, 2023[13], provide a foundation for laws that may be enacted in the near future for the regulation of AI. Until then, these statutes offer the only insight into how AI is regulated in India.
Limitations of Existing AI Laws
- Lack of AI-Specific Provisions - At present, laws like the CCPA and the GDPR govern the use of AI, but they were enacted for general data protection, not for AI-specific issues like facial recognition or unencrypted biometric data. As a result, these laws provide no detailed provisions for the challenges posed by AI. This gap allows companies to exploit unencrypted data, such as facial recognition features, without violating these laws.
- Slow Regulatory Adaptation - AI technology is evolving at a very fast pace, but the laws governing it take years to implement. Rigid processes of drafting, passing, and then enforcing legislation require much time, which delays implementation. This pacing problem leaves unencrypted data stored without regulation and, as a result, increases the risk of unauthorized access and surveillance.
- Fragmentation across jurisdictions and inconsistently applied standards - AI is a tool applied across the globe, but the regulations imposed on it are jurisdictional, resulting in fundamentally inconsistent standards and complicating the enforcement of rules against cross-border AI systems that do not encrypt facial data. Because of these inconsistencies, large corporations can avoid strict privacy protections simply by operating in jurisdictions with weaker regulations.
- Limited capacity of biometric data regulation - Few laws contextualize the unique risks of biometric data, specifically unencrypted facial data. Unlike passwords, this kind of data is permanent and can never be changed. Most existing regulatory systems categorize biometrics as general personal data, thereby playing down their sensitivity. Without targeted laws, unencrypted facial data risks residing in unregulated data stores, many of which are maintained ineffectively and without information security. The 2019 Aadhaar breach is a stark example of the need for the highest level of security, given that it exposed millions to identity theft. The Clearview AI case is another instance highlighting the dangers of unregulated biometric data.
- Lack of consideration of algorithmic bias - AI legislation falls short in addressing algorithmic bias in facial recognition and predictive systems, which tends to disproportionately harm marginalized communities. Biased facial datasets may entail misidentification and discrimination based on race and gender, yet almost no regulations mandate bias audits or mitigation strategies. The biases of existing unencrypted facial datasets continue to exacerbate systemic inequalities, particularly those affecting racial minorities, women, and non-binary persons, as demonstrated by NIST’s 2019 study on facial recognition.
- Weakness of privacy-focused regulation - Certain jurisdictions are more interested in advancing their regional economy and technological progress than in protecting personal privacy. Accordingly, scant to no AI regulation exists within them. This is especially clear in jurisdictions with considerable AI sectors or authoritarian tendencies, where such priorities enable privacy-invasive practices. Consider China’s social credit system as an example of state practices that undermine the rights of individuals.
Recommendations to Protect Privacy from AI
- Pass AI Specific Privacy Legislation:
Enact comprehensive AI-specific statutes, such as the EU’s AI Act (2024), requiring encryption, informed consent, and data minimization for biometric data, ensuring safeguards against the misuse of unencrypted facial data (e.g., Clearview AI’s practices).
- Require Encryption Standards:
Mandate end-to-end encryption for biometric data, including exploring homomorphic encryption for real-time AI, to protect unencrypted facial data and address weaknesses exposed by biometric data breaches (e.g., India’s Aadhaar breaches of 2018-2019).
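One design principle behind this recommendation can be sketched in code: never store the raw biometric template at all, only a salted, keyed transform of it. The sketch below uses only Python's standard library and is a deliberate simplification; production systems should use vetted authenticated encryption (e.g., AES-GCM via a maintained cryptography library), and the function names here are invented for illustration.

```python
# Illustrative sketch only: production systems should use vetted
# authenticated encryption. This shows the weaker but related idea
# of storing only a salted, keyed digest of a biometric template,
# so a database leak does not expose the raw biometric data.
import hashlib
import hmac
import secrets

def protect_template(raw_template: bytes, key: bytes):
    """Derive a salted, keyed digest; the raw template can then be
    discarded instead of stored."""
    salt = secrets.token_bytes(16)
    digest = hmac.new(key, salt + raw_template, hashlib.sha256).digest()
    return salt, digest

def verify_template(raw_template: bytes, key: bytes, salt: bytes,
                    digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hmac.new(key, salt + raw_template, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, digest)

key = secrets.token_bytes(32)
salt, stored = protect_template(b"face-template-bytes", key)
print(verify_template(b"face-template-bytes", key, salt, stored))  # prints True
```

Note one simplification: exact-match digests work for stable secrets, whereas real biometric samples are noisy, which is precisely why techniques like homomorphic encryption and cancelable biometrics are active areas of research.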
- Improve Transparency and Consent:
Require precise disclosures and opt-out methods for AI systems – including facial recognition in public spaces or home IoT devices like Amazon Alexa – so users have agency and informed consent.
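The opt-in consent mechanics this recommendation calls for can be sketched as a simple gate: processing is refused unless an explicit, revocable consent record exists. The `ConsentRegistry` class and purpose strings below are invented for illustration, not drawn from any real statute or product.

```python
# Illustrative sketch of an opt-in consent gate: biometric processing
# is refused unless the user has an explicit, recorded, revocable
# consent entry. Class and purpose names are hypothetical.
class ConsentRegistry:
    def __init__(self):
        self._records = {}  # user_id -> set of consented purposes

    def grant(self, user_id, purpose):
        self._records.setdefault(user_id, set()).add(purpose)

    def revoke(self, user_id, purpose):
        # Revocation (opt-out) must be as easy as granting consent.
        self._records.get(user_id, set()).discard(purpose)

    def has_consent(self, user_id, purpose):
        return purpose in self._records.get(user_id, set())

def process_face_image(user_id, image, registry):
    """Refuse to run facial recognition without recorded consent."""
    if not registry.has_consent(user_id, "facial_recognition"):
        raise PermissionError("no recorded consent for facial recognition")
    return f"processed image for {user_id}"

registry = ConsentRegistry()
registry.grant("user-1", "facial_recognition")
print(process_face_image("user-1", b"...", registry))
```

The design point is that consent is checked at the moment of processing and per purpose, so data consented for one purpose cannot silently be reused for another.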
- Conduct Periodic Bias Audits:
Require independent audits of AI systems to minimize bias and harm from facial recognition systems, which, as in the case of Robert Williams’ wrongful arrest (2020), pose a significant risk to marginalized communities.
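One core metric such an audit would compute can be sketched directly: the false-match rate broken down by demographic group, in the spirit of NIST-style demographic evaluations. The trial records and group labels below are invented toy data, not results from any real system.

```python
# Illustrative sketch of one bias-audit metric: false-match rate per
# demographic group. Records are invented toy data: each tuple is
# (group, predicted_match, true_match) for one comparison trial.
from collections import defaultdict

def false_match_rate_by_group(records):
    """Among impostor trials (true_match is False), what fraction did
    the system wrongly accept, per group?"""
    tallies = defaultdict(lambda: [0, 0])  # group -> [false_matches, impostor_trials]
    for group, predicted, actual in records:
        if not actual:  # impostor trial: a predicted match is a false match
            tallies[group][1] += 1
            if predicted:
                tallies[group][0] += 1
    return {g: fm / n for g, (fm, n) in tallies.items() if n}

records = [
    ("group_a", False, False), ("group_a", False, False),
    ("group_a", True, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", True, True),
]
print(false_match_rate_by_group(records))
```

An audit mandate would require publishing such per-group rates and acting when they diverge, since a system with equal overall accuracy can still fail one group far more often than another.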
- Create international AI privacy framework:
Formulate a UN-managed charter (see the OECD AI Principles, 2019, and the Paris Charter, 2025) to coordinate national and international standards and protect against the exploitation of weak jurisdictions, as in Clearview AI’s activity in the U.S.
- Promote Digital Literacy:
Initiate public awareness campaigns regarding AI privacy risks that empower users to control data collection and mitigate the misuse of unencrypted data.
- Regulate Authoritarian Surveillance:
Engage meaningfully on the international stage to circumscribe the use of AI surveillance systems by authoritarian regimes, such as China’s social credit system, and to limit the use of unencrypted facial data collected by facial recognition systems, through sanctions or human rights legislation.
Conclusion
The rapid rise of artificial intelligence (AI) technologies, including facial recognition, data mining, predictive algorithms, and Internet of Things (IoT) devices, has ushered in an age of unprecedented innovation that has benefited industry and increased global connectivity. However, as discussed in this paper, innovation has its costs, and the cost is individual privacy rights. The widespread use of unencrypted facial data, from instances such as Clearview AI’s unauthorized facial database and the social credit system in China, along with advances in data mining and the surveillance of individuals through IoT technology, emphasizes a pressing need to address privacy violations with new, dedicated legislation. Existing legal frameworks, like the EU’s AI Act and the GDPR, are a beginning, but in the face of free-market innovation in AI, they are undone by slow legislative development, jurisdictional inconsistency, and weak enforcement, especially when it comes to the use of biometric data. To balance innovation and privacy, this paper has proposed AI-specific legislation, mandatory encryption, transparency, bias auditing, global regulatory collaboration, digital literacy campaigns, and limitations on government or state access to AI technology. Collectively, these actions would enhance the likelihood that individual autonomy is preserved, alongside a call for responsible AI development. As AI continues to advance, so must research and collaboration on a global basis, to make sure that our technological convenience does not trample over our first principles of privacy, a difficult balance to negotiate in the digital age.
Reference(S):
Websites
- Ahmed HSA, ‘2022 Volume 51 Facial Recognition Technology and Privacy Concerns’ (ISACA, 21 December 2022) <https://www.isaca.org/resources/news-and-trends/newsletters/atisaca/2022/volume-51/facial-recognition-technology-and-privacy-concerns> accessed 23 July 2025
- ‘Artificial Intelligence’ (Encyclopædia Britannica, 22 July 2025) <https://www.britannica.com/technology/artificial-intelligence> accessed 23 July 2025
- dotData, ‘What Is Predictive Data Mining?’ (dotData, 23 November 2024) <https://dotdata.com/blog/what-is-predictive-data-mining/> accessed 23 July 2025
- ‘EU AI Act: First Regulation on Artificial Intelligence: Topics: European Parliament’ (Topics | European Parliament, 8 June 2023) <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence#:~:text=In%20April%202021%2C%20the%20European,risk%20they%20pose%20to%20users.> accessed 23 July 2025
- Savio, ‘AI Regulations around the World – Spiceworks’ (Spiceworks, 30 April 2024) <https://www.spiceworks.com/tech/artificial-intelligence/articles/ai-regulations-around-the-world/> accessed 23 July 2025
- Sharma R, ‘10 Data Privacy Issues in Data Mining and Their 2025 Impact’ (upGrad blog, 25 March 2025) <https://www.upgrad.com/blog/data-privacy-issues-in-data-mining/> accessed 23 July 2025
- Terekhin A, ‘Facial Recognition: What Is It and How to Employ’ (Regula, 17 October 2024) <https://regulaforensics.com/blog/employ-face-recognition-process-in-identity-verification/> accessed 23 July 2025
- ‘What Is Ai Iot’ (What is AI IoT | HPE India) <https://www.hpe.com/in/en/what-is/ai-iot.html> accessed 23 July 2025
Acts and Statutes
- Constitution of India 1950
- Digital Personal Data Protection Act 2023 (India).
- Information Technology Act 2000 (India).
- Universal Declaration of Human Rights (adopted 10 December 1948) UNGA Res 217 A(III) (UDHR)
Case Laws
K.S. Puttaswamy v Union of India (2017) 10 SCC 1.
[1] ‘Artificial Intelligence’ (Encyclopædia Britannica, 22 July 2025) <https://www.britannica.com/technology/artificial-intelligence> accessed 23 July 2025
[2] Universal Declaration of Human Rights (adopted 10 December 1948) UNGA Res 217 A(III) (UDHR), art 12.
[3] K.S. Puttaswamy v Union of India (2017) 10 SCC 1.
[4] Constitution of India 1950, art 21.
[5] Terekhin A, ‘Facial Recognition: What Is It and How to Employ’ (Regula, 17 October 2024) <https://regulaforensics.com/blog/employ-face-recognition-process-in-identity-verification/> accessed 23 July 2025
[6] Ahmed HSA, ‘2022 Volume 51 Facial Recognition Technology and Privacy Concerns’ (ISACA, 21 December 2022) <https://www.isaca.org/resources/news-and-trends/newsletters/atisaca/2022/volume-51/facial-recognition-technology-and-privacy-concerns> accessed 23 July 2025
[7] dotData, ‘What Is Predictive Data Mining?’ (dotData, 23 November 2024) <https://dotdata.com/blog/what-is-predictive-data-mining/> accessed 23 July 2025
[8] Sharma R, ‘10 Data Privacy Issues in Data Mining and Their 2025 Impact’ (upGrad blog, 25 March 2025) <https://www.upgrad.com/blog/data-privacy-issues-in-data-mining/> accessed 23 July 2025
[9] ‘What Is Ai Iot’ (What is AI IoT | HPE India) <https://www.hpe.com/in/en/what-is/ai-iot.html> accessed 23 July 2025
[10] ‘EU AI Act: First Regulation on Artificial Intelligence: Topics: European Parliament’ (Topics | European Parliament, 8 June 2023) <https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence#:~:text=In%20April%202021%2C%20the%20European,risk%20they%20pose%20to%20users.> accessed 23 July 2025
[11] Savio, ‘AI Regulations around the World – Spiceworks’ (Spiceworks, 30 April 2024) <https://www.spiceworks.com/tech/artificial-intelligence/articles/ai-regulations-around-the-world/> accessed 23 July 2025
[12] Information Technology Act 2000 (India).
[13] Digital Personal Data Protection Act 2023 (India).