
Navigating the Complexities of AI and Data Privacy: A Global Perspective

Authored By: RANDLE ELDAD AYOMIKUN

UNIVERSITY OF LAGOS (UNILAG)

Abstract

This article delves into the increasing difficulties that artificial intelligence (AI) poses to data privacy, especially the loopholes in current laws. It underscores that most data protection laws, though evolving, are not well equipped to handle the unique risks associated with AI, such as automated decision-making, lack of transparency, and algorithmic bias. By comparing how the EU, the US and India regulate AI, along with international rules, the article identifies both good practices and areas needing improvement. It concludes by proffering solutions, such as the need for AI-specific laws, stronger rights for individuals, better supervision, and international co-operation across countries. The objective is to provide a realistic and practical understanding of how laws should adapt to ensure that data privacy remains possible as AI evolves.

In this evolutionary and technological age, artificial intelligence (AI) has swiftly developed from a modern concept into an essential part of daily life. From virtual assistants and recommendation algorithms to predictive policing and self-driving cars, AI innovations are increasingly influencing personal, commercial and government decisions. Central to this evolution is the use of large volumes of elaborate and personal data. AI systems thrive on data, examining patterns to make predictions and decisions that affect both individuals and society. However, this unique data-processing capability raises key questions about individual privacy rights. As AI systems become more autonomous and opaque, concerns about how personal data is collected, stored, used and shared increase. Traditional data privacy laws, created in an era before complex AI systems, often struggle to keep up with these rapid developments. This article seeks to delve into the relationship between artificial intelligence and data privacy law. It critically examines the existing legal frameworks, identifies the challenges that artificial intelligence poses to privacy protection, and highlights areas where regulations were not carefully considered or implemented. Through comparative study and proposed solutions, the article argues for a flexible legal framework specially designed for AI systems, in order to ensure a proper balance between technological innovation and the fundamental rights of the people.

Even with some regions implementing wide-reaching data protection rules, many existing laws are either outdated or too unclear to effectively govern the nuances of modern AI. A major issue with current regulations is the lack of adequate control over automated decision-making. AI frequently makes or influences decisions that affect the lives of people and society as a whole, such as loan approvals, job offers, or even being flagged by law enforcement. However, few legal frameworks provide clear explanations of how these decisions are reached. The European Union’s General Data Protection Regulation (GDPR), specifically Article 22, attempts to address this by giving individuals the right to refuse decisions based solely on automated processing. But this right is limited and difficult to exercise, especially as AI is increasingly used for tasks like managing online content or targeted advertising, which may not be regarded as being “fully automated” or “producing significant effects”.[1]

[1] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation) [2016] OJ L119/1.

In countries such as India, the situation is even more complicated. The Digital Personal Data Protection Act 2023 (DPDPA) marks major progress for India’s legislation, but it lacks specific provisions regarding AI-driven profiling or automated decision-making. Similarly, in the United States, laws like the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) are limited. The lack of clarity in the meaning of terms like “consent”, “personal data”, “processing” and “profiling” makes governing AI even more difficult. Different regions have varying definitions, and AI systems that create inferred or derivative data push the boundaries of what is traditionally considered personal data. The Indian DPDPA, for example, introduces the concept of “deemed consent” for situations involving public interest, which could be misused, particularly when government bodies employ AI for surveillance or managing social welfare programmes. This might limit the freedom of individuals and undermine the constitutionally guaranteed right to privacy, as recognised in key judgments like Justice K.S. Puttaswamy v Union of India. The problems are only compounded by a failure to properly enforce the rules. Even when laws are in place to protect people, they are often not put into practice well because the laws are badly drafted. Regulators like the European Data Protection Board (EDPB) and the UK’s Information Commissioner’s Office (ICO) often lack the technical knowledge or skills to properly examine AI systems. In the US, enforcement is inconsistent, with many agencies governing different areas of AI and data privacy. Meanwhile, India’s newly formed Data Protection Board lacks both constitutional autonomy and the authority to launch investigations, raising doubts about its ability to effectively regulate powerful tech firms. Right now, most jurisdictions do not have laws specifically about AI.
They are mostly relying on general data protection laws, which do not adequately address the particular problems AI can cause. Issues like biased algorithms, the “black box” nature of AI, and the general opacity of machine learning need specific rules. The European Union is trying to address this with its proposed AI Act, which takes a risk-based approach, but it is still just a proposal and does not fully connect with existing privacy laws. Other countries, like India and the US, have not yet put forward complete AI governance models that go beyond ethical guidelines.[2][3][4][5][6][7]

[2] Justice K.S. Puttaswamy (Retd.) v Union of India (2017) 10 SCC 1.
[3] California Consumer Privacy Act 2018 (Cal Civ Code § 1798.100).
[4] California Privacy Rights Act 2020 (Cal Civ Code § 1798.100).
[5] European Data Protection Board, ‘Guidelines 05/2020 on consent under Regulation 2016/679’ (2020).
[6] Information Commissioner’s Office, ‘Big data, artificial intelligence, machine learning and data protection’ (ICO 2017).
[7] Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) COM(2021) 206 final.

Around the globe, various regions are experimenting with diverse strategies to govern AI, a logical approach given the distinct legal frameworks and political objectives of each. Europe is leading the charge, adopting a particularly forward-thinking approach centred on the potential hazards AI presents. It has put forward a thorough plan for regulating AI, sorting AI systems into different risk levels; picture it as a traffic-light system with red (prohibited), yellow (high-risk) and green (low-risk) categories.

Different categories have their own rulebooks to follow. Take high-risk applications, for instance: things like using AI for biometric identification or in police work. These face tight restrictions, such as ensuring high-quality training data, keeping detailed records and guaranteeing human oversight. In the United States, the FTC has raised concerns about algorithm-based discrimination, and Illinois’ BIPA statute imposes stringent limitations on businesses’ collection and use of biometric data. However, there is a lack of coherence because the US does not have a single national standard. With regulations varying across states, companies tend to operate in areas where the laws are favourable, a tactic referred to as “regulatory arbitrage”. India has started to specifically tackle the unique problems presented by AI. Earlier rulings by the Indian Supreme Court, like the Puttaswamy case, have created a foundation for individual rights which may serve as a basis for emerging AI laws. Policy papers and programmes such as NITI Aayog’s “Responsible AI” initiative promote values such as responsibility, transparency and honesty, but these documents remain aspirational and lack legal authority. On the global stage, guidelines from institutions such as the Organisation for Economic Co-operation and Development (OECD) and the United Nations Educational, Scientific and Cultural Organization (UNESCO) provide ethical standards for AI governance. These frameworks support universally accepted values such as equality and human rights; they influence domestic laws and promote international discussion on the safe use of AI, though they still lack legal backing.[8]

[8] NITI Aayog, ‘Responsible AI for All: Strategy for India’ (2021).

To create an AI and data privacy governance system that is both effective and prepared for the future, a variety of measures have to be taken. These steps must address the technical aspects of AI as well as reflect societal values, ethical standards and legal obligations. As AI is utilised in more and more aspects of society, such as the healthcare sector and the criminal justice system, there is a need for governance frameworks that do not become outdated, in order to ensure that the fundamental human rights of the people are upheld and guaranteed.

First and foremost, the top priority of any government must be to develop comprehensive laws specifically addressing AI, covering the issues that traditional data protection laws failed to. While many jurisdictions rely on data privacy laws such as the GDPR, CCPA or DPDPA, these laws were not created or designed to handle the complexities caused by autonomous learning systems processing large quantities of data. Laws made specifically for AI systems should be clear and easily comprehensible, with standard definitions of terms such as “inferred data” and “automated decision-making”. These definitions are crucial, because vague and ambiguous definitions make enforcement difficult and enable companies to find and exploit loopholes. Such legislation should identify the level of risk that each AI system poses, as seen in the European Union’s proposed AI Act. AI systems should be categorised into risk levels such as minimal, limited, high and unacceptable, depending on how much they can influence individual rights and social stability. For example, AI used in biometric authentication systems and credit scoring would be classed as high risk, and therefore subject to controls and regular audits. AI systems posing unacceptable risk, such as those used to manipulate human behaviour at scale, could be banned altogether. Before the deployment of AI systems, there should be a proper evaluation and assessment of the harm a system could pose, and assurance that alternative approaches are in place in case of an unforeseen danger. These assessments should be subject to scrutiny by independent bodies, who would approve usage, and they should be made publicly available to ensure transparency.

Additionally, legal frameworks must be improved to enhance the rights of individuals in relation to AI systems. Currently, most people are unaware that AI is involved in making decisions that affect their lives, such as loan approvals and job screenings. This disparity in knowledge creates a power imbalance in which individuals are at a disadvantage because they lack adequate information. An effective governance framework must introduce new digital rights functioning in the context of AI. These rights should include the right to an explanation, which ensures that an individual is given comprehensible information about how an AI system arrives at a particular decision. This is crucial in sectors like healthcare, finance and criminal justice, where an opaque algorithm can discriminate without accountability. Another important right is the right to contest automated decisions and to seek human review, which helps ensure that individuals are not victims of unjust outcomes caused by faulty or biased algorithms. There should be mechanisms, such as an independent body, to resolve disputes efficiently. These mechanisms should be supported by data protection authorities with adequate technical expertise to analyse AI systems and intervene when necessary. Furthermore, vulnerable populations, such as children and minority groups, should be specially protected. AI-driven educational tools that rank students should be examined and reviewed regularly to ensure fairness. In workplaces, algorithms that evaluate employee performance must respect employee rights, and employees should be informed in advance.

To ensure that AI is used properly, we need to be very clear about how consent is obtained, especially when dealing with personal information. This should follow essential rules like collecting only necessary data and using it only for the purposes agreed to; measures like these help to prevent the misuse of data. There is also a need for more effective rules, and mechanisms to ensure those rules are adhered to. Those in charge should be provided with the necessary powers and resources to examine how AI systems work, including the code and the data used to train them. Furthermore, because AI and data flows are interconnected across borders, international collaboration is pivotal. Countries should make deliberate efforts to harmonise regulations with global ethical standards and form alliances to enable data transfers and joint enforcement efforts.

As AI gradually becomes more powerful and impactful, we need to become much better at regulating it. The threats that AI poses, like privacy violations, biased algorithms and misuse by authoritarian regimes, are not abstract concerns but real problems that we face presently. Allowing these systems to run wild will lead to a loss of trust, creating social division and significantly weakening democratic systems. To address these issues, it is important for governments, universities, civil society groups and businesses to work together to create rules that are not just technically effective but also morally acceptable and fair to everyone. There is a need for special laws concerning AI that strengthen our rights to understand how it works, to challenge its decisions, and to ensure that humans retain responsibility for controlling these systems. When it comes to sensitive information, we need clear ways for people to give their consent. Independent supervisors with deep technical knowledge should be vested with the power and authority to audit AI systems and monitor the activities of the people who build them. It is also essential for countries to work together to set global rules for data protection and the responsible use of AI. By making sure AI development conforms to democratic ideals such as freedom, fairness, responsibility and privacy, we can enjoy its benefits without suffering its consequences. This careful and calculated approach is key to ensuring that AI becomes a tool for the advancement of human societies and not a cause of damage or destruction.

Bibliography

Primary Sources

Cases

Justice K.S. Puttaswamy (Retd.) v Union of India (2017) 10 SCC 1

Balogh v Hungary App no 47940/99 (ECHR, 20 July 2004)

Google Spain SL v Agencia Española de Protección de Datos (AEPD) (Case C-131/12) EU:C:2014:317

Schrems v Data Protection Commissioner (Case C-362/14) EU:C:2015:650

Legislation

California Consumer Privacy Act 2018 (Cal Civ Code § 1798.100)

California Privacy Rights Act 2020 (Cal Civ Code § 1798.100)

Digital Personal Data Protection Act 2023 (India)

General Data Protection Regulation, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 [2016] OJ L119/1

Human Rights Act 1998, s 8

Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) COM(2021) 206 final

Secondary Sources

Books / Reports

Information Commissioner’s Office, Big Data, Artificial Intelligence, Machine Learning and Data Protection (ICO 2017)

European Data Protection Board, ‘Guidelines 05/2020 on Consent under Regulation 2016/679’ (2020)

NITI Aayog, Responsible AI for All: Strategy for India (2021)

OECD, ‘Recommendation of the Council on Artificial Intelligence’ OECD/LEGAL/0449 (2019)

UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021)

House of Lords Select Committee on Artificial Intelligence, AI in the UK: Ready, Willing and Able? (HL Paper 100, 2018)

UK Government, National AI Strategy (2021)

Journal Articles

Paul Craig, ‘Theory, “Pure Theory” and Values in Public Law’ [2005] PL 440

JAG Griffith, ‘The Common Law and the Political Constitution’ (2001) 117 LQR 42

Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76

Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a Right to an Explanation Is Probably Not the Remedy You Are Looking For’ (2017) 16(1) Duke Law & Technology Review 18

Online Journals

Graham Greenleaf, ‘The Global Development of Free Access to Legal Information’ (2010) 1(1) EJLT <http://ejlt.org/article/view/17> accessed 27 July 2010

Luciano Floridi, ‘AI and Its New Winter: From Myths to Realities’ (2020) 29(1) Minds and Machines <https://link.springer.com/article/10.1007/s11023-019-09513-w> accessed 10 June 2025

Websites and Blogs

Sarah Cole, ‘Virtual Friend Fires Employee’ (Naked Law, 1 May 2009) <http://www.nakedlaw.com/2009/05/index.html> accessed 19 November 2009

Michael Veale, ‘How the GDPR Regulates AI: The Known, the Unknown and the Uncertain’(2018) <https://osf.io/preprints/lawarxiv/2dxu5/> accessed 10 June 2025

Newspaper Articles

Jane Croft, ‘Supreme Court Warns on Quality’ Financial Times (London, 1 July 2010) 3

Madhumita Murgia, ‘India’s AI Ambitions Are Hobbled by Data Privacy Concerns’ Financial Times (London, 18 December 2023)
