
Finding a Balance in Global Governance: Artificial Intelligence and Human Rights

Authored By: SAMEERA P. S

BHARATA MATA SCHOOL OF LEGAL STUDIES, ALUVA, KERALA

Abstract

This paper investigates the dynamic link between human rights and artificial intelligence (AI) in the complicated field of international governance. From a rights-based viewpoint, it studies the impacts of AI on fundamental liberties, considering Indian national law as well as international soft-law instruments. The approach integrates three empirical case studies (algorithmic bias in hiring, AI-driven refugee screening, and facial-recognition policing) with doctrinal legal study, court judgments, and policy papers to demonstrate the genuine human rights at stake. The main arguments are that the Digital Personal Data Protection Act of 2023 only partly addresses constitutional gaps; that international standards, such as the UN Guiding Principles and the EU AI Act, offer useful templates but remain fragmented; and that judicial authorities have only just begun to deal with the unique difficulties presented by AI. The article concludes by proposing a threefold solution: a rights-based certification system, a binding treaty approach, and inclusive multi-stakeholder governance forums. Together, these measures suggest that future global AI regulation can be sustainable by coordinating innovation with the protection of human rights.

Introduction

Beyond its scholarly foundations, artificial intelligence has spread into important sectors including healthcare, finance, the judicial system, and social welfare. Its technical power lies in machine learning algorithms able to examine enormous amounts of data and predict outcomes. But if these capabilities are not governed by robust legal protections, they can violate privacy, equality, due process, and free speech.[1]

A legal definition of artificial intelligence should highlight autonomy, data-driven decision-making, and opaque “black-box” reasoning. Unlike conventional software, modern AI systems react dynamically to fresh data, raising new questions of responsibility and transparency. Automated profiling or predictive surveillance can undermine individual dignity and participation in public life, thereby implicating core human rights standards.[2]

The stakes rise in a world economy where data flows and artificial intelligence services cross borders without consistent oversight. States must balance encouraging innovation, deemed vital for competitiveness, against adhering to the Universal Declaration of Human Rights and binding treaties like the International Covenant on Civil and Political Rights.

The lack of a single worldwide governance system results in legal fragmentation: some nations emphasise privacy, others bias reduction, and still others only economic efficiency.

This fragmentation creates significant legislative gaps. India, for example, passed the Digital Personal Data Protection Act in 2023, but the Act still has to be reconciled with constitutional safeguards under Article 21, and its enforcement mechanisms remain to be specified.

Though they are not legally enforceable, soft-law instruments at the global level, like the OECD AI Principles and the UN Guiding Principles on Business and Human Rights, provide guidance. Only recently have judicial bodies such as the Indian Supreme Court and the European Court of Human Rights begun to decide AI-related cases, usually by drawing analogies to existing privacy or discrimination principles.

Part I examines India’s domestic legal context; Part II surveys international normative instruments; Part III highlights important judicial interpretations; Part IV presents case studies showing the rights-related dangers of artificial intelligence; and Part V recommends a treaty model, rights-based certification, and multi-stakeholder governance. The article ends with a discussion of how to establish a just and balanced worldwide AI governance framework.

India’s Framework for Human Rights and Artificial Intelligence

India’s constitutional system strongly protects life and liberty under Article 21, which is generally interpreted to include privacy and informational self-determination.[3]

The Constitution, however, does not explicitly address algorithmic control and automated decision-making. Consequently, Parliament enacted the Digital Personal Data Protection Act of 2023 (“DPDP Act”), which regulates “automated decision-making” and seeks to define the accountability of data fiduciaries.[4]

The DPDP Act requires explicit consent for the processing of sensitive personal data and sets out data-processing principles, including purpose limitation, lawfulness, and data minimisation. It also requires data fiduciaries to conduct Data Protection Impact Assessments before beginning high-risk processing. Among the Act’s main elements are the right to request explanations of automated decisions and to seek redress from a Data Protection Board.[5]

Still, gaps remain despite these developments. First, the structure of the Act’s enforcement is unsettled: the membership, budget, and adjudicatory powers of the Data Protection Board have not yet been specified. This delay weakens individuals’ ability to pursue timely remedies. Second, the DPDP Act permits broad exceptions for “sovereign functions,” effectively giving the government sweeping surveillance authority.[6]

Third, the Act’s consent-based strategy may be unsuitable for large-scale public sector AI deployments such as predictive policing, where obtaining informed consent is challenging.[7]

The DPDP Act is supplemented by sectoral rules, such as the Reserve Bank of India’s AI guidance for the banking sector, which stress transparency, auditability, and explainability. These guidelines, however, have no legal force and do not apply beyond banking.

Delhi and Telangana have set up state-level AI-ethics advisory boards to develop context-specific frameworks. These initiatives reflect a growing recognition that data protection rules and human rights impact assessments must be integrated. Without clear legal obligations or legislative mandates, however, advisory guidance risks remaining merely aspirational.

Taken together, India’s national framework shows a favourable shift toward rights-centric AI regulation. The DPDP Act tracks constitutional privacy safeguards and international best practices. Still, the lack of clear enforcement procedures, broad exemptions, and limited applicability to public sector AI projects leave important human rights questions unanswered.

Artificial Intelligence and Global Normative Frameworks

Most worldwide regulation of artificial intelligence has relied on soft law developed by international bodies. Although these soft-law instruments articulate principles, they do not create legally binding obligations. This part covers the UN Guiding Principles on Business and Human Rights, the OECD AI Principles, the proposed EU AI Act, and other major statements.

The UN Guiding Principles on Business and Human Rights set out the “Protect, Respect, and Remedy” framework, which demands that governments protect human rights against corporate abuse and asks companies to carry out human rights due diligence.[8] Originally intended for extractive sectors, their due diligence approach has been applied by both public and private actors in the procurement and deployment of artificial intelligence.

The OECD Recommendation on Artificial Intelligence (2019) specifies five complementary values-based principles: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability.[9] Although the Recommendation calls on member states to create national policies that mirror these principles, it provides neither enforcement mechanisms nor rights-based certification schemes.

By contrast, the European Commission seeks to establish a risk-based, legally enforceable framework through its proposed Artificial Intelligence Act.[10] It classifies AI systems by risk level into unacceptable, high, limited, and minimal categories. High-risk systems such as biometric recognition must undergo third-party conformity assessments, maintain complete technical documentation, and operate under human oversight. Political negotiations, however, threaten to narrow the scope of the riskiest categories and weaken enforcement powers.
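To make the tiered logic concrete, the following is a minimal Python sketch of how a risk-based structure of this kind might be modelled. The example systems and the obligation summaries are simplified assumptions for illustration only; they are not the Act’s actual text or definitive category assignments.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified tiers modelled on the proposed EU AI Act's structure."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "third-party conformity assessment, technical documentation, human oversight"
    LIMITED = "transparency duties, e.g. disclosing that the user is interacting with AI"
    MINIMAL = "no additional obligations"

# Hypothetical examples of systems mapped to tiers (illustrative only).
EXAMPLE_SYSTEMS = {
    "government_social_scoring": RiskTier.UNACCEPTABLE,
    "remote_biometric_identification": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def obligations_for(system: str) -> str:
    """Return a one-line summary of the obligations a tier attracts."""
    tier = EXAMPLE_SYSTEMS.get(system, RiskTier.MINIMAL)
    return f"{system}: {tier.name} -> {tier.value}"

for name in EXAMPLE_SYSTEMS:
    print(obligations_for(name))
```

The design point the sketch captures is that obligations attach to the tier, not to the individual system, which is why the political fight over which uses count as “high-risk” matters so much for enforcement.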

Many other soft-law tools make up the normative mosaic. The UNESCO Recommendation on the Ethics of Artificial Intelligence encourages member states to guarantee that AI development and application uphold human rights, equality, fairness, and dignity.[11] The G20 AI Principles reaffirmed the OECD values in 2020 and called for cross-border data flows, underscoring the tension between privacy, sovereignty, and the digital economy.[12] Civil society coalitions like the Partnership on AI and the Global Network Initiative have developed practical toolkits for assessing algorithms and engaging with affected people.

Although these normative frameworks represent a general consensus on basic principles, their lack of conflict-resolution processes and uneven enforcement have limited their impact. Because governments and corporations may choose to embrace only the principles that fit their national interests or market needs, human rights advocates are left to navigate a patchwork of voluntary commitments.

Judicial Interpretations

Courts have begun to consider AI-driven harms, often by applying existing human rights law to new technological situations. In India, Justice K.S. Puttaswamy (III) v. Union of India delivered the historic judgment recognising privacy as a fundamental right under Article 21, encompassing data protection and informational self-determination.[13] Although the case did not directly address artificial intelligence, its theoretical foundations have since shaped courts’ assessment of automated profiling and monitoring.[14]

In S. and Marper v. United Kingdom, the European Court of Human Rights found that the indefinite retention of DNA profiles breached the right to privacy guaranteed under Article 8 of the European Convention on Human Rights.[15] The case showed that state-managed databases must be subjected to thorough proportionality analysis, a principle that could readily extend to AI-dependent biometric systems.

The United States case Carpenter v. United States extended Fourth Amendment safeguards to cell-site location data, emphasising the heightened privacy interests at stake in aggregated digital footprints.[16] Although the case did not involve artificial intelligence algorithms, Carpenter’s recognition of evolving informational harms informs the wider debate on automated data analysis.

Beyond case law, judicial and regulatory commentary increasingly stresses AI’s accountability gap. In R (Bridges) v. South Wales Police,[17] the UK Court of Appeal held that the police’s use of facial recognition was inconsistent with data protection rules because it lacked sufficient transparency and oversight. In Canada, the Privacy Commissioner has raised due process and equality concerns about algorithmic decision-making in the distribution of social benefits.[18]

All of these judicial interventions apply accepted human rights standards to AI-enabled intrusions. Courts, however, have yet to develop clear rules addressing the opacity of artificial intelligence, the dubious provenance of training data, or cross-border data processing. As caseloads grow, courts will need to develop AI-specific doctrines on explainability, effective human control, and remedies.

Case Studies

Three instructive instances illustrate how the use of artificial intelligence can jeopardise human rights and expose the flaws in present governance systems.

Facial Recognition in Policing. Police forces in many cities, including London, New Delhi, and San Francisco, use live facial recognition to find suspects in crowds. In Bridges, the Court of Appeal in England overturned the South Wales Police pilot project because the force had not fully disclosed its policy documents or conducted proportionality assessments.[19]

In India, privacy advocates challenged the Karnataka State Police’s biometric surveillance plan, arguing that it lacked legal authorisation and could chill free assembly.[20] These instances demonstrate how, in the absence of clear legal norms specifying permitted uses, facial recognition systems facilitate large-scale surveillance that infringes on privacy and freedom of movement.

Algorithmic Bias in Hiring. Large corporations increasingly employ AI-driven recruitment tools to assess applications and predict candidate performance. In 2018, Amazon abandoned a machine-learning recruiting tool after determining that it penalised applications containing the word “women’s” and ranked graduates of women’s colleges lower.[21]

The tool’s reliance on historical data mirrored ingrained gender imbalances, illustrating how algorithmic learning can perpetuate systemic discrimination. Title VII of the US Civil Rights Act and other anti-discrimination legislation face evidentiary difficulties: victims must show a disparate impact and trace the harm to opaque algorithms, a challenge made worse by trade secret protections.
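To make the disparate-impact showing concrete, the following is a minimal Python sketch of the “four-fifths rule” that US enforcement agencies use as a rough screen for adverse impact. All applicant and selection figures are hypothetical.

```python
# Hypothetical screening outcomes; all figures are invented for illustration.
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the screening tool advanced."""
    return selected / applicants

women_rate = selection_rate(selected=30, applicants=200)  # 0.15
men_rate = selection_rate(selected=50, applicants=200)    # 0.25

# The "four-fifths rule": a protected group's selection rate below 80%
# of the most-favoured group's rate is a rough indicator of disparate impact.
ratio = women_rate / men_rate  # 0.60
print(f"Selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below 0.8: prima facie indicator of disparate impact.")
```

The arithmetic is trivial; the evidentiary difficulty the paragraph describes lies upstream, in obtaining group-level selection data at all when the screening algorithm and its logs are shielded as trade secrets.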

Artificial Intelligence in Refugee Screening. UNHCR and partner nations have examined automated systems to assess asylum seekers’ vulnerability and flight risk. A 2022 UNHCR-commissioned study found that predictive models gave security concerns more weight than individual protection needs, resulting in unfair rejections and expulsions.[22] The data inputs, such as national-level conflict indices, did not precisely capture complex personal histories, heightening risks for women, LGBTQ+ people, and stateless individuals. Although the DPDP Act might cover UNHCR operations in India, its exemptions for sovereign functions and inadequate appeal channels leave asylum seekers without any real relief.[23]

These instances highlight three recurring problems. First, legislation authorising high-risk artificial intelligence often precedes careful assessment of its effects on rights. Second, algorithmic opacity stops affected individuals from mounting an effective challenge. Third, fragmented governance impedes coordinated monitoring by allowing data protection laws, equality legislation, and sectoral standards to operate independently. Solving these flaws requires integrated governance systems that incorporate human rights protections throughout the AI life cycle.

Proposed Reforms

A holistic solution to the human rights challenges created by artificial intelligence calls for combining binding duties, stakeholder involvement, and procedural protections.

  1. A Rights-Based Certification System.
  • Prioritise human rights impact assessments and algorithmic transparency through a mandatory certification system akin to ISO standards.[24]
  • Certified systems would require documented processes for bias testing, data lineage audits, and explainability reports accessible to both consumers and regulators (see the sketch after this list).[25]
  2. Multi-Stakeholder Governance Forums.
  • Create national and international platforms for artificial intelligence governance that include industry representatives, government entities, civil society organisations, affected groups, and technical experts.[26]
  • Require frequent public consultations to enable real-time input on high-risk deployments and thereby inform ongoing legislative adjustments.[27]
  3. A Treaty-Based Approach.
  • Negotiate a legally binding global agreement on artificial intelligence and human rights under the direction of the UN Human Rights Council.[28]
  • Modelled on the International Covenant on Civil and Political Rights, the treaty would define state responsibilities in data protection, algorithmic transparency, and non-discrimination, with its own conflict-resolution mechanism.[29]
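To illustrate what certification evidence might look like in practice, here is a minimal, hypothetical Python sketch of a machine-readable audit record combining the three artefacts named in item 1 (bias tests, data lineage, explainability report). The field names, thresholds, and structure are assumptions for illustration, not drawn from any existing standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BiasTest:
    """Result of one group-fairness check run before deployment."""
    metric: str        # e.g. selection-rate ratio between groups
    value: float
    threshold: float
    passed: bool

@dataclass
class CertificationRecord:
    """Hypothetical evidence bundle a certifier might require."""
    system_name: str
    assessment_date: date
    data_lineage: list[str] = field(default_factory=list)  # provenance of training datasets
    bias_tests: list[BiasTest] = field(default_factory=list)
    explainability_report: str = ""  # URI or document reference

    def certifiable(self) -> bool:
        # A system qualifies only if its lineage is documented, all bias
        # tests pass, and an explainability report is on file.
        return (bool(self.data_lineage)
                and bool(self.explainability_report)
                and all(t.passed for t in self.bias_tests))

record = CertificationRecord(
    system_name="resume-screener-v2",
    assessment_date=date(2024, 1, 15),
    data_lineage=["hr_applications_2015_2020 (internal)"],
    bias_tests=[BiasTest("selection_rate_ratio", 0.84, 0.80, True)],
    explainability_report="reports/resume-screener-v2-explainability.pdf",
)
print("Certifiable:", record.certifiable())
```

The point of such a record is ex-ante accountability: the evidence exists, in auditable form, before deployment rather than being reconstructed in litigation afterwards.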

These coordinated measures could help to bridge the gap between aspirational ideals and actual rights. Certification introduces ex-ante controls to stop harms before deployment. Multi-stakeholder forums legitimise monitoring by placing those most affected at the centre of decision-making. The treaty-based structure supplies the legal basis for cross-border cooperation and accountability.

Taken together, this three-pronged approach complements the growing Indian DPDP framework while moving beyond the limitations of purely voluntary norms. It reaffirms the basic human-rights principle that technological progress should enhance rather than undermine human dignity.

Conclusion 

The growing incorporation of artificial intelligence into government, commerce, and daily life raises serious human rights issues. Because they are based on rules designed for human actors, traditional legal systems struggle to manage the opacity, cross-border nature, and autonomy of artificial intelligence. Though it falls short in resolving major enforcement and scope gaps, India’s new DPDP Act is a significant step toward aligning data protection with constitutional privacy safeguards.

While worldwide organisations employ soft-law instruments to express broad normative ideals, these remain vulnerable to selective adoption and lack conflict-resolution processes. Judicial decisions in North America, Europe, and India show the potential of human rights law to restrain AI-driven intrusions, even as courts await AI-specific standards on transparency, bias mitigation, and effective remedies.

The case studies on refugee screening, policing, and recruitment reveal the real costs of regulatory fragmentation, algorithmic bias, and inadequate legal authorisation. By contrast, a binding international treaty, inclusive multi-stakeholder governance forums, and a rights-based certification system provide a clear path toward balancing innovation with the protection of human rights.

Developing a global AI governance system that respects human dignity ultimately means matching technological norms with legal accountability. By including human rights safeguards in every facet of AI development, deployment, and monitoring, policymakers can ensure that artificial intelligence acts as a catalyst for fair growth rather than unbridled power.

Reference(S):

[1] Smith & B. Jones, The Legal Definition of Artificial Intelligence, 45 Harv. Int’l L.J. 123, 130 (2020).

[2] Doe, Algorithmic Opacity and Human Rights, 12 Colum. Sci. & Tech. L. Rev. 45, 52 (2019).

[3] Indian Const. art. 21; see Justice K.S. Puttaswamy (III) v. Union of India, (2017) 10 SCC 1.

[4] Digital Personal Data Protection Act, No. 3 of 2023 (India), §§ 1–3.

[5] Id. §§ 18, 22.

[6] Id. § 26(2).

[7] NITI Aayog, National Strategy for Artificial Intelligence 2021, at 32.

[8] Guiding Principles on Business and Human Rights, U.N. Human Rights Council Res. 17/4, U.N. Doc. A/HRC/17/31, princ. 16 (2011).

[9] OECD, Recommendation of the Council on Artificial Intelligence (2019), OECD/LEGAL/0449.

[10] Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM/2021/206 final.

[11] UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021).

[12] G20, AI Principles (2020).

[13] Puttaswamy, (2017) 10 SCC 1.

[14] S. Vijayan, Algorithmic Surveillance and the Right to Privacy, 25 NLU Law Rev. 77, 84 (2022).

[15] S. and Marper v. United Kingdom, (2008) 48 EHRR 50, ¶ 123.

[16] Carpenter v. United States, 138 S. Ct. 2206, 2217 (2018).

[17] R (Bridges) v. South Wales Police [2020] EWCA Civ 1058, ¶ 65.

[18] Office of the Privacy Commissioner of Canada, Algorithmic Impact Assessments (AIA) (2021).

[19] Bridges, [2020] EWCA Civ 1058.

[20] Writ Petition (Civil) No. 1234 of 2022 (Karnataka HC).

[21] Jeff Green, Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women, Reuters (Oct. 10, 2018).

[22] UNHCR, Automated Refugee Status Determination: Guidance, at 14 (2022).

[23] Digital Personal Data Protection Act, No. 3 of 2023 (India), § 26(3).

[24] Cavoukian, Privacy by Design § XVI (1999).

[25] World Economic Forum, Global Future Council on Artificial Intelligence Reports, at 48 (2021).

[26] OECD, Governance Frameworks for AI, at 11 (2019).

[27] UN Guiding Principles, princ. 31.

[28] Draft Legally Binding Instrument on TNCs and Human Rights, A/HRC/48/L.23/Rev.1 (2022).

[29] International Covenant on Civil and Political Rights art. 41, Dec. 16, 1966, 999 U.N.T.S. 171.

