
ALGORITHMIC BIAS AND LEGAL ACCOUNTABILITY IN INDIA: AN EMPIRICAL AND DOCTRINAL EXAMINATION OF DISCRIMINATION BY ARTIFICIAL INTELLIGENCE SYSTEMS

Authored By: Taranjeet Kaur

Himachal Pradesh National Law University, Shimla

ABSTRACT

In India, Artificial Intelligence (AI)-based systems are increasingly incorporated into decision-making in areas considered vital, such as governance, financial services, employment, welfare provision, and law enforcement. Although automation promises efficiency, speed, and uniformity in administrative and commercial decisions, emerging evidence shows that algorithmic systems often reflect, cement, and even exacerbate existing social orders and structural inequalities. This phenomenon, known as algorithmic bias, carries severe legal, ethical, and constitutional implications, especially in a country as socio-economically diverse as India, where inequity on the grounds of caste, gender, class, religion, and digital access remains deeply ingrained.

Biased algorithms typically stem from biased datasets, flawed model design, and the neglect of contextual socio-legal factors, producing discriminatory results that fall disproportionately on disadvantaged populations. These concerns are compounded by the nature of automated decision-making systems, which tend to lack transparency and accountability and rarely allow affected individuals to seek viable remedies. These difficulties directly implicate fundamental rights under the Indian Constitution: the right to equality under Article 14, the prohibition of discrimination under Article 15, and the right to life and personal liberty under Article 21.

The paper takes a doctrinal and empirical stance to analyse how algorithmic bias manifests in the Indian context. It closely examines the sufficiency of current law, including constitutional jurisprudence, data protection standards, and industry-specific rules, in combating the harms of discriminatory AI systems. The paper also outlines the accountability loophole that exists due to the opacity of algorithmic procedures and the narrowness of institutional regulation. Drawing on comparative legal models and transparency standards and principles, the paper offers a regulatory roadmap grounded in constitutional values, transparency requirements, and data governance principles. The study ultimately proposes a rights-based model of AI regulation that balances technological innovation with social justice, fairness, and democratic accountability.

KEYWORDS: Algorithmic Bias, Artificial Intelligence and Law, Constitutional Rights, Automated Decision-Making, Data Protection and Accountability.

INTRODUCTION

The rapid digitalization of India has significantly accelerated the integration of Artificial Intelligence (AI) into governmental and private-sector decision-making. The rise of AI in fintech services, automated recruitment, credit scoring, predictive policing, and consumer profiling by the private sector has supplemented state-led initiatives to encourage digital governance, including biometric identification systems, automated welfare delivery mechanisms, and data-driven public administration. These technologies are increasingly mediators of access to fundamental rights, societal benefits, financial resources, and job opportunities. Human discretion is now often replaced with an algorithmic system deciding which loan application to approve, which welfare benefits to grant, and which job applicant to shortlist.

Though they appear efficient and objective, AI systems are not value-neutral instruments. The historical data fed to algorithms tends to reinforce the social hierarchies and systemic inequalities common within Indian society, specifically along caste, gender, regional, linguistic, disability, and socio-economic lines. When these biased datasets are run through opaque and complicated computational models, the resulting outputs are likely to repeat, exaggerate, or even lend credibility to existing dynamics of exclusion. The discriminatory effects of these systems are frequently indirect, subtle, and hard to detect, and therefore particularly oppressive to already marginalized groups.

This problem is hardened by the fact that most algorithmic systems are opaque and proprietary. In contrast to traditional forms of discrimination, algorithmic decisions are typically opaque and automated, not traceable to specific participants, and thus render responsibility invisible. Persons affected by unfavourable algorithmic decisions are usually deprived of meaningful explanations, effective remedies, or redress. The technical sophistication of AI systems, along with claimed trade secrets and intellectual property protections, further shields decision-makers.

This paper contends that algorithmic bias is a structural and systemic legal issue that India's current legal frameworks are not prepared to handle. Existing anti-discrimination laws, constitutional remedies, and data protection measures rest heavily on human agency and deliberate wrongdoing, which does not fully capture the nature of algorithmic harm. With the ongoing growth of AI-based governance and market activity, the lack of clear standards of transparency, accountability, and equity is likely to entrench digital inequalities in the name of technological neutrality. The paper thus analyses the shortcomings of Indian law in addressing algorithmic bias and the pressing need for a rights-based legal framework capable of regulating AI systems in a constitutionally acceptable way.

RESEARCH OBJECTIVES

The following are the main objectives of this study:

  1. To study what algorithmic bias is and its causes in AI-powered decision-making systems in India.
  2. To examine the constitutional implications of algorithmic discrimination in Indian law.
  3. To determine the adequacy of current statutory frameworks in addressing algorithmic harm.
  4. To trace the gaps in accountability when assigning blame for biased AI decisions.
  5. To propose legal and regulatory reforms aimed at ensuring fairness, transparency and the safeguarding of rights.

RESEARCH QUESTIONS

The research questions that are being addressed in this article are:

  1. Which types of algorithmic bias are most common in AI systems used in India?
  2. How does algorithmic discrimination affect the fundamental rights guaranteed by the Indian Constitution?
  3. Do current Indian legal frameworks adequately address the harm resulting from biased AI systems?
  4. Who should be legally responsible for discriminatory algorithmic outcomes?
  5. Which regulatory mechanisms can be adapted to India to achieve responsible and ethical AI governance?

RESEARCH METHODOLOGY

The study takes a doctrinal and qualitative approach, backed by secondary empirical data.

  1. Doctrinal Analysis: Examination of the constitutional provisions, statutes, judicial decisions and other legal principles pertaining to discrimination, due process and accountability.
  2. Comparative Legal Analysis: Reference to international regulatory practice, specifically in the European Union and the United States, to identify best practices.
  3. Secondary Data Analysis: Synthesis of findings by government agencies, scholarly research and technology policy research organizations regarding AI implementation and bias.
  4. Analytical Reasoning: Critical analysis of gaps in the law and normative argument to suggest reforms.

The research does not rely on primary data collection because algorithmic systems are proprietary and little information on their deployment is accessible to the general public.

UNDERSTANDING ALGORITHMIC BIAS: SOURCES AND MANIFESTATIONS

  • Algorithmic bias refers to systematic and repeatable errors in automated decision-making that produce unequal, exclusionary or discriminatory outcomes for some individuals or groups. Algorithms may be presented as neutral and objective, but as a vast body of research shows, algorithmic systems are heavily shaped by the data they are trained on, the assumptions built into their design, and the social settings in which they are used. In societies with structural inequality, like India, algorithmic systems are prone to reproducing historical discrimination in new, seemingly automated and opaque forms.

 

  • Sources of Algorithmic Bias

Training Data Bias occurs when the historical data used to train algorithms embeds pre-existing social hierarchies and institutional discrimination. Caste-based exclusion and gender and regional inequalities significantly shape Indian datasets related to creditworthiness, employment history, education and criminal records. When algorithms learn from such data, they tend to replicate past patterns of discrimination, transforming social bias into an apparently objective computational result. This contravenes the constitutional promise of substantive equality in Article 14 of the Constitution of India, which demands not mere formal equality but equal protection against structural disadvantage.
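The mechanics can be illustrated with a minimal, purely hypothetical sketch: a naive scoring model that simply learns historical approval rates per group will faithfully reproduce a discriminatory past as an "objective" prediction. All data, group labels and thresholds below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical historical records: (group, hired). Group "B" was hired far
# less often for reasons unrelated to individual merit.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

def train(records):
    """Learn per-group hire rates from historical data."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.5):
    """Approve an applicant if their group's learned rate clears the threshold."""
    return model[group] >= threshold

model = train(history)
# The model now approves group A and rejects group B wholesale: the
# historical pattern, not individual merit, drives the outcome.
```

The point of the sketch is that no malicious intent is needed anywhere in the pipeline; faithfully "learning the data" is itself the discriminatory act.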

Sampling Bias arises when datasets fail to represent marginalized or digitally excluded populations. Massive segments of Indian society, especially rural dwellers, informal-sector workers, women, seniors, and persons with disabilities, remain underrepresented in digital databases because of low internet connectivity, low digital literacy, and the uneven digitization of records. Consequently, algorithmic systems trained predominantly on urban, digitally active populations exhibit higher error rates for these groups. This raises serious concerns under Article 15, which bans discrimination, including indirect and systemic discrimination that disproportionately burdens protected classes.

Proxy Variables also allow algorithms to discriminate by indirectly targeting protected characteristics. Algorithms often rely on so-called neutral indicators, like place of residence, language preferences, purchasing behaviour, or device usage. Such proxies are regularly associated with caste, religion or socio-economic status in the Indian context. For example, caste-based residential segregation may be reflected in geographic location, while language choice may denote regional or community identity. Even where the direct use of protected attributes is avoided, proxy-based decision-making allows discrimination to be carried out covertly, evading the usual legal scrutiny.
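The proxy problem can be seen in a tiny hypothetical sketch: even when the protected attribute is removed from the inputs, a rule keyed on residential pincode reproduces the same exclusion, because pincode correlates with group membership. The pincodes, groups and approval rule below are invented.

```python
# Invented illustrative data: pincode correlates perfectly with group.
applicants = [
    {"pincode": "110001", "group": "dominant"},
    {"pincode": "110001", "group": "dominant"},
    {"pincode": "110092", "group": "marginalised"},
    {"pincode": "110092", "group": "marginalised"},
]

# A "blind" rule derived from a biased history: pincode 110001 was
# historically approved, 110092 was not. The rule never sees `group`.
approved_pincodes = {"110001"}

def decide(applicant):
    """Decision based only on the facially neutral pincode."""
    return applicant["pincode"] in approved_pincodes

# Outcomes split exactly along group lines despite the attribute being hidden.
outcomes = {a["group"]: decide(a) for a in applicants}
```

Dropping the protected attribute is therefore no guarantee of non-discrimination; the correlation structure of the remaining inputs does the same work.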

Feedback Loops are a source of self-reinforcing, dynamic algorithmic bias. Biased decisions become progressively more embedded over time when the outputs of algorithms shape future data inputs. For example, when a system disproportionately labels certain communities as high-risk, more data is generated about those communities, which the algorithm then reads as confirming its initial bias. This is especially worrying in governance and law enforcement, where such targeting normalizes discriminatory results and makes them appear statistically justified.
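A feedback loop of this kind can be simulated in a few lines (all numbers hypothetical): two areas have identical true incident rates, but recorded incidents track patrol presence, and patrols follow the previous round's records, so a small initial gap in the records hardens into a large, self-confirming one.

```python
def step(records, true_rate=10):
    """One round: patrols follow records; records then track patrols."""
    # Send 70% of patrols to the area with more recorded incidents.
    hot = 0 if records[0] >= records[1] else 1
    shares = [3, 3]          # patrol shares out of 10
    shares[hot] = 7
    # Recorded incidents scale with patrol presence, not with true crime,
    # which is identical (true_rate) in both areas.
    return [true_rate * 2 * s // 10 for s in shares]

records = [11, 9]  # a small, possibly accidental initial imbalance
for _ in range(5):
    records = step(records)
# The 11-vs-9 gap has widened to 14-vs-6 and will now persist indefinitely,
# "statistically confirming" the initial labelling.
```

The design choice to illustrate is that the system never observes true crime at all; it only observes its own enforcement decisions, which is exactly the structure that makes the bias self-justifying.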

  • Algorithmic Bias in the Indian Context

In India, the most clearly documented instances of algorithmic bias have appeared in digital welfare delivery systems, particularly those associated with Aadhaar-based authentication. Empirical analysis has found extensive exclusion errors due to biometric discrepancies, network breakdowns, and inflexible data-validation criteria. These failures fall disproportionately on manual workers, the aged, persons with worn fingerprints, and those living in remote locations, resulting in deprivation of basic welfare entitlements like food rations and pensions.

In Justice K.S. Puttaswamy v. Union of India (2017), the Supreme Court recognized privacy and informational autonomy as part of Article 21, noting that the operation of technological systems must not exceed constitutional boundaries. Subsequently, in Puttaswamy (Aadhaar) v. Union of India (2018), the Court acknowledged the danger of exclusion where biometric authentication fails and emphasized that no one may be deprived of welfare entitlements because of a technological anomaly. These rulings underscore the constitutional requirement that the operation of automated systems must not override the inherent rights to dignity and livelihood.

On the same note, fintech lending and credit-scoring systems in India increasingly rely on alternative data such as mobile usage, transaction history, and even online behaviour. Although these systems promise financial inclusion, they tend to disadvantage those with small digital footprints, especially rural people, women and informal workers. Algorithmic credit refusal on the grounds of data unavailability rather than lack of financial capacity reinforces economic marginalization and raises concerns under Article 19(1)(g) (right to carry on trade or occupation) read with Article 21.

The mandate of fairness in state action expressed in E.P. Royappa v. State of Tamil Nadu (1974) and Maneka Gandhi v. Union of India (1978) stipulates that decision-making processes must be non-arbitrary, transparent and reasonable. The lack of algorithmic transparency, along with the inability of affected persons to question the results of automated processes, threatens this standard by shielding discriminatory outcomes from meaningful judicial or administrative review.

  • Legal Implications: Algorithmic bias is therefore more than a technical fault; it is a matter of constitutional and governance concern. Without transparency, explainability, and accountability protocols, algorithmic systems threaten to convert social prejudice into automated decision-making disguised as data-driven governance. In a plural, unequal society such as India, unregulated algorithmic systems can erode the constitutional principles of equality, dignity, and social justice.

SECTORAL IMPACT OF ALGORITHMIC BIAS IN INDIA

The integration of algorithmic decision-making systems into various industries in India has greatly changed the manner in which opportunities, risks, and state benefits are allocated. Although automation promises efficiency and objectivity, experience across these fields suggests that algorithms tend to generate systemic exclusion, especially for historically marginalized groups. Such bias has diverse effects across industries, but it consistently raises issues of equality, dignity, and accountability.

  • Employment and Recruitment

Corporations and state agencies are increasingly using AI-based recruitment tools to filter resumes, scan video interviews, judge speech patterns, and analyse behavioural cues. These systems are usually trained on past hiring records reflecting existing workforce structures that, in India, are heavily skewed towards urban, upper-caste, English-speaking, and formally educated communities. As a result, algorithmic hiring tools can come to favour applicants who match such dominant profiles.

It has been found that natural language processing tools can penalize resumes containing vernacular words, non-standard educational institutions, or work gaps, features that are disproportionately prevalent among candidates from marginalized groups. Facial and voice recognition also expose candidates to discrimination based on accent, skin colour or disability. These practices are constitutionally problematic under Articles 14 and 16 because algorithmic screening processes may indirectly deprive people of equal opportunity in state employment without any clear or reasonable classification.

  • Financial Services

Automated credit scoring and risk assessment instruments have been embraced rapidly by the Indian financial sector, especially in fintech-based lending models. These systems rely on non-conventional data, including mobile phone use, frequency of transactions, app usage, and web usage, to determine creditworthiness. Although they are meant to increase access to credit, empirical evidence indicates that such models tend to conflate economic vulnerability with increased default risk.

Those with little or no digital footprint, including rural inhabitants, women, informal-sector workers, and first-time borrowers, are often classified as high-risk or simply denied credit altogether. This kind of exclusion is especially objectionable in India, where access to formal credit is a determinant of economic mobility. Algorithmic denial of credit based on lack of data rather than financial capacity raises concerns under Article 19(1)(g) (right to practise any occupation, trade, or business) and Article 21, since financial marginalization has a direct effect on livelihood and economic dignity.
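The "thin file" problem described above can be sketched with hypothetical numbers: a footprint-based score in which every signal rewards having more digital data will reject a borrower with sound repayment capacity simply because little data exists about them. The weights, signals and threshold below are invented for illustration.

```python
def footprint_score(monthly_transactions, apps_installed, months_of_data):
    """Hypothetical score: every signal rewards having *more* digital data."""
    return (min(monthly_transactions, 50)      # capped digital-payment activity
            + min(apps_installed, 20) * 2      # capped app-usage signal
            + months_of_data)                  # length of observable history

APPROVAL_THRESHOLD = 60  # invented cut-off

# Two borrowers with identical repayment capacity, different data trails.
urban = footprint_score(monthly_transactions=40, apps_installed=15, months_of_data=24)
rural = footprint_score(monthly_transactions=2, apps_installed=1, months_of_data=3)

urban_approved = urban >= APPROVAL_THRESHOLD   # ample data  -> approved
rural_approved = rural >= APPROVAL_THRESHOLD   # scarce data -> rejected
```

Nothing in the score measures repayment capacity; it measures observability, which is precisely how data scarcity gets relabelled as risk.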

  • Predictive Policing

Predictive policing systems use past crime data to anticipate crime hotspots or identify potential offenders. In India, crime data collection and law enforcement practices are uneven across regions and communities, and tend to reflect socio-economic bias and discriminatory policing patterns. Regions with a greater police presence simply record more cases, whether or not more crime actually occurs there.

Trained on such data, predictive algorithms create a vicious cycle of over-policing already monitored communities, such as urban slums, minority neighbourhoods, and economically underprivileged areas. This kind of feedback strengthens stereotypes and exposes certain populations to excessive state surveillance. These practices raise serious constitutional issues under Articles 14 and 21, in light of the Supreme Court's concern for dignity, privacy, and proportionality in state action.

  • Welfare Administration

India's welfare delivery system increasingly relies on automated eligibility checks, computerized databases and biometric authentication systems. Although digitalization is intended to minimize leakages and enhance efficiency, various studies have reported instances of exclusion due to data inconsistency, biometric authentication failures and strict algorithmic thresholds.

Denial of food rations, pensions or medical benefits owing to technological errors disproportionately impacts elderly individuals, persons with disabilities, migrant workers and rural people. These exclusions directly implicate the right to food, the right to social security, and the right to live with dignity under Article 21, as well as the state's duty to ensure substantive access to welfare entitlements rather than mere procedural adherence.

CONSTITUTIONAL IMPLICATIONS

The growing use of algorithms in decision-making raises serious constitutional implications, especially where automated processes operate without transparency, accountability or redress.

  • Article 14 – Equality Before Law

Article 14 guarantees formal equality and also prohibits arbitrary and discriminatory state action. Algorithmic systems that yield disproportionate or exclusionary results which cannot be explained by an intelligible differentia or rational nexus fail the reasonable classification test. Even facially neutral algorithms can breach Article 14 where their effects fall disproportionately on certain groups, effectively depriving them of substantive equality.

  • Article 21 – Right to life and Dignity


The Supreme Court has consistently read Article 21 to include the right to live with dignity, livelihood, health and social security. These rights are directly impacted by the automated denial of employment opportunities, welfare benefits or access to basic services. The absence of human supervision, reasoned explanation, and individual assessment also breaches procedural fairness, a crucial element of Article 21.

  • Principles of Natural Justice

Opacity in algorithmic decision-making denies the structural pillars of natural justice, such as the right to be heard (audi alteram partem) and the right to a reasoned decision. The inability of people to interpret, dispute, or appeal algorithmic outcomes renders administrative decision-making arbitrary and places it beyond the scope of judicial oversight.

ACCOUNTABILITY GAP IN ALGORITHMIC GOVERNANCE

Attributing legal responsibility is among the most important difficulties in regulating algorithmic bias. Algorithmic decision-making disperses authority across a variety of actors, such as:

  • Developers and vendors of algorithms.
  • Data vendors and aggregators.
  • Deploying institutions and employers.
  • Government organizations and regulators.

This decentralization of responsibility creates an accountability crisis in which victims of algorithmic harm encounter major challenges in identifying the responsible parties and securing remedies. Indian law contains no explicit principles on fault, negligence, or strict liability in automated decision-making, creating a regulatory void that works against the people affected.

EVALUATION OF EXISTING LEGAL FRAMEWORKS

  •  Information Technology Act, 2000

The Information Technology Act is concerned mostly with cyber offences, electronic records and data security. It does not address algorithmic discrimination, automated decisions, or outcome-related harm, which limits its application in the context of algorithmic bias.

  • Consumer Protection Act, 2019

Although the Act recognizes unfair trade practices and consumer rights, it does not address the specific harms caused by opaque algorithms or automated systems. Its usefulness in situations involving AI-driven decisions is limited by the absence of explainability and accountability provisions, as well as the lack of algorithmic transparency.

  • Data Protection Regime

The Indian data protection regime focuses on consent, purpose limitation, and safeguards in data processing. Nevertheless, it responds insufficiently to flawed decision-making, as it pays more attention to the processes of collecting data than to the discriminatory outcomes generated by algorithms. This leaves a substantial gap in safeguarding people against algorithmic injustice.

COMPARATIVE LEGAL APPROACHES

The European Union's new AI regulatory regime adopts a risk-based strategy, which labels certain AI systems, mainly those applied in employment, credit rating, policing, and welfare, as high-risk. These systems are subject to stringent obligations, including transparency, human oversight, impact assessment and accountability mechanisms.

This strategy offers valuable insights for India, especially in regulating the application of AI in the public sector, where it directly touches fundamental rights. A localized version of this model would increase India's regulatory capacity without violating domestic constitutional values.

RECOMMENDATIONS AND LEGAL REFORMS

India should implement an all-encompassing and rights-centered AI governance model to conquer the problem of algorithmic bias. Key reforms should include:

  • Compulsory algorithmic impact assessments for high-risk AI systems.
  • Legal requirements of explainability and transparency.
  • Human-in-the-loop safeguards for critical decision-making.
  • Clear liability and responsibility rules for algorithmic harm.
  • Accessible and effective grievance redressal mechanisms.

Incorporating constitutional values into AI governance would ensure that technological innovation remains tied to democratic accountability and social justice.

CONCLUSION

Algorithmic bias is a systemic and fundamental danger to the promise of equality, dignity, and justice in an increasingly automated society. The more artificial intelligence systems are integrated into critical decision-making procedures, including employment, credit, policing, and welfare, the greater the threat of vast, systematic discrimination that goes unnoticed. In contrast with older sources of bias, algorithmic discrimination is harder to detect, dispute, and remedy because it lies behind veils of technicality and institutional obscurity.

India's legal environment, grounded in the constitutional guarantees of equality and due process, must evolve to recognize discrimination by automated systems as a legally cognizable harm. The lack of clear legal principles on algorithmic accountability leaves a legislative gap in which victims are often deprived of meaningful remedies. Integrating constitutional values into AI governance is thus not only a policy decision but a constitutional requirement.

An appropriately constructed regulatory framework, consisting of rights-based regulation focused on transparency, human control, explainability, and responsibility, can ensure that technological innovation remains compatible with democracy. As India moves towards digitally mediated governance, law must serve as the safeguard against efficiency and automation extinguishing basic rights. Only through proactive legal change can artificial intelligence serve as a tool of inclusion rather than exclusion.

REFERENCE(S):

Books

  • Ian Brown et al., Regulating Artificial Intelligence: Ethics, Governance and Law (Oxford University Press 2021).
  • Mireille Hildebrandt, Smart Technologies and the End(s) of Law (Edward Elgar Publishing 2015).
  • Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015).
  • Gary E. Marchant, Braden Allenby and Joseph Herkert, The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight (Springer 2011).

Journal Articles

  • Solon Barocas and Andrew D. Selbst, Big Data, Disparate Impact (2016) 104 California Law Review 671.
  • Sandra Wachter, Brent Mittelstadt and Chris Russell, Discrimination in the Age of Algorithms (2021) 10 Journal of Information, Communication and Ethics in Society 1.
  • Pauline T. Kim, Data-Driven Discrimination at Work (2017) 58 William and Mary Law Review 857.
  • Anupam Chander, The Racist Algorithm? (2017) 115 Michigan Law Review 1023.

Sources of Indian Constitutional/Legal Law

  • Constitution of India, arts 14, 19 and 21.
  • Maneka Gandhi v Union of India (1978) 1 SCC 248.
  • Justice K.S. Puttaswamy (Retd.) v Union of India (2017) 10 SCC 1.
  • Information Technology Act 2000.
  • Consumer Protection Act 2019.
Government and Institutional Reports

  • NITI Aayog, National Strategy for Artificial Intelligence: #AIForAll (Government of India, 2018).
  • Law Commission of India, Consultation Paper on Reform of Family Law (2018) (principles of substantive equality).

Technology Policy Research Reports

  • World Economic Forum, Global AI Governance: A Primer (2020).
  • Access Now, Human Rights in the Age of Artificial Intelligence (2018).
  • AI Now Institute, Discriminating Systems: Gender, Race and Power in AI (2019).
  • Harvard Kennedy School, Algorithmic Accountability Policy Toolkit (2021).
  • Centre for Internet and Society (India), AI and Discrimination in India (Research Briefs).
