
THE CONSTITUTIONAL LIMITS OF ARTIFICIAL INTELLIGENCE: ARTICLE 14, PRIVACY AND COMPETITION IN INDIA

Authored By: Yoga Lakshmi

NUSRL, Ranchi

INTRODUCTION

Have you ever wondered why social media platforms like Instagram or YouTube show you certain posts but hide others? Why did the algorithm choose one post over another, and most importantly, how did it pick from such a vast pool of options?

Often, no one can fully answer these questions, not even the developers. This opacity is what the term 'black box' refers to, and it is a concern with almost every algorithm incorporating Artificial Intelligence (AI). These automated decisions affect rights such as those under Article 14 of the Indian Constitution, which guarantees equality before the law and equal protection of the laws to all citizens. Many companies use AI-driven tools to shortlist candidates during recruitment, while fintech apps use them to assess creditworthiness and approve loans. Ride-hailing and similar platforms can also produce 'parallel pricing', where algorithms set matching prices without the firms ever coordinating with each other, a form of algorithmic collusion that the Competition Act, 2002 currently overlooks. Together, these practices reduce transparency, accountability, and oversight of fairness and compliance.
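To make the bias concern concrete, here is a deliberately simplified Python sketch. All names and numbers are illustrative assumptions, not any real firm's system: a screening rule 'trained' on historically skewed hiring data simply reproduces that skew, with no discriminatory intent anywhere in the code.

```python
# Hypothetical sketch: how an AI screening tool can inherit bias from
# skewed historical data. The regions and rates below are invented.

# Historical hiring records: (applicant_region, was_hired).
# Urban applicants were historically favoured, so the data is skewed.
history = [("urban", True)] * 80 + [("urban", False)] * 20 \
        + [("rural", True)] * 20 + [("rural", False)] * 80

def learn_rule(records):
    """'Train' a naive screening rule: approve a group if its historical
    hire rate exceeds 50%. Real models are far more complex, but the
    mechanism is the same: patterns in, patterns out."""
    rates = {}
    for region, hired in records:
        total, yes = rates.get(region, (0, 0))
        rates[region] = (total + 1, yes + hired)
    return {r: yes / total > 0.5 for r, (total, yes) in rates.items()}

rule = learn_rule(history)
print(rule)  # {'urban': True, 'rural': False}
```

The rule ends up rejecting rural applicants wholesale because the past did, which is precisely the kind of arbitrariness Article 14 is meant to catch, yet nothing in the code names a protected group.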

Research Questions

  1. Do traditional Article 14 tests (reasonable classification, non-arbitrariness) adequately apply to algorithmic decision-making?
  2. Should India adopt a “right to explanation” (like EU’s GDPR) to strengthen equality safeguards?
  3. What liability framework should exist for discriminatory AI outcomes?

Objectives 

  1. To examine whether Article 14's principles of equality before the law and non-arbitrariness apply to algorithmic decision-making. 
  2. To identify weaknesses in India's existing legal and regulatory frameworks (e.g., the DPDP Act, 2023, and the absence of AI-specific equality protections). 
  3. To examine the liability gaps in assigning responsibility for discriminatory AI outcomes. 

This paper traces AI's role in reshaping Indian law through these three goals: reading Article 14 against AI algorithms, uncovering the regulatory gaps in the DPDP Act, and identifying who should bear responsibility for biased AI outcomes. It also raises related questions: whether traditional equality tests can be applied to algorithms, whether India needs a "right to explanation" as in Europe, and what an appropriate liability-sharing model would look like. The discussion begins with the constitutional problems, flagging bias and rights through judicial cases, moves to competition-law problems such as algorithmic bias and covert price fixing, then examines how the DPDP Act attempts to regulate data collection, and closes with proposed solutions and improved rules. 

DISCUSSIONS

Legal Framework and Current Law

Indian legislation is lagging behind the swift advance of AI, relying on the Indian Constitution, the two-decade-old Competition Act, 2002, and the new DPDP Act. Central to the Constitution is Article 14, which demands equal treatment and non-arbitrary decisions by the State; any classification must be reasonable and tied to a real objective, as held in Maneka Gandhi v. Union of India (1978) [1]. Article 21 [2] treats privacy as an element of life and personal liberty, as recognized in K.S. Puttaswamy v. Union of India (2017) [3]. This operates as a check on AI systems gathering personal data without reasonable cause. These provisions are crucial because they compel any government body or business that deploys AI-powered tools to demonstrate that its processes are just and reasonable.

On the competition side, the Competition Act, 2002 [4] empowers the Competition Commission of India (CCI) to act against dominant players who abuse their position under Section 4 of the Act, or who form secret cartels under Section 3. The problem is that the Act was drafted long before AI entered the picture, with its data empires and algorithms that somehow "learn" to fix prices without any human intervention. Companies operating digital platforms depend heavily on AI to retain consumers, with algorithms suggesting products based on user preferences. The loophole lies in how these systems filter and rank: was it your last order, or something else entirely? There is no clear answer.
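The worry about algorithms that "learn" to fix prices can be illustrated with a toy sketch. Assuming two hypothetical sellers each run the same simple repricing rule and never exchange a word, their prices still drift upward to a common ceiling; the rule, the cap, and the starting prices are all invented for illustration.

```python
# Hypothetical sketch of "parallel pricing": two independent pricing
# bots that never communicate, yet converge on the same high price.
# Not any real firm's code; a minimal illustration of tacit collusion.

CAP = 100.0  # a hypothetical ceiling consumers will tolerate

def reprice(mine, rival):
    """Each bot's rule: if the rival is at or above my price, nudge
    upward (capped); otherwise match the rival. No cartel meeting needed."""
    if rival >= mine:
        return min(mine + 5, CAP)
    return rival

a, b = 60.0, 70.0  # independent starting prices
for _ in range(20):
    a = reprice(a, rival=b)
    b = reprice(b, rival=a)

print(a, b)  # 100.0 100.0
```

Each bot only ever observes the rival's public price, which is exactly why Section 3's requirement of an "agreement" struggles to reach this behaviour.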

Next is the Digital Personal Data Protection Act, 2023 [5]. It was enacted to protect personal data and privacy: it demands clear consent for using personal data, limits collection to what is needed, and imposes impact assessments and audits on "Significant Data Fiduciaries", which are usually large AI companies. The draft rules of 2025 propose checks on fully automated decisions that infringe rights, but they miss the big issues, such as built-in bias fixes or a real "right to explanation". These gaps reduce transparency because the most pressing concerns fall outside the framework. The Ministry of Electronics and Information Technology (MeitY) [6] has issued ethical guidelines emphasizing transparency, accountability, and fairness, but these are aspirational and voluntary, lacking binding force. The result is uneven compliance, with some organizations adopting best practices and others ignoring them entirely. All in all, India is left without a properly structured framework, only scattered pieces of law in a patchwork. What is needed is a single AI law to tie it all together.

These pieces overlap in messy ways: the DPDP Act watches the data going in, the CCI eyes market outcomes, and the Constitution guards the individual at the end. But there are holes everywhere. Nothing bans high-risk AI such as real-time facial recognition in public, and audits exist mostly on paper. It is like building a dam with buckets.

Case law analysis

Although AI is still a new concept, the Indian judiciary has been shaping its use wherever the two intersect. In a landmark move in 2025, the Kerala High Court issued its 'Policy Regarding Use of Artificial Intelligence Tools in District Judiciary' to enhance privacy and transparency in the judicial use of such tools, given their increasing availability and access. The court observed that AI drafting of petitions could violate Article 21's [7] promise of a fair hearing, noting that machines cannot grasp human nuance and empathy and therefore risk arbitrary outcomes that mock Article 14. Recently, the Delhi High Court also curbed extreme uses of AI after advocates repeatedly cited non-existent, fictional cases. In a suo moto order, it clarified that tools like SUPACE (the Supreme Court's AI research assistant) are designed for data analysis, legal research, and organizing case records, that is, for administrative and preparatory work rather than making final decisions.

Switching to competition turf, the CCI has been chasing algorithms. Online marketplaces such as Amazon, accused of using AI for self-preferencing and promoting their own sellers through opaque recommendations, faced allegations of abuse of dominance under Section 4 of the Act. Flipkart and Amazon were raided following allegations of predatory pricing through dynamic AI algorithms that undercut competitors overnight. Then there is the airline cartel matter, where the CCI looked for indications of parallel pricing: bots converging on the same fares without any discussion or human involvement, colluding algorithmically. It imposed fines on the airlines and continues to pursue penalties for anti-competitive behaviour.

Critical Evaluation and Analysis

We begin with Article 14, the Constitution's equality provision, which forbids arbitrary or unequal treatment. It was built for human choices: you can fault a policy or a court decision for faulty logic, or spot arbitrariness, as in Maneka Gandhi v. Union of India (1978) [1]. When AI makes the decision, this breaks down. These tools are trained on decades of uneven data and so "learn", for example, that bank loans should bypass rural applicants or certain communities, simply because history says so. No wicked villain is required, only patterns buried deep in the data. Courts demand that a classification be explicitly connected to a legitimate goal (the nexus test), but with AI the "why" is concealed inside the algorithm. 

The DPDP Act, 2023 takes privacy a step further. It compels firms to seek permission before collecting your information, to gather only what is essential, and it imposes checks on AI companies in the form of substantial monetary penalties. But it says nothing about equality or bias, and there is no right to an explanation like the one under Article 22 of the EU's GDPR [8], where, if an algorithm rejects your loan or job application, the decision-maker must explain why, so that you can push back. In India, the draft 2025 rules gesture at human oversight of automated decision-making, but in practice hiring bots operate with no real control. 
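What a "right to explanation" could require in practice can be sketched as follows. This is a hypothetical illustration, with invented thresholds and field names: the point is that an automated decision carries human-readable reasons alongside the outcome, so the affected person has something concrete to contest.

```python
# Hypothetical sketch of an explainable automated decision.
# Thresholds and reason strings are illustrative assumptions only.

def assess_loan(income, credit_score):
    """Return (decision, reasons) instead of a bare yes/no, so a
    rejected applicant can see, and challenge, the grounds."""
    reasons = []
    if income < 25000:
        reasons.append("income below 25,000 threshold")
    if credit_score < 650:
        reasons.append("credit score below 650 threshold")
    return ("rejected", reasons) if reasons else ("approved", reasons)

decision, why = assess_loan(income=20000, credit_score=700)
print(decision, why)  # rejected ['income below 25,000 threshold']
```

A statutory right to explanation would, in effect, oblige deployers to surface something like the `reasons` list; a bare black-box verdict gives the applicant nothing to push back against.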

CONCLUSION

The primary conclusions of this paper reveal a gap-filled framework: Article 14's equality tests are overstretched by opaque algorithms, the CCI's instruments lag behind self-learning collusion, and the DPDP Act, 2023 secures consent but does not address bias or explanations. The case law, from the SUPACE guidance to the pricing investigations, shows courts demanding human control but lacking enforcement capability: from blocking AI "judges" to probing Amazon's tactics, judges clearly want humans in the judicial role, yet they have no real hammer to strike with. In short, rules drafted decades ago do not fit new technology, particularly in a country as diverse as ours, where prejudice takes a significant toll.

 Recommended reforms –

  • Add a "right to explanation" to the DPDP rules for automated decisions that appear to infringe a person's rights, to reduce bias and enhance transparency. 
  • Give the CCI authority to block unlawful AI-driven mergers and to demand algorithmic audits. 
  • Integrate all necessary arms of government to strengthen this sector, turning MeitY's soft recommendations into binding rules. 

 In so doing, AI can be addressed and safely integrated into society. The fix is to stop ignoring the rules, rather than blunder into handing them over to the tech giants of the future.

Reference(S):

[1] Maneka Gandhi v. Union of India, (1978) 1 SCC 248 (India).

[2] Constitution of India, art. 21.

[3] K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).

[4] The Competition Act, 2002 (India).

[5] The Digital Personal Data Protection Act, 2023 (India).

[6] "National Strategy for Artificial Intelligence: Responsible AI for All" (2021).

[7] Constitution of India, art. 21.

[8] Regulation (EU) 2016/679 (General Data Protection Regulation), art. 22.

 
