Authored By: Tuhiena Malik
Faculty of Law, University of Delhi
Abstract
As Indian companies move from digital transformation to AI-first operational models, the legal shelter of the “Black Box” is disappearing. With the introduction of the India AI Governance Guidelines in November 2025 and the full enforcement of the Digital Personal Data Protection (DPDP) Act, 2023, corporate boards face new fiduciary risks. This article traces the shift from treating AI as a technical tool to treating it as a governance requirement. Examining Section 166 of the Companies Act, 2013, and recent judicial responses to algorithmic errors, I argue that directors now bear a non-delegable duty of AI stewardship. To reduce personal liability, boards must adopt a structured “Accountability-by-Design” framework.
Introduction
For the past decade, corporate boards often treated Artificial Intelligence (AI) as a black box. In law, a black box refers to any system, algorithm, or decision-making process whose inputs and outputs are known, but whose inner workings, how the system actually reached its conclusion, are hidden, proprietary, or too complex to understand. In simpler terms, it is a mystery machine: data goes in and results come out, but how those results were reached remains unknown, buried in jargon and code that only experts can understand.
This complexity seemed too technical for most directors. Legally, it allowed them to invoke the “Business Judgment Rule”, a doctrine that protects directors from personal liability by framing errors in AI output as unforeseeable technical glitches rather than failures of oversight.
However, the year 2025 marked a significant change in Indian law. In the notable case of KMG Wires v. NFAC Delhi, the Bombay High Court expressed serious concern when an Assessing Officer relied on AI-generated legal citations that did not exist.
Soon after, the Supreme Court, in Deepak Raheja v. Omkara Assets, dealt with a case where a litigant’s response, created with a Generative AI tool, included numerous fabricated precedents.
These cases effectively ended the idea of “Technological Exceptionalism”: the assumption that new technologies, especially digital and AI tools, were so revolutionary and superior that they stood outside existing regulations and societal norms. This belief, rooted in cyber-utopianism, held that technology offered unmatched solutions to complex social problems while old regulations merely stifled innovation.
If a machine’s “hallucination” can cause embarrassment in court, it can certainly lead to disclosure violations, biased hiring, or significant financial errors in the boardroom.
For corporate lawyers, the question is no longer whether a board is liable for AI, but how the law defines the “Standard of Care” in an age of autonomy.
The standard of care is a concept in tort law that sets the level of caution, skill, and diligence a reasonable person or professional would exercise in similar circumstances to avoid foreseeable harm. It is the measure of negligence: failing to meet this standard constitutes a breach of duty.
The Fiduciary Framework: Section 166 and Due Diligence 2.0
In India, the foundation of director liability is Section 166 of the Companies Act, 2013. Specifically, Section 166(3) requires directors to perform their duties with “due care, skill, and diligence.”
Traditionally, “diligence” meant that directors needed financial literacy. By 2026, the requirement has expanded to include AI literacy. The Supreme Court’s earlier ruling in Official Liquidator v. P.A. Tendolkar held that directors must act with vigilance suited to the circumstances; that vigilance now extends to the selection and monitoring of algorithms.
If a board approves a high-frequency trading AI or a credit-scoring model without investigating its “Explainability,” it risks claims of “Informed Negligence.”
The Non-Delegability of Oversight
Directors might argue that they delegated AI implementation to experts. However, the duty under Section 166(3) is non-delegable. Just as a director cannot blame an auditor for a failure they should have discovered, they cannot blame a vendor for an AI bias that leads to multi-crore fines under the Consumer Protection Act, 2019. The emerging standard is that a director must understand the tool’s limitations, even without knowing the code.
The DPDP Act 2023: AI as a Significant Data Fiduciary Risk
While the Companies Act governs the relationship between directors and their company, the Digital Personal Data Protection (DPDP) Act, 2023, regulates the company’s relationship with the outside world. Compliance now requires boards to manage both relationships simultaneously.
Under the Act, most large Indian corporations are classified as Significant Data Fiduciaries (SDFs). An SDF is an entity identified by the Government of India under the DPDP Act, based on factors like processing large volumes of sensitive data or posing risks to data principals. SDFs must meet stricter compliance standards, such as hiring a Data Protection Officer (DPO) based in India, conducting Data Protection Impact Assessments (DPIAs), and performing independent audits.
Most AI models rely on “data scraping”, the automated extraction of large volumes of unstructured data from the internet. The DPDP Act, however, enforces strict “Purpose Limitation”: data collected for customer support cannot be reused to train a predictive marketing AI without fresh consent.
The DPDP Act also gives rise to what may be called an “Algorithmic Right to be Forgotten”. One of the challenges facing social media companies in 2026 is the “unlearning” problem. If a Data Principal (user) exercises the “Right to Erasure” under the DPDP Act, the company faces a legal dilemma where its central neural network has already ingested the user’s data. Boards that have not implemented “Machine Unlearning” protocols may soon be seen as failing their statutory duty to maintain an “operationally effective” compliance system.
If a board oversees a “data-hungry” AI strategy that breaches these rules, penalties (up to ₹250 crore per instance) can trigger derivative lawsuits by shareholders against directors for failing to maintain internal controls under Section 134(5)(f). That provision requires the Directors’ Responsibility Statement to confirm that directors have devised proper systems to ensure compliance with all applicable laws and that those systems are adequate and operating effectively.
The 2025 guidelines now explicitly connect AI governance to DPIAs. Boards are expected to review bias audit reports as part of their quarterly compliance cycle. Failing to demonstrate that the board has examined the AI’s data lineage is increasingly viewed as a breach of statutory duty.
The India AI Governance Guidelines (Nov 2025): The Graded Liability System
The Ministry of Electronics and Information Technology (MeitY) released the official AI Governance Guidelines in late 2025. This framework departs from the EU’s top-down regulation in favor of a techno-legal approach built on seven principles, the most important of which, for corporate lawyers, is Accountability. The guidelines introduce a “Graded Liability System”: where a company deploys AI for high-risk tasks, such as managing critical infrastructure, medical diagnostics, or financial lending, the board’s duty of oversight becomes correspondingly more stringent.
For these high-risk applications, the guidelines suggest that the Business Judgment Rule’s protections should be thinner, leaning toward a standard of strict liability.
How can a board protect itself from personal liability? The answer lies in establishing “Human-in-the-Loop” (HITL) systems overseen by a dedicated AI Governance Committee (AIGC). Just as the Audit Committee manages financial risk, the AIGC should manage algorithmic risk. Its responsibilities include Algorithmic Audits: ensuring models are tested for caste, gender, and regional bias, a specific requirement under the 2025 Indian Guidelines.
The AIGC also maintains a Model Inventory recording every AI tool used within the firm, including “Shadow AI” (unauthorized tools adopted by employees), and establishes an Incident Response protocol, including a “Kill Switch” for autonomous systems that deviate from expected parameters.
In future court cases, a director’s best defense will be the “Paper Trail of Logic.” By insisting on Explainability, where the machine explains its reasoning, directors can show they exercised “Independent Judgment” as required by Section 166.
For modern corporate law firms, AI liability is no longer a niche tech issue; it is a core corporate risk. A company with sound AI governance is not only compliant; it is more valuable.
Mergers & Acquisitions in the Age of Autonomy
In corporate law, artificial intelligence (AI) has transformed the Due Diligence process. By 2026, AI due diligence has become as customary as tax or intellectual property audits. Associates are now routinely expected to identify the following red flags during a transaction.
- Data Lineage Gaps: Does the target company have clear ownership and consent for its training data?
- Liability Indemnities: Do the target’s SaaS agreements with AI vendors contain indemnities against algorithmic failures?
- Compliance Deficit: Has the target company completed the mandated Data Protection Impact Assessments (DPIAs) required for SDFs?
The profession’s focus must therefore shift from the technology itself to AI liability in the boardroom, so that legal professionals can guide Indian businesses confidently and lawfully into the Age of Autonomy.
Conclusion: Governance is the enabler of innovation
For years, the “black box” nature of artificial intelligence systems, which produce results that cannot be fully explained, has been common parlance. Historically, that opacity was at times invoked to justify lessened accountability for the operators of such systems. In emerging Indian law, however, the doctrine has steadily lost its grip: courts and regulators have set the precedent that technological complexity is no exception to compliance.
On the contrary, the more complex and consequential the system, the more it demands oversight, transparency, and institutional accountability. Governance is therefore not an obstacle to innovation, but a necessary condition for making it legitimate and sustainable.
These developments have important implications for corporate law and practice because artificial intelligence has moved from the fringes of business into core operations. Now that AI is standard in financial decision-making, consumer analytics, hiring, fraud detection, and regulatory compliance, AI liability has become a core corporate risk, on par with financial misreporting, data breaches, or regulatory non-compliance. Directors and senior officers can no longer credibly plead ignorance of the algorithmic systems that shape company performance; they have a duty to understand and oversee algorithmic risk responsibly.
For the modern corporate law practice, this generates several implications for counsel. Lawyers must treat AI not merely as a tool but as a structure intrinsic to corporate governance itself. Advising on AI today centers on risk frameworks, disclosure frameworks, internal controls, and board-level oversight. Companies that build well-governed AI frameworks, with regular audits, bias testing, documentation, and clear lines of accountability, are not just ticking a box; they are strengthening their institutional resilience.
Good AI governance can also increase corporate value. Investors scrutinize governance more closely as new technology creates reputational and regulatory risks for companies. A company that deploys AI responsibly signals vision and operational competence, and earns stability, credibility in the market, and investor confidence. In this sense, governance is not a constraint on innovation; it is what makes innovation real.
So while the “black box” will never open entirely, it will open somewhat more: not because every algorithm will become transparent, but because the norms of corporate governance will compel engagement. The directors who choose to question and oversee these systems will lead the next decade of corporate law, and the organizations that treat governance as an enabler rather than a hindrance will be the more innovative, accountable, resilient, and future-ready.
References & Bibliography
Statutes & Regulatory Frameworks
- THE COMPANIES ACT, No. 18 of 2013, §§ 134, 149, 166, 177, 205 (India).
- THE DIGITAL PERSONAL DATA PROTECTION ACT, No. 22 of 2023 (India).
- MINISTRY OF ELECTRONICS & INFORMATION TECHNOLOGY, GOV’T OF INDIA, INDIA AI GOVERNANCE GUIDELINES (Nov. 5, 2025).
Judicial Precedents
Supreme Court of India
- Deepak Raheja v. Omkara Assets, [Docket No. N/A] (India Dec. 10, 2025).
- Official Liquidator v. P.A. Tendolkar, (1973) 1 S.C.C. 602 (India).
High Courts
- KMG Wires Private Ltd. v. NFAC Delhi, W.P. (L) No. 24366 of 2025 (Bom. Oct. 6, 2025) (India).
Institutional Reports
- MINISTRY OF ELECTRONICS AND INFORMATION TECHNOLOGY, GOV’T OF INDIA, MEITY EXPERT COMMITTEE REPORT, A GRADED LIABILITY MODEL FOR HIGH-RISK AI SYSTEMS (Jan. 2026).
- DATA SECURITY COUNCIL OF INDIA, GOVERNANCE SUTRAS: IMPLEMENTING THE INDIA AI MISSION (Dec. 2025).
- NITI AAYOG, GOV’T OF INDIA, RESPONSIBLE AI FOR ALL: ADOPTING THE 2025 NATIONAL GUIDELINES (Jan. 2026).





