Who Watches the Algorithms? AI ESG and the New Rules for Boards

Authored By: Ananya Ghose

Shyambazar Law College, University of Calcutta

Abstract:

Artificial intelligence is reshaping corporate decision making at scale, while environmental, social, and governance (ESG) norms are increasingly enforceable through regulation and investor scrutiny. This article examines whether existing board oversight and fiduciary frameworks are adequate to govern AI driven corporate activity. Drawing on Indian corporate law, SEBI’s emerging AI governance expectations, and global governance principles, it argues that AI constitutes a material ESG and fiduciary risk. Boards must therefore adopt proactive, process-based accountability structures to meet their legal duties in the algorithmic age.

  1. Introduction:

Artificial intelligence (AI) has become central to corporate strategy in India, shaping outcomes in recruitment, credit assessment, fraud detection, trading, supply chain optimisation, and compliance. AI systems no longer merely support managerial decisions; they now carry legal, financial, and social consequences of their own.

Simultaneously, ESG obligations have evolved from voluntary disclosures to material governance requirements. Investors, regulators, and exchanges increasingly assess performance based on risk management, ethical conduct, and long-term sustainability. In India, the SEBI Business Responsibility and Sustainability Reporting (BRSR) framework places board level accountability at the heart of ESG oversight.

The intersection of AI and ESG raises a key governance question: are boards equipped to oversee algorithmic decision-making? AI brings unique risks – opacity, scale, vendor reliance, and embedded bias – that challenge traditional human-centric oversight. Failures in AI governance can result in social harm, environmental impact, and regulatory or reputational consequences.

AI should therefore be treated as a core governance and fiduciary risk, not merely a technical issue. Under Indian law and SEBI guidance, boards cannot rely on technical ignorance; directors are expected to implement governance processes to identify, monitor, and mitigate AI related risks.

  2. The AI–ESG–Corporate Governance Nexus:

AI systems are now embedded across corporate value chains. Automated hiring tools screen candidates at scale, algorithmic credit scoring influences access to finance, and AI driven analytics optimise logistics and inventory management. While these systems promise efficiency and competitive advantage, they also reallocate decision-making authority from human judgment to algorithmic outputs.

Each pillar of ESG is directly implicated. From an environmental perspective, large scale AI models require substantial computing power, contributing to energy consumption and carbon emissions through data centres and cloud infrastructure. Social risks arise where algorithms reproduce or amplify bias, leading to discriminatory outcomes in employment, lending, or service delivery. Governance risks stem from opacity, lack of explainability, and unclear accountability when automated systems cause harm.

What distinguishes AI risk from traditional operational risk is its systemic and scalable nature. Algorithmic errors can be replicated across thousands of decisions instantaneously, often without detection until harm has already occurred. As a result, AI governance has become inseparable from corporate governance. Boards are increasingly expected to understand how AI affects strategy, risk exposure, and ESG performance, even if they do not design or code the technology themselves.

  3. Board Duties and AI Oversight under Indian Corporate Law:

Under Indian corporate law, board accountability is grounded in section 166 of the Companies Act, 2013, which requires directors to act in good faith, exercise due care and diligence, and act in the best interests of stakeholders. These duties are reinforced by disclosure and risk management obligations under the SEBI (Listing Obligations and Disclosure Requirements) Regulations, 2015 (LODR).

AI systems complicate these duties in three key ways: their 'black box' nature limits explainability, reliance on third-party vendors blurs responsibility, and most directors lack technical expertise. However, technological complexity does not dilute fiduciary obligations. The duty of care does not require technical mastery, but it does require boards to put in place appropriate governance, risk controls, and reporting mechanisms for AI driven decisions.

Failure to establish such oversight structures may amount to a breach of duty under section 166. Comparative jurisprudence supports this approach. In In re Caremark International Inc. Derivative Litigation, the Delaware Court of Chancery held that directors may be liable not for adverse outcomes, but for failing to implement systems to monitor compliance risks. Applied to AI governance, passive reliance on algorithms or vendor assurances without meaningful oversight may fall short of acceptable board conduct.

  4. AI Failure as an ESG and Fiduciary Breach:

AI related failures increasingly translate into ESG risks. Algorithmic bias in areas such as recruitment or lending can trigger discrimination claims and regulatory action, particularly where protected groups are adversely affected. From a governance perspective, limited explainability weakens accountability and may result in inadequate or misleading disclosures to investors.

Under SEBI’s BRSR framework, boards must disclose material ESG risks, along with governance and oversight mechanisms. As AI becomes embedded in business operations, algorithmic risks may qualify as material. Failure to identify or disclose such risks could attract scrutiny under the SEBI (LODR) Regulations.

AI also raises environmental concerns. Energy intensive models can undermine corporate climate commitments if emissions and resource use are not monitored or mitigated, linking AI governance directly to environmental stewardship obligations under ESG disclosures.

Investor expectations further heighten these pressures. Institutional investors increasingly view AI governance as an indicator of broader risk management quality. Companies that fail to treat AI as a fiduciary and ESG issue face regulatory, financial, and reputational consequences.

  5. Regulatory Developments and Indian AI Governance Trends:

India’s AI governance framework is increasingly centring on board level accountability. The National Cyber and AI Centre’s AI Governance Framework for India (2025) promotes lifecycle-based risk management, ethical deployment, and senior management oversight, encouraging AI Ethics and Risk (AIRE) Committees and clear accountability roles.

Similarly, the India AI Governance Guidelines 2025 place ultimate responsibility on boards to ensure AI use aligns with law and societal impact, shaping regulatory expectations despite being non-binding.

This direction is reinforced by SEBI’s June 2025 Consultation Paper on AI/ML in securities markets, which proposes board approved AI governance frameworks, accountability akin to Companies Act section 134(5), and strong emphasis on risk management, explainability, and oversight.

Together, these developments mark a clear shift: AI governance is becoming a core board responsibility, embedded in corporate risk, compliance, and disclosure frameworks rather than treated as a purely technical issue.

  6. Counterargument: Board Capacity and the Risk of Over-Regulation:

A common objection is that AI oversight obligations overburden boards and expect directors to master complex technologies, potentially stifling innovation. While the concern has merit, it is overstated. Corporate governance has never required boards to hold deep technical expertise; their role is to oversee processes, not engineer systems. As with financial risk, boards can govern AI by ensuring robust policies, audits, and reporting structures, relying on expert input while retaining ultimate accountability. This process-based approach balances innovation with responsibility, strengthening trust, resilience, and long-term value rather than undermining efficiency.

  7. Best Practices and Emerging Governance Models:

Forward looking companies are already adapting their governance structures. One emerging best practice is the establishment of board level technology or AI oversight committees that integrate AI risks into enterprise risk management.

Another development is the introduction of specialised roles, such as a Chief AI Risk Officer (CARO), as recommended in NCAIC guidelines. Such roles act as bridges between technical teams, management, and the board, ensuring that AI risks are communicated and addressed effectively.

Regular AI risk assessments, ethical AI policies aligned with ESG commitments, and targeted director training further strengthen governance. From a legal perspective, these measures demonstrate diligence and reduce liability exposure by evidencing proactive oversight.

  8. Conclusion:

Artificial intelligence has fundamentally altered the nature of corporate decision making and risk. In this environment, ESG and corporate governance frameworks must evolve beyond assumptions of human-centric control. Under Indian law, boards cannot rely on technical ignorance as a defence against AI driven failures.

AI is now a fiduciary risk, an ESG concern, and a governance challenge rolled into one. Treating it as such requires proactive oversight, robust governance processes, and informed engagement with algorithmic systems.

As Indian companies integrate more deeply into global markets, their ability to govern AI responsibly will shape investor confidence, regulatory trust, and long-term competitiveness. The future of board accountability will not be defined by resistance to technology, but by the capacity to govern it with care, foresight, and responsibility.

Bibliography:

Ahern KR, 'AI in Listed Companies' in Oxford Intersections: AI in Society (Oxford University Press 2025).

Buckley RP and others, ‘Regulating Artificial Intelligence in Finance with Senior Managerial Responsibility’ (2021) 44 University of New South Wales Law Journal 359.

Companies Act 2013.

Lieder M, ‘AI Governance in Corporate Decision-Making’ (2022) 50 Journal of Corporate Law 123.

Möslein F, ‘Fiduciary Duties in the Age of Algorithms’ (2023) 18 European Company and Financial Law Review 45.

National Cyber and AI Centre, AI Governance Framework for India (2025).

In re Caremark International Inc Derivative Litigation 698 A 2d 939 (Del Ch 1996).

India AI Governance Guidelines 2025.

SEBI, Business Responsibility and Sustainability Reporting (BRSR) (2021).

SEBI, Consultation Paper on AI/ML in Securities Markets (June 2025).

SEBI (Listing Obligations and Disclosure Requirements) Regulations 2015.
