Authored By: Ojaswini Verma
Savitribai Phule Pune University
Abstract
Artificial intelligence (AI) and data-driven technologies have become integral to corporate operations in India, enhancing efficiency in finance, human resources, logistics, customer service and strategic decision-making. However, the scale, sensitivity and opacity of automated systems have magnified legal risks associated with privacy, profiling, cybersecurity and discriminatory outcomes. The enactment of the Digital Personal Data Protection Act, 2023 (DPDPA) establishes a comprehensive statutory framework governing the processing of personal data, while sectoral authorities such as the Reserve Bank of India (RBI), Securities and Exchange Board of India (SEBI) and Insurance Regulatory and Development Authority of India (IRDAI) have issued domain-specific guidelines affecting AI-driven systems. This article critically analyses the evolving responsibilities of in-house counsel as organisations deploy AI at scale. Drawing upon constitutional jurisprudence, statutory mandates and global regulatory trends, it argues that corporate legal teams must transition from narrow advisory functions to proactive governance roles that integrate compliance, technological literacy, contractual safeguards and organisational accountability. The paper concludes with recommendations for harmonising corporate innovation with legal oversight in an era of rapid technological transformation.
Introduction
Artificial intelligence has rapidly shifted from being a specialised technological tool to an operational necessity across organisational structures in India. Banks and fintech firms utilise machine-learning models for credit scoring and fraud detection; e-commerce platforms rely on predictive analytics and recommendation engines; human resource departments employ AI-based screening tools to manage high-volume applications; hospitals and insurers increasingly adopt diagnostic and underwriting models; and customer service functions incorporate large-language-model-driven chat systems. These systems depend heavily on personal data, often collected from disparate sources and processed in opaque ways. As a result, legal considerations relating to privacy, accuracy, fairness and explainability have become central to corporate governance.
The Digital Personal Data Protection Act, 2023 represents India’s first comprehensive personal data protection statute. Although it does not explicitly regulate AI, its provisions apply to all automated processing of personal data, thereby encompassing the entire spectrum of corporate AI systems. The Act imposes obligations such as lawful purpose, consent, notice, data minimisation, purpose limitation, data accuracy and security safeguards. Sectoral regulators have supplemented these rules with guidelines on algorithmic lending, AI-based market operations and digital underwriting. Alongside this statutory and regulatory framework is the jurisprudence of the Supreme Court, particularly Justice K.S. Puttaswamy (2017), which affirmed privacy as a fundamental right and articulated principles of necessity and proportionality that inform responsible data governance.
Against this backdrop, the role of in-house counsel has expanded considerably. Corporate legal teams must now understand data flows, risk factors in algorithmic decision-making, cross-border transfer obligations, vendor dependencies, and the operational challenges inherent in deploying AI systems. This article explores how these legal developments recalibrate the responsibilities of in-house counsel and what organisational structures are necessary to ensure that innovation aligns with statutory and constitutional requirements.
Research Methodology
This article employs a doctrinal and analytical research methodology. It examines statutory developments under the DPDPA, relevant sector-specific guidelines issued by regulatory authorities, and constitutional jurisprudence governing privacy, fairness and digital governance. Comparative analysis is drawn from international instruments such as the GDPR, the EU AI Act, OECD Principles on AI and the NIST AI Risk Management Framework. Secondary sources, including regulatory papers, policy briefs and academic writings, are used to contextualise how in-house counsel can operationalise compliance within corporate structures. The methodology emphasises interpretation, critical evaluation and synthesis rather than empirical or quantitative assessment.
Legal Framework Governing AI and Data Protection in India
The DPDPA establishes principles that directly shape the use of AI systems within corporations. Section 4 permits the processing of personal data only for a lawful purpose and only with the Data Principal’s consent or for certain legitimate uses, a requirement that limits arbitrary or excessive reliance on automated tools [i]. Section 5 requires notice describing the personal data to be processed and the purpose of processing, a transparency obligation that extends in practice to disclosure of automated activities, ensuring that individuals are aware when algorithms influence decisions affecting them.
Section 6 governs consent and requires that it be free, specific, informed, unconditional and unambiguous, and limited to the purpose specified in the notice. This reinforces purpose limitation: data collected for one objective, such as customer service, cannot be used to train AI models for unrelated functions without fresh consent or other lawful authority. Many AI models involve repurposing or secondary uses of personal data, and such practices may violate these requirements unless appropriate notice and consent mechanisms are integrated. Section 8 imposes duties on Data Fiduciaries, including safeguards to avoid unauthorised processing, obligations of accuracy and security, and the need for data erasure after purpose fulfilment. These provisions limit excessive data retention, a common issue in machine-learning systems where historical datasets are stored indefinitely for model retraining [ii].
Section 10 provides for the designation of Significant Data Fiduciaries (SDFs), a category likely to include firms engaging in large-scale profiling or deploying high-impact automated decision-making tools. SDFs must conduct Data Protection Impact Assessments (DPIAs), appoint a Data Protection Officer (DPO) and undergo periodic audits. These provisions elevate compliance expectations for corporations heavily dependent on AI.
Sectoral regulators impose additional requirements. The RBI’s digital lending guidelines emphasise transparency in algorithmic credit decisions and prohibit discriminatory practices arising from opaque scoring models [iii]. SEBI’s 2021 framework requires entities using AI/ML systems in trading, surveillance or risk management to document model logic, maintain audit trails and report material failures [iv]. IRDAI has highlighted the dangers of discriminatory underwriting when AI systems are trained on biased datasets, stressing the need for fairness and documentation [v].
International frameworks influence Indian corporate governance indirectly. The GDPR shapes global expectations of transparency, data minimisation and automated decision-making rights, including the right to explanation in certain circumstances [vi]. The EU AI Act categorises AI systems by risk level, imposing strict obligations on high-risk systems that process sensitive personal data or affect essential services [vii]. The OECD Principles on AI emphasise human-centricity, accountability and robustness [viii]. The NIST AI Risk Management Framework provides practical mechanisms for assessing and mitigating AI risk throughout the model lifecycle [ix]. Together, these instruments supply normative and procedural models that Indian companies may voluntarily adopt.
Judicial Interpretation and Constitutional Standards
India’s constitutional jurisprudence forms the normative core of data protection and AI governance. The landmark decision in Justice K.S. Puttaswamy (Retd.) v. Union of India recognised privacy as a fundamental right and grounded it in dignity, autonomy and informational self-determination [x]. The judgment articulated a four-part proportionality standard: legality, legitimate aim, necessity and procedural safeguards. While this standard governs state action, it also informs corporate governance, particularly where firms use AI systems that affect individuals’ rights or participate in state functions such as e-governance or welfare delivery.
The Aadhaar judgments reinforced the principles of purpose limitation and minimisation, striking down provisions that allowed private entities unrestricted access to Aadhaar authentication services [xi]. These rulings underscore that large-scale digital infrastructures must incorporate safeguards to prevent function creep. In Anuradha Bhasin v. Union of India, the Supreme Court held that indefinite internet shutdowns violate proportionality and procedural fairness, illustrating that digital restrictions must satisfy necessity and least-restrictive means analysis [xii]. In Shreya Singhal v. Union of India, the Court struck down vague online-speech restrictions, establishing that digital regulation must be precise, narrowly tailored and procedurally fair [xiii].
Although Indian courts have not yet evaluated liability for corporate AI systems, the constitutional values articulated in these judgments create an interpretive framework for evaluating fairness, transparency and accountability in automated processing.
Critical Analysis: The Evolving Duties of In-House Counsel
The convergence of statutory obligations, constitutional principles and global regulatory norms has transformed the expectations placed on in-house counsel. Their role now extends far beyond traditional legal advisory functions.
First, in-house counsel must be involved early in the design and procurement of AI systems. They must ensure that data collection is lawful, that consent requirements are satisfied, and that datasets used for model training do not violate purpose limitation or minimisation principles. Counsel must also advise on mechanisms for transparency, including notices that disclose automated decision-making and channels for individuals to seek explanations or raise objections [xiv].
Second, vendor management has become a central concern. AI vendors often use proprietary models trained on undisclosed datasets. In-house counsel must negotiate Data Processing Agreements that require transparency regarding training data provenance, model documentation, security obligations, audit rights and indemnity provisions. They must evaluate cross-border data transfer mechanisms, particularly when vendors rely on foreign cloud infrastructure [xv].
Third, in-house counsel must operationalise compliance within the organisation. This includes conducting DPIAs for high-risk systems, ensuring breach-notification readiness, coordinating with privacy and cybersecurity teams, and establishing internal policies that govern acceptable AI use. Counsel must also address issues of algorithmic bias and discrimination by working with technical teams to review model outputs and performance metrics.
Fourth, in-house counsel must grapple with structural and cultural challenges. Many organisations lack adequate documentation of data flows or internal awareness of AI risks. Counsel must advocate for AI registers, periodic audits, model-validation protocols and training programmes for employees at all levels. They must also mediate between technical teams and senior leadership, ensuring that AI investments are aligned with legal obligations rather than driven solely by operational expedience.
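One way to make such an AI register tangible is sketched below: a minimal Python data structure capturing the fields that legal and technical teams might record for each deployed system. The `AIRegisterEntry` class, its field names and the sample entry are illustrative assumptions, not a format prescribed by the DPDPA or any regulator.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AIRegisterEntry:
    """One row of a hypothetical internal AI register kept jointly by legal and technical teams."""
    system_name: str                       # internal identifier for the model or tool
    business_owner: str                    # accountable department or role
    purpose: str                           # stated purpose, matching the notice given to data principals
    personal_data_categories: List[str] = field(default_factory=list)
    lawful_basis: str = "consent"          # consent or a legitimate use under the DPDPA
    vendor: Optional[str] = None           # external provider, if any
    cross_border_transfer: bool = False    # whether personal data leaves India, e.g. to a foreign cloud
    dpia_completed: bool = False
    dpia_date: Optional[date] = None
    last_bias_review: Optional[date] = None

# Hypothetical entry counsel might record before a new model goes live
entry = AIRegisterEntry(
    system_name="credit-scoring-model",    # hypothetical system name
    business_owner="Retail Lending",
    purpose="Assessing creditworthiness of loan applicants",
    personal_data_categories=["income", "repayment history", "employment data"],
    lawful_basis="consent",
    vendor="ExampleAnalytics Pvt Ltd",     # hypothetical vendor
    cross_border_transfer=True,
    dpia_completed=True,
    dpia_date=date(2024, 6, 1),
)
print(entry.system_name, entry.dpia_completed)
```

Even a simple structure of this kind gives counsel a single point of reference for DPIA status, lawful basis, vendor dependencies and cross-border transfers when responding to audits or regulatory queries.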
Finally, counsel must engage externally, interacting with regulators, responding to consumer complaints, participating in industry bodies and shaping internal policy in anticipation of regulatory changes.
Recent Developments and Emerging Challenges
Governmental and regulatory developments continue to shape corporate AI governance. The MeitY advisory on deepfakes and AI-generated content requires platforms to establish safeguards against harmful or misleading synthetic media [xvi]. The DPDPA’s yet-to-be-released rules on SDF designation, DPIA standards and audit requirements will significantly impact how companies document AI systems.
AI-driven discrimination remains a pressing concern. Machine-learning models may reproduce historical biases embedded in datasets. In sectors such as lending, hiring and insurance, this can result in unlawful discrimination. Companies must therefore incorporate fairness metrics, explainability tools and bias-testing protocols.
Cybersecurity threats are also escalating. AI models can be attacked through adversarial inputs, data poisoning or model inversion attacks. In-house counsel must ensure that contractual and internal safeguards address such risks. Cross-border data transfers remain complex, with varying international standards and the potential for conflict between foreign laws and Indian regulations.
These challenges underscore the need for proactive governance, robust documentation and a multidisciplinary approach to AI oversight.
Suggestions / Way Forward
To harmonise innovation with legal accountability, corporations must adopt structured AI governance frameworks. They should establish governance committees that oversee AI deployment, monitor risks and ensure alignment with legal requirements. DPIAs must be treated as substantive exercises rather than procedural formalities, identifying specific harms and mitigation strategies. Vendor contracts must emphasise transparency, data provenance, auditability, security obligations and liability allocation.
Organisations must invest in training programmes that enhance AI literacy among legal, technical and managerial staff. Internal policies must clarify acceptable AI uses and establish mechanisms for ongoing monitoring of model performance, fairness and transparency. Companies should also participate in regulatory sandboxes and industry working groups, fostering dialogue and contributing to the development of responsible AI norms.
At the policy level, regulators should publish sector-specific guidance addressing transparency, algorithmic explainability, fairness testing and documentation standards. Clarification of the DPDPA’s applicability to automated decision-making is essential, as is the development of standards for algorithmic audits and model documentation.
Conclusion
Artificial intelligence has become an essential tool for corporate innovation in India, but its deployment raises profound legal and ethical questions. The DPDPA, sectoral guidelines and constitutional jurisprudence provide a framework for responsible data governance, but organisations must translate these principles into actionable internal mechanisms. In-house counsel are central to this effort. Their evolving role encompasses governance, risk assessment, vendor management, policy formulation and organisational training. As AI continues to expand across sectors, the strategic involvement of corporate legal teams will be indispensable in ensuring that innovation is balanced with legal accountability and respect for individual rights.
At the same time, the integration of AI within complex business functions requires sustained internal coordination, continuous monitoring and periodic reassessment of risks, particularly in areas involving profiling or automated decision-making. Ensuring fairness, transparency and explainability will remain ongoing challenges, reinforcing the need for in-house counsel to work closely with technical and managerial teams. By adopting structured governance frameworks and aligning technological development with legal standards, corporations can navigate the complex intersection of AI and data protection while maintaining public trust, regulatory compliance and long-term organisational credibility.
References
[i] Digital Personal Data Protection Act, No. 22 of 2023, INDIA CODE.
[ii] Id. §§ 5–8.
[iii] Reserve Bank of India, “Guidelines on Digital Lending” (2022).
[iv] Securities and Exchange Board of India, “Framework for AI/ML Applications by Market Intermediaries” (2021).
[v] Insurance Regulatory and Development Authority of India, Circulars on Digital Underwriting (2021–2023).
[vi] Regulation (EU) 2016/679 (General Data Protection Regulation).
[vii] Regulation (EU) 2024/1689 of the European Parliament and of the Council (Artificial Intelligence Act), 2024.
[viii] OECD, Recommendation of the Council on Artificial Intelligence (2019).
[ix] NIST, “Artificial Intelligence Risk Management Framework” (Version 1.0, 2023).
[x] Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 S.C.C. 1 (India).
[xi] K.S. Puttaswamy v. Union of India (Aadhaar), (2019) 1 S.C.C. 1 (India).
[xii] Anuradha Bhasin v. Union of India, (2020) 3 S.C.C. 637 (India).
[xiii] Shreya Singhal v. Union of India, (2015) 5 S.C.C. 1 (India).
[xiv] Digital Personal Data Protection Act, No. 22 of 2023, § 6.
[xv] See typical Data Processing Agreements and AI vendor due-diligence standards.
[xvi] Ministry of Electronics & Information Technology (MeitY), “Advisory on Deepfakes and AI Safety” (2024).