Authored By: Thiyagesh R
IFIM LAW COLLEGE
Abstract
This paper examines the current landscape of artificial intelligence (AI) regulation in India, analysing the challenges faced by policymakers and proposing a framework for the road ahead. As India emerges as a significant player in the global AI ecosystem, the need for comprehensive and balanced regulatory approaches becomes increasingly important. This research explores India’s existing policy initiatives, highlights key regulatory gaps, and presents recommendations that balance innovation with ethical considerations, data protection, and socioeconomic impacts. By examining comparative international approaches[1] and India-specific contexts, this paper contributes to the discourse on developing AI governance frameworks that are both globally aligned and locally responsive. The paper suggests a multi-tiered regulatory strategy that includes sector-specific guidelines for high-risk applications, the implementation of strong governance mechanisms with transparent accountability structures, the creation of comprehensive AI legislation that builds upon pre-existing frameworks, and the promotion of international collaboration on AI standards.
Keywords: Artificial Intelligence, data security, data privacy, algorithmic bias.
“Regulation of Artificial Intelligence in India: Challenges and the Road Ahead”
Introduction
“The development of full artificial intelligence could spell the end of the human race” …
Stephen Hawking
India is aware of AI’s possibilities and difficulties. The Principal Scientific Advisor chairs an Advisory Group that has been established to work on creating an “AI for India-Specific Regulatory Framework.” A Subcommittee on “AI Governance and Guidelines Development,” guided by the Advisory Group, will offer practical suggestions for AI governance in India. The Subcommittee’s report on AI Governance, which is currently available for public review, is intended to direct the creation of an accountable and reliable AI ecosystem in India. To ensure the ethical and sustainable development of AI technologies, the report places a strong emphasis on responsible AI, identifying design flaws, and reducing risks and downsides.
At present, India lacks a law specifically addressing AI, in contrast to jurisdictions such as the European Union. Nonetheless, this report represents India’s most significant move towards creating an all-encompassing AI governance framework.
Globally, artificial intelligence (AI) technologies are drastically altering economies and societies, with far-reaching effects on daily life, business, and governance. Artificial intelligence refers to the development of computer systems designed to perform tasks that typically call for human intelligence. These tasks encompass learning, problem-solving, reasoning, understanding natural language, recognizing speech, and evaluating visual information. By simulating human cognitive processes, AI systems enable machines to evaluate data, adjust to new information, and make judgments or predictions. Artificial intelligence is becoming increasingly prevalent worldwide in business and industry, healthcare, finance, transportation, education, entertainment, security, agriculture, environmental monitoring, smart cities, customer services, research and development, and many other areas[2].
India’s Current Regulatory Environment
“We should regulate AI in the context of every application,” stated Jensen Huang, CEO of the AI chip giant Nvidia, during a recent ET Conversations event in Mumbai. “An accountant who uses AI should be subject to regulations. AI lawyers ought to be subject to regulations when they practice law. A doctor who practices AI should be subject to regulations.”
1. Relevant Existing Legislation
The Information Technology Act, 2000 serves as India’s main legislative framework for digital governance, although it predates contemporary AI applications and does not include measures for algorithmic accountability or automated decision-making.
Section 43A: This clause provides for compensation for data privacy violations caused by negligent handling of sensitive personal information. It applies to AI systems that handle user data.
Section 66D: This section punishes cheating by personation through the use of a computer resource, which is pertinent to deepfakes and other fraudulent content produced by artificial intelligence.
Section 67: This clause forbids the publication or transmission of obscene material in electronic form and may apply to AI systems that are able to produce such content.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 further strengthen accountability for social media intermediaries and digital platforms[3]. They require data privacy protection, content moderation, and grievance redressal procedures, all of which are pertinent to platforms driven by artificial intelligence.
The Indian Supreme Court recognised the right to privacy as a fundamental right under the Indian Constitution in Justice K.S. Puttaswamy v. Union of India[4]. Although the ruling does not deal with AI specifically, it establishes a standard for safeguarding personal information, which is essential for AI systems that frequently handle sensitive data.
Digital Personal Data Protection Act, 2023
The Digital Personal Data Protection Act, 2023, which received Presidential assent on August 11, 2023, gives India a thorough framework for securing personal data. The Act is extremely pertinent to AI systems that manage vast amounts of personal data, since it addresses the collection, storage, processing, and sharing of data. Its main provisions include the following:
The Act requires AI systems that handle personal data to obtain user consent before processing it, to maintain transparency, and to give users the option to revoke their consent. AI systems that depend on cross-border data transfers are affected by the Act’s requirement that some sensitive data be kept in India. To further ensure accountability, businesses using AI are required to notify regulatory bodies of data breaches within a prescribed period.
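To illustrate how these consent obligations might surface inside an AI data pipeline, the following is a minimal sketch in Python. The `ConsentRegistry` class, the purpose label "model_training", and the processing function are hypothetical illustrations rather than anything prescribed by the Act or drawn from an existing library; the idea is simply that personal data is processed only while a recorded, non-revoked consent exists for that specific purpose.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, Optional, Tuple


@dataclass
class ConsentRecord:
    """Consent captured for one user and one processing purpose; revocable at any time."""
    user_id: str
    purpose: str
    granted_at: datetime
    revoked_at: Optional[datetime] = None


class ConsentRegistry:
    """Hypothetical in-memory registry of user consents (illustration only)."""

    def __init__(self) -> None:
        self._records: Dict[Tuple[str, str], ConsentRecord] = {}

    def grant(self, user_id: str, purpose: str) -> None:
        self._records[(user_id, purpose)] = ConsentRecord(user_id, purpose, datetime.utcnow())

    def revoke(self, user_id: str, purpose: str) -> None:
        record = self._records.get((user_id, purpose))
        if record is not None:
            record.revoked_at = datetime.utcnow()

    def has_consent(self, user_id: str, purpose: str) -> bool:
        record = self._records.get((user_id, purpose))
        return record is not None and record.revoked_at is None


def process_for_model_training(registry: ConsentRegistry, user_id: str, data: dict) -> Optional[dict]:
    """Process personal data only if a valid (non-revoked) consent exists for this purpose."""
    if not registry.has_consent(user_id, "model_training"):
        return None  # skip the record instead of processing it without consent
    return data  # placeholder for the actual feature extraction or training step


registry = ConsentRegistry()
registry.grant("user-42", "model_training")
assert process_for_model_training(registry, "user-42", {"age": 30}) is not None
registry.revoke("user-42", "model_training")  # the user withdraws consent
assert process_for_model_training(registry, "user-42", {"age": 30}) is None
```

Keying consent by purpose, rather than treating it as a single blanket flag, mirrors the purpose-limitation idea that underlies consent-based processing.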
2. Institutional Framework
In 2018, NITI Aayog unveiled India’s first National Strategy for Artificial Intelligence, which placed a strong emphasis on inclusive AI development under the #AIforAll banner. The strategy focused on five major sectors: agriculture, education, healthcare, smart cities and infrastructure, and smart mobility and transportation. The plan called for building legal frameworks for AI-related cybersecurity and data protection, improving research capacities, and producing high-quality datasets. The goal was to achieve a balance between innovation and regulation in order to ensure responsible AI development and foster growth in these vital sectors.
MeitY is actively shaping India’s AI regulatory environment by balancing the risks of using AI with the need for innovation. The emphasis is on encouraging the development and application of AI in a responsible and ethical manner while addressing possible dangers and harms.
In order to create a thorough framework for AI governance, the DoT has been collaborating with other governmental entities, such as MeitY (Ministry of Electronics and Information Technology), since early 2024. The DoT’s role, which focuses on telecommunications-specific issues, supports India’s larger efforts to regulate AI. The DoT has been particularly concerned with the implications of AI systems for network security, ensuring there is enough bandwidth for new AI applications, and establishing guidelines for the use of AI in communications networks.
Regulatory Landscape of Artificial Intelligence
International Strategies:
A) The European Union[5] is aware of AI’s revolutionary potential and the necessity of striking a balance between innovation on the one hand and ethics, consumer protection, and fundamental rights on the other. The objective is to provide a legal framework that promotes AI research while guaranteeing a trustworthy and human-centred approach. Transparency, accountability, equity, data protection, and respect for fundamental rights are some of the core values that underpin the EU’s strategy. These principles seek to allay worries about prejudice, discrimination, and the possible effects of AI technologies on society. The European Commission has set out this approach in its “Proposal for a Regulation on a European approach for Artificial Intelligence,” which provides a thorough framework for regulating AI across a number of industries.
B) China[6] has a sophisticated grasp of the fine line between promoting innovation and resolving societal issues, as seen in its strategic approach to AI policies. The country understands that good regulation fosters sustainable development rather than impeding advancement.
- National AI Standards: To guarantee consistency, compatibility, and quality among diverse AI applications, national standards were established.
- Data Security and Privacy Laws: Strict rules were put into place to protect sensitive and private information, particularly in vital industries like healthcare and finance.
- Ethical AI Development Guidelines: Rules and guidelines addressing algorithmic bias, accountability, and transparency in AI development were introduced.
- Sector-Specific Rules: Recognizing the particular difficulties and ramifications of AI in fields like finance, healthcare, and education, industry-specific rules were put into place.
- Cybersecurity Regulations: To ensure the security and resilience of AI applications, cybersecurity regulations have been strengthened to reduce potential risks connected with AI systems.
- International Cooperation: Taking an active part in international cooperation to create uniform AI standards and laws, helping to create a unified global regulatory framework.
C) The goal of the Ukrainian Strategy of Artificial Intelligence Development is to lay the groundwork for a technologically advanced era in order to ensure the state’s continued economic growth and to improve welfare and living standards. It seeks to:
- To adhere to data protection regulations and improve the amount and calibre of data needed for AI technology development.
- To create a reliable communication system by utilizing the processing power that is available.
- To increase the number of skilled workers in the nation’s AI industry and decrease negative AI system behaviour (the development and use of AI systems that can intentionally injure people should be limited).
D) NATO’s Artificial Intelligence Strategy aims:
- To encourage the proper development and use of AI for Allied defence and security, and to offer a framework that would allow NATO and its partners to lead by example.
- To protect and manage AI technology and its potential for innovation while considering security policy concerns including the application of “Principles of Responsible Use.”
- To help identify and guard against the dangers that malicious AI use poses.
- To offer transparent and appropriately understandable AI applications, especially through the use of review methodologies, sources, and procedures.
- To proactively reduce any unintentional bias in data sets and in the creation and deployment of AI.
Challenges and Future Perspectives
India’s approach to regulating AI is still dispersed across several legislative documents rather than being unified under a single, all-encompassing framework[7]. The lack of legislation specifically addressing AI is the biggest weakness in India’s legal system. Although current laws touch on some aspects of AI development and use, topics like liability, accountability, bias, and intellectual property in AI-generated content are not fully addressed. The creation of dedicated AI legislation is necessary to guarantee responsible innovation, given AI’s disruptive potential.
Liability and Accountability
It can be challenging to assign liability for mistakes or misuse of AI systems because they involve a number of actors, including developers, deployers, users, and regulators. Certain AI-specific challenges, such as liability for AI-generated content or automated decisions, cannot be adequately handled by current laws like the IT Act and the Copyright Act.
Data Privacy and Security
Although the DPDPA[8] seeks to improve privacy, there are still loopholes in its application and scope, putting citizens at risk, particularly as AI systems handle enormous volumes of personal data. When AI firms employ international datasets, jurisdictional issues arise, making it more difficult to enforce Indian law and giving rise to worries about data sovereignty. While certain privacy concerns are addressed by the DPDPA, more protections will be required as AI develops to prevent users’ personal data from being misused by AI platforms.
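As one example of a technical safeguard that can sit alongside these legal protections, the sketch below pseudonymises direct identifiers before a record is handed to an external AI platform. The field names, the salt handling, and the choice of SHA-256 are illustrative assumptions, not requirements of the DPDPA or any other statute.

```python
import copy
import hashlib

# Fields treated as direct identifiers in this illustrative schema.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}


def pseudonymise(record: dict, salt: str) -> dict:
    """Return a copy of the record with direct identifiers replaced by salted hash tokens.

    The hash is one-way, so the receiving AI platform cannot recover the raw
    identifiers, while the data controller can still link records by recomputing
    the same hash with its privately held salt.
    """
    safe = copy.deepcopy(record)
    for field in DIRECT_IDENTIFIERS & safe.keys():
        digest = hashlib.sha256((salt + str(safe[field])).encode("utf-8")).hexdigest()
        safe[field] = digest[:16]  # a truncated token stands in for the raw value
    return safe


record = {"name": "A. Sharma", "email": "a@example.com", "query": "loan eligibility check"}
print(pseudonymise(record, salt="controller-private-salt"))
```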
Algorithmic Bias and Fairness
Models trained on unrepresentative or biased data can perpetuate or exacerbate societal biases and thereby produce discriminatory results in crucial domains such as recruitment, lending, and law enforcement. There is a pressing need for guidelines that ensure AI systems are transparent, explainable, and auditable in order to build public trust and accountability. Clear rules for data collection, algorithm design, and continuous monitoring to identify and address any inequalities are necessary to ensure fairness and minimize bias.
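As a concrete illustration of the kind of continuous monitoring described above, the short Python sketch below computes a disparate-impact ratio: the selection rate a model gives one demographic group divided by the rate it gives a reference group. The group labels, the sample data, and the 0.8 threshold (borrowed from the “four-fifths rule” used in some fairness audits) are illustrative assumptions, not figures prescribed by any Indian regulation.

```python
from collections import defaultdict


def selection_rates(decisions):
    """decisions: iterable of (group_label, selected_bool) pairs, e.g. a hiring model's outputs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [number selected, total seen]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {group: selected / total for group, (selected, total) in counts.items()}


def disparate_impact_ratio(decisions, protected_group, reference_group):
    """Selection rate of the protected group divided by that of the reference group."""
    rates = selection_rates(decisions)
    if rates.get(reference_group, 0.0) == 0.0:
        raise ValueError("reference group has no positive selection rate to compare against")
    return rates.get(protected_group, 0.0) / rates[reference_group]


# Hypothetical audit run: flag the model for human review if the ratio falls below 0.8.
decisions = [("group_a", True), ("group_a", False), ("group_b", True), ("group_b", True)]
ratio = disparate_impact_ratio(decisions, protected_group="group_a", reference_group="group_b")
if ratio < 0.8:
    print(f"Potential bias flagged for review: disparate impact ratio = {ratio:.2f}")
```

An audit of this kind does not itself fix a biased model; it provides the measurable signal that the transparency and monitoring obligations discussed above would require developers and deployers to act on.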
Ethical and Societal Implications
India’s emphasis on Sovereign AI reflects its desire to develop AI independently[9]. India hopes to develop locally appropriate AI solutions and lessen its dependency on foreign technologies by building its own AI capabilities. This strategy promotes both national security and economic progress. The Indian government is actively revising its regulatory framework to encourage the responsible use of AI. The India AI Safety Institute, which will create AI safety standards in cooperation with academic institutions and industry partners, was launched by the Ministry of Electronics and Information Technology in January 2025. At the legislative level, the Information Technology Act of 2000 is set to be replaced by the forthcoming Digital India Act, which is expected to include AI-specific measures pertaining to consumer rights, algorithmic accountability, and regulatory oversight.
Conclusion
India is at a turning point in its AI development, with plenty of potential to use these technologies for social and economic advancement. Realizing these advantages while reducing potential risks requires the development of an efficient regulatory framework.
This study has described the existing legal environment, the significant obstacles, and a methodical approach to AI governance that balances innovation and protection. The suggested framework places a strong emphasis on sectoral guidelines, risk-based regulation, institutional capacity, and phased implementation strategies that are specific to India’s situation. As the AI ecosystem continues to develop quickly, India’s regulatory strategy needs to be flexible yet principled, cooperative yet independent.
References:
https://thehindu.com/opinion/op-ed/the-approach-to-regulating-ai-in-india/article69453499.ece
https://morganlewis.com/blogs/sourcingatmorganlewis/2024/01
https://shodhganga.inflibnet.ac.in
https://carnegieendowment.org
https://economictimes.indiatimes.com
[1] These include China, the European Union, Canada, Korea, Peru, and the United States (although former President Joe Biden’s Executive Order on the use of AI has since been repealed by U.S. President Donald Trump), https://www.thehindu.com/opinion/op-ed/the-approach-to-regulating-ai-in-india.
[2] Bill Whyman, AI Regulation is Coming - What is the Likely Outcome?, Center for Strategic and International Studies, available at https://www.csis.org/blogs/strategic-technologies-blog/ai-regulation-coming-what-likelyoutcome, accessed on 15 October 2023.
[3] The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
[4] Writ Petition (Civil) No 494 of 2012; (2017) 10 SCC 1; AIR 2017 SC 4161
[5] https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwisw8-claGDAxXye_UHHf96AL4QFnoECBkQAQ&url=https%3A%2F%2Fec.europa.eu%2Finfo%2Flaw%2Fbetterregulation%2Fhave-your-say%2Finitiatives%2F12527-Artificial-intelligence-ethical-and-legalrequirements&usg=AOvVaw395hCtnfjo4_q5b6rwNXSr&opi=89978449.
[6] https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwio59K3laGDAxXaYfUHHW6BicQFnoECBAQAQ&url=https%3A%2F%2Flink.springer.com%2Farticle%2F10.1007%2Fs00146-020-00992-2&usg=AOvVaw2l7Bmk-oBCIsyHm6LR67gj&opi=89978449.
[7] Ministry of Electronics and Information Technology, “Draft National Strategy for Artificial Intelligence,” 2023.
[8] Digital Personal Data Protection Act, 2023
[9] Ministry of Electronics and Information Technology, “Bhashini: National Language Translation Mission Progress Report,” 2023.