
REGULATING EMERGING TECHNOLOGIES: CURRENT LEGAL ISSUES IN AI, DATA PRIVACY, AND DIGITAL PLATFORMS 

Authored By: RIHAASH T

Saveetha School of Law, SIMATS

ABSTRACT 

The rapid growth of technology has confronted courts across the globe with problems that have no precedent. Artificial intelligence, digital platforms, and cross-border data flows are changing how governments, businesses, and individuals protect their rights, yet existing laws are not adequate to these challenges. This article analyzes current legal issues in technology law, including liability for artificial intelligence, data privacy frameworks, platform regulation, and cybersecurity. It evaluates the approaches of the European Union, the United States, and India to ask whether divergent national laws can uphold accountability in a borderless digital environment. The article argues that ethical ideals alone are insufficient: rigorous legal measures, including enforceable AI liability frameworks, comprehensive data protection laws, and cohesive global digital governance, are essential to reconcile innovation with the protection of human rights.

Keywords: Artificial Intelligence, Data Privacy, Platform Regulation, Cybersecurity, Technology Law

INTRODUCTION 

Technology has always driven changes in the law, but the challenges of the twenty-first century are larger and more urgent than ever before. The rise of artificial intelligence (AI), the growth of digital platforms, and the worldwide circulation of personal data have transformed economies and governments in ways that traditional regulatory systems struggle to match. Legal frameworks designed for physical locations, delineated territories, and human-mediated transactions now confront autonomous decision-making systems, algorithmic governance, and global networks that transcend jurisdictional boundaries. The law is therefore racing to protect rights, assign accountability, and preserve fair competition without obstructing innovation.

Artificial intelligence is the clearest example of this problem. Algorithms capable of operating autonomously are increasingly applied in fields including healthcare diagnostics, financial trading, predictive policing, and self-driving cars, yet questions about who is responsible when an AI system causes harm remain unanswered. The common law of negligence, designed for human actors, adapts poorly to autonomous systems. Scholars debate whether artificial intelligence should be classified as a tool, an agent, or perhaps a new legal entity. The European Union's proposed AI Liability Directive aims to fill this vacuum, although debate continues over whether it goes far enough and how it would be enforced.1

Alongside AI, the governance of personal data has become a central concern of the digital age. The European Union's General Data Protection Regulation (GDPR) has set a global standard by granting consumers extensive rights and imposing strict obligations on data controllers. The United States, by contrast, continues to rely on a fragmented, sectoral model that leaves significant gaps in consumer protection. India's Digital Personal Data Protection Act, 2023 aims to protect individual privacy while safeguarding national interests, but it has been criticized for granting broad exemptions to government agencies.2 These divergent approaches produce a disjointed set of compliance standards for global companies, raising the question of whether harmonization is possible in a digital world without borders.

Digital platforms raise equally significant concerns. Meta, Google, Amazon, and Twitter/X are not merely intermediaries but powerful regulators of commerce, conversation, and information. Their decisions on content moderation, data use, and advertising shape both democratic discourse and economic competition. Jurisdictions have responded differently: the United States offers strong protections for intermediaries under Section 230 of the Communications Decency Act; the European Union has enacted the Digital Services Act and the Digital Markets Act to hold "gatekeeper" platforms accountable; and India's Information Technology Rules, 2021 give the government greater power to oversee online content. These regulatory experiments show how free speech, government power, and corporate power can collide in the digital public sphere.3

The legal landscape is further complicated by the challenges of cybersecurity and digital evidence. The increasing frequency of ransomware attacks, state-sponsored cyber operations, and multinational data breaches strains the capacity of national legal systems to investigate and punish cybercrime. Instruments such as the Council of Europe's Budapest Convention on Cybercrime provide a foundation for international cooperation, yet countries are reluctant to cede sovereignty over security matters. Courts also struggle with the admissibility of digital evidence, chain-of-custody issues, and the attribution of cyberattacks. The use of technology by law enforcement underscores the need for legal frameworks that balance the interests of the state with the rights of individuals.4

This article proceeds in five parts. Part I examines the problems of regulating AI, focusing on liability and accountability. Part II examines data privacy frameworks, contrasting the rights-based approach of the GDPR with the more fragmented systems of the United States and India. Part III discusses platform regulation and the attendant problems of intermediary liability, competition law, and content moderation. Part IV examines cybersecurity and digital evidence, stressing the importance of international cooperation. Part V considers the future of technology law, including the prospect of global digital governance and the trade-offs between protecting rights and encouraging innovation.

ARTIFICIAL INTELLIGENCE AND LIABILITY 

Integrating AI into essential functions of government and business poses considerable liability challenges. Autonomous systems may operate with a degree of unpredictability, producing outcomes unforeseen by their developers or users. This creates a regulatory gap: traditional tort doctrine presumes a responsible human actor, while strict liability applies only to activities that are inherently dangerous.

The European Union has attempted to tackle this issue with its Artificial Intelligence Act and the related AI Liability Directive. The Act classifies AI systems by risk tier, imposing rigorous obligations on "high-risk" technologies such as biometric surveillance, autonomous cars, and credit scoring algorithms.5 The Directive eases the evidentiary burden for claimants, acknowledging that the opacity of algorithms complicates proof of negligence. These measures remain contentious: industry players caution against hindering innovation, while civil society organisations contend that the framework is insufficient to guarantee accountability.
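
To make the Act's tiered structure concrete, the sketch below is a purely illustrative representation of a risk-based taxonomy of the kind the Act adopts; the tier boundaries are simplified and the example systems are assumptions for exposition, not legal classifications.

```python
# Illustrative sketch only: a toy representation of a tiered, risk-based
# classification scheme like the EU AI Act's. Tier descriptions are
# simplified; example use cases are assumptions, not legal determinations.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to strict obligations (assessments, oversight)"
    LIMITED = "permitted with transparency duties (e.g., disclose it is a bot)"
    MINIMAL = "largely unregulated"

# Hypothetical examples of how uses might fall into tiers under this logic,
# following the categories the article describes as "high-risk".
EXAMPLE_CLASSIFICATION = {
    "biometric surveillance system": RiskTier.HIGH,
    "credit scoring algorithm": RiskTier.HIGH,
    "autonomous vehicle safety component": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return the tier description for a known example use case."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case)
    return tier.value if tier else "unclassified: requires case-by-case analysis"

if __name__ == "__main__":
    for use, tier in EXAMPLE_CLASSIFICATION.items():
        print(f"{use} -> {tier.name}: {obligations_for(use)}")
```

The design point the taxonomy captures is that regulatory burden scales with risk: the higher the tier, the heavier the obligations attached to deployment.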

The United States, by contrast, has largely adopted a laissez-faire approach, leaving liability questions to established negligence and product liability doctrine. Courts have confronted cases of algorithmic discrimination and accidents involving self-driving cars, but the absence of federal legislation creates substantial uncertainty. In Lemmon v. Snap Inc., the Ninth Circuit allowed a negligence claim against Snapchat to proceed on the theory that its "speed filter" encouraged reckless driving.6 Although the case did not involve AI, it shows that courts are willing to hold technology companies responsible for design choices that create foreseeable dangers.

India still lacks a comprehensive regime governing AI liability. Its approach is fragmented across consumer protection regulations, provisions of the Information Technology Act, and sector-specific rules. India's growing investment in AI-driven public administration, such as predictive policing and welfare distribution, underscores the need for accountability mechanisms. Without robust safeguards, the risk that algorithmic bias will harm vulnerable populations rises significantly.7

The debate over AI liability raises deep questions about autonomy, responsibility, and human rights. Should artificial intelligence be treated only as a commodity, or does its capacity for autonomous decision-making require a new legal classification? Proposed solutions range from the creation of "electronic personhood" to strict liability regimes underpinned by mandatory insurance.8 Although consensus remains elusive, it is clear that legal systems must adapt swiftly to ensure that technological advances do not outpace the law's capacity to assign responsibility.

DATA PRIVACY AND PROTECTION 

Data privacy has attracted more worldwide attention than almost any other area of technology law. The GDPR has set the standard by granting individuals rights such as the right to erasure, the right to data portability, and strict consent requirements. Its extraterritorial reach means that companies processing the data of EU residents must comply regardless of where they are established.9 Enforcement has been vigorous: companies such as Meta and Google have paid fines running into billions of euros.
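
As a purely conceptual aid, and not a compliance implementation, the following minimal sketch shows how the GDPR's core subject rights translate into concrete operations a data controller's systems must support; the function and data-store names are hypothetical.

```python
# Illustrative sketch only: routing GDPR data-subject requests.
# Names are hypothetical; real systems require authentication,
# identity verification, audit logging, and legal review.
import json

USER_STORE = {"alice@example.com": {"name": "Alice", "history": ["order-1"]}}

def handle_request(email: str, request_type: str) -> str:
    record = USER_STORE.get(email)
    if record is None:
        return "no data held for this subject"
    if request_type == "erasure":        # Art. 17: right to erasure
        del USER_STORE[email]
        return "personal data erased"
    if request_type == "portability":    # Art. 20: right to data portability
        return json.dumps(record)        # machine-readable export
    if request_type == "access":         # Art. 15: right of access
        return str(record)
    return "unsupported request type"

print(handle_request("alice@example.com", "portability"))
```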

The United States, by contrast, still lacks a comprehensive federal data protection law. Instead, it relies on a sector-specific framework: HIPAA for health information, GLBA for financial information, and COPPA for children's information. The Federal Trade Commission (FTC) protects privacy through its authority to police unfair and deceptive business practices. The California Consumer Privacy Act (CCPA) and its successor, the California Privacy Rights Act (CPRA), move U.S. legislation closer to a GDPR-like framework, but the lack of standardisation burdens businesses and individuals alike.10

India's Digital Personal Data Protection Act, 2023 represents significant progress but remains contested. While it grants individuals rights over their data and imposes obligations on data fiduciaries, critics argue that its broad exemptions for governmental bodies undermine privacy protections. The Supreme Court of India's recognition of privacy as a fundamental right in Puttaswamy v. Union of India11 laid the constitutional foundation for the law; its effectiveness, however, will depend on independent enforcement mechanisms and judicial oversight. China's Personal Information Protection Law (PIPL) resembles the GDPR in certain respects but incorporates national security provisions; together with the Data Security Law, it reflects a model in which privacy protection is subordinate to state objectives, raising concerns about surveillance and authoritarian control.12

This divergence in data privacy regulation illustrates how difficult harmonization is in a global digital economy. Businesses operating across borders must navigate contradictory rules, and individuals receive different levels of protection depending on where they are. Scholars have proposed an international treaty on data protection, akin to the Budapest Convention on Cybercrime, but geopolitical rivalries render such harmonisation unlikely in the near future.13

PLATFORM REGULATION 

Digital platforms present some of the most difficult legal challenges of the modern age. Their dominance over markets and public debate has rendered them quasi-sovereign entities, prompting questions about accountability, competition, and freedom of speech.

In the United States, Section 230 of the Communications Decency Act gives platforms broad immunity for user-generated content: they cannot be held liable for what users post, yet they may moderate content in good faith.14 Section 230 has been praised for fostering free speech and innovation, but it has also been criticised for letting platforms escape responsibility for misinformation, hate speech, and harmful content. Recent cases such as Gonzalez v. Google LLC tested the limits of Section 230, but the Supreme Court declined to narrow its scope.15

The European Union has adopted a more interventionist stance. The Digital Services Act (DSA) imposes due diligence requirements on platforms, including transparency in content moderation, risk assessments, and independent audits. The Digital Markets Act (DMA) targets anti-competitive conduct by "gatekeepers," aiming to promote fair competition.16 These regulations embody the EU's broader pursuit of digital sovereignty, intended to check the dominance of U.S.-based technology conglomerates.

India's Information Technology Rules, 2021 require platforms to appoint grievance officers, enable the traceability of messages, and promptly remove unlawful content upon notice. Critics argue that these rules give the government excessive control over online speech, raising concerns about censorship and chilling lawful expression.17

The tension between platform autonomy, government regulation, and user rights remains unresolved. Some scholars propose classifying platforms as public utilities, imposing heightened obligations of impartiality and fairness.18 Others warn that excessive regulation may entrench government power and stifle innovation. What is clear is that platforms can no longer be treated as mere intermediaries; they must bear legal responsibility for their role as arbiters of digital public discourse.

CYBERSECURITY AND DIGITAL EVIDENCE 

The digital age has created new forms of vulnerability. Ransomware attacks, supply chain breaches, and state-backed cyber operations pose serious threats to critical infrastructure. Legal institutions face the task of deterring such attacks and punishing those who carry them out.

The Budapest Convention on Cybercrime is the most important international agreement in this field. It fosters cooperation between countries on extradition, evidence sharing, and the harmonized criminalization of serious offences. However, major countries including Russia, China, and India are not signatories, which limits its effectiveness.19 The United Nations has opened negotiations for a new cybercrime treaty, but significant differences between countries over human rights and sovereignty persist.

Authentication, chain of custody, and attribution complicate prosecution in court. The Supreme Court of India, in Anvar P.V. v. P.K. Basheer, clarified that electronic records require a certificate under Section 65B of the Evidence Act, yet subsequent cases have applied the requirement inconsistently.20 U.S. courts admit digital records under the Federal Rules of Evidence, but the ease with which such records can be altered remains a persistent concern.

Cybersecurity also raises questions of state accountability under international law. The Tallinn Manual, although not legally binding, offers guidance on the application of international humanitarian law to cyber operations, including the principles of proportionality and sovereignty.21 Even so, states remain hesitant to formally attribute cyberattacks, reflecting both evidentiary difficulties and geopolitical caution. The regulation of cybersecurity thus illustrates the tension between national sovereignty and global interdependence: effective governance requires transnational cooperation, yet national security concerns hinder the formulation of enforceable international rules.

FUTURE PATHWAYS 

The analysis reveals a common problem: legal systems are struggling to adapt to technology that crosses borders and traditional lines of accountability. Several options merit consideration going forward. First, AI regulation requires uniform liability frameworks: mandatory insurance schemes, combined with strict liability for high-risk applications, could facilitate compensation while preserving incentives to innovate. Second, global agreement on data privacy is essential, although the current geopolitical climate makes an enforceable convention unlikely; interoperability agreements, under which different regimes recognise one another's adequacy, may offer a practical path. Third, platform regulation must balance freedom of expression with accountability. Co-regulation schemes, in which platforms set codes of conduct subject to government oversight, could represent a middle ground.

Fourth, cybersecurity governance requires a new kind of multilateralism. Expanding the membership of the Budapest Convention, together with regional confidence-building initiatives, could strengthen the framework. Finally, legal education and capacity building are essential: judges, lawyers, and regulators must be technologically proficient to engage effectively with complex systems.

CONCLUSION 

Technology law stands at a critical point in its development. The disruptive potential of AI, the abundance of personal data, the dominance of digital platforms, and the rising threat of cybercrime all strain the capacity of legal institutions to enforce accountability and protect rights. Current frameworks, although significant, are fragmented and reactive. To address these problems, the law must advance in three directions: first, by creating enforceable liability frameworks for AI and autonomous systems; second, by establishing robust and interoperable data protection regimes; and third, by holding digital platforms and cyber operations accountable. The central challenge is balancing the protection of rights against the encouragement of innovation: too much control may slow progress, while too little regulation invites abuse and harm. By adopting legal systems that are adaptive, transparent, and rights-based, nations can ensure that technological advancement upholds human dignity. The law should not merely keep pace with emerging technologies; it should shape their trajectory towards fairness and sustainability.

REFERENCE(S): 

1. Proposal for a Directive of the European Parliament and of the Council on Adapting Civil Liability Rules to Artificial Intelligence, COM (2022) 496 final (Sept. 28, 2022).
2. Regulation (EU) 2016/679 (General Data Protection Regulation).
3. 47 U.S.C. § 230 (2018); Regulation (EU) 2022/2065 (Digital Services Act); Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (India).
4. Convention on Cybercrime, Nov. 23, 2001, 2296 U.N.T.S. 167.
5. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM (2021) 206 final (Apr. 21, 2021).
6. Lemmon v. Snap Inc., 995 F.3d 1085 (9th Cir. 2021).
7. Vidushi Marda, Artificial Intelligence Policy in India: A Framework for Ethical and Rights-Based Governance, 33 Nat'l L. Sch. India Rev. 67 (2021).
8. Ugo Pagallo, The Legal Challenges of Big Data: Putting Secondary Rules First in the Field of EU Data Protection, 3 Eur. Data Prot. L. Rev. 36 (2017).
9. Case C-131/12, Google Spain SL v. Agencia Española de Protección de Datos, 2014 E.C.R. I-317.
10. Cal. Civ. Code §§ 1798.100–1798.199 (West 2020).
11. K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).
12. Personal Information Protection Law of the People's Republic of China (promulgated Aug. 20, 2021).
13. Graham Greenleaf, Global Data Privacy Laws 2023: 162 National Laws and Many Bills, 183 Privacy Laws & Bus. Int'l Rep. 10 (2023).
14. Jeff Kosseff, The Twenty-Six Words That Created the Internet (2019).
15. Gonzalez v. Google LLC, 598 U.S. 617 (2023).
16. Regulation (EU) 2022/1925 (Digital Markets Act).
17. Chinmayi Arun, Rebalancing Regulation of Speech: Intermediary Liability in India, 33 Harv. J.L. & Tech. 49 (2019).
18. Frank Pasquale, From Territorial to Functional Sovereignty: The Case of Amazon, 40 Fordham Urb. L.J. 1107 (2013).
19. Michael Chertoff & Paul Rosenzweig, A Stronger International Legal Framework on Cybercrime, 112 Am. J. Int'l L. Unbound 197 (2018).
20. Anvar P.V. v. P.K. Basheer, (2014) 10 SCC 473 (India).
21. Michael N. Schmitt ed., Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (2017).

