
BANGLADESH’S DIGITAL CRISIS: WHY WE NEED NEW LAWS FOR AI MISUSE ON SOCIAL MEDIA

Authored By: Meer Joheb

University of Asia Pacific

Introduction 

Artificial Intelligence (AI) has changed the world in many ways. It offers improvements in areas such as healthcare, education, and technology. However, it has also created serious social and ethical problems.1 The growing use of AI on social media platforms has increased the risk of misinformation, privacy breaches, and digital manipulation. In Bangladesh, social media networks such as Facebook, TikTok, and YouTube use AI systems to recommend and distribute content. While this technology helps connect people, it also exposes users to new forms of exploitation. The country’s existing laws were not made for this modern reality and are now too weak to address these problems.2

This legal crisis is not only technical but also institutional. Lawmaking in Bangladesh is slow and cannot keep up with the rapid growth of AI. Political instability, the migration of skilled experts, and limited local research make it even harder to draft proper AI legislation. Unless this gap is closed soon, the misuse of AI could become a major social and democratic threat.

The Growing Misuse of AI on Social Media 

AI misuse on social media has become a serious issue in Bangladesh. It is not just about fake news but about how AI is being used to exploit individuals and communities. AI tools are now capable of creating false identities, fake videos, and targeted propaganda.

The Problem of Deepfakes and Explicit Content 

One of the most dangerous uses of AI is the creation of fake explicit videos known as deepfakes. On platforms such as Telegram, users can upload a photo and have AI bots generate realistic fake videos without the subject’s consent. These videos are often used to blackmail victims, particularly women and older citizens.

This problem is worsening. In 2024, the National Center for Missing and Exploited Children (NCMEC) reported over 1.1 million cases of child sexual abuse material in Bangladesh, with AI-generated content becoming a major contributor.3 In a conservative society, victims face extreme shame and emotional harm, even when proven innocent.4

Unfortunately, current laws such as the Pornography Control Act 2012, the Penal Code 1860, and the newly enacted Cyber Security Ordinance 2025 are not designed to handle synthetic content.5 These laws focus on physical crimes but fail to define and punish AI-generated material properly, making it easy for offenders to escape accountability.

AI Manipulation and Political Influence 

AI is also being used to influence political opinions. Algorithms can target users with specific posts and videos to shape their political beliefs, and this manipulation threatens democratic processes. Globally, AI-powered misinformation has been identified as one of the biggest modern risks.6 In Bangladesh, the Chief Election Commissioner warned that AI-generated deepfakes using the voices and images of political leaders had already interfered with election campaigns. As deepfakes become more realistic, with over 68 percent now nearly indistinguishable from real media,7 it becomes harder for people to know what is true.

This kind of digital manipulation can quietly influence millions of people. It can also be used to silence critics or spread propaganda, undermining free speech and fair elections.

Financial Crimes and Privacy Concerns 

AI is also being used for advanced financial scams. Criminals now use large language models to write convincing emails and text messages that trick people into giving away their money or personal data.8 One major example was a US$25 million scam in Hong Kong, where executives were deceived through AI-generated video calls.

AI can also be used to create “synthetic identities” by mixing real and fake data to commit fraud.9 Social media platforms collect large amounts of personal information, and their AI systems analyze it for targeted advertising. This not only invades privacy but also allows companies to manipulate users’ online behavior.

Since Bangladesh still lacks strong data protection laws, people have little control over how their personal data is used or shared. This makes the country an easy target for both local and international digital crimes.

Gaps in Current Laws and Policies 

The current legal framework in Bangladesh cannot handle AI-related crimes effectively, for several reasons.

Old Laws for New Problems

Existing laws such as the Information and Communication Technology Act 2006 and the Penal Code 1860 were created before the rise of modern AI. Although the Interim Government enacted the Cyber Security Ordinance 2025, it still fails to tackle AI-related crimes. These laws focus on traditional cybercrimes like hacking or data theft but say nothing about algorithmic bias, automated decisions, or synthetic content.

For example, the Cyber Security Ordinance allows authorities to remove content for “data security” reasons, but this ground is vague and open to abuse. Tort law can in theory help victims of deepfakes, but there are very few successful cases. The law still focuses on physical evidence rather than digital systems, leaving AI-driven crimes unregulated.

The Personal Data Protection Ordinance 2025 

The interim government took an important step by drafting the Personal Data Protection Ordinance (DPO) 2025.10 This law introduces strong privacy protections, including explicit consent for data collection, strict rules for handling children’s data, and a requirement to report data breaches within 72 hours.11 It also restricts transfers of personal data outside the country without safeguards.

The DPO 2025 is a vital first step toward protecting citizens’ privacy. It gives people rights such as access, correction, and deletion of their personal data. However, while this law focuses on data collection, Bangladesh still needs a separate AI Act to govern how algorithms make and use decisions.

The earlier Data Protection Act 2023 also exists but cannot function properly because the planned Data Protection Office has not yet been established.12

Regulatory Transition Table 

| Risk of AI | Existing Law | Main Gap | Solution in New Frameworks |
| --- | --- | --- | --- |
| Deepfakes and Explicit Content | Penal Code 1860; Pornography Control Act 2012; Cyber Security Ordinance 2025 | No clear definition of synthetic media | Create an AI-specific law banning deepfake exploitation |
| Data Collection and Tracking | Constitution, Article 43(b); Contract Act 1872 | No data rights or consent rules | DPO 2025 includes consent and breach reporting¹¹ |
| Algorithmic Manipulation | ICT Act 2006; Cyber Security Ordinance 2025 | No transparency in algorithms | National AI Policy 2024 calls for accountability¹⁶ |

Ethical Principles and AI Policy 

Bangladesh’s National AI Policy 2024 draft promotes a human-centered approach to AI that values transparency, accountability, and fairness.13 It stresses respect for human rights and the rule of law in AI development. However, enforcing these principles remains difficult because the institutions responsible for technology governance are weak.

Ethical policies are a good start, but they need to be supported by legal enforcement and technical expertise. Otherwise, they will remain symbolic statements without real impact.

Learning from Other Countries 

The European Union Approach 

The European Union (EU) AI Act, passed in 2024, follows a risk-based framework. It divides AI systems into categories such as unacceptable, high, and limited risk. Systems that pose an unacceptable risk, like manipulative or social scoring systems, are completely banned.14 High-risk systems must follow strict rules on transparency and safety.

Bangladesh can adopt the principle of unacceptable risk by banning harmful AI uses such as  deepfake creation or political manipulation tools. 

The Singapore and Japan Models 

Countries like Singapore and Japan use a more innovation-friendly model. Singapore’s AI Governance Framework offers voluntary guidance on trust, accountability, and incident reporting.15 It also allows companies to test AI technologies safely through regulatory sandboxes. Japan’s AI Promotion Act 2025 aims to make the country one of the most AI-friendly in the world.16

Bangladesh can learn from these examples. Regulatory sandboxes would allow local developers to test AI safely and build ethical solutions in areas such as content moderation or legal research. This would help balance innovation with safety.

Summary of Global Models 

| Model | Focus | Key Mechanism | Lesson for Bangladesh |
| --- | --- | --- | --- |
| EU AI Act | Safety and risk control | Bans unacceptable AI uses | Ban harmful AI tools such as deepfake bots¹⁵ |
| Singapore Framework | Innovation and accountability | Regulatory sandboxes | Encourage ethical local AI |
| Japan AI Promotion Act | Research and development | Incentives for innovation | Strengthen local AI research |
| Bangladesh (Proposed) | Human-centered and sovereign | Custom AI Act and DPO 2025 | Combine safety with innovation |

Data Sovereignty and Local Innovation 

Unlike countries such as China, Bangladesh depends heavily on foreign social media platforms. This means that national data is stored and analyzed abroad. Such dependency is risky because it gives foreign companies control over local data and public opinion.

To protect national interests, Bangladesh must build local data centers and encourage homegrown AI platforms. This would help retain control over personal information and support the country’s growing tech talent. Encouraging domestic innovation and offering better opportunities to local researchers would also reduce the ongoing brain drain.

Recommendations

  • Creating New AI Laws 

Bangladesh urgently needs a specific AI Act. This law should require transparency from all social media platforms operating in the country and make them legally responsible for AI misuse. It should define deepfake crimes clearly and ensure strong punishment for those who use AI to harm others.

The government should also finalize and enforce the DPO 2025 as soon as possible, ensuring that citizens’ data rights are protected.

  • Building Strong Institutions 

A new, independent AI Regulatory Authority is essential. This body should monitor algorithmic fairness, investigate misuse, and ensure that both public and private organizations follow the rules. Strengthening local research institutions and universities will also help Bangladesh build its own AI talent pool.

  • Educating the Public 

Digital literacy is key to fighting AI misuse. Schools and universities should include AI awareness in their lessons. Citizens must be taught how to identify deepfakes, recognize misinformation, and protect their personal data. This will create a more informed and resilient society.

Conclusion 

AI misuse on social media has become one of the most serious threats to privacy, democracy, and social harmony in Bangladesh. Old laws created for a different era cannot handle new AI challenges such as deepfakes, online scams, and algorithmic manipulation. The Personal Data Protection Ordinance 2025 is a strong starting point, but it must be followed by an AI-specific law and strong enforcement institutions.

Bangladesh should learn from global models. It can combine the EU’s focus on safety with Singapore’s emphasis on innovation to create a balanced approach. By protecting data, establishing local AI infrastructure, and educating the public, Bangladesh can turn AI into a force for progress instead of exploitation. The future depends on how quickly and wisely the country acts.

References:

  1. World Economic Forum, Global Risks Report 2025 (WEF 2025).
  2. Jural Acuity, Reforming AI Laws and Regulation in Bangladesh (Tech Global Institute 2024).
  3. Childlight Global Child Safety Institute, ‘Study Finds Millions of Children Face Sexual Violence – and AI Deepfakes Surge is Driving New Harm’ (Childlight 2024).
  4. Shirish and Komal, ‘A Socio-Legal Inquiry on Deepfakes’ (2024) 54(2) California Western International Law Journal 558.
  5. OECD.AI Policy Observatory, ‘Bangladesh’s Chief Election Commissioner Warned That AI-Generated Misinformation and Disinformation Have Already Disrupted Election Campaigns’ (2025).
  6. Keepnet Labs, ‘Deepfake Statistics and Trends’ (2024).
  7. Ibid.
  8. Mastercard, ‘Cybersecurity for the Enterprise: Staying Ahead of AI-Powered Scams and Threats’ (2025).
  9. CanIPhish, ‘What Is Synthetic Identity Fraud and How to Prevent It in 2025’.
  10. PPC Land, ‘Bangladesh Finalizes Comprehensive Data Protection Ordinance Draft’ (2025).
  11. Jural Acuity, ‘Key Updates to Bangladesh’s Privacy Laws in 2025’.
  12. Md Al Riaz and others, ‘Addressing Deepfake through the Existing Legal Strategies in Bangladesh: An Assessment’ (IJLMH 2023).
  13. Government of Bangladesh, National AI Policy 2024 (Draft).
  14. European Commission, ‘Regulatory Framework for AI: A Risk-Based Approach’.
  15. Diligent, ‘Singapore AI Regulation’ (2025).
  16. Navex, ‘The Evolving AI Regulatory Landscape in Asia: What Compliance Leaders Need to Know’ (2025).

