Authored By: Tumelo Nathal Makamedi
University of Mpumalanga
Introduction
South Africa aims to be a continental AI leader by leveraging its 2024 National AI Policy Framework to drive inclusive economic growth. The strategy prioritizes addressing the digital divide and historical inequities through ethical, human-centric development. By focusing on five pillars, including talent development and digital infrastructure, the government seeks to use AI as a catalyst for social equity. However, success hinges on overcoming significant hurdles such as energy constraints, limited local expertise and the need for a sector-specific regulatory model by 2027.
While South Africa has not yet enacted a standalone “AI Act”, it has made significant progress in establishing a formal regulatory framework. In March 2026, the Department of Communications and Digital Technologies (DCDT) gazetted the Draft National AI Policy for a 60-day public comment period. The draft represents a shift from high-level principles to concrete governance, opting for a sector-specific, multi-regulator model rather than a single central authority. This middle-road approach aims to avoid the perceived over-regulation of the EU’s AI Act while maintaining higher standards of accountability than more laissez-faire models. The policy prioritizes ethical deployment and skills development to ensure AI serves as a tool for social equity and inclusive growth, and calls for tailored guidelines for high-impact sectors like finance and healthcare, with full implementation expected between 2027 and 2028.
South Africa is pivoting from its previous wait-and-see stance to a structured, sector-specific model that balances innovation with risk management. Rather than creating a single AI Act or a central regulator, the 2026 Draft National AI Policy empowers existing bodies like the Information Regulator and ICASA to oversee AI within their specific domains. This agile approach aims to prevent red tape from stifling growth while ensuring that high-impact areas like healthcare and finance have clear, ethical guardrails tailored to their unique needs.
The Foundations: The Existing Legal Framework
Protection of Personal Information Act 4 of 2013
In the absence of dedicated AI legislation, the Protection of Personal Information Act (POPIA) serves as the primary “de facto” regulator for AI systems that process personal data. It provides a binding legal framework that mandates accountability, transparency and security throughout the AI lifecycle. POPIA governs AI through several critical mechanisms. Section 71 (Automated Decision-Making) is the most directly AI-related provision. It generally prohibits decisions based solely on automated processing (such as profiling) that have significant legal or personal consequences for an individual, such as credit scoring or recruitment. There are exceptions: automated decisions are only permitted if they occur in connection with a contract requested by the data subject, are authorized by law, or include appropriate safeguards such as giving the individual a chance to make representations.
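The prohibition-plus-exceptions structure of section 71 described above can be expressed as simple decision logic. The sketch below is purely illustrative: the field names, the `permitted_under_s71` function and the simplified conditions are assumptions made for this example, not the Act’s wording, and it is in no way legal advice.

```python
# Illustrative sketch only: a simplified encoding of the section 71 logic
# summarised above. Field names and conditions are hypothetical; not legal advice.
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    solely_automated: bool            # no meaningful human involvement
    significant_consequences: bool    # e.g. credit scoring, recruitment
    under_requested_contract: bool    # exception: contract requested by the data subject
    authorised_by_law: bool           # exception: statutory authorisation
    subject_may_make_representations: bool  # safeguard: chance to make representations

def permitted_under_s71(d: AutomatedDecision) -> bool:
    """True if the prohibition is not triggered or an exception/safeguard applies."""
    if not (d.solely_automated and d.significant_consequences):
        return True  # the general prohibition only targets solely automated, significant decisions
    return (d.under_requested_contract
            or d.authorised_by_law
            or d.subject_may_make_representations)

# A fully automated, significant decision with no exception or safeguard fails:
print(permitted_under_s71(AutomatedDecision(True, True, False, False, False)))  # False
```

Adding any one of the exception flags, such as allowing the individual to make representations, flips the outcome, which mirrors how the safeguards operate in the prose above.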
In Mavundla v MEC: Dept of Co-operative Government KZN, the court illustrated the danger of unsupervised automation. The case shows that even where an AI system is efficient, the lack of meaningful human intervention, a core requirement of POPIA, leads to legal liability. It grounds the abstract section 71 in a real-world South African courtroom failure.
AI models must adhere to POPIA’s eight core conditions, including purpose specification (using data only for its original intent) and data minimization (collecting only what is strictly necessary). Training AI on historical data requires verifying that the original collection complied with these principles. Since many AI models rely on global cloud infrastructure, POPIA also restricts transferring personal data outside South Africa. Transfers are only allowed if the recipient country has adequate data protection laws comparable to POPIA, the data subject provides explicit consent, or binding agreements or corporate rules ensure continued protection. Enforcement falls to the Information Regulator, an independent body empowered to monitor POPIA compliance. It can investigate complaints, issue fines, and require prior authorization for high-risk processing, such as linking unique identifiers across different databases for purposes other than their original collection.
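The cross-border transfer grounds summarised above amount to a short checklist: at least one lawful basis must apply before personal data leaves South Africa. The sketch below encodes that checklist; the function and parameter names are hypothetical assumptions for illustration, not terms from the Act, and it is not legal advice.

```python
# Illustrative sketch only: the cross-border transfer grounds as a checklist.
# Names are hypothetical; not legal advice.
def cross_border_transfer_allowed(adequate_foreign_law: bool,
                                  explicit_consent: bool,
                                  binding_corporate_rules: bool) -> bool:
    """A transfer is permitted if at least one lawful ground applies."""
    return adequate_foreign_law or explicit_consent or binding_corporate_rules

# A transfer to a country without adequate laws, but with the data subject's
# explicit consent, satisfies the checklist:
print(cross_border_transfer_allowed(False, True, False))  # True
```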
Consumer Protection Act 68 of 2008
While POPIA manages the data, the Consumer Protection Act (CPA) acts as the safety net for AI outputs. It ensures that AI-driven products and services do not unfairly disadvantage or harm South African consumers. Key ways the CPA regulates AI include the following. AI systems like automated credit scoring or pricing algorithms must not be used to offer unfair, unreasonable or unjust terms. The CPA prohibits discriminatory marketing and pricing: if an AI model uses proxy data, such as postal codes, to charge certain groups more, it could violate the right to equal access to goods and services. Under section 61, suppliers and manufacturers face strict liability for harm caused by defective products; if an AI-driven service like an autonomous delivery bot or diagnostic tool causes injury or loss, the provider can be held liable even without proof of negligence. Consumers also have a right to plain and understandable language. This challenges “black box” AI, as companies must be able to explain how an automated service works or why a specific result was reached.
Cybercrimes Act 19 of 2020
While there is no AI-specific cyber statute, the Cybercrimes Act 19 of 2020 provides the criminal law foundation for prosecuting unauthorized access, interference or manipulation of AI systems and their underlying data. It codifies several offences that directly address risks in AI environments. Section 2, known as the hacking provision, criminalizes unlawfully and intentionally securing access to a computer system or data; this applies to unauthorized entry into AI models, databases or training environments. Sections 5 and 6 prohibit the unlawful manipulation, alteration or deletion of data and computer programs, which is critical for addressing AI model poisoning or the unauthorized modification of algorithms to change their outputs.
The Act provides SAPS and the judiciary with enhanced powers to investigate and prosecute these digital crimes. Convictions can result in fines or imprisonment of up to 5 years for unauthorized access, rising to 10 or 15 years for aggravated offences involving restricted computer systems, such as a bank’s AI-driven fraud detection system or government infrastructure, or where the crime endangers lives. On admissibility of evidence, the Act works alongside the Electronic Communications and Transactions Act (ECTA) to provide guidelines for handling digital evidence, ensuring AI-related crimes can be effectively tried in court.
The 2026 Draft National AI Policy
The South African government’s decision to adopt a multi-regulator model in the 2026 Draft National AI Policy is a pragmatic move designed to avoid the red tape of a centralized, one-size-fits-all “AI Tsar”. By embedding governance within existing bodies, the state aims to leverage sector-specific expertise; creating a new standalone regulator would take years and massive funding. Instead, the Information Regulator handles data privacy (POPIA), ICASA manages AI in telecommunications and spectrum, and the Prudential Authority oversees AI in banking. A single agency might struggle to understand the nuances of AI in both healthcare diagnostics and autonomous mining, whereas existing regulators already understand their industries’ unique risks. This model should also prevent jurisdictional turf wars by allowing bodies to update their own frameworks, such as the Consumer Protection Act and the Protection of Personal Information Act, to include AI-specific guidelines. While there is no Tsar, a central AI Advisory Council will likely act as connective tissue to ensure these different regulators stay aligned and share technical resources.
The 2026 Draft National AI Policy is structured around five strategic pillars designed to transition South Africa from a consumer of global AI to a leading, sovereign developer. These pillars specifically address the country’s unique socio-economic context: capacity and talent, responsible governance, ethics and inclusion, cultural preservation, and human-centered deployment.
The capacity and talent pillar focuses on massive skills development and local R&D. It prioritizes creating AI-ready graduates and researchers to reduce reliance on international tech firms and foster a domestic innovation ecosystem.
Responsible governance adopts a risk-based oversight model instead of rigid laws. Higher levels of scrutiny and mandatory risk management are applied to high-impact use cases, such as AI in law enforcement or hiring, while low-risk innovations are allowed to flourish with minimal interference. In Northbound Processing v SA Diamond Regulator, the standard of care for AI use in South Africa was defined. The case established that responsible governance is not just about ethics but about preventing professional negligence, and it shows that the judiciary is already enforcing AI literacy as a requirement for professional practice.
To prevent algorithmic bias, the ethics and inclusion pillar mandates that AI training datasets be representative of South African demographics. It aims to ensure that AI does not replicate or amplify historical inequalities.
Cultural preservation is a unique local priority that involves using AI to digitize and protect indigenous languages and knowledge systems, ensuring that South Africa’s rich heritage is represented in the digital age rather than marginalized by Western-centric large language models.
Human-centered deployment ensures that AI serves as a tool for human agency rather than a replacement for it. It mandates meaningful human oversight in high-stakes decisions, ensuring accountability and preventing “black box” automated systems from making life-altering choices without recourse.
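The risk-based oversight model described under the responsible governance pillar can be sketched as a simple tiering rule: high-impact use cases attract mandatory risk management and human oversight, while low-risk ones face minimal interference. The tier names, domain list and rules below are assumptions for illustration only, not the policy’s actual classification scheme.

```python
# Illustrative sketch of a risk-based oversight tiering rule of the kind the
# Draft Policy describes. Domains and tier labels are hypothetical assumptions.
HIGH_IMPACT_DOMAINS = {"law enforcement", "hiring", "healthcare", "finance"}

def oversight_requirement(domain: str, has_human_oversight: bool) -> str:
    """Map a use case to an illustrative oversight tier."""
    if domain in HIGH_IMPACT_DOMAINS:
        if not has_human_oversight:
            return "non-compliant: meaningful human oversight required"
        return "high scrutiny: mandatory risk management"
    return "low scrutiny: minimal interference"

print(oversight_requirement("hiring", True))
print(oversight_requirement("hiring", False))
print(oversight_requirement("agritech demo", False))
```

The point of the sketch is the asymmetry: the same missing safeguard that makes a hiring system non-compliant leaves a low-risk application untouched, which is how proportionate, risk-based regulation differs from a blanket rule.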
Key Regulatory Challenges
South Africa faces unique hurdles in its quest for AI leadership. While the 2026 Draft Policy is ambitious, several structural and technical challenges could impede its success. Chief among them is the imported-bias problem: AI models are often trained on data from the Global North and may be unsuitable for use in South Africa, because local dialects and the nuances of South Africa’s 12 official languages can be misinterpreted. Algorithms for recruitment, policing or healthcare may produce biased results because they were trained on populations that do not reflect South Africa’s racial and socio-economic makeup.
To counter imported bias, South Africa is pushing for algorithmic sovereignty. This involves prioritizing the collection and curation of Africanized datasets to train models that understand local contexts, and reducing reliance on foreign cloud providers so that sensitive national data and strategic AI assets remain under local jurisdiction and control.
There is also a tension between high-level policy and on-the-ground realities. AI requires stable computing power, yet ongoing energy instability and the high cost of high-speed connectivity in rural areas create a digital divide. And while the policy mandates capacity and talent, there is currently a shortage of local AI engineers and data scientists. This leads to a brain drain in which top South African talent is recruited by global firms, leaving the local ecosystem struggling to implement its own strategies.
South Africa vs The World
South Africa’s approach to AI governance is increasingly defined as a middle road that balances the need for innovation with the protection of fundamental rights. By opting for a sector-specific, multi-regulator model rather than a single overarching law, the country seeks to foster a flexible environment while aligning with broader continental goals.
The Middle Path: Global Comparison
South Africa’s regulatory philosophy occupies a unique space between the world’s major AI frameworks. While the EU AI Act is a mandatory, horizontal law that classifies AI systems by risk across all sectors, South Africa avoids this perceived over-regulation. Instead, it uses existing laws like POPIA to manage risk proportionately within specific industries. Unlike the more hands-off, laissez-faire models of India or the United States, which prioritize private-sector-led innovation, South Africa’s 2026 Draft Policy emphasizes human-centric oversight and government-guided ethics to address historical socio-economic inequalities.
Alignment with the African Union (AU)
South Africa’s policy is intentionally designed to domesticate the AU Continental AI Strategy endorsed in July 2024. This alignment ensures that South Africa remains a leader in a unified African digital market. Both frameworks prioritize African data sovereignty, the preservation of indigenous cultures and the creation of localized datasets to combat imported bias from the Global North. South Africa serves as a key implementer of the AU’s goal to build a national pool of AI talent, leveraging local institutions like the AI Institute of South Africa (AIISA) to anchor regional research. The move toward a risk-based model in the 2026 Draft Policy directly mirrors the AU’s recommendation that member states develop agile regulatory instruments that do not stifle emerging economies.
Conclusions
The shift in South Africa’s digital landscape is definitive: AI regulation is no longer a futuristic concept, it is a present reality. By integrating oversight into the existing fabric of POPIA, the CPA and the Cybercrimes Act, the government has moved away from a passive wait-and-see stance toward a robust, structured framework.
This multi-regulator, sector-specific model ensures that while the country pursues its goal of continental leadership, it does so with guardrails tailored to its unique socio-economic challenges.
For South African businesses and innovators, 2026 marks the end of regulatory ambiguity. Success now depends on proactive governance and active engagement with the legislative process. The 60-day public comment period for the Draft National AI Policy is a critical window to help shape a framework that balances global competitiveness with local inclusivity.
Bibliography
Book
Patil, M “Enterprise strategy for human centered AI”.
Case
Mavundla v MEC: Dept of Co-operative Government KZN [2025].
Northbound Processing v SA Diamond Regulator [2025].
Journal Article
Wendy Tembedza and Learne Mostert, “Artificial Intelligence has POPIA implications”.
Tayla Pinto, “How POPIA affects AI”, [2025].
Nemko Digital, “AI regulation on the horizon”, [2025].
Despina Lazanakis, “South Africa: AI Policy moves towards approval”, [2025].
David Mhlanga, “AI governance frameworks, ethical considerations and lessons from fintech inclusion models in Africa and Asia”, [2025].
UNESCO, “South Africa: Artificial Intelligence Readiness assessment report”, [2025].
Nemko Digital, “AI regulation in South Africa: Laws and Compliance Guide,” [2025].
Betty Wangari, “South Africa’s draft AI Policy expected to be gazetted”, [2026].
African Union, “Continental Artificial Intelligence Strategy,” [2024].
DCDT, “Briefing on the draft National AI Policy framework”, [2026].
CSIR, “AI for social good ethics report”, [2025].
Legislation
Cybercrimes Act 19 of 2020.
Consumer Protection Act 68 of 2008.
Protection of Personal Information Act 4 of 2013.