Authored By: Dennis Okile
Kenyatta University Parklands
Introduction
Artificial Intelligence (AI) is transforming society, the economy, and governance at an unprecedented pace.1 From healthcare diagnostics to autonomous vehicles and personalised recommendations, AI systems offer immense benefits but also pose significant ethical challenges, including bias amplification, privacy erosion, lack of transparency, and potential misuse.2 As AI capabilities advance toward more general and autonomous systems, the need for robust ethical frameworks has become paramount.3 AI ethics encompasses the principles guiding the design, development, deployment, and governance of AI to ensure it aligns with human values, respects rights, and promotes societal good.4
While no single universal set of pillars exists, a consensus has emerged from global frameworks, including those from UNESCO, the OECD, the EU, and scholarly analyses.5 A landmark review of 84 ethical guidelines identified 11 clustered principles: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity. Drawing on these, this article focuses on five widely accepted core pillars that appear consistently across major frameworks as of 2025: Transparency and Explainability; Fairness and Non-Discrimination; Accountability and Responsibility; Privacy and Data Protection; and Robustness, Safety, and Security.6 These pillars provide a practical foundation for trustworthy AI, influencing instruments such as the EU AI Act, the OECD AI Principles, and UNESCO’s Recommendation on the Ethics of AI.7 The article explores each pillar’s definition, importance, interconnections, implementation challenges, and real-world applications, emphasising their role in fostering ethical AI globally.
The Evolution of AI Ethics Frameworks
Ethics frameworks have proliferated since the mid-2010s, driven by concerns over biases in systems such as COMPAS (recidivism prediction) and facial recognition errors that disproportionately affect marginalised groups. Early efforts, such as the 2017 Asilomar AI Principles (23 guidelines focusing on safety, value alignment, and long-term risks), laid the groundwork for subsequent research and value-setting efforts. International bodies soon followed: the OECD’s 2019 (updated 2024) AI Principles emphasise inclusive growth, human-centred values, transparency, robustness, and accountability. UNESCO’s 2021 Recommendation on the Ethics of AI, the first global standard, centres on human rights with 10 core principles, including proportionality, safety, privacy, and multi-stakeholder governance. The EU’s Ethics Guidelines for Trustworthy AI (2019) outline seven key requirements: human agency, technical robustness, privacy, transparency, diversity and non-discrimination, societal well-being, and accountability.8 These informed the EU AI Act (2024), a risk-based regulation embedding ethical considerations.9
As of 2025, over 1,000 policy initiatives align with these, reflecting broad acceptance of the five core pillars discussed here. These frameworks shift from high-level declarations to actionable tools, like assessment checklists and governance mechanisms, promoting “ethics by design.”10
Pillar 1: Transparency and Explainability
Transparency requires clear disclosure of AI operations, data sources, limitations, and decision processes. Explainability goes further, ensuring decisions are understandable to stakeholders, from users to regulators.11 This pillar addresses “black-box” issues in complex models like deep neural networks, where opacity hinders trust and auditing. Transparency builds trust, enables bias detection, and supports accountability.12 Without it, users cannot challenge harmful decisions, undermining autonomy.13
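One way to see how explainability tooling works in practice is through a post-hoc technique such as permutation importance: shuffle one feature across rows and measure how much a model's accuracy drops. The sketch below is illustrative only; the "opaque" loan scorer, its weights, and the applicant data are all hypothetical, standing in for a trained black-box model.

```python
import random

# Hypothetical "opaque" loan scorer; in practice this would be a trained
# model whose internals we cannot inspect (weights here are illustrative).
WEIGHTS = {"income": 0.6, "debt": -0.3, "tenure": 0.1}

def model(row):
    score = sum(WEIGHTS[k] * row[k] for k in WEIGHTS)
    return 1 if score > 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature, trials=50, seed=0):
    """Mean drop in accuracy when one feature is shuffled across rows."""
    rng = random.Random(seed)
    base = accuracy(rows, labels)
    total_drop = 0.0
    for _ in range(trials):
        values = [r[feature] for r in rows]
        rng.shuffle(values)
        permuted = [dict(r, **{feature: v}) for r, v in zip(rows, values)]
        total_drop += base - accuracy(permuted, labels)
    return total_drop / trials

# Illustrative applicants; labels are the model's own outputs, so base
# accuracy is 1.0 and any drop is attributable to the shuffled feature.
rows = [{"income": i / 10, "debt": (10 - i) / 10, "tenure": 0.5}
        for i in range(10)]
labels = [model(r) for r in rows]

for feature in WEIGHTS:
    print(feature, round(permutation_importance(rows, labels, feature), 3))
```

Here "tenure" is constant across applicants, so shuffling it changes nothing and its importance is zero, while "income" and "debt" show positive drops: a rudimentary explanation of which inputs actually drive the opaque decision.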
Pillar 2: Fairness and Non-Discrimination
Fairness ensures AI treats individuals equitably, avoiding bias or discrimination based on protected characteristics such as race, gender, or age. Biases arise from skewed training data (e.g., data reflecting historical inequalities) or proxy variables. Cases like Amazon’s biased hiring tool, which penalised women, illustrate the risks. The major frameworks all demand fairness: UNESCO promotes social justice and inclusivity, the OECD stresses fairness, and the EU requires diversity and non-discrimination. Common metrics include demographic parity, equal opportunity, and disparate impact assessments. Mitigation can occur pre-processing (debiasing data), in-processing (e.g., adversarial training), or post-processing (adjusting decision thresholds). Key challenges include conflicting fairness definitions (group versus individual) and cultural variation; diverse teams and inclusive datasets help. Fairness intersects with transparency (revealing biases) and privacy (protecting sensitive attributes), prevents the perpetuation of inequalities, and promotes inclusive benefits.14
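The group metrics named above can be computed directly from predictions and group membership. A minimal sketch, with entirely hypothetical screening outcomes and two illustrative groups "A" and "B":

```python
def positive_rate(preds, groups, g):
    """Share of group g receiving the positive (favourable) prediction."""
    members = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(members) / len(members)

def demographic_parity_diff(preds, groups):
    """Difference in positive-prediction rates between the two groups."""
    return positive_rate(preds, groups, "A") - positive_rate(preds, groups, "B")

def disparate_impact_ratio(preds, groups):
    """Ratio of positive rates; the 'four-fifths rule' flags values below 0.8."""
    a = positive_rate(preds, groups, "A")
    b = positive_rate(preds, groups, "B")
    return min(a, b) / max(a, b)

# Hypothetical CV-screening outcomes: 1 = shortlisted, 0 = rejected.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(round(demographic_parity_diff(preds, groups), 3))  # group A favoured
print(disparate_impact_ratio(preds, groups))             # fails the 0.8 rule
```

In this toy data group A is shortlisted at 0.8 versus 0.2 for group B, giving a disparate impact ratio of 0.25, well below the 0.8 threshold that many audits treat as a red flag.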
Pillar 3: Accountability and Responsibility
Accountability assigns clear responsibility for AI outcomes, ensuring mechanisms for oversight, redress, and liability.15 Who is liable when AI errs: the developer, the deployer, or the user? This pillar closes responsibility gaps in autonomous systems. UNESCO emphasises auditability and oversight, the OECD requires risk management, and the EU demands accountability. Tools include impact assessments, governance boards, and decision logging; proposals range from hybrid liability regimes to mandatory insurance. Challenges include distributed responsibility in complex supply chains and the attribution of unforeseeable harms. Accountability deters misuse, enables recourse (e.g., against discriminatory decisions), and aligns AI with societal norms.16 It reinforces the other pillars by enforcing compliance.
Pillar 4: Privacy and Data Protection
Privacy safeguards personal data, ensuring consent, minimisation, and security despite AI’s data-hungry nature.17 Risks include re-identification and inferential privacy breaches. UNESCO prioritises privacy throughout the lifecycle, the OECD includes it among human-centred values, and the EU aligns with the GDPR. Techniques include differential privacy, federated learning, and anonymisation. Challenges include balancing utility (more data improves models) with protection, and evolving threats such as model inversion attacks.18 Privacy links to fairness (preventing misuse of sensitive data) and transparency (disclosing how data is used). It upholds dignity and prevents surveillance abuses.
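Of the techniques just listed, differential privacy is the most readily illustrated. A minimal sketch of the classic Laplace mechanism for a count query, assuming hypothetical health records (all field names and values are invented for illustration):

```python
import random

def laplace_noise(scale, rng):
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    return rng.expovariate(1 / scale) - rng.expovariate(1 / scale)

def dp_count(records, predicate, epsilon, rng):
    """Epsilon-DP count query: a count has sensitivity 1 (one person can
    change it by at most 1), so Laplace(1/epsilon) noise suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Hypothetical patient records (all values illustrative).
records = [
    {"age": 34, "diagnosis": "flu"},
    {"age": 61, "diagnosis": "flu"},
    {"age": 45, "diagnosis": "cold"},
    {"age": 52, "diagnosis": "flu"},
    {"age": 29, "diagnosis": "cold"},
]

rng = random.Random(42)
noisy = dp_count(records, lambda r: r["diagnosis"] == "flu", epsilon=1.0, rng=rng)
print(round(noisy, 2))  # close to the true count of 3, but randomised
```

The smaller epsilon is, the larger the injected noise and the stronger the privacy guarantee, which makes the utility-versus-protection trade-off noted above concrete and tunable.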
Pillar 5: Robustness, Safety, and Security
Robustness ensures AI remains reliable and resilient to errors, attacks, and distribution drift; safety prevents harm; security protects against malicious exploitation. Vulnerabilities include adversarial examples and data poisoning.19 UNESCO calls for avoiding unwanted harms, the OECD demands robustness and security, and the EU requires resilience. Approaches include adversarial training, uncertainty quantification, and red-teaming. Challenges include accuracy-robustness trade-offs and scaling these methods to large models.20 This pillar underpins safety-critical applications (e.g., medical AI) and intersects with the others; robustness, for instance, blocks attacks that would inject bias.
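The adversarial-example vulnerability mentioned above can be demonstrated with the fast gradient sign method (FGSM) on a hand-written logistic classifier. Everything here is illustrative: the weights and input are invented, but the gradient step is the genuine FGSM idea of nudging each input dimension in the direction that most increases the loss.

```python
import math

# Illustrative logistic classifier: p(y=1 | x) = sigmoid(w.x + b).
w = [2.0, -3.0, 1.5]
b = 0.1

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, eps):
    """Fast gradient sign method. For logistic loss, dL/dx_i = (p - y) * w_i,
    so stepping eps in the sign of that gradient maximally increases the loss
    under an L-infinity budget of eps."""
    p = predict(x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

x = [0.5, -0.2, 0.3]   # clean input, confidently classified as class 1
y = 1
x_adv = fgsm(x, y, eps=0.4)

print(round(predict(x), 3), round(predict(x_adv), 3))
```

A perturbation of at most 0.4 per dimension flips the prediction from well above 0.5 to well below it, which is why robustness work pairs such attacks with defences like the adversarial training named above.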
Challenges in Implementing the Core Pillars of AI Ethics
While the five core pillars (Transparency and Explainability, Fairness and Non-Discrimination, Accountability and Responsibility, Privacy and Data Protection, and Robustness, Safety, and Security) offer a clear and compelling roadmap for ethical AI, their practical implementation faces significant hurdles.21 These challenges are not insurmountable, but addressing them demands sustained commitment, innovation, and collaboration.22
First, measurement conflicts and definitional ambiguity persist across pillars. Fairness, for instance, lacks a single universally accepted metric: demographic parity may satisfy one stakeholder group while violating individual equality for another. Similarly, transparency requirements can clash with intellectual property protections, forcing developers to withhold details that could enhance explainability. Cultural and contextual differences further complicate matters: what constitutes “fair” or “private” varies across jurisdictions, making global standards elusive.23
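The metric conflict described here can be made concrete with a tiny numeric example (all predictions and labels are hypothetical): a classifier can satisfy demographic parity for two groups while violating equal opportunity between them.

```python
def positive_rate(preds):
    """Share of all members receiving the positive prediction."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Share of genuinely qualified members (label 1) who are selected."""
    qualified = [p for p, y in zip(preds, labels) if y == 1]
    return sum(qualified) / len(qualified)

# Hypothetical predictions and ground truth for two groups.
preds_a, labels_a = [1, 0, 1, 0], [1, 1, 0, 0]
preds_b, labels_b = [1, 1, 0, 0], [1, 0, 0, 0]

# Demographic parity holds: both groups are selected at the same rate...
print(positive_rate(preds_a), positive_rate(preds_b))    # 0.5 0.5
# ...yet equal opportunity fails: qualified members of group A are found
# only half as often as qualified members of group B.
print(true_positive_rate(preds_a, labels_a),
      true_positive_rate(preds_b, labels_b))             # 0.5 1.0
```

No threshold adjustment can always satisfy both metrics at once on data like this, which is exactly why different stakeholders, each holding a reasonable definition of fairness, can disagree about the same system.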
Second, resource intensity poses a barrier, particularly for smaller organizations and developers in the Global South. Conducting rigorous fairness audits, robustness testing, privacy impact assessments, and ongoing monitoring requires expertise, computational power, and funding that are often concentrated in large tech firms. This risks creating an uneven playing field where only well-resourced entities can meaningfully comply, potentially stifling innovation and diversity in AI development.24
Third, the rapid evolution of generative and foundation models introduces novel risks that strain existing pillar-based approaches. Deepfakes challenge transparency and robustness; hallucinated outputs undermine explainability; and massive training datasets amplify privacy and bias concerns. Current tools and metrics, designed primarily for narrower systems, often fall short when applied to multimodal, agentic, or continually learning models.25
Fourth, enforcement and governance gaps remain. While frameworks like the EU AI Act and UNESCO’s Recommendation provide normative guidance, binding mechanisms are limited.26 Accountability is diluted in complex supply chains where responsibility is distributed across data providers, model developers, deployers, and end-users. Without robust international coordination, regulatory arbitrage and fragmented standards threaten to undermine ethical progress.
Finally, human and societal factors must not be overlooked. Over-reliance on technical fixes risks ignoring deeper issues of power asymmetry, commercial incentives that prioritise speed over safety, and insufficient inclusion of marginalised voices in ethical deliberations. Ethics washing (superficial adherence to principles without substantive change) further erodes public trust. These challenges are real and pressing, yet they also represent opportunities for advancement. History shows that transformative technologies, from aviation to biotechnology, have navigated similar ethical terrain through iterative improvement, stakeholder dialogue, and adaptive governance. The very existence of converging global frameworks demonstrates growing consensus and political will.27
Conclusion: An Optimistic Path
The core pillars of AI ethics are not mere aspirational ideals; they are practical, evidence-based foundations for building artificial intelligence that genuinely serves humanity. Far from being obstacles to innovation, these pillars, when thoughtfully implemented, enhance trust, reduce long-term risks, and unlock broader societal acceptance and adoption of AI technologies. The trajectory is encouraging. Convergence around shared principles is accelerating: from UNESCO’s global standard to the OECD’s influential recommendations, from the EU’s pioneering risk-based regulation to growing national strategies worldwide. Open-source toolkits, ethical impact assessments, and multi-stakeholder initiatives are democratising access to responsible AI practices. Younger generations of researchers and engineers increasingly view ethics as integral to good engineering, not an afterthought. By embracing these pillars proactively, we have the opportunity to shape a future where AI amplifies human potential, reduces inequality, advances scientific discovery, and addresses pressing global problems, from climate change to healthcare access, with unprecedented effectiveness. This is not a utopian vision; it is an achievable one. The choice, and the opportunity, are ours to seize.
Bibliography:
- European Commission, ‘Ethics Guidelines for Trustworthy AI’ (High-Level Expert Group on Artificial Intelligence, 2019) <https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf>
- European Union, ‘Regulation (EU) 2024/1689’ (EUR-Lex, 2024) <https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng>
- Floridi L and Cowls J, ‘A Unified Framework of Five Principles for AI in Society’ (2019) 1 Harvard Data Science Review <https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/8>
- Future of Life Institute, ‘Asilomar AI Principles’ (futureoflife.org, 2017) <https://futureoflife.org/open-letter/ai-principles> accessed 31 December 2025
- Stapf-Fine H and others, ‘Policy Paper on the Asilomar Principles on Artificial Intelligence’ (ResearchGate, 28 December 2018) <https://www.researchgate.net/publication/329963051_Policy_Paper_on_the_Asilomar_Principles_on_Artificial_Intelligence>
- Jobin A, Ienca M and Vayena E, ‘The Global Landscape of AI Ethics Guidelines’ (2019) 1 Nature Machine Intelligence 389 <https://www.nature.com/articles/s42256-019-0088-2>
- OECD, ‘State of Implementation of the OECD AI Principles’ (OECD, 2024) <https://www.oecd.org/en/publications/state-of-implementation-of-the-oecd-ai-principles_1cd40c44-en.html>
- UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (UNESCO, 2021) <https://unesdoc.unesco.org/ark:/48223/pf0000380455>
- ——, ‘Recommendation on the Ethics of Artificial Intelligence’ (unesco.org, 16 May 2023) <https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence>
1 Jobin A, Ienca M and Vayena E, ‘The Global Landscape of AI Ethics Guidelines’ (2019) 1 Nature Machine Intelligence 389 <https://www.nature.com/articles/s42256-019-0088-2>
2 ibid
3 ibid
4 ibid
5 ibid
6 ibid
7 ibid
8 European Commission, ‘Ethics Guidelines for Trustworthy AI’ (High-Level Expert Group on Artificial Intelligence, 2019) <https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf>
9 European Union, ‘Regulation (EU) 2024/1689’ (EUR-Lex, 2024) <https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng>
10 UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (UNESCO, 2021) <https://unesdoc.unesco.org/ark:/48223/pf0000380455>
11 Future of Life Institute, ‘Asilomar AI Principles’ (futureoflife.org, 2017) <https://futureoflife.org/open-letter/ai-principles> accessed 31 December 2025
12 Floridi L and Cowls J, ‘A Unified Framework of Five Principles for AI in Society’ (2019) 1 Harvard Data Science Review <https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/8>
13 UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (UNESCO, 2021) <https://unesdoc.unesco.org/ark:/48223/pf0000380455>
14 Floridi L and Cowls J, ‘A Unified Framework of Five Principles for AI in Society’ (2019) 1 Harvard Data Science Review <https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/8>
15 ibid
16 UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (UNESCO, 2021) <https://unesdoc.unesco.org/ark:/48223/pf0000380455>
17 Floridi L and Cowls J, ‘A Unified Framework of Five Principles for AI in Society’ (2019) 1 Harvard Data Science Review <https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/8>
18 European Commission, ‘Ethics Guidelines for Trustworthy AI’ (High-Level Expert Group on Artificial Intelligence, 2019) <https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf>
19 Floridi L and Cowls J, ‘A Unified Framework of Five Principles for AI in Society’ (2019) 1 Harvard Data Science Review <https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/8>
20 ibid
21 OECD, ‘State of Implementation of the OECD AI Principles’ (OECD, 2024) <https://www.oecd.org/en/publications/state-of-implementation-of-the-oecd-ai-principles_1cd40c44-en.html>
22 UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (unesco.org, 16 May 2023) <https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence>
23 ibid
24 ibid
25 OECD, ‘State of Implementation of the OECD AI Principles’ (OECD, 2024) <https://www.oecd.org/en/publications/state-of-implementation-of-the-oecd-ai-principles_1cd40c44-en.html>
26 ibid
27 UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (unesco.org, 16 May 2023) <https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence>





