Authored By: Majoyeogbe Boluwatife Ebenezer
Obafemi Awolowo University
AI is currently the major buzz everywhere, with some warning of an oncoming "winter" and of risks that may pose a threat to humans. Just as music has a rhythm that enriches its harmony, AI is advancing at a rapid tempo, accompanied by efforts to reduce the hazardous risks it could create. Recently, the Center for Digital Policy raised an alarm over the need to suspend Grok, citing ideological bias, OMB, and related concerns. This is a progressive step, because generative AI does not come without its peculiar dangers. Several AI ethics guidelines have emerged and continue to undergo review, such as the EU AI Act, the UNESCO Recommendation on the Ethics of AI, and the U.S. National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework. This article offers a quick overview of the last of these, the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0).
The U.S. National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF) is a voluntary framework designed to help organizations better manage risks associated with AI systems. It aims to improve the design, development, use, and evaluation of AI products and systems by incorporating trustworthiness principles such as reliability, fairness, transparency, security, privacy, and accountability. The AI RMF was developed collaboratively with public and private sector input and released in January 2023.1
To start with, the AI RMF defines an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions that influence real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
AIMS AND SCOPE
As directed by the National Artificial Intelligence Initiative Act of 2020, the goal of the AI RMF is to offer a resource for managing the various risks of AI and to promote the trustworthy and responsible development and use of AI systems. The framework is intended to be voluntary, non-sector-specific, use-case agnostic, rights-preserving, and flexible for organizations of all sizes and across all sectors.
The objectives and scope are further explained as follows:
a) Helping organizations identify, assess, and mitigate AI risks to protect individuals, communities, society, and the environment.
b) Promoting the development of AI systems that are trustworthy, safe, reliable, secure, accountable, transparent, privacy-enhanced, fair, and free from harmful bias.
c) Providing a flexible, voluntary, non-sector-specific framework that can be adapted by organizations of all sizes and across all sectors to address AI risks contextually and throughout the AI lifecycle.
d) Encouraging organizational integration of AI risk management alongside broader enterprise risk practices to achieve effective governance and accountability.
e) Supporting responsible AI practices that align with human-centric values, social responsibility, sustainability, and equity.
f) Enhancing transparency, documentation, testing, evaluation, verification, and validation processes to foster increased trustworthiness.
g) Offering practical guidance through four core functions (Govern, Map, Measure, and Manage) to operationalize AI risk management activities.
h) Addressing emerging and evolving AI risks dynamically, recognizing the unique challenges of AI, such as complex socio-technical interactions, emergent behavior, and evolving contexts.3
Framework
The AI RMF structure is divided into two parts. The first part covers Framing Risks, Audience, and AI Risk and Trustworthiness.
Framing Risks: This addresses how organizations can frame risks related to AI. The AI RMF emphasizes flexibility to adapt to emerging risks, especially where impacts are uncertain or difficult to measure. Challenges include the lack of reliable metrics, risks from third-party data and systems, differences across the AI lifecycle, real-world deployment uncertainties, limited transparency, and difficulties in setting human baseline comparisons.
Risk tolerance varies across organizations, sectors, and societies, influenced by legal, regulatory, and cultural contexts. Since complete elimination of risk is impossible, organizations must prioritize risks, focusing on those with the greatest potential harms. In high-risk cases, development or deployment should be paused until risks are controlled.
Audience: Identifying and managing AI risks and impacts requires diverse perspectives and actors across the AI lifecycle. The AI RMF is designed for use by AI actors involved in design, development, deployment, evaluation, and use of AI systems, emphasizing diversity in expertise, experience, and backgrounds.
The OECD developed a framework classifying AI lifecycle activities into socio-technical dimensions relevant for policy and governance, which NIST adapted to highlight the importance of test, evaluation, verification, and validation (TEVV). These dimensions include Application Context, Data and Input, AI Model, and Task and Output. AI actors within these areas drive risk management and form the main audience for the AI RMF.4
TEVV processes, when performed regularly, provide insights aligned with technical, societal, legal, and ethical standards. They also help anticipate impacts, assess emergent risks, and enable midcourse remediation and post hoc risk management.
AI Risk & Trustworthiness: This analyzes and outlines the characteristics of trustworthy AI systems, such as being valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias management.5
The second part comprises the “Core” of the framework, which describes four specific functions to help organizations address the risks of AI systems in practice. These functions are:
Govern
The Govern function builds a culture of AI risk management by aligning policies, processes, and values across the organization. It integrates compliance, accountability, and lifecycle oversight, linking technical design to organizational priorities. Through leadership, documentation, and continuous governance, it strengthens transparency, responsibility, and risk tolerance to ensure trustworthy AI systems;
Map
The Map function frames AI risks by analyzing context, interdependencies, and uncertainties across the lifecycle. It gathers diverse perspectives to identify limitations, assumptions, and potential impacts. This enables informed decisions on appropriateness, model management, and deployment, guiding the prevention of negative impacts and supporting the Measure, Manage, and Govern functions for trustworthy AI;6
Measure
The Measure function uses qualitative, quantitative, and mixed methods to assess AI risks, trustworthiness, and social impact. It emphasizes rigorous testing, benchmarking, documentation, and independent review to reduce bias. Measurement guides management decisions, ensures transparency, and supports continuous monitoring, enabling organizations to evaluate risks, impacts, and system reliability throughout the AI lifecycle; and
Manage
The Manage function allocates resources to mapped and measured risks, guided by the Govern function. It includes risk treatment, recovery, communication, and continuous monitoring. Drawing on contextual information and systematic documentation, it reduces failures, enhances accountability, and ensures transparency. Ongoing assessment and improvement strengthen the ability to manage risks across deployed AI systems.7
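The four core functions describe organizational processes rather than software, but as a purely illustrative sketch (all class and method names below are hypothetical, not part of any NIST artifact), one could model a minimal risk register that cycles through Map, Measure, and Manage under a Govern-set risk tolerance:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """A single identified AI risk (hypothetical structure for illustration)."""
    description: str
    severity: int = 0   # filled in by the Measure step (e.g., a 1-5 score)
    treatment: str = "" # filled in by the Manage step

@dataclass
class RiskRegister:
    """Toy register cycling through the AI RMF core functions."""
    tolerance: int = 3                  # Govern: organizational risk tolerance
    risks: list = field(default_factory=list)

    def map(self, description: str) -> Risk:
        # Map: identify a risk in context and record it.
        risk = Risk(description)
        self.risks.append(risk)
        return risk

    def measure(self, risk: Risk, severity: int) -> None:
        # Measure: assess the risk with a simple severity score.
        risk.severity = severity

    def manage(self) -> list:
        # Manage: flag risks exceeding tolerance and assign a treatment,
        # echoing the framework's guidance to pause high-risk deployment.
        flagged = [r for r in self.risks if r.severity > self.tolerance]
        for r in flagged:
            r.treatment = "pause deployment until mitigated"
        return flagged

register = RiskRegister(tolerance=3)
bias = register.map("harmful bias in training data")
register.measure(bias, severity=5)
flagged = register.manage()
print([r.description for r in flagged])  # → ['harmful bias in training data']
```

This is only a mental model: in practice each function involves documentation, stakeholder consultation, and continuous review rather than a single pass through a loop.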
Conclusion
In conclusion, the AI RMF is one framework among many for managing AI risk, built on trustworthiness and the responsible development and use of AI systems.
References:
1 U.S. National Institute of Standards and Technology, AI Risk Management Framework (AI RMF 1.0) (January 26, 2023) https://www.nist.gov/itl/ai-risk-management-framework accessed 30 August 2025
2 U.S. Department of Commerce, Artificial Intelligence Risk Management Framework (AI RMF 1.0) (National Institute of Standards and Technology, NIST AI 100-1, January 2023) https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf accessed 30 August 2025
3 U.S. Department of Commerce, Artificial Intelligence Risk Management Framework (AI RMF 1.0) (National Institute of Standards and Technology, NIST AI 100-1, January 2023) https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf accessed 30 August 2025
4 U.S. Department of Commerce, Artificial Intelligence Risk Management Framework (AI RMF 1.0) (National Institute of Standards and Technology, NIST AI 100-1, January 2023) https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf accessed 30 August 2025
5 Wiz Experts Team, 'NIST AI Risk Management Framework: A tl;dr' (Wiz, 31 January 2025) https://www.wiz.io/academy/nist-ai-risk-management-framework accessed 31 August 2025
6 NIST, 'AI RMF Core' (NIST Trustworthy & Responsible AI Resource Center) https://airc.nist.gov/airmf-resources/airmf/5-sec-core/ accessed 31 August 2025
7 U.S. Department of Commerce, Artificial Intelligence Risk Management Framework (AI RMF 1.0) (National Institute of Standards and Technology, NIST AI 100-1, January 2023) https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf accessed 31 August 2025