February 24, 2026

Governance of Agentic Artificial Intelligence Systems


Agentic artificial intelligence (AI) systems present unique opportunities and challenges for organizations. Unlike traditional AI systems that simply generate an output, agentic AI systems can autonomously execute tasks with limited human involvement. However, because of their autonomous nature, agentic AI systems also present heightened AI governance considerations. Below, we describe core components of an agentic AI governance program to help mitigate risks as organizations rapidly develop and deploy such systems in the marketplace.

What is Agentic AI?

An agentic AI system is a type of AI system that can autonomously plan and execute multi-step tasks to achieve a goal. Instead of merely suggesting content, these systems act autonomously on the organization’s behalf. Agentic AI systems can be built on top of small, large, or multimodal language models to make decisions and execute tasks. Such AI systems (a) are trained to plan and reason by outputting the steps necessary to complete a task; (b) take action and interact with other AI systems by calling tools to execute tasks; and (c) utilize standardized ways to communicate with tools and other agents, such as the Model Context Protocol and Agent2Agent Protocol.
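The plan-then-act pattern described above can be sketched in a few lines. This is a hypothetical illustration only: in a real system a language model would produce the plan, and the tool names (`book_meeting`, `send_email`) and `plan_steps` helper are assumptions, not a real API.

```python
# Illustrative agentic loop: a plan is produced for a goal, then each
# step is executed by calling a tool on the organization's behalf.

TOOLS = {
    "book_meeting": lambda who, when: f"booked {who} at {when}",
    "send_email": lambda to, body: f"emailed {to}",
}

def plan_steps(goal: str) -> list[dict]:
    # In a real system an LLM would generate this plan; here it is hard-coded.
    return [
        {"tool": "book_meeting", "args": {"who": "alice", "when": "10:00"}},
        {"tool": "send_email", "args": {"to": "alice", "body": "See you at 10."}},
    ]

def run_agent(goal: str) -> list[str]:
    results = []
    for step in plan_steps(goal):          # (a) plan
        tool = TOOLS[step["tool"]]         # (b) act by calling a tool
        results.append(tool(**step["args"]))
    return results
```

The governance challenge flows directly from this structure: each loop iteration is an action taken without a human reviewing the intermediate output.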

Framework for Governing Agentic AI Systems

To govern agentic AI systems, organizations should use their existing comprehensive AI governance framework with updates as described below. That framework may consist of certain core components, such as (a) establishing an AI governance team; (b) applying appropriate data governance techniques; (c) evaluating and implementing compliance with applicable laws; (d) identifying and documenting AI risks through an AI impact assessment; (e) implementing mitigation measures to reduce or eliminate risks; and (f) documenting policies and procedures to demonstrate accountability. Below, we describe how organizations may address these core components for agentic AI systems.

AI Governance Team

While agentic AI systems are intended to operate autonomously, it is important for human stakeholders within and outside the organization to properly oversee the agents. Within the organization, key stakeholders include: (i) the decisionmakers that adopt AI policies, define the agent’s goals, prohibit certain use cases, and implement a risk management framework and escalation process; (ii) the product teams that interpret and implement the decisionmakers’ directives, including conducting pre-deployment testing and post-deployment continuous monitoring and educating users on the proper uses of such systems; (iii) the cybersecurity and data privacy teams that integrate agentic AI use cases into the organization’s data security and privacy procedures, incident-response plans and red-teaming processes; and (iv) the frontline employees who use the agentic AI system’s output who can identify and escalate issues observed with the agentic AI system.

The company deploying the agentic AI system will also need to work with external parties along the AI value chain, such as conducting due diligence and entering into appropriate contracts with the AI providers and developing terms of use and acceptable use policies for external agentic AI users.

Data Governance

Organizations should ensure that the agentic AI system is trained and/or prompted using representative datasets so that its autonomous actions are accurate for the given use case and minimize bias. This is particularly important for agentic AI systems because they make decisions based on their training data and prompt with limited-to-no human interpretation of the output. In addition, organizations deploying agentic AI systems should consider applying the principle of least privilege, so that the agent, when operating autonomously, does not access systems containing sensitive data and trade secrets, which may lead to security incidents and data privacy violations.
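The principle of least privilege can be made concrete with a deny-by-default scope check: the agent is granted only the data scopes its use case requires, and everything else is refused. A minimal sketch, in which all scope and data-store names are illustrative assumptions:

```python
# Deny-by-default data access for an agent. The agent holds only the
# scopes granted at deployment time; a request for any other store
# (e.g. an HR database holding sensitive data) raises an error.

AGENT_SCOPES = {"calendar.read", "crm.read"}  # granted at deployment time

def fetch(data_store: str, required_scope: str) -> str:
    if required_scope not in AGENT_SCOPES:
        raise PermissionError(
            f"agent lacks scope {required_scope!r} for {data_store}"
        )
    return f"records from {data_store}"
```

Under this design, `fetch("crm", "crm.read")` succeeds, while `fetch("hr_db", "hr.read")` fails because the scope was never granted, limiting the blast radius of a compromised or malfunctioning agent.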

Legal Compliance

Because agentic AI systems are merely one type of AI, they can be subject to existing regulations governing AI systems. For example, an agentic AI system should not take actions that are prohibited[1] under the EU AI Act and the Texas Responsible AI Governance Act. Agentic AI systems may also trigger obligations under comprehensive AI laws, such as the EU AI Act and Colorado AI Act, depending on what decisions they are making on behalf of the organization and the context of use. Moreover, agentic AI systems that interact with individuals could trigger transparency obligations under AI, communications and chatbot laws (including companion chatbot regulations), such as making it clear to users that they are interacting with an AI system and not a human. Organizations using agentic AI may also need to comply with subject-matter-specific AI laws, such as in the context of employment decisions, algorithmic pricing, healthcare, insurance, critical infrastructure, real estate, and foundation models. Thus, before developing or deploying an agentic AI system, organizations should evaluate the context of use and which AI regulations apply to the tasks the agent will execute.

Risk Assessments

Organizations should consider assessing and documenting any risks involved with developing and/or deploying agentic AI systems. Potential risks that agentic AI systems may present include (i) executing erroneous actions (e.g., incorrectly scheduling appointments or producing flawed code); (ii) taking actions that humans did not authorize; (iii) making biased or unfair decisions; (iv) revealing or manipulating sensitive data; (v) disrupting connected systems if they are compromised or malfunction (e.g., deleting a production codebase or overwhelming external systems); (vi) making decisions without necessary domain knowledge; and (vii) optimizing decisions that could negatively impact the market because of misaligned or poorly designed reward functions. In addition, depending on the use case, agentic AI systems may trigger legally defined high-risk categories under comprehensive AI laws by taking actions involving critical infrastructure, product safety, biometric identification and surveillance, education and vocational training, employment and recruitment, essential goods, services and benefits, law enforcement and administration of justice, immigration and border control, financial and lending services, healthcare, housing, insurance and legal services.

After identifying these risks, organizations should assess the likelihood and severity of impact and implement mitigation measures to eliminate or reduce such risks commensurate with the risk score and applicable legal requirements, such as obligations applicable to high- and low-risk AI systems. This risk analysis should be documented in an AI impact assessment to address legal requirements (e.g., data protection and AI impact assessment requirements under privacy and AI laws) and demonstrate accountability if subject to a regulatory investigation.
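The likelihood-and-severity analysis above is often operationalized as a simple scoring matrix that maps each risk to a qualitative tier, which in turn drives the mitigation obligations applied. The sketch below is illustrative only; the 1–5 scales and tier thresholds are assumptions, not drawn from any law or framework.

```python
# Illustrative likelihood x severity risk scoring. Each identified risk
# is scored and mapped to a tier; the tier determines the rigor of the
# mitigation measures documented in the AI impact assessment.

def risk_score(likelihood: int, severity: int) -> int:
    assert 1 <= likelihood <= 5 and 1 <= severity <= 5
    return likelihood * severity

def risk_tier(score: int) -> str:
    if score >= 15:
        return "high"    # e.g., require human approval and full documentation
    if score >= 6:
        return "medium"  # e.g., monitoring plus periodic review
    return "low"         # e.g., standard logging
```

For instance, an agent that can make irreversible healthcare decisions (likelihood 4, severity 5) would land in the high tier, while a low-stakes scheduling assistant would not.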

Mitigation Measures

The following are some of the organizational and technical measures that companies may consider implementing to mitigate risks specific to agentic AI.

  • Transparency: Organizations deploying agentic AI systems that interact with individuals outside the organization (e.g., customers) should consider informing customers that they are communicating with AI and not a human. The organization may also inform users (i) that they should confirm all information the agent provides; (ii) about the range of actions and decisions the agent is authorized to take; (iii) regarding the organization’s data-processing practices (often described in a privacy policy and/or just-in-time consent notice); and (iv) provide the contact information of a human who is responsible for the agent. In addition to the above, organizations should provide appropriate training to employees who use agentic AI within their internal job functions, as well as information regarding the agent’s capabilities and limitations and contact information to escalate malfunctions within the organization.
  • Human oversight: While agentic AI is intended to operate autonomously, humans should still place boundaries on its actions. Organizations may define important checkpoints and action boundaries that require human approval before the agentic AI system executes them. For example, human approval may be required when the agentic AI system (i) is making certain types of decisions (e.g., in the healthcare, legal or financial services context); (ii) can cause irreversible harm; or (iii) needs to take steps outside of its work scope or user-defined boundaries. Organizations should also implement measures to ensure the continued effectiveness of human oversight and mitigate the risk of alert fatigue and automation bias. This can be accomplished through training and regular audits.
  • Technical controls: Additional technical controls and processes may be warranted for agentic AI systems. During the design-and-development stage, organizations may consider (i) prompting the agentic AI system to confirm that the responses it gives are aligned with the intended design; (ii) configuring the system to require strict input formats; (iii) applying the principle of least privilege to limit the tools available to the agent; (iv) prohibiting the agent from gaining access to sensitive databases; (v) using standardized protocols; and (vi) allowing the agent to interact only with approved services. Before deploying an agentic AI system, organizations may test for overall task execution, policy compliance, whether the agent calls the right tools, robustness in real or realistic environments, and how it performs across its entire workflow when it interacts with other agents and across varied datasets. Finally, after the agentic AI system is deployed, organizations may continue to reasonably monitor and log the agent’s behavior, and intervene in real time to stop the agent if it is not performing as intended or creates unforeseen risks.
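Several of the controls above compose naturally into a single execution gate: a tool allowlist (least privilege), strict input validation, a human-approval checkpoint for irreversible actions, and an audit log. The sketch below is a minimal illustration under assumed names; the tool names, the approval rule, and the `human_approves` stand-in are hypothetical.

```python
# Illustrative execution gate combining a tool allowlist, input
# validation, a human-approval checkpoint, and audit logging.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

ALLOWED_TOOLS = {"schedule_meeting", "delete_records"}  # least privilege
IRREVERSIBLE = {"delete_records"}                       # needs human sign-off

def human_approves(action: str) -> bool:
    return False  # stand-in for a real review queue or approval UI

def execute(tool: str, args: dict) -> str:
    if tool not in ALLOWED_TOOLS:
        log.warning("blocked unapproved tool %s", tool)
        raise PermissionError(tool)
    if not isinstance(args, dict):                  # strict input format
        raise TypeError("args must be a dict")
    if tool in IRREVERSIBLE and not human_approves(tool):
        raise RuntimeError(f"{tool} requires human approval")
    log.info("executing %s with %s", tool, args)    # audit log
    return f"ran {tool}"
```

Here `execute("schedule_meeting", {"who": "alice"})` proceeds and is logged, an unapproved tool is blocked outright, and `delete_records` halts until a human signs off, illustrating how the checkpoint and logging controls work together.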

Accountability

To demonstrate accountability, organizations should consider documenting the above governance practices through appropriate policies, procedures, technical documentation and logs. In a regulatory investigation, having an auditable record can help to demonstrate that the organization acted reasonably and applied best practices. Failure to maintain such documents may suggest that the organization considered these issues only after an incident occurred, which is less persuasive when defending regulatory investigations and litigation.

* * *

For further reading on implementing governance programs for all AI systems, including sample policies, procedures and documents, Arsen Kourinian has authored Implementing a Global Artificial Intelligence Governance Program.

[1] These include using subliminal techniques to distort a person’s behavior in a harmful manner, exploiting vulnerabilities of disadvantaged groups, engaging in social scoring, creating or expanding facial recognition databases, recognizing emotions in the workplace, categorizing biometric data to infer sensitive data, engaging in predictive policing or discipline, discriminating against individuals based on protected characteristics, inciting or encouraging persons to commit physical self-harm (including suicide), harm another person, or engage in criminal activity, and executing illegal actions related to sexual abuse material.
