Contracting for Agentic AI Solutions: Shifting the Model from SaaS to Services
Takeaway: As agentic AI products shift from passive tools to autonomous actors, we see a move beyond traditional SaaS contracting models to a hybrid approach that incorporates BPO-style provisions covering service definitions, warranties, outcome-based SLAs, broader indemnification, governance and audit rights, and data ownership.
For years, contracting for generative AI (GenAI) products has largely settled into a familiar Software-as-a-Service (SaaS) model: the provider makes the GenAI product available on its platform and the customer company is responsible for how it is used. This model often makes sense when the AI product is a passive tool—a co-pilot that suggests but does not act.
Agentic AI, however, does not fit neatly into this contracting model. “Agentic AI” refers to systems that can autonomously plan and execute multi-step tasks to achieve a goal. Instead of just suggesting content, these systems take action autonomously on the company’s behalf.
Agentic AI spans a spectrum of products. At one end of the spectrum are general-purpose agentic AI tools that allow companies to develop and build their own AI agents; the company’s team has substantial ability to train, fine-tune, adapt, program, and otherwise direct those AI agents. At the other end are agentic AI solutions: products developed by providers to perform specific functions, such as handling payment inquiries from suppliers or helping employees access their benefits, where the company’s team has limited or no ability to affect how the AI agents operate. In between, there are varying levels of control and company involvement.
As an agentic AI solution shifts to acting autonomously on a company's behalf, the nature of the provider’s relationship with the company shifts from licensing a tool toward providing a service. With that change in relationship, we see the contracting model shifting from a SaaS contracting model—with limited performance guarantees and software-focused risk allocations—to a more service-oriented contracting model. That service-oriented model would require defining the service, setting guardrails and governance rights and obligations, creating incentives for proper oversight and management on each side, and allocating liability for service failures.
Fortunately, these are not wholly new issues. The business process outsourcing (BPO) industry has established market terms and conditions to address these thorny issues when a company hires a service provider to perform business services using people. The challenge now is to adapt those concepts fairly and appropriately when the company instead hires a service provider to deliver services using AI agents.
This Legal Update identifies six critical clauses where the standard SaaS contracting framework is a poor fit for agentic AI solutions and proposes updated versions of BPO-style solutions as a more balanced, appropriate starting point for both buy-side and sell-side negotiations.
1. Definitions & Scope of Service
The SaaS Clause: A standard SaaS agreement defines the “Service” as a hosted software platform, with the company receiving a non-exclusive right to access and use that platform. The provider is responsible for making the platform available; the company is responsible for all actions taken with the platform.
The BPO-Style Solution: The “Service” would be defined as the set of tasks and responsibilities that the provider agrees to complete using AI agents. This definition would explicitly set out the provider’s “delegation of authority” and any “policy guardrails.” The delegation of authority would outline what the provider can do using AI agents (e.g., offer an on-site service call) and what it cannot do using AI agents (e.g., offer refunds or accept liability for claims). The policy guardrails would specify how the AI agents can operate, including mandatory escalation triggers for human-in-the-loop (HITL) approval. This provides clarity for the company and a defensible liability guardrail for the provider.
Practice Tips for Companies: Key Questions to Ask
- What parts of the business process will be performed with AI agents?
- Which steps are critical to define in the policy guardrails?
- What is the precise delegation of authority? What is the exact threshold where the agentic AI solution must stop and ask a human for approval?
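One way to make the answers to these questions concrete is to capture the delegation of authority and policy guardrails in a machine-readable form that can be attached to the contract as an exhibit and enforced by the provider’s platform. The sketch below is purely illustrative; the names (DelegationOfAuthority, permitted_actions, escalation_triggers) and the sample actions are assumptions, not any particular provider’s product:

```python
from dataclasses import dataclass, field


@dataclass
class DelegationOfAuthority:
    """Hypothetical sketch of a delegation of authority and policy
    guardrails for a customer-service AI agent (illustrative only)."""

    # Actions the provider's AI agents may take without human approval.
    permitted_actions: list[str] = field(default_factory=lambda: [
        "schedule_onsite_service_call",
        "send_order_status_update",
    ])
    # Actions the AI agents may never take on the company's behalf.
    prohibited_actions: list[str] = field(default_factory=lambda: [
        "issue_refund",
        "accept_liability_for_claim",
    ])
    # Conditions that must trigger escalation for human-in-the-loop approval.
    escalation_triggers: list[str] = field(default_factory=lambda: [
        "customer_requests_refund",
        "customer_threatens_legal_action",
        "agent_confidence_below_threshold",
    ])

    def requires_human_approval(self, proposed_action: str) -> bool:
        """Return True if a proposed action falls outside the delegated
        authority and must stop for human approval."""
        return proposed_action not in self.permitted_actions


doa = DelegationOfAuthority()
print(doa.requires_human_approval("schedule_onsite_service_call"))  # False
print(doa.requires_human_approval("issue_refund"))                  # True
```

In practice, the human-readable exhibit in the agreement would control; any machine-readable version like this would simply need to stay consistent with it.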
2. Service Warranties
The SaaS Clause: “THE SERVICE IS PROVIDED AS-IS, WITH ALL FAULTS.” This is the opening of an extensive, all-caps disclaimer in many SaaS agreements for AI products. A SaaS provider may be willing to offer a warranty that its product will perform in material conformance with its documentation. However, SaaS-based AI providers argue that they cannot offer even that warranty given the probabilistic nature of AI.
The BPO-Style Solution: A BPO-style approach would include performance warranties. BPO agreements generally include warranties that services will be performed in a good, professional, diligent, and workmanlike manner in accordance with industry standards. Here, the contract would apply that warranty both to the work of the people who create, monitor, and maintain the AI agents and, expressly, to the work performed by the AI agents themselves, as if it were performed by people. In addition, BPO agreements routinely include warranties of compliance with key restrictions, so this solution might include a warranty that the Services will comply with law and materially conform to the delegation of authority and policy guardrails as defined in the Agreement.
Practice Tips for Companies:
- Providers will generally not accept warranties of perfection in the delivery of services. Quality is thus addressed through the more circumscribed warranties described above or through Service Level Agreements (as described below).
- The warranty of compliance with the delegation of authority and policy guardrails is directly linked to Clause 1. The better you define the delegation of authority and the policy guardrails, the more willing a provider may be to warrant that its AI agents will stay within that defined scope.
3. Service Level Agreements
The SaaS Clause: Service Level Agreements (SLAs) are technical and measure platform availability; 99.99% “uptime” is a common standard in a SaaS agreement. This provides little comfort if the agent is “up” but making costly errors.
The BPO-Style Solution: SLAs would be operational and measure outcomes rather than solely availability. Service credits can still be a remedy, but they are triggered by performance failures, not just downtime.
Key metrics for an agentic SLA might include:
- Accuracy: e.g., 99% of invoices processed correctly against the purchase order.
- Timeliness: e.g., 99% of support tickets actioned within the required service window.
- Satisfaction: e.g., <1% of autonomous actions lead to consumer complaints.
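To illustrate how an outcome-based SLA could operate mechanically, the following sketch is purely hypothetical: the metric, the 99% target, and the 5% service-credit figure are assumptions rather than market terms. It computes a period’s accuracy score and determines whether a service credit is triggered:

```python
def accuracy_sla_credit(correct: int, total: int,
                        target: float = 0.99,
                        credit_pct_of_fees: float = 0.05) -> float:
    """Illustrative outcome-based SLA mechanic: if measured accuracy for the
    period falls below the target, a service credit (expressed as a share of
    the period's fees) is triggered. All figures are hypothetical."""
    if total == 0:
        return 0.0
    accuracy = correct / total
    return credit_pct_of_fees if accuracy < target else 0.0


# Example: 9,890 of 10,000 invoices processed correctly is 98.9% accuracy,
# below the 99% target, so a 5% service credit applies for that period.
print(accuracy_sla_credit(correct=9890, total=10000))  # 0.05
```

Measurement windows, exclusions, and any earn-back mechanics would be negotiated on top of a basic calculation like this.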
Practice Tips for Companies:
- Talk to your internal business stakeholders and ask, “What does good look like for this agent?” Translate their business-focused answers (e.g., “it doesn't make mistakes,” “it's fast”) into measurable metrics like “Accuracy” and “Timeliness.”
- Propose these outcome-based SLAs instead of accepting the provider's standard “uptime” SLA.
- Note that this may also be in the provider’s interest, as outcome-based SLAs help define the remedies for poor performance (e.g., where the provider’s position is that service credits are the sole and exclusive remedy).
4. Indemnification
The SaaS Clause: In many SaaS agreements, indemnities are narrow. In some cases, the provider’s only indemnity is to defend and hold harmless the company from third-party IP infringement claims relating to the Services.
The BPO-Style Solution: Indemnities are often broader, as they are designed to cover risks that arise from the way the service is performed. For agentic AI, a company may seek indemnification from the provider for a broader range of third-party claims arising from the agent's autonomous actions—provided those actions were within the agreed-upon scope.
Examples could include claims where the agent discriminates in an automated hiring workflow or breaches the policy guardrails. These indemnities may be balanced with provider-favorable carve-outs (e.g., indemnification would not apply to harms caused by (a) company misconfiguration, (b) faulty company data, or (c) an action that the agent escalated and the company’s HITL reviewer explicitly approved).
Practice Tips for Companies:
- Consider proposing a provider indemnity for third-party claims “arising from the agent's autonomous performance of the Services.” Be prepared to narrow this to specific areas of risk.
- To make this reasonable, proactively offer carve-outs for your own company’s failures, such as providing bad data or making a bad decision on an escalated HITL approval.
5. Governance & Audit Rights
The SaaS Clause: Audit rights may be limited to the provision of a multi-customer SOC 1 or SOC 2 audit report, and the provider may even have the right to audit the company for seat-license or usage overages (a legacy from on-premise software licensing models).
The BPO-Style Solution: A BPO-style approach would be to provide broader company audit (or transparency) rights. A company delegating a core function may require a contractual “right to transparency.” For example:
- Technical: The provider would have an obligation to maintain decision logs for all decisions made by AI agents, and the company would have the right to audit any agent’s decision logs to help answer the question, “why did it do that?”
- Operational: The right to formally assess AI agent performance against the agentic SLAs and other contractual commitments.
Practice Tips for Companies:
- Before signing, conduct a technical and operational analysis to determine how to audit whether the provider’s work complies with requirements, then ask for contractual rights to conduct that audit.
- Require that decision logs and other key records be maintained in structured, legible form so that they are useful; a purely illustrative sketch of what such a record might contain follows below.
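As an assumption-laden illustration (the field names below are hypothetical, not any provider’s actual log format), a structured decision-log entry might capture who acted, what was done, why, and whether the action stayed within the delegation of authority:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional
import json


@dataclass
class DecisionLogEntry:
    """Hypothetical structured decision-log record for an AI agent;
    the field names are assumptions, not a standard or a provider format."""

    timestamp: str
    agent_id: str
    task_id: str
    action_taken: str
    inputs_summary: str
    rationale: str                 # the agent's recorded reasoning or tool trace
    within_delegated_authority: bool
    escalated_to_human: bool
    human_approver: Optional[str] = None


entry = DecisionLogEntry(
    timestamp=datetime.now(timezone.utc).isoformat(),
    agent_id="supplier-payments-agent-01",
    task_id="INV-2024-0042",
    action_taken="sent_payment_status_update",
    inputs_summary="Supplier inquiry about late payment on invoice INV-2024-0042",
    rationale="Invoice matched PO; payment confirmed in ERP; no refund requested",
    within_delegated_authority=True,
    escalated_to_human=False,
)

# A structured, machine-readable record that can be produced in an audit.
print(json.dumps(asdict(entry), indent=2))
```

The point is less the exact fields than that the logs are machine-readable and complete enough to reconstruct the agent’s conduct during an audit.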
6. Data, IP Rights & Model Training
The SaaS Clause: SaaS terms can sometimes grant the provider a broad, perpetual license to all data generated by or through the platform (including company inputs and AI outputs) to “improve the service.” This often means the provider can train its models on the company’s confidential data.
The BPO-Style Solution: While BPO providers may similarly ask for rights to use company data, a BPO relationship is a service provider/processor relationship and therefore the lines are much clearer. Under a BPO-style approach:
- Data & IP Ownership: The contract would state unambiguously that the company owns (a) all data submitted to or obtained by the agent (the inputs), and (b) all outputs created or generated by the agent in the performance of the service (including the IP rights therein). Any license to use such company data (e.g., in original or deidentified form) would be carefully negotiated.
- Model Training Rights: The contract would explicitly prohibit the provider from using company data (inputs or outputs) to train its models unless the company consents to such training, perhaps for its own benefit or for collective benefit.
Practice Tips for Companies:
- Find the “Data Use” or “License to Provider” clause and insert an explicit prohibition against using company data to train, fine-tune, or otherwise improve any AI model without your approval. Be open to discussions about win-win scenarios regarding use of data.
- With some exceptions, ensure that, as between the parties, you own the generated outputs. You are paying for a service that creates those outputs on your behalf. However, providers may want you to acknowledge that the output generated for you may be the same or substantially similar to an output generated for another customer.
Conclusion: Forging the Hybrid Contract
The path forward for procuring agentic AI will likely not be to scrap the SaaS contracting model entirely but to create a new hybrid contracting model. This new model can leverage both the scalable, subscription-based framework of SaaS contracts and BPO-style performance and governance commitments. In doing so, it can allow both companies and providers to achieve new levels of value in contracts for services delivered using agentic AI.



