AI is now embedded in how businesses deliver products and services. Whether you're buying consulting services, licensing software, outsourcing manufacturing, or entering a distribution agreement, there's a growing likelihood that your counterparty is using AI somewhere in the delivery chain. And that means your contracts need to address it.
This isn't about procuring 'AI services' as a distinct category. This is about any commercial agreement where one or both parties are introducing AI-related provisions (data usage restrictions, training prohibitions, output ownership, disclosure obligations) into the standard terms.
At ThoughtRiver, we've been reviewing contracts with AI for nearly a decade, and we've watched this shift happen in real time. What follows is a practical guide to the AI clauses that are becoming standard, the provisions still being heavily negotiated, and the specific language and strategies that matter from both buyer and supplier perspectives.
This is the second in a series of blog posts about the evolution of contract clauses in the age of AI.
The AI Clauses That Are Becoming Standard
Across commercial agreements of all types (SaaS, professional services, manufacturing, distribution and others), a core set of AI-related provisions is now expected. Here's what buyers typically want, what suppliers typically resist, and where the negotiation usually lands.
1. AI Audit Rights and Transparency
Without the ability to verify how AI is being used, governance is theoretical. However, audits no longer tell the full story in the way they did for previous generations of software development.
On the one hand, understanding how a supplier uses AI – particularly with a Buyer’s data – is useful for tracking confidentiality and IP rights. On the other hand, that data use may be intertwined with proprietary data orchestration or systems that the Supplier has a legitimate interest in protecting. Audit rights can be beneficial or a fool’s errand: administratively burdensome to both sides.
Consequently, use them sparingly, with very specific terms that address a business issue clearly understood by both sides.
What buyers want – example language (buyer-friendly):
Customer may, upon no less than ten (10) business days' prior written notice, audit Supplier's AI systems, models, training data, algorithmic decision-making processes, and any third-party AI tools or sub-processors used in connection with the Services provided to Customer. Customer shall use reasonable efforts to avoid material disruption to Supplier's operations during any audit conducted in the absence of a Security Incident or material compliance concern. Supplier shall provide Customer or its designated auditor with access to relevant documentation, system logs, and personnel to verify compliance with this Agreement.
Supplier shall bear the cost of such audits unless non-compliance is identified, in which case Customer's reasonable audit costs shall be reimbursed.
What suppliers resist: Unlimited audit frequency and scope, particularly when it exposes proprietary model details or competitive information. Also, “AI systems” is an extremely broad term. Narrowing the audit scope to precisely what is necessary to identify the behaviour in question is more likely to yield agreement.
Compromise position: Annual audits with scope limited to verifying compliance with data usage, training restrictions, and security obligations. Customer may use a mutually agreed third-party auditor bound by confidentiality. Supplier's proprietary model architecture and training data sources are excluded from scope unless there's evidence of a breach.
Red flag: Complete absence of audit rights, or language limiting audits to "on-site only" (which can be impractical for cloud-based services).
2. Incident Response and Breach Notification
AI systems can be breached. AI outputs can cause harm. Contracts need clear notification and remediation obligations.
What buyers want – example language (buyer-friendly):
Supplier shall notify Customer within 24 hours of becoming aware of any: (i) unauthorized access to or breach of any AI system processing Customer Data; (ii) AI-generated output that is materially inaccurate and has been or may be relied upon by Customer or third parties; or (iii) regulatory investigation related to Supplier's AI systems. Supplier shall provide a root cause analysis within 10 business days and implement corrective measures at no additional cost.
What suppliers resist: Very short notification timeframes (particularly for accuracy issues, which may require investigation to confirm) and bearing all costs of remediation.
Compromise position: 24-hour notification for security breaches; 48-72 hours for accuracy issues (allowing time for investigation); 5 business days for regulatory matters. Supplier bears remediation costs for incidents caused by Supplier's negligence or breach; costs are shared for incidents outside Supplier's reasonable control.
Red flag: No incident notification obligations, or notification timelines of "reasonable time" without defining it.
The Provisions That Are Still Being Fought Over
Beyond the emerging baseline, there are several areas where buyers and suppliers are still negotiating hard, and where the law hasn't fully caught up with commercial reality.
1. Accuracy Warranties
The flashpoint: Vendors typically refuse to warrant that AI outputs will be accurate, arguing that AI is inherently probabilistic. Buyers pushing back have a fair point: if you're selling a system based on its performance, that promise should mean something.
What suppliers offer (vendor-friendly):
Supplier makes no warranty regarding the accuracy, completeness, or reliability of any AI-generated outputs. Customer acknowledges that AI systems may produce erroneous results and agrees to independently verify all outputs before relying on them.
What buyers want (buyer-friendly):
Supplier warrants that AI-generated outputs will meet the accuracy standards and performance metrics set forth in Exhibit A. If outputs fail to meet such standards, Supplier shall re-perform the work at no cost or refund fees paid.
Possible compromise:
Supplier warrants that AI systems will perform substantially in accordance with the specifications and performance benchmarks documented in Exhibit A, measured over a statistically significant sample size. If performance falls materially below documented benchmarks (e.g., >10% deviation), Customer may request remediation or, if remediation is not achieved within 30 days, terminate the affected portion of the Services for refund.
This ties warranties to documented, measurable performance rather than absolute accuracy, which is more defensible for suppliers while still giving buyers recourse.
2. Liability for AI Hallucinations
Where a system generates convincing but completely false outputs and someone relies on them, who is responsible? Courts are starting to answer this question, and the answers aren't always what vendors hope.
What suppliers want (vendor-friendly):
Supplier shall not be liable for any damages arising from Customer's or any third party's reliance on AI-generated outputs. Customer is solely responsible for verifying the accuracy of all outputs before use.
What buyers want (buyer-friendly):
Supplier shall be liable for all direct and consequential damages arising from materially inaccurate AI outputs, including but not limited to legal costs, regulatory fines, and reputational harm.
A Supplier is unlikely to accept all liability, given that its AI output may be only one input among many contributing to the harm.
Possible compromise:
Supplier shall be liable for direct damages arising from AI outputs that are materially inaccurate due to Supplier's failure to implement reasonable quality controls or human oversight as specified in this Agreement, up to the liability cap set forth in Section X. Supplier shall not be liable for consequential damages or for inaccuracies in outputs where Customer failed to perform the verification procedures specified in the Documentation.
This creates shared responsibility: suppliers are accountable for implementing reasonable safeguards, but buyers must also verify outputs in high-stakes situations.
Emerging case law to reference: In Moffatt v. Air Canada (2024), the British Columbia Civil Resolution Tribunal held that a business cannot disclaim liability for what its AI chatbot tells customers. Point to this when pushing back on blanket disclaimers.
3. AI Training Data and Copyright Indemnification
Several major AI developers are facing litigation over whether their models were trained on copyrighted material without authorization. Sophisticated buyers are asking for indemnities covering not just use of outputs, but the provenance of training data itself.
What buyers want (buyer-friendly):
Supplier warrants that: (i) its AI models were trained only on data for which Supplier holds all necessary rights and licenses; and (ii) Customer's use of AI-generated outputs will not infringe any third-party intellectual property rights. Supplier shall indemnify, defend, and hold Customer harmless from any claims, damages, or costs arising from breaches of these warranties, including claims related to the training data used to develop Supplier's AI models.
What suppliers resist: Providing indemnities for training data provenance when the legal landscape is unsettled and they may have relied on fair use or other defences.
Possible compromise:
Supplier represents that it has taken commercially reasonable steps to ensure its AI models were trained in compliance with applicable intellectual property laws and contractual obligations. Supplier shall indemnify Customer for third-party claims arising from Customer's use of AI outputs, but such indemnity shall not extend to claims based solely on the composition of Supplier's training data, provided Supplier has complied with applicable law. If a final court ruling determines that Supplier's training data violates third-party rights, Supplier shall at its option: (i) obtain necessary licenses; (ii) modify the AI system to remove infringing elements; or (iii) refund fees paid.
This acknowledges legal uncertainty while still providing buyers with recourse if training data issues materialize.
Stay tuned for Part 3!
