AI is now embedded in how businesses deliver products and services. Whether you're buying consulting services, licensing software, outsourcing manufacturing, or entering a distribution agreement, there's a growing likelihood that your counterparty is using AI somewhere in the delivery chain. And that means your contracts need to address it.
This isn't about procuring 'AI services' as a distinct category. This is about any commercial agreement where one or both parties are introducing AI-related provisions (data usage restrictions, training prohibitions, output ownership, disclosure obligations) into the standard terms.
At ThoughtRiver, we've been reviewing contracts with AI for nearly a decade, and we've watched this shift happen in real time. What follows is a practical guide to the AI clauses that are becoming standard, the provisions still being heavily negotiated, and the specific language and strategies that matter from both buyer and supplier perspectives.
This is the first in a series of blog posts about the evolution of contract clauses in the age of AI.
The Clauses That Are Becoming Standard
Across commercial agreements of all types (SaaS, professional services, manufacturing, distribution, and others), a core set of AI-related provisions is now expected. Here's what buyers typically want, what suppliers typically resist, and where the negotiation usually lands.
1. Definitions
What buyers want: A clear, broad definition of AI that doesn't become outdated as technology evolves. It should clearly define how AI is used, whether in the service of the buyer or for the benefit of the supplier, as well as how any of the buyer's data is used for the service and for model training. Be wary of general-purpose or catch-all statements, as they can be an open door to using your data in ways you may not want.
Example language (buyer-friendly):
"Artificial Intelligence" or "AI" means any automated system, software, algorithm, or technology that performs tasks requiring human-like perception, reasoning, learning, decision-making, or prediction, including but not limited to machine learning models, natural language processing systems, computer vision systems, generative AI, and any successor or related technologies.
What suppliers resist: Definitions so broad they capture basic automation or rule-based systems (e.g., Excel macros, basic if/then logic).
Compromise position: Add a carve-out for basic automation: "...excluding deterministic, rule-based systems that do not involve learning, adaptation, or probabilistic outputs."
Red flag: Definitions that are overly narrow (e.g., "AI means only systems explicitly marketed as artificial intelligence by Supplier") leave too much room for evasion.
2. Disclosure and Transparency
What buyers want: Upfront disclosure of which AI tools are being used, and advance notice before any changes.
Example language (buyer-friendly):
Supplier shall provide Customer with a written list of all AI systems used in connection with the Services within 30 days of the Effective Date and shall update such list within 30 days of any material change. Supplier shall provide Customer with at least 60 days' advance written notice before deploying any new AI system that will process Customer Data or materially affect Service delivery.
What suppliers resist: Detailed, tool-by-tool disclosure, model disclosure (citing competitive sensitivity as well as the administrative burden) and lengthy advance notice periods that slow down innovation.
Compromise position: Disclosure at a category level, tied to the specific purpose the AI serves (e.g., "natural language processing for customer support," "machine learning for fraud detection"), rather than naming specific vendors or models. Reduce the notice period to 30 days for non-critical changes, but maintain 60+ days for changes involving customer data or high-risk use cases.
Red flag: Clauses that permit AI deployment "at Supplier's discretion" with no notice obligation whatsoever.
3. Data and Training Restrictions
This is the most commercially sensitive area. Buyers want absolute control over whether their data is used to train or improve AI models. Suppliers often resist blanket prohibitions.
What buyers want:
Example language (buyer-friendly):
Supplier shall not use Customer Data, including any inputs, prompts, queries, or outputs generated in connection with the Services, to train, fine-tune, or otherwise improve any AI model or system, whether proprietary to Supplier or provided by a third party, without Customer's prior written consent. This restriction survives termination of this Agreement.
What suppliers resist: Total prohibition on using anonymized or aggregated data to improve their platform.
Compromise position:
Supplier may use aggregated, de-identified usage data (not including any Customer Data content) to improve the Services only and for no other purpose, provided such data cannot reasonably be re-identified. Supplier shall not use any Customer Data content, prompts, or outputs for training purposes.
Be sure the lower-case term "data" is clearly defined in the agreement. Remember: outputs can be data and can still contain sensitive client information. In addition, outputs can sometimes allow a reasonable inference of the underlying inputs.
Buyers may want to push for an annual right to audit how de-identification is performed and to confirm it meets recognized standards (e.g., NIST de-identification guidelines). But keep in mind that suppliers often view audits as a back door to trade secrets: how they create their algorithms and build their technology. If audit rights are granted, they will likely be limited to reasonably account for confidentiality, and a third party (other than the client) may not be permitted to conduct the audit.
Red flags:
- Clauses that permit use of customer data "to improve the platform" without defining what that means or limiting it to non-content metadata
- Any language allowing training "with anonymization" without specifying the de-identification method or providing audit rights
- Suppliers who refuse to commit in writing that customer data won't be used for training
4. Intellectual Property Ownership of AI Outputs
When AI generates content such as code, designs, reports, or marketing copy, contracts must clarify who owns it.
What buyers want:
Example language (buyer-friendly):
All outputs generated by AI systems in connection with the Services, including but not limited to text, images, code, designs, and reports, shall be the sole and exclusive property of Customer. Supplier hereby assigns to Customer all right, title, and interest in such outputs.
What suppliers resist: Assigning IP when the AI model itself (or templates/prompts used to generate outputs) is proprietary to the supplier.
Compromise position:
Customer owns all AI-generated outputs. Supplier retains ownership of its underlying AI models, algorithms, training data, and any general-purpose templates or prompts. Supplier grants Customer a perpetual, irrevocable, worldwide license to use the outputs for any purpose.
Red flag: Clauses that give suppliers joint ownership of outputs or retain any rights to customer-specific outputs. Also watch for restrictions on using outputs (e.g., "for internal use only").
5. Regulatory Compliance and Change Management
The EU AI Act came into force in 2024. US states are passing their own AI laws. Compliance obligations are expanding, and contracts need to keep pace.
What buyers want:
Example language (buyer-friendly):
Supplier warrants that its use of AI systems complies with all applicable laws, regulations, and industry standards, including but not limited to the EU AI Act, UK AI regulations, and any applicable US federal or state AI legislation. If changes in law require modifications to the AI systems or Services, Supplier shall implement such modifications at no additional cost to Customer within 30 days of such new law going into effect.
What suppliers resist: Bearing the full cost of compliance with future, unknowable regulations, particularly when those regulations impose expensive technical requirements.
Compromise position: A general commitment that Supplier shall comply with applicable laws as they evolve, without a fixed remediation timeline or a blanket promise to absorb all compliance costs.
Red flag: Any disclaimer of regulatory compliance obligations (e.g., "Services are provided 'as is' with respect to regulatory compliance").
Watch this space for more best practices to come. Much like AI itself, legal terms are evolving quickly. The more quickly critical clauses can be identified and tied back to other obligations within the agreement, the more effectively legal leaders can ask the right questions to manage risk and opportunities for their organizations.
Stay tuned for Part II!
