Negotiating Liability and Indemnification Clauses in OpenAI Enterprise API Contracts

In enterprise AI deals, liability and indemnity clauses define who bears the risk of things going wrong. When working with OpenAI’s API or ChatGPT Enterprise, CIOs and procurement leaders must scrutinize these terms. OpenAI’s standard contracts include an IP indemnity, broad disclaimers, and a liability cap tied to fees paid. You will want to understand these defaults and push for carve-outs or enhancements to protect your organization. This guide walks you through OpenAI’s typical clauses and shows how to negotiate them for data breaches, intellectual property issues, compliance fines, and model output risks.

OpenAI’s Standard Liability & Indemnity Terms
OpenAI’s business terms (as of late 2023) follow a familiar SaaS pattern but with AI nuances. By default, OpenAI will indemnify you for third-party IP claims against the service itself: the agreement states, “OpenAI agrees to indemnify, defend, and hold Customer harmless…arising out of a Claim alleging that the Services infringe any third-party IP Right.” This IP indemnity explicitly covers claims tied to OpenAI’s own code or training data. It excludes claims arising from the combination of OpenAI’s services with other products, modifications to the service, or any customer-provided content or applications.

In return, you must indemnify OpenAI for problems caused by your actions. For example, the contract requires the customer to defend OpenAI if their use of the API violates the agreement (like breaching content rules) or if they upload unauthorized material. This mutual indemnity arrangement is typical: OpenAI covers the risks associated with the “AI engine,” and you cover any misuse or errors in your inputs. It’s important to narrow your indemnity obligation, so you aren’t on the hook for every imaginable claim. As one expert advises, make sure your OpenAI indemnity is “tied to your breach of the agreement or misuse of the service, not just any use.”

OpenAI also limits liability and damages. The agreement disclaims virtually all indirect damages (including lost profits, punitive, special, and incidental damages) for both parties. It then caps direct damages: typically, “each party’s total liability…will not exceed the total amount Customer paid to OpenAI during the twelve months immediately before the event”. In other words, if you paid $100k in the past year, that’s the maximum OpenAI would owe you for any claim (aside from uncapped exceptions). These standard caps and disclaimers shield OpenAI from large losses. The cap excludes OpenAI’s indemnity obligations (meaning they can pay more than the cap to satisfy an IP indemnity).

Damage Types – Direct vs. Indirect. OpenAI’s terms broadly bar indirect (consequential) damages. The cap limits direct damages (like actual legal costs or remedies). For example, if a ChatGPT outage causes you to lose profits, you’d normally classify lost profits as indirect and be out of luck. In practice, you’ll likely accept no-indirect-damages clauses, but clarify that some costs (e.g., breach investigation, legal fees defending IP claims, or regulatory fines) count as direct. As one playbook suggests, “Consider lost profit as a direct loss if your use of OpenAI is directly tied to revenue.”

Carve-Outs and Exceptions. Not all losses are treated equally. Common carve-outs include: (1) Gross negligence or willful misconduct – OpenAI’s limits don’t apply if they act in bad faith or with gross negligence. (2) Indemnification payments – OpenAI’s obligations to indemnify you (e.g., for IP claims) are not subject to the monetary cap. (3) Confidentiality/data breaches – the indirect-damages waiver does not apply to OpenAI’s breach of security or confidentiality obligations, meaning you can claim full damages if they botch security. (4) Payment obligations – if you owe money to OpenAI, that debt isn’t limited by the cap. In negotiations, you want to include strategic carve-outs; for instance, demand that any liability for regulatory fines (e.g., GDPR sanctions resulting from a data leak) be uncapped or explicitly covered.

Managing Specific AI Risks

Data Breaches & Security Incidents

Cloud services, such as OpenAI’s API, handle valuable data. Negotiate clear security commitments and breach protocols. Ensure the contract requires OpenAI to notify you immediately of any incident affecting your data and to assist with the investigation. You may seek an indemnity or at least an uncapped carve-out for losses from a vendor-caused breach. For example, an ideal clause might state that damages from OpenAI’s failure to meet its security obligations are not subject to the usual cap or disclaimers. (OpenAI’s standard does exclude security breaches from indirect damage limits, but you could push to explicitly remove the monetary cap for breaches of Section 5.1). Additionally, confirm that OpenAI’s insurance or incident response resources will cover the costs associated with notifying customers and regulators.

In short, push for strong security and breach obligations backed by real liability. For example, request that OpenAI indemnify you for any confirmed breach of your data resulting from their negligence. If they resist, at least carve out breach-related losses (such as remediation costs and regulatory fines) from the cap. Document their compliance standards (e.g., SOC 2, ISO 27001), and tie any failure to those commitments.

IP Infringement & Content Liability

Intellectual property risk is at the forefront with AI. OpenAI’s indemnity covers infringement by the model or training data, which is often referred to as the “Copyright Shield.” This is crucial: if ChatGPT accidentally plagiarizes text or code, OpenAI should defend you. Confirm that the indemnity explicitly includes outputs and training data. It does – OpenAI’s terms say they’ll defend claims that the “Services or training data” infringe copyright. However, note the exceptions: claims arising from your content or applications, or combining OpenAI’s output with other software, are excluded. In practice, this means that if you reprint or transform an AI output and it infringes, OpenAI might claim it’s your responsibility (since it resulted from your input or combination).

Negotiation tactics: Narrow those carve-outs. For example, if you are negotiating an enterprise agreement, ask OpenAI to limit the modification and combination exclusions so that ordinary uses of their output still fall under the indemnity. Dentons warns that broad exceptions (like any “modification” of the software) can gut the protection. You might propose language like “except to the extent caused by material changes made by Customer to the Services” rather than excluding any combination. Also, ensure that trademarks or other intellectual property rights (if relevant to your domain) are mentioned.

For content liability (defamation, disinformation, etc.), OpenAI will not indemnify you – they treat user outputs as your responsibility. You should be prepared to disclaim or edit sensitive outputs. Internally, ensure that outputs are reviewed before publication. You can ask OpenAI if they will extend indemnity or warranty protection for “product defects” (like code bugs causing harmful outputs), but vendors often resist this. At a minimum, document a process for addressing injurious outputs (e.g., a bug bounty or rapid patch process).

Regulatory Compliance & Government Investigations

AI and data use are heavily regulated. Confirm that OpenAI will assist you in complying with the relevant regulatory requirements. For example, include or attach OpenAI’s standard Data Processing Addendum (DPA) to meet GDPR/CCPA requirements. In negotiations, clarify whose job it is to handle data subject requests or audit demands: typically, OpenAI (as a processor) should assist you in responding to regulators.

For liability, fines and penalties for regulatory violations can be enormous. OpenAI’s terms don’t explicitly cover this. A smart tactic is to carve out regulatory fines from the liability cap; for example, if OpenAI’s misconfiguration results in a GDPR fine, they should cover it. OpenAI may balk, but you can tie this into the security discussion. Highlight that some AI regulations (like the EU AI Act) could impose obligations – ask for indemnity or at least cooperation in case of government inquiries.

Finally, prepare for investigations: ensure the contract obligates OpenAI to preserve logs and provide records if legal or regulatory authorities demand them (and to notify you if they get a subpoena for your data).

Output Risk – Hallucinations, Bias & Misinformation

Generative AI is unpredictable. OpenAI’s standard stance is that you are solely responsible for outputs. They disclaim accuracy and won’t indemnify you for false or biased content. As a customer, you should manage this via your policies (human review, fact-checking, etc.). Contractually, there is limited wiggle room: you could request a “product defect” indemnity or a warranty that the model meets certain safety standards, but expect resistance.

Instead, negotiate practical mitigations. For example, you can request that OpenAI provide information on mitigation techniques or the latest updates that reduce bias. If using the API in regulated areas (like medical advice or financial decisions), you can turn off training on your data and apply additional filters (some enterprise deals allow custom guardrails).

Dentons suggests one approach: require the vendor to monitor and address bias (e.g., via audit rights). While indemnity for hallucinations is unlikely, you could try a clause that if OpenAI knowingly delivers content with malicious code or a virus, they must indemnify you (this is an extreme scenario, but it underscores vendor responsibility for “product defects”). In short, treat AI outputs as a service with no warranty, but negotiate for transparency and safeguards.

Practical Negotiation Strategies & Example Clauses

  • Liability Cap:
    • Standard: “Each party’s total liability…will not exceed the total amount Customer paid to OpenAI during the twelve months immediately before the event”.
    • Negotiation: Push to raise or remove this cap for key risks. For example, ask for a cap equal to 2–3 times the annual fees or a flat high-dollar amount. Crucially, carve out exceptions. Common carve-outs include breaches of confidentiality, gross negligence, IP infringement indemnity, and privacy violations. You might demand that liabilities for data breaches or regulatory fines fall outside the cap. OpenAI’s terms already exclude indemnity payments from the cap, but insist that all payments under an indemnity (even yours) are additional to the cap.
  • IP Indemnity:
    • Standard: “OpenAI agrees to indemnify…any Claim alleging that the Services infringe any third-party IP Right”. It notes that this covers training data and model outputs.
    • Negotiation: Ensure this clause explicitly covers both the AI service and any outputs you use. Clarify that “Services” includes generated outputs. Tighten or eliminate broad exceptions (e.g., “combinations with other software” or “modifications”). In practice, confirm that if an output directly copies copyrighted content, OpenAI defends you even if you use that output downstream.
  • Customer Indemnity:
    • Standard: You indemnify OpenAI for claims arising from your breach of this agreement or misuse (including the use of disallowed content).
    • Negotiation: Narrow this obligation. For instance, specify that it only applies to claims caused by your breach of the contract or violation of the law, and that it does not apply to claims arising from your normal, authorized use of the service. Avoid language that would force you to indemnify OpenAI for things beyond your control (like data breaches not caused by your actions).
  • Indirect Damages:
    • Standard: Indirect, punitive, and consequential damages are waived for both parties.
    • Negotiation: You’ll likely accept this waiver, but confirm it doesn’t cover all pain points. For example, clarify that costs like legal fees, breach notification costs, or statutory fines are “direct” damages resulting from the incident so that you can claim them. Some savvy negotiators request that certain damages be listed explicitly as direct (e.g., “costs of preventing or correcting a security incident”).
  • Carve-Out Examples (risk area, OpenAI’s default treatment, and negotiation tactic):
    • Confidentiality breach: The indirect damages waiver does not apply to security breaches. Strengthen this by making breach liability uncapped and waiving any requirement that you prove negligence; add audit rights or incident response obligations.
    • Data privacy (DPA) fines: Typically not addressed; vendor disclaimers remain in place. Carve out regulatory fines from the cap. For example: “Notwithstanding any cap, OpenAI will be liable for fines levied by regulators to the extent caused by its breach of this contract.”
    • Gross negligence: Excluded from the cap by default. Define it clearly in the contract; you might add examples (such as intentional data theft or flagrant failure to maintain security updates) that would void the cap.
    • IP indemnity claims: Indemnity is uncapped. Keep it uncapped. You could even expand it: some clients request indemnity beyond copyright (e.g., covering brand defamation or trade secret misuse, if feasible), though vendors resist.
  • Example Redlines:
    • Original: “Neither party will be liable for any indirect, incidental, special, consequential, or exemplary damages…”.
      Revised: “Neither party will be liable for indirect damages except for (1) breaches of confidentiality or data protection obligations, (2) regulatory fines arising from a party’s breach, or (3) any damages subject to indemnification. In those cases, liability shall be direct.”
    • Original: “Each party’s total liability…will not exceed the total amount paid during the twelve months… before the event giving rise to liability”.
      Negotiation: Propose raising this to “24 months’ fees” or “the aggregate of all fees paid during the contract term.” More importantly, append: “This cap shall not apply to liability arising from breaches of confidentiality, IP infringement claims, gross negligence/willful misconduct, or any regulatory penalties.”

Recommendations

  • Secure and expand IP indemnity. Ensure OpenAI’s indemnity for IP infringement explicitly covers the model and any outputs you use. Narrow overly broad exclusions (for example, avoid giving up protection just because you combined the output with other systems).
  • Negotiate liability caps and carve-outs. Don’t accept a low cap if your potential loss is large. Request a higher cap (e.g., a multiple of annual fees or a fixed large sum) and clearly carve out the major risks. At a minimum, exclude indemnity payments and damages from data breaches or confidentiality violations from the cap.
  • Balance indemnities. Push back if OpenAI’s form tries to make your indemnity too broad. You should only indemnify OpenAI for claims caused by your breach or misuse. Conversely, hold OpenAI to its promises. For example, insist that their indemnity obligations remain uncapped and are the “only remedy” for IP infringement.
  • Clarify regulatory liability. Explicitly address compliance: attach a Data Processing Addendum (covering GDPR/CCPA) and detail who is responsible for paying any regulatory fines or penalties. Make clear that any fines due to OpenAI’s failure to secure data or comply are OpenAI’s responsibility. Even if it’s only an “ask,” this signals the importance of compliance.
  • Plan for outputs. Treat AI outputs as a risk you manage internally. Implement review processes and consider insuring against harm from misinformation. Contractually, ask for transparency (e.g., audit rights on model improvements) and ensure you have the right to delete or correct sensitive data.

By tailoring these clauses to the realities of generative AI, you can shift risk to where it belongs. As one commentary notes, “Each party should cover the risks under their control: OpenAI covers the AI and its training content; you cover…your use of the AI”. Use that principle in negotiations: make OpenAI accountable for flaws in their models or data, and ensure you aren’t unfairly liable for downstream issues.

Author

  • Fredrik Filipsson

    Fredrik Filipsson brings two decades of Oracle license management experience, including a nine-year tenure at Oracle and 11 years in Oracle license consulting. His expertise extends across leading IT corporations like IBM, enriching his profile with a broad spectrum of software and cloud projects. Filipsson's proficiency encompasses IBM, SAP, Microsoft, and Salesforce platforms, alongside significant involvement in Microsoft Copilot and AI initiatives, improving organizational efficiency.