
CIO Playbook: Pre-Negotiation Strategies for IP and Data Ownership in OpenAI Enterprise Contracts

Introduction

The enterprise adoption of generative AI presents tremendous opportunities for innovation, but it also raises critical concerns regarding data ownership and intellectual property (IP). Before signing a contract with OpenAI for services like ChatGPT Enterprise, the GPT-4 API, or embedding models, CIOs and other technology leaders must prepare a diligent pre-negotiation strategy. The goal is to safeguard your organization’s data and ensure clarity on who owns and can use both the information you input into the AI and the outputs it generates. This playbook guides CIOs, CTOs, enterprise architects, procurement leads, and legal counsel through the pre-negotiation phase. It provides actionable guidance to protect enterprise data and intellectual property rights when contracting with OpenAI, focusing specifically on the key issues to address before any agreement is finalized.

Why This Matters: OpenAI’s enterprise offerings promise privacy and security by default (for example, ChatGPT Enterprise does not train on your business data and offers encryption), but no organization should rely solely on vendor assurances. Recent incidents have highlighted the risks – for instance, engineers at Samsung inadvertently leaked proprietary source code by pasting it into ChatGPT, prompting the company to ban the internal use of such AI tools. Incidents like this underscore the importance of securing contract terms that prevent unauthorized data use or exposure. Additionally, questions of who owns AI-generated code, documents, or analysis can have far-reaching implications for intellectual property rights and compliance. A robust pre-negotiation plan enables you to proactively address these concerns at the bargaining table, rather than reacting to problems later.

This playbook focuses on pre-contract negotiation measures, including defining requirements, evaluating risks, and formulating strong contractual positions regarding data and intellectual property (IP). It does not cover post-signature enforcement or downstream development workflows. By following these guidelines, enterprises can confidently engage OpenAI (or similar AI vendors) with a clear strategy to protect their crown jewels of data and intellectual property.

Understanding the Stakes: Data and IP in OpenAI Enterprise Services

Before diving into negotiation tactics, it’s crucial to understand how OpenAI’s enterprise services handle data and what’s at risk. When using GPT-4 APIs, embedding models, or ChatGPT Enterprise, your organization will send information (prompts, documents, code, etc.) to OpenAI’s platform (input data) and receive AI-generated output (completions, answers, generated code, embeddings, etc.). Both sides of this exchange raise important questions:

  • Input Data Exposure: Any proprietary or sensitive data you send to an AI model leaves your controlled environment and may be accessible to unauthorized parties. Without proper safeguards, there’s a risk of unauthorized access, data leaks, or your data being used in ways you didn’t intend (such as improving the vendor’s AI model). Regulatory compliance is also a factor – for example, personal data in prompts could trigger privacy laws if not handled properly. In the case of OpenAI, their standard business terms for enterprise clients explicitly treat customer prompts and data as confidential and promise not to use them for training models. However, it’s essential to cement these promises in a negotiated contract and close any gaps (for example, defining how long data is retained and who can access it).
  • AI-Generated Outputs and IP: Content produced by GPT-4 or other models can be extremely valuable – think of software code snippets, research summaries, or strategy documents drafted by the AI. Who owns this output by default? OpenAI’s terms indicate that you own the output generated from your inputs, and OpenAI assigns any rights it might have in that output to the customer. This default is favourable, but ambiguities can still arise. For instance, if the model’s output inadvertently contains copyrighted text or proprietary code from its training data, you could face IP infringement issues even though you “own” the output. Additionally, without an explicit agreement, there might be concerns about whether OpenAI could reuse or disclose the outputs provided to your company. Understanding these stakes – protecting input confidentiality and securing output ownership – sets the stage for formulating your negotiation strategy.

In summary, the pre-negotiation phase should be used to analyze what data you will put into OpenAI’s systems, how critical that data is, what the AI outputs will be used for, and what worst-case scenarios you need to guard against (like proprietary data becoming public or a third-party suing over an AI-generated asset). Armed with this context, you can proceed to define contract requirements that lock down data use and IP rights to your satisfaction.

Safeguarding Input Data: Privacy, Confidentiality, and Security

When entrusting sensitive business information to OpenAI’s AI services, data privacy and confidentiality must be top priorities in your negotiation. Although OpenAI advertises strong protections, your contract must explicitly reflect them. Key considerations include:

  • Confidentiality Obligations: Ensure the contract defines all data you submit (prompts, documents, context you provide) and even the AI’s outputs as your confidential information. The agreement should obligate OpenAI to protect this data, at least to the same degree as it would its sensitive information. All customer-provided content and AI-generated content should be restricted from any unauthorized use or disclosure. In practical terms, this means OpenAI cannot share your data or outputs with third parties or use them for any purpose other than to deliver the service to you. Negotiate clear, strong language here – for example, a clause stating “OpenAI shall treat all Customer Input and Output as Confidential Information of Customer, and will not disclose it to any third party or use it for any purpose outside the scope of providing the contracted AI services.” This provides a contractual backstop to technical measures and policies.
  • Data Retention and Deletion: Control over how long your data is stored on the vendor’s systems is essential to minimizing exposure. OpenAI’s ChatGPT Enterprise product allows organizations to set data retention policies (including the option of not storing prompts at all). In your contract, stipulate that data retention is under your control and management. For example, you may require that no prompts or conversations are stored beyond a transient period needed to serve the response (zero retention), or if retention is allowed for functionality, you can specify a maximum retention duration (e.g., 30 days), after which the data is deleted. The contract should grant you the right to request the deletion of your content at any time and require OpenAI to certify or confirm that the deletion has occurred. This is not only a best practice for security but also important for regulatory compliance, such as GDPR’s “right to be forgotten.” Insist on clauses like: “Upon Customer’s request or contract termination, OpenAI will promptly delete all Customer Content and certify such deletion in writing.” Additionally, consider incorporating an audit or review mechanism to verify deletions, if feasible.
  • Data Security Measures: Beyond confidentiality and deletion, ensure the contract imposes specific security obligations on OpenAI to protect your data. OpenAI should maintain industry-standard security certifications and practices (for instance, SOC 2 Type II compliance, ISO 27001, etc.), use strong encryption (e.g., AES-256 for data at rest and TLS 1.2+ for data in transit), and have access controls that limit who (even within OpenAI) can see customer content. While OpenAI’s materials claim enterprise-grade security, it’s wise to have these commitments in writing. Include a clause requiring timely breach notification as well – e.g., “OpenAI will notify Customer within X days of any security breach affecting Customer data.” Clear security commitments will help your infosec team sleep at night and fulfil your due diligence requirements.
  • Compliance and Data Processing Agreements: If your use of OpenAI will involve personal data or other regulated information, attach a Data Processing Addendum (DPA) or equivalent privacy agreement to the contract. OpenAI provides a standard DPA for business customers – ensure this is reviewed and signed as part of your agreement. The DPA should clearly outline roles (typically, you are the data controller and OpenAI is a data processor acting on your instructions) and include necessary clauses for GDPR, CCPA, or other relevant privacy laws (such as purpose limitation, data subject rights assistance, and sub-processor disclosures). Verify if any industry-specific requirements apply (for example, if you’re in healthcare, you might need a HIPAA Business Associate Agreement). Negotiating these privacy details upfront is crucial; you want OpenAI contractually bound to adhere to applicable laws and your data handling standards when processing your data.
  • Real-World Example – Why This Matters: High-profile data leaks have shown the danger of complacency. In early 2023, Samsung learned this the hard way when engineers accidentally uploaded confidential source code to ChatGPT, thinking it would help with coding; instead, that code became part of ChatGPT’s input logs. Samsung quickly banned employees from using the tool, fearing sensitive data could escape or be seen by others. This incident underlines that even if an AI service isn’t malicious, user missteps can lead to breaches if the contract and platform don’t provide adequate safeguards. A strong confidentiality and data-protection clause in your OpenAI contract creates a legal requirement for the vendor to implement and honour safeguards (and gives you recourse if they fail), adding an extra layer of protection beyond just trusting employees to be careful. A lightweight client-side screening gate, sketched after this list, is one complementary technical control.
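To make that lesson concrete, below is a minimal, illustrative sketch of a client-side screening gate that checks prompts before they leave your environment. The patterns, function names, and the stubbed call site are assumptions for illustration only – this is not an OpenAI feature or contract term, and a real deployment would draw on your organization’s DLP and classification tooling.

```python
import re

# Illustrative patterns for data that should never leave the enterprise
# boundary; a real deployment would source these from DLP/classification
# tooling rather than a hard-coded list.
BLOCKED_PATTERNS = {
    "api_key": re.compile(r"\b(sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def call_enterprise_llm(prompt: str) -> str:
    # Placeholder for your approved API wrapper (e.g., an OpenAI client
    # configured per the negotiated contract). Stubbed for illustration.
    return f"[model response to {len(prompt)} characters of input]"

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

def send_if_clean(prompt: str) -> str:
    """Block prompts that trip a pattern; otherwise forward to the model."""
    violations = screen_prompt(prompt)
    if violations:
        # The prompt never reaches the vendor; log and surface the policy hit.
        raise ValueError(f"Prompt blocked by data policy: {violations}")
    return call_enterprise_llm(prompt)

if __name__ == "__main__":
    print(send_if_clean("Summarize our public press release."))
    # send_if_clean("Here is our key sk-abcdefghij1234567890XYZ")  # raises
```

A gate like this does not replace contract terms – it reduces the odds of the Samsung scenario occurring in the first place, while the negotiated confidentiality clauses govern what happens to anything that does reach the vendor.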

In summary, during pre-negotiation, make data protection a non-negotiable priority. Clearly articulate to OpenAI that your organization requires stringent confidentiality, limited data use, robust security practices, and compliance with relevant privacy regulations. Most of these should align with OpenAI’s stated enterprise policies – your task is to ensure they are explicitly documented in the contract in terms that protect your interests. Don’t hesitate to add specific language or even an exhibit detailing security controls if your security team requires it. It is far easier to get these commitments upfront than to wish you had if something goes wrong later.

Ensuring Intellectual Property Ownership of Inputs and Outputs

Clarity on IP ownership is another pillar of a solid OpenAI contract. As the customer, you want to make sure that anything you send to the AI (your inputs) remains yours and that anything the AI produces for you (outputs) becomes yours to use freely. OpenAI’s standard business terms are quite favourable on this point – they explicitly state that you retain ownership of all input you provide. You own the output generated for you, with OpenAI even assigning any potential rights in that output to you. Nonetheless, you should not take this for granted; spell it out in your negotiated agreement and consider additional nuances:

  • Your Inputs Stay Yours: Include a clause affirming that all data, content, or materials you provide to OpenAI are, and will remain, your property (or under your license, in cases where you’re feeding third-party content you have rights to). OpenAI should have no ownership rights over your prompts, documents, or other input data. At most, the contract can grant OpenAI a very limited license to use your inputs solely to provide the service to you. Such language ensures that OpenAI cannot, for example, compile customer prompts into a dataset or claim any derivative rights to your proprietary information. This is usually straightforward, but review the contract draft to remove or narrow any broad license language. For instance, if OpenAI’s template asks for the right to use your content to “develop and improve services,” strike that out in favour of “use solely to perform the services for Customer”.
  • You Own AI-Generated Outputs: Negotiating output ownership is vital for IP peace of mind. The contract should clearly state that, as between OpenAI and your organization, your organization owns all outputs generated by the AI in response to your prompts. OpenAI’s terms use exactly this approach – they assign to the customer any rights OpenAI may have in the output. By memorializing this, you ensure you have full rights to use, modify, combine, publish, or monetize the AI’s outputs as you see fit, with no royalties or approvals needed. This is crucial if, for example, ChatGPT helps your team write marketing copy, draft code, or create designs; you don’t want to later face a claim that OpenAI retains some ownership or that you can’t treat the AI-produced material as your company’s asset. Double-check for contract wording like “Customer owns all rights, title, and interest in and to the Output” and that OpenAI waives or assigns any of its rights to you. This will cover copyright and any other intellectual property in the generated text, code, or images (for image-generating services).
  • Avoid Unintended Licenses Back to the Vendor: Be cautious about any contract language that grants OpenAI rights to your content beyond what is necessary. Often, cloud contracts include boilerplate language that allows the vendor to use customer content for broader purposes. In this context, resist any request for a broad license on your prompts or outputs in favour of OpenAI. The only acceptable license from you to OpenAI is a narrow one: permission to process your inputs and deliver the outputs to you. Similarly, once you receive the output, OpenAI shouldn’t retain rights to use that output elsewhere. Ideally, the agreement will clarify that OpenAI cannot reuse or publish the outputs it generated for you, nor disclose the specifics of your prompts. Ensure that any license you provide is limited in scope (service delivery only), territory (processing within OpenAI’s systems only), and time (only for the duration required). Keeping these licenses tightly scoped prevents OpenAI from later claiming, for example, that it can analyze your prompts or outputs for its product development or marketing.
  • IP Risks in Outputs – Addressing Third-Party Rights: Owning the output doesn’t automatically guarantee that it is free from others’ IP. One of the trickiest issues with generative AI is that the model may produce content that is similar to or even identical to existing copyrighted material or proprietary code present in its training data. For instance, the AI might generate a few lines of code that happen to match open-source code under a restrictive license, or it might output a paragraph that closely mirrors a published article. In such cases, your company could be accused of IP infringement for using that output, even though you got it from the AI in good faith. OpenAI’s current terms put the onus on the user (customer) to handle this risk – they generally disclaim liability for output content and require users to ensure their use of outputs doesn’t violate any laws or rights. From a negotiation standpoint, you should raise this concern and seek protections: ask OpenAI for at least a warranty or representation that the service isn’t knowingly providing you infringing content and, ideally, an indemnification (we will discuss indemnities in the next section) if a third-party copyright or patent claim arises from the outputs or the AI model itself. OpenAI may not readily agree to broad warranties regarding unpredictable AI output, but highlighting this issue signals its importance. At a minimum, ensure they confirm that, to their knowledge, the model isn’t intentionally trained on unlawfully obtained content and that they will cooperate or assist if an issue is discovered.
  • Example Scenario – IP Ownership in Practice: Imagine your team uses GPT-4 to generate some code that becomes part of your product. Six months later, a software vendor claims that the code mirrors their proprietary code and accuses you of infringement. If your contract clearly states you own the AI-generated code and you negotiated an IP indemnity from OpenAI, your company can turn to OpenAI for defense (assuming the claim stems from the model output and not from something you did). Without those provisions, you would be on your own in the legal battle. Similarly, consider AI-generated text: if ChatGPT produces a marketing paragraph that inadvertently quotes a few lines from a copyrighted article, you technically own that paragraph via contract, but the original publisher still owns their article and could object. This is why ownership alone isn’t enough – it must be coupled with risk mitigation. Your negotiation should therefore pair the freedom to use outputs (ownership rights) with contractual safeguards (warranties, indemnities, or at least internal review processes) to ensure the quality and legality of those outputs.

In short, protect your IP on both fronts: what you give to the AI and what you get from it. The pre-negotiation phase is the time to ensure the contract is clear about IP ownership. These terms cost OpenAI nothing to provide (they don’t want your data or the copyright to your outputs), so any resistance or vague language is a red flag. Nail down explicit clauses for input and output ownership, and you will remove a huge area of potential dispute or uncertainty later. Your organization can then confidently build on AI-generated content, knowing it legally belongs to you.

Restricting Data Use and Preventing Unwanted Model Training

One of the unique risks with AI services is the possibility that your data could be used to further train or refine the vendor’s models, thereby potentially exposing your proprietary information or contributing to a model that serves other customers (including competitors). In the case of OpenAI, they have publicly stated that business customers’ data will not be used for training their models, and their standard terms support this, stating that OpenAI will not use customer content to develop or improve its services. Nonetheless, part of your pre-negotiation strategy must be to explicitly opt out of any data usage for model training or analytics and close any loopholes. Address the following points in your contract discussions:

  • No Training on Customer Data: Insist on a clear contract clause that OpenAI will not use your inputs or outputs to train, retrain, or improve any AI models. The language should be unconditional, covering both direct training and any form of machine learning improvement. OpenAI’s enterprise agreements typically include this by default (it’s a major selling point for ChatGPT Enterprise that your data remains yours and isn’t fed back into the model), but ensure the wording is clear and concise. For example: “OpenAI shall not use Customer’s Content (inputs or outputs) to train, develop, or improve any machine learning or artificial intelligence models.” By locking this down, you prevent scenarios where, say, your prompt or use case inadvertently helps make GPT-4 better for everyone. It also means that if you have highly sensitive data (such as proprietary financial data or trade secrets), there is zero chance it will end up influencing responses given to another user.
  • Explicit Opt-Out Confirmation: Historically, some AI providers required customers to opt out of data training via settings or email requests. OpenAI’s current stance for enterprise is opt-out by default, but you should still codify that you have definitively opted out of any data-sharing for model training. In negotiation, you might include wording such as: “Customer Content will be excluded from any datasets used to train or improve OpenAI’s models. OpenAI will not store or analyze Customer Content beyond what is necessary to fulfill Customer’s specific requests, except as required for legal compliance or security monitoring.” This kind of clause not only reiterates the no-training policy but also addresses a related concern: it limits the storage of your data to only what is needed operationally. Some retention of data may be required for functions such as real-time processing or temporary caching. Still, it should not persist indefinitely in any system that could later be analyzed. By explicitly stating the opt-out, you also avoid any ambiguity if OpenAI updates its policies – your contract will override any default data usage setting.
  • Limit Use to Your Benefit Only: Ensure the contract clearly states that any permissible use of your data by OpenAI is solely to serve your account and its related purposes. For instance, OpenAI might need to temporarily store conversation context to provide continuity in a chat session. Alternatively, you might engage them to fine-tune a model on your data for your exclusive use. These uses are acceptable as long as they’re under your control and for your benefit. The agreement can specify that any such training or tuning is performed exclusively for you, and the resulting tuned model or output belongs to you (and is not incorporated into OpenAI’s general models). Essentially, you can leverage OpenAI’s technology to improve your own outcomes (such as a private, fine-tuned model), but the contract should ensure this doesn’t inadvertently result in OpenAI gaining access to your data to enhance their platform for others. Any broader usage should require a separate explicit agreement. Keeping this principle in mind during negotiation will help draw a bright line between your proprietary uses and the vendor’s platform learning.
  • Anonymized Aggregate Data (If Any): Many cloud service providers request the right to use aggregated or anonymized metadata from customer usage for analytics and improvement. For example, OpenAI might want to track overall usage patterns, performance metrics, or error rates. This can sometimes be acceptable if the data truly cannot identify your organization or reveal your content. Decide internally where you stand on this. A strict stance is to forbid any use of your data, even metadata, beyond servicing your account. A more lenient but cautious stance is to allow only non-content-based data use – e.g., number of API calls, latency metrics – and only if all identifying info is removed (see the telemetry sketch after this list). If you allow any aggregate data collection, define it tightly. For instance: “OpenAI may collect and use usage metrics (e.g., total query volumes, system performance data) to maintain and improve the service, provided that such data does not include any Customer Content or identifiable information and is aggregated to avoid disclosing Customer’s confidential information.” If you prefer zero usage, say so clearly: “OpenAI shall not use any Customer Content or derived data for product development or analytics outside of providing services to Customer.” The negotiation should align with your organization’s privacy philosophy and risk tolerance – ensure that any permitted telemetry is truly harmless and devoid of sensitive information.
  • Verification and Audit Rights: Given the importance of the no-training commitment, some enterprises may want the ability to verify compliance. Direct auditing of a vendor’s AI training pipeline may be impractical, but you could negotiate more flexible mechanisms. One approach is to require an annual written certification from OpenAI that they have complied with the no-training and data use provisions of your contract. Another approach is to tie this obligation into their standard security audits or SOC 2 reports – for instance, by asking that their SOC 2 scope include controls that ensure customer data isn’t used for training. While OpenAI might not allow individual customers to audit their systems due to the multi-tenant nature of the service, they might agree to a third-party audit or certification. The key is to create accountability for the promise. If OpenAI is confident in its controls, it should have no issue affirming them in writing on a regular basis. Even mentioning this expectation in pre-negotiation discussions signals that your company takes the no-training clause very seriously and will hold them to it.
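As a concrete illustration of the “non-content-based data use” stance above, the sketch below records only operational metrics (call counts, latency, failures) and deliberately never touches prompt or output text. The wrapper and metric fields are hypothetical examples, not part of OpenAI’s products or terms; the point is the boundary between content and telemetry, not the specific schema.

```python
import time
from dataclasses import dataclass

@dataclass
class UsageMetrics:
    """Content-free telemetry: counts and timings only, never any text."""
    total_calls: int = 0
    total_latency_s: float = 0.0
    failures: int = 0

    def record(self, latency_s: float, ok: bool) -> None:
        self.total_calls += 1
        self.total_latency_s += latency_s
        if not ok:
            self.failures += 1

METRICS = UsageMetrics()

def timed_call(prompt: str, model_fn) -> str:
    """Invoke the model and record facts about the call, not its content."""
    start = time.perf_counter()
    try:
        result = model_fn(prompt)
        METRICS.record(time.perf_counter() - start, ok=True)
        return result
    except Exception:
        METRICS.record(time.perf_counter() - start, ok=False)
        raise

if __name__ == "__main__":
    reply = timed_call("hello", lambda p: p.upper())  # stand-in model
    print(reply, "| calls:", METRICS.total_calls)
```

Under the stricter “zero usage” stance, even this aggregate telemetry would stay inside your own systems rather than flow to the vendor.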

In summary, preventing your data from being incorporated into OpenAI’s learning process is critical when negotiating with the company. The good news is OpenAI is generally amenable to this (and even markets it as a feature), but don’t leave it to trust or policy documents – bake it into the legally binding contract. By securing robust “no training use” language and related assurances, you greatly reduce the risk of proprietary data bleeding into the wider world or powering someone else’s AI. This also closes one door to potential IP leakage since your data and outputs remain sequestered to your usage alone. As you prepare for negotiations, list this item as a top priority and craft clear wording that accurately reflects your expectations, ensuring there is no room for misinterpretation.

Mitigating IP and Data Risks with Indemnities and Warranties

Even with strong confidentiality and no-training clauses, there are residual risks whenever leveraging AI, chiefly the possibility of IP infringement or other legal harm arising from the technology’s output or behaviour. That’s where indemnification, warranties, and liability clauses come into play. In pre-negotiation, CIOs and legal teams should determine what assurances and remedies they require from OpenAI in the event of issues and then seek to include these provisions in the contract. Here’s how to approach these legal safeguards:

  • IP Infringement Indemnity: This is arguably the most crucial indemnity to secure in an AI contract. An indemnity is a commitment by one party to defend and cover the other’s losses if certain claims arise. You will want OpenAI to indemnify your organization against any claims that the OpenAI service (including the model, its training data, or the outputs it generates for you) infringes on someone’s intellectual property rights. OpenAI’s standard business terms for enterprises reportedly do include an IP infringement indemnity, which is a positive starting point. Make sure your negotiated contract explicitly includes it and verify its scope. Specifically, it should cover claims of copyright, patent, or trademark infringement arising from your use of OpenAI’s services or outputs. For example, if an author alleges that ChatGPT’s answer to your prompt was essentially a paragraph from their copyrighted book, OpenAI would handle the legal defense and any settlement or damages rather than leaving you on the hook. Ensure the indemnity isn’t narrowly drafted – it should encompass issues with the model’s training data (since that’s where copyrighted text or code might have come from) and the output content. Having this indemnity shifts a key risk off your shoulders, recognizing that you don’t control the innards of the AI’s training corpus. In negotiations, don’t be afraid to ask OpenAI to confirm this covers all outputs you receive and any data they provide. Also, clarify any conditions: typically, you must use the service in authorized ways to be eligible (which is fair – if you misuse the AI, they shouldn’t cover you).
  • Other Potential Indemnities: Consider whether other legal risks require indemnity. For instance, what if the AI’s output defames someone or violates a law in some unforeseen way, and your company is sued for using/publishing that output? AI vendors often resist indemnifying those kinds of scenarios, arguing they can’t control what you ask or do with the output. However, you can still raise the issue. At a minimum, ensure the contract’s indemnities cover the core IP issues (which are the most likely litigation risk currently). You might also consider negotiating a product liability indemnity – i.e., if the AI software itself (not the content) causes harm due to a defect, OpenAI would be responsible. An example might be if the AI injects malicious code or viruses into an output that damages your systems, although this is highly unlikely. Still, you want the vendor to stand behind the safety of their product. Similarly, if OpenAI were to experience a data breach that exposed your content and led to claims, an indemnity for breaches could be explored (though vendors typically try to limit liability for data breaches to capped damages, rather than full indemnity). Use your judgment on how far to press beyond IP infringement; the priority is to cover any foreseeable legal disputes related to IP and data misuse, as these are most pertinent to this context.
  • Your Indemnification to OpenAI: Please note that OpenAI may also require you to indemnify them in certain situations, typically when a third party brings a claim due to your use of the service in violation of the contract or applicable law. Commonly, enterprise agreements make the customer responsible if, say, you provide data you had no right to use (e.g., you upload another company’s confidential info without permission and that company sues OpenAI) or if you use the AI to generate unlawful content and someone is harmed. These asks are generally reasonable as long as they’re appropriately scoped. In pre-negotiation, review any proposed customer indemnity clause carefully. Negotiate to narrow it down to scenarios where you are truly at fault, such as breach of the agreement, willful misconduct, or knowing misuse of the service. It should not be so broad that you’re indemnifying OpenAI for anything and everything. For example, if you used the AI exactly as intended and a third party still sues, that should fall under OpenAI’s responsibility (especially if it’s related to the model output). Aim for a balance: OpenAI covers issues arising from their technology and content; you cover issues arising from your misuse or illegal use. When both sides cover the risks under their control, it creates a fair allocation of responsibility.
  • Warranties and Disclaimers: Aside from indemnities, examine the warranties that OpenAI is willing to provide regarding the service and its outputs. Vendors often warrant that the software service will function as described, meet security standards, and so on. In the AI context, most providers are careful not to warrant the accuracy or correctness of outputs (“as-is” output with no guarantees, since AI can be unpredictable). However, you can seek a warranty against known bad behavior, such as OpenAI affirming that, to its knowledge, the service doesn’t contain any malware and that it has not incorporated any data into the model that it knows would violate intellectual property rights. They may also warrant compliance with laws (e.g., export controls and data protection laws), which is important if you operate globally. Be sure to obtain a warranty that the service will comply with any data handling commitments (no training, confidentiality, etc., as promised). Conversely, be aware of OpenAI’s disclaimers. They will likely disclaim liability for how you use the outputs and for any errors in the output. You can’t eliminate all such disclaimers (AI output is probabilistic and sometimes wrong), but do ensure the contract doesn’t disclaim things that contradict the promises made elsewhere. For example, if there’s a blanket disclaimer “OpenAI has no responsibility for output content,” you might need to carve out their indemnity obligations or warranties from that blanket statement. Carefully reconcile the warranty section with the IP/indemnity clauses so you’re not left with an unenforceable promise.
  • Liability Limits: Virtually every vendor contract includes a limitation of liability section, which caps the amount each party can recover in damages. Pay special attention to how this interacts with the IP and data protections you negotiated. Ideally, any indemnification obligations for IP infringement or data breach should be uncapped or have a higher cap than general breaches. Otherwise, if OpenAI’s total liability is capped (for example, at the fees you paid or a fixed dollar amount), that cap might make an indemnity hollow – a large IP lawsuit could exceed the cap, leaving you exposed. In pre-negotiation, identify the acceptable liability cap and determine if you require exceptions. Many enterprise contracts carve out certain things from the cap: e.g., indemnities, confidentiality breaches, and gross negligence may be uncapped or subject to a higher “super cap”. Try to negotiate that IP infringement and data confidentiality breaches by the vendor are outside any low liability cap. You may not obtain unlimited liability from a provider like OpenAI (few vendors agree to unlimited liability for any reason). Still, you could negotiate, for example, 2-3 times the contract value or a reasonable dollar figure for these specific areas. The point is to ensure that if a worst-case scenario occurs, such as a massive IP lawsuit or a significant data leak caused by the vendor, the vendor faces sufficient financial consequences to cover your damages and incentivize prevention. Align this with your risk assessment; if the contract value is small but the potential IP exposure is huge, this is worth fighting for. A brief worked example follows this list.
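To see how a cap can hollow out an indemnity, consider a purely arithmetic sketch. All figures below are hypothetical and chosen only to show the mechanics:

```python
# Hypothetical figures: annual fees, negotiated caps, and a serious IP claim.
annual_fees = 500_000            # what you pay the vendor per year
default_cap = 1 * annual_fees    # common default: liability capped at fees paid
super_cap = 3 * annual_fees      # negotiated carve-out for IP/confidentiality

ip_claim = 5_000_000             # damages plus defense costs in a major IP suit

def uncovered_exposure(claim: float, cap: float) -> float:
    """Portion of a claim the indemnity cannot reach under a given cap."""
    return max(0.0, claim - cap)

print(f"Exposure under default cap:  ${uncovered_exposure(ip_claim, default_cap):,.0f}")
print(f"Exposure under 3x super-cap: ${uncovered_exposure(ip_claim, super_cap):,.0f}")
```

In this scenario, even a 3x super-cap leaves $3.5 million uncovered, which is why many enterprises push to carve IP indemnities out of the cap entirely rather than merely raising the multiplier.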

By locking down indemnities, warranties, and liability terms in the negotiation, you create a legal safety net. Indemnities transfer risk, warranties provide assurance (or at least a standard of performance), and a fair liability structure ensures accountability. These can be complex clauses, so involve your legal counsel and don’t hesitate to engage outside specialists who have negotiated AI contracts. Remember, OpenAI and other AI vendors are aware of these concerns – many have gradually become more willing to offer IP indemnification and clearer terms as enterprise clients demand them. Use that to your advantage by referencing industry standards (e.g., “other leading AI vendors have agreed to X indemnity in their contracts”). The pre-negotiation phase is your opportunity to determine which risks you are unwilling to bear and to develop strong positions on each. With a firm stance and well-defined asks in these legal protections, you can enter negotiations confident that you won’t be signing away your rights or accepting undue risk.

Pre-Negotiation Best Practices and Review Process

Having identified the substantive terms you need (data protection, IP ownership, no training, indemnities, etc.), it’s equally important to manage the process of preparing for and conducting the negotiation. A structured pre-negotiation approach will ensure no critical issue is overlooked and that you engage OpenAI from a position of knowledge and preparedness. Here are some best practices and steps to take before sitting down at the negotiating table:

  • Assemble a Cross-Functional Team: A deal with OpenAI for enterprise AI services is not just an IT procurement – it involves multiple departments, including legal, compliance, security, and business units. Form a team that includes stakeholders from each relevant domain: Legal counsel (for contract language and risk evaluation), IT and data security (for technical and security requirements), Privacy/compliance officers (for data protection concerns), Procurement or sourcing (for commercial terms and vendor management expertise), and representatives of key user groups (e.g., the innovation team or department sponsoring the AI use-case). This team should meet early to establish goals and deal-breakers. By involving everyone up front, you avoid the scenario of discovering a show-stopper issue late in the process. For example, your security lead might insist on a specific certification, or the legal team might identify an IP clause that needs adjustment. Consolidate these internal positions before engaging with OpenAI’s sales or contracting representatives.
  • Educate Yourself on OpenAI’s Standard Terms and Policies: Before you propose any changes, thoroughly review OpenAI’s existing terms (Business Terms, Enterprise Privacy Policy, usage guidelines, etc.). Understanding what OpenAI’s baseline offer includes will inform where you need to negotiate harder. As noted, OpenAI’s standard enterprise terms already include favourable points, such as no data training and customer ownership of outputs. Knowing this, you can focus negotiation time on reinforcing those and adding any missing pieces (such as stronger indemnities or clarifying ambiguities). Print out the terms and mark them up as a team: highlight anything unclear or problematic. Additionally, review OpenAI’s documentation on how they handle data (e.g., retention options, security measures published) and any public statements (such as press releases or blog posts) regarding ChatGPT Enterprise. These can sometimes be used to justify your requests: “Your website says we ‘own and control our data’ – we need that explicitly in the contract clause X.” The more familiar you are with OpenAI’s offerings, the more credible and efficient your negotiation will be.
  • Identify and Prioritize Requirements: With input from your team and knowledge of OpenAI’s defaults, create a list of contractual requirements that are must-haves vs. nice-to-haves. Prioritize issues based on their importance to your risk posture. For example, you might rank confidentiality/no-training clauses and IP ownership as critical (non-negotiable), an IP indemnity as very important, and perhaps something like an uptime SLA or pricing protections as lower priority (since our focus here is IP/data, we assume those are top concerns). Having a prioritized list will help you during negotiations to focus on what matters most and identify areas where you have flexibility. It can be useful to create a checklist or matrix of these points (see the tracking sketch after this list), which you can use to track which items have been addressed in the draft contract and which still require attention. Ensure “both sides” of the data/IP coin are on the list: the measures to protect you (e.g., the vendor obligations we discussed) and any obligations on your side (e.g., you might have to agree not to input certain regulated data without informing them). Being clear on your ask for each item (“What exactly do we want this clause to say or protect against?”) is vital.
  • Draft or Collect Proposed Language: It often speeds up negotiations if you come prepared with specific clause language or an addendum for key issues. Your legal team may draft a short “AI Services Addendum” that contains all your essential modifications – for instance, a tailored confidentiality clause, a data usage clause, and so on. Alternatively, if you have access to industry templates or guidance (some companies share anonymized AI contract addenda, and organizations like IAPP or law firms publish example clauses), use those as starting points. Having a written proposal for how to handle data ownership or an indemnity provides OpenAI with a concrete response rather than merely raising a conceptual issue. That said, remain open to using OpenAI’s wording if it’s sufficient; you don’t want to reinvent the wheel if their contract already covers a point well. Use redlines on their draft to make your changes clear and concise. During the pre-negotiation phase, running these drafts by your internal stakeholders or external advisors (more on this below) can validate that they’re covering the right bases.
  • Engage Independent Experts (Licensing Advisors or Counsel): As a CIO or tech leader, you may lean on your internal legal team for contract reviews, but given the novelty of AI contracts, it’s often wise to consult an external specialist who has negotiated similar deals. Firms like Redress Compliance (among others) specialize in software and AI licensing negotiations and can provide insight into what terms are reasonable or what other clients have achieved. They can help you avoid vendor-biased language and ensure you’re not missing any hidden pitfalls. Bringing in an independent licensing advisor or outside counsel early – even during your preparation of requirements – can strengthen your negotiation position. They might point out, for instance, “OpenAI usually includes X in their DPAs; make sure to get that,” or “We’ve seen vendors concede Y indemnity when pressed.” This kind of market knowledge is invaluable. It also signals to OpenAI that you are taking the process seriously and have done your homework, potentially leading them to be more forthcoming. Budget permitting, don’t hesitate to get a professional second opinion on the draft contract or your negotiation strategy.
  • Internal Review and Approval Process: Plan out how you will review drafts and approve the final terms internally. For example, designate who on the team will be the primary point of contact for communicating with OpenAI’s negotiators (typically someone in procurement or legal). After each contract draft from OpenAI, conduct a roundtable review with your team to ensure that all your priority items are accurately reflected. Use the checklist you prepared to mark off items or flag open issues. Keep meeting notes on any concessions or trade-offs discussed. It’s also important to involve higher-level executives at decision points, especially if there may be risk trade-offs. For instance, if OpenAI refuses to budge on something your team has marked as critical, you might escalate the issue to the CIO, General Counsel, or even the board, if necessary, to decide whether the deal proceeds or if alternative approaches (such as not using certain data with the service) are acceptable. Having a clear approval chain prevents last-minute surprises. Additionally, make sure the final review covers integration points: ensure the Master Agreement, any DPAs, and other schedules (like a Statement of Work, if any) are consistent, and none override the protections you negotiated (be cautious of order-of-precedence clauses).
  • Be Mindful of Vendor Tactics and Red Flags: During negotiation, remain vigilant for “red flag” clauses or omissions. If OpenAI’s draft is missing a standard protection (such as not explicitly mentioning the confidentiality of your data), that’s a red flag to address immediately. If you see broad language granting OpenAI rights to “use content to improve services” or overly strict limitations on your use of outputs, call those out early. Sometimes, terms can be hidden in referenced documents (such as a usage policy) – be sure to review those and ensure that nothing there contradicts your negotiated understanding. For example, if a usage policy states, “OpenAI may monitor and use conversations for content moderation and improvement,” you would need to reconcile that with the no-training clause (it might be related to abuse monitoring, which could be acceptable if limited – clarify it). If OpenAI is unwilling to include key protection, assess how critical that is: Is it a deal-breaker, or can it be mitigated in another way? Knowing your priorities helps here. It’s often useful to explain the reason behind your requests rather than just saying, “We need X clause.” If OpenAI understands you have regulatory obligations or security policies requiring something, they might be more flexible. Nonetheless, if something truly essential cannot be agreed upon, you may need to consider walking away or seeking alternative solutions (or compensating controls). Part of pre-negotiation is deciding in advance what your fallback plan is if certain terms can’t be obtained.
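To make the checklist idea from the prioritization step concrete, here is a minimal sketch of a negotiation tracker. The clauses, priorities, and statuses are illustrative examples drawn from this playbook; many teams keep the same structure in a spreadsheet, and the code form simply shows one unambiguous way to encode and query it:

```python
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):
    MUST_HAVE = 1
    IMPORTANT = 2
    NICE_TO_HAVE = 3

class Status(Enum):
    OPEN = "open"
    IN_DRAFT = "reflected in draft"
    AGREED = "agreed"

@dataclass
class ContractItem:
    clause: str
    priority: Priority
    status: Status = Status.OPEN
    notes: str = ""

# Illustrative positions drawn from the priorities in this playbook.
CHECKLIST = [
    ContractItem("No training on Customer Content", Priority.MUST_HAVE),
    ContractItem("Customer owns all inputs and outputs", Priority.MUST_HAVE),
    ContractItem("Confidentiality covers inputs and outputs", Priority.MUST_HAVE),
    ContractItem("Retention/deletion under customer control", Priority.MUST_HAVE),
    ContractItem("IP infringement indemnity outside the cap", Priority.IMPORTANT),
    ContractItem("Signed DPA with GDPR/CCPA clauses", Priority.IMPORTANT),
    ContractItem("Uptime SLA and pricing protections", Priority.NICE_TO_HAVE),
]

def open_must_haves(items: list[ContractItem]) -> list[str]:
    """Unagreed must-have clauses form the escalation agenda."""
    return [i.clause for i in items
            if i.priority is Priority.MUST_HAVE and i.status is not Status.AGREED]

if __name__ == "__main__":
    for clause in open_must_haves(CHECKLIST):
        print("UNRESOLVED MUST-HAVE:", clause)
```

After each draft from OpenAI, updating the status fields gives the team an instant view of which critical items remain open before the next session.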

By following these best practices, you create a disciplined approach to negotiating your OpenAI contract. You’ll enter talks with a well-informed team, a clear list of demands, and expert backup, which significantly improves the chances of a successful outcome. Remember, the negotiation is not just about legal fine print; it’s about aligning the service with your enterprise’s risk tolerance and values. The more preparation you do, the smoother the negotiation will go, and the final contract will reflect a partnership that enables you to use OpenAI’s powerful tools with confidence that your data is safe and your IP rights are protected.

Recommendations

In the pre-negotiation phase, CIOs and their teams should take the following prioritized actions to ensure enterprise data and IP ownership are adequately protected when contracting with OpenAI:

  1. Form a Cross-Functional Negotiation Team: Assemble stakeholders from IT, security, legal, compliance, and procurement early on. Ensure that everyone understands the AI initiative and contributes their requirements (e.g., security standards, privacy must-haves, IP risk concerns) before talks with OpenAI begin. A unified internal stance will prevent gaps and last-minute surprises.
  2. Inventory and Classify Data for AI Use: Identify the types of data you plan to send to OpenAI’s services (e.g., code, customer data, strategic documents). Classify this data by sensitivity and regulatory status. Use this analysis to determine upfront whether certain high-risk data should be excluded or anonymized, and to inform the contract (for example, if personal data is involved, a Data Processing Addendum is non-negotiable). Knowing your data exposure will strengthen your negotiating position on confidentiality and privacy clauses.
  3. Review OpenAI’s Standard Terms and Policies: Obtain and study OpenAI’s Business Terms, Enterprise Privacy commitments, Usage Policies, and any draft contract they propose. Map out how these default terms address data usage, ownership, confidentiality, and liability. Highlight any provisions that are unclear or insufficient for your needs. This preparation enables you to acknowledge areas where OpenAI’s standard is acceptable while zeroing in on areas that require negotiation.
  4. Define Clear Data Protection Demands: Be ready to insist on robust confidentiality and data use clauses. This includes designating all customer-provided input and AI output as confidential information and prohibiting OpenAI from sharing or using it for any purpose beyond serving your account. Come prepared with language that gives you control over data retention and deletion (e.g., the right to require prompt deletion of your data upon request or contract end). By articulating these requirements clearly, you set the expectation that data safeguarding is a top priority.
  5. Secure Explicit IP Ownership Terms: Ensure the contract explicitly states that your organization retains ownership of all inputs you provide and that you own all outputs generated by the AI for your use. OpenAI’s agreement should include a clause assigning any of OpenAI’s rights in the output to you. Verify that no contract language undermines this (such as any vendor license to use your content beyond providing the service). This ensures that you can use and build upon AI-generated material with full rights, thereby preventing future IP disputes over ownership.
  6. Include a “No Training on Customer Data” Clause: Even if OpenAI’s policy is not to use business data for training, make it a contractual obligation. Write in a clause that OpenAI will not use your inputs or outputs to train or improve any AI models or algorithms. This should also cover the use of your data for any secondary analytics or product development without your permission. Getting this in writing protects you against policy changes and ensures your proprietary data won’t leak into any model accessible by others.
  7. Negotiate Data Retention and Audit Controls: Establish your right to control how long data is stored and to confirm proper handling. For example, you might mandate that no prompts or conversations are stored beyond 30 days (or are not stored at all) and that OpenAI must delete data upon your request. Additionally, consider including a right to periodic certification or audit of these practices (such as receiving a report that confirms none of your data was used in training). This level of control and oversight is key to maintaining ongoing trust in the service.
  8. Attach or Incorporate a Data Processing Addendum (DPA): If any personal or sensitive data is involved, ensure a DPA is signed along with the main contract. The DPA should outline privacy obligations, including GDPR compliance, OpenAI’s role as a processor under your instructions, transparency regarding sub-processors, and security measures. Don’t rely on generic promises – having a tailored DPA in place is often legally required and will enforce how OpenAI must handle personal data (e.g., users’ prompts that include personal information).
  9. Obtain IP Infringement Indemnification: Push for an indemnity clause where OpenAI agrees to defend and indemnify your company if the AI service or its outputs are claimed to infringe someone’s IP rights. This typically covers copyright or patent claims arising from the model’s training data or generated content. Given the uncertainties in generative AI outputs, this protection is vital – it shifts the legal risk back to the vendor if, for example, the AI accidentally reproduces part of a copyrighted text or code. Ensure that this indemnity covers the full scope of potential IP issues and that an overly small liability cap doesn’t nullify it.
  10. Evaluate and Raise Liability Limits: Scrutinize any limitations of liability in the contract. Negotiate for higher caps or uncapped liability on critical items, such as confidentiality breaches or IP indemnification. For instance, ask that the IP indemnity obligations be exempt from the standard liability cap, so that OpenAI will fully cover a serious claim without being limited by a low ceiling. The goal is to avoid a scenario where you think you’re protected (via indemnity or breach clause) only to find the contract’s liability cap renders that protection largely meaningless in a significant incident.
  11. Demand Security and Compliance Assurances: Discuss and include any necessary security requirements. At a minimum, obtain commitments that OpenAI maintains industry-standard security certifications (e.g., SOC 2 Type II) and adheres to best practices (such as encryption, access controls, and monitoring). If your company has specific compliance requirements (such as data residency or audit support), bring these to the attention of the team early. For example, if you require that all data remain in certain jurisdictions or that you conduct a security review of OpenAI’s controls via their trust portal, include those requirements. Ensuring security obligations are contractually documented helps protect your data and satisfies regulators or auditors that you engaged in due diligence.
  12. Engage an Independent Advisor or Legal Expert: Before finalizing any agreement, have an outside expert review the terms. An independent licensing advisor or experienced tech lawyer (such as those from Redress Compliance or similar firms) can identify any hidden risks or opportunities for better terms. They might suggest tweaks in wording or confirm that the negotiated protections meet industry best practices. Their stamp of approval can validate your approach, and if they raise red flags, you can address them with OpenAI before signing. This extra step is worthwhile, given the rapidly evolving nature of AI contracts.
  13. Develop an Internal AI Use Policy: In parallel with contract negotiations, define how your organization will safely use OpenAI’s services once available. While this is an internal step (not a contract clause), it’s part of pre-negotiation preparation. Set guidelines for employees on what data can and cannot be input into ChatGPT or the API (e.g., no uploading source code or personal customer data without approval); a minimal policy-table sketch follows this list. Establish review processes for sensitive AI-generated outputs before they are made publicly available. By having these policies ready, you can also verify that the contract supports them (for example, if your policy forbids inputting certain data, you may not need to negotiate certain privacy terms as heavily, or vice versa). Aligning contracts and policies ensures that you not only secure the right terms from OpenAI but also use the AI in a way that minimizes risk on your side.
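As a starting point for such a policy, the sketch below maps internal data classes to AI-use decisions and fails closed on anything unclassified. The categories and rules are hypothetical examples; align them with your own classification scheme (from Recommendation 2) and with the terms you ultimately negotiate:

```python
from enum import Enum

class Handling(Enum):
    ALLOWED = "may be sent to the AI service"
    REDACT_FIRST = "must be anonymized/redacted before use"
    PROHIBITED = "must never be sent"

# Illustrative mapping from internal data classes to AI-use decisions.
AI_USE_POLICY = {
    "public_marketing_copy": Handling.ALLOWED,
    "internal_docs_general": Handling.REDACT_FIRST,
    "customer_personal_data": Handling.REDACT_FIRST,  # also requires a signed DPA
    "proprietary_source_code": Handling.PROHIBITED,
    "trade_secrets_and_financials": Handling.PROHIBITED,
}

def check_use(data_class: str) -> Handling:
    """Fail closed: unknown or unclassified data is treated as prohibited."""
    return AI_USE_POLICY.get(data_class, Handling.PROHIBITED)

if __name__ == "__main__":
    for cls in ("public_marketing_copy", "unclassified_new_dataset"):
        print(f"{cls}: {check_use(cls).value}")
```

Encoding the policy this way makes it enforceable in tooling (for instance, behind the screening gate sketched earlier) rather than living only in a document employees may not read.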

By following these recommendations in order, CIOs will cover the full spectrum of pre-negotiation activities – from team preparation and understanding OpenAI’s landscape to nailing down specific contract clauses and safeguards. The result will be a well-negotiated agreement that enables your enterprise to harness OpenAI’s capabilities confidently, knowing that your data remains protected and your IP rights remain firmly in your control.

Author

  • Fredrik Filipsson

    Fredrik Filipsson brings two decades of Oracle license management experience, including a nine-year tenure at Oracle and 11 years in Oracle license consulting. His expertise extends across leading IT corporations like IBM, enriching his profile with a broad spectrum of software and cloud projects. Filipsson's proficiency encompasses IBM, SAP, Microsoft, and Salesforce platforms, alongside significant involvement in Microsoft Copilot and AI initiatives, improving organizational efficiency.
