by Charu A. Chandrasekhar, Avi Gesser, and Adam Shankman

Left to right: Charu A. Chandrasekhar, Avi Gesser, and Adam Shankman (photos courtesy of Debevoise & Plimpton LLP)
We acknowledge it is a little early to call the biggest AI challenge of 2026, but we are fairly confident that NDAs and other contractual use limitations are about to become a significant problem for enterprise AI adoption.
As AI model capabilities converge and plateau, improved GenAI performance (especially for law firms, consultants, asset managers, insurance companies, and financial services firms) will come from providing the models with high-quality context (e.g., proprietary, relevant documents). Indeed, many of the new features recently announced by frontier LLM providers like OpenAI and Anthropic are designed to give the models access to high-quality, proprietary internal firm context from work emails, SharePoint sites, databases, customer service calls, and so on. But often, the firms that want to use these materials do not clearly own them, or the right to use them in this way. Many of these materials were provided by clients, customers, or other third parties, with use conditions attached to them. In particular, NDAs, engagement letters, and contractual terms and conditions may place significant limitations on how these documents can be used.
First, there are provisions that expressly restrict firms’ ability to use AI with client data. Many of these provisions were written in 2023 and 2024, when most law firms did not have access to enterprise AI models, so the concern was that the AI models would train on the client’s data, compromising its confidentiality. Essentially, these clauses provide that “you are prohibited from using AI with our data without our consent.”
There are also many provisions that were drafted either before GenAI was available or without the use of GenAI in mind, but that may still apply to the use of GenAI with client data. These include restrictions relating to use limitations, technical segregation, data alteration, data dissemination, data destruction, and IP rights.
Adding to the complexity of what is already a very complicated analysis, the contracts that govern the datasets are often not standardized and govern numerous pieces of data, so determining which clauses apply to particular datasets can be very difficult. There may be hundreds or even thousands of applicable contracts. For example, law firm engagement letters may be somewhat uniform, but most firms are also subject to hundreds of outside counsel guidelines that are all very different from one another in both substance and form.
To illustrate the point, suppose an insurance company wants to use AI to re-price its auto insurance in a particular city, neighborhood by neighborhood, using a huge quantity of data that may be relevant to accident or theft claims in each location. It has collected or purchased data relating to weather conditions, road construction, crime statistics, past insurance claims, telematics, vandalism frequency by make and model of car, drone footage with analytics, and so on. Each dataset may be subject to several contracts, and therefore several possible restrictions, which may vary by provider, by time period, and by location.
Debevoise has developed a protocol for these kinds of big data projects that involves collecting the contracts, organizing them using bespoke AI structuring solutions, identifying the applicable data restrictions, mapping the data to the contracts, and then addressing those restrictions through several creative and practical mitigation options.
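For readers who want to see the mapping step in concrete terms, here is a minimal sketch in Python of the underlying bookkeeping problem. The contract names, clause identifiers, and dataset labels are entirely hypothetical, and this illustrates the many-to-many lookup in the abstract, not Debevoise’s actual tooling: each dataset may be governed by several contracts, and each contract may impose several restrictions, so answering “what restricts this dataset?” means aggregating clauses across every governing contract.

```python
from dataclasses import dataclass, field

@dataclass
class Restriction:
    clause_id: str          # hypothetical clause label, e.g., "NDA §4(b)"
    kind: str               # e.g., "use limitation", "technical segregation"
    requires_consent: bool  # True if client consent can unblock the use

@dataclass
class Contract:
    name: str
    restrictions: list[Restriction] = field(default_factory=list)

# Hypothetical mapping: each dataset is governed by one or more contracts.
dataset_contracts: dict[str, list[Contract]] = {
    "telematics_2024": [
        Contract("Vendor data license",
                 [Restriction("DL §7", "use limitation", True)]),
    ],
    "past_claims": [
        Contract("Client NDA",
                 [Restriction("NDA §4(b)", "data dissemination", True)]),
        Contract("Engagement letter",
                 [Restriction("EL §2", "technical segregation", False)]),
    ],
}

def applicable_restrictions(dataset: str) -> list[Restriction]:
    """Aggregate every restriction imposed by any contract governing the dataset."""
    return [r for contract in dataset_contracts.get(dataset, [])
            for r in contract.restrictions]

# Triage: separate restrictions that consent can cure from those needing other mitigation.
for ds in dataset_contracts:
    curable = [r.clause_id for r in applicable_restrictions(ds) if r.requires_consent]
    hard = [r.clause_id for r in applicable_restrictions(ds) if not r.requires_consent]
    print(f"{ds}: consent may cure {curable}; other mitigation needed for {hard}")
```

In practice, the hard part is not this lookup but populating the map in the first place: extracting the relevant clauses from hundreds of non-standardized contracts, which is where the AI structuring tools described above come in.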
Unlocking the value of AI will increasingly involve the messy exercise of assessing and addressing contractual restrictions on high-quality internal proprietary data. This process can be a time-consuming and complicated analysis, which often involves managing multiple different legal, technical, business, and reputational risks. The challenges can be significant and are not the kind of problems that get easier over time if ignored, but they are solvable.
Charu A. Chandrasekhar and Avi Gesser are Partners, and Adam Shankman is an Associate, at Debevoise & Plimpton LLP. This post first appeared on the firm’s blog.
The views, opinions and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author(s) and any liability with regard to infringement of intellectual property rights remains with the author(s).