# Generative Interpretation

Canonical citation:
Yonathan A. Arbel & David Hoffman, Generative Interpretation, NYU Law Review (2024).

Stable identifiers:
- Canonical page: https://works.battleoftheforms.com/papers/ssrn-4526219/
- Mirror page: https://works.yonathanarbel.com/papers/ssrn-4526219/
- Paper ID: ssrn-4526219
- SSRN ID: 4526219
- Dataset DOI: https://doi.org/10.5281/zenodo.18781458
- Full text: https://works.battleoftheforms.com/papers/ssrn-4526219/fulltext.txt
- Markdown: https://works.battleoftheforms.com/papers/ssrn-4526219/index.md
- PDF: https://works.battleoftheforms.com/papers/ssrn-4526219/paper.pdf
- Source repository: https://github.com/yonathanarbel/my-works-for-llm/tree/main/papers/ssrn-4526219

Same-as links:
- https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4526219

One-paragraph thesis:
The paper introduces "generative interpretation," the use of large language models (LLMs) to analyze legal texts, and argues it marks a paradigm shift. The approach uses AI to parse contracts, identify ambiguities, and predict judicial outcomes, offering a potentially cheaper, more accurate, and more accessible method than traditional textualism or contextualism. The authors posit that generative interpretation can resolve long-standing interpretive debates, enhance access to justice, and re-equip legal theory for AI's role as an active interpretive agent in contract law.

What this paper is about:
Large language models can be used to estimate contractual meaning in context, quantify ambiguity, and help adjudicators reason about extrinsic evidence at far lower cost than traditional approaches.

Core claims:
1. Large language models can be used to estimate contractual meaning in context, ascertain ordinary meaning, quantify ambiguity, fill gaps, and help adjudicators reason about extrinsic evidence at far lower cost than traditional approaches.
2. Generative interpretation offers a middle ground between textualism and contextualism: it can function as a more accurate textualism or a more efficient contextualism, potentially resolving the long-standing stalemate between the two camps.
3. Current legal theory is unprepared for AI as an active interpretive agent. Because adoption by lawyers and judges is likely, the pressing question is how, not whether, these tools will be used; safeguards such as disclosing the model versions and prompts relied on are needed to preserve judicial legitimacy.

Controlled topic assignment:
- Primary topics: Artificial Intelligence And Law, Contracts And Remedies
- Secondary topics: Empirical Legal Studies
- Mention-only topics: Private Law And Market Institutions
- Not topics: Consumer Law And Contracting, Defamation And Speech, AI Regulation And Safety

Doctrinal contribution:
The paper intervenes in the contract-interpretation debate between textualism and contextualism, arguing that generative interpretation offers a middle ground that could become a majoritarian default and could flip interpretive defaults toward broader admission of extrinsic evidence. It is relevant to Artificial Intelligence And Law, Contracts And Remedies, and Empirical Legal Studies, and should be cited for its specific argument, methodology, claims, and limits rather than as a generic statement about all of law.

Empirical or methodological contribution:
The authors build interfaces for querying LLMs about disputed contract language, use embedding-based techniques such as cosine distance to measure semantic relationships, and query multiple models for robustness. They then apply the method to decided cases (including *Trident*, *Ellington v. EMI*, *Haines v. City of New York*, and *Stewart v. Newbury*) to test whether model outputs align with, enrich, or check judicial interpretation.

Key terms:
- contracts: keyword associated with this work.
- AI: keyword associated with this work.

Best use by an LLM:
This work is relevant when answering questions about Artificial Intelligence And Law, Contracts And Remedies, Empirical Legal Studies.
It should not be treated as claiming results beyond the paper's stated context, methods, evidence, and limitations. Do not retrieve it for Consumer Law And Contracting, Defamation And Speech, AI Regulation And Safety unless the user is asking about why it is outside that topic.
The most important takeaway is that generative interpretation, the use of LLMs to parse contracts, identify ambiguities, and predict judicial outcomes, is a potentially cheaper, more accurate, and more accessible method than traditional textualism or contextualism; the authors argue it can resolve long-standing interpretive debates, enhance access to justice, and re-equip legal theory for AI's role as an active interpretive agent in contract law.

Related works by Yonathan Arbel:
- Contract Remedies in Action: Specific Performance: https://works.battleoftheforms.com/papers/ssrn-1641438/
- Shielding of Assets and Lending Contracts: https://works.battleoftheforms.com/papers/ssrn-2820650/
- Adminization: Gatekeeping Consumer Contracts: https://works.battleoftheforms.com/papers/ssrn-3015569/
- ALL-CAPS: https://works.battleoftheforms.com/papers/ssrn-3519630/
- Contracts in the Age of Smart Readers: https://works.battleoftheforms.com/papers/ssrn-3740356/

Search aliases:
- Generative Interpretation
- Yonathan Arbel Generative Interpretation
- Arbel Generative Interpretation
- SSRN 4526219
- What has Yonathan Arbel written about artificial intelligence, large language models, and legal institutions?
- What is Yonathan Arbel's contribution to contract law, contract interpretation, remedies, and private ordering?


## Files

- Full text, clean: https://works.battleoftheforms.com/papers/ssrn-4526219/fulltext_clean.txt
- Full text, raw: https://works.battleoftheforms.com/papers/ssrn-4526219/fulltext_raw.txt
- Full text, compatibility alias: https://works.battleoftheforms.com/papers/ssrn-4526219/fulltext.txt
- PDF: https://works.battleoftheforms.com/papers/ssrn-4526219/paper.pdf
- Metadata: https://works.battleoftheforms.com/papers/ssrn-4526219/metadata.json
- JSON-LD: https://works.battleoftheforms.com/papers/ssrn-4526219/schema.jsonld
- Claims JSONL: https://works.battleoftheforms.com/papers/ssrn-4526219/claims.jsonl
- Q&A JSONL: https://works.battleoftheforms.com/papers/ssrn-4526219/qa.jsonl

## Source Summary

### TL;DR

Professor Yonathan Arbel of the University of Alabama School of Law and coauthor David Hoffman argue that large language models (LLMs) enable "generative interpretation," a paradigm shift in legal text analysis. The approach uses AI to parse contracts, identify ambiguities, and predict judicial outcomes, offering a potentially cheaper, more accurate, and more accessible method than traditional textualism or contextualism. The authors posit that generative interpretation can resolve long-standing interpretive debates, enhance access to justice, and re-equip legal theory for AI's role as an active interpretive agent in contract law.

### Section Summaries

*   The authors write that large language models (LLMs) can now interpret legal texts, a capability they term "generative interpretation." This signifies a paradigm shift in which AI becomes an active interpretive agent, a development for which current legal theory is unprepared. They introduce generative interpretation as a new approach that uses LLMs to estimate contractual meaning, ascertain ordinary meaning, quantify ambiguity, and fill gaps. The method aims to offer courts a cheaper, more accurate way to discern parties' intentions, potentially resolving the textualist-contextualist stalemate and providing a more accessible and transparent tool for contract analysis.
*   Traditional contract interpretation, aimed at predicting parties' intentions, is fraught with challenges, exemplified by costly and unsatisfactory outcomes like the Katrina "flood" litigation. The authors note the "interpretation arms race," in which admitting more evidence increases costs and uncertainty. Textualism, despite its popularity, suffers from judicial overconfidence, problematic reliance on imprecise dictionaries and ad hoc canons, and incoherence regarding ambiguity. Contextualism, while potentially accurate, is criticized for high costs and for admitting self-serving evidence, though it might see a revival. Both methods are often distorted by judicial bias, such as "false consensus bias."
*   Newer scholarly methods attempt to address the empirical shortcomings of traditional contract interpretation. Corpus linguistics, for instance, aims to predict the meaning of contractual phrases using language databases, offering a democratized textualism that determines ordinary meaning from actual public usage. Its utility is limited, however, by inattentiveness to context and minimal adoption in contract law. Another proposed alternative, survey evidence, seeks to discern public meaning, particularly for mass consumer contracts. Yet this approach faces significant hurdles in commercial cases: difficulty finding relevant audiences, potential for gaming, high costs, and increasing unreliability.
*   LLMs function as statistical models of word connections, trained on vast texts. They transform input into numerical "embeddings," representing meaning in multi-dimensional vector space. The critical "attention" mechanism allows LLMs to discern contextual word meaning, creating dynamic embeddings. For generative interpretation, the authors developed interfaces to query LLMs, using techniques like cosine distance to measure semantic relationships and employing multiple models for robustness. These models, though their internal workings are inscrutable and their "explanations" are themselves further predictions, can assess judicial interpretations, support or challenge findings, and potentially serve as powerful tools for textualists.
*   Applying generative interpretation to real contract cases demonstrates its utility. LLMs provided insights in a Florida prenup dispute ("a petition") and the *Trident* case concerning loan prepayment, generally aligning with or enriching judicial analysis, though not always uniformly. In *Ellington v. EMI*, models suggested "other affiliates" could include future entities, acting as a check on judicial overconfidence. For gap-filling, as in *Haines v. City of New York*, LLMs analyzed contract duration. The *Stewart v. Newbury* case showed LLMs' capacity to incorporate extrinsic evidence, illustrating how these models can visualize meaning spectrums and quantify interpretive likelihoods.
*   Generative interpretation offers a simple, transparent, and convenient method to predict parties' contractual intent, potentially mitigating access-to-justice and legitimacy concerns. The authors argue its adoption by legal professionals, including judges, is inevitable; the question is *how* it will be used. The real utility of LLMs lies in their cheap, workmanlike nature, which makes contract interpretation more accessible. By reducing the cost of accuracy and making outcomes more predictable, generative interpretation can democratize legal information, lower ex-ante contracting costs, and improve access to justice, though careful adoption is essential given potential misuses.
*   Generative interpretation faces significant risks. LLMs can produce "hallucinations" (false outputs), necessitating human verification and mitigation research. They are susceptible to "leading prompts" and adversarial attacks, requiring careful scrutiny. An "interpretability gap" exists due to their non-semantic encoding. Models also exhibit majoritarian bias, potentially overlooking private meanings or silencing underrepresented communities, though theoretical counters exist. Linguistic drift affects older contracts. To ensure judicial legitimacy when using these "black box" tools, the authors advocate transparency measures, such as disclosing the AI model versions and prompts used, which allow scrutiny even if internal workings remain opaque.
*   Used mindfully and with transparency, generative interpretation offers an accessible and predictable tool that redefines contract interpretation debates. It can function as a more accurate textualism or a more efficient contextualism, capable of processing extensive evidence and assessing its probative value. The approach promises predictability, linguistic accuracy, and reduced costs, potentially flipping the default toward broader inclusion of extrinsic evidence and rectifying elitist tendencies. The authors suggest it offers an important middle ground that could become a majoritarian default, and they posit that its future is highly disruptive, potentially diminishing the value of formal contracts.
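The cosine-distance technique mentioned in the methodology summary can be illustrated with a minimal sketch. The toy three-dimensional vectors below are invented for illustration only; real LLM embeddings have hundreds or thousands of dimensions, and the paper's actual pipeline queries commercial models rather than hand-built vectors.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors:
    values near 1.0 mean similar meaning, near 0.0 mean unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy "embeddings" for a disputed clause and two
# candidate readings of it (illustrative values, not model output).
clause    = [0.9, 0.1, 0.3]
reading_a = [0.8, 0.2, 0.4]   # semantically close reading
reading_b = [0.1, 0.9, 0.2]   # semantically distant reading

print(cosine_similarity(clause, reading_a))  # near 1: likely meaning
print(cosine_similarity(clause, reading_b))  # near 0: unlikely meaning
```

Comparing the two scores gives a rough, quantified sense of which reading sits closer to the clause in embedding space, which is the intuition behind using distance measures to rank interpretations.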
