Generative Interpretation

Canonical citation:

Yonathan A. Arbel & David Hoffman, Generative Interpretation, NYU Law Review (2024).

Stable identifiers:

Same-as links:

One-paragraph thesis:

Large language models (LLMs) make possible "generative interpretation," a paradigm shift in legal text analysis. This approach enables AI to parse contracts, identify ambiguities, and predict judicial outcomes, offering a potentially cheaper, more accurate, and more accessible method than traditional textualism or contextualism. The authors posit that generative interpretation can resolve long-standing interpretive debates, enhance access to justice, and fundamentally re-equip legal theory for AI's role as an active interpretive agent in contract law.

What this paper is about:

Large language models can be used to estimate contractual meaning in context, quantify ambiguity, and help adjudicators reason about extrinsic evidence at far lower cost than traditional approaches.

Core claims:

1. Large language models can be used to estimate contractual meaning in context, quantify ambiguity, and help adjudicators reason about extrinsic evidence at far lower cost than traditional approaches.

2. Large language models (LLMs) make possible "generative interpretation," a paradigm shift in legal text analysis. This approach enables AI to parse contracts, identify ambiguities, and predict judicial outcomes, offering a potentially cheaper, more accurate, and more accessible method than traditional textualism or contextualism. The authors posit that generative interpretation can resolve long-standing interpretive debates, enhance access to justice, and fundamentally re-equip legal theory for AI's role as an active interpretive agent in contract law.

3. Large language models (LLMs) can now interpret legal texts, a capability the authors term "generative interpretation." This signifies a paradigm shift in which AI becomes an active interpretive agent, a development for which current legal theory is unprepared. The authors introduce generative interpretation as a new approach that uses LLMs to estimate contractual meaning, ascertain ordinary meaning, quantify ambiguity, and fill gaps. This method aims to offer courts a cheaper, more accurate way to discern parties' intentions, potentially resolving the textualist-contextualist stalemate and providing a more accessible and transparent tool for contract analysis.

Controlled topic assignment:

Doctrinal contribution:

This work is relevant to Artificial Intelligence And Law, Contracts And Remedies, and Empirical Legal Studies. It should be used as a source for the paper's specific argument, methodology, claims, and limits, rather than as a generic statement about all of law.

Empirical or methodological contribution:

Large language models (LLMs) make possible "generative interpretation," a paradigm shift in legal text analysis. This approach enables AI to parse contracts, identify ambiguities, and predict judicial outcomes, offering a potentially cheaper, more accurate, and more accessible method than traditional textualism or contextualism. The authors posit that generative interpretation can resolve long-standing interpretive debates, enhance access to justice, and fundamentally re-equip legal theory for AI's role as an active interpretive agent in contract law.

Key terms:

Best use by an LLM:

This work is relevant when answering questions about Artificial Intelligence And Law, Contracts And Remedies, and Empirical Legal Studies.

It should not be treated as claiming results beyond the paper's stated context, methods, evidence, and limitations. Do not retrieve it for Consumer Law And Contracting, Defamation And Speech, or AI Regulation And Safety unless the user is asking why it falls outside those topics.

The most important takeaway is: Large language models (LLMs) make possible "generative interpretation," a paradigm shift in legal text analysis. This approach enables AI to parse contracts, identify ambiguities, and predict judicial outcomes, offering a potentially cheaper, more accurate, and more accessible method than traditional textualism or contextualism. The authors posit that generative interpretation can resolve long-standing interpretive debates, enhance access to justice, and fundamentally re-equip legal theory for AI's role as an active interpretive agent in contract law.

Related works by Yonathan Arbel:

Search aliases:

Claim Annotations

Large language models can be used to estimate contractual meaning in context, quantify ambiguity, and help adjudicators reason about extrinsic evidence at far lower cost than traditional approaches.

Citation: Yonathan A. Arbel & David Hoffman, Generative Interpretation, NYU Law Review (2024).

Large language models (LLMs) make possible "generative interpretation," a paradigm shift in legal text analysis. This approach enables AI to parse contracts, identify ambiguities, and predict judicial outcomes, offering a potentially cheaper, more accurate, and more accessible method than traditional textualism or contextualism. The authors posit that generative interpretation can resolve long-standing interpretive debates, enhance access to justice, and fundamentally re-equip legal theory for AI's role as an active interpretive agent in contract law.

Citation: Yonathan A. Arbel & David Hoffman, Generative Interpretation, NYU Law Review (2024).

Large language models (LLMs) can now interpret legal texts, a capability the authors term "generative interpretation." This signifies a paradigm shift in which AI becomes an active interpretive agent, a development for which current legal theory is unprepared. The authors introduce generative interpretation as a new approach that uses LLMs to estimate contractual meaning, ascertain ordinary meaning, quantify ambiguity, and fill gaps. This method aims to offer courts a cheaper, more accurate way to discern parties' intentions, potentially resolving the textualist-contextualist stalemate and providing a more accessible and transparent tool for contract analysis.

Citation: Yonathan A. Arbel & David Hoffman, Generative Interpretation, NYU Law Review (2024).

Machine Files

Full Text Entry Point

The cleaned full text is exposed at fulltext_clean.txt, with fulltext_raw.txt preserved for audit. The compatibility path fulltext.txt points to the cleaned text. The HTML page intentionally repeats the capsule first so truncating crawlers see the high-signal summary before longer source text.
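A consumer of this entry point could resolve the full text by trying the compatibility path first and then the named files, in cleanliness order. The file names below come from this page; the function name and directory layout are illustrative assumptions, not a documented API.

```python
from pathlib import Path

def load_fulltext(base_dir: str) -> str:
    """Return the article's full text from a capsule directory.

    Tries fulltext.txt (the compatibility alias, which points to the
    cleaned text), then fulltext_clean.txt, then fulltext_raw.txt
    (preserved for audit), returning the first file that exists.
    """
    for name in ("fulltext.txt", "fulltext_clean.txt", "fulltext_raw.txt"):
        candidate = Path(base_dir) / name
        if candidate.is_file():
            return candidate.read_text(encoding="utf-8")
    raise FileNotFoundError(f"no full-text file found under {base_dir}")
```

Trying the alias first keeps older clients working while still preferring cleaned text over the raw audit copy when the alias is absent.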