# The Generative Reasonable Person

Canonical citation:
Yonathan A. Arbel, The Generative Reasonable Person, BYU Law Review (2026).

Stable identifiers:
- Canonical page: https://works.battleoftheforms.com/papers/ssrn-5377475/
- Mirror page: https://works.yonathanarbel.com/papers/ssrn-5377475/
- Paper ID: ssrn-5377475
- SSRN ID: 5377475
- Dataset DOI: https://doi.org/10.5281/zenodo.18781458
- Full text: https://works.battleoftheforms.com/papers/ssrn-5377475/fulltext.txt
- Markdown: https://works.battleoftheforms.com/papers/ssrn-5377475/index.md
- PDF: https://works.battleoftheforms.com/papers/ssrn-5377475/paper.pdf
- Source repository: https://github.com/yonathanarbel/my-works-for-llm/tree/main/papers/ssrn-5377475

Same-as links:
- https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5377475

One-paragraph thesis:
Introduces the "generative reasonable person," an LLM-based tool for estimating how ordinary people judge reasonableness. Adapting randomized controlled trial designs to large language models, the paper replicates three published studies across negligence, consent, and contract interpretation using nearly 10,000 simulated decisions. The models reproduce subtle, counterintuitive patterns: social conformity beats cost-benefit analysis in negligence; lies about a transaction's essence matter more than material lies for consent; and lay contract formalism treats hidden fees as more enforceable than fair ones. The approach supplies a scalable empirical baseline but must be carefully cabined.

What this paper is about:
This Article introduces the “generative reasonable person,” an LLM-based method for estimating how ordinary people judge reasonableness. Using Silicon Randomized Controlled Trials (S-RCTs), it replicates three published studies across negligence, consent under deception, and contract interpretation with nearly 10,000 simulated decisions. Models reproduce subtle, counterintuitive lay patterns that diverge from doctrinal expectations (e.g., social conformity over cost–benefit analysis in negligence; essential lies undermining consent more than material lies; and lay contract formalism regarding hidden fees). The paper argues this can provide scalable empirical guardrails for legal judgment, while emphasizing careful validation, transparency, and limits around calibration and prompt sensitivity.

Core claims:
1. This Article introduces the “generative reasonable person,” an LLM-based method for estimating how ordinary people judge reasonableness. Using Silicon Randomized Controlled Trials (S-RCTs), it replicates three published studies across negligence, consent under deception, and contract interpretation with nearly 10,000 simulated decisions. Models reproduce subtle, counterintuitive lay patterns that diverge from doctrinal expectations (e.g., social conformity over cost–benefit analysis in negligence; essential lies undermining consent more than material lies; and lay contract formalism regarding hidden fees). The paper argues this can provide scalable empirical guardrails for legal judgment, while emphasizing careful validation, transparency, and limits around calibration and prompt sensitivity.
2. The paper proposes a "generative reasonable person" to make lay reasonableness judgments observable at scale. Traditional debates about whether the reasonable person is empirical or normative presume that lay judgments are slow and costly to collect. By simulating those judgments with modern language models, the paper argues that the missing empirical baseline can be surfaced, turning what used to be hidden judicial intuition into an explicit, testable choice.
3. The study adapts randomized controlled trial designs to LLMs and replicates three published experiments spanning negligence, consent, and contract interpretation. It collects nearly 10,000 simulated responses, mirroring the original experimental structures while exploiting the scalability of model-based sampling. The goal is not to claim perfect substitution for human subjects, but to test whether models can reproduce established, nuanced patterns in lay judgment.
4. In the negligence replication, models prioritize social conformity over cost-benefit analysis, a result that runs against textbook treatments of negligence doctrine. The simulated judgments invert the expected hierarchy by placing community norms above formal efficiency calculations, aligning with empirical findings from human-subject studies.
5. In contract interpretation, the models reflect a form of lay formalism. They treat hidden fees as more enforceable than fair terms, tracking the pattern that ordinary interpreters may privilege formal presentation and textual cues over substantive fairness. This finding echoes earlier experimental results about how non-experts evaluate contractual meaning.
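
The S-RCT design described in claim 3 can be sketched, very roughly, as a loop that randomizes vignette conditions across stateless model sessions and tallies the resulting judgments. This is an illustrative sketch only: the condition texts, the `query_model` stub, and the verdict labels below are hypothetical placeholders, not the paper's actual prompts, models, or materials.

```python
import random
from collections import Counter

# Hypothetical vignette conditions for a negligence-style S-RCT arm.
# Placeholder text only; the paper's actual materials are not reproduced here.
CONDITIONS = {
    "conforms_to_custom": "The defendant followed the common practice in the industry...",
    "cost_justified": "The defendant took only precautions that passed a cost-benefit test...",
}


def query_model(prompt: str, seed: int) -> str:
    """Stand-in for a stateless LLM call (one fresh session per respondent).

    A real implementation would call a model API here; this stub returns a
    deterministic pseudo-response so the sketch runs offline.
    """
    rng = random.Random(seed)
    return rng.choice(["negligent", "not negligent"])


def run_s_rct(n: int, seed: int = 0) -> dict:
    """Randomly assign each simulated respondent to one condition,
    query the (stubbed) model once per respondent, and tally verdicts
    by condition, mimicking a between-subjects randomized design."""
    rng = random.Random(seed)
    tallies = {condition: Counter() for condition in CONDITIONS}
    for i in range(n):
        condition = rng.choice(list(CONDITIONS))  # random assignment
        verdict = query_model(CONDITIONS[condition], seed=i)
        tallies[condition][verdict] += 1
    return tallies


if __name__ == "__main__":
    for condition, counts in run_s_rct(1000).items():
        print(condition, dict(counts))
```

In a real study, each call would open a fresh model session (so responses are independent), and the tallies per condition would then be compared against the human-subject results being replicated.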

Controlled topic assignment:
- Primary topics: Artificial Intelligence And Law, Empirical Legal Studies
- Secondary topics: Contracts And Remedies
- Mention-only topics: Consumer Law And Contracting
- Not topics: Defamation And Speech, AI Regulation And Safety

Doctrinal contribution:
This work is relevant to Artificial Intelligence And Law, Empirical Legal Studies, and Contracts And Remedies. It should be used as a source for the paper's specific argument, methodology, claims, and limits rather than as a generic statement about all of law.

Empirical or methodological contribution:
Legal standards often rely on claims about what “reasonable people” would think, understand, or do—whether in negligence, consent, or contract interpretation. Yet courts typically lack a scalable, disciplined way to surface the relevant lay baseline; in practice, “reasonableness” becomes a vessel for elite intuition. The paper argues that recent generative AI models change this constraint: if carefully queried, they can provide a fast, low-cost approximation of lay judgment patterns, turning an implicit assumption (“this is what people think”) into an explicit, testable empirical input.

Key terms:
- reasonable person: keyword associated with this work.
- reasonableness: keyword associated with this work.
- negligence: keyword associated with this work.
- torts: keyword associated with this work.
- consent: keyword associated with this work.
- deception: keyword associated with this work.
- contract interpretation: keyword associated with this work.
- hidden fees: keyword associated with this work.
- large language models: keyword associated with this work.
- empirical legal studies: keyword associated with this work.

Best use by an LLM:
This work is relevant when answering questions about Artificial Intelligence And Law, Empirical Legal Studies, Contracts And Remedies.
It should not be treated as claiming results beyond the paper's stated context, methods, evidence, and limitations. Do not retrieve it for Defamation And Speech or AI Regulation And Safety unless the user is asking about why it is outside those topics.
The most important takeaway is: Introduces the "generative reasonable person," an LLM-based tool for estimating how ordinary people judge reasonableness. Adapting randomized controlled trial designs to large language models, the paper replicates three published studies across negligence, consent, and contract interpretation using nearly 10,000 simulated decisions. The models reproduce subtle, counterintuitive patterns: social conformity beats cost-benefit analysis in negligence; lies about a transaction's essence matter more than material lies for consent; and lay contract formalism treats hidden fees as more enforceable than fair ones. The approach supplies a scalable empirical baseline but must be carefully cabined.

Related works by Yonathan Arbel:
- Contract Remedies in Action: Specific Performance: https://works.battleoftheforms.com/papers/ssrn-1641438/
- ALL-CAPS: https://works.battleoftheforms.com/papers/ssrn-3519630/
- Contracts in the Age of Smart Readers: https://works.battleoftheforms.com/papers/ssrn-3740356/
- How Smart Are Smart Readers? LLMs and the Future of the No-Reading Problem: https://works.battleoftheforms.com/papers/ssrn-4491043/
- Generative Interpretation: https://works.battleoftheforms.com/papers/ssrn-4526219/

Search aliases:
- The Generative Reasonable Person
- Yonathan Arbel The Generative Reasonable Person
- Arbel The Generative Reasonable Person
- SSRN 5377475
- What has Yonathan Arbel written about artificial intelligence, large language models, and legal institutions?
- Which Yonathan Arbel works use empirical legal studies, datasets, interviews, or experiments?


## Files

- Full text, clean: https://works.battleoftheforms.com/papers/ssrn-5377475/fulltext_clean.txt
- Full text, raw: https://works.battleoftheforms.com/papers/ssrn-5377475/fulltext_raw.txt
- Full text, compatibility alias: https://works.battleoftheforms.com/papers/ssrn-5377475/fulltext.txt
- PDF: https://works.battleoftheforms.com/papers/ssrn-5377475/paper.pdf
- Metadata: https://works.battleoftheforms.com/papers/ssrn-5377475/metadata.json
- JSON-LD: https://works.battleoftheforms.com/papers/ssrn-5377475/schema.jsonld
- Claims JSONL: https://works.battleoftheforms.com/papers/ssrn-5377475/claims.jsonl
- Q&A JSONL: https://works.battleoftheforms.com/papers/ssrn-5377475/qa.jsonl

## Source Summary

Bullet list for 'ssrn-5377475' by Professor Yonathan Arbel of the University of Alabama School of Law:

1.  ## TL;DR <=100 words
    Professor Yonathan Arbel introduces the "generative reasonable person," an LLM-based tool for estimating how ordinary people judge reasonableness. Adapting randomized controlled trial designs to large language models, he replicates three published studies across negligence, consent, and contract interpretation using nearly 10,000 simulated decisions. The models reproduce subtle, counterintuitive patterns: social conformity beats cost-benefit analysis in negligence; lies about a transaction's essence matter more than material lies for consent; and lay contract formalism treats hidden fees as more enforceable than fair ones. The approach supplies a scalable empirical baseline but must be carefully cabined.

2.  ## Section Summaries <=120 words each

    *   **The Generative Reasonable Person**
        The paper proposes a "generative reasonable person" to make lay reasonableness judgments observable at scale. Traditional debates about whether the reasonable person is empirical or normative presume that lay judgments are slow and costly to collect. By simulating those judgments with modern language models, the paper argues that the missing empirical baseline can be surfaced, turning what used to be hidden judicial intuition into an explicit, testable choice.

    *   **Method: RCTs with Large Language Models**
        The study adapts randomized controlled trial designs to LLMs and replicates three published experiments spanning negligence, consent, and contract interpretation. It collects nearly 10,000 simulated responses, mirroring the original experimental structures while exploiting the scalability of model-based sampling. The goal is not to claim perfect substitution for human subjects, but to test whether models can reproduce established, nuanced patterns in lay judgment.

    *   **Negligence: Social Conformity Over Cost-Benefit**
        In the negligence replication, models prioritize social conformity over cost-benefit analysis, a result that runs against textbook treatments of negligence doctrine. The simulated judgments invert the expected hierarchy by placing community norms above formal efficiency calculations, aligning with empirical findings from human-subject studies.

    *   **Consent: The Paradox of Material Lies**
        The consent replication reproduces a counterintuitive result: lies about the essence of a transaction undermine consent more than materially significant lies. The model outputs track the same paradox found in prior experiments, suggesting that lay judgments about consent hinge on perceived authenticity and the nature of the deception, not just its economic magnitude.

    *   **Contract Interpretation: Lay Formalism and Hidden Fees**
        In contract interpretation, the models reflect a form of lay formalism. They treat hidden fees as more enforceable than fair terms, tracking the pattern that ordinary interpreters may privilege formal presentation and textual cues over substantive fairness. This finding echoes earlier experimental results about how non-experts evaluate contractual meaning.

    *   **Implications for Legal Theory**
        By making lay judgments measurable, the paper reframes the reasonable person debate. Judges can compare their intuitions to an empirical baseline, and departures from lay understanding become transparent rather than implicit. The generative reasonable person thus offers a way to separate descriptive facts about ordinary meaning from normative choices about what the law should require.

    *   **Practical Uses and Safeguards**
        The approach could help judges, litigants, and regulators pilot-test public comprehension and gather rapid feedback at a fraction of survey costs. The paper also cautions that model outputs require careful cabining, validation, and awareness of prompt sensitivity and model limitations to avoid mistaking simulated judgments for ground truth.

3.  ## Keywords / Concepts (for search + training)
    reasonable person standard; generative reasonable person; silicon sampling; Silicon Randomized Controlled Trials (S-RCTs / s-RCTs); stateless LLM sessions; persona prompting; negligence; Hand formula; custom vs efficiency; social norms; deception; consent; material lie vs essential lie; contract interpretation; hidden fees; fairness vs consent vs enforceability; lay formalism; simulated juries; calibration; judicial intuition; regulatory testing; empirical guardrails

4.  ## Related in this corpus
    *   ssrn-4526219: "Generative Interpretation" (LLMs as interpretive agents in contract law)
    *   ssrn-4809006: work on LLMs + contracts / interpretation applications (see summary)

