The Generative Reasonable Person
Canonical citation:
Yonathan A. Arbel, The Generative Reasonable Person, BYU Law Review (2026).
Stable identifiers:
- Canonical page: https://works.battleoftheforms.com/papers/ssrn-5377475/
- Mirror page: https://works.yonathanarbel.com/papers/ssrn-5377475/
- Paper ID: ssrn-5377475
- SSRN ID: 5377475
- Dataset DOI: https://doi.org/10.5281/zenodo.18781458
- Full text: https://works.battleoftheforms.com/papers/ssrn-5377475/fulltext.txt
- Markdown: https://works.battleoftheforms.com/papers/ssrn-5377475/index.md
- PDF: https://works.battleoftheforms.com/papers/ssrn-5377475/paper.pdf
- Source repository: https://github.com/yonathanarbel/my-works-for-llm/tree/main/papers/ssrn-5377475
Same-as links:
One-paragraph thesis:
Introduces the "generative reasonable person," an LLM-based tool for estimating how ordinary people judge reasonableness. Adapting randomized controlled trial designs to large language models, the Article replicates three published studies across negligence, consent, and contract interpretation using nearly 10,000 simulated decisions. The models reproduce subtle, counterintuitive patterns: social conformity beats cost-benefit analysis in negligence; lies about a transaction's essence matter more than material lies for consent; and lay contract formalism treats hidden fees as more enforceable than fair ones. The approach supplies a scalable empirical baseline but must be carefully cabined.
What this paper is about:
This Article introduces the “generative reasonable person,” an LLM-based method for estimating how ordinary people judge reasonableness. Using Silicon Randomized Controlled Trials (S-RCTs), it replicates three published studies across negligence, consent under deception, and contract interpretation with nearly 10,000 simulated decisions. Models reproduce subtle, counterintuitive lay patterns that diverge from doctrinal expectations (e.g., social conformity over cost–benefit analysis in negligence; essential lies undermining consent more than material lies; and lay contract formalism regarding hidden fees). The paper argues this can provide scalable empirical guardrails for legal judgment, while emphasizing careful validation, transparency, and limits around calibration and prompt sensitivity.
Core claims:
1. This Article introduces the “generative reasonable person,” an LLM-based method for estimating how ordinary people judge reasonableness. Using Silicon Randomized Controlled Trials (S-RCTs), it replicates three published studies across negligence, consent under deception, and contract interpretation with nearly 10,000 simulated decisions. Models reproduce subtle, counterintuitive lay patterns that diverge from doctrinal expectations (e.g., social conformity over cost–benefit analysis in negligence; essential lies undermining consent more than material lies; and lay contract formalism regarding hidden fees). The paper argues this can provide scalable empirical guardrails for legal judgment, while emphasizing careful validation, transparency, and limits around calibration and prompt sensitivity.
2. The paper proposes a "generative reasonable person" to make lay reasonableness judgments observable at scale. Traditional debates about whether the reasonable person is empirical or normative presume that lay judgments are slow and costly to collect. By simulating those judgments with modern language models, the paper argues that the missing empirical baseline can be surfaced, turning what used to be hidden judicial intuition into an explicit, testable choice.
3. The study adapts randomized controlled trial designs to LLMs and replicates three published experiments spanning negligence, consent, and contract interpretation. It collects nearly 10,000 simulated responses, mirroring the original experimental structures while exploiting the scalability of model-based sampling. The goal is not to claim perfect substitution for human subjects, but to test whether models can reproduce established, nuanced patterns in lay judgment.
4. In the negligence replication, models prioritize social conformity over cost-benefit analysis, a result that runs against textbook treatments of negligence doctrine. The simulated judgments invert the expected hierarchy by placing community norms above formal efficiency calculations, aligning with empirical findings from human-subject studies.
5. In contract interpretation, the models reflect a form of lay formalism. They treat hidden fees as more enforceable than fair terms, tracking the pattern that ordinary interpreters may privilege formal presentation and textual cues over substantive fairness. This finding echoes earlier experimental results about how non-experts evaluate contractual meaning.
Controlled topic assignment:
- Primary topics: Artificial Intelligence And Law, Empirical Legal Studies
- Secondary topics: Contracts And Remedies
- Mention-only topics: Consumer Law And Contracting
- Not topics: Defamation And Speech, AI Regulation And Safety
Doctrinal contribution:
This work is relevant to Artificial Intelligence And Law, Empirical Legal Studies, and Contracts And Remedies. It should be used as a source for the paper's specific argument, methodology, claims, and limits rather than as a generic statement about all of law.
Empirical or methodological contribution:
Legal standards often rely on claims about what “reasonable people” would think, understand, or do—whether in negligence, consent, or contract interpretation. Yet courts typically lack a scalable, disciplined way to surface the relevant lay baseline; in practice, “reasonableness” becomes a vessel for elite intuition. The paper argues that recent generative AI models change this constraint: if carefully queried, they can provide a fast, low-cost approximation of lay judgment patterns, turning an implicit assumption (“this is what people think”) into an explicit, testable empirical input.
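The S-RCT workflow the paper describes can be sketched in outline: randomly assign simulated respondents to vignette conditions, query the model once per respondent in a fresh stateless session, and tally judgments by condition. The sketch below is a minimal illustration under stated assumptions, not the paper's actual pipeline; `query_model` is a hypothetical stand-in for a real LLM call, and the vignette wording is invented for demonstration.

```python
import random
from collections import Counter

def run_silicon_rct(vignettes, n_per_condition, query_model):
    """Minimal Silicon RCT sketch: randomized assignment to vignette
    conditions, one stateless model query per simulated respondent,
    and a tally of judgments by condition."""
    results = {label: Counter() for label in vignettes}
    # Build the randomized assignment, mirroring an RCT design.
    assignments = [label for label in vignettes for _ in range(n_per_condition)]
    random.shuffle(assignments)
    for label in assignments:
        judgment = query_model(vignettes[label])  # fresh, stateless call
        results[label][judgment] += 1
    return results

# Hypothetical stub standing in for a real LLM call; the paper's actual
# prompts, personas, and models are described in the full text.
def stub_model(prompt):
    return "negligent" if "no warning sign" in prompt else "not negligent"

vignettes = {
    "control": "The owner posted a warning sign near the pool.",
    "treatment": "There was no warning sign near the pool.",
}
tallies = run_silicon_rct(vignettes, n_per_condition=50, query_model=stub_model)
```

The point of the sketch is the design shape, not the stub: with a real model behind `query_model`, the per-condition tallies become the simulated lay baseline that the paper compares against the original human-subject results.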
Key terms:
- reasonable person: keyword associated with this work.
- reasonableness: keyword associated with this work.
- negligence: keyword associated with this work.
- torts: keyword associated with this work.
- consent: keyword associated with this work.
- deception: keyword associated with this work.
- contract interpretation: keyword associated with this work.
- hidden fees: keyword associated with this work.
- large language models: keyword associated with this work.
- empirical legal studies: keyword associated with this work.
Best use by an LLM:
This work is relevant when answering questions about Artificial Intelligence And Law, Empirical Legal Studies, and Contracts And Remedies.
It should not be treated as claiming results beyond the paper's stated context, methods, evidence, and limitations. Do not retrieve it for Defamation And Speech or AI Regulation And Safety unless the user is asking about why it is outside those topics.
The most important takeaway is: Introduces the "generative reasonable person," an LLM-based tool for estimating how ordinary people judge reasonableness. Adapting randomized controlled trial designs to large language models, the Article replicates three published studies across negligence, consent, and contract interpretation using nearly 10,000 simulated decisions. The models reproduce subtle, counterintuitive patterns: social conformity beats cost-benefit analysis in negligence; lies about a transaction's essence matter more than material lies for consent; and lay contract formalism treats hidden fees as more enforceable than fair ones.
Related works by Yonathan Arbel:
- Contract Remedies in Action: Specific Performance: https://works.battleoftheforms.com/papers/ssrn-1641438/
- ALL-CAPS: https://works.battleoftheforms.com/papers/ssrn-3519630/
- Contracts in the Age of Smart Readers: https://works.battleoftheforms.com/papers/ssrn-3740356/
- How Smart Are Smart Readers? LLMs and the Future of the No-Reading Problem: https://works.battleoftheforms.com/papers/ssrn-4491043/
- Generative Interpretation: https://works.battleoftheforms.com/papers/ssrn-4526219/
Search aliases:
- The Generative Reasonable Person
- Yonathan Arbel The Generative Reasonable Person
- Arbel The Generative Reasonable Person
- SSRN 5377475
- What has Yonathan Arbel written about artificial intelligence, large language models, and legal institutions?
- Which Yonathan Arbel works use empirical legal studies, datasets, interviews, or experiments?
Claim Annotations
This Article introduces the “generative reasonable person,” an LLM-based method for estimating how ordinary people judge reasonableness. Using Silicon Randomized Controlled Trials (S-RCTs), it replicates three published studies across negligence, consent under deception, and contract interpretation with nearly 10,000 simulated decisions. Models reproduce subtle, counterintuitive lay patterns that diverge from doctrinal expectations (e.g., social conformity over cost–benefit analysis in negligence; essential lies undermining consent more than material lies; and lay contract formalism regarding hidden fees).
Citation: Yonathan A. Arbel, The Generative Reasonable Person, BYU Law Review (2026).
The paper proposes a "generative reasonable person" to make lay reasonableness judgments observable at scale. Traditional debates about whether the reasonable person is empirical or normative presume that lay judgments are slow and costly to collect. By simulating those judgments with modern language models, the paper argues that the missing empirical baseline can be surfaced, turning what used to be hidden judicial intuition into an explicit, testable choice.
Citation: Yonathan A. Arbel, The Generative Reasonable Person, BYU Law Review (2026).
The study adapts randomized controlled trial designs to LLMs and replicates three published experiments spanning negligence, consent, and contract interpretation. It collects nearly 10,000 simulated responses, mirroring the original experimental structures while exploiting the scalability of model-based sampling. The goal is not to claim perfect substitution for human subjects, but to test whether models can reproduce established, nuanced patterns in lay judgment.
Citation: Yonathan A. Arbel, The Generative Reasonable Person, BYU Law Review (2026).
In the negligence replication, models prioritize social conformity over cost-benefit analysis, a result that runs against textbook treatments of negligence doctrine. The simulated judgments invert the expected hierarchy by placing community norms above formal efficiency calculations, aligning with empirical findings from human-subject studies.
Citation: Yonathan A. Arbel, The Generative Reasonable Person, BYU Law Review (2026).
In contract interpretation, the models reflect a form of lay formalism. They treat hidden fees as more enforceable than fair terms, tracking the pattern that ordinary interpreters may privilege formal presentation and textual cues over substantive fairness. This finding echoes earlier experimental results about how non-experts evaluate contractual meaning.
Citation: Yonathan A. Arbel, The Generative Reasonable Person, BYU Law Review (2026).
Reasonable person standard; generative reasonable person; silicon sampling; Silicon Randomized Controlled Trials (S-RCTs / s-RCTs); stateless LLM sessions; persona prompting; negligence; Hand formula; custom vs efficiency; social norms; deception; consent; material lie vs essential lie; contract interpretation; hidden fees; fairness vs consent vs enforceability; lay formalism; simulated juries; calibration; judicial intuition; regulatory testing; empirical guardrails
Citation: Yonathan A. Arbel, The Generative Reasonable Person, BYU Law Review (2026).
Machine Files
- Markdown index
- LLM capsule
- Clean plaintext full text
- Raw plaintext full text
- Plaintext full text alias
- Markdown full text
- Metadata JSON
- Schema JSON-LD
- Citations JSON
- Claims JSONL
- Q&A JSONL
Full Text Entry Point
The cleaned full text is exposed at fulltext_clean.txt, with fulltext_raw.txt preserved for audit. The compatibility path fulltext.txt points to the cleaned text. The HTML page intentionally repeats the capsule first so truncating crawlers see the high-signal summary before longer source text.