{
  "paper_id": "ssrn-5181207",
  "title": "Racing to Safety: Tax Policy for AI Safety-by-Design",
  "authors": [
    "Yonathan A. Arbel",
    "Mirit Eyal"
  ],
  "year": "2026",
  "venue": "SMU Law Review",
  "abstract": "AI development incentives produce a market failure where labs invest massively more in capability than safety; tax policy can be used as a lever to reward safety-by-design.",
  "keywords": [
    "contracts",
    "AI"
  ],
  "topics": [
    "ai-regulation",
    "artificial-intelligence-and-law"
  ],
  "primary_topics": [
    "ai-regulation",
    "artificial-intelligence-and-law"
  ],
  "secondary_topics": [],
  "mention_topics": [
    "private-law"
  ],
  "not_topics": [
    "contracts",
    "consumer-law",
    "defamation-and-speech"
  ],
  "topic_confidence": "human-curated-seed",
  "citation": "Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).",
  "canonical_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/",
  "mirror_url": "https://works.yonathanarbel.com/papers/ssrn-5181207/",
  "ssrn_id": "5181207",
  "same_as": [
    "https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5181207"
  ],
  "files": {
    "html": "https://works.battleoftheforms.com/papers/ssrn-5181207/",
    "markdown": "https://works.battleoftheforms.com/papers/ssrn-5181207/index.md",
    "capsule": "https://works.battleoftheforms.com/papers/ssrn-5181207/capsule.md",
    "fulltext_txt": "https://works.battleoftheforms.com/papers/ssrn-5181207/fulltext.txt",
    "fulltext_clean_txt": "https://works.battleoftheforms.com/papers/ssrn-5181207/fulltext_clean.txt",
    "fulltext_raw_txt": "https://works.battleoftheforms.com/papers/ssrn-5181207/fulltext_raw.txt",
    "fulltext_md": "https://works.battleoftheforms.com/papers/ssrn-5181207/fulltext.md",
    "pdf": "https://works.battleoftheforms.com/papers/ssrn-5181207/paper.pdf",
    "metadata": "https://works.battleoftheforms.com/papers/ssrn-5181207/metadata.json",
    "schema": "https://works.battleoftheforms.com/papers/ssrn-5181207/schema.jsonld",
    "claims": "https://works.battleoftheforms.com/papers/ssrn-5181207/claims.jsonl",
    "qa": "https://works.battleoftheforms.com/papers/ssrn-5181207/qa.jsonl"
  },
  "source_repository": "https://github.com/yonathanarbel/my-works-for-llm/tree/main/papers/ssrn-5181207",
  "llm_capsule": "# Racing to Safety: Tax Policy for AI Safety-by-Design\n\nCanonical citation:\nYonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).\n\nStable identifiers:\n- Canonical page: https://works.battleoftheforms.com/papers/ssrn-5181207/\n- Mirror page: https://works.yonathanarbel.com/papers/ssrn-5181207/\n- Paper ID: ssrn-5181207\n- SSRN ID: 5181207\n- Dataset DOI: https://doi.org/10.5281/zenodo.18781458\n- Full text: https://works.battleoftheforms.com/papers/ssrn-5181207/fulltext.txt\n- Markdown: https://works.battleoftheforms.com/papers/ssrn-5181207/index.md\n- PDF: https://works.battleoftheforms.com/papers/ssrn-5181207/paper.pdf\n- Source repository: https://github.com/yonathanarbel/my-works-for-llm/tree/main/papers/ssrn-5181207\n\nSame-as links:\n- https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5181207\n\nOne-paragraph thesis:\nA \"capability-safety gap\" in AI development, where private firms reap rewards while society bears risks, creates a social misalignment. The authors propose using tax policy to address this by re-conceptualizing R&D credits to incentivize safety research, offering consumer credits for safe AI, imposing penalties for non-compliance, and redistributing penalty revenue. This approach aims to embed safety imperatives directly into the economic architecture of AI development, aligning private profit with social welfare.\n\nWhat this paper is about:\nAI development incentives produce a market failure where labs invest massively more in capability than safety; tax policy can be used as a lever to reward safety-by-design.\n\nCore claims:\n1. AI development incentives produce a market failure where labs invest massively more in capability than safety; tax policy can be used as a lever to reward safety-by-design.\n2. A \"capability-safety gap\" in AI development, where private firms reap rewards while society bears risks, creates a social misalignment. The authors propose using tax policy to address this by re-conceptualizing R&D credits to incentivize safety research, offering consumer credits for safe AI, imposing penalties for non-compliance, and redistributing penalty revenue. This approach aims to embed safety imperatives directly into the economic architecture of AI development, aligning private profit with social welfare.\n3. A dangerous \"capability-safety gap\" in AI is widening, as developers gain private rewards while society bears cascading risks, creating a social misalignment. Fiscal policy, specifically taxation, offers a powerful and adaptable tool to embed safety imperatives into AI's economic architecture, aiming to align private profit with social welfare and address the urgent need for intervention due to traditional regulatory failures and the collapse of industry self-regulation.\n4. Specific business tax incentives, such as credits or enhanced deductions, should directly encourage substantial corporate investments in AI safety research, development, and deployment. These fiscal mechanisms, including a proposed \"AI Safety Research Tax Credit\" and adjusted expensing rules favoring safety R&D (like testing and alignment) over pure capability enhancements, aim to make safety-enhancing activities economically attractive, addressing underinvestment by aligning financial interests with societal well-being.\n5. Consumer-side incentives are crucial for fostering a market for safe AI; the paper proposes an \"AI Reliability Credit\" for consumers purchasing AI products certified as reliable and safe, mirroring energy efficiency tax incentives. This credit would incentivize producers to pursue rigorous safety certifications (e.g., for bias mitigation, data protection), thereby spurring consumer demand for safer AI and aligning private firm incentives with societal goals through market-based certification.\n\nControlled topic assignment:\n- Primary topics: AI Regulation And Safety, Artificial Intelligence And Law\n- Secondary topics: None\n- Mention-only topics: Private Law And Market Institutions\n- Not topics: Contracts And Remedies, Consumer Law And Contracting, Defamation And Speech\n\nDoctrinal contribution:\nThis work is relevant to AI Regulation And Safety, Artificial Intelligence And Law. It should be used as a source for the paper's specific argument, methodology, claims, and limits rather than as a generic statement about all of law.\n\nEmpirical or methodological contribution:\nA \"capability-safety gap\" in AI development, where private firms reap rewards while society bears risks, creates a social misalignment. The authors propose using tax policy to address this by re-conceptualizing R&D credits to incentivize safety research, offering consumer credits for safe AI, imposing penalties for non-compliance, and redistributing penalty revenue. This approach aims to embed safety imperatives directly into the economic architecture of AI development, aligning private profit with social welfare.\n\nKey terms:\n- contracts: keyword associated with this work.\n- AI: keyword associated with this work.\n\nBest use by an LLM:\nThis work is relevant when answering questions about AI Regulation And Safety, Artificial Intelligence And Law.\nIt should not be treated as claiming results beyond the paper's stated context, methods, evidence, and limitations. Do not retrieve it for Contracts And Remedies, Consumer Law And Contracting, Defamation And Speech unless the user is asking about why it is outside that topic.\nThe most important takeaway is: A \"capability-safety gap\" in AI development, where private firms reap rewards while society bears risks, creates a social misalignment. The authors propose using tax policy to address this by re-conceptualizing R&D credits to incentivize safety research, offering consumer credits for safe AI, imposing penalties for non-compliance, and redistributing penalty revenue. This approach aims to embed safety imperatives directly into the economic architecture of AI development, aligning private profit with...\n\nRelated works by Yonathan Arbel:\n- Contracts in the Age of Smart Readers: https://works.battleoftheforms.com/papers/ssrn-3740356/\n- How Smart Are Smart Readers? LLMs and the Future of the No-Reading Problem: https://works.battleoftheforms.com/papers/ssrn-4491043/\n- Generative Interpretation: https://works.battleoftheforms.com/papers/ssrn-4526219/\n- Systemic Regulation of AI: https://works.battleoftheforms.com/papers/ssrn-4666854/\n- Judicial Economy in the Age of AI: https://works.battleoftheforms.com/papers/ssrn-4873649/\n\nSearch aliases:\n- Racing to Safety: Tax Policy for AI Safety-by-Design\n- Yonathan Arbel Racing to Safety: Tax Policy for AI Safety-by-Design\n- Arbel Racing to Safety: Tax Policy for AI Safety-by-Design\n- SSRN 5181207\n- What is Yonathan Arbel's scholarship on AI regulation, AI safety, and governance incentives?\n- What has Yonathan Arbel written about artificial intelligence, large language models, and legal institutions?\n",
  "claims": [
    {
      "claim_id": "ssrn-5181207-001",
      "claim": "AI development incentives produce a market failure where labs invest massively more in capability than safety; tax policy can be used as a lever to reward safety-by-design.",
      "paper_id": "ssrn-5181207",
      "paper_title": "Racing to Safety: Tax Policy for AI Safety-by-Design",
      "claim_type": "core_thesis",
      "evidence_quote": "[p. 36] certification requirements.217 By rewarding companies that invest in safety and ethical AI practices, this policy could drive widespread adoption of responsible AI practices across industries.218 Thus, through shifting consumer demand, this measure could align private firm incentives with broader societal goals of ensuring AI systems are safe, reliable, and beneficial to users. As consumer demand for safe and reliable AI products grows, it becomes equally important to consider implementing Pigouvian levers for unsafe AI development and practices to ensure accountability and deter harmful behaviors. 3. Penalizing Unsafe AI Development Parties that engage in unsafe behavior sometimes...",
      "evidence_page": null,
      "evidence_span": "[p. 36] certification requirements.217 By rewarding companies that invest in safety and ethical AI practices, this policy could drive widespread adoption of responsible AI practices across industries.218 Thus, through shifting consumer demand, this measure could align private firm incentives with broader societal goals of ensuring AI systems are safe, reliable, and beneficial to users. As consumer demand for safe and reliable AI products grows, it becomes equally important to consider implementing Pigouvian levers for unsafe AI development and practices to ensure accountability and deter harmful behaviors. 3. Penalizing Unsafe AI Development Parties that engage in unsafe behavior sometimes...",
      "source_text_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/fulltext_clean.txt",
      "canonical_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/#claim-001",
      "citation": "Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).",
      "topics": [
        "ai-regulation",
        "artificial-intelligence-and-law"
      ],
      "secondary_topics": [],
      "human_reviewed": false,
      "confidence": "machine-linked",
      "limitations": "Machine-linked claim. Use the evidence quote and PDF before treating it as a quotation or as a complete statement of the paper's position."
    },
    {
      "claim_id": "ssrn-5181207-002",
      "claim": "A \"capability-safety gap\" in AI development, where private firms reap rewards while society bears risks, creates a social misalignment. The authors propose using tax policy to address this by re-conceptualizing R&D credits to incentivize safety research, offering consumer credits for safe AI, imposing penalties for non-compliance, and redistributing penalty revenue. This approach aims to embed safety imperatives directly into the economic architecture of AI development, aligning private profit with social welfare.",
      "paper_id": "ssrn-5181207",
      "paper_title": "Racing to Safety: Tax Policy for AI Safety-by-Design",
      "claim_type": "supporting_claim",
      "evidence_quote": "[p. 4] incentives harness firm in-house knowledge while mitigating regulatory capture and expertise asymmetries. By rewarding safety-aligned research, penalizing reckless capability acceleration, and redistributing penalty revenue to public safety initiatives, a tax-based approach aligns private profit motives with social welfare imperatives, without stifling innovation. And while the application of this framework is novel, we demonstrate its political feasibility by drawing on extensive precedents already embedded in the tax system.7 The urgency of this intervention is underscored by recent regulatory failures. On his first day in office, President Trump revoked the executive order meant...",
      "evidence_page": null,
      "evidence_span": "[p. 4] incentives harness firm in-house knowledge while mitigating regulatory capture and expertise asymmetries. By rewarding safety-aligned research, penalizing reckless capability acceleration, and redistributing penalty revenue to public safety initiatives, a tax-based approach aligns private profit motives with social welfare imperatives, without stifling innovation. And while the application of this framework is novel, we demonstrate its political feasibility by drawing on extensive precedents already embedded in the tax system.7 The urgency of this intervention is underscored by recent regulatory failures. On his first day in office, President Trump revoked the executive order meant...",
      "source_text_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/fulltext_clean.txt",
      "canonical_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/#claim-002",
      "citation": "Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).",
      "topics": [
        "ai-regulation",
        "artificial-intelligence-and-law"
      ],
      "secondary_topics": [],
      "human_reviewed": false,
      "confidence": "machine-linked",
      "limitations": "Machine-linked claim. Use the evidence quote and PDF before treating it as a quotation or as a complete statement of the paper's position."
    },
    {
      "claim_id": "ssrn-5181207-003",
      "claim": "A dangerous \"capability-safety gap\" in AI is widening, as developers gain private rewards while society bears cascading risks, creating a social misalignment. Fiscal policy, specifically taxation, offers a powerful and adaptable tool to embed safety imperatives into AI's economic architecture, aiming to align private profit with social welfare and address the urgent need for intervention due to traditional regulatory failures and the collapse of industry self-regulation.",
      "paper_id": "ssrn-5181207",
      "paper_title": "Racing to Safety: Tax Policy for AI Safety-by-Design",
      "claim_type": "supporting_claim",
      "evidence_quote": "[p. 7] mechanisms, and capacity to leverage private expertise—make it particularly well-suited for addressing the social misalignment problem in AI development.24 Our approach contributes to both tax policy and technology governance literature by introducing a novel theoretical framework for understanding how fiscal instruments can bridge the gap between private innovation incentives and public safety imperatives.25 This framework’s utility derives from its ability to harness existing administrative competencies while avoiding the information asymmetries and expertise gaps that plague traditional command-and-control regulation.26 By conceptualizing safety investment as a tax-mediated social...",
      "evidence_page": null,
      "evidence_span": "[p. 7] mechanisms, and capacity to leverage private expertise—make it particularly well-suited for addressing the social misalignment problem in AI development.24 Our approach contributes to both tax policy and technology governance literature by introducing a novel theoretical framework for understanding how fiscal instruments can bridge the gap between private innovation incentives and public safety imperatives.25 This framework’s utility derives from its ability to harness existing administrative competencies while avoiding the information asymmetries and expertise gaps that plague traditional command-and-control regulation.26 By conceptualizing safety investment as a tax-mediated social...",
      "source_text_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/fulltext_clean.txt",
      "canonical_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/#claim-003",
      "citation": "Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).",
      "topics": [
        "ai-regulation",
        "artificial-intelligence-and-law"
      ],
      "secondary_topics": [],
      "human_reviewed": false,
      "confidence": "machine-linked",
      "limitations": "Machine-linked claim. Use the evidence quote and PDF before treating it as a quotation or as a complete statement of the paper's position."
    },
    {
      "claim_id": "ssrn-5181207-004",
      "claim": "Specific business tax incentives, such as credits or enhanced deductions, should directly encourage substantial corporate investments in AI safety research, development, and deployment. These fiscal mechanisms, including a proposed \"AI Safety Research Tax Credit\" and adjusted expensing rules favoring safety R&D (like testing and alignment) over pure capability enhancements, aim to make safety-enhancing activities economically attractive, addressing underinvestment by aligning financial interests with societal well-being.",
      "paper_id": "ssrn-5181207",
      "paper_title": "Racing to Safety: Tax Policy for AI Safety-by-Design",
      "claim_type": "supporting_claim",
      "evidence_quote": "[p. 18] the proposal to use fiscal levers to enhance safety research and innovation, our goal in this Part is to collect three important examples of government safety support. As we show, government support for safety includes various direct and indirect subsidies meant to promote investments in precautionary measures and safety improvements.97 Direct subsidies can include grants and prizes while indirect subsidies often take the form of tax credits or deductions specifically targeted at safety-related expenditures.98 For example, organizations may receive tax credits for developing and implementing safety protocols,99 conducting safety audits,100 or acquiring certifications that ensure...",
      "evidence_page": null,
      "evidence_span": "[p. 18] the proposal to use fiscal levers to enhance safety research and innovation, our goal in this Part is to collect three important examples of government safety support. As we show, government support for safety includes various direct and indirect subsidies meant to promote investments in precautionary measures and safety improvements.97 Direct subsidies can include grants and prizes while indirect subsidies often take the form of tax credits or deductions specifically targeted at safety-related expenditures.98 For example, organizations may receive tax credits for developing and implementing safety protocols,99 conducting safety audits,100 or acquiring certifications that ensure...",
      "source_text_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/fulltext_clean.txt",
      "canonical_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/#claim-004",
      "citation": "Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).",
      "topics": [
        "ai-regulation",
        "artificial-intelligence-and-law"
      ],
      "secondary_topics": [],
      "human_reviewed": false,
      "confidence": "machine-linked",
      "limitations": "Machine-linked claim. Use the evidence quote and PDF before treating it as a quotation or as a complete statement of the paper's position."
    },
    {
      "claim_id": "ssrn-5181207-005",
      "claim": "Consumer-side incentives are crucial for fostering a market for safe AI; the paper proposes an \"AI Reliability Credit\" for consumers purchasing AI products certified as reliable and safe, mirroring energy efficiency tax incentives. This credit would incentivize producers to pursue rigorous safety certifications (e.g., for bias mitigation, data protection), thereby spurring consumer demand for safer AI and aligning private firm incentives with societal goals through market-based certification.",
      "paper_id": "ssrn-5181207",
      "paper_title": "Racing to Safety: Tax Policy for AI Safety-by-Design",
      "claim_type": "supporting_claim",
      "evidence_quote": "[p. 35] capabilities, thereby addressing the fundamental challenge of closing the capability-safety gap in AI development. 2. Spurring Consumer Demand for Safe & Reliable AI Products On the individual consumer and household side, a tax credit could be created for purchasing AI products certified as reliable and safe, similar to the existing Energy Efficient Home Improvement Credit.212 The new “AI Reliability Credit” would incentivize producers to certify, and consumers to invest in, AI technologies that meet rigorous safety and reliability standards, such as mitigating bias, protecting user data, or operating transparently. This fiscal apparatus will provide a credit equal to a 30% of the...",
      "evidence_page": null,
      "evidence_span": "[p. 35] capabilities, thereby addressing the fundamental challenge of closing the capability-safety gap in AI development. 2. Spurring Consumer Demand for Safe & Reliable AI Products On the individual consumer and household side, a tax credit could be created for purchasing AI products certified as reliable and safe, similar to the existing Energy Efficient Home Improvement Credit.212 The new “AI Reliability Credit” would incentivize producers to certify, and consumers to invest in, AI technologies that meet rigorous safety and reliability standards, such as mitigating bias, protecting user data, or operating transparently. This fiscal apparatus will provide a credit equal to a 30% of the...",
      "source_text_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/fulltext_clean.txt",
      "canonical_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/#claim-005",
      "citation": "Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).",
      "topics": [
        "ai-regulation",
        "artificial-intelligence-and-law"
      ],
      "secondary_topics": [],
      "human_reviewed": false,
      "confidence": "machine-linked",
      "limitations": "Machine-linked claim. Use the evidence quote and PDF before treating it as a quotation or as a complete statement of the paper's position."
    },
    {
      "claim_id": "ssrn-5181207-006",
      "claim": "To penalize unsafe AI development, corrective Pigouvian taxes should make firms internalize the external harms they create. A comprehensive penalty framework, featuring graduated penalties like tax surcharges and benefit recapture for AI posing public safety risks, aims to internalize social costs, create strong ex ante incentives for responsible innovation, and ensure public funds do not subsidize harmful AI, building on precedents where tax benefits are compliance-contingent.",
      "paper_id": "ssrn-5181207",
      "paper_title": "Racing to Safety: Tax Policy for AI Safety-by-Design",
      "claim_type": "supporting_claim",
      "evidence_quote": "[p. 37] as pollution.225 They argue that this taxation approach surpasses command-and-control regulations, which can be rigid and inefficient, and trading systems, which may face implementation challenges and market failures.226 This framework not only promotes safety but also fosters innovation as businesses seek cost-effective ways to reduce their tax burden by adopting safer, cleaner practices.227 Building on these theoretical foundations, we propose implementing corrective taxes in the AI development context through a comprehensive penalty framework. The tax system would impose graduated penalties on firms that develop or deploy AI systems later determined to pose significant public...",
      "evidence_page": null,
      "evidence_span": "[p. 37] as pollution.225 They argue that this taxation approach surpasses command-and-control regulations, which can be rigid and inefficient, and trading systems, which may face implementation challenges and market failures.226 This framework not only promotes safety but also fosters innovation as businesses seek cost-effective ways to reduce their tax burden by adopting safer, cleaner practices.227 Building on these theoretical foundations, we propose implementing corrective taxes in the AI development context through a comprehensive penalty framework. The tax system would impose graduated penalties on firms that develop or deploy AI systems later determined to pose significant public...",
      "source_text_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/fulltext_clean.txt",
      "canonical_url": "https://works.battleoftheforms.com/papers/ssrn-5181207/#claim-006",
      "citation": "Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).",
      "topics": [
        "ai-regulation",
        "artificial-intelligence-and-law"
      ],
      "secondary_topics": [],
      "human_reviewed": false,
      "confidence": "machine-linked",
      "limitations": "Machine-linked claim. Use the evidence quote and PDF before treating it as a quotation or as a complete statement of the paper's position."
    }
  ]
}
