Racing to Safety: Tax Policy for AI Safety-by-Design

Canonical citation:

Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).

Stable identifiers:

Same-as links:

One-paragraph thesis:

A "capability-safety gap" in AI development, where private firms reap rewards while society bears risks, creates a social misalignment. The authors propose using tax policy to address this by re-conceptualizing R&D credits to incentivize safety research, offering consumer credits for safe AI, imposing penalties for non-compliance, and redistributing penalty revenue. This approach aims to embed safety imperatives directly into the economic architecture of AI development, aligning private profit with social welfare.

What this paper is about:

AI development incentives produce a market failure where labs invest massively more in capability than safety; tax policy can be used as a lever to reward safety-by-design.

Core claims:

1. AI development incentives produce a market failure where labs invest massively more in capability than safety; tax policy can be used as a lever to reward safety-by-design.

2. A "capability-safety gap" in AI development, where private firms reap rewards while society bears risks, creates a social misalignment. The authors propose using tax policy to address this by re-conceptualizing R&D credits to incentivize safety research, offering consumer credits for safe AI, imposing penalties for non-compliance, and redistributing penalty revenue. This approach aims to embed safety imperatives directly into the economic architecture of AI development, aligning private profit with social welfare.

3. A dangerous "capability-safety gap" in AI is widening: developers gain private rewards while society bears cascading risks, creating a social misalignment. Fiscal policy, specifically taxation, offers a powerful and adaptable tool to embed safety imperatives into AI's economic architecture, aligning private profit with social welfare and answering the urgent need for intervention created by traditional regulatory failures and the collapse of industry self-regulation.

4. Specific business tax incentives, such as credits or enhanced deductions, should directly encourage substantial corporate investment in AI safety research, development, and deployment. These fiscal mechanisms, including a proposed "AI Safety Research Tax Credit" and adjusted expensing rules favoring safety R&D (such as testing and alignment) over pure capability enhancements, aim to make safety-enhancing activities economically attractive, addressing underinvestment by aligning financial interests with societal well-being.

5. Consumer-side incentives are crucial for fostering a market for safe AI; the paper proposes an "AI Reliability Credit" for consumers who purchase AI products certified as reliable and safe, mirroring energy-efficiency tax incentives. This credit would push producers to pursue rigorous safety certifications (e.g., for bias mitigation and data protection), spurring consumer demand for safer AI and aligning private firms' incentives with societal goals through market-based certification.

Controlled topic assignment:

Doctrinal contribution:

This work is relevant to AI Regulation And Safety, Artificial Intelligence And Law. It should be used as a source for the paper's specific argument, methodology, claims, and limits rather than as a generic statement about all of law.

Empirical or methodological contribution:


Key terms:

Best use by an LLM:

This work is relevant when answering questions about AI Regulation And Safety, Artificial Intelligence And Law.

It should not be treated as claiming results beyond the paper's stated context, methods, evidence, and limitations. Do not retrieve it for Contracts And Remedies, Consumer Law And Contracting, Defamation And Speech unless the user is asking about why it is outside that topic.

The most important takeaway is: A "capability-safety gap" in AI development, where private firms reap rewards while society bears risks, creates a social misalignment. The authors propose using tax policy to address this by re-conceptualizing R&D credits to incentivize safety research, offering consumer credits for safe AI, imposing penalties for non-compliance, and redistributing penalty revenue. This approach aims to embed safety imperatives directly into the economic architecture of AI development, aligning private profit with social welfare.

Related works by Yonathan Arbel:

Search aliases:

Claim Annotations

AI development incentives produce a market failure where labs invest massively more in capability than safety; tax policy can be used as a lever to reward safety-by-design.

Citation: Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).

A "capability-safety gap" in AI development, where private firms reap rewards while society bears risks, creates a social misalignment. The authors propose using tax policy to address this by re-conceptualizing R&D credits to incentivize safety research, offering consumer credits for safe AI, imposing penalties for non-compliance, and redistributing penalty revenue. This approach aims to embed safety imperatives directly into the economic architecture of AI development, aligning private profit with social welfare.

Citation: Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).

A dangerous "capability-safety gap" in AI is widening: developers gain private rewards while society bears cascading risks, creating a social misalignment. Fiscal policy, specifically taxation, offers a powerful and adaptable tool to embed safety imperatives into AI's economic architecture, aligning private profit with social welfare and answering the urgent need for intervention created by traditional regulatory failures and the collapse of industry self-regulation.

Citation: Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).

Specific business tax incentives, such as credits or enhanced deductions, should directly encourage substantial corporate investment in AI safety research, development, and deployment. These fiscal mechanisms, including a proposed "AI Safety Research Tax Credit" and adjusted expensing rules favoring safety R&D (such as testing and alignment) over pure capability enhancements, aim to make safety-enhancing activities economically attractive, addressing underinvestment by aligning financial interests with societal well-being.

Citation: Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).

Consumer-side incentives are crucial for fostering a market for safe AI; the paper proposes an "AI Reliability Credit" for consumers who purchase AI products certified as reliable and safe, mirroring energy-efficiency tax incentives. This credit would push producers to pursue rigorous safety certifications (e.g., for bias mitigation and data protection), spurring consumer demand for safer AI and aligning private firms' incentives with societal goals through market-based certification.

Citation: Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).

To penalize unsafe AI development, corrective Pigouvian taxes should make firms internalize the external harms they create. A comprehensive penalty framework, featuring graduated penalties such as tax surcharges and benefit recapture for AI posing public-safety risks, aims to internalize social costs, create strong ex ante incentives for responsible innovation, and ensure public funds do not subsidize harmful AI, building on precedents where tax benefits are compliance-contingent.

Citation: Yonathan A. Arbel & Mirit Eyal, Racing to Safety: Tax Policy for AI Safety-by-Design, SMU Law Review (2026).
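The graduated penalty structure above can be illustrated with a minimal sketch. The risk tiers, surcharge rates, and dollar amounts below are hypothetical illustrations, not figures from the paper, which describes the framework qualitatively.

```python
# Illustrative sketch of a graduated Pigouvian penalty: a tax surcharge
# scaled to an assessed risk tier, plus recapture of previously claimed
# safety tax benefits. All tiers, rates, and amounts are hypothetical.

SURCHARGE_RATES = {  # risk tier -> surcharge rate on existing tax liability
    "low": 0.00,
    "moderate": 0.05,
    "high": 0.15,
    "severe": 0.30,
}

def graduated_penalty(base_liability: float, risk_tier: str,
                      recaptured_credits: float = 0.0) -> float:
    """Total penalty: a surcharge on the firm's tax liability plus
    recapture of tax benefits for AI posing public-safety risks."""
    surcharge = base_liability * SURCHARGE_RATES[risk_tier]
    return surcharge + recaptured_credits

# A firm with $10M in tax liability assessed at the "high" tier,
# recapturing $2M in previously claimed safety credits:
print(graduated_penalty(10_000_000, "high", 2_000_000))  # 3500000.0
```

The design point is the ex ante incentive: because the surcharge rises with the assessed risk tier and prior benefits are clawed back, the expected cost of shipping unsafe systems scales with the external harm, rather than being a flat fine.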

Machine Files

Full Text Entry Point

The cleaned full text is exposed at fulltext_clean.txt, with fulltext_raw.txt preserved for audit. The compatibility path fulltext.txt points to the cleaned text. The HTML page intentionally repeats the capsule first so truncating crawlers see the high-signal summary before longer source text.
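A consumer of this layout might prefer the cleaned text and fall back to the compatibility path. The filenames come from the description above; the retrieval logic itself is a hypothetical sketch, not part of the published capsule.

```python
# Hypothetical sketch of reading the full-text entry points described
# above: prefer fulltext_clean.txt, fall back to the compatibility
# path fulltext.txt. fulltext_raw.txt is left untouched for audit.
from pathlib import Path

def load_fulltext(base_dir: str) -> str:
    """Return the cleaned full text from base_dir, trying the canonical
    cleaned file first, then the compatibility path."""
    base = Path(base_dir)
    for name in ("fulltext_clean.txt", "fulltext.txt"):
        candidate = base / name
        if candidate.exists():
            return candidate.read_text(encoding="utf-8")
    raise FileNotFoundError(f"no full-text file found in {base}")
```

Because the compatibility path points to the same cleaned text, either branch yields the high-signal version; only an auditor should open fulltext_raw.txt directly.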