March 16, 2026

Algorithms for Deciding the Safety of States in Fully Observable Non-deterministic Problems: Technical Report

Authors

Johannes Schmalz, Chaahat Jain

Abstract

Learned action policies are increasingly popular in sequential decision-making, but suffer from a lack of safety guarantees. Recent work introduced a pipeline for testing the safety of such policies under initial-state and action-outcome non-determinism. At the pipeline's core is the problem of deciding whether a state is safe (i.e., a safe policy exists from that state) and of finding faults, which are state-action pairs that transition from a safe state to an unsafe one. Their most effective algorithm for deciding safety, TarjanSafe, performs well on their benchmarks, but we show that it has exponential worst-case runtime with respect to the state space. A linear-time alternative exists, but it is slower in practice. We close this gap with a new policy-iteration algorithm, iPI, that combines the best of both: it matches TarjanSafe's best-case runtime while guaranteeing a polynomial worst case. Experiments confirm our theory and show that in problems amenable to TarjanSafe, iPI has similar performance, whereas in ill-suited problems iPI scales exponentially better.
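The abstract's notion of a fault can be illustrated with a minimal sketch. Assuming a FOND problem given as a mapping from state-action pairs to their sets of possible successor states, and a precomputed set of safe states, a fault is any state-action pair that starts in a safe state but may non-deterministically land in an unsafe one. The names `transitions`, `safe_states`, and `find_faults` below are illustrative, not from the paper:

```python
def find_faults(transitions, safe_states):
    """Return state-action pairs that can move from a safe state to an
    unsafe one under some non-deterministic action outcome (a "fault")."""
    faults = []
    for (state, action), successors in transitions.items():
        if state in safe_states and any(s not in safe_states for s in successors):
            faults.append((state, action))
    return faults

# Tiny example: states 0 and 1 are safe, state 2 is unsafe;
# action 'a' taken in state 1 may non-deterministically reach state 2.
transitions = {
    (0, 'a'): {1},
    (1, 'a'): {0, 2},  # one possible outcome lands in the unsafe state 2
}
safe_states = {0, 1}
print(find_faults(transitions, safe_states))  # [(1, 'a')]
```

Note that this sketch presupposes the safe-state set; computing that set efficiently is exactly the problem the paper's algorithms (TarjanSafe and iPI) address.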

Metadata

arXiv ID: 2603.15282
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-16
Fetched: 2026-03-17 06:02
