
SpecLoop: An Agentic RTL-to-Specification Framework with Formal Verification Feedback Loop

Authors

Fu-Chieh Chang, Yu-Hsin Yang, Hung-Ming Huang, Yun-Chia Hsu, Yin-Yu Lin, Ming-Fang Tsai, Chun-Chih Yang, Pei-Yuan Wu

Abstract

RTL implementations frequently lack up-to-date or consistent specifications, making comprehension, maintenance, and verification costly and error-prone. While prior work has explored generating specifications from RTL using large language models (LLMs), ensuring that the generated documents faithfully capture design intent remains a major challenge. We present SpecLoop, an agentic framework for RTL-to-specification generation with a formal-verification-driven iterative feedback loop. SpecLoop first generates candidate specifications and then reconstructs RTL from these specifications; it then applies formal equivalence checking between the reconstructed RTL and the original design to validate functional consistency. When mismatches are detected, counterexamples are fed back to iteratively refine the specifications until equivalence is proven or no further progress can be made. Experiments across multiple LLMs and RTL benchmarks show that incorporating formal verification feedback substantially improves specification correctness and robustness over LLM-only baselines, demonstrating the effectiveness of verification-guided specification generation.
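The loop the abstract describes can be sketched as follows. This is a minimal illustration only, not the authors' implementation: `generate_spec`, `spec_to_rtl`, and `check_equivalence` are hypothetical stand-ins for the paper's LLM calls and its formal equivalence checker.

```python
def spec_loop(original_rtl, generate_spec, spec_to_rtl, check_equivalence,
              max_iters=5):
    """Refine a candidate specification until RTL reconstructed from it
    is formally equivalent to the original design, or iterations run out.

    generate_spec(rtl, prev_spec, feedback) -> spec        (hypothetical LLM call)
    spec_to_rtl(spec) -> rtl                               (hypothetical LLM call)
    check_equivalence(rtl_a, rtl_b) -> (bool, counterexample)  (formal checker stub)
    """
    spec, feedback = None, None
    for _ in range(max_iters):
        # Generate (or refine) a candidate specification from the design,
        # incorporating any counterexample from the previous iteration.
        spec = generate_spec(original_rtl, spec, feedback)
        # Reconstruct RTL from the candidate specification.
        reconstructed = spec_to_rtl(spec)
        # Formally compare the reconstructed RTL against the original.
        equivalent, counterexample = check_equivalence(original_rtl, reconstructed)
        if equivalent:
            return spec, True
        # Mismatch: feed the counterexample back to guide refinement.
        feedback = counterexample
    return spec, False  # no further progress within the iteration budget
```

A toy run: with stubs where the "LLM" produces a correct spec only after seeing feedback, the loop converges on the second iteration and reports equivalence.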

Metadata

arXiv ID: 2603.02895
Provider: ARXIV
Primary Category: cs.AR
Published: 2026-03-03
Fetched: 2026-03-04 03:41
