AI LLM February 19, 2026

When to Trust the Cheap Check: Weak and Strong Verification for Reasoning

Authors

Shayan Kiyani, Sima Noorani, George Pappas, Hamed Hassani

Abstract

Reasoning with LLMs increasingly unfolds inside a broader verification loop. Internally, systems use cheap checks, such as self-consistency or proxy rewards, which we call weak verification. Externally, users inspect outputs and steer the model through feedback until results are trustworthy, which we call strong verification. These signals differ sharply in cost and reliability: strong verification can establish trust but is resource-intensive, while weak verification is fast and scalable but noisy and imperfect. We formalize this tension through weak–strong verification policies, which decide when to accept or reject based on weak verification and when to defer to strong verification. We introduce metrics capturing incorrect acceptance, incorrect rejection, and strong-verification frequency. At the population level, we show that optimal policies admit a two-threshold structure and that calibration and sharpness govern the value of weak verifiers. Building on this, we develop an online algorithm that provably controls acceptance and rejection errors without assumptions on the query stream, the language model, or the weak verifier.
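The two-threshold structure described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the function name, the score range, and the threshold values are all hypothetical, and the real policy's thresholds would be chosen (or adapted online) to control the acceptance and rejection error rates.

```python
def two_threshold_policy(weak_score: float, t_reject: float, t_accept: float) -> str:
    """Decide using a weak verifier's score, assumed to lie in [0, 1].

    Accept when the weak signal is confidently high, reject when it is
    confidently low, and otherwise defer to costly strong verification
    (e.g., human inspection). Hypothetical sketch of the two-threshold
    structure; thresholds here are placeholders.
    """
    assert 0.0 <= t_reject <= t_accept <= 1.0, "thresholds must be ordered"
    if weak_score >= t_accept:
        return "accept"   # trust the cheap check
    if weak_score <= t_reject:
        return "reject"   # cheap check confidently flags a failure
    return "defer"        # escalate to strong verification

# Example decisions with placeholder thresholds t_reject=0.2, t_accept=0.9:
print(two_threshold_policy(0.95, 0.2, 0.9))  # accept
print(two_threshold_policy(0.05, 0.2, 0.9))  # reject
print(two_threshold_policy(0.50, 0.2, 0.9))  # defer
```

Widening the gap between the two thresholds trades more frequent (expensive) strong verification for fewer incorrect acceptances and rejections, which is exactly the tension the paper's metrics capture.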

Metadata

arXiv ID: 2602.17633
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-02-19
Fetched: 2026-02-21 18:51
