Paper
Counterexample Guided Branching via Directional Relaxation Analysis in Complete Neural Network Verification
Authors
Jingyang Li, Fu Song, Guoqiang Li
Abstract
Deep Neural Networks demonstrate exceptional performance but remain vulnerable to adversarial perturbations, necessitating formal verification for safety-critical deployment. To address the computational complexity of this task, researchers often employ abstraction-refinement techniques that iteratively tighten an over-approximated model. While structural methods utilize Counterexample-Guided Abstraction Refinement, state-of-the-art dataflow verifiers typically rely on Branch-and-Bound to refine numerical convex relaxations. However, current dataflow approaches operate with blind refinement processes that rely on static heuristics and fail to leverage specific diagnostic information from verification failures. In this work, we argue that Branch-and-Bound should be reformulated as a Dataflow CEGAR loop where the spurious counterexample serves as a precise witness to local abstraction errors. We propose DRG-BaB, a framework that introduces the Directional Relaxation Gap heuristic to prioritize branching on neurons actively contributing to falsification in the abstract domain. By deriving a closed-form spurious counterexample directly from linear bounds, our method transforms generic search into targeted refinement. Experiments on high-dimensional benchmarks demonstrate that this approach significantly reduces search tree size and verification time compared to established baselines.
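The two ingredients the abstract names can be sketched in a few lines: a linear lower bound c^T x + b over an input box admits a closed-form minimizer (the spurious counterexample), and unstable ReLU neurons can then be scored by how much their convex relaxation over-approximates the true activation at that point. The following is a minimal, hypothetical sketch — the function names, the use of the triangle relaxation, and the sensitivity weighting are our assumptions, not the paper's exact formulation:

```python
import numpy as np

def spurious_counterexample(c, b, lo, hi):
    """Closed-form minimizer of the linear lower bound c^T x + b over box [lo, hi].

    The minimum of a linear function over a box is attained at a vertex:
    pick lo_i where c_i > 0 and hi_i otherwise. If the resulting bound value
    is negative while the true network output is not, the point is spurious.
    """
    x = np.where(c > 0, lo, hi)
    return x, float(c @ x + b)

def directional_relaxation_gap(pre_act, pre_lo, pre_hi, sensitivity):
    """Score each ReLU neuron by its relaxation error at the candidate point.

    For an unstable neuron with pre-activation bounds [l, u] (l < 0 < u), the
    triangle (LP) relaxation's upper face is u/(u-l) * (z - l); its gap to the
    exact ReLU(z), weighted by the neuron's directional sensitivity (e.g. its
    coefficient in the bounding pass), estimates how much branching on that
    neuron would tighten the bound along the falsifying direction.
    """
    unstable = (pre_lo < 0) & (pre_hi > 0)
    slope = np.where(unstable, pre_hi / np.where(unstable, pre_hi - pre_lo, 1.0), 0.0)
    relaxed = slope * (pre_act - pre_lo)
    exact = np.maximum(pre_act, 0.0)
    return np.abs(sensitivity) * (relaxed - exact) * unstable
```

Under this sketch, the branching decision is simply `np.argmax` over the gap scores, so the counterexample drives refinement toward the neurons that actually produced the spurious violation rather than a static heuristic order.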
Metadata
arXiv: 2603.14823v1 (https://arxiv.org/abs/2603.14823v1)
Published: 2026-03-16
Primary category: cs.SE
Comment: 21 pages, 2 figures
Related papers
Fractal universe and quantum gravity made simple
Fabio Briscese, Gianluca Calcagni • 2026-03-25
POLY-SIM: Polyglot Speaker Identification with Missing Modality Grand Challenge 2026 Evaluation Plan
Marta Moscati, Muhammad Saad Saeed, Marina Zanoni, Mubashir Noman, Rohan Kuma... • 2026-03-25
LensWalk: Agentic Video Understanding by Planning How You See in Videos
Keliang Li, Yansong Li, Hongze Shen, Mengdi Liu, Hong Chang, Shiguang Shan • 2026-03-25
Orientation Reconstruction of Proteins using Coulomb Explosions
Tomas André, Alfredo Bellisario, Nicusor Timneanu, Carl Caleman • 2026-03-25
The role of spatial context and multitask learning in the detection of organic and conventional farming systems based on Sentinel-2 time series
Jan Hemmerling, Marcel Schwieder, Philippe Rufin, Leon-Friedrich Thomas, Mire... • 2026-03-25