Paper
Verify Implementation Equivalence of Large Models
Authors
Qi Zhan, Xing Hu, Xin Xia, Shanping Li
Abstract
Verifying whether two implementations of the same large model are equivalent across frameworks is difficult in practice. Even when they realize the same computation, their graphs may differ substantially in operator decomposition, tensor layout, and the use of fused or opaque kernels, making manual rewrite rules hard to build and maintain. We present Emerge, a framework for checking Implementation Equivalence over computation graphs of large-model implementations. Instead of writing rules manually, Emerge represents the two implementations in an e-graph, infers candidate relations from execution values, and synthesizes rewrite rules on demand when existing rules are insufficient. Each synthesized rule is validated using the strongest applicable method, including SMT-based checking for symbolically tractable cases and constraint-aware randomized testing for opaque kernels, and then propagated through e-graph rebuilding to establish larger equivalences. Our current implementation targets inference computation graphs captured from HuggingFace Transformers and vLLM. Our evaluation shows that Emerge establishes equivalence for correct implementation pairs at practical cost, while also providing useful by-products for debugging: it detects 10 of 13 known implementation bugs and uncovers 8 previously unknown implementation issues that were later confirmed by developers. In addition, Emerge synthesizes block-level rules that compare favorably with manually authored ones.
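The validation step the abstract mentions for opaque kernels can be illustrated with a toy version of constraint-aware randomized testing: given two candidate implementations of the same block, sample inputs satisfying the kernel's constraints and check that outputs agree within a floating-point tolerance. The sketch below is purely illustrative and not Emerge's code; the two softmax variants are hypothetical stand-ins for a decomposed graph and a fused kernel.

```python
import math
import random

def softmax_direct(xs):
    # Decomposed form: elementwise exp, then normalize by the sum.
    es = [math.exp(x) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def softmax_stable(xs):
    # "Fused"-style variant: subtract the max first for numerical stability.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def randomized_equivalence_check(f, g, n_trials=1000, dim=8, tol=1e-9, seed=0):
    """Sample inputs under a simple constraint (bounded reals, standing in
    for real kernel input constraints) and report whether f and g agree
    within tol on every trial; return a counterexample input if not."""
    rng = random.Random(seed)
    for _ in range(n_trials):
        xs = [rng.uniform(-10.0, 10.0) for _ in range(dim)]
        ys, zs = f(xs), g(xs)
        if any(abs(y - z) > tol for y, z in zip(ys, zs)):
            return False, xs  # counterexample found
    return True, None

ok, counterexample = randomized_equivalence_check(softmax_direct, softmax_stable)
```

A rule that survives such testing would then, per the abstract, be merged into the e-graph so that larger subgraphs containing the two variants become provably equivalent.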
Metadata
arXiv: 2603.21851v1 [cs.SE]
Published: 2026-03-23
PDF: https://arxiv.org/pdf/2603.21851v1
Related papers
Fractal universe and quantum gravity made simple
Fabio Briscese, Gianluca Calcagni • 2026-03-25
POLY-SIM: Polyglot Speaker Identification with Missing Modality Grand Challenge 2026 Evaluation Plan
Marta Moscati, Muhammad Saad Saeed, Marina Zanoni, Mubashir Noman, Rohan Kuma... • 2026-03-25
LensWalk: Agentic Video Understanding by Planning How You See in Videos
Keliang Li, Yansong Li, Hongze Shen, Mengdi Liu, Hong Chang, Shiguang Shan • 2026-03-25
Orientation Reconstruction of Proteins using Coulomb Explosions
Tomas André, Alfredo Bellisario, Nicusor Timneanu, Carl Caleman • 2026-03-25
The role of spatial context and multitask learning in the detection of organic and conventional farming systems based on Sentinel-2 time series
Jan Hemmerling, Marcel Schwieder, Philippe Rufin, Leon-Friedrich Thomas, Mire... • 2026-03-25