Paper
KARL: Knowledge Agents via Reinforcement Learning
Authors
Jonathan D. Chang, Andrew Drozdov, Shubham Toshniwal, Owen Oertell, Alexander Trott, Jacob Portes, Abhay Gupta, Pallavi Koppol, Ashutosh Baheti, Sean Kulinski, Ivan Zhou, Irene Dea, Krista Opsahl-Ong, Simon Favreau-Lessard, Sean Owen, Jose Javier Gonzalez Ortiz, Arnav Singhvi, Xabi Andrade, Cindy Wang, Kartik Sreenivasan, Sam Havens, Jialu Liu, Peyton DeNiro, Wen Sun, Michael Bendersky, Jonathan Frankle
Abstract
We present a system for training enterprise search agents via reinforcement learning that achieves state-of-the-art performance across a diverse suite of hard-to-verify agentic search tasks. Our work makes four core contributions. First, we introduce KARLBench, a multi-capability evaluation suite spanning six distinct search regimes, including constraint-driven entity search, cross-document report synthesis, tabular numerical reasoning, exhaustive entity retrieval, procedural reasoning over technical documentation, and fact aggregation over internal enterprise notes. Second, we show that models trained across heterogeneous search behaviors generalize substantially better than those optimized for any single benchmark. Third, we develop an agentic synthesis pipeline that employs long-horizon reasoning and tool use to generate diverse, grounded, and high-quality training data, with iterative bootstrapping from increasingly capable models. Fourth, we propose a new post-training paradigm based on iterative large-batch off-policy RL that is sample efficient, robust to train-inference engine discrepancies, and naturally extends to multi-task training with out-of-distribution generalization. Compared to Claude 4.6 and GPT 5.2, KARL is Pareto-optimal on KARLBench across cost-quality and latency-quality trade-offs, including tasks that were out-of-distribution during training. With sufficient test-time compute, it surpasses the strongest closed models. These results show that tailored synthetic data in combination with multi-task reinforcement learning enables cost-efficient and high-performing knowledge agents for grounded reasoning.
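The abstract names iterative large-batch off-policy RL as the post-training paradigm but gives no implementation details. Below is a minimal sketch of that training pattern on a toy contextual bandit: collect one large batch of rollouts with a frozen policy snapshot, then take several importance-weighted, clipped gradient steps on that batch before refreshing the snapshot. The toy task, the softmax-linear policy, the PPO-style clipping rule, and every hyperparameter are illustrative assumptions, not details taken from the paper.

# A minimal sketch of iterative large-batch off-policy RL on a toy contextual
# bandit. The paper's abstract does not specify the algorithm; the task,
# policy parameterization, clipping rule, and hyperparameters below are
# illustrative assumptions, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)
N_CTX, N_ACT, DIM = 8, 4, 8          # toy problem sizes (assumed)
CTX = rng.normal(size=(N_CTX, DIM))  # fixed context features
OPT = rng.integers(0, N_ACT, N_CTX)  # "correct" action per context

def reward(ctx_idx, action):
    # Sparse verifiable reward: 1 if the sampled action matches the target.
    return float(action == OPT[ctx_idx])

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def policy_probs(W, ctx_idx):
    # Softmax-linear policy over actions, conditioned on the context features.
    return softmax(CTX[ctx_idx] @ W)          # shape (batch, N_ACT)

W = rng.normal(scale=0.01, size=(DIM, N_ACT))  # learner parameters
LR, CLIP, BATCH, EPOCHS, ITERS = 0.5, 0.2, 2048, 4, 20

for it in range(ITERS):
    # 1) Freeze a snapshot of the policy and collect one large batch with it.
    W_old = W.copy()
    ctx = rng.integers(0, N_CTX, BATCH)
    p_old = policy_probs(W_old, ctx)
    acts = np.array([rng.choice(N_ACT, p=p) for p in p_old])
    rews = np.array([reward(c, a) for c, a in zip(ctx, acts)])
    adv = rews - rews.mean()                  # baseline-subtracted advantage

    # 2) Take several off-policy updates on that same batch (the data is stale
    #    after the first step), correcting with a clipped importance ratio.
    for _ in range(EPOCHS):
        p_new = policy_probs(W, ctx)
        ratio = p_new[np.arange(BATCH), acts] / p_old[np.arange(BATCH), acts]
        # PPO-style pessimistic clipping: samples whose gradient the clipping
        # zeroes out are masked rather than updated.
        use = np.where(
            ((adv > 0) & (ratio > 1 + CLIP)) | ((adv < 0) & (ratio < 1 - CLIP)),
            0.0, 1.0)
        # d log pi(a|s) / dW for the softmax-linear policy.
        onehot = np.eye(N_ACT)[acts]
        grad_logp = CTX[ctx][:, :, None] * (onehot - p_new)[:, None, :]
        g = (use * ratio * adv)[:, None, None] * grad_logp
        W += LR * g.mean(axis=0)              # gradient ascent on the surrogate

    print(f"iter {it:02d}  mean reward {rews.mean():.2f}")

The clipped importance ratio is the standard way to tolerate a mismatch between the distribution that generated the data and the policy being updated; that is plausibly how a method of this kind absorbs the train-inference engine discrepancies the abstract mentions, though that reading is an inference rather than something the abstract states.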
Metadata
arXiv ID: 2603.05218v1
Published: 2026-03-05
Categories: cs.AI (primary), cs.LG
Comments: 77 pages, 43 figures, 17 tables
Related papers
Cosmic Shear in Effective Field Theory at Two-Loop Order: Revisiting $S_8$ in Dark Energy Survey Data
Shi-Fan Chen, Joseph DeRose, Mikhail M. Ivanov, Oliver H. E. Philcox • 2026-03-30
Stop Probing, Start Coding: Why Linear Probes and Sparse Autoencoders Fail at Compositional Generalisation
Vitória Barin Pacela, Shruti Joshi, Isabela Camacho, Simon Lacoste-Julien, Da... • 2026-03-30
SNID-SAGE: A Modern Framework for Interactive Supernova Classification and Spectral Analysis
Fiorenzo Stoppa, Stephen J. Smartt • 2026-03-30
Acoustic-to-articulatory Inversion of the Complete Vocal Tract from RT-MRI with Various Audio Embeddings and Dataset Sizes
Sofiane Azzouz, Pierre-André Vuissoz, Yves Laprie • 2026-03-30
Rotating black hole shadows in metric-affine bumblebee gravity
Jose R. Nascimento, Ana R. M. Oliveira, Albert Yu. Petrov, Paulo J. Porfírio,... • 2026-03-30