AI · LLM · March 10, 2026

ActiveUltraFeedback: Efficient Preference Data Generation using Active Learning

Authors

Davit Melikidze, Marian Schneider, Jessica Lam, Martin Wertich, Ido Hakimi, Barna Pásztor, Andreas Krause

Abstract

Reinforcement Learning from Human Feedback (RLHF) has become the standard for aligning Large Language Models (LLMs), yet its efficacy is bottlenecked by the high cost of acquiring preference data, especially in low-resource and expert domains. To address this, we introduce ACTIVEULTRAFEEDBACK, a modular active learning pipeline that leverages uncertainty estimates to dynamically identify the most informative responses for annotation. Our pipeline facilitates the systematic evaluation of standard response selection methods alongside DOUBLE REVERSE THOMPSON SAMPLING (DRTS) and DELTAUCB, two novel methods prioritizing response pairs with large predicted quality gaps, leveraging recent results showing that such pairs provide good signals for fine-tuning. Our experiments demonstrate that ACTIVEULTRAFEEDBACK yields high-quality datasets that lead to significant improvements in downstream performance, notably achieving comparable or superior results with as little as one-sixth of the annotated data relative to static baselines. Our pipeline is available at https://github.com/lasgroup/ActiveUltraFeedback and our preference datasets at https://huggingface.co/ActiveUltraFeedback.
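The abstract describes DRTS and DeltaUCB only at a high level: prioritize response pairs with a large predicted quality gap, using uncertainty estimates. As a rough illustration of that gap-plus-uncertainty idea (not the authors' actual algorithm; the function name, the scoring inputs, and the additive UCB-style bonus are all assumptions), a DeltaUCB-flavored pair selection might look like:

```python
import itertools

def delta_ucb_pair(scores, stds, beta=1.0):
    """Pick the pair of candidate responses maximizing the predicted
    quality gap plus an uncertainty bonus (UCB-style optimism).
    This is a sketch of the 'large predicted gap' criterion the
    abstract describes, not the paper's implementation.

    scores: per-response reward-model mean estimates
    stds:   per-response uncertainty estimates
    beta:   exploration weight trading off gap vs. uncertainty
    """
    best_pair, best_val = None, float("-inf")
    for i, j in itertools.combinations(range(len(scores)), 2):
        gap = abs(scores[i] - scores[j])    # predicted quality gap
        bonus = beta * (stds[i] + stds[j])  # optimism under uncertainty
        if gap + bonus > best_val:
            best_pair, best_val = (i, j), gap + bonus
    return best_pair

# Four hypothetical candidate responses to one prompt:
scores = [0.2, 0.9, 0.5, 0.4]
stds   = [0.3, 0.1, 0.05, 0.2]
print(delta_ucb_pair(scores, stds))  # → (0, 1)
```

Here responses 0 and 1 win because they combine the largest predicted gap (0.7) with substantial uncertainty; annotating clearly separated, uncertain pairs is the intuition the abstract attributes to recent fine-tuning results.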

Metadata

arXiv ID: 2603.09692
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-10
Fetched: 2026-03-11 06:02
Categories: cs.LG, cs.AI, cs.CL
Comment: 35 pages, 6 figures, 24 tables