AI LLM February 27, 2026

Preference Packing: Efficient Preference Optimization for Large Language Models

Authors

Jaekyung Cho

Abstract

Resource-efficient training optimization techniques are becoming increasingly important as the size of large language models (LLMs) continues to grow. In particular, batch packing is commonly used in pre-training and supervised fine-tuning to achieve resource-efficient training. We propose preference packing, a method to enhance resource efficiency in training techniques that use data with different responses for the same input prompt, such as reward models or Direct Preference Optimization (DPO). Preference packing improves resource efficiency by reducing the attention operations for duplicate input prompts and decreasing KV cache memory usage. We conducted experiments on both text-only datasets and datasets that include images, achieving at least a 37% reduction in training time. Notably, this method can be applied alongside existing optimization techniques such as batch sorting, resulting in a 3.22x speedup.
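The core idea described in the abstract — computing attention over the shared prompt once per preference pair instead of once per response — can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the function name, segment layout, and position-id scheme are assumptions. A preference pair is packed as `[prompt | chosen | rejected]`, with an attention mask that lets both responses attend to the shared prompt while keeping the two responses invisible to each other.

```python
# Illustrative sketch (not the paper's implementation): pack a DPO
# preference pair that shares one prompt into a single sequence, so the
# prompt's attention/KV computation is done only once.

def pack_preference_pair(prompt, chosen, rejected):
    """Concatenate [prompt | chosen | rejected] token ids and build a
    causal attention mask in which both responses attend to the shared
    prompt but never to each other."""
    tokens = prompt + chosen + rejected
    n, p, c = len(tokens), len(prompt), len(chosen)
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):  # causal: only positions j <= i
            in_prompt = j < p
            same_segment = (
                (p <= i < p + c and p <= j < p + c)  # both in chosen
                or (i >= p + c and j >= p + c)       # both in rejected
                or i < p                             # prompt attends to itself
            )
            if in_prompt or same_segment:
                mask[i][j] = True
    # Position ids restart after the prompt for each response, so a
    # position-dependent encoding sees each response as immediately
    # following the prompt (an assumed but common packing convention).
    positions = (list(range(p))
                 + list(range(p, p + c))
                 + list(range(p, p + len(rejected))))
    return tokens, mask, positions

tokens, mask, positions = pack_preference_pair([1, 2, 3], [10, 11], [20, 21, 22])
assert mask[4][3] is True    # chosen token attends within chosen
assert mask[5][3] is False   # rejected token cannot see chosen
assert mask[5][2] is True    # rejected token sees the shared prompt
assert positions == [0, 1, 2, 3, 4, 3, 4, 5]
```

Compared with duplicating the prompt for the chosen and rejected sequences, this packing stores the prompt's keys and values once, which is where the claimed attention-operation and KV-cache savings would come from.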

Metadata

arXiv ID: 2602.24082
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-02-27
Fetched: 2026-03-02 06:04
