AI LLM March 05, 2026

FireBench: Evaluating Instruction Following in Enterprise and API-Driven LLM Applications

Authors

Yunfan Zhang, Yijie Bei, Jetashree Ravi, Pawel Garbacki

Abstract

Instruction following is critical for LLMs deployed in enterprise and API-driven settings, where strict adherence to output formats, content constraints, and procedural requirements is essential for enabling reliable LLM-assisted workflows. However, existing instruction following benchmarks predominantly evaluate natural language generation constraints that reflect the needs of chat assistants rather than enterprise users. To bridge this gap, we introduce FireBench, an LLM instruction following benchmark grounded in real-world enterprise and API usage patterns. FireBench evaluates six core capability dimensions across diverse applications including information extraction, customer support, and coding agents, comprising over 2,400 samples. We evaluate 11 LLMs and present key findings on their instruction following behavior in enterprise scenarios. We open-source FireBench at fire-bench.com to help users assess model suitability, support model developers in diagnosing performance, and invite community contributions.
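The strict output-format adherence the abstract describes can be sketched as a simple programmatic check. This is a minimal, hypothetical illustration: the sample task, field names, and checker below are assumptions for illustration, not FireBench's actual schema or verifiers.

```python
import json

def check_json_instruction(output: str, required_keys: set) -> bool:
    """Return True if `output` is valid JSON whose top-level object
    contains exactly the required keys (a hypothetical verifier)."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        # Chat-style preambles or trailing prose break strict parsers.
        return False
    return isinstance(data, dict) and set(data.keys()) == required_keys

# Hypothetical extraction task requiring exactly "name" and "date".
good = '{"name": "Acme Corp", "date": "2026-03-05"}'
bad = 'Sure! Here is the JSON: {"name": "Acme Corp"}'
print(check_json_instruction(good, {"name", "date"}))  # True
print(check_json_instruction(bad, {"name", "date"}))   # False
```

A chat assistant's helpful preamble ("Sure! Here is the JSON:") is harmless to a human reader but fails this check outright, which is the gap between chat-oriented and API-driven instruction following that the benchmark targets.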

Metadata

arXiv ID: 2603.04857
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-05
Fetched: 2026-03-06 14:20
