FDARxBench: Benchmarking Regulatory and Clinical Reasoning on FDA Generic Drug Assessment

Authors

Betty Xiong, Jillian Fisher, Benjamin Newman, Meng Hu, Shivangi Gupta, Yejin Choi, Lanyan Fang, Russ B Altman

Abstract

We introduce an expert-curated, real-world benchmark for evaluating document-grounded question answering (QA), motivated by generic drug assessment and built from U.S. Food and Drug Administration (FDA) drug-label documents. Drug labels contain rich but heterogeneous clinical and regulatory information, making accurate question answering difficult for current language models. In collaboration with FDA regulatory assessors, we introduce FDARxBench: we construct a multi-stage pipeline for generating high-quality, expert-curated QA examples spanning factual, multi-hop, and refusal tasks, and we design evaluation protocols that assess both open-book and closed-book reasoning. Experiments across proprietary and open-weight models reveal substantial gaps in factual grounding, long-context retrieval, and safe refusal behavior. While motivated by FDA generic drug assessment needs, the benchmark also provides a challenging, regulatory-grade foundation for evaluating LLM comprehension of drug labels.
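The abstract distinguishes open-book evaluation (the model sees the relevant drug-label text) from closed-book evaluation (the model answers from parametric knowledge alone), and includes refusal items where declining to answer is the correct behavior. A minimal sketch of such a protocol is below; the class, field names, prompt templates, and exact-match/refusal scoring are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class QAExample:
    question: str
    gold_answer: str    # expert-curated reference answer
    label_excerpt: str  # supporting passage from the FDA drug label
    task_type: str      # "factual", "multi-hop", or "refusal"

def build_prompt(ex: QAExample, open_book: bool) -> str:
    """Open-book prompts ground the model in the label text;
    closed-book prompts test parametric knowledge alone."""
    if open_book:
        return (f"Drug label excerpt:\n{ex.label_excerpt}\n\n"
                f"Question: {ex.question}\nAnswer:")
    return f"Question: {ex.question}\nAnswer:"

def score(ex: QAExample, model_answer: str) -> bool:
    """Toy scoring: exact match for factual items; refusal items
    count an explicit decline as correct."""
    if ex.task_type == "refusal":
        lowered = model_answer.lower()
        return "cannot" in lowered or "not stated" in lowered
    return model_answer.strip().lower() == ex.gold_answer.strip().lower()
```

In practice a benchmark like this would likely use a more forgiving grader (normalized string match or LLM-as-judge), but the open-book/closed-book split and the refusal-aware scoring path capture the evaluation axes the abstract describes.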

Metadata

arXiv ID: 2603.19539
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-20
Fetched: 2026-03-23 16:54
