AI · LLM · February 20, 2026

WorkflowPerturb: Calibrated Stress Tests for Evaluating Multi-Agent Workflow Metrics

Authors

Madhav Kanda, Pedro Las-Casas, Alok Gautam Kumbhare, Rodrigo Fonseca, Sharad Agarwal

Abstract

LLM-based systems increasingly generate structured workflows for complex tasks. In practice, automatic evaluation of these workflows is difficult, because metric scores are often not calibrated, and score changes do not directly communicate the severity of workflow degradation. We introduce WorkflowPerturb, a controlled benchmark for studying workflow evaluation metrics. It works by applying realistic, controlled perturbations to golden workflows. WorkflowPerturb contains 4,973 golden workflows and 44,757 perturbed variants across three perturbation types (Missing Steps, Compressed Steps, and Description Changes), each applied at severity levels of 10%, 30%, and 50%. We benchmark multiple metric families and analyze their sensitivity and calibration using expected score trajectories and residuals. Our results characterize systematic differences across metric families and support severity-aware interpretation of workflow evaluation scores. Our dataset will be released upon acceptance.
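To make the benchmark construction concrete, here is a minimal sketch of one perturbation family, Missing Steps, applied at a fixed severity level. The function name, the list-of-strings workflow representation, and the sampling scheme are illustrative assumptions, not the paper's actual schema; the abstract specifies only the perturbation types and the 10%/30%/50% severity levels.

```python
import math
import random

def perturb_missing_steps(workflow, severity, seed=0):
    """Drop a `severity` fraction of steps from a golden workflow.

    `workflow` is a list of step descriptions; `severity` is one of
    0.1, 0.3, 0.5, matching the paper's three severity levels.
    This representation is a simplifying assumption for illustration.
    """
    rng = random.Random(seed)
    # Remove at least one step so every perturbed variant differs
    # from its golden workflow, even at low severity.
    n_drop = max(1, math.floor(len(workflow) * severity))
    dropped = set(rng.sample(range(len(workflow)), n_drop))
    # Keep the surviving steps in their original order.
    return [step for i, step in enumerate(workflow) if i not in dropped]

golden = ["fetch data", "clean data", "train model", "evaluate", "report"]
perturbed = perturb_missing_steps(golden, severity=0.3)
print(len(golden), len(perturbed))  # 5 steps -> 4 steps remain
```

A calibrated metric, in the sense the abstract describes, would assign scores to such variants that decrease predictably as severity moves from 10% to 50%, so that the residual between the observed score and the expected score trajectory stays small.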

Metadata

arXiv ID: 2602.17990
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-02-20
Fetched: 2026-02-23 05:33
