
Do What I Say: A Spoken Prompt Dataset for Instruction-Following

Authors

Maike Züfle, Sara Papi, Fabian Retkowski, Szymon Mazurek, Marek Kasztelnik, Alexander Waibel, Luisa Bentivogli, Jan Niehues

Abstract

Speech Large Language Models (SLLMs) have rapidly expanded, supporting a wide range of tasks. These models are typically evaluated using text prompts, which may not reflect real-world scenarios where users interact through speech. To address this gap, we introduce DoWhatISay (DOWIS), a multilingual dataset of human-recorded spoken and written prompts designed to pair with any existing benchmark for realistic evaluation of SLLMs under spoken-instruction conditions. Spanning 9 tasks and 11 languages, it provides 10 prompt variants per task-language pair across five styles. Using DOWIS, we benchmark state-of-the-art SLLMs, analyzing the interplay between prompt modality, style, language, and task type. Results show that text prompts consistently outperform spoken prompts, particularly in low-resource and cross-lingual settings. Only for tasks with speech output do spoken prompts close the gap, highlighting the need for speech-based prompting in SLLM evaluation.
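
The dataset's structure, with prompts indexed by task, language, style, and variant in both written and spoken form, suggests a straightforward pairing procedure with existing benchmarks. The Python sketch below shows one way that pairing might look; PromptVariant, pair_with_benchmark, and all field names are illustrative assumptions, not the authors' released interface.

    from dataclasses import dataclass
    from typing import List

    # Illustrative sketch only: field names are assumptions based on the
    # abstract's description (9 tasks, 11 languages, 5 styles, 10 variants
    # per task-language pair, each prompt available as text and as audio).
    @dataclass
    class PromptVariant:
        task: str        # e.g. "summarization"
        language: str    # e.g. "de"
        style: str       # one of the five prompt styles
        text: str        # written form of the instruction
        audio_path: str  # recording of the same instruction spoken aloud

    def pair_with_benchmark(prompts: List[PromptVariant],
                            benchmark_items: List[dict],
                            task: str,
                            language: str) -> List[dict]:
        """Attach every matching prompt variant to each benchmark item,
        producing one evaluation instance per (prompt, item) pair."""
        selected = [p for p in prompts
                    if p.task == task and p.language == language]
        return [{"instruction_text": p.text,          # text-prompt condition
                 "instruction_audio": p.audio_path,   # spoken-prompt condition
                 "input": item}
                for item in benchmark_items
                for p in selected]

Under a scheme like this, each benchmark item expands into 10 evaluation instances per language (one per prompt variant), and the same instance can be run with either the text or the audio instruction, which is the modality comparison the paper's analysis relies on.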

Metadata

arXiv ID: 2603.09881
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-10
Fetched: 2026-03-11 06:02

