AI · LLM · March 03, 2026

Compact Prompting in Instruction-tuned LLMs for Joint Argumentative Component Detection

Authors

Sofiane Elguendouze, Erwan Hain, Elena Cabrio, Serena Villata

Abstract

Argumentative component detection (ACD) is a core subtask of Argument(ation) Mining (AM) and one of its most challenging aspects, as it requires jointly delimiting argumentative spans and classifying them into components such as claims and premises. While research on this subtask remains relatively limited compared to other AM tasks, most existing approaches formulate it as a simplified sequence labeling problem, component classification, or a pipeline of component segmentation followed by classification. In this paper, we propose a novel approach based on instruction-tuned Large Language Models (LLMs) using compact instruction-based prompts, and reframe ACD as a language generation task, enabling arguments to be identified directly from plain text without relying on pre-segmented components. Experiments on standard benchmarks show that our approach achieves higher performance compared to state-of-the-art systems. To the best of our knowledge, this is one of the first attempts to fully model ACD as a generative task, highlighting the potential of instruction tuning for complex AM problems.
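The abstract describes reframing ACD as text generation: a compact instruction prompt asks the model to emit the input with argumentative spans marked, rather than labeling pre-segmented components. A minimal sketch of that framing is below; the prompt wording, tag format, and helper functions are illustrative assumptions, not the paper's actual prompts.

```python
import re

def build_prompt(text: str) -> str:
    """Wrap raw input text in a compact instruction (hypothetical wording)."""
    instruction = (
        "Identify the argumentative components in the text. "
        "Rewrite it, enclosing each claim in <claim>...</claim> "
        "and each premise in <premise>...</premise>."
    )
    return f"{instruction}\n\nText: {text}"

def parse_components(generated: str) -> list[tuple[str, str]]:
    """Recover (label, span) pairs from a tagged generation."""
    pattern = re.compile(r"<(claim|premise)>(.*?)</\1>", re.DOTALL)
    return [(m.group(1), m.group(2).strip()) for m in pattern.finditer(generated)]

# Parsing a mock model generation (not real system output):
output = ("<claim>School uniforms should be mandatory</claim> because "
          "<premise>they reduce peer pressure</premise>.")
print(parse_components(output))
```

Under this framing, span delimitation and component classification happen jointly in one generation pass, which is what lets the system skip a separate segmentation stage.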

Metadata

arXiv ID: 2603.03095
Provider: ARXIV
Primary Category: cs.CL
Categories: cs.CL, cs.AI
Comment: Under Review (COLM 2026)
Published: 2026-03-03
Fetched: 2026-03-04 03:41
