

SignAgent: Agentic LLMs for Linguistically-Grounded Sign Language Annotation and Dataset Curation

Authors

Oliver Cory, Ozge Mercanoglu Sincan, Richard Bowden

Abstract

This paper introduces SignAgent, a novel agentic framework that utilises Large Language Models (LLMs) for scalable, linguistically-grounded Sign Language (SL) annotation and dataset curation. Traditional computational methods for SLs often operate at the gloss level, overlooking crucial linguistic nuances, while manual linguistic annotation remains a significant bottleneck, proving too slow and expensive for the creation of large-scale, phonologically-aware datasets. SignAgent addresses these challenges through the SignAgent Orchestrator, a reasoning LLM that coordinates a suite of linguistic tools, and SignGraph, a knowledge-grounded LLM that provides lexical and linguistic grounding. We evaluate our framework on two downstream annotation tasks. The first is Pseudo-gloss Annotation, in which the agent performs constrained assignment, using multi-modal evidence to extract and order suitable gloss labels for signed sequences. The second is ID Glossing, in which the agent detects and refines visual clusters by reasoning over both visual similarity and phonological overlap to correctly identify and group lexical sign variants. Our results demonstrate that our agentic approach achieves strong performance for large-scale, linguistically-aware data annotation and curation.
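The ID Glossing task described above hinges on combining two signals before grouping candidate sign variants: visual similarity and phonological overlap. The following is a minimal illustrative sketch of that idea, not the authors' actual method; the cluster representation, the cosine/Jaccard measures, and the thresholds are all assumptions chosen for illustration.

```python
# Illustrative sketch (NOT the paper's implementation): merge candidate sign
# clusters only when visual similarity AND phonological overlap both agree.
from dataclasses import dataclass, field

@dataclass
class SignCluster:
    gloss: str
    embedding: tuple                              # toy visual descriptor
    phonemes: set = field(default_factory=set)    # e.g. handshape/location codes

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def phonological_overlap(p, q):
    # Jaccard overlap of phonological feature sets.
    return len(p & q) / len(p | q) if p | q else 0.0

def should_merge(c1, c2, vis_thresh=0.9, phon_thresh=0.5):
    """Merge two clusters only when BOTH signals are high: visual
    similarity alone can conflate phonologically distinct signs."""
    return (cosine(c1.embedding, c2.embedding) >= vis_thresh
            and phonological_overlap(c1.phonemes, c2.phonemes) >= phon_thresh)

a = SignCluster("BOOK",     (0.90, 0.10), {"flat-hand", "chest", "open"})
b = SignCluster("BOOK-var", (0.88, 0.12), {"flat-hand", "chest", "close"})
c = SignCluster("DOOR",     (0.87, 0.15), {"b-hand", "neutral", "twist"})

print(should_merge(a, b))  # True: variants of the same lexical sign
print(should_merge(a, c))  # False: visually close, phonologically distinct
```

The design point the abstract emphasises is captured by the conjunction in `should_merge`: the second pair is nearly as close visually as the first, but zero phonological overlap blocks the merge.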

Metadata

arXiv ID: 2603.19059
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-19
Fetched: 2026-03-20 06:02
