A Multimodal Framework for Human-Multi-Agent Interaction

Authors

Shaid Hasan, Breenice Lee, Sujan Sarker, Tariq Iqbal

Abstract

Human-robot interaction is increasingly moving toward multi-robot, socially grounded environments. Existing systems struggle to integrate multimodal perception, embodied expression, and coordinated decision-making in a unified framework. This limits natural and scalable interaction in shared physical spaces. We address this gap by introducing a multimodal framework for human-multi-agent interaction in which each robot operates as an autonomous cognitive agent with integrated multimodal perception and Large Language Model (LLM)-driven planning grounded in embodiment. At the team level, a centralized coordination mechanism regulates turn-taking and agent participation to prevent overlapping speech and conflicting actions. Implemented on two humanoid robots, our framework enables coherent multi-agent interaction through interaction policies that combine speech, gesture, gaze, and locomotion. Representative interaction runs demonstrate coordinated multimodal reasoning across agents and grounded embodied responses. Future work will focus on larger-scale user studies and deeper exploration of socially grounded multi-agent interaction dynamics.
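The abstract's centralized coordination mechanism, which regulates turn-taking so that agents do not produce overlapping speech or conflicting actions, can be pictured with a minimal sketch. The code below is an illustrative assumption, not the authors' implementation: the TurnCoordinator and TurnRequest names, the priority scheme, and the request/grant/release protocol are all hypothetical stand-ins for whatever arbitration the paper actually uses.

```python
# Hypothetical sketch of centralized turn-taking for multiple embodied agents.
# Not taken from the paper: class names, methods, and the priority scheme are assumptions.

from dataclasses import dataclass, field
from typing import Optional
import heapq
import itertools


@dataclass(order=True)
class TurnRequest:
    priority: int                                # lower value = more urgent (e.g., agent addressed directly)
    order: int                                   # tie-breaker preserving FIFO order among equal priorities
    agent_id: str = field(compare=False)
    utterance_plan: str = field(compare=False)   # what the agent intends to say/do if granted the turn


class TurnCoordinator:
    """Centralized arbiter: at most one agent holds the floor at any time."""

    def __init__(self) -> None:
        self._queue: list[TurnRequest] = []
        self._counter = itertools.count()
        self._active: Optional[str] = None       # agent currently holding the floor, if any

    def request_turn(self, agent_id: str, utterance_plan: str, priority: int = 1) -> None:
        # Agents submit requests instead of acting directly, which prevents overlapping speech.
        heapq.heappush(
            self._queue,
            TurnRequest(priority, next(self._counter), agent_id, utterance_plan),
        )

    def grant_next(self) -> Optional[TurnRequest]:
        """Grant the floor to the highest-priority waiting agent, if the floor is free."""
        if self._active is not None or not self._queue:
            return None
        granted = heapq.heappop(self._queue)
        self._active = granted.agent_id
        return granted

    def release(self, agent_id: str) -> None:
        """Called when the granted agent finishes its speech/gesture/locomotion step."""
        if self._active == agent_id:
            self._active = None


if __name__ == "__main__":
    coord = TurnCoordinator()
    coord.request_turn("robot_a", "Greet the visitor", priority=1)
    coord.request_turn("robot_b", "Point toward the exhibit", priority=2)

    turn = coord.grant_next()          # robot_a gets the floor first
    print(turn.agent_id, "->", turn.utterance_plan)
    coord.release("robot_a")

    turn = coord.grant_next()          # only now may robot_b act
    print(turn.agent_id, "->", turn.utterance_plan)
```

A single-floor-holder priority queue is just one simple way to guarantee mutual exclusion across agents; the framework described in the paper may instead weight requests by context (e.g., LLM-derived relevance) or coordinate finer-grained modalities such as gaze and gesture separately.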

Metadata

arXiv ID: 2603.23271
Provider: ARXIV
Primary Category: cs.RO
Published: 2026-03-24
Fetched: 2026-03-25 06:02

Secondary Category: cs.AI
Comment: 4 pages, 3 figures. Accepted at ACM/IEEE HRI 2026 Workshop (MAgicS-HRI)
Links: https://arxiv.org/abs/2603.23271v1 (abstract), https://arxiv.org/pdf/2603.23271v1 (PDF)