AI LLM March 04, 2026

Lightweight Visual Reasoning for Socially-Aware Robots

Authors

Alessio Galatolo, Ronald Cumbal, Alexandros Rouchitsas, Katie Winkle, Didem Gürdür Broo, Ginevra Castellano

Abstract

Robots operating in shared human environments must not only navigate, interact with, and perceive their surroundings, but also interpret and respond to dynamic, and often unpredictable, human behaviours. Although recent advances have shown promise in enhancing robotic perception and instruction-following using Vision-Language Models (VLMs), they remain limited in addressing the complexities of multimodal human-robot interaction (HRI). Motivated by this challenge, we introduce a lightweight language-to-vision feedback module that closes the loop between the LLM and the vision encoder in VLMs. The module projects image-token hidden states through a gated Multi-Layer Perceptron (MLP) back into the encoder input, prompting a second pass that reinterprets the scene under text context. We evaluate this approach on three robotics-centred tasks: navigation in a simulated environment (Habitat), sequential scene description (Mementos-Robotics), and human-intention recognition (our HRI dataset). Results show that our method improves Qwen 2.5 (7B) by $3.3\%$ (less distance), $+0.057$ description score, and $+2.93\%$ accuracy, with less than $3\%$ extra parameters; Gemma 3 (4B) and LLaVA OV 1.5 (4B) show mixed navigation results but gain $+0.111,+0.055$ and $+10.81\%,+4.79\%$ on the latter two tasks. Code is available at https://github.com/alessioGalatolo/VLM-Reasoning-for-Robotics
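
The gated feedback mechanism described in the abstract can be sketched as a small PyTorch module. The class name, hidden sizes, and the two-pass wiring below are illustrative assumptions based only on the abstract, not the authors' implementation (see the linked repository for that).

# Minimal sketch (PyTorch) of a gated language-to-vision feedback module.
# All names, dimensions, and the wiring into a specific VLM are assumptions.
import torch
import torch.nn as nn


class GatedFeedbackMLP(nn.Module):
    """Projects LLM image-token hidden states back to the vision-encoder
    input space, with a gate that controls how much feedback is injected."""

    def __init__(self, llm_dim: int = 3584, vis_dim: int = 1152, hidden: int = 1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(llm_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, vis_dim),
        )
        # Per-channel gate initialised at zero: the second encoding pass starts
        # from the original visual input and learns how much text-conditioned
        # feedback to add.
        self.gate = nn.Parameter(torch.zeros(vis_dim))

    def forward(self, image_token_states: torch.Tensor, vis_inputs: torch.Tensor) -> torch.Tensor:
        # image_token_states: (B, N_img, llm_dim) hidden states taken at the
        #                     image-token positions after the first LLM pass.
        # vis_inputs:         (B, N_img, vis_dim) original vision-encoder inputs.
        feedback = self.proj(image_token_states)
        return vis_inputs + torch.tanh(self.gate) * feedback


# Hypothetical two-pass loop (vision_encoder and llm are placeholders):
# vis_tokens  = vision_encoder(vis_inputs)        # pass 1: encode the image
# img_states  = llm(text_tokens, vis_tokens)      # pass 1: language over image tokens
# vis_inputs2 = feedback(img_states, vis_inputs)  # feed language context back
# vis_tokens2 = vision_encoder(vis_inputs2)       # pass 2: reinterpret the scene
# output      = llm(text_tokens, vis_tokens2)

The extra parameters live only in the small MLP and gate, which is consistent with the abstract's claim of under 3% parameter overhead for a 7B model.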

Metadata

arXiv ID: 2603.03942
Provider: ARXIV
Primary Category: cs.RO
Comment: ICRA26
Published: 2026-03-04
Fetched: 2026-03-05 06:06
