March 09, 2026

SAIL: Test-Time Scaling for In-Context Imitation Learning with VLM

Authors

Makoto Sato, Yusuke Iwasawa, Yujin Tang, So Kuroki

Abstract

In-context imitation learning allows robots to acquire skills from demonstrations, yet one-shot trajectory generation remains fragile under environmental variation. We propose SAIL, a framework that reframes robot imitation as an iterative refinement problem capable of scaling with test-time compute. SAIL uses Monte Carlo Tree Search, where each node is a complete trajectory and each edge corresponds to a trajectory refinement. The search is guided by three core components: an automated archive of successful trajectories for contextually relevant retrieval, a vision-language-model-based scoring mechanism for trajectory evaluation, and a step-level feedback mechanism that provides trajectory-aligned scores for iterative refinement. Experiments across six diverse manipulation tasks in simulation, together with real-world validation, demonstrate that increasing test-time compute consistently improves success rates, achieving up to 95% on complex tasks. Our results suggest that trajectory-level test-time scaling is a robust path toward more generalizable robotic agents.
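The search loop the abstract describes — nodes holding whole trajectories, edges as refinements, a score guiding selection — can be sketched as a minimal MCTS in Python. This is an illustrative toy, not the paper's implementation: `vlm_score` is a hypothetical stand-in for the VLM-based scorer (here just distance of the final waypoint to a goal), and `refine` stands in for the step-level, feedback-driven refinement with a random perturbation.

```python
import math
import random

def vlm_score(trajectory, goal):
    # Hypothetical stand-in for the paper's VLM-based scorer:
    # rate a trajectory by how close its final waypoint lands to the goal.
    x, y = trajectory[-1]
    gx, gy = goal
    return -math.hypot(x - gx, y - gy)

def refine(trajectory, rng, step=0.5):
    # One "edge" of the search tree: perturb each waypoint slightly
    # to produce a refined candidate trajectory.
    return [(x + rng.uniform(-step, step), y + rng.uniform(-step, step))
            for x, y in trajectory]

class Node:
    def __init__(self, trajectory, parent=None):
        self.trajectory = trajectory
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

    def ucb(self, c=1.4):
        # Upper-confidence bound for child selection; unvisited nodes first.
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts_refine(initial_trajectory, goal, budget=200, seed=0):
    rng = random.Random(seed)
    root = Node(initial_trajectory)
    best = (vlm_score(initial_trajectory, goal), initial_trajectory)
    for _ in range(budget):
        # Selection: descend by UCB until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=Node.ucb)
        # Expansion: add one refined trajectory as a child node.
        child = Node(refine(node.trajectory, rng), parent=node)
        node.children.append(child)
        # Evaluation: score the new complete trajectory.
        score = vlm_score(child.trajectory, goal)
        if score > best[0]:
            best = (score, child.trajectory)
        # Backpropagation: update statistics along the path to the root.
        while child is not None:
            child.visits += 1
            child.value += score
            child = child.parent
    return best

best_score, best_traj = mcts_refine([(0.0, 0.0), (1.0, 1.0)], goal=(3.0, 3.0))
```

Increasing `budget` is the test-time-scaling knob: more simulations mean more refinement edges explored, which is the mechanism the paper credits for the monotone improvement in success rate.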

Metadata

arXiv ID: 2603.08269
Provider: ARXIV
Primary Category: cs.RO
Published: 2026-03-09
Fetched: 2026-03-10 05:43
