How Fast Can I Run My VLA? Demystifying VLA Inference Performance with VLA-Perf

Authors

Wenqi Jiang, Jason Clemons, Karu Sankaralingam, Christos Kozyrakis

Abstract

Vision-Language-Action (VLA) models have recently demonstrated impressive capabilities across various embodied AI tasks. While deploying VLA models on real-world robots imposes strict real-time inference constraints, the inference performance landscape of VLA remains poorly understood due to the large combinatorial space of model architectures and inference systems. In this paper, we ask a fundamental research question: How should we design future VLA models and systems to support real-time inference? To address this question, we first introduce VLA-Perf, an analytical performance model that can analyze inference performance for arbitrary combinations of VLA models and inference systems. Using VLA-Perf, we conduct the first systematic study of the VLA inference performance landscape. From a model-design perspective, we examine how inference performance is affected by model scaling, model architectural choices, long-context video inputs, asynchronous inference, and dual-system model pipelines. From the deployment perspective, we analyze where VLA inference should be executed -- on-device, on edge servers, or in the cloud -- and how hardware capability and network performance jointly determine end-to-end latency. By distilling 15 key takeaways from our comprehensive evaluation, we hope this work can provide practical guidance for the design of future VLA models and inference systems.
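
The abstract describes VLA-Perf as an analytical performance model that combines hardware capability and network performance to estimate end-to-end latency across on-device, edge, and cloud deployments. The paper does not reproduce its formulation here, but as a rough illustration of how an analytical model of this kind can work, the sketch below estimates per-step latency with a roofline-style bound and adds network transfer for remote execution. This is not the authors' VLA-Perf: every name and number (Hardware, Workload, end_to_end_latency, the GPU and model figures) is a hypothetical assumption.

# Minimal sketch of a roofline-style analytical latency model, in the
# spirit of VLA-Perf but NOT the authors' implementation. All names and
# numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Hardware:
    peak_flops: float      # peak compute throughput, FLOP/s
    mem_bandwidth: float   # memory bandwidth, bytes/s

@dataclass
class Workload:
    flops: float           # total FLOPs for one inference step
    bytes_moved: float     # weights + activations read/written, bytes

def step_latency(hw: Hardware, wl: Workload) -> float:
    """Roofline bound: a step is limited by compute or by memory
    traffic, whichever is slower on this hardware."""
    return max(wl.flops / hw.peak_flops, wl.bytes_moved / hw.mem_bandwidth)

def end_to_end_latency(hw: Hardware, wl: Workload,
                       net_rtt_s: float = 0.0,
                       payload_bytes: float = 0.0,
                       net_bandwidth: float = float("inf")) -> float:
    """On-device inference leaves the network terms at zero; edge or
    cloud deployment adds round-trip time plus observation/action
    transfer before the compute/memory-bound step itself."""
    return net_rtt_s + payload_bytes / net_bandwidth + step_latency(hw, wl)

# Example with made-up numbers: a ~3B-parameter VLA step on an edge GPU.
edge_gpu = Hardware(peak_flops=100e12, mem_bandwidth=1e12)
vla_step = Workload(flops=6e12, bytes_moved=6e9)
print(f"on-device: {end_to_end_latency(edge_gpu, vla_step) * 1e3:.1f} ms")
print(f"cloud:     {end_to_end_latency(edge_gpu, vla_step, net_rtt_s=0.05, payload_bytes=1e6, net_bandwidth=100e6 / 8) * 1e3:.1f} ms")

Under these assumed numbers the step is compute-bound at 60 ms on-device, while the cloud path pays a further 130 ms in round-trip time and image upload, which illustrates the abstract's point that hardware capability and network performance jointly determine where real-time VLA inference is feasible.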

Metadata

arXiv ID: 2602.18397
Provider: ARXIV
Primary Category: cs.RO
Published: 2026-02-20
Fetched: 2026-02-23 05:33
