
Cornserve: A Distributed Serving System for Any-to-Any Multimodal Models

Authors

Jae-Won Chung, Jeff J. Ma, Jisang Ahn, Yizhuo Liang, Akshay Jajoo, Myungjin Lee, Mosharaf Chowdhury

Abstract

Any-to-Any models are an emerging class of multimodal models that accept combinations of multimodal data (e.g., text, image, video, audio) as input and generate them as output. Serving these models is challenging: different requests with different input and output modalities traverse different paths through the model computation graph, and each component of the model has different scaling characteristics. We present Cornserve, a distributed serving system for generic Any-to-Any models. Cornserve provides a flexible task abstraction for expressing Any-to-Any model computation graphs, enabling component disaggregation and independent scaling. The distributed runtime dispatches compute to the data plane via an efficient record-and-replay execution model that keeps track of data dependencies, and forwards tensor data between components directly from the producer to the consumer. Built on Kubernetes with approximately 23K new lines of Python, Cornserve supports diverse Any-to-Any models and delivers up to 3.81$\times$ higher throughput and 5.79$\times$ lower tail latency. Cornserve is open-source, and the demo video is available on YouTube.
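To make the record-and-replay idea concrete, here is a purely illustrative toy sketch: a "program" describing one request's path through the component graph is run once with symbolic references to record each component call and its data dependencies; the trace is then replayed, with each step consuming its producers' outputs directly. All names here (`image_encoder`, `llm`, the `record`/`replay` helpers) are assumptions for illustration, not Cornserve's actual API.

```python
def record(program, num_inputs):
    """Run `program` once with symbolic refs, capturing each component
    call and its data dependencies as a replayable trace."""
    trace = []
    counter = num_inputs  # refs 0..num_inputs-1 denote the request's inputs

    def call(component, *arg_refs):
        nonlocal counter
        out_ref = counter
        counter += 1
        trace.append((component, arg_refs, out_ref))
        return out_ref

    output_refs = program(call, list(range(num_inputs)))
    return trace, output_refs


def replay(trace, output_refs, components, inputs):
    """Execute a recorded trace; each step reads its producers' outputs."""
    vals = dict(enumerate(inputs))
    for component, arg_refs, out_ref in trace:
        vals[out_ref] = components[component](*(vals[r] for r in arg_refs))
    return [vals[r] for r in output_refs]


# A toy image+text -> text request path: encode the image, then run the LLM.
def image_text_program(call, refs):
    img_ref, txt_ref = refs
    emb_ref = call("image_encoder", img_ref)
    ans_ref = call("llm", emb_ref, txt_ref)
    return [ans_ref]


# Stand-in components; in a real system these would be separately scaled services.
components = {
    "image_encoder": lambda img: f"emb({img})",
    "llm": lambda emb, txt: f"llm({emb},{txt})",
}

trace, output_refs = record(image_text_program, num_inputs=2)
result = replay(trace, output_refs, components, ["IMG", "what is this?"])
print(result)  # ['llm(emb(IMG),what is this?)']
```

Because the trace names the producer of every intermediate value, a runtime built this way knows exactly which component's output to forward to which consumer, without routing tensors through a central coordinator; a text-only request would simply record a shorter trace that skips the encoder.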

Metadata

arXiv ID: 2603.12118
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-12
Fetched: 2026-03-14 05:03

Links

Code: https://github.com/cornserve-ai/cornserve
Demo video: https://www.youtube.com/watch?v=nb8R-vztLRg