Paper
Speed by Simplicity: A Single-Stream Architecture for Fast Audio-Video Generative Foundation Model
Authors
SII-GAIR, Sand.ai: Ethan Chern, Hansi Teng, Hanwen Sun, Hao Wang, Hong Pan, Hongyu Jia, Jiadi Su, Jin Li, Junjie Yu, Lijie Liu, Lingzhi Li, Lyumanshan Ye, Min Hu, Qiangang Wang, Quanwei Qi, Steffi Chern, Tao Bu, Taoran Wang, Teren Xu, Tianning Zhang, Tiantian Mi, Weixian Xu, Wenqiang Zhang, Wentai Zhang, Xianping Yi, Xiaojie Cai, Xiaoyang Kang, Yan Ma, Yixiu Liu, Yunbo Zhang, Yunpeng Huang, Yutong Lin, Zewei Tao, Zhaoliang Liu, Zheng Zhang, Zhiyao Cen, Zhixuan Yu, Zhongshu Wang, Zhulin Hu, Zijin Zhou, Zinan Guo, Yue Cao, Pengfei Liu
Abstract
We present daVinci-MagiHuman, an open-source audio-video generative foundation model for human-centric generation. daVinci-MagiHuman jointly generates synchronized video and audio using a single-stream Transformer that processes text, video, and audio within a unified token sequence via self-attention only. This single-stream design avoids the complexity of multi-stream or cross-attention architectures while remaining easy to optimize with standard training and inference infrastructure. The model is particularly strong in human-centric scenarios, producing expressive facial performance, natural speech-expression coordination, realistic body motion, and precise audio-video synchronization. It supports multilingual speech generation across Chinese (Mandarin and Cantonese), English, Japanese, Korean, German, and French. For efficient inference, we combine the single-stream backbone with model distillation, latent-space super-resolution, and a Turbo VAE decoder, enabling generation of a 5-second 256p video in 2 seconds on a single H100 GPU. In automatic evaluation, daVinci-MagiHuman achieves the highest visual quality and text alignment among leading open models, along with the lowest word error rate (14.60%) for speech intelligibility. In pairwise human evaluation, it achieves win rates of 80.0% against Ovi 1.1 and 60.9% against LTX 2.3 over 2000 comparisons. We open-source the complete model stack, including the base model, the distilled model, the super-resolution model, and the inference codebase.
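To make the single-stream design concrete, below is a minimal PyTorch sketch of the idea as the abstract describes it: text, video, and audio tokens are embedded into one shared sequence and processed by ordinary self-attention blocks, with no cross-attention and no per-modality streams. All class names, dimensions, and the modality-embedding scheme are illustrative assumptions, not the released daVinci-MagiHuman code.

# Sketch of a single-stream backbone: one token sequence, self-attention only.
# Names and sizes are hypothetical; this is not the paper's implementation.
import torch
import torch.nn as nn


class SingleStreamBlock(nn.Module):
    """One standard pre-norm Transformer block (self-attention + MLP)."""

    def __init__(self, dim: int, heads: int) -> None:
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))


class SingleStreamBackbone(nn.Module):
    """Concatenate modality tokens into one sequence, run shared blocks."""

    def __init__(self, dim: int = 512, heads: int = 8, depth: int = 4) -> None:
        super().__init__()
        # Learned embeddings that tag each token with its modality.
        self.modality_emb = nn.Embedding(3, dim)  # 0=text, 1=video, 2=audio
        self.blocks = nn.ModuleList(
            SingleStreamBlock(dim, heads) for _ in range(depth)
        )

    def forward(self, text, video, audio):
        parts, lengths = [], []
        for idx, tokens in enumerate((text, video, audio)):
            parts.append(tokens + self.modality_emb.weight[idx])
            lengths.append(tokens.shape[1])
        x = torch.cat(parts, dim=1)  # one unified token sequence
        for block in self.blocks:
            x = block(x)  # every token attends to every other token
        # Split back into per-modality outputs for downstream heads.
        return x.split(lengths, dim=1)


if __name__ == "__main__":
    text = torch.randn(1, 16, 512)    # e.g. prompt tokens
    video = torch.randn(1, 120, 512)  # e.g. latent video patches
    audio = torch.randn(1, 60, 512)   # e.g. latent audio frames
    t, v, a = SingleStreamBackbone()(text, video, audio)
    print(t.shape, v.shape, a.shape)  # (1,16,512) (1,120,512) (1,60,512)

Running the sketch returns the three per-modality outputs with their shapes unchanged, while a single attention stack has processed all 196 tokens jointly; that uniformity is what lets standard Transformer training and inference infrastructure be reused, as the abstract claims.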
Metadata
arXiv: 2603.21986v1 (primary category: cs.CV)
Published: 2026-03-23
Abstract page: https://arxiv.org/abs/2603.21986v1
PDF: https://arxiv.org/pdf/2603.21986v1