MASQuant: Modality-Aware Smoothing Quantization for Multimodal Large Language Models

Authors

Lulu Hu, Wenhu Xiao, Xin Chen, Xinhua Xu, Bowen Xu, Kun Li, Yongliang Tao

Abstract

Post-training quantization (PTQ) with computational invariance for Large Language Models (LLMs) has demonstrated remarkable advances; however, its application to Multimodal Large Language Models (MLLMs) presents substantial challenges. In this paper, we analyze SmoothQuant as a case study and identify two critical issues: Smoothing Misalignment and Cross-Modal Computational Invariance. To address these issues, we propose Modality-Aware Smoothing Quantization (MASQuant), a novel framework that introduces (1) Modality-Aware Smoothing (MAS), which learns separate, modality-specific smoothing factors to prevent Smoothing Misalignment, and (2) Cross-Modal Compensation (CMC), which addresses Cross-Modal Computational Invariance by using SVD whitening to transform multi-modal activation differences into low-rank forms, enabling unified quantization across modalities. MASQuant demonstrates stable quantization performance across both dual-modal and tri-modal MLLMs. Experimental results show that MASQuant is competitive with state-of-the-art PTQ algorithms. Source code: https://github.com/alibaba/EfficientAI.
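As a rough illustration of the two ingredients the abstract names, the sketch below applies a SmoothQuant-style smoothing factor separately per modality and truncates a cross-modal activation difference to low rank via SVD. This is not the paper's method: the function names are our own, MASQuant *learns* its smoothing factors rather than deriving them from activation statistics as done here, and its CMC uses SVD whitening rather than the plain truncated SVD shown. The sketch only demonstrates the two underlying identities: smoothing preserves the layer output, and the difference term can be compressed to low rank.

```python
import numpy as np

def modality_aware_smooth(acts_by_modality, W, alpha=0.5):
    """SmoothQuant-style smoothing with a separate factor per modality.

    acts_by_modality: dict of name -> activations X, shape (tokens, d_in)
    W: weight matrix, shape (d_in, d_out)
    """
    w_max = np.abs(W).max(axis=1)  # per-input-channel weight range, shape (d_in,)
    smoothed = {}
    for name, X in acts_by_modality.items():
        x_max = np.abs(X).max(axis=0)  # per-channel activation range for this modality
        s = np.maximum(x_max, 1e-8) ** alpha / np.maximum(w_max, 1e-8) ** (1 - alpha)
        # Computational invariance: (X / s) @ (s * W) == X @ W,
        # so the layer output is unchanged while activation outliers shrink.
        smoothed[name] = (X / s, W * s[:, None])
    return smoothed

def low_rank_compensation(delta, rank=4):
    """Keep only the top-`rank` SVD components of a cross-modal activation difference."""
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vt[:rank]
```

Because each modality gets its own factor, the text and image streams are smoothed against their own outlier channels instead of sharing one compromise factor, which is the misalignment the abstract's MAS component targets.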

Metadata

arXiv ID: 2603.04800
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-05
Comment: Accepted to CVPR 2026
Fetched: 2026-03-07 04:35
