
Hexagon-MLIR: An AI Compilation Stack For Qualcomm's Neural Processing Units (NPUs)

Authors

Mohammed Javed Absar, Muthu Baskaran, Abhikrant Sharma, Abhilash Bhandari, Ankit Aggarwal, Arun Rangasamy, Dibyendu Das, Fateme Hosseini, Franck Slama, Iulian Brumar, Jyotsna Verma, Krishnaprasad Bindumadhavan, Mitesh Kothari, Mohit Gupta, Ravishankar Kolachana, Richard Lethin, Samarth Narang, Sanjay Motilal Ladwa, Shalini Jain, Snigdha Suresh Dalvi, Tasmia Rahman, Venkat Rasagna Reddy Komatireddy, Vivek Vasudevbhai Pandya, Xiyue Shi, Zachary Zipper

Abstract

In this paper, we present Hexagon-MLIR, an open-source compilation stack that targets the Qualcomm Hexagon Neural Processing Unit (NPU) and provides unified support for lowering Triton kernels and PyTorch models. Built on the MLIR framework, our compiler applies a structured sequence of passes that exploit NPU architectural features to accelerate AI workloads. It enables faster deployment of new Triton kernels (hand-written or subgraphs from PyTorch 2.0) for our target by providing automated compilation from kernel to binary. By ingesting Triton kernels, we generate mega-kernels that maximize data locality in the NPU's Tightly Coupled Memory (TCM), reducing the bandwidth bottlenecks inherent in library-based approaches. This initiative complements our commercial toolchains by giving developers an open-source MLIR-based compilation stack and a more flexible path to advancing AI compilation capabilities. Hexagon-MLIR is a work in progress, and we continue to add further optimizations and capabilities.
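The mega-kernel idea the abstract describes can be sketched in a few lines of plain Python. This is a hypothetical illustration, not Hexagon-MLIR code: it contrasts a library-style sequence of ops, where each call materializes an intermediate buffer (conceptually, a round trip through DRAM), with a fused "mega-kernel" that applies all ops while each value is still hot in fast local memory (the role the TCM plays on the NPU). The function names and the two example ops are invented for illustration.

```python
def unfused(x):
    # Library-style execution: each op writes a full intermediate
    # buffer before the next op reads it back (extra memory traffic).
    t = [v * 2.0 for v in x]        # op 1: scale
    return [v + 1.0 for v in t]     # op 2: bias

def fused(x):
    # Mega-kernel style: both ops applied per element in one pass,
    # so no intermediate buffer ever leaves fast local memory.
    return [v * 2.0 + 1.0 for v in x]

data = [0.0, 1.0, 2.0]
assert unfused(data) == fused(data)  # same result, less bandwidth used
```

Both paths compute the same values; the payoff of fusion is purely in memory traffic, which is exactly the bandwidth bottleneck the abstract attributes to library-based approaches.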

Metadata

arXiv ID: 2602.19762
Provider: ARXIV
Primary Category: cs.PL
Published: 2026-02-23
Fetched: 2026-02-24 04:38
