
Omnilingual SONAR: Cross-Lingual and Cross-Modal Sentence Embeddings Bridging Massively Multilingual Text and Speech

Authors

Omnilingual SONAR Team, João Maria Janeiro, Pere-Lluís Huguet Cabot, Ioannis Tsiamas, Yen Meng, Vivek Iyer, Guillem Ramírez, Loic Barrault, Belen Alastruey, Yu-An Chung, Marta R. Costa-Jussa, David Dale, Kevin Heffernan, Jaehyeong Jo, Artyom Kozhevnikov, Alexandre Mourachko, Christophe Ropers, Holger Schwenk, Paul-Ambroise Duquenne

Abstract

Cross-lingual sentence encoders typically cover only a few hundred languages and often trade downstream quality for stronger alignment, limiting their adoption. We introduce OmniSONAR, a new family of omnilingual, cross-lingual and cross-modal sentence embedding models that natively embed text, speech, code, and mathematical expressions in a single semantic space, while delivering state-of-the-art downstream performance at the scale of thousands of languages, from high-resource to extremely low-resource varieties. To reach this scale without representation collapse, we use progressive training. We first learn a strong foundational space for 200 languages with an LLM-initialized encoder-decoder, combining token-level decoding with a novel split-softmax contrastive loss and synthetic hard negatives. Building on this foundation, we expand to several thousand language varieties via a two-stage teacher-student encoder distillation framework. Finally, we demonstrate the cross-modal extensibility of this space by seamlessly mapping 177 spoken languages into it. OmniSONAR halves cross-lingual similarity search error on the 200-language FLORES dataset and reduces error by a factor of 15 on the 1,560-language BIBLE benchmark. It also enables strong translation, outperforming NLLB-3B on multilingual benchmarks and exceeding prior models (including much larger LLMs) by 15 chrF++ points on 1,560-language into-English BIBLE translation. OmniSONAR also performs strongly on MTEB and XLCoST. For speech, OmniSONAR achieves a 43% lower similarity-search error and reaches 97% of SeamlessM4T speech-to-text quality, despite being zero-shot for translation (trained only on ASR data). Finally, by training an encoder-decoder LM, Spectrum, exclusively on English text, processing OmniSONAR embedding sequences, we unlock high-performance transfer to thousands of languages and speech for complex downstream tasks.
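The abstract mentions a contrastive loss with synthetic hard negatives but does not spell out the split-softmax formulation. As a rough illustration of the general idea, the sketch below implements a standard InfoNCE-style contrastive loss over in-batch positives plus per-anchor synthetic hard negatives; the function name, shapes, and temperature are assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def contrastive_loss_with_hard_negatives(anchors, positives, hard_negs,
                                         temperature=0.05):
    """Illustrative InfoNCE-style contrastive loss (NOT the paper's
    split-softmax loss, which the abstract does not specify).

    anchors:   (B, D) source-language sentence embeddings
    positives: (B, D) aligned target-language embeddings (row i matches row i)
    hard_negs: (B, K, D) synthetic hard negatives for each anchor
    """
    def l2_normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    a = l2_normalize(anchors)
    p = l2_normalize(positives)
    n = l2_normalize(hard_negs)
    batch = a.shape[0]

    # In-batch similarities: diagonal entries are the true translation pairs,
    # off-diagonal entries act as in-batch negatives.
    sim_batch = (a @ p.T) / temperature                       # (B, B)
    # Similarities between each anchor and its own synthetic hard negatives.
    sim_hard = np.einsum('bd,bkd->bk', a, n) / temperature    # (B, K)

    logits = np.concatenate([sim_batch, sim_hard], axis=1)    # (B, B + K)
    # Numerically stable log-softmax; the positive for row i sits at column i.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(batch), np.arange(batch)].mean()
```

Appending hard negatives to the softmax denominator is the standard way such negatives sharpen alignment: the loss only falls when the anchor is closer to its true translation than to both in-batch distractors and the synthetic near-misses.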

Metadata

arXiv ID: 2603.16606
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-17
Fetched: 2026-03-18 06:02
