Building a Strong Instruction Language Model for a Less-Resourced Language

Authors

Domen Vreš, Tjaša Arčon, Timotej Petrič, Dario Vajda, Marko Robnik-Šikonja, Iztok Lebar Bajec

Abstract

Large language models (LLMs) have become an essential tool for natural language processing and for artificial intelligence in general. Current open-source models are primarily trained on English texts, resulting in poorer performance on less-resourced languages and cultures. We present a set of methodological approaches necessary for the successful adaptation of an LLM to a less-resourced language and demonstrate them on Slovene. We present GaMS3-12B, a generative model for Slovene with 12 billion parameters, and show that it is the best-performing open-source model for Slovene in its parameter range. We adapted the model to Slovene using three-stage continual pre-training of the Gemma 3 model, followed by two-stage supervised fine-tuning (SFT). We trained the model on a combination of 140B Slovene, English, Bosnian, Serbian, and Croatian pre-training tokens and over 200 thousand English and Slovene SFT examples. We evaluate GaMS3-12B on the Slovenian-LLM-Eval datasets, English-to-Slovene translation, and the Slovene LLM arena. We show that the model outperforms the 12B Gemma 3 across all three scenarios and performs comparably to the much larger commercial GPT-4o in the Slovene LLM arena, achieving a win rate of over 60%.
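The abstract reports a win rate of over 60% in the Slovene LLM arena. As an illustration of how such a figure is typically computed from pairwise arena battles, here is a minimal sketch; the battle records and model names below are made-up example data, not the paper's actual arena logs, and the tie-counts-as-half convention is one common choice, not necessarily the one used by the authors.

```python
# Illustrative sketch: computing a model's win rate from pairwise
# arena-style comparisons. Each battle records two contestants and
# a winner ("tie" when neither side is preferred).

def win_rate(battles, model):
    """Fraction of this model's battles it wins; a tie counts as half a win."""
    relevant = [b for b in battles if model in (b["model_a"], b["model_b"])]
    if not relevant:
        return 0.0
    score = 0.0
    for b in relevant:
        if b["winner"] == model:
            score += 1.0
        elif b["winner"] == "tie":
            score += 0.5
    return score / len(relevant)

# Hypothetical example data (not from the paper):
battles = [
    {"model_a": "model-x", "model_b": "baseline", "winner": "model-x"},
    {"model_a": "baseline", "model_b": "model-x", "winner": "model-x"},
    {"model_a": "model-x", "model_b": "baseline", "winner": "tie"},
    {"model_a": "model-x", "model_b": "baseline", "winner": "baseline"},
]

print(win_rate(battles, "model-x"))  # (2 + 0.5) / 4 = 0.625
```

Real arena leaderboards usually go further and fit a rating model (e.g. Bradley-Terry or Elo) over the pairwise outcomes, but the raw head-to-head win rate above is the quantity the abstract quotes.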

Metadata

arXiv ID: 2603.01691
Provider: arXiv
Categories: cs.CL (primary), cs.LG
Published: 2026-03-02
Comment: Currently under review at Natural Language Processing Special Issue on Language Models for Low-Resource Languages
Links: https://arxiv.org/abs/2603.01691v1 (abstract), https://arxiv.org/pdf/2603.01691v1 (PDF)
Fetched: 2026-03-03 04:34