
Ensembling Language Models with Sequential Monte Carlo

Authors

Robin Shing Moon Chan, Tianyu Liu, Samuel Kiegeland, Clemente Pasti, Jacob Hoover Vigly, Timothy J. O'Donnell, Ryan Cotterell, Tim Vieira

Abstract

Practitioners have access to an abundance of language models and prompting strategies for solving many language modeling tasks; yet prior work shows that modeling performance is highly sensitive to both choices. Classical machine learning ensembling techniques offer a principled approach: aggregate predictions from multiple sources to achieve better performance than any single one. However, applying ensembling to language models during decoding is challenging: naively aggregating next-token probabilities yields samples from a locally normalized, biased approximation of the generally intractable ensemble distribution over strings. In this work, we introduce a unified framework for composing $K$ language models into $f$-ensemble distributions for a wide range of functions $f\colon\mathbb{R}_{\geq 0}^{K}\to\mathbb{R}_{\geq 0}$. To sample from these distributions, we propose a byte-level sequential Monte Carlo (SMC) algorithm that operates in a shared character space, enabling ensembles of models with mismatching vocabularies and consistent sampling in the limit. We evaluate a family of $f$-ensembles across prompt and model combinations for various structured text generation tasks, highlighting the benefits of alternative aggregation strategies over traditional probability averaging, and showing that better posterior approximations can yield better ensemble performance.
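The abstract's core idea can be illustrated with a toy sketch. Below is a minimal, self-contained sequential Monte Carlo sampler for one member of the $f$-ensemble family, using $f$ = product (a product of experts) over two hypothetical fixed next-token distributions on a three-symbol vocabulary. All model names and distributions here are invented for illustration; the paper's actual algorithm is byte-level, handles mismatched vocabularies, and supports general $f$. The key mechanics shown are the ones the abstract describes: each step proposes a token from the locally normalized $f$-aggregation, the local normalizer is folded into the particle's importance weight (correcting the local-normalization bias toward the global ensemble target), and particles are resampled to curb weight degeneracy.

```python
import math
import random

VOCAB = ["a", "b", "</s>"]

def model1(prefix):
    # hypothetical toy "language model": fixed next-token distribution
    return {"a": 0.6, "b": 0.3, "</s>": 0.1}

def model2(prefix):
    return {"a": 0.2, "b": 0.6, "</s>": 0.2}

def f_product(probs):
    # f: R_{>=0}^K -> R_{>=0}; here the unnormalized product of expert probs
    out = 1.0
    for p in probs:
        out *= p
    return out

def smc_ensemble(models, f, n_particles=100, max_len=8, seed=0):
    rng = random.Random(seed)
    particles = [("", 0.0) for _ in range(n_particles)]  # (string, log-weight)
    for _ in range(max_len):
        proposals = []
        for s, logw in particles:
            if s.endswith("</s>"):
                proposals.append((s, logw))
                continue
            dists = [m(s) for m in models]
            scores = {t: f([d[t] for d in dists]) for t in VOCAB}
            z = sum(scores.values())  # local normalizer of the f-aggregation
            # propose the next token from the locally normalized distribution
            t = rng.choices(VOCAB, weights=[scores[v] for v in VOCAB])[0]
            # incremental importance weight: log of the local normalizer,
            # which corrects the proposal toward the global ensemble target
            proposals.append((s + t, logw + math.log(z)))
        # multinomial resampling to curb weight degeneracy
        ws = [math.exp(lw) for _, lw in proposals]
        mean_lw = math.log(sum(ws) / len(ws))
        idx = rng.choices(range(len(proposals)), weights=ws, k=n_particles)
        particles = [(proposals[i][0], mean_lw) for i in idx]
    return particles
```

For $f$ = product the string-level ensemble factorizes step by step, so this toy case is exactly the setting where local and global normalization coincide; for non-multiplicative choices of $f$ they diverge, which is where the weight correction and resampling do real work.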

Metadata

arXiv ID: 2603.05432
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-05
Fetched: 2026-03-06 14:20
