Bias in Universal Machine-Learned Interatomic Potentials and its Effects on Fine-Tuning

Authors

Nicolas Wong, Julia H. Yang

Abstract

Universal machine-learned interatomic potentials (uMLIPs) are a growing area of interest due to their transferability across the periodic table, displaying an error of about 0.6 kcal/mol against the Matbench Discovery test set. However, we show that achieving more accurate predictions on out-of-domain tasks requires fine-tuning. Additionally, we investigate the existence and influence of model biases in molecular dynamics (MD) by examining two approaches for data generation: from multiple MD trajectories run in parallel, which we call naive fine-tuning, and from a single MD trajectory with fine-tuning after set intervals, which we call periodic fine-tuning. We find that naive fine-tuning generates constrained datasets that fail to represent MD simulations, and thus the downstream fine-tuned models fail during extrapolation. In contrast, periodic fine-tuning yields models that are more generalizable and accurate, producing low-error dynamics. These findings indicate the role of uMLIP bias in fine-tuning and highlight the need for multiple fine-tuning steps. Lastly, we relate unphysical behavior to principal-component space and quantify extrapolations through Q-residual analysis, which serves as a proxy for epistemic uncertainty in larger simulations.
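
The Q-residual analysis mentioned in the abstract measures how far a new configuration falls outside the principal-component subspace fitted to the training data. Below is a minimal sketch of that idea, assuming descriptor vectors for the fine-tuning set and for MD frames are available as NumPy arrays; the descriptor dimensionality, number of retained components, and cutoff quantile are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: Q-residual (squared PCA reconstruction error) as a proxy for
# extrapolation / epistemic uncertainty. All array shapes and thresholds here
# are placeholders for illustration only.
import numpy as np
from sklearn.decomposition import PCA

def fit_pca(train_features: np.ndarray, n_components: int = 10) -> PCA:
    """Fit PCA on descriptors of the in-domain (fine-tuning) dataset."""
    pca = PCA(n_components=n_components)
    pca.fit(train_features)
    return pca

def q_residuals(pca: PCA, features: np.ndarray) -> np.ndarray:
    """Squared reconstruction error outside the retained principal-component
    subspace; large values flag configurations the model is extrapolating on."""
    projected = pca.transform(features)            # coordinates in PC space
    reconstructed = pca.inverse_transform(projected)
    return np.sum((features - reconstructed) ** 2, axis=1)

# Illustrative usage with random stand-in descriptors.
rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(500, 64))   # in-domain descriptors
md_descriptors = rng.normal(size=(100, 64))      # descriptors along an MD run

pca = fit_pca(train_descriptors)
q = q_residuals(pca, md_descriptors)
cutoff = np.quantile(q_residuals(pca, train_descriptors), 0.99)  # empirical cutoff
print("frames flagged as extrapolative:", int(np.sum(q > cutoff)))
```

In a periodic fine-tuning workflow of the kind the abstract describes, such a score could in principle signal when a running trajectory has drifted far enough from the fitted subspace that another labelling and fine-tuning step is warranted; the paper itself does not prescribe this specific trigger.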

Metadata

arXiv ID: 2603.10159
Provider: ARXIV
Primary Category: cond-mat.mtrl-sci
Published: 2026-03-10
Fetched: 2026-03-12 04:21
