
Invariant Transformation and Resampling based Epistemic-Uncertainty Reduction

Authors

Sha Hu

Abstract

An artificial intelligence (AI) model can be viewed as a function that maps inputs to outputs in high-dimensional spaces. Once designed and well trained, the AI model is applied for inference. However, even optimized AI models can produce inference errors due to aleatoric and epistemic uncertainties. Interestingly, we observed that when inferring multiple samples based on invariant transformations of an input, the inference errors can show partial independence due to epistemic uncertainty. Leveraging this insight, we propose a "resampling"-based inference scheme that applies a trained AI model to multiple transformed versions of an input and aggregates the inference outputs into a more accurate result. This approach has the potential to improve inference accuracy and offers a strategy for balancing model size and performance.
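The resampling idea described in the abstract can be sketched as a test-time-augmentation-style ensemble: run the trained model once per invariant transformation of the input, then aggregate the outputs. The minimal sketch below is illustrative only; the toy linear "model", the specific flip/rotation transforms, and mean aggregation are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

def model(x):
    """Stand-in for a trained classifier returning class logits.

    A fixed linear map plays the role of a trained network here
    (hypothetical; the paper applies to any trained AI model)."""
    W = np.linspace(-1.0, 1.0, x.size * 3).reshape(3, x.size)
    return W @ x.ravel()

def invariant_transforms(x):
    """Label-preserving transformations of an image-like input:
    identity, horizontal flip, vertical flip, 180-degree rotation."""
    return [x, np.fliplr(x), np.flipud(x), np.rot90(x, 2)]

def resampled_inference(x, model, aggregate=np.mean):
    """Infer each transformed copy and aggregate the outputs.

    If errors across transformed copies are partially independent,
    averaging should reduce the epistemic component of the error."""
    outputs = np.stack([model(t) for t in invariant_transforms(x)])
    return aggregate(outputs, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))
single = model(x)                      # one forward pass
ensemble = resampled_inference(x, model)  # aggregated over 4 transforms
print(single.shape, ensemble.shape)    # both are logit vectors of shape (3,)
```

Note that this trades extra inference-time compute (one forward pass per transformation) for accuracy, which is the model-size/performance balance the abstract alludes to.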

Metadata

arXiv ID: 2602.23315
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-02-26
Fetched: 2026-02-27 04:35
