
From Experiments to Expertise: Scientific Knowledge Consolidation for AI-Driven Computational Research

Authors

Haonan Huang

Abstract

While large language models (LLMs) have transformed AI agents into proficient executors of computational materials science, performing a hundred simulations does not make a researcher. What distinguishes research from routine execution is the progressive accumulation of knowledge -- learning which approaches fail, recognizing patterns across systems, and applying understanding to new problems. However, the prevailing paradigm in AI-driven computational science treats each execution in isolation, largely discarding hard-won insights between runs. Here we present QMatSuite, an open-source platform closing this gap. Agents record findings with full provenance, retrieve knowledge before new calculations, and in dedicated reflection sessions correct erroneous findings and synthesize observations into cross-compound patterns. In benchmarks on a six-step quantum-mechanical simulation workflow, accumulated knowledge reduces reasoning overhead by 67% and improves accuracy from 47% to 3% deviation from literature -- and when transferred to an unfamiliar material, achieves 1% deviation with zero pipeline failures.

Metadata

arXiv ID: 2603.13191
Provider: arXiv
Primary Category: physics.comp-ph
Categories: physics.comp-ph, cond-mat.mtrl-sci, cs.AI
PDF: https://arxiv.org/pdf/2603.13191v1
Published: 2026-03-13
Fetched: 2026-03-16 06:01
