AI · LLM · March 16, 2026

Multi-turn Physics-informed Vision-language Model for Physics-grounded Anomaly Detection

Authors

Yao Gu, Xiaohao Xu, Yingna Wu

Abstract

Vision-Language Models (VLMs) demonstrate strong general-purpose reasoning but remain limited in physics-grounded anomaly detection, where causal understanding of dynamics is essential. Existing VLMs, trained predominantly on appearance-centric correlations, fail to capture kinematic constraints, leading to poor performance on anomalies such as irregular rotations or violated mechanical motions. We introduce a physics-informed instruction tuning framework that explicitly encodes object properties, motion paradigms, and dynamic constraints into structured prompts. By delivering these physical priors through multi-turn dialogues, our method decomposes causal reasoning into incremental steps, enabling robust internal representations of normal and abnormal dynamics. Evaluated on the Phys-AD benchmark, our approach achieves 96.7% AUROC in video-level detection--substantially outperforming prior SOTA (66.9%)--and yields superior causal explanations (0.777 LLM score). This work highlights how structured physics priors can transform VLMs into reliable detectors of dynamic anomalies.
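The abstract describes delivering physical priors (object properties, motion paradigms, dynamic constraints) through multi-turn dialogues that decompose causal reasoning into incremental steps. A minimal sketch of what such a staged prompt sequence might look like — all field names, turn templates, and the `build_dialogue` helper are illustrative assumptions, not the authors' actual implementation:

```python
# Hypothetical sketch: composing a multi-turn, physics-informed dialogue
# that feeds physical priors to a VLM one incremental step at a time,
# ending with an anomaly verdict request. All templates are illustrative.

def build_dialogue(obj_properties: str, motion_paradigm: str, constraints: str) -> list[dict]:
    """Return a list of chat turns that stage physical priors before
    asking for a normal/anomalous verdict with a causal explanation."""
    return [
        {"role": "user", "content":
            f"Object properties: {obj_properties}. Describe the object in the video."},
        {"role": "user", "content":
            f"Expected motion paradigm: {motion_paradigm}. Summarize the observed motion."},
        {"role": "user", "content":
            f"Dynamic constraints: {constraints}. Does the observed motion violate any of them?"},
        {"role": "user", "content":
            "Based on the previous steps, is the video normal or anomalous? Explain the cause."},
    ]

turns = build_dialogue(
    obj_properties="rigid metal fan, three blades",
    motion_paradigm="uniform rotation about a fixed axis",
    constraints="constant angular velocity; no axis wobble",
)
for t in turns:
    print(t["content"])
```

The point of the decomposition is that each turn conditions the model on one physical prior at a time, so the final verdict turn can reference grounded intermediate answers rather than reasoning from raw pixels in a single shot.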

Metadata

arXiv ID: 2603.15237
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-16
Fetched: 2026-03-17 06:02

Comment: Accepted by IEEE ICASSP 2026