
EA-Swin: An Embedding-Agnostic Swin Transformer for AI-Generated Video Detection

Authors

Hung Mai, Loi Dinh, Duc Hai Nguyen, Dat Do, Luong Doan, Khanh Nguyen Quoc, Huan Vu, Phong Ho, Naeem Ul Islam, Tuan Do

Abstract

Recent advances in foundation video generators such as Sora2, Veo3, and other commercial systems have produced highly realistic synthetic videos, exposing the limitations of existing detection methods that rely on shallow embedding trajectories, image-based adaptation, or computationally heavy MLLMs. We propose EA-Swin, an Embedding-Agnostic Swin Transformer that models spatiotemporal dependencies directly on pretrained video embeddings via a factorized windowed attention design, making it compatible with generic ViT-style patch-based encoders. Alongside the model, we construct the EA-Video dataset, a benchmark dataset comprising 130K videos that integrates newly collected samples with curated existing datasets, covering diverse commercial and open-source generators and including unseen-generator splits for rigorous cross-distribution evaluation. Extensive experiments show that EA-Swin achieves 0.97-0.99 accuracy across major generators, outperforming prior SoTA methods (typically 0.8-0.9) by a margin of 5-20%, while maintaining strong generalization to unseen distributions, establishing a scalable and robust solution for modern AI-generated video detection.
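
The abstract outlines the core architectural idea: a lightweight Swin-style transformer that operates directly on patch embeddings from a pretrained, generic ViT-style video encoder, factorizing attention into spatial and temporal windows before a real-vs-generated classification head. Below is a minimal, illustrative sketch of such a design in PyTorch. It is not the authors' code: all module names, tensor shapes, window sizes, and the exact spatial/temporal factorization are assumptions made for illustration only.

    # Minimal sketch (not the authors' released code) of a factorized windowed
    # attention block over frozen video-encoder patch embeddings, plus a binary
    # real/generated head. Shapes, window sizes, and names are assumptions.
    import torch
    import torch.nn as nn


    class FactorizedWindowedAttention(nn.Module):
        """One block: windowed attention over patches per frame, then over frames per patch."""

        def __init__(self, dim=768, heads=8, spatial_window=7, temporal_window=4):
            super().__init__()
            self.sw, self.tw = spatial_window, temporal_window
            self.norm_s = nn.LayerNorm(dim)
            self.norm_t = nn.LayerNorm(dim)
            self.attn_s = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.attn_t = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x):
            # x: (B, T, P, D) -- batch, frames, patches per frame, embedding dim.
            # Assumes P % spatial_window == 0 and T % temporal_window == 0.
            B, T, P, D = x.shape

            # Spatial windowed attention: attend among neighbouring patches within a frame.
            s = self.norm_s(x).reshape(B * T * (P // self.sw), self.sw, D)
            s, _ = self.attn_s(s, s, s)
            x = x + s.reshape(B, T, P, D)

            # Temporal windowed attention: attend among nearby frames at each patch position.
            t = self.norm_t(x).permute(0, 2, 1, 3).reshape(B * P * (T // self.tw), self.tw, D)
            t, _ = self.attn_t(t, t, t)
            x = x + t.reshape(B, P, T, D).permute(0, 2, 1, 3)
            return x


    class EASwinSketch(nn.Module):
        """Stack of factorized blocks plus a binary (real vs. generated) classifier."""

        def __init__(self, dim=768, depth=4):
            super().__init__()
            self.blocks = nn.ModuleList([FactorizedWindowedAttention(dim) for _ in range(depth)])
            self.head = nn.Linear(dim, 2)

        def forward(self, embeddings):
            # embeddings: (B, T, P, D) patch features from any pretrained ViT-style
            # video encoder, which stays frozen (the embedding-agnostic part).
            x = embeddings
            for blk in self.blocks:
                x = blk(x)
            return self.head(x.mean(dim=(1, 2)))  # pool over time and space, then classify


    # Example: 16 frames, 196 patches (14x14 grid), 768-dim features from a generic encoder.
    feats = torch.randn(2, 16, 196, 768)
    logits = EASwinSketch()(feats)  # -> shape (2, 2)

Because the detector only sees the embedding tensor, swapping in a different pretrained encoder only requires matching the feature dimension, which is the sense in which the design is embedding-agnostic.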

Metadata

arXiv ID: 2602.17260
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-02-19
Fetched: 2026-02-21 18:51
