February 24, 2026

Airavat: An Agentic Framework for Internet Measurement

Authors

Alagappan Ramanathan, Eunju Kang, Dongsu Han, Sangeetha Abdu Jyothi

Abstract

Internet measurement faces twin challenges: complex analyses require expert-level orchestration of tools, yet even syntactically correct implementations can have methodological flaws and can be difficult to verify. Democratizing measurement capabilities thus demands automating both workflow generation and verification against methodological standards established through decades of research. We present Airavat, the first agentic framework for Internet measurement workflow generation with systematic verification and validation. Airavat coordinates a set of agents mirroring expert reasoning: three agents handle problem decomposition, solution design, and code implementation, with assistance from a registry of existing tools. Two specialized engines ensure methodological correctness: a Verification Engine evaluates workflows against a knowledge graph encoding five decades of measurement research, while a Validation Engine identifies appropriate validation techniques grounded in established methodologies. Through four Internet measurement case studies, we demonstrate that Airavat (i) generates workflows matching expert-level solutions, (ii) makes sound architectural decisions, (iii) addresses novel problems without ground truth, and (iv) identifies methodological flaws missed by standard execution-based testing.
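The abstract describes a pipeline of three agents (decomposition, design, implementation) followed by engines that check methodological correctness. A minimal sketch of how such a pipeline might be wired together is below; all class names, method signatures, and the toy tool registry and knowledge graph are illustrative assumptions, not taken from the Airavat system itself.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-agent pipeline plus a verification
# step, as described in the abstract. Names are illustrative only.

@dataclass
class Workflow:
    subtasks: list
    design: str = ""
    flagged: list = field(default_factory=list)

class DecompositionAgent:
    def run(self, problem: str) -> Workflow:
        # Split the measurement question into ordered subtasks.
        return Workflow(subtasks=[s.strip() for s in problem.split(";")])

class DesignAgent:
    def run(self, wf: Workflow, tool_registry: dict) -> Workflow:
        # Map each subtask to an existing tool from the registry,
        # falling back to custom code where no tool is known.
        wf.design = " -> ".join(
            tool_registry.get(t, "custom") for t in wf.subtasks)
        return wf

class VerificationEngine:
    def check(self, wf: Workflow, knowledge_graph: set) -> Workflow:
        # Flag subtasks with no backing methodology in the graph,
        # standing in for the knowledge-graph check in the paper.
        wf.flagged = [t for t in wf.subtasks if t not in knowledge_graph]
        return wf

def generate(problem: str, tools: dict, kg: set) -> Workflow:
    wf = DecompositionAgent().run(problem)
    wf = DesignAgent().run(wf, tools)
    return VerificationEngine().check(wf, kg)

wf = generate(
    "collect traceroutes; map IPs to ASes",
    {"collect traceroutes": "scamper", "map IPs to ASes": "bgpdump"},
    {"collect traceroutes"},
)
print(wf.design)   # scamper -> bgpdump
print(wf.flagged)  # ['map IPs to ASes']
```

In this toy version the verification step merely flags unbacked subtasks; the paper's Verification Engine instead evaluates workflows against a knowledge graph of prior measurement research, which this sketch only gestures at.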

Metadata

arXiv ID: 2602.20924
Provider: ARXIV
Primary Category: cs.NI
Published: 2026-02-24
Fetched: 2026-02-25 06:05
