SVLAT: Scientific Visualization Literacy Assessment Test

Authors

Patrick Phuoc Do, Kaiyuan Tang, Kuangshi Ai, Chaoli Wang

Abstract

Scientific visualization (SciVis) has become an essential means for exploring, understanding, and communicating complex scientific phenomena. However, the field still lacks a validated instrument for assessing how well people read, understand, and interpret such visualizations. We present the scientific visualization literacy assessment test (SVLAT), which measures the general public's SciVis literacy. Covering a range of visualization forms and interpretation demands, SVLAT comprises 49 items grounded in 18 scientific visualizations and illustrations spanning eight visualization techniques and 11 tasks. Instrument development followed a staged, psychometrically grounded pipeline: we defined the construct and test blueprint, generated items, and conducted an expert review with five SciVis experts using the content validity ratio (mean CVR = 0.79). We then administered a pilot test (30 participants) and a large-scale test tryout (485 participants) to evaluate the instrument's psychometric properties. For validation, we performed item analysis and refinement using both classical test theory (CTT) and item response theory (IRT) to examine item functioning and overall test quality. SVLAT demonstrates high reliability in the tryout sample (McDonald's omega_t = 0.82, Cronbach's alpha = 0.81). The assessment materials are available at https://osf.io/hr3nw/.
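
The validity and reliability statistics quoted above follow standard psychometric definitions. As a minimal illustrative sketch (not the authors' analysis code; all data below is synthetic), Lawshe's content validity ratio for a single item is CVR = (n_e - N/2) / (N/2), where n_e of N panelists rate the item "essential", and Cronbach's alpha for a respondents-by-items score matrix can be computed directly:

import numpy as np

def content_validity_ratio(n_essential, n_experts):
    # Lawshe's CVR: (n_e - N/2) / (N/2); ranges from -1 to 1.
    half = n_experts / 2
    return (n_essential - half) / half

def cronbach_alpha(scores):
    # scores: (n_respondents, n_items) array of item scores.
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# With a 5-expert panel (as in the paper), 4 "essential" ratings give CVR = 0.6.
print(content_validity_ratio(4, 5))

# Synthetic 485 x 49 binary response matrix matching the tryout's dimensions;
# random answers yield alpha near 0, unlike the paper's reported 0.81.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(485, 49))
print(round(cronbach_alpha(responses), 3))

McDonald's omega_t, also reported, additionally accounts for the test's factor structure and is typically estimated with a factor-analysis package rather than a closed-form expression.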

Metadata

arXiv ID: 2603.19000
Provider: ARXIV
Primary Category: cs.HC
Published: 2026-03-19
Fetched: 2026-03-20 06:02
