March 23, 2026

From Scores to Strategies: Towards Gaze-Informed Diagnostic Assessment for Visualization Literacy

Author

Kathrin Schnizer

Abstract

Visualization literacy assessments typically rely on correctness to classify performance, providing little evidence about how readers arrive at their answers. We argue that gaze can address this gap as an implicit process signal that complements standardized tests without sacrificing their scalability. Synthesizing findings from visualization and related research, we show that gaze metrics capture cognitive load invisible to accuracy and response time, and reflect strategy differences in attention allocation that track proficiency. We propose assessments that integrate literacy scores with gaze-derived process indicators - component-level attention profiles, integration frequency, and viewing path dispersion - to distinguish fluent comprehension from labored success. This would shift literacy assessment from binary classification toward nuanced characterization of how readers navigate, integrate, and coordinate information across chart components. A roadmap identifies open challenges in empirical grounding, generalizability, assessment design, and practical feasibility.

Metadata

arXiv ID: 2603.21898
Provider: ARXIV
Primary Category: cs.HC
Published: 2026-03-23
Fetched: 2026-03-24 06:02

Comment: 4 pages, Workshop on Data Literacy for the 21st Century at CHI 2026, April 26, Barcelona, Spain