Shape vs. Context: Examining Human--AI Gaps in Ambiguous Japanese Character Recognition

Authors

Daichi Haraguchi

Abstract

High text recognition performance does not guarantee that Vision-Language Models (VLMs) share human-like decision patterns when resolving ambiguity. We investigate this behavioral gap by directly comparing humans and VLMs on continuously interpolated Japanese character shapes generated with a β-VAE. We estimate decision boundaries in a single-character recognition task (shape only) and evaluate whether VLM responses align with human judgments for shape in context, i.e., an ambiguous character near the human decision boundary embedded in word-level context. We find that human and VLM decision boundaries differ in the shape-only task, and that context can improve human alignment in some conditions. These results highlight qualitative behavioral differences, offering foundational insights toward human--VLM alignment benchmarking.
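The boundary-estimation step described above can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes each stimulus is indexed by an interpolation parameter t in [0, 1] between two character endpoints (the β-VAE that generates the shapes is omitted), and that each observer, human or VLM, gives a binary label per stimulus. A logistic psychometric curve is fitted by gradient descent, and the decision boundary is the t where the curve crosses 0.5. The toy response data below are hypothetical.

```python
import numpy as np

def fit_boundary(t, y, lr=0.5, steps=5000):
    """Fit a logistic psychometric curve p(B | t) = sigmoid(w*t + b)
    by gradient descent on the log-loss, then return the boundary
    t* where p = 0.5 (i.e., where w*t + b = 0)."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * t + b)))
        # Gradients of mean log-loss w.r.t. w and b
        grad_w = np.mean((p - y) * t)
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return -b / w

# Hypothetical labels: 0 = read as character A, 1 = read as character B.
t = np.linspace(0.0, 1.0, 11)
human = (t > 0.48).astype(float)  # human switches near t ~ 0.5
vlm = (t > 0.70).astype(float)    # VLM switches later, near t ~ 0.75

gap = abs(fit_boundary(t, vlm) - fit_boundary(t, human))
print(f"boundary gap: {gap:.2f}")
```

Comparing the fitted t* for a human observer against that of a VLM on the same interpolation axis gives a scalar measure of how far apart the two decision boundaries sit, which is the kind of shape-only comparison the abstract describes.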

Metadata

arXiv ID: 2602.23746
Provider: ARXIV
Primary Category: cs.HC
Categories: cs.HC, cs.CV
Comment: Accepted to CHI 2026 Poster track
Links: https://arxiv.org/abs/2602.23746v1 (abstract), https://arxiv.org/pdf/2602.23746v1 (PDF)
Published: 2026-02-27
Fetched: 2026-03-02 06:04