March 06, 2026

Spatial Colour Mixing Illusions as a Perception Stress Test for Vision-Language Models

Authors

Nicoleta-Nina Basoc, Adrian Cosma, Emilian Radoi

Abstract

Vision-language models (VLMs) achieve strong benchmark results, yet can exhibit systematic perceptual weaknesses: structured, large changes to pixel values can cause confident yet nonsensical predictions, even when the underlying scene remains easily recognizable to humans. We study this gap using Spatial Colour Mixing, a programmatic family of colour distortions that overlays structured patterns (in both RGB and Ostwald colour systems) onto natural images. We introduce a framework of eight spatial colour mixing variants and evaluate nine VLMs across three model families on four datasets. Across models and datasets, accuracy degrades sharply with increasing distortion, and scaling the language model does not reliably mitigate the failure. In a human study with 61 participants on an animal recognition dataset, humans substantially outperform VLMs under the same distortions. Finally, we show that a simple human-inspired preprocessing step recovers a meaningful portion of performance for several distortion types, motivating perception-aware preprocessing and tool-use as practical strategies for improving VLM robustness.
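The paper does not spell out its eight distortion variants here, but the core idea of overlaying a structured colour pattern onto a natural image can be sketched in a few lines. The snippet below is a hypothetical illustration, not the authors' implementation: it blends a checkerboard of two RGB colours into an image, with `alpha` playing the role of distortion strength (the function name, parameters, and pattern choice are assumptions).

```python
import numpy as np

def checkerboard_colour_mix(image, colour_a, colour_b, cell=8, alpha=0.5):
    """Blend a two-colour checkerboard pattern into an RGB image.

    image    : (H, W, 3) float array with values in [0, 1]
    colour_a : RGB triple used in even checker cells
    colour_b : RGB triple used in odd checker cells
    cell     : side length of a checker cell in pixels
    alpha    : distortion strength (0 = original image, 1 = pure pattern)
    """
    h, w, _ = image.shape
    ys, xs = np.indices((h, w))
    # 0/1 checkerboard mask over cell-sized blocks
    mask = (ys // cell + xs // cell) % 2
    pattern = np.where(mask[..., None] == 0,
                       np.asarray(colour_a, dtype=float),
                       np.asarray(colour_b, dtype=float))
    return (1 - alpha) * image + alpha * pattern

# Example: distort a mid-grey image with a red/green checker at 60% strength
img = np.full((16, 16, 3), 0.5)
out = checkerboard_colour_mix(img, (1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
                              cell=4, alpha=0.6)
```

Sweeping `alpha` from 0 toward 1 gives a controlled severity axis of the kind the evaluation varies; the Ostwald-system variants mentioned in the abstract would swap the RGB triples for colours sampled from that system instead.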

Metadata

arXiv ID: 2603.06141
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-06
Fetched: 2026-03-09 06:05
