Predicting Sentence Acceptability Judgments in Multimodal Contexts

Authors

Hyewon Jang, Nikolai Ilinykh, Sharid Loáiciga, Jey Han Lau, Shalom Lappin

Abstract

Previous work has examined the capacity of deep neural networks (DNNs), particularly transformers, to predict human sentence acceptability judgments, both independently of context and in document contexts. We consider the effect of prior exposure to visual images (i.e., visual context) on these judgments for humans and large language models (LLMs). Our results suggest that, in contrast to textual context, visual images appear to have little, if any, impact on human acceptability ratings. However, LLMs display the compression effect seen in previous work on human judgments in document contexts. Different types of LLMs are able to predict human acceptability judgments to a high degree of accuracy, but in general their performance is slightly better when visual contexts are removed. Moreover, the distribution of LLM judgments varies among models, with Qwen resembling human patterns and others diverging from them. In general, LLM-generated predictions of sentence acceptability are highly correlated with the models' normalised log probabilities. However, the correlations decrease when visual contexts are present, suggesting that a larger gap exists between the internal representations of LLMs and their generated predictions in the presence of visual contexts. Our experimental work suggests interesting points of similarity and of difference between human and LLM processing of sentences in multimodal contexts.
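The abstract relates LLM-generated acceptability predictions to the models' normalised log probabilities. As a rough illustration only (the paper's exact normalisation scheme is not specified here; prior acceptability work has used several variants, such as per-token mean log probability or SLOR), a minimal per-token mean log probability can be sketched as:

```python
import math

def normalized_log_prob(token_probs):
    """Mean log probability per token: sum(log p_i) / n.

    A common length normalisation so that longer sentences are not
    penalised simply for containing more tokens. The input is a list of
    per-token probabilities assigned by some language model.
    """
    if not token_probs:
        raise ValueError("need at least one token probability")
    return sum(math.log(p) for p in token_probs) / len(token_probs)

# Hypothetical per-token probabilities for two sentences; the values
# are illustrative, not taken from any model in the paper.
sent_a = [0.20, 0.50, 0.40]  # model finds this sentence more probable
sent_b = [0.05, 0.10, 0.20]  # model finds this sentence less probable

print(normalized_log_prob(sent_a) > normalized_log_prob(sent_b))  # True
```

Under this kind of measure, a higher (less negative) normalised log probability is taken as a proxy for greater acceptability, which can then be correlated with the model's explicitly generated ratings.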

Metadata

arXiv ID: 2602.20918
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-02-24
Fetched: 2026-02-25 06:05
