AI, LLM · February 26, 2026

Humans and LLMs Diverge on Probabilistic Inferences

Authors

Gaurav Kamath, Sreenath Madathil, Sebastian Schuster, Marie-Catherine de Marneffe, Siva Reddy

Abstract

Human reasoning often involves working over limited information to arrive at probabilistic conclusions. In its simplest form, this involves making an inference that is not strictly entailed by a premise, but rather only likely given the premise. While reasoning LLMs have demonstrated strong performance on logical and mathematical tasks, their behavior on such open-ended, non-deterministic inferences remains largely unexplored. We introduce ProbCOPA, a dataset of 210 handcrafted probabilistic inferences in English, each annotated for inference likelihood by 25–30 human participants. We find that human responses are graded and varied, revealing probabilistic judgments of the inferences in our dataset. Comparing these judgments with responses from eight state-of-the-art reasoning LLMs, we show that models consistently fail to produce human-like distributions. Finally, analyzing LLM reasoning chains, we find evidence of a common reasoning pattern used to evaluate such inferences. Our findings reveal persistent differences between humans and LLMs, and underscore the need to evaluate reasoning beyond deterministic settings.

Metadata

arXiv ID: 2602.23546
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-02-26
Fetched: 2026-03-02 06:04
