Stuck on Suggestions: Automation Bias, the Anchoring Effect, and the Factors That Shape Them in Computational Pathology

Authors

Emely Rosbach, Jonas Ammeling, Jonathan Ganz, Christof Albert Bertram, Thomas Conrad, Andreas Riener, Marc Aubreville

Abstract

Artificial intelligence (AI)-driven decision support systems can improve diagnostic accuracy and efficiency in computational pathology. However, collaboration between human experts and AI may introduce cognitive biases such as automation and anchoring bias, where users adopt system predictions blindly or are disproportionately influenced by AI advice, even when inaccurate. These effects may be amplified under time pressure, common in routine pathology, or shaped by individual user characteristics. We conducted an online experiment in which pathology experts (n = 28) estimated tumor cell percentages: once independently and once with AI support. A subset of estimations in each condition was performed under time strain. Overall, AI assistance improved diagnostic performance but introduced a 7% automation bias rate, defined as accepted negative consultations where previously correct independent judgments were overturned by incorrect AI advice. While time pressure did not increase the frequency of automation bias, it appeared to intensify its severity, reflected in stronger performance declines associated with increased AI reliance under cognitive load. A linear mixed-effects model (LMM) simulating weighted averaging showed a statistically significant positive coefficient for AI advice, indicating moderate anchoring on system output. This effect increased under time pressure, suggesting anchoring bias becomes more pronounced when cognitive resources are limited. A second LMM assessing automation reliance, a proxy for automation and anchoring bias, showed that professional experience and self-efficacy were associated with lower dependence on AI, whereas higher confidence during AI-assisted decisions was tied to increased AI reliance. These findings highlight the dual nature of AI integration in clinical workflows: improving performance while introducing risks of bias-driven diagnostic errors.
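The abstract's "weighted averaging" framing, in which the AI-assisted estimate blends a rater's independent judgment with the AI's advice and the coefficient on the advice term captures anchoring, can be illustrated with a minimal sketch. All data below are synthetic, and the variable names (`independent`, `ai_advice`, `final`) and the 0.35 anchoring weight are assumptions for illustration; the paper fits a linear mixed-effects model with per-participant random effects, which this plain least-squares sketch omits for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic tumor-cell percentage estimates (0-100)
independent = rng.uniform(0, 100, n)   # rater's own estimate, made without AI
ai_advice = rng.uniform(0, 100, n)     # percentage suggested by the AI system

# Assumed generative model: the final AI-assisted estimate is a weighted
# average of the rater's own judgment and the AI advice, plus noise.
true_w = 0.35                          # hypothetical anchoring weight on AI advice
final = (1 - true_w) * independent + true_w * ai_advice + rng.normal(0, 2, n)

# Recover the two weights by ordinary least squares: final ~ independent + ai_advice.
# A larger coefficient on ai_advice indicates stronger anchoring on system output.
X = np.column_stack([independent, ai_advice])
coef, *_ = np.linalg.lstsq(X, final, rcond=None)
w_own, w_ai = coef
print(f"weight on own estimate: {w_own:.2f}, weight on AI advice: {w_ai:.2f}")
```

In the paper's analysis the analogous coefficient was significantly positive, and grew under time pressure; in a real fit, participant would enter as a random-effects grouping factor rather than being pooled as here.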

Metadata

arXiv ID: 2603.11821
DOI: 10.59275/j.melba.2026-87b1
Provider: ARXIV
Primary Category: cs.HC
Published: 2026-03-12
Comments: Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA)
Fetched: 2026-03-14 05:03
