Paper
Stuck on Suggestions: Automation Bias, the Anchoring Effect, and the Factors That Shape Them in Computational Pathology
Authors
Emely Rosbach, Jonas Ammeling, Jonathan Ganz, Christof Albert Bertram, Thomas Conrad, Andreas Riener, Marc Aubreville
Abstract
Artificial intelligence (AI)-driven decision support systems can improve diagnostic accuracy and efficiency in computational pathology. However, collaboration between human experts and AI may introduce cognitive biases such as automation and anchoring bias, where users adopt system predictions blindly or are disproportionately influenced by AI advice, even when inaccurate. These effects may be amplified under time pressure, common in routine pathology, or shaped by individual user characteristics. We conducted an online experiment in which pathology experts (n = 28) estimated tumor cell percentages: once independently and once with AI support. A subset of estimations in each condition was performed under time strain. Overall, AI assistance improved diagnostic performance but introduced a 7% automation bias rate, defined as accepted negative consultations where previously correct independent judgments were overturned by incorrect AI advice. While time pressure did not increase the frequency of automation bias, it appeared to intensify its severity, reflected in stronger performance declines associated with increased AI reliance under cognitive load. A linear mixed-effects model (LMM) simulating weighted averaging showed a statistically significant positive coefficient for AI advice, indicating moderate anchoring on system output. This effect increased under time pressure, suggesting anchoring bias becomes more pronounced when cognitive resources are limited. A second LMM assessing automation reliance, a proxy for automation and anchoring bias, showed that professional experience and self-efficacy were associated with lower dependence on AI, whereas higher confidence during AI-assisted decisions was tied to increased AI reliance. These findings highlight the dual nature of AI integration in clinical workflows: improving performance while introducing risks of bias-driven diagnostic errors.
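The weighted-averaging analysis described in the abstract can be illustrated with a small sketch: a linear mixed-effects model regresses each final (AI-assisted) estimate on the rater's independent estimate and the AI advice, with a random intercept per rater, so the fixed-effect coefficient on the advice acts as an anchoring weight. This is not the authors' code; the data are synthetic and all variable names (`independent`, `ai_advice`, `final`, `rater`) are assumptions for illustration.

```python
# Illustrative sketch (not the paper's implementation): a linear
# mixed-effects model treating the final estimate as a weighted
# average of the independent estimate and the AI advice, with a
# random intercept per rater. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_raters, n_cases = 28, 20
raters = np.repeat(np.arange(n_raters), n_cases)

# Independent tumor-cell-percentage estimates and noisy AI advice.
independent = rng.uniform(0, 100, n_raters * n_cases)
ai_advice = independent + rng.normal(0, 15, independent.size)

# Simulate final estimates that anchor on the advice with weight ~0.3.
final = 0.7 * independent + 0.3 * ai_advice + rng.normal(0, 3, independent.size)

df = pd.DataFrame({
    "rater": raters,
    "independent": independent,
    "ai_advice": ai_advice,
    "final": final,
})

# Random intercept per rater; the fixed effect on ai_advice estimates
# the anchoring weight placed on the system's output.
model = smf.mixedlm("final ~ independent + ai_advice", df, groups=df["rater"])
result = model.fit()
print(result.params["ai_advice"])
```

In this simulated setting the recovered coefficient on `ai_advice` is close to the 0.3 weight used to generate the data; in the study, a significantly positive coefficient of this kind is what the authors interpret as moderate anchoring on the AI's suggestion.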
Metadata
arXiv: 2603.11821v1 (primary category: cs.HC)
DOI: 10.59275/j.melba.2026-87b1
Published: 2026-03-12
Comment: Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA)
Links: https://arxiv.org/abs/2603.11821v1 · https://arxiv.org/pdf/2603.11821v1 · https://doi.org/10.59275/j.melba.2026-87b1