
Why We Need to Destroy the Illusion of Speaking to A Human: Critical Reflections On Ethics at the Front-End for LLMs

Authors

Sarah Diefenbach, Daniel Ullrich

Abstract

Conversations with chatbots based on Large Language Models (LLMs) such as ChatGPT have become one of the major forms of interaction with Artificial Intelligence (AI) in everyday life. What makes this interaction so convenient is that interacting with LLMs feels natural and resembles what we know from real, human conversations. At the same time, this seeming similarity is part of one of the ethical challenges of AI design, since it activates many misleading ideas about AI. We discuss similarities and differences between human-AI conversations and interpersonal conversations and highlight starting points for more ethical design of AI at the front-end.

Metadata

arXiv ID: 2603.16633
Provider: ARXIV
Primary Category: cs.HC
Published: 2026-03-17
Fetched: 2026-03-18 06:02


Comment: CHI 2026 Conference on Human-Computer Interaction
PDF: https://arxiv.org/pdf/2603.16633v1