Is it Me? Toward Self-Extension to AI Avatars in Virtual Reality

Authors

Jieying Zhang, Steeven Villa, Abdallah El Ali

Abstract

Advances in generative AI, speech synthesis, and embodied avatars enable systems that not only assist communication, but can act as proxies on users' behalf. Prior work in HCI has largely focused on systems as external tools, with less attention paid to the experiential consequences of users' speech and actions becoming assimilated with AI-generated output. We introduce the design and implementation of ProxyMe, a work-in-progress VR prototype that allows users to embody an avatar whose voice and spoken content are modified by an AI system. By combining avatar-based embodiment, voice cloning, and AI-mediated speech augmentation, ProxyMe invites the exploration of avatar self-extension: situations in which AI-modified communication is experienced as part of one's own expressive behavior. We chart out research challenges and envisioned scenarios, with a focus on how varying degrees of delegation and steerability can influence perceived agency, authorship, and self-identification.

Metadata

arXiv ID: 2603.06030
DOI: 10.1145/3772363.3798899
Provider: ARXIV
Primary Category: cs.HC
Published: 2026-03-06
Comment: Conditionally accepted to ACM CHI '26 Extended Abstracts (Poster)
Fetched: 2026-03-09 06:05
