
Evidence of an Emergent "Self" in Continual Robot Learning

Authors

Adidev Jhunjhunwala, Judah Goldfeder, Hod Lipson

Abstract

A key challenge in understanding self-awareness has been the lack of a principled way to quantify whether an intelligent system has a concept of a "self," and, if so, to differentiate that "self" from other cognitive structures. We propose that the "self" can be isolated by seeking the invariant portion of the cognitive process, the part that changes relatively little compared with more rapidly acquired knowledge and skills, because our self is the most persistent aspect of our experience. We used this principle to analyze the cognitive structure of robots under two conditions: one robot learns a constant task, while a second robot undergoes continual learning across variable tasks. We find that robots subjected to continual learning develop an invariant subnetwork that is significantly more stable (p < 0.001) than that of the control. We suggest that this principle can offer a window into exploring selfhood in other cognitive AI systems.

Metadata

arXiv ID: 2603.24350
Provider: ARXIV
Primary Category: cs.RO
Categories: cs.RO, cs.AI, cs.LG
Comment: 39 pages, 17 figures, includes supplementary materials
Published: 2026-03-25
Fetched: 2026-03-26 06:02
