
AI Must Embrace Specialization via Superhuman Adaptable Intelligence

Authors

Judah Goldfeder, Philippe Wyder, Yann LeCun, Ravid Shwartz Ziv

Abstract

Everyone from AI executives and researchers to doomsayers, politicians, and activists is talking about Artificial General Intelligence (AGI). Yet they often don't seem to agree on its exact definition. One common definition of AGI is an AI that can do everything a human can do, but are humans truly general? In this paper, we address what is wrong with our conception of AGI and why, even in its most coherent formulation, it is a flawed concept for describing the future of AI. We explore whether the most widely accepted definitions are plausible, useful, and truly general. We argue that, rather than striving for generality, AI must embrace specialization, and within that specialization strive for superhuman performance. To this end, we introduce Superhuman Adaptable Intelligence (SAI): intelligence that can learn to exceed humans at anything important that we can do, and that can fill in the skill gaps where humans are incapable. We then lay out how SAI can sharpen a discussion around AI that has been blurred by an overloaded definition of AGI, and extrapolate the implications of using it as a guide for the future.

Metadata

arXiv ID: 2602.23643
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-02-27
Fetched: 2026-03-02 06:04
