
Beyond Monolithic Models: Symbolic Seams for Composable Neuro-Symbolic Architectures

Authors

Nicolas Schuler, Vincenzo Scotti, Raffaela Mirandola

Abstract

Current Artificial Intelligence (AI) systems are frequently built around monolithic models that entangle perception, reasoning, and decision-making, a design that often conflicts with established software architecture principles. Large Language Models (LLMs) amplify this tendency, offering scale but limited transparency and adaptability. To address this, we argue for composability as a guiding principle that treats AI as a living architecture rather than a fixed artifact. We introduce symbolic seams: explicit architectural breakpoints where a system commits to inspectable, typed boundary objects, versioned constraint bundles, and decision traces. We describe how seams enable a composable neuro-symbolic design that combines the data-driven adaptability of learned components with the verifiability of explicit symbolic constraints, achieving strengths neither paradigm attains alone. By treating AI systems as assemblies of interchangeable parts rather than indivisible wholes, we outline a direction for intelligent systems that are extensible, transparent, and amenable to principled evolution.
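The seam concept from the abstract can be sketched in code, purely as an illustration: a typed boundary object, a versioned bundle of symbolic constraints, and a decision trace recorded each time a learned component's output crosses the seam. All names, types, and the confidence threshold below are hypothetical assumptions for this sketch, not an interface defined by the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass(frozen=True)
class BoundaryObject:
    """Inspectable, typed payload crossing a seam (hypothetical schema)."""
    label: str
    confidence: float


@dataclass(frozen=True)
class ConstraintBundle:
    """Versioned set of symbolic constraints enforced at the seam."""
    version: str
    constraints: List[Callable[[BoundaryObject], bool]]


@dataclass
class DecisionTrace:
    """Record of what crossed the seam and whether it was admitted."""
    entries: List[str] = field(default_factory=list)


def cross_seam(obj: BoundaryObject, bundle: ConstraintBundle,
               trace: DecisionTrace) -> bool:
    """Admit the boundary object only if every constraint holds,
    appending an inspectable entry to the decision trace either way."""
    ok = all(check(obj) for check in bundle.constraints)
    trace.entries.append(
        f"v{bundle.version}: {obj.label} ({obj.confidence:.2f}) -> "
        f"{'accepted' if ok else 'rejected'}"
    )
    return ok


# Example: outputs of a learned classifier checked at the seam.
bundle = ConstraintBundle(
    version="1.0",
    constraints=[
        lambda o: 0.0 <= o.confidence <= 1.0,  # well-formed probability
        lambda o: o.confidence >= 0.7,         # assumed admission threshold
    ],
)
trace = DecisionTrace()
accepted = cross_seam(BoundaryObject("cat", 0.92), bundle, trace)
rejected = cross_seam(BoundaryObject("dog", 0.41), bundle, trace)
```

Because the constraint bundle is versioned and the trace is explicit, either side of the seam can be swapped out independently, which is the composability property the abstract argues for.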

Metadata

arXiv ID: 2603.15087
Provider: ARXIV
Primary Category: cs.SE
Published: 2026-03-16
Fetched: 2026-03-17 06:02

Comment: Submitted to the New and Emerging Ideas (NEMI) track at ICSA 2026