
When Should an AI Act? A Human-Centered Model of Scene, Context, and Behavior for Agentic AI Design

Authors

Soyoung Jung, Daehoo Yoon, Sung Gyu Koh, Young Hwan Kim, Yehan Ahn, Sung Park

Abstract

Agentic AI increasingly intervenes proactively by inferring users' situations from contextual data yet often fails for lack of principled judgment about when, why, and whether to act. We address this gap by proposing a conceptual model that reframes behavior as an interpretive outcome integrating Scene (observable situation), Context (user-constructed meaning), and Human Behavior Factors (determinants shaping behavioral likelihood). Grounded in multidisciplinary perspectives across the humanities, social sciences, HCI, and engineering, the model separates what is observable from what is meaningful to the user and explains how the same scene can yield different behavioral meanings and outcomes. To translate this lens into design action, we derive five agent design principles (behavioral alignment, contextual sensitivity, temporal appropriateness, motivational calibration, and agency preservation) that guide intervention depth, timing, intensity, and restraint. Together, the model and principles provide a foundation for designing agentic AI systems that act with contextual sensitivity and judgment in interactions.

Metadata

arXiv ID: 2602.22814
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-02-26
Fetched: 2026-02-27 04:35
