Strategic Advice in the Age of Personal AI

Authors

Yueyang Liu, Wichinpong Park Sinchaisri

Abstract

Personal AI assistants have changed how people use institutional and professional advice. We study this new strategic setting in which individuals may stochastically consult a personal AI whose recommendation is predictable to the focal advisor. Personal AI enters this strategic environment along two dimensions: how often it is consulted and how much weight it receives in the human's decision when consulted. Anticipating this, the advisor responds by counteracting the personal AI recommendation. Counteraction becomes more aggressive as personal AI is consulted more often. Yet advisor performance is non-monotone: equilibrium loss is highest at intermediate levels of adoption and vanishes when personal AI is never used or always used. Trust affects performance through a single relative influence index, and greater relative influence of personal AI increases advisor vulnerability. Extending the framework to costly credibility building, we characterize how personal AI adoption reshapes incentives to invest in trust.
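The abstract's non-monotonicity claim can be illustrated with a stylized toy model of our own construction (not the paper's actual formulation): an advisor with target `t` sends message `m`, a personal AI predictably recommends `a = 0`, and with probability `p` the decision-maker blends the two as `(1-w)*m + w*a`, where `w` is a hypothetical trust weight. Minimizing expected quadratic loss over `m` shows the counteraction and the interior peak described in the abstract:

```python
def best_response(p, t=1.0, w=0.5):
    """Advisor's loss-minimizing message m* under quadratic loss.

    The advisor hedges between two branches: with prob. 1-p the decision
    is m itself; with prob. p it is the blend (1-w)*m + w*0.
    Closed form from the first-order condition of the quadratic loss.
    """
    return t * ((1 - p) + p * (1 - w)) / ((1 - p) + p * (1 - w) ** 2)

def equilibrium_loss(p, t=1.0, w=0.5):
    """Advisor's expected quadratic loss at the best response."""
    m = best_response(p, t, w)
    return (1 - p) * (m - t) ** 2 + p * ((1 - w) * m - t) ** 2

# Counteraction grows with adoption: m* drifts further from t as p rises,
# to offset the AI's pull toward 0.
# Loss vanishes at p = 0 and p = 1 (the advisor can tailor m to a single
# known branch) but is strictly positive at intermediate p, where no one
# message fits both branches.
losses = [(p / 10, equilibrium_loss(p / 10)) for p in range(11)]
```

Under this sketch the "relative influence index" role of trust corresponds loosely to the weight `w`; all names and functional forms here are illustrative assumptions, not the paper's definitions.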

Metadata

arXiv ID: 2603.02055
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-02
Fetched: 2026-03-03 04:34
