AI · LLM · March 06, 2026

Evaluating LLM Alignment With Human Trust Models

Authors

Anushka Debnath, Stephen Cranefield, Bastin Tony Roy Savarimuthu, Emiliano Lorini

Abstract

Trust plays a pivotal role in enabling effective cooperation, reducing uncertainty, and guiding decision-making in both human interactions and multi-agent systems. Despite its significance, there is limited understanding of how large language models (LLMs) internally conceptualize and reason about trust. This work presents a white-box analysis of trust representation in EleutherAI/gpt-j-6B, using contrastive prompting to generate embedding vectors within the LLM's activation space for dyadic trust and related interpersonal relationship attributes. We first identified trust-related concepts from five established human trust models. We then determined a threshold for significant conceptual alignment by computing pairwise cosine similarities across 60 general emotional concepts. Finally, we measured the cosine similarities between the LLM's internal representation of trust and the derived trust-related concepts. Our results show that the internal trust representation of EleutherAI/gpt-j-6B aligns most closely with the Castelfranchi socio-cognitive model, followed by the Marsh model. These findings indicate that LLMs encode socio-cognitive constructs in their activation space in ways that support meaningful comparative analyses, inform theories of social cognition, and support the design of human-AI collaborative systems.
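To make the pipeline concrete, here is a minimal sketch (not the authors' code) of the two steps the abstract describes: extracting a concept direction from GPT-J's activation space via a contrastive prompt pair, and comparing the resulting "trust" vector against trust-related attributes with cosine similarity. The prompt wording, layer index, pooling strategy, and attribute list are all illustrative assumptions rather than the paper's actual choices.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, output_hidden_states=True, torch_dtype=torch.float16
)
model.eval()

LAYER = 14  # hypothetical mid-stack layer; the paper's choice may differ


def activation(prompt: str) -> torch.Tensor:
    """Mean-pooled hidden state of `prompt` at LAYER."""
    ids = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    # hidden_states is a tuple of (batch, seq, hidden) tensors, one per layer
    return out.hidden_states[LAYER][0].mean(dim=0)


def concept_vector(concept: str) -> torch.Tensor:
    """Contrastive embedding: activation of a prompt asserting the concept
    minus one negating it, isolating the concept's direction."""
    pos = activation(f"Alice feels {concept} towards Bob.")
    neg = activation(f"Alice feels no {concept} towards Bob.")
    return pos - neg


trust = concept_vector("trust")
# Hypothetical Castelfranchi-style attributes; the paper derives its own list
for attr in ["competence", "willingness", "dependence"]:
    sim = torch.nn.functional.cosine_similarity(trust, concept_vector(attr), dim=0)
    print(f"cos(trust, {attr}) = {sim.item():.3f}")
```

Under this reading, the paper's significance threshold would be obtained analogously: compute all pairwise cosine similarities over a pool of 60 general emotional concept vectors and use their distribution as a baseline. That step is omitted here for brevity.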

Metadata

arXiv ID: 2603.05839
Provider: ARXIV
Primary Category: cs.MA
Categories: cs.MA, cs.AI
Published: 2026-03-06
Fetched: 2026-03-09 06:05
Comment: This paper will appear in the post-proceedings of ICAART 2026
