AI · LLM · March 20, 2026

From Token to Item: Enhancing Large Language Models for Recommendation via Item-aware Attention Mechanism

Authors

Xiaokun Zhang, Bowei He, Jiamin Chen, Ziqiang Cui, Chen Ma

Abstract

Large Language Models (LLMs) have recently gained increasing attention in the field of recommendation. Existing LLM-based methods typically represent items as token sequences, and apply attention layers on these tokens to generate recommendations. However, by inheriting the standard attention mechanism, these methods focus on modeling token-level relations. This token-centric focus overlooks the item as the fundamental unit of recommendation, preventing existing methods from effectively capturing collaborative relations at the item level. In this work, we revisit the role of tokens in LLM-driven recommendation and categorize their relations into two types: (1) intra-item token relations, which present the content semantics of an item, e.g., name, color, and size; and (2) inter-item token relations, which encode collaborative relations across items. Building on these insights, we propose a novel framework with an item-aware attention mechanism (IAM) to enhance LLMs for recommendation. Specifically, IAM devises two complementary attention layers: (1) an intra-item attention layer, which restricts attention to tokens within the same item, modeling item content semantics; and (2) an inter-item attention layer, which attends exclusively to token relations across items, capturing item collaborative relations. Through this stacked design, IAM explicitly emphasizes items as the fundamental units in recommendation, enabling LLMs to effectively exploit item-level collaborative relations. Extensive experiments on several public datasets demonstrate the effectiveness of IAM in enhancing LLMs for personalized recommendation.
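The two attention layers described in the abstract can be pictured as boolean masks over a causal attention matrix: the intra-item layer only lets a token attend to earlier tokens of the same item, while the inter-item layer only lets it attend to tokens of other items. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the function names, the causal restriction, and the self-attention fallback in the inter-item mask are illustrative assumptions.

```python
import numpy as np

def item_aware_masks(item_ids):
    """Build the two boolean masks suggested by the abstract:
    intra-item (same-item tokens only) and inter-item (cross-item tokens only).
    Both are restricted to past positions, as in decoder-style LLMs."""
    ids = np.asarray(item_ids)
    n = len(ids)
    same_item = ids[:, None] == ids[None, :]       # True where tokens i and j share an item
    causal = np.tril(np.ones((n, n), dtype=bool))  # attend only to current/past tokens
    intra = same_item & causal
    inter = ~same_item & causal
    inter |= np.eye(n, dtype=bool)                 # keep self-attention so no row is empty
    return intra, inter

def masked_attention(Q, K, V, mask):
    """Scaled dot-product attention; disallowed positions get ~zero weight."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)          # large negative -> ~0 after softmax
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# Six tokens spanning three items, e.g. each item described by a few tokens.
item_ids = [0, 0, 0, 1, 1, 2]
intra_mask, inter_mask = item_aware_masks(item_ids)
```

In the paper's stacked design the intra-item layer would feed its output into the inter-item layer; the masks above only illustrate which token pairs each layer is allowed to relate.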

Metadata

arXiv ID: 2603.19693
Provider: ARXIV
Primary Category: cs.IR
Published: 2026-03-20
Fetched: 2026-03-23 16:54
Comment: This work has been accepted by WWW 2026