AI · LLM · February 19, 2026

Improving LLM-based Recommendation with Self-Hard Negatives from Intermediate Layers

Authors

Bingqian Li, Bowen Zheng, Xiaolei Wang, Long Zhang, Jinpeng Wang, Sheng Chen, Wayne Xin Zhao, Ji-rong Wen

Abstract

Large language models (LLMs) have shown great promise in recommender systems, where supervised fine-tuning (SFT) is commonly used for adaptation. Subsequent studies further introduce preference learning to incorporate negative samples into the training process. However, existing methods rely on sequence-level, offline-generated negatives, making them less discriminative and informative when adapting LLMs to recommendation tasks with large negative item spaces. To address these challenges, we propose ILRec, a novel preference fine-tuning framework for LLM-based recommendation, leveraging self-hard negative signals extracted from intermediate layers to improve preference learning. Specifically, we identify self-hard negative tokens from intermediate layers as fine-grained negative supervision that dynamically reflects the model's preference learning process. To effectively integrate these signals into training, we design a two-stage framework comprising cross-layer preference optimization and cross-layer preference distillation, enabling the model to jointly discriminate informative negatives and enhance the quality of negative signals from intermediate layers. In addition, we introduce a lightweight collaborative filtering model to assign token-level rewards for negative signals, mitigating the risk of over-penalizing false negatives. Extensive experiments on three datasets demonstrate ILRec's effectiveness in enhancing the performance of LLM-based recommender systems.
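The abstract's core idea is to mine "self-hard negative" tokens from the model's own intermediate layers. One plausible way to realize this, in the spirit of the logit-lens technique, is to project an intermediate-layer hidden state through the LM head and take the high-probability tokens that are not the ground-truth item token as hard negatives. The sketch below is illustrative only and does not reproduce the authors' implementation; the function name, dimensions, and the plain matrix-projection stand-in for a real LM head are all assumptions.

```python
import numpy as np

def self_hard_negatives(hidden_state, W_head, target_token_id, k=5):
    """Illustrative sketch (not ILRec's actual code): rank vocabulary tokens
    by an intermediate layer's logit-lens distribution and return the top-k
    tokens other than the ground truth as candidate self-hard negatives.

    hidden_state : (d,) intermediate-layer hidden state
    W_head       : (vocab_size, d) LM-head projection matrix
    """
    # Logit-lens style projection: hidden state -> vocabulary logits.
    logits = hidden_state @ W_head.T                  # (vocab_size,)
    # Numerically stable softmax over the vocabulary.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Tokens sorted by descending probability under this intermediate layer.
    ranked = np.argsort(-probs)
    # High-probability tokens that are *not* the target act as hard negatives.
    negatives = [int(t) for t in ranked if int(t) != target_token_id][:k]
    return negatives, probs
```

Because these negatives are what the intermediate layers themselves rank highly, they are "hard" by construction and track the model's current state of preference learning, unlike offline-generated sequence-level negatives; the paper's token-level CF rewards would then down-weight any of these that are actually false negatives.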

Metadata

arXiv ID: 2602.17410
Provider: ARXIV
Primary Category: cs.IR
Published: 2026-02-19
