CharacterFlywheel: Scaling Iterative Improvement of Engaging and Steerable LLMs in Production
Authors
Yixin Nie, Lin Guan, Zhongyao Ma, Anchit Gupta, Yipin Zhou, Xiao Li, Zhengping Zhou, Raymond Zeng, Gelin Zhou, Shigan Chu, Ajay Thampi, Wancen Mu, Nathan Shuster, Ketong Wang, Lin Chen, Jason Brewer, Derek Hao Hu, Alexander McCauley, Jason Weston, Sem Park, Na Zhang, Kevin Tang
Abstract
This report presents CharacterFlywheel, an iterative flywheel process for improving large language models (LLMs) in production social chat applications across Instagram, WhatsApp, and Messenger. Starting from LLaMA 3.1, we refined models across 15 generations using data from both internal and external real-user traffic. Through continuous deployments from July 2024 to April 2025, we conducted controlled 7-day A/B tests showing consistent engagement improvements: 7 of 8 newly deployed models demonstrated positive lift over the baseline, with the strongest performers achieving up to 8.8% improvement in engagement breadth and 19.4% in engagement depth. We also observed substantial gains in steerability, with instruction following increasing from 59.2% to 84.8% and instruction violations decreasing from 26.6% to 5.8%. We detail the CharacterFlywheel process, which integrates data curation, reward modeling to estimate and interpolate the landscape of engagement metrics, supervised fine-tuning (SFT), reinforcement learning (RL), and both offline and online evaluation to ensure reliable progress at each optimization step. We also discuss our methods for overfitting prevention and navigating production dynamics at scale. These contributions advance the scientific rigor and understanding of LLMs in social applications serving millions of users.
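The abstract reports relative lift from controlled 7-day A/B tests (e.g. up to 8.8% in engagement breadth). As an illustrative sketch only, not the paper's actual evaluation code, the arithmetic behind such a lift figure and a standard significance check can be written as follows; all metric names, user counts, and rates here are hypothetical assumptions.

```python
# Illustrative sketch (not from the paper): relative lift of an engagement
# metric between treatment and control arms of an A/B test, plus a standard
# two-proportion z-test. All numbers below are hypothetical.
from math import sqrt


def relative_lift(treatment_rate: float, control_rate: float) -> float:
    """Relative improvement of treatment over control, e.g. 0.088 for +8.8%."""
    return (treatment_rate - control_rate) / control_rate


def two_proportion_z(hits_t: int, n_t: int, hits_c: int, n_c: int) -> float:
    """z-statistic for comparing two engagement proportions (pooled variance)."""
    p_t, p_c = hits_t / n_t, hits_c / n_c
    p_pool = (hits_t + hits_c) / (n_t + n_c)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    return (p_t - p_c) / se


# Hypothetical example: 10,880 of 100,000 treated users engaged
# vs. 10,000 of 100,000 controls -> +8.8% relative lift.
lift = relative_lift(10_880 / 100_000, 10_000 / 100_000)
z = two_proportion_z(10_880, 100_000, 10_000, 100_000)
```

A positive lift alone is not enough in production; the z-statistic (or an equivalent confidence interval) is what distinguishes real improvement from week-to-week traffic noise.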