
Decision Support under Prediction-Induced Censoring

Authors

Yan Chen, Ruyi Huang, Cheng Liu

Abstract

In many data-driven online decision systems, actions determine not only operational costs but also the data availability for future learning -- a phenomenon termed Prediction-Induced Censoring (PIC). This challenge is particularly acute in large-scale resource allocation for generative AI (GenAI) serving: insufficient capacity triggers shortages but hides the true demand, leaving the system with only a "greater-than" constraint. Standard decision-making approaches that rely on uncensored data suffer from selection bias, often locking the system into a self-reinforcing low-provisioning trap. To break this loop, this paper proposes an adaptive approach named PIC-Reinforcement Learning (PIC-RL), a closed-loop framework that transforms censoring from a data quality problem into a decision signal. PIC-RL integrates (1) Uncertainty-Aware Demand Prediction to manage the information-cost trade-off, (2) Pessimistic Surrogate Inference to construct decision-aligned conservative feedback from shortage events, and (3) Dual-Timescale Adaptation to stabilize online learning against distribution drift. The analysis provides theoretical guarantees that the feedback design corrects the selection bias inherent in naive learning. Experiments on production Alibaba GenAI traces demonstrate that PIC-RL consistently outperforms state-of-the-art baselines, reducing service degradation by up to 50% while maintaining cost efficiency.
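The censoring loop the abstract describes can be illustrated with a toy simulation: when provisioned capacity falls short, the system only observes the capacity level (a "greater-than" constraint), and naive learning on that censored signal locks in under-provisioning, while conservatively inflating censored observations escapes the trap. This is only a minimal sketch of the selection-bias phenomenon, not the paper's PIC-RL algorithm; the `surrogate_margin` parameter and the `run` helper are hypothetical illustrations.

```python
import statistics

def observe(true_demand, capacity):
    """Prediction-Induced Censoring: the system sees min(demand, capacity)
    and a flag telling it whether the observation was censored."""
    return min(true_demand, capacity), true_demand > capacity

def run(surrogate_margin, true_demand=100.0, capacity0=60.0, steps=50):
    """Provision next-step capacity as the running mean of observed demand.

    surrogate_margin = 0 reproduces naive learning (censored values are
    treated as true demand -> selection bias). A positive margin stands in
    for pessimistic surrogate feedback: censored observations are replaced
    by a conservative value above the censoring point. Illustrative only.
    """
    history = []
    capacity = capacity0
    for _ in range(steps):
        obs, censored = observe(true_demand, capacity)
        if censored:
            # shortage event: we only know demand >= capacity, so feed back
            # a pessimistic (inflated) surrogate instead of the censored value
            obs = capacity * (1.0 + surrogate_margin)
        history.append(obs)
        capacity = statistics.mean(history)
    return capacity

naive = run(0.0)         # censored value taken at face value
pessimistic = run(0.25)  # conservative surrogate feedback
```

With naive learning the first censored observation (60) becomes the permanent estimate: every later step reproduces it, which is exactly the self-reinforcing low-provisioning trap. With a positive margin, each shortage pushes the estimate upward until capacity clears true demand and uncensored observations take over.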

Metadata

arXiv ID: 2602.18031
Provider: ARXIV
Primary Category: eess.SY
Published: 2026-02-20
Fetched: 2026-02-23 05:33
