March 16, 2026

GameUIAgent: An LLM-Powered Framework for Automated Game UI Design with Structured Intermediate Representation

Authors

Wei Zeng, Fengwei An, Zhen Liu, Jian Zhao

Abstract

Game UI design requires consistent visual assets across rarity tiers yet remains a predominantly manual process. We present GameUIAgent, an LLM-powered agentic framework that translates natural language descriptions into editable Figma designs via a Design Spec JSON intermediate representation. A six-stage neuro-symbolic pipeline combines LLM generation, deterministic post-processing, and a Vision-Language Model (VLM)-guided Reflection Controller (RC) for iterative self-correction with guaranteed non-regressive quality. Evaluated across 110 test cases, three LLMs, and three UI templates, cross-model analysis establishes a game-domain failure taxonomy (rarity-dependent degradation; visual emptiness) and uncovers two key empirical findings. A Quality Ceiling Effect (Pearson r=-0.96, p<0.01) suggests that RC improvement is bounded by headroom below a quality threshold -- a visual-domain counterpart to test-time compute scaling laws. A Rendering-Evaluation Fidelity Principle reveals that partial rendering enhancements paradoxically degrade VLM evaluation by amplifying structural defects. Together, these results establish foundational principles for LLM-driven visual generation agents in game production.
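The Quality Ceiling Effect reported above is a strong negative correlation (Pearson r=-0.96) between a design's initial quality and the improvement the Reflection Controller can add: the less headroom below the quality threshold, the smaller the gain. A minimal sketch of how such a correlation would be computed is below; the data points are hypothetical placeholders, not values from the paper, and the variable names (`initial_quality`, `rc_improvement`) are illustrative assumptions.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data illustrating a ceiling effect: designs that start
# with a higher VLM quality score have less headroom, so the Reflection
# Controller's measured improvement shrinks as initial quality rises.
initial_quality = [0.55, 0.62, 0.70, 0.78, 0.85, 0.91]
rc_improvement  = [0.30, 0.25, 0.19, 0.13, 0.08, 0.04]

r = pearson_r(initial_quality, rc_improvement)
print(round(r, 3))  # strongly negative, close to -1
```

A correlation this close to -1 is what the paper's finding describes: RC improvement is almost entirely explained by remaining headroom below the quality threshold.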

Metadata

arXiv ID: 2603.14724
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-16
Comments: 8 pages, 6 figures
Fetched: 2026-03-17 06:02

