Paper
Roomify: Spatially-Grounded Style Transformation for Immersive Virtual Environments
Authors
Xueyang Wang, Qinxuan Cen, Weitao Bi, Yunxiang Ma, Xin Yi, Robert Xiao, Xinyi Fu, Hewu Li
Abstract
We present Roomify, a spatially-grounded transformation system that generates themed virtual environments anchored to users' physical rooms while maintaining spatial structure and functional semantics. Current VR approaches face a fundamental trade-off: full immersion sacrifices spatial awareness, while passthrough solutions break presence. Roomify addresses this through spatially-grounded transformation: treating physical spaces as "spatial containers" that preserve key functional and geometric properties of furniture while enabling radical stylistic changes. Our pipeline combines in-situ 3D scene understanding, AI-driven spatial reasoning, and style-aware generation to create personalized virtual environments grounded in physical reality. We introduce a cross-reality authoring tool enabling fine-grained user control through MR editing and VR preview workflows. Two user studies validate our approach: one with 18 VR users demonstrates a 63% improvement in presence over passthrough and 26% over fully virtual baselines while maintaining spatial awareness; another with 8 design professionals confirms the system's creative expressiveness (scene quality: 5.95/7; creativity support: 6.08/7) and professional workflow value across diverse environments.
Metadata
arXiv: 2603.04917v1 (cs.HC) • Published: 2026-03-05
DOI: 10.1145/3772318.3791803
Comment: Accepted at CHI 2026 (ACM Conference on Human Factors in Computing Systems). 24 pages, 10 figures. Author's version.
Related papers
Gen-Searcher: Reinforcing Agentic Search for Image Generation
Kaituo Feng, Manyuan Zhang, Shuang Chen, Yunlong Lin, Kaixuan Fan, Yilei Jian... • 2026-03-30
On-the-fly Repulsion in the Contextual Space for Rich Diversity in Diffusion Transformers
Omer Dahary, Benaya Koren, Daniel Garibi, Daniel Cohen-Or • 2026-03-30
Graphilosophy: Graph-Based Digital Humanities Computing with The Four Books
Minh-Thu Do, Quynh-Chau Le-Tran, Duc-Duy Nguyen-Mai, Thien-Trang Nguyen, Khan... • 2026-03-30
ParaSpeechCLAP: A Dual-Encoder Speech-Text Model for Rich Stylistic Language-Audio Pretraining
Anuj Diwan, Eunsol Choi, David Harwath • 2026-03-30
RAD-AI: Rethinking Architecture Documentation for AI-Augmented Ecosystems
Oliver Aleksander Larsen, Mahyar T. Moghaddam • 2026-03-30
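Entries like this one come from arXiv's public Atom API, which returns standard Atom XML with arXiv-specific extensions (DOI, primary category). As a minimal sketch, such an entry can be parsed with Python's standard library; the trimmed entry below is illustrative, with the namespace declarations (omitted in raw feed fragments) added so it parses standalone:

```python
# Parse an arXiv Atom <entry> fragment with the standard library.
# Namespace URIs are the standard Atom and arXiv schema URIs.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ARXIV = "http://arxiv.org/schemas/atom"


def parse_entry(xml_text: str) -> dict:
    """Extract id, title, authors, DOI, and primary category from one entry."""
    entry = ET.fromstring(xml_text)
    return {
        "id": entry.findtext(f"{{{ATOM}}}id"),
        "title": entry.findtext(f"{{{ATOM}}}title"),
        "authors": [a.findtext(f"{{{ATOM}}}name")
                    for a in entry.findall(f"{{{ATOM}}}author")],
        "doi": entry.findtext(f"{{{ARXIV}}}doi"),
        "primary_category": entry.find(f"{{{ARXIV}}}primary_category").get("term"),
    }


# Trimmed, illustrative entry based on the listing above.
example = """<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:arxiv="http://arxiv.org/schemas/atom">
  <id>http://arxiv.org/abs/2603.04917v1</id>
  <title>Roomify: Spatially-Grounded Style Transformation for Immersive Virtual Environments</title>
  <author><name>Xueyang Wang</name></author>
  <arxiv:doi>10.1145/3772318.3791803</arxiv:doi>
  <arxiv:primary_category term="cs.HC"/>
</entry>"""

meta = parse_entry(example)
print(meta["doi"])               # 10.1145/3772318.3791803
print(meta["primary_category"])  # cs.HC
```

Real responses wrap multiple `<entry>` elements in a `<feed>`; iterating `feed.findall(f"{{{ATOM}}}entry")` and applying the same function covers that case.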