Paper
ShadAR: LLM-driven shader generation to transform visual perception in Augmented Reality
Authors
Yanni Mei, Samuel Wendt, Florian Mueller, Jan Gugenheimer
Abstract
Augmented Reality (AR) can simulate various visual perceptions, such as how individuals with colorblindness see the world. However, these simulations require developers to predefine each visual effect, limiting flexibility. We present ShadAR, an AR application enabling real-time transformation of visual perception through shader generation using large language models (LLMs). ShadAR allows users to express their visual intent via natural language, which an LLM interprets to generate corresponding shader code. The shader is then compiled in real time to modify the AR headset viewport. We present our LLM-driven shader generation pipeline and demonstrate its ability to transform visual perception for inclusiveness and creativity.
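The pipeline the abstract describes (natural-language intent in, LLM-generated shader out, compiled live over the camera view) could look roughly like the sketch below. Everything here is an assumption: the paper does not name its LLM, prompt, or graphics stack, so an OpenAI-style chat API and moderngl (desktop OpenGL) stand in for the headset's render pass, and the model name is a placeholder.

```python
# Hypothetical sketch of ShadAR's intent-to-shader loop; not the authors' code.
import moderngl
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are a GLSL expert. Given a description of a visual perception, "
    "return only a complete GLSL 330 fragment shader that samples "
    "'uniform sampler2D u_camera' at 'in vec2 v_uv' and writes 'out vec4 f_color'."
)

VERTEX_SRC = """
    #version 330
    in vec2 in_pos;
    out vec2 v_uv;
    void main() {
        v_uv = in_pos * 0.5 + 0.5;          // full-screen quad -> texture coords
        gl_Position = vec4(in_pos, 0.0, 1.0);
    }
"""

def generate_fragment_shader(client: OpenAI, intent: str) -> str:
    """Translate a natural-language visual intent (e.g. 'simulate deuteranopia')
    into GLSL fragment-shader source via the LLM."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the paper does not specify a model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": intent},
        ],
    )
    return resp.choices[0].message.content

def compile_viewport_program(ctx: moderngl.Context, fragment_src: str):
    """Compile the generated shader for the viewport pass. A compile error
    raises moderngl.Error, whose message could be fed back to the LLM as a
    repair prompt (a common pattern; whether ShadAR does this is not stated)."""
    return ctx.program(vertex_shader=VERTEX_SRC, fragment_shader=fragment_src)
```

On a headset, the compiled program would be re-bound each frame over the camera texture; the sketch omits that render loop and any validation of the LLM output before compilation.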
Metadata
arXiv: 2602.17481v1 [cs.HC]
Published: 2026-02-19
Journal reference: 2025 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Daejeon, Korea, Republic of, 2025, pp. 959-960
DOI: 10.1109/ISMAR-Adjunct68609.2025.00267 (https://doi.org/10.1109/ISMAR-Adjunct68609.2025.00267)
PDF: https://arxiv.org/pdf/2602.17481v1
Related papers
Vibe Coding XR: Accelerating AI + XR Prototyping with XR Blocks and Gemini
Ruofei Du, Benjamin Hersh, David Li, Nels Numan, Xun Qian, Yanhe Chen, Zhongy... • 2026-03-25
Comparing Developer and LLM Biases in Code Evaluation
Aditya Mittal, Ryan Shar, Zichu Wu, Shyam Agarwal, Tongshuang Wu, Chris Donah... • 2026-03-25
The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence
Biplab Pal, Santanu Bhattacharya • 2026-03-25
Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA
Saahil Mathur, Ryan David Rittner, Vedant Ajit Thakur, Daniel Stuart Schiff, ... • 2026-03-25
MARCH: Multi-Agent Reinforced Self-Check for LLM Hallucination
Zhuo Li, Yupeng Zhang, Pengyu Cheng, Jiajun Song, Mengyu Zhou, Hao Li, Shujie... • 2026-03-25