Paper
Terms of (Ab)Use: An Analysis of GenAI Services
Authors
Harshvardhan J. Pandit, Dick A. H. Blankvoort, Sasha Luccioni, Abeba Birhane
Abstract
Generative AI services like ChatGPT and Gemini are some of the fastest-growing consumer services. Individuals using such services must accept their terms of use before access, and conform to these terms for continued use of the service. Established literature has shown that despite their status as legally binding agreements, terms of use are not actually well understood, and may contain implications that are surprising for consumers. In this paper, we analyse the terms of six generative AI services from the perspective of an EU-based consumer. Our findings, based on a codebook we developed and provide in the paper, reiterate known issues regarding generative AI services, such as the default use of user data for training, and surface new concerns regarding responsibility, liability, and rights. All terms in our analysis contained language that explicitly disclaims assurances regarding the quality, availability and appropriateness of the service, regardless of whether the service is free or paid. The terms also make users solely responsible for ensuring outputs meet norms dictated by the provider, despite providing no information about or control over the functioning of the model, and at the risk of account termination. The terms further restrict how users may use outputs, while service providers utilise both user-provided inputs and user-liable outputs for a wide variety of purposes at their discretion. The implications of these practices are severe: we find consumers suffer from a lack of necessary information and a significant imbalance of power, and bear responsibilities they cannot materially fulfil without violating the terms. To remedy this situation, we make concrete recommendations for authorities and policymakers to urgently upgrade existing consumer protection mechanisms to tackle this growing issue.
Related papers
Vibe Coding XR: Accelerating AI + XR Prototyping with XR Blocks and Gemini
Ruofei Du, Benjamin Hersh, David Li, Nels Numan, Xun Qian, Yanhe Chen, Zhongy... • 2026-03-25
Comparing Developer and LLM Biases in Code Evaluation
Aditya Mittal, Ryan Shar, Zichu Wu, Shyam Agarwal, Tongshuang Wu, Chris Donah... • 2026-03-25
The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence
Biplab Pal, Santanu Bhattacharya • 2026-03-25
Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA
Saahil Mathur, Ryan David Rittner, Vedant Ajit Thakur, Daniel Stuart Schiff, ... • 2026-03-25
MARCH: Multi-Agent Reinforced Self-Check for LLM Hallucination
Zhuo Li, Yupeng Zhang, Pengyu Cheng, Jiajun Song, Mengyu Zhou, Hao Li, Shujie... • 2026-03-25
Metadata
arXiv: 2603.18964v1 • cs.CY • Published 2026-03-19 • Peer-reviewed, to be presented at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) 2026