Tags: AI, LLM · February 24, 2026

From Performance to Purpose: A Sociotechnical Taxonomy for Evaluating Large Language Model Utility

Authors

Gavin Levinson, Keith Feldman

Abstract

As large language models (LLMs) continue to improve at completing discrete tasks, they are being integrated into increasingly complex and diverse real-world systems. However, task-level success alone does not establish a model's fit for use in practice. In applied, high-stakes settings, LLM effectiveness is driven by a wider array of sociotechnical determinants that extend beyond conventional performance measures. Although a growing set of metrics captures many of these considerations, they are rarely organized in a way that supports consistent evaluation, leaving no unified taxonomy for assessing and comparing LLM utility across use cases. To address this gap, we introduce the Language Model Utility Taxonomy (LUX), a comprehensive framework that structures utility evaluation across four domains: performance, interaction, operations, and governance. Within each domain, LUX is organized hierarchically into thematically aligned dimensions and components, each grounded in metrics that enable quantitative comparison and alignment of model selection with intended use. In addition, an external dynamic web tool is provided to support exploration of the framework by connecting each component to a repository of relevant metrics (factors) for applied evaluation.

Metadata

arXiv ID: 2602.20513
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-02-24
Fetched: 2026-02-25 06:05
