AI · LLM · March 24, 2026

Can Large Language Models Reason and Optimize Under Constraints?

Authors

Fabien Bernier, Salah Ghamizi, Pantelis Dogoulis, Maxime Cordy

Abstract

Large Language Models (LLMs) have demonstrated strong capabilities across diverse natural language tasks, yet their ability to solve abstraction and optimization problems under constraints remains scarcely explored. In this paper, we investigate whether LLMs can reason and optimize under the physical and operational constraints of the Optimal Power Flow (OPF) problem. We introduce a challenging evaluation setup that requires a set of fundamental skills such as reasoning, structured input handling, arithmetic, and constrained optimization. Our evaluation reveals that state-of-the-art (SoTA) LLMs fail at most of the tasks, and that even reasoning LLMs still fail in the most complex settings. Our findings highlight critical gaps in LLMs' ability to handle structured reasoning under constraints, and this work provides a rigorous testing environment for developing more capable LLM assistants that can tackle real-world power grid optimization problems.
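
For context, the Optimal Power Flow task named in the abstract is, in its standard AC form, a nonconvex constrained optimization problem. The paper's exact task variants are not reproduced on this page, but the canonical textbook formulation reads:

\begin{aligned}
\min_{P_G,\,Q_G,\,|V|,\,\theta} \quad & \sum_{g \in \mathcal{G}} c_g(P_{G,g}) \\
\text{s.t.} \quad
& P_{G,i} - P_{D,i} = \sum_{j} |V_i|\,|V_j| \left( G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij} \right) \quad \forall i \\
& Q_{G,i} - Q_{D,i} = \sum_{j} |V_i|\,|V_j| \left( G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij} \right) \quad \forall i \\
& P_{G,g}^{\min} \le P_{G,g} \le P_{G,g}^{\max}, \qquad Q_{G,g}^{\min} \le Q_{G,g} \le Q_{G,g}^{\max} \quad \forall g \\
& V_i^{\min} \le |V_i| \le V_i^{\max} \quad \forall i, \qquad |S_{ij}| \le S_{ij}^{\max} \quad \forall (i,j)
\end{aligned}

where \theta_{ij} = \theta_i - \theta_j and G + jB is the bus admittance matrix: the solver picks a generation dispatch that minimizes cost while satisfying per-bus power balance and all operating limits.

The linearized DC approximation drops reactive power and voltage magnitudes, leaving a linear program, which is how ground-truth solutions for this kind of evaluation can be generated cheaply. Below is a minimal sketch on a hypothetical 3-bus network (illustrative numbers only, not the paper's benchmark), using scipy.optimize.linprog:

# Minimal DC-OPF sketch (illustrative; not the paper's benchmark code).
# 3 buses: generators at buses 1 and 2, a 150 MW load at bus 3;
# lines 1-2, 1-3, 2-3, each with susceptance 10 and a 100 MW flow limit.
# Variables x = [Pg1, Pg2, th2, th3]; bus 1 is the slack (th1 = 0).
import numpy as np
from scipy.optimize import linprog

c = [10.0, 20.0, 0.0, 0.0]  # $/MW per generator; angles carry no cost

# Nodal balance (DC): Pg_i - Pd_i = sum_j b_ij * (th_i - th_j)
A_eq = np.array([
    [1.0, 0.0,  10.0,  10.0],   # bus 1: Pg1 = -10*th2 - 10*th3
    [0.0, 1.0, -20.0,  10.0],   # bus 2: Pg2 =  20*th2 - 10*th3
    [0.0, 0.0,  10.0, -20.0],   # bus 3: serves the 150 MW load
])
b_eq = np.array([0.0, 0.0, 150.0])

# Thermal limits |b_ij * (th_i - th_j)| <= 100, written as pairs of <= rows
A_ub = np.array([
    [0, 0, -10,   0], [0, 0,  10,   0],   # line 1-2
    [0, 0,   0, -10], [0, 0,   0,  10],   # line 1-3
    [0, 0,  10, -10], [0, 0, -10,  10],   # line 2-3
])
b_ub = np.full(6, 100.0)

bounds = [(0, 100), (0, 100), (None, None), (None, None)]  # gen limits; angles free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.x[:2], res.fun)  # dispatch [100., 50.] at total cost 2000.0

Even this linear variant exercises the skills the abstract lists: parsing structured network data, per-bus arithmetic, and keeping every equality and inequality satisfied simultaneously; the full AC problem adds nonconvexity on top.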

Metadata

arXiv ID: 2603.23004
Provider: ARXIV
Categories: cs.AI (primary), cs.LG
PDF: https://arxiv.org/pdf/2603.23004v1
Published: 2026-03-24
Fetched: 2026-03-25 06:02
