Constructing a Portfolio Optimization Benchmark Framework for Evaluating Large Language Models

Authors

Hanyong Cho, Jang Ho Kim

Abstract

This study introduces a benchmark framework for evaluating the financial decision-making capabilities of large language models (LLMs) through portfolio optimization problems with mathematically explicit solutions. Unlike existing financial benchmarks that emphasize language-processing tasks, the proposed framework directly tests optimization-based reasoning in investment contexts. A large set of multiple-choice questions is generated by varying objectives, candidate assets, and investment constraints, with each problem designed to include a unique correct solution and systematically constructed alternatives. Experimental results comparing GPT-4, Gemini 1.5 Pro, and Llama 3.1-70B reveal distinct performance patterns: GPT achieves the highest accuracy in risk-based objectives and remains stable under constraints, Gemini performs well in return-based tasks but struggles under other conditions, and Llama records the lowest overall performance. These findings highlight both the potential and current limitations of LLMs in applying quantitative reasoning to finance, while providing a scalable foundation for developing LLM-based services in portfolio management.
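The construction described above — posing optimization problems with a unique, mathematically explicit solution and systematically perturbed alternatives — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a minimum-variance objective with its closed-form optimum w = Σ⁻¹1 / (1ᵀΣ⁻¹1), and the function names, distractor count, and perturbation scale are all hypothetical choices for the sketch.

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum-variance portfolio: w = cov^{-1} 1 / (1' cov^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def make_question(cov, rng, n_distractors=3, scale=0.05):
    """Build one multiple-choice item: the exact optimum plus perturbed distractors.

    Returns the shuffled answer options and the index of the correct one.
    """
    correct = min_variance_weights(cov)
    options = [correct]
    for _ in range(n_distractors):
        noise = rng.normal(scale=scale, size=correct.shape)
        distractor = np.clip(correct + noise, 0.0, None)
        distractor /= distractor.sum()          # keep weights summing to 1
        options.append(distractor)
    order = rng.permutation(len(options))       # shuffle answer positions
    answer = int(np.where(order == 0)[0][0])    # where the correct option landed
    return [options[i] for i in order], answer

# Example: three candidate assets with a hypothetical covariance matrix
rng = np.random.default_rng(0)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
choices, answer_idx = make_question(cov, rng)
```

Varying the covariance inputs, the objective (risk- vs. return-based), and added constraints over such templates is what lets a benchmark of this kind scale to a large question set while keeping every item's correct answer verifiable.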

Metadata

arXiv ID: 2603.09301
Provider: ARXIV
Primary Category: q-fin.PM
Published: 2026-03-10
Fetched: 2026-03-11 06:02
