March 20, 2026

Interpreting Reinforcement Learning Model Behavior via Koopman with Control

Authors

William T. Redman

Abstract

Reinforcement learning (RL) models have shown the capability of learning complex behaviors, but quantitatively assessing those behaviors - which is critical for safety assurance and the discovery of novel strategies - is challenging. By viewing RL models as control systems, we hypothesize that data-driven approximations of their associated Koopman operators may provide dynamical information about their behavior, thus enabling greater interpretability. To test this, we apply the Koopman with control framework to RL models trained on several standard benchmark environments and demonstrate that properties of the fitted linear control models, such as stability and controllability, evolve during training in a task-dependent manner. Comparing these metrics across different training epochs, or across differently optimized RL models, enables an understanding of how they differ. In addition, we find cases where - even when the reward achieved by the RL model is static - the stability and controllability are nonetheless evolving, predicting increased reward with further training. This suggests that these metrics may be able to serve as hidden progress measures, a core idea in mechanistic interpretability. Taken together, our results illustrate that the Koopman with control framework provides a comprehensive way to analyze and interpret the behavior of RL models, particularly across training.
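This listing does not include the paper's code. As a rough illustration of the kind of analysis the abstract describes, the sketch below fits a Koopman-with-control style linear model (in the spirit of dynamic mode decomposition with control, DMDc) to rollout data and computes the stability and controllability quantities mentioned above. All names (fit_dmdc, the snapshot matrices X, X_next, U) and the synthetic data are illustrative assumptions, not the author's implementation.

    # Minimal sketch of a Koopman-with-control (DMDc-style) analysis,
    # assuming access to state/action trajectories from a trained RL policy.
    # All names and the synthetic data below are illustrative, not from the paper.
    import numpy as np

    def fit_dmdc(X, X_next, U):
        """Fit a linear control model x_{t+1} ~= A x_t + B u_t via least squares.

        X, X_next : (n_states, T) snapshot matrices of successive states
        U         : (n_inputs, T) matrix of control inputs (policy actions)
        """
        Omega = np.vstack([X, U])            # stacked state-input data
        G = X_next @ np.linalg.pinv(Omega)   # least-squares operator [A B]
        n = X.shape[0]
        return G[:, :n], G[:, n:]            # A (n x n), B (n x n_inputs)

    def stability(A):
        """Spectral radius of A; < 1 indicates a stable discrete-time model."""
        return np.max(np.abs(np.linalg.eigvals(A)))

    def controllability_rank(A, B):
        """Rank of the controllability matrix [B, AB, ..., A^{n-1} B]."""
        n = A.shape[0]
        C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
        return np.linalg.matrix_rank(C)

    # Example with random rollout data standing in for environment trajectories:
    rng = np.random.default_rng(0)
    X = rng.standard_normal((4, 200))
    U = rng.standard_normal((1, 200))
    X_next = (0.9 * X
              + 0.1 * rng.standard_normal((4, 1)) @ U
              + 0.01 * rng.standard_normal((4, 200)))
    A, B = fit_dmdc(X, X_next, U)
    print(stability(A), controllability_rank(A, B))

In such a linear model, a spectral radius below 1 indicates stable fitted dynamics, and a full-rank controllability matrix indicates the inputs can steer the modeled state; the paper tracks how quantities like these evolve over the course of RL training.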

Metadata

arXiv ID: 2603.19968
Provider: ARXIV
Primary Category: math.OC
Published: 2026-03-20
Comments: 6 pages, 5 figures, comments welcome!
Links: https://arxiv.org/abs/2603.19968v1 (abstract), https://arxiv.org/pdf/2603.19968v1 (PDF)
Fetched: 2026-03-23 16:54
