Research Paper

March 03, 2026

Embedding interpretable $\ell_1$-regression into neural networks for uncovering temporal structure in cell imaging

Authors

Fabian Kabus, Maren Hackenberg, Julia Hindel, Thibault Cholvin, Antje Kilias, Thomas Brox, Abhinav Valada, Marlene Bartos, Harald Binder

Abstract

While artificial neural networks excel at unsupervised learning of non-sparse structure, classical statistical regression techniques offer better interpretability, in particular when sparsity is enforced by $\ell_1$ regularization, enabling identification of which factors drive observed dynamics. We investigate how these two types of approaches can be optimally combined, considering as an example two-photon calcium imaging data from which sparse autoregressive dynamics are to be extracted. We propose embedding a vector autoregressive (VAR) model as an interpretable regression technique into a convolutional autoencoder, which provides dimension reduction for tractable temporal modeling. A skip connection separately handles non-sparse static spatial information, selectively channeling sparse structure into the $\ell_1$-regularized VAR. $\ell_1$-estimation of the regression parameters is enabled by differentiating through the piecewise linear solution path. This is contrasted with approaches where the autoencoder does not adapt to the VAR model. Having an embedded statistical model also enables a testing approach for comparing temporal sequences from the same observational unit. Additionally, contribution maps visualize which spatial regions drive the learned dynamics.
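To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of the statistical component described in the abstract: an $\ell_1$-regularized VAR(1) model fit on low-dimensional latent trajectories, as one might obtain after an autoencoder has compressed the imaging frames. The latent series, the transition matrix, and the penalty strength are all hypothetical; scikit-learn's `Lasso` and `lars_path` stand in for the paper's embedded estimator.

```python
import numpy as np
from sklearn.linear_model import Lasso, lars_path

rng = np.random.default_rng(0)

# Hypothetical latent time series: T frames, d latent dimensions,
# generated from a sparse transition matrix A_true.
T, d = 500, 5
A_true = np.zeros((d, d))
A_true[0, 0], A_true[1, 0], A_true[3, 2] = 0.8, 0.5, -0.6  # sparse dynamics
Z = np.zeros((T, d))
for t in range(1, T):
    Z[t] = A_true @ Z[t - 1] + 0.2 * rng.standard_normal(d)

# A VAR(1) model is a regression problem: predict z_t from z_{t-1}.
X, Y = Z[:-1], Z[1:]
lasso = Lasso(alpha=0.01, fit_intercept=False)
A_hat = np.vstack([lasso.fit(X, Y[:, j]).coef_ for j in range(d)])

# The l1 penalty zeroes out most entries of the estimated transition
# matrix, so its nonzeros indicate which latent factors drive which
# dynamics -- the interpretability argument made in the abstract.
print("estimated transition matrix:\n", np.round(A_hat, 2))

# The lasso coefficient path is piecewise linear in the penalty,
# which is the property that makes differentiating through the
# solution path feasible; lars_path returns that path explicitly.
alphas, _, coef_path = lars_path(X, Y[:, 0], method="lasso")
print("path shape (features x breakpoints):", coef_path.shape)
```

Fitting one lasso regression per latent dimension recovers the rows of the transition matrix; the `lars_path` call at the end only illustrates the piecewise-linear structure that the paper exploits for gradient-based training, not how the differentiation itself is implemented.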

Metadata

arXiv ID: 2603.02899
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-03
Fetched: 2026-03-04 03:41
