March 03, 2026

Joint Training Across Multiple Activation Sparsity Regimes

Authors

Haotian Wang

Abstract

Generalization in deep neural networks remains only partially understood. Inspired by the stronger generalization tendency of biological systems, we explore the hypothesis that robust internal representations should remain effective across both dense and sparse activation regimes. To test this idea, we introduce a simple training strategy that applies global top-k constraints to hidden activations and repeatedly cycles a single model through multiple activation budgets via progressive compression and periodic reset. Using CIFAR-10 without data augmentation and a WRN-28-4 backbone, we find in single-run experiments that two adaptive keep-ratio control strategies both outperform dense baseline training. These preliminary results suggest that joint training across multiple activation sparsity regimes may provide a simple and effective route to improved generalization.
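The abstract does not spell out implementation details, but the core mechanism it names, a global top-k constraint on hidden activations controlled by a keep ratio, can be sketched as follows. This is a minimal illustration under assumed conventions (PyTorch-style tensors, per-sample top-k by magnitude); the function name `global_topk_mask` and the exact masking rule are hypothetical, not the authors' code.

```python
import torch

def global_topk_mask(activations: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Apply a global top-k constraint per sample: keep only the largest-magnitude
    hidden activations (a `keep_ratio` fraction of all units) and zero the rest."""
    flat = activations.flatten(start_dim=1)                 # (batch, num_units)
    k = max(1, int(keep_ratio * flat.shape[1]))             # activation budget
    # Per-sample threshold: magnitude of the k-th largest activation
    thresholds = flat.abs().topk(k, dim=1).values[:, -1:]   # (batch, 1)
    mask = (flat.abs() >= thresholds).to(flat.dtype)
    return (flat * mask).view_as(activations)
```

Under this reading, the "progressive compression and periodic reset" cycling described in the abstract would amount to gradually lowering `keep_ratio` during training and periodically resetting it to 1.0 (the dense regime), with the two adaptive keep-ratio control strategies governing how that schedule is set; the abstract does not specify the exact schedules.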

Metadata

arXiv ID: 2603.03131
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-03
Fetched: 2026-03-04 03:41
