AI LLM February 25, 2026

A task-based data-flow methodology for programming heterogeneous systems with multiple accelerator APIs

Authors

Aleix Boné, Alejandro Aguirre, David Álvarez, Pedro J. Martinez-Ferrer, Vicenç Beltran

Abstract

Heterogeneous nodes that combine multi-core CPUs with diverse accelerators are rapidly becoming the norm in both high-performance computing (HPC) and AI infrastructures. Exploiting these platforms, however, requires orchestrating several low-level accelerator APIs such as CUDA, SYCL, and Triton, which can in some cases be combined with optimized vendor math libraries such as cuBLAS and oneAPI. Each API or library introduces its own abstractions, execution semantics, and synchronization mechanisms, so combining them within a single application is error-prone and labor-intensive. We propose reusing a task-based data-flow methodology together with Task-Aware APIs (TA-libs) to overcome these limitations and enable the seamless integration of multiple accelerator programming models, while still leveraging the best-in-class kernels offered by each API. Applications are expressed as a directed acyclic graph (DAG) of host tasks and device kernels managed by an OpenMP/OmpSs-2 runtime. We introduce Task-Aware SYCL (TASYCL) and leverage Task-Aware CUDA (TACUDA), which elevate individual accelerator invocations to first-class tasks. When multiple native runtimes coexist on the same multi-core CPU, they contend for threads, leading to oversubscription and performance variability. To address this, we unify their thread management under the nOS-V tasking and threading library, to which we contribute a new port of the PoCL (Portable OpenCL) runtime. Our results demonstrate that task-aware libraries, coupled with nOS-V, enable a single application to harness multiple accelerator programming models transparently and efficiently. The proposed methodology is immediately applicable to current heterogeneous nodes and readily extensible to future systems that integrate even richer combinations of CPUs, GPUs, FPGAs, and AI accelerators.

Metadata

arXiv ID: 2602.21897
Provider: ARXIV
Primary Category: cs.DC
Published: 2026-02-25
Fetched: 2026-02-26 05:00


DOI: 10.1016/j.future.2026.108383
Journal reference: Future Generation Computer Systems, Volume 180, July 2026, 108383
Comment: 13 pages, 8 figures