AI LLM March 10, 2026

Cognitively Layered Data Synthesis for Domain Adaptation of LLMs to Space Situational Awareness

Authors

Ding Linghu, Cheng Wang, Da Fan, Wei Shi, Kaifeng Yin, Xiaoliang Xue, Fan Yang, Haiyi Ren, Cong Zhang

Abstract

Large language models (LLMs) demonstrate exceptional performance on general-purpose tasks; however, transferring them to complex engineering domains such as space situational awareness (SSA) remains challenging owing to insufficient structural alignment with mission chains, the absence of higher-order cognitive supervision, and poor correspondence between data quality criteria and engineering specifications. The core bottleneck is the construction of high-quality supervised fine-tuning (SFT) datasets. To this end, we propose BD-FDG (Bloom's Taxonomy-based Domain-specific Fine-tuning Data Generation), a framework that addresses incomplete knowledge coverage, shallow cognitive depth, and limited quality controllability through three mechanisms: structured knowledge organization, cognitively layered question modeling, and automated quality control. The framework uses a knowledge tree to ensure structured corpus coverage, designs a question generation scheme spanning nine categories and six cognitive levels from Remember to Create to produce samples with a continuous difficulty gradient, and applies a multidimensional scoring pipeline to enforce domain rigor and consistency. Using BD-FDG, we construct SSA-SFT, a domain dataset of approximately 230K samples, and fine-tune Qwen3-8B to obtain SSA-LLM-8B. Experiments show that SSA-LLM-8B achieves relative BLEU-1 improvements of 144% (no-think) and 176% (think) on the domain test set and a win rate of 82.21% over the baseline in arena comparisons, while largely preserving general benchmark performance (MMLU-Pro, MATH-500). These results validate SFT data construction driven by cognitive layering as an effective paradigm for complex engineering domains and provide a transferable framework for domain-specific LLM adaptation.
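The three mechanisms the abstract describes (knowledge-tree coverage, Bloom-layered question generation, and multidimensional quality scoring) could be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: every function name, the stub generator and scorer, and the 0.7 threshold are assumptions for the sake of the example.

```python
# Illustrative BD-FDG-style loop: for each knowledge-tree topic, generate one
# question per Bloom cognitive level, then keep only samples whose
# multidimensional scores clear a quality gate. All names are hypothetical.

from dataclasses import dataclass, field

BLOOM_LEVELS = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]

@dataclass
class Sample:
    topic: str
    level: str
    question: str
    answer: str
    scores: dict = field(default_factory=dict)

def quality_ok(scores: dict, threshold: float = 0.7) -> bool:
    """Multidimensional gate: every scoring dimension must clear the threshold."""
    return all(v >= threshold for v in scores.values())

def synthesize(topics, ask_llm, score_fn):
    """Emit one (question, answer) pair per Bloom level per topic, score-filtered."""
    dataset = []
    for topic in topics:
        for level in BLOOM_LEVELS:
            question, answer = ask_llm(topic, level)  # prompt conditioned on the level
            scores = score_fn(question, answer)
            if quality_ok(scores):
                dataset.append(Sample(topic, level, question, answer, scores))
    return dataset

# Stubs standing in for real LLM generation and scoring calls.
def demo_llm(topic, level):
    return f"[{level}] question about {topic}", f"answer covering {topic}"

def demo_scorer(question, answer):
    return {"accuracy": 0.9, "relevance": 0.8, "consistency": 0.85}

data = synthesize(["orbit determination", "conjunction assessment"], demo_llm, demo_scorer)
print(len(data))  # 2 topics x 6 levels, all passing the stub gate
```

Conditioning the generation prompt on the Bloom level is what yields the "continuous difficulty gradient" the abstract mentions; in the real pipeline the scorer would itself be model-based rather than a constant stub.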

Metadata

arXiv ID: 2603.09231
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-10
Fetched: 2026-03-11 06:02
