AI LLM March 25, 2026

Forensic Implications of Localized AI: Artifact Analysis of Ollama, LM Studio, and llama.cpp

Authors

Shariq Murtuza

Abstract

The proliferation of local Large Language Model (LLM) runners, such as Ollama, LM Studio and llama.cpp, presents a new challenge for digital forensics investigators. These tools enable users to deploy powerful AI models in an offline manner, creating a potential evidentiary blind spot for investigators. This work presents a systematic, cross-platform forensic analysis of these popular local LLM clients. Through controlled experiments on Windows and Linux operating systems, we acquired and analyzed disk and memory artifacts, documenting installation footprints, configuration files, model caches, prompt histories and network activity. Our experiments uncovered a rich set of previously undocumented artifacts for each application, revealing significant differences in evidence persistence and location based on application architecture. Key findings include the recovery of plaintext prompt histories in structured JSON files, detailed model usage logs and unique file signatures suitable for forensic detection. This research provides a foundational corpus of digital evidence for local LLMs, offering forensic investigators reproducible methodologies and practical triage commands to analyze this new class of software. The findings have critical implications for user privacy, the admissibility of AI-related evidence and the development of anti-forensic techniques.
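The abstract mentions unique file signatures suitable for forensic detection. As an illustrative sketch (not taken from the paper), GGUF model files, the on-disk format produced by llama.cpp and consumed by tools in this ecosystem, begin with the four-byte magic number `GGUF`, so a simple triage script can flag candidate model files by header rather than by extension:

```python
import os

# GGUF files begin with the ASCII bytes "GGUF" (per the GGUF format spec).
GGUF_MAGIC = b"GGUF"

def is_gguf(path):
    """Return True if the file starts with the GGUF magic number."""
    try:
        with open(path, "rb") as f:
            return f.read(4) == GGUF_MAGIC
    except OSError:
        # Unreadable files (permissions, special files) are skipped.
        return False

def find_gguf_files(root):
    """Walk a directory tree and yield paths of files bearing the GGUF signature."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if is_gguf(path):
                yield path
```

Because the check reads only the first four bytes, it remains cheap even when sweeping large evidence images; the default artifact directories documented in the paper would be natural roots to scan.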

Metadata

arXiv ID: 2603.23996
Provider: ARXIV
Primary Category: cs.CR
Published: 2026-03-25
Fetched: 2026-03-26 06:02
