The Problem
You're in the middle of discussing a complex architectural refactor with Claude when your laptop battery dies. A few minutes later, you plug in the charger and type `claude --resume` — the entire previous conversation is fully restored, including Claude's analysis, file modifications already executed, and unfinished steps. You can even continue the session on a different machine via a session URL.
This isn't magic — it's the work of Claude Code's session persistence system. It needs to solve several core problems:
- How can a live conversation be reliably written to disk without losing data?
- How can resumption skip already-compacted history and load only the necessary context?
- How can sessions be found and resumed across different project directories and devices?
This article provides an in-depth analysis of the complete session persistence mechanism.
Session Storage Architecture
Storage Location

```
~/.claude/projects/{sanitized-path}/{session-id}.jsonl
```

Read Path

- head/tail lightweight read
- compact_boundary truncation
- JSONL line-by-line parsing
Path Sanitization
Session files are stored under ~/.claude/projects/, with subdirectory names derived from the project path. Path sanitization ensures correct handling on any operating system:
```typescript
export function sanitizePath(name: string): string {
  const sanitized = name.replace(/[^a-zA-Z0-9]/g, '-')
  if (sanitized.length <= MAX_SANITIZED_LENGTH) {
    return sanitized
  }
  const hash =
    typeof Bun !== 'undefined' ? Bun.hash(name).toString(36) : simpleHash(name)
  return `${sanitized.slice(0, MAX_SANITIZED_LENGTH)}-${hash}`
}
```
All non-alphanumeric characters are replaced with hyphens. For deeply nested paths (exceeding 200 characters), the name is truncated and a hash suffix is appended to ensure uniqueness. There's a subtle compatibility issue here — the CLI uses Bun.hash when running in the Bun runtime, while the SDK uses djb2Hash under Node.js, producing different directory suffixes for very long paths. findProjectDir resolves this through a prefix-matching fallback:
```typescript
export async function findProjectDir(
  projectPath: string,
): Promise<string | undefined> {
  const exact = getProjectDir(projectPath)
  try {
    await readdir(exact)
    return exact
  } catch {
    const sanitized = sanitizePath(projectPath)
    if (sanitized.length <= MAX_SANITIZED_LENGTH) {
      return undefined
    }
    const prefix = sanitized.slice(0, MAX_SANITIZED_LENGTH)
    const projectsDir = getProjectsDir()
    try {
      const dirents = await readdir(projectsDir, { withFileTypes: true })
      const match = dirents.find(
        d => d.isDirectory() && d.name.startsWith(prefix + '-'),
      )
      return match ? join(projectsDir, match.name) : undefined
    } catch {
      return undefined
    }
  }
}
```
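The Node-side fallback is described as a djb2-style hash. As a rough illustration only (the SDK's actual simpleHash may differ in detail), a djb2 hash rendered in base-36 looks like this:

```typescript
// Illustrative djb2-style hash sketch — NOT the SDK's actual simpleHash.
// djb2 accumulates h = h * 33 + charCode over the string.
export function djb2Base36(input: string): string {
  let h = 5381
  for (let i = 0; i < input.length; i++) {
    h = ((h << 5) + h + input.charCodeAt(i)) | 0 // (h << 5) + h === h * 33, kept in 32 bits
  }
  return (h >>> 0).toString(36) // unsigned, base-36 — same rendering as Bun.hash(name).toString(36)
}
```

The point is not this specific function but that the two runtimes disagree on the hash, which is why findProjectDir cannot trust the suffix and falls back to prefix matching.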
Session Listing: Two-Phase Scan
When listing sessions, the system must balance performance against completeness. A project directory might contain thousands of session files — reading the full contents of each would be prohibitively expensive.
```typescript
export async function listSessionsImpl(
  options?: ListSessionsOptions,
): Promise<SessionInfo[]> {
  const { dir, limit, offset, includeWorktrees } = options ?? {}
  const off = offset ?? 0
  const doStat = (limit !== undefined && limit > 0) || off > 0

  const candidates = dir
    ? await gatherProjectCandidates(dir, includeWorktrees ?? true, doStat)
    : await gatherAllCandidates(doStat)

  if (!doStat) return readAllAndSort(candidates)
  return applySortAndLimit(candidates, limit, off)
}
```
When limit or offset is specified, a two-phase strategy is used:
- stat-only scan — reads only file metadata (mtime), one syscall per file
- on-demand content read — after sorting, reads only the head/tail of the top N candidates
This means limit: 20 in a directory with 1000 sessions performs ~1000 stat calls + ~20 content reads, rather than 1000 content reads.
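The selection step of phase two can be sketched as a plain sort-and-slice over the stat results (names here are illustrative, not the actual helpers):

```typescript
// Illustrative sketch of phase-two selection: sort stat-only candidates by
// mtime (newest first), then keep only the page whose content will be read.
interface StatCandidate { path: string; mtimeMs: number }

export function selectForContentRead(
  candidates: StatCandidate[],
  limit: number,
  offset = 0,
): StatCandidate[] {
  return [...candidates]
    .sort((a, b) => b.mtimeMs - a.mtimeMs) // most recently modified first
    .slice(offset, offset + limit)         // only these get a content read
}
```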
Lightweight Metadata Extraction
Metadata for each session file is obtained through head/tail reads — no need to parse the entire file:
```typescript
export async function readHeadAndTail(
  filePath: string,
  fileSize: number,
  buf: Buffer,
): Promise<{ head: string; tail: string }> {
  try {
    const fh = await fsOpen(filePath, 'r')
    try {
      const headResult = await fh.read(buf, 0, LITE_READ_BUF_SIZE, 0)
      if (headResult.bytesRead === 0) return { head: '', tail: '' }

      const head = buf.toString('utf8', 0, headResult.bytesRead)

      const tailOffset = Math.max(0, fileSize - LITE_READ_BUF_SIZE)
      let tail = head
      if (tailOffset > 0) {
        const tailResult = await fh.read(buf, 0, LITE_READ_BUF_SIZE, tailOffset)
        tail = buf.toString('utf8', 0, tailResult.bytesRead)
      }

      return { head, tail }
    } finally {
      await fh.close()
    }
  } catch {
    return { head: '', tail: '' }
  }
}
```
LITE_READ_BUF_SIZE is 64KB — the file head provides session creation information, while the file tail provides the latest state (title, branch, tags, etc.). Metadata fields are extracted directly from the JSON text using regex, without full JSON parsing:
```typescript
export function extractJsonStringField(
  text: string,
  key: string,
): string | undefined {
  const patterns = [`"${key}":"`, `"${key}": "`]
  for (const pattern of patterns) {
    const idx = text.indexOf(pattern)
    if (idx < 0) continue
    const valueStart = idx + pattern.length
    let i = valueStart
    while (i < text.length) {
      if (text[i] === '\\') { i += 2; continue }
      if (text[i] === '"') {
        return unescapeJsonString(text.slice(valueStart, i))
      }
      i++
    }
  }
  return undefined
}
```
This "pattern matching rather than full parsing" approach also works on truncated JSONL lines (where the file tail cuts off mid-line).
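To see why truncation is harmless, here is a self-contained miniature of the same scanning approach (unescaping simplified to the common cases; the real helper handles the full JSON escape set) run against a line whose tail was cut off:

```typescript
// Miniature of the scan-for-closing-quote extraction. Illustrative only:
// this sketch unescapes just \" and \n.
export function extractField(text: string, key: string): string | undefined {
  for (const pattern of [`"${key}":"`, `"${key}": "`]) {
    const idx = text.indexOf(pattern)
    if (idx < 0) continue
    const start = idx + pattern.length
    for (let i = start; i < text.length; i++) {
      if (text[i] === '\\') { i++; continue } // skip the escaped character
      if (text[i] === '"') {
        return text.slice(start, i).replace(/\\"/g, '"').replace(/\\n/g, '\n')
      }
    }
  }
  return undefined
}

// A JSONL line cut off mid-object by the 64KB tail window:
const truncated = '{"sessionId":"abc","customTitle":"Refactor plan","mess'
```

Here `extractField(truncated, 'customTitle')` still recovers "Refactor plan", while a key whose value was cut off simply returns undefined rather than throwing — exactly the behavior a tail read needs.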
Session Info Assembly
```typescript
export function parseSessionInfoFromLite(
  sessionId: string,
  lite: LiteSessionFile,
  projectPath?: string,
): SessionInfo | null {
  const { head, tail, mtime, size } = lite

  // Filter out sidechain sessions (internal sessions of sub-agents)
  const firstNewline = head.indexOf('\n')
  const firstLine = firstNewline >= 0 ? head.slice(0, firstNewline) : head
  if (firstLine.includes('"isSidechain":true')) {
    return null
  }

  // Title priority: customTitle > aiTitle > lastPrompt > firstPrompt
  const customTitle =
    extractLastJsonStringField(tail, 'customTitle') ||
    extractLastJsonStringField(head, 'customTitle') ||
    extractLastJsonStringField(tail, 'aiTitle') ||
    extractLastJsonStringField(head, 'aiTitle') ||
    undefined

  // (firstPrompt is extracted from the head earlier; elided here)
  const summary =
    customTitle ||
    extractLastJsonStringField(tail, 'lastPrompt') ||
    extractLastJsonStringField(tail, 'summary') ||
    firstPrompt

  // No title or summary — skip metadata-only sessions
  if (!summary) return null
  // ...
}
```
Session Resumption: Compact Boundary Truncation
For long-running sessions (5MB+), loading all messages in full is both inefficient and unnecessary — auto-compact has already compressed old messages into summaries. readTranscriptForLoad finds the last compact_boundary marker at the file level and loads only the messages that follow it:
```typescript
export async function readTranscriptForLoad(
  filePath: string,
  fileSize: number,
): Promise<{
  boundaryStartOffset: number
  postBoundaryBuf: Buffer
  hasPreservedSegment: boolean
}> {
  // ... (scan state `s` is initialized here; elided)
  const chunk = Buffer.allocUnsafe(CHUNK_SIZE)
  const fd = await fsOpen(filePath, 'r')
  try {
    let filePos = 0
    while (filePos < fileSize) {
      const { bytesRead } = await fd.read(
        chunk, 0,
        Math.min(CHUNK_SIZE, fileSize - filePos),
        filePos,
      )
      if (bytesRead === 0) break
      filePos += bytesRead
      // processStraddle + scanChunkLines handle lines spanning chunk boundaries
    }
    finalizeOutput(s)
  } finally {
    await fd.close()
  }
}
```
This function's design has several noteworthy details:
- 1MB chunked reads — large files are never loaded into memory in full
- attribution-snapshot filtering — snapshots are dropped during the raw read, retaining only the last one
- compact_boundary truncation — upon encountering a new boundary, all prior output is discarded
- preservedSegment detection — preserved segments are message fragments marked as important during compaction
[Diagram: loading a 24MB raw JSONL file — the ~8MB of intermediate messages before the last compact boundary are skipped, so only the final segment becomes the loaded result.]
Cross-Chunk Line Handling
A single line in a JSONL file may span a 1MB chunk boundary. processStraddle handles this case:
```typescript
// Simplified representation — processStraddle logic:
// The incomplete line from the previous chunk is saved in carryBuf.
// The first \n in the current chunk completes the line.
// It then determines whether the line is an attr-snap (skip) or a boundary (truncate).
```
This streaming approach ensures peak memory usage equals the output size rather than the file size — for a 24MB session file where only 3MB of data follows the last boundary, memory usage is only ~3MB.
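A minimal way to express the carry-buffer idea (illustrative only — the real processStraddle works on Buffers, which also avoids splitting multi-byte UTF-8 sequences):

```typescript
// Illustrative carry-buffer line splitter: whatever follows the last \n in
// a chunk is held back and prepended to the next chunk.
export function splitChunkLines(
  chunk: string,
  carry: string,
): { lines: string[]; carry: string } {
  const parts = (carry + chunk).split('\n')
  // The last element is '' if the chunk ended exactly on \n, otherwise an
  // incomplete line that must wait for more data.
  const nextCarry = parts.pop() ?? ''
  return { lines: parts, carry: nextCarry }
}
```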
Prompt History
Unlike session messages (which record the complete conversation), prompt history only records user inputs — used for Up-arrow and Ctrl+R search.
```typescript
let pendingEntries: LogEntry[] = []
let isWriting = false
let currentFlushPromise: Promise<void> | null = null
let cleanupRegistered = false
```
History writes are asynchronous and batched — new entries first enter the pendingEntries buffer, then are periodically flushed to ~/.claude/history.jsonl in the background.
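The pattern is the classic buffer-and-timer batcher. A generic sketch (not the actual implementation — the names, delay, and flush trigger here are assumptions):

```typescript
// Generic buffer-and-flush sketch: entries accumulate in memory; a single
// timer (or an explicit flushNow, e.g. on process exit) writes the whole
// batch in one append.
type Entry = { display: string; timestamp: number }

export function makeBatcher(
  flush: (batch: Entry[]) => Promise<void>,
  delayMs = 1000,
) {
  let pending: Entry[] = []
  let timer: ReturnType<typeof setTimeout> | null = null

  async function flushNow(): Promise<void> {
    if (timer) { clearTimeout(timer); timer = null }
    const batch = pending
    pending = []                      // swap the buffer before awaiting
    if (batch.length > 0) await flush(batch)
  }

  return {
    add(entry: Entry) {
      pending.push(entry)
      if (!timer) timer = setTimeout(() => void flushNow(), delayMs)
    },
    flushNow,
  }
}
```

Swapping the buffer before awaiting the write means entries added mid-flush land in the next batch rather than being lost or double-written.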
Concurrency Safety
Multiple Claude sessions may write to the same history file simultaneously. The system uses file locking to ensure safety:
```typescript
async function immediateFlushHistory(): Promise<void> {
  if (pendingEntries.length === 0) return

  let release
  try {
    const historyPath = join(getClaudeConfigHomeDir(), 'history.jsonl')
    await writeFile(historyPath, '', { encoding: 'utf8', mode: 0o600, flag: 'a' })

    release = await lock(historyPath, {
      stale: 10000,
      retries: { retries: 3, minTimeout: 50 },
    })

    const jsonLines = pendingEntries.map(entry => jsonStringify(entry) + '\n')
    pendingEntries = []

    await appendFile(historyPath, jsonLines.join(''), { mode: 0o600 })
  } catch (error) {
    logForDebugging(`Failed to write prompt history: ${error}`)
  } finally {
    if (release) { await release() }
  }
}
```
Note the file permissions 0o600 — only the owner can read and write, protecting the privacy of user inputs.
History Deduplication and Ordering
```typescript
export async function* getHistory(): AsyncGenerator<HistoryEntry> {
  const currentProject = getProjectRoot()
  const currentSession = getSessionId()
  const otherSessionEntries: LogEntry[] = []
  let yielded = 0

  for await (const entry of makeLogEntryReader()) {
    if (!entry || typeof entry.project !== 'string') continue
    if (entry.project !== currentProject) continue

    if (entry.sessionId === currentSession) {
      yield await logEntryToHistoryEntry(entry)
      yielded++
    } else {
      otherSessionEntries.push(entry)
    }

    if (yielded + otherSessionEntries.length >= MAX_HISTORY_ITEMS) break
  }

  for (const entry of otherSessionEntries) {
    if (yielded >= MAX_HISTORY_ITEMS) return
    yield await logEntryToHistoryEntry(entry)
    yielded++
  }
}
```
History entries from the current session take priority over those from other sessions — this prevents concurrent sessions from interleaving their history records. Up-arrow always shows the current session's inputs first.
History Undo
When a user presses Esc to cancel input before the AI responds, that input should be removed from history:
```typescript
export function removeLastFromHistory(): void {
  if (!lastAddedEntry) return
  const entry = lastAddedEntry
  lastAddedEntry = null

  const idx = pendingEntries.lastIndexOf(entry)
  if (idx !== -1) {
    pendingEntries.splice(idx, 1)
  } else {
    skippedTimestamps.add(entry.timestamp)
  }
}
```
The fast path removes the entry directly from the pending buffer. If the async flush has already written the entry to disk (time to first token is typically much larger than disk-write latency, but the race does occasionally occur), the timestamp is added to a skip-set and filtered out on the next read.
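The skip-set then acts as a read-side filter. An illustrative sketch (names assumed):

```typescript
// Read-side filtering sketch: undone entries are never rewritten out of the
// append-only file; they are simply dropped whenever history is read back.
const skippedTimestamps = new Set<number>()

export function markSkipped(timestamp: number): void {
  skippedTimestamps.add(timestamp)
}

export function filterSkipped<T extends { timestamp: number }>(
  entries: T[],
): T[] {
  return entries.filter(e => !skippedTimestamps.has(e.timestamp))
}
```

Trading a little read-time work for never rewriting the file in place keeps the write path append-only and crash-safe.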
Pasted Content Handling
Large blocks of pasted text are not suitable for storing directly in the history file. The system uses size-based tiering:
```typescript
for (const [id, content] of Object.entries(entry.pastedContents)) {
  if (content.type === 'image') continue // Images stored separately

  if (content.content.length <= MAX_PASTED_CONTENT_LENGTH) {
    // Small text (<= 1024 characters) stored inline
    storedPastedContents[Number(id)] = {
      id: content.id, type: content.type,
      content: content.content,
    }
  } else {
    // Large text stored as a hash reference; content written to the paste store
    const hash = hashPastedText(content.content)
    storedPastedContents[Number(id)] = {
      id: content.id, type: content.type,
      contentHash: hash,
    }
    void storePastedText(hash, content.content)
  }
}
```
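The hash reference is presumably a content hash. A plausible sketch of hashPastedText (the actual digest algorithm and truncation length are not shown in the source):

```typescript
import { createHash } from 'node:crypto'

// Hypothetical content-address for large pastes: SHA-256, truncated for
// readability. The hash function Claude Code actually uses is not shown.
export function hashPastedText(content: string): string {
  return createHash('sha256').update(content, 'utf8').digest('hex').slice(0, 16)
}
```

A content-addressed store is naturally deduplicating: pasting the same block twice reuses one stored copy.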
Cross-Project Resumption
A user might start Claude in one directory and then want to resume a session from a different project. crossProjectResume.ts handles this scenario:
```typescript
export function checkCrossProjectResume(
  log: LogOption,
  showAllProjects: boolean,
  worktreePaths: string[],
): CrossProjectResumeResult {
  const currentCwd = getOriginalCwd()

  if (!showAllProjects || !log.projectPath || log.projectPath === currentCwd) {
    return { isCrossProject: false }
  }

  // Check if it's a different worktree of the same Git repository
  const isSameRepo = worktreePaths.some(
    wt => log.projectPath === wt || log.projectPath!.startsWith(wt + sep),
  )

  if (isSameRepo) {
    return {
      isCrossProject: true,
      isSameRepoWorktree: true,
      projectPath: log.projectPath,
    }
  }

  // Different repository — generate a cd command
  const sessionId = getSessionIdFromLog(log)
  const command = `cd ${quote([log.projectPath])} && claude --resume ${sessionId}`
  return {
    isCrossProject: true,
    isSameRepoWorktree: false,
    command,
    projectPath: log.projectPath,
  }
}
```
For different worktrees of the same Git repository, resumption can happen directly (same codebase). For entirely different projects, the system generates a cd + claude --resume command for the user to execute.
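One detail worth noting in the worktree check is the `wt + sep` suffix: a bare startsWith would wrongly match sibling directories that share a name prefix. A small demonstration:

```typescript
import { join, sep } from 'node:path'

// Why the check appends the path separator: 'repo/main2' starts with
// 'repo/main' as a string, yet is not inside that worktree.
export function isInsideWorktree(p: string, wt: string): boolean {
  return p === wt || p.startsWith(wt + sep)
}
```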
Session URL Parsing
The --resume argument supports three formats:
```typescript
export function parseSessionIdentifier(
  resumeIdentifier: string,
): ParsedSessionUrl | null {
  // 1. JSONL file path
  if (resumeIdentifier.toLowerCase().endsWith('.jsonl')) {
    return {
      sessionId: randomUUID() as UUID,
      ingressUrl: null,
      isUrl: false,
      jsonlFile: resumeIdentifier,
      isJsonlFile: true,
    }
  }

  // 2. UUID session ID
  if (validateUuid(resumeIdentifier)) {
    return {
      sessionId: resumeIdentifier as UUID,
      ingressUrl: null,
      isUrl: false,
      jsonlFile: null,
      isJsonlFile: false,
    }
  }

  // 3. Ingress URL (remote resumption)
  try {
    const url = new URL(resumeIdentifier)
    return {
      sessionId: randomUUID() as UUID,
      ingressUrl: url.href,
      isUrl: true,
      jsonlFile: null,
      isJsonlFile: false,
    }
  } catch {
    // Not a valid URL
  }

  return null
}
```
The three formats cover different use cases:
- UUID — most common, looked up from the local ~/.claude/projects/ directory
- JSONL file — direct file path, used for debugging or importing
- URL — connects to a remote session ingress, used for cross-device resumption
Session File Resolution
When resuming a session, the system needs to find the corresponding JSONL file under ~/.claude/projects/. The search logic accounts for worktree scenarios:
```typescript
export async function resolveSessionFilePath(
  sessionId: string,
  dir?: string,
): Promise<...> {
  const fileName = `${sessionId}.jsonl`

  if (dir) {
    // First look in the current project directory
    const canonical = await canonicalizePath(dir)
    const projectDir = await findProjectDir(canonical)
    if (projectDir) {
      const filePath = join(projectDir, fileName)
      // stat check + zero-byte filtering
    }

    // Worktree fallback — the session may exist in a different worktree root
    let worktreePaths = await getWorktreePathsPortable(canonical)
    for (const wt of worktreePaths) {
      if (wt === canonical) continue
      // Search each worktree
    }
    return undefined
  }

  // No dir — scan all project directories
  const projectsDir = getProjectsDir()
  let dirents = await readdir(projectsDir)
  for (const name of dirents) {
    // Search each directory
  }
  return undefined
}
```
Zero-byte files are treated as not found — this handles cases where a file was truncated but not deleted, allowing the search to continue to a valid copy in a sibling directory.
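The stat check plus zero-byte filter can be sketched as follows (the helper name is illustrative):

```typescript
import { stat } from 'node:fs/promises'

// Treat a missing file and a zero-byte file identically: both mean
// "keep searching sibling directories for a valid copy".
export async function statNonEmpty(filePath: string): Promise<boolean> {
  try {
    const s = await stat(filePath)
    return s.isFile() && s.size > 0
  } catch {
    return false // ENOENT and friends: not found
  }
}
```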
Cost State Restoration
Resuming a session restores not only the conversation content but also the cost tracking state:
```typescript
export function restoreCostStateForSession(sessionId: string): boolean {
  const data = getStoredSessionCosts(sessionId)
  if (!data) {
    return false
  }
  setCostStateForRestore(data)
  return true
}
```
Cost data is saved in the project configuration, associated by session ID. During restoration, the session ID is checked for a match — preventing cost data from different sessions from getting mixed up. The restored data includes:
- Total API cost (USD)
- API duration (with and without retries)
- Tool execution duration
- Lines of code changed statistics
- Per-model usage
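The list above suggests a state shape roughly like the following. This is a hypothetical reconstruction — the actual type in the codebase is not shown in the source — together with the session-ID guard described above:

```typescript
// Hypothetical shape of the restored cost state, inferred from the list
// of restored fields; actual field names in the codebase may differ.
export interface SessionCostState {
  totalCostUSD: number
  apiDurationMs: number            // excluding retries
  apiDurationWithRetriesMs: number
  toolUseDurationMs: number
  linesAdded: number
  linesRemoved: number
  modelUsage: Record<string, { inputTokens: number; outputTokens: number }>
}

// The guard mirrors the session-ID match check: stored state is applied
// only when it belongs to the session being resumed.
export function restoreIfMatching(
  storedSessionId: string,
  currentSessionId: string,
  stored: SessionCostState,
): SessionCostState | null {
  return storedSessionId === currentSessionId ? stored : null
}
```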
Summary
Claude Code's session management system addresses the core challenges of AI Agent persistence:
- Incremental writes — the JSONL format supports append-only writes; a process crash only loses the last line
- Two-phase listing — stat-only pre-filtering + on-demand content reads keeps things fast even with thousands of sessions
- Compact Boundary truncation — on resume, only messages after the last compaction are loaded; a 24MB file becomes 3MB in memory
- Cross-environment compatibility — Bun/Node hash differences are resolved through prefix fallback
- Concurrency safety — file locking protects history writes; session-first ordering prevents session interleaving
- Complete state restoration — conversation, cost, and permission context are all restored together
The core design philosophy of this system is "dealing with reality" — files get truncated, processes crash, multiple sessions run concurrently, and users switch between different directories and devices. Every edge case has a corresponding safeguard.