DFIR Guide: Digital Forensics and Incident Response Methodology
Digital forensics and incident response are related but distinct disciplines. Incident response is the operational process of containing and eradicating a threat. Digital forensics is the systematic collection, preservation, and analysis of evidence to understand what happened, how it happened, and what was affected. In practice they overlap — the same analyst often performs both — but the forensic mindset requires different rigor: evidence must be collected in a way that preserves its integrity, the order of collection matters, and the analytical approach must be defensible if the investigation results in litigation, insurance claims, or regulatory reporting. This guide covers the methodology, toolchain, and decision points that define a credible DFIR investigation.
DFIR vs. Incident Response: The Forensics Distinction
Incident response focuses on restoring operations: contain the threat, eradicate persistence, recover systems. The primary goal is business continuity. Forensic investigation focuses on understanding and attribution: what did the attacker do, when, to what systems, and what data was affected? The primary goal is evidence.
When forensics matters beyond internal understanding:
- Litigation and law enforcement referral: Evidence must be collected following chain of custody procedures to be admissible. Ad-hoc investigation without documented collection procedures can render evidence inadmissible.
- Cyber insurance claims: Insurers require forensic evidence of breach scope, timeline, and root cause. A credible forensic report from a qualified firm is typically required for claims above a threshold.
- Regulatory notification: GDPR, HIPAA, SEC incident disclosure, and state breach notification laws require determination of what data was affected and when. Forensics establishes the scope.
- Post-incident litigation: If a breach affects customers or partners, the forensic record becomes evidence in civil proceedings.
When you can prioritize IR over forensics:
- Ransomware where the scope is obvious, no litigation is anticipated, and speed of recovery is the only priority
- Confirmed malware infection with known indicators and no evidence of data exfiltration
- Insider threat where HR and legal have determined the path forward does not require evidentiary-quality investigation
The practical compromise: Most enterprise DFIR involves forensic rigor applied proportionally to risk. Full chain of custody for systems involved in data exfiltration; expedited evidence capture without full chain of custody for systems that need to be restored quickly. Document the decisions made and why.
Order of Volatility: What to Collect First and Why
The order of volatility principle establishes that evidence must be collected in order from most to least volatile — data that disappears soonest is collected first.
RFC 3227 Order of Volatility (most to least volatile):
- CPU registers, cache, and process state (lost immediately on power cycle)
- Routing tables, ARP cache, process table, kernel statistics, memory
- Temporary file systems and swap space
- Data on local disk
- Remote logging and monitoring data
- Physical configuration and network topology
- Archival media (backups, tapes)
In practice, enterprise DFIR prioritizes:
First — running memory (RAM): Memory contains running processes, loaded DLLs, active network connections, decrypted credentials, encryption keys, and malware that may exist nowhere else on disk (fileless malware). On a live system, capture memory before any other action. Memory is lost immediately on reboot or power cycle.
Second — volatile system state: Before capturing disk, collect: running process list with command-line arguments, active network connections (netstat), logged-in users, recently created files, registry run keys, scheduled tasks, loaded services, and clipboard contents. This can be automated with tools like KAPE's triage collection or Velociraptor artifact collection.
Third — disk image: A forensic-quality disk image (bit-for-bit copy with cryptographic hash verification) preserves the complete file system including deleted files, unallocated space, and metadata. For live systems where downtime is unacceptable, use a live acquisition tool rather than powering off.
Fourth — log sources: Windows Event Logs, Sysmon logs, EDR telemetry, network logs, cloud provider logs, authentication logs. Many of these are available from centralized logging even after the endpoint is reimaged — but confirm log retention windows before the investigation starts.
The reboot decision: Rebooting an infected system destroys memory evidence but may be necessary for containment. In most cases, capture memory first (5-15 minutes for most enterprise systems), then make the reboot decision. If the system cannot be kept running even briefly, document the decision and proceed to disk imaging after reboot.
Memory Forensics: Extracting Evidence from RAM
Memory forensics recovers artifacts that do not exist on disk: fileless malware, injected code, decrypted credentials, active C2 connections, and process injection evidence. It is often the most valuable forensic step in modern intrusions.
Memory acquisition tools:
- WinPmem: Open-source, widely used, produces raw memory dumps or AFF4 format. Runs from the command line: winpmem_mini_x64_rc2.exe memdump.raw
- Magnet RAM Capture: Free, GUI-based, simple to deploy to non-forensic staff
- F-Response: Enterprise tool for remote memory acquisition without installing agents
- Velociraptor: Can collect memory remotely at scale across a fleet
- FTK Imager: Multi-purpose tool that includes memory acquisition
Volatility 3 — the memory analysis framework: Volatility is the standard for memory forensic analysis. Volatility 3 (Python 3) uses symbol tables instead of profiles for Windows, making it compatible with modern Windows versions without requiring exact profile matches.
Essential Volatility plugins for incident response:
# List running processes with parent/child relationships
vol.py -f memdump.raw windows.pstree
# Detect process injection (hollow processes, DLL injection)
vol.py -f memdump.raw windows.malfind
# List network connections (active and recently closed)
vol.py -f memdump.raw windows.netstat
# List loaded DLLs per process — detect unsigned or unusual DLLs
vol.py -f memdump.raw windows.dlllist --pid 1234
# Dump file objects cached for a process, including its executable image
vol.py -f memdump.raw windows.dumpfiles --pid 1234
# Dump registry hives from memory (for offline registry analysis)
vol.py -f memdump.raw windows.registry.hivelist
vol.py -f memdump.raw windows.registry.printkey --key "Software\\Microsoft\\Windows\\CurrentVersion\\Run"
# List the full command-line arguments each process was launched with
vol.py -f memdump.raw windows.cmdline
What to look for in memory analysis:
- Processes with no parent or unexpected parents (svchost.exe spawned by cmd.exe)
- Processes running from temp directories or user profile paths
- Processes with names that mimic legitimate system processes (svch0st.exe, lsas.exe)
- Network connections from processes that should not have network access
- malfind results showing executable memory regions not backed by a file on disk (classic shellcode injection)
- Missing PEB (Process Environment Block), a sign of process hollowing
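Several of these checks can be automated once the process listing (e.g. pstree output) has been exported. A minimal Python sketch, assuming the list has been parsed into dicts; the rule set, thresholds, and field names here are illustrative, not a complete detection:

```python
import difflib

# Expected parent for sensitive system processes (illustrative, not exhaustive)
LEGIT_PARENTS = {
    "svchost.exe": {"services.exe"},
    "lsass.exe": {"wininit.exe"},
}
KNOWN_GOOD = {"svchost.exe", "lsass.exe", "services.exe", "wininit.exe", "cmd.exe"}
SUSPECT_PATHS = ("\\temp\\", "\\appdata\\")

def flag_suspicious(processes):
    """processes: list of dicts with 'name', 'parent', and 'path' keys."""
    findings = []
    for p in processes:
        name = p["name"].lower()
        # Parent mismatch: e.g. svchost.exe spawned by anything but services.exe
        expected = LEGIT_PARENTS.get(name)
        if expected and p["parent"].lower() not in expected:
            findings.append((p["name"], "unexpected parent " + p["parent"]))
        # Typosquatted system names (svch0st.exe, lsas.exe): crude near-match
        if name not in KNOWN_GOOD and any(
            difflib.SequenceMatcher(None, name, good).ratio() >= 0.8
            for good in LEGIT_PARENTS
        ):
            findings.append((p["name"], "mimics a system process name"))
        # Execution from temp or user-profile paths
        if any(frag in p["path"].lower() for frag in SUSPECT_PATHS):
            findings.append((p["name"], "runs from " + p["path"]))
    return findings
```

Heuristics like these only triage; every hit still needs manual confirmation against the memory image.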
Disk Forensics: Imaging, Hashing, and File System Analysis
A forensic disk image is a bit-for-bit copy of a storage device, accompanied by cryptographic hash verification that proves the image is identical to the original. This is the foundation of chain of custody for disk evidence.
Creating a forensic image:
# Using dd (Linux) — basic but functional (note: conv=sync pads unreadable
# blocks with zeros, so source and image hashes will differ if the source
# has bad sectors)
dd if=/dev/sda of=/mnt/evidence/disk.dd bs=4M conv=sync,noerror
md5sum /dev/sda > /mnt/evidence/disk.dd.md5
md5sum /mnt/evidence/disk.dd >> /mnt/evidence/disk.dd.md5
# Using dcfldd — enhanced dd with hashing built in
dcfldd if=/dev/sda of=/mnt/evidence/disk.dd hash=sha256 hashlog=/mnt/evidence/disk.sha256
# Using ewfacquire — E01 format with built-in compression and verification
ewfacquire -C "Case001" -D "Compromised workstation" -e "Investigator Name" /dev/sda
FTK Imager (Windows, free): The most commonly used forensic imaging tool for Windows endpoints. Supports E01, AFF, and raw formats. Verifies hash before and after imaging automatically.
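Whichever imager is used, the image hash should be re-verified later: before analysis begins and whenever custody changes hands. A minimal sketch of streamed verification using Python's hashlib (function names are illustrative):

```python
import hashlib

def hash_file(path, algo="sha256", chunk_size=4 * 1024 * 1024):
    """Stream-hash a potentially huge image file without loading it into RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_image(image_path, recorded_hash, algo="sha256"):
    """Compare the image's current hash against the hash recorded at acquisition."""
    return hash_file(image_path, algo) == recorded_hash.strip().lower()
```

A failed verification means the image can no longer be relied on as evidence; document the discrepancy and re-acquire if the source still exists.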
KAPE (Kroll Artifact Parser and Extractor): KAPE is not a full disk imager — it is a targeted artifact collector. Instead of imaging the full disk, KAPE collects specific forensically relevant files: event logs, registry hives, prefetch files, browser history, recent files, jump lists, LNK files, shellbags, and Amcache. Much faster than full disk imaging (minutes vs. hours) and sufficient for most IR investigations that do not require full disk preservation.
kape.exe --tsource C: --tdest E:\Evidence --tflush --target !SANS_Triage
The !SANS_Triage target bundle collects the standard forensic artifact set used in SANS FOR508.
Key Windows forensic artifacts and what they prove:
- Prefetch files (C:\Windows\Prefetch\): Evidence that an executable ran. Contains the executable name, run count, last run time, and files accessed during execution. Survives deletion of the executable.
- Amcache.hve: Registry hive tracking recently executed programs with SHA-1 hash. Useful for linking a file hash to execution even after deletion.
- Shimcache (AppCompatCache): List of executables that the OS compatibility layer has seen, with timestamps. Does not prove execution, but proves the file existed on disk.
- Jump Lists and LNK files: Recent documents opened by users. Evidence of data accessed before exfiltration.
- $MFT (Master File Table): NTFS metadata for every file: creation, modification, access, and MFT record change timestamps. Deleted file metadata persists in the MFT until the record is reused.
- Windows Event Logs: See the Windows Event Log Analysis guide for the specific Event IDs relevant to security investigations.
Timeline Reconstruction: Super Timeline Analysis with Plaso
A forensic super timeline merges timestamps from multiple artifact sources into a single chronological record of system activity. This is the analytical step that transforms a collection of artifacts into a coherent narrative of the intrusion.
Plaso (log2timeline): Plaso is the standard tool for super timeline generation. It parses 50+ artifact types (Windows Event Logs, prefetch, Amcache, Shimcache, browser history, registry, LNK files, Office MRU, NTFS $MFT, Sysmon, and more) and outputs a unified timeline in CSV or JSON.
# Create a Plaso storage file from a disk image or artifact collection
log2timeline.py --storage-file timeline.plaso /path/to/evidence/
# Filter and output to CSV for analysis
psort.py -o l2tcsv -w timeline.csv timeline.plaso
# Filter to a specific time window
psort.py -o l2tcsv -w timeline.csv timeline.plaso "date > '2026-01-15 00:00:00' AND date < '2026-01-16 00:00:00'"
Timeline analysis workflow:
- Establish the known anchor point — the time of the confirmed malicious event (first alert, ransom note, exfiltration timestamp)
- Work backward from the anchor point to identify initial access
- Work forward to identify post-exploitation activity, lateral movement, and persistence
- Identify gaps in the timeline that indicate log clearing or artifact deletion
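The gap-identification step lends itself to scripting once the timeline is merged. A minimal sketch, assuming events have already been parsed into (timestamp, source, description) tuples; the one-hour threshold is an illustrative default, not a standard:

```python
from datetime import datetime, timedelta

def find_gaps(events, max_gap=timedelta(hours=1)):
    """Return (gap_start, gap_end) pairs where consecutive events are further
    apart than max_gap. On a normally busy system, such silent windows are
    candidates for log clearing or artifact deletion."""
    ordered = sorted(events, key=lambda e: e[0])
    gaps = []
    for prev, cur in zip(ordered, ordered[1:]):
        if cur[0] - prev[0] > max_gap:
            gaps.append((prev[0], cur[0]))
    return gaps
```

Tune the threshold to the system's normal event rate; a quiet workstation overnight produces legitimate gaps that a domain controller never would.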
NTFS timestamp manipulation (timestomping): Attackers use timestomping to modify file timestamps and blend malicious files into the normal timeline. The $STANDARD_INFORMATION attribute timestamps can be modified by the attacker; the $FILE_NAME attribute timestamps (in the MFT) cannot be easily changed without kernel-level access. Discrepancy between $SI and $FN timestamps is a timestomping indicator.
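The $SI/$FN comparison can be sketched in a few lines, assuming MFT records have already been parsed (e.g. with MFTECmd) into dicts; the field names here are illustrative, not the parser's actual output schema:

```python
from datetime import datetime, timedelta

def timestomp_indicators(mft_records, tolerance=timedelta(seconds=1)):
    """Flag files whose $STANDARD_INFORMATION creation time predates their
    $FILE_NAME creation time. $FN is set when the file is created and is hard
    to alter without kernel-level access, so $SI earlier than $FN suggests
    the $SI timestamp was backdated (timestomped)."""
    flagged = []
    for rec in mft_records:
        if rec["si_created"] + tolerance < rec["fn_created"]:
            flagged.append(rec["path"])
    return flagged
```

The tolerance absorbs sub-second rounding differences between the two attributes; legitimate operations (file moves, some installers) can also produce mismatches, so each hit needs manual review.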
Cloud DFIR: How Investigation Changes in AWS, Azure, and GCP
Cloud environments present different forensic challenges than traditional endpoints: no physical hardware access, ephemeral compute, shared responsibility models, and evidence that lives in cloud-provider services rather than local disk.
What changes in cloud DFIR:
Evidence sources shift from endpoint to cloud services:
- CloudTrail (AWS), Azure Activity Log, GCP Cloud Audit Logs: API call records. Who called what API from where, with what identity. The cloud equivalent of EDR process execution logs.
- VPC Flow Logs / Azure NSG Flow Logs: Network connection records at the VPC level. Shows traffic between resources without payload content.
- S3 Access Logs / Azure Storage Logs: Data access records for object storage.
- IAM credential reports: Which credentials exist, when they were last used, what they accessed.
EC2 instance memory acquisition: You cannot directly attach a forensic tool to an EC2 instance's hypervisor memory. Options:
- Install an agent-based memory acquisition tool before the incident (proactive)
- Use AWS Systems Manager Run Command to execute a memory acquisition tool on a running instance
- For volatile memory: capture process state, network connections, and running processes via SSM before the instance is terminated
Disk forensics in cloud:
- Take an EBS snapshot of the compromised instance before termination: aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description "Forensic capture"
- Attach the snapshot to a forensic analysis instance in a separate, isolated VPC
- Mount read-only and analyze with standard Linux forensic tools
CloudTrail forensics — key queries:
# Find all API calls from a compromised IAM identity
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=Username,AttributeValue=compromised-role \
--start-time 2026-01-15T00:00:00Z --end-time 2026-01-16T00:00:00Z
# Look for credential exfiltration indicators
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=GetSecretValue
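For CloudTrail logs delivered to S3 rather than queried through lookup-events, the same filtering can be done offline. A sketch assuming a downloaded, gzipped log file in the standard CloudTrail "Records" layout; the file path and identity name are hypothetical:

```python
import gzip
import json
from datetime import datetime, timezone

def filter_events(log_path, username, start, end):
    """Return (eventTime, eventName, sourceIPAddress) tuples for events from
    one identity inside a time window. Field names follow the CloudTrail
    record format."""
    with gzip.open(log_path, "rt") as f:
        records = json.load(f)["Records"]
    hits = []
    for rec in records:
        when = datetime.strptime(
            rec["eventTime"], "%Y-%m-%dT%H:%M:%SZ"
        ).replace(tzinfo=timezone.utc)
        who = rec.get("userIdentity", {}).get("userName", "")
        if who == username and start <= when <= end:
            hits.append((rec["eventTime"], rec["eventName"],
                         rec.get("sourceIPAddress")))
    return hits
```

Role-based activity appears under different userIdentity fields (e.g. sessionContext) rather than userName, so adapt the identity extraction to how the compromised principal authenticated.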
Cloud DFIR tooling:
- AWS CloudTrail Lake: SQL-based querying of CloudTrail events — faster than S3 log analysis for large volumes
- Trivy / Prowler: Cloud configuration forensics — what was the security posture of the account at the time of the incident?
- Margarita Shotgun: Remote memory acquisition for AWS EC2 instances over SSH
- CloudFox / Pacu (used in enumeration mode): Enumerate what a compromised identity had access to
DFIR Toolchain: Open-Source and Enterprise Options
Velociraptor — enterprise DFIR at scale: Velociraptor is the most significant open-source advancement in DFIR in recent years. It is an endpoint agent and server that enables remote forensic artifact collection at fleet scale. Key capabilities: run VQL (Velociraptor Query Language) queries across thousands of endpoints simultaneously, collect specific artifacts remotely, perform real-time threat hunting, and monitor for specific conditions.
-- Hunt for processes making network connections to known bad IPs
SELECT Pid, Name, CommandLine, RemoteAddr
FROM connections()
WHERE RemoteAddr =~ '185\.220\.'
Deploy Velociraptor in your environment before an incident — post-incident deployment is harder.
KAPE (Kroll Artifact Parser and Extractor): The fastest way to collect forensic artifacts from a Windows system. Pre-built target configurations collect event logs, registry hives, prefetch, browser history, and more. Output is processed through Modules (parsers) that convert binary artifacts to human-readable CSV and JSON. Free, maintained by Kroll/Eric Zimmerman.
Autopsy / Sleuth Kit: Open-source disk forensics GUI and library. Supports Windows, Linux, and macOS disk images. Keyword search across disk images, deleted file recovery, timeline view, hash-set filtering (NSRL), and plugin architecture for specialized analysis.
Eric Zimmerman Tools: A suite of free Windows forensic artifact parsers covering every major artifact type: MFTECmd (MFT), LECmd (LNK), JLECmd (Jump Lists), PECmd (Prefetch), RECmd (Registry), AppCompatCacheParser, AmcacheParser, and more. The de facto standard for Windows artifact parsing.
MISP (Malware Information Sharing Platform): Threat intelligence sharing platform. During DFIR, use MISP to look up IOCs against community threat intelligence and to share findings with sector ISACs.
Commercial DFIR platforms:
- Magnet AXIOM/Cyber: Strongest GUI for Windows and cloud artifact analysis. Evidence management built in.
- EnCase: Traditionally dominant in law enforcement and legal-hold investigations. Strong chain of custody documentation.
- Exterro FTK: Strong for large-scale eDiscovery investigations with regulatory requirements.
At a glance:
- Velociraptor: Remote fleet-scale forensic artifact collection and real-time threat hunting. Deploy before an incident for maximum value.
- KAPE: Fastest targeted artifact collection from Windows endpoints. Pre-built SANS triage target bundles collect the essential artifact set in minutes.
- Volatility 3: The standard framework for memory forensic analysis. Detects process injection and fileless malware, and extracts credentials from RAM.
- Plaso/log2timeline: Super timeline generation from 50+ artifact types. Merges timestamps into a single chronological record for intrusion narrative reconstruction.
- Eric Zimmerman Tools: Free suite of Windows artifact parsers covering MFT, LNK, Jump Lists, Prefetch, Registry, Amcache, and Shimcache.
Reporting: Technical and Executive Deliverables
A DFIR investigation produces two categories of deliverables: the technical forensic report for security teams and legal counsel, and the executive summary for leadership and insurers.
Technical forensic report structure:
- Executive summary (1 page): what happened, scope, timeline, impact
- Scope and methodology: systems examined, evidence collected, tools used, chain of custody documentation
- Findings: chronological timeline of attacker activity with supporting evidence references
- Root cause analysis: initial access vector, how it could have been prevented
- Impact assessment: data accessed or exfiltrated, systems affected, business impact
- Indicators of compromise: hashes, IPs, domains, registry keys, file paths
- Recommendations: short-term containment actions and long-term security improvements
- Evidence appendix: hash values, acquisition details, chain of custody forms
Chain of custody documentation: For each piece of evidence: item description, acquisition date/time, acquired by, acquisition method, hash values (MD5 and SHA-256), storage location, and log of everyone who accessed the evidence. This is essential for litigation and insurance claims.
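Generating such an entry can be scripted so the hashes and timestamps are captured at acquisition rather than reconstructed later. A minimal sketch computing both hash values in one pass; the field names are illustrative, not any standard form, so adapt them to your organization's chain-of-custody template:

```python
import hashlib
from datetime import datetime, timezone

def custody_record(evidence_path, description, acquired_by, method):
    """Build a chain-of-custody entry for one evidence item, with MD5 and
    SHA-256 computed in a single streamed read of the file."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(evidence_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return {
        "item": description,
        "acquired_utc": datetime.now(timezone.utc).isoformat(),
        "acquired_by": acquired_by,
        "method": method,
        "md5": md5.hexdigest(),
        "sha256": sha256.hexdigest(),
        "access_log": [],  # append (who, when, purpose) on every access
    }
```

Every subsequent access appends to access_log; an entry created at acquisition time and maintained continuously is far more defensible than one reconstructed for litigation.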
Executive summary essentials:
- What happened (in plain language)
- When it happened and how long the attacker had access
- What data was affected (specific data types and approximate volumes)
- How the attacker got in
- What has been done to contain and eradicate the threat
- What is being done to prevent recurrence
- Regulatory notification obligations and timelines
Regulatory notification timing: GDPR: 72 hours from awareness of breach to supervisory authority. HIPAA: 60 days from discovery to affected individuals and HHS. SEC (public companies): 4 business days from determining materiality. State breach notification laws vary from 30 to 90 days. The forensic report is the evidentiary basis for all of these — start it immediately, even if incomplete.
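These clocks can be tracked mechanically from the discovery timestamp. A sketch using the deadlines stated above; verify the current rules with counsel before relying on computed dates, since triggers differ (awareness vs. discovery vs. materiality determination):

```python
from datetime import datetime, timedelta

def add_business_days(start: datetime, days: int) -> datetime:
    """Advance by N business days, skipping Saturday/Sunday (holidays ignored)."""
    d = start
    while days:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return d

def notification_deadlines(discovery: datetime) -> dict:
    """Map each regime named in the text to its latest notification date."""
    return {
        "GDPR (supervisory authority)": discovery + timedelta(hours=72),
        "HIPAA (individuals and HHS)": discovery + timedelta(days=60),
        "SEC (after materiality determination)": add_business_days(discovery, 4),
    }
```

State breach notification deadlines (30 to 90 days) would be added per applicable jurisdiction.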
The bottom line
DFIR is a discipline, not just a tool set. The most common failures are collecting evidence in the wrong order (rebooting before capturing memory), failing to document chain of custody until after the investigation (making it unreconstructable), and treating forensics as optional until insurers or regulators require it. Deploy Velociraptor proactively so you have remote collection capability before an incident. Establish a forensic-ready logging baseline — if CloudTrail, Windows Event Logs, and Sysmon are not configured to capture the events you need, no forensic tool will recover what was never logged. The investigation is only as good as the evidence that survives.
Frequently asked questions
What is the difference between DFIR and traditional incident response?
Incident response prioritizes containment and recovery: stop the bleeding, eradicate the attacker, restore operations. Digital forensics adds evidentiary rigor: collect evidence in order of volatility, maintain chain of custody, and produce findings that can support litigation, regulatory reporting, or insurance claims. In practice, most enterprise DFIR involves both disciplines applied simultaneously. The forensic mindset matters most when the investigation may result in law enforcement referral, civil litigation, insurance claims, or regulatory notification.
What order should I collect evidence in during a DFIR investigation?
Follow the order of volatility: RAM and running process state first (lost on reboot), then volatile system state (network connections, logged-in users, running services), then disk image or targeted artifact collection, then log sources from centralized logging. The most critical decision is whether to capture memory before rebooting or isolating the system. In most cases, capturing memory (5-15 minutes) is worth the time investment before making any changes to the system.
How do I perform memory forensics on a live system?
Use WinPmem or Magnet RAM Capture to acquire a raw memory dump without requiring a reboot. Then analyze the dump offline with Volatility 3. Key plugins: `windows.pstree` for process relationships, `windows.malfind` for process injection detection, `windows.netstat` for network connections from memory, and `windows.cmdline` for command history. Velociraptor can perform remote memory acquisition at scale across a fleet without physical access.
What is the best open-source DFIR toolchain?
Velociraptor for remote fleet-scale artifact collection, KAPE for targeted Windows artifact collection, Volatility 3 for memory analysis, Eric Zimmerman Tools for Windows artifact parsing (MFT, Prefetch, LNK, Registry), Plaso/log2timeline for super timeline construction, and Autopsy/Sleuth Kit for disk image browsing. All are free. SANS FOR508 course materials provide curated guidance on using these tools together in a structured investigation workflow.
How does DFIR change in cloud environments?
In cloud environments, evidence lives in cloud-provider services rather than local disk: CloudTrail/Azure Activity Log for API call records, VPC Flow Logs for network activity, IAM credential reports for identity forensics. Disk forensics shifts to EBS snapshot capture and analysis on a separate forensic instance. Memory acquisition requires an agent deployed before the incident or SSM Run Command during it. The key difference is that the evidence is often more complete (CloudTrail captures everything) but requires different tools and query patterns.
When do I need to preserve chain of custody?
Preserve chain of custody when the investigation may result in law enforcement referral, civil litigation against an attacker or a negligent party, regulatory enforcement action, or a cyber insurance claim above your insurer's threshold. Chain of custody documentation covers: what evidence was collected, by whom, using what method, when, with what cryptographic hash values, and who has accessed it since. If you are not sure whether chain of custody will be required, preserve it anyway — it is much harder to reconstruct after the fact.
How long should a DFIR investigation take?
Timeline varies significantly by incident scope. A contained malware infection with clear indicators can be investigated in 1-3 days. A sophisticated intrusion with lateral movement across multiple systems typically requires 2-4 weeks for full forensic analysis. Ransomware investigations involving data exfiltration and regulatory notification requirements commonly run 4-8 weeks to produce a complete forensic report. Cloud-only incidents can be faster if CloudTrail and logging were comprehensive. The 11-day median dwell time (Mandiant 2025) means a typical investigation must reconstruct well over a week of attacker activity, and half must reconstruct more.
Sources & references
- NIST SP 800-86: Guide to Integrating Forensic Techniques into Incident Response
- SANS FOR508: Advanced Incident Response, Threat Hunting, and Digital Forensics
- Volatility Foundation Documentation
- CISA Federal Incident Response Playbooks 2024
- Mandiant M-Trends 2025 Annual Threat Report
Founder & Cybersecurity Evangelist, Decryption Digest
Cybersecurity professional with expertise in threat intelligence, vulnerability research, and enterprise security. Covers zero-days, ransomware, and nation-state operations for 50,000+ security professionals weekly.