Level 0 or 1
Where 70%+ of enterprise security teams sit per SANS detection engineering survey
11 days
Average attacker dwell time before detection in 2024 per Mandiant M-Trends report
3x growth
Detection-as-Code adoption among enterprise security teams between 2022 and 2025
1,000+
Atomic Red Team test cases mapped to MITRE ATT&CK techniques

The 11-day average attacker dwell time from the 2024 Mandiant M-Trends report is a damning number. It means that in a typical enterprise compromise, an attacker operates inside the network for nearly two weeks before detection triggers a response. During those 11 days, attackers move laterally, escalate privileges, establish persistence, and exfiltrate data. The detection failure is not a technology problem: most enterprises have SIEM platforms with hundreds of rules enabled. It is a program maturity problem: the rules are vendor defaults, untested against real threat techniques, generating alert fatigue that trains analysts to ignore alerts.

The Detection Engineering Maturity Model provides a structured framework for diagnosing where a security team sits in the detection program maturity spectrum and what specific capabilities must be developed to advance. This guide covers the full model, the tools and practices that define each level, and the metrics that measure progress.

Why Most Detection Programs Are Stuck at Level 0 and What It Costs

Level 0 is defined not by the absence of detection tools but by the absence of a systematic detection program. Almost every enterprise with a SIEM has hundreds of enabled detection rules. The Level 0 condition is when those rules are entirely vendor-default, untested, unvalidated, and unmaintained. The security team's detection capability is entirely determined by what the SIEM vendor ships and updates.

Why does Level 0 persist even in well-funded security programs? Three structural causes:

Alert fatigue crushes improvement cycles. When a SIEM generates 500 to 2,000 alerts per day (a typical Level 0 state with vendor-default rules), every analyst hour goes to triage. There is no time to write new rules, test existing rules, or analyze detection coverage gaps. The triage burden is self-reinforcing: noisy default rules drive alert volume, which consumes analyst capacity, which prevents tuning and development, which keeps rules noisy.

Detection improvement is a development discipline, not a SOC discipline. SOC analysts are trained to investigate and respond to alerts. Writing, testing, and deploying detection rules requires software development skills (version control, CI/CD, scripting, data engineering) that are not part of the traditional SOC skill set. Organizations that expect SOC analysts to also develop detection content without dedicated detection engineering roles get neither SOC operations nor detection development done well.

Vendor rules are designed for breadth, not precision. SIEM and EDR vendors optimize their default rule sets to maximize coverage across all customer environments, which means default rules must be generic enough to fire in any environment. Generic rules produce high false positive rates in environments with specific legitimate behaviors that match the generic pattern. A vendor's default brute force rule with a threshold of 5 failed logins in 5 minutes fires hundreds of times per day in a helpdesk environment where legitimate password resets look identical to brute force from the rule's perspective.

The cost of staying at Level 0: A 2024 IBM Security report found that organizations with poor threat detection capabilities have breach costs 30% higher than organizations with mature detection programs. The operational cost is also significant: SANS research shows that Level 0 SOC teams spend 60-70% of analyst time on false positive triage, leaving 30-40% for real investigation. Advancing even one level, to Level 1 (tuned vendor rules with documented exclusions), typically reduces false positive triage time by 30-40% and frees analyst capacity for meaningful investigation work.

The Five Maturity Levels Defined

The following maturity level definitions describe concrete capability characteristics at each level, not aspirational states.

Level 0: Ad Hoc and Reactive

Capability indicators:

  • Detection rules are entirely vendor-default, never modified or tuned by the internal team
  • No version control for detection content; rules are configured directly in the SIEM UI
  • No testing of detection rules against simulated adversary techniques
  • No documentation of detection coverage or known coverage gaps
  • Alert triage is the SOC's dominant activity; investigation is reactive to alerts, never proactive
  • Alert fatigue is significant; analysts apply manual suppression patterns ("I'll just close anything from this IP address")

Where most teams are: This is the baseline state for 70%+ of enterprise security teams per SANS survey data.

Level 1: Tuned and Documented

Capability indicators:

  • Vendor-default rules have been tuned based on false positive data (exclusions are documented and applied systematically, not case-by-case)
  • Detection rules are documented: each rule has a description, ATT&CK mapping, known false positive patterns, and the analyst who approved it
  • Coverage gaps are identified informally ("we know we have no DNS exfiltration detection")
  • Basic version history exists: changes to rules are tracked in a ticket system or change log even if not in Git

Level 2: Systematic and Measured

Capability indicators:

  • Detection rules are stored in version control (Git)
  • New detections are developed from threat intelligence and threat hunting findings, not only from vendor updates
  • ATT&CK coverage is mapped and reviewed quarterly
  • False positive rates are measured per rule and used to drive tuning decisions
  • Detection coverage validation is done manually but regularly (quarterly ATT&CK-based gap review)

Level 3: Programmatic and Tested

Capability indicators:

  • Detection-as-Code: all rule changes go through peer review via pull requests before deployment
  • Automated testing: CI/CD pipelines run detection rule syntax validation on every PR
  • Coverage testing: Atomic Red Team or equivalent is used regularly to validate that rules fire against real attack techniques
  • Threat model exists: the detection program explicitly prioritizes coverage for threat actors and techniques most relevant to the organization's industry and attack surface
  • Metrics are tracked: MTTD, coverage percentage, false positive rate, and deployment velocity are reported monthly

Level 4: Adaptive and Automated

Capability indicators:

  • Full Detection-as-Code pipeline: rule development, peer review, automated testing against adversary emulation, and deployment are all automated
  • Continuous coverage validation: adversary emulation tests run automatically on a schedule against production-equivalent environments and report coverage gaps
  • Intelligence-driven detection: new threat intelligence findings automatically trigger detection development tasks
  • Detection program is integrated with the incident response process: every significant incident generates a detection retrospective and at least one new or improved detection rule within 5 business days
  • Self-assessment and improvement: the detection team regularly publishes internal detection coverage reports and uses them to drive annual detection roadmap planning

Detection-as-Code: Version Control, Peer Review, and CI/CD for Detection Rules

Detection-as-Code is the practice of applying software engineering discipline to detection rule development. It is the defining capability of Level 3 maturity and the most transformational change a detection program can make.

Why Detection-as-Code matters:

Without version control, detection rules have no audit trail. When a rule is changed to fix a false positive, there is no record of what the original rule looked like, who changed it, why, or whether the change was reviewed. Rule quality degrades silently over time. When an analyst leaves, the institutional knowledge embedded in their undocumented rule exclusions goes with them.

With Detection-as-Code, every detection rule is a file in a Git repository. Every change is a commit with an author, timestamp, and message. Every new rule or modification goes through a pull request reviewed by at least one other detection engineer before merging. The history of every rule is permanently auditable.
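
There is no single canonical repository layout, but a minimal structure that supports these practices might look like the following (directory and file names here are illustrative choices, not a standard):

detection-rules/                 # Git repository
├── rules/
│   ├── windows/                 # one rule file per detection
│   ├── linux/
│   └── cloud/
├── tests/                       # synthetic log samples and test mappings
├── docs/
│   └── coverage-matrix.csv      # technique-to-rule validation status
└── .github/workflows/ci.yml     # automated checks on every pull request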

Rule format selection:

The choice of detection rule format determines how portable and automatable the Detection-as-Code pipeline is. Two primary options:

  • Sigma (vendor-neutral): Sigma rules are YAML-formatted detection rules that describe detection logic against abstract field names and are then compiled to platform-specific query languages (KQL, SPL, EQL) via the pySigma library; a compilation sketch follows this list. Using Sigma enables a single rule to deploy to multiple SIEM platforms and provides access to thousands of community-contributed rules in the SigmaHQ/sigma GitHub repository. The trade-off: the Sigma abstraction layer does not perfectly translate every platform-specific feature, so some advanced rules must be written natively.

  • Native format (KQL, SPL, EQL): Rules written in the SIEM's native query language provide full access to platform-specific features without abstraction loss. The trade-off: rules are not portable across platforms and the team cannot leverage the Sigma community rule library directly (though Sigma rules can be compiled to KQL/SPL as a starting point).
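
To make the Sigma option concrete, the following minimal sketch compiles a YAML rule to a Splunk SPL query. It assumes the pySigma core library and the pysigma-backend-splunk package are installed; the rule itself is a simplified, hypothetical example.

# pip install pysigma pysigma-backend-splunk
from sigma.collection import SigmaCollection
from sigma.backends.splunk import SplunkBackend

# A hypothetical rule: encoded PowerShell command lines (ATT&CK T1059.001)
rules = SigmaCollection.from_yaml("""
title: Encoded PowerShell Command Line
status: experimental
description: Detects -EncodedCommand, a common obfuscation pattern
tags:
    - attack.execution
    - attack.t1059.001
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '\\powershell.exe'
        CommandLine|contains: '-EncodedCommand'
    condition: selection
""")

# Compile to SPL; other backend packages target KQL, EQL, and more
for query in SplunkBackend().convert(rules):
    print(query)

The same rule file, committed to Git, can be compiled for each target platform inside the CI/CD pipeline described next.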

CI/CD pipeline for detection rules:

A minimal CI/CD pipeline for detection rules includes three automated steps:

  1. Syntax validation: compile rules (Sigma to target format, or validate native rule syntax) and fail the CI job if syntax errors are found
  2. Duplicate detection: check that the rule slug or ID does not conflict with existing rules
  3. Metadata validation: verify that required metadata fields (ATT&CK technique mapping, severity, description, author, date) are present

An advanced pipeline adds three more steps:

  4. Automated testing: generate synthetic log events matching the detection scenario and verify that the rule fires against them
  5. False positive regression testing: run the rule against a historical dataset of known-clean events and verify the false positive rate does not exceed a threshold
  6. Automated deployment: on merge to the main branch, deploy the rule to the SIEM via API
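
A minimal sketch of the syntax and metadata checks (steps 1 and 3) as a CI script, assuming rules are stored as Sigma YAML files under rules/; the script name and required-field list are illustrative, not a standard:

# ci_validate.py - fail the CI job on syntax or metadata errors
# pip install pysigma pyyaml
import pathlib
import sys

import yaml
from sigma.collection import SigmaCollection
from sigma.exceptions import SigmaError

REQUIRED_FIELDS = {"title", "description", "author", "date", "tags"}

failures = 0
for path in sorted(pathlib.Path("rules").rglob("*.yml")):
    text = path.read_text()

    # Step 1 (syntax): the file must parse as valid Sigma
    try:
        SigmaCollection.from_yaml(text)
    except SigmaError as exc:
        print(f"SYNTAX {path}: {exc}")
        failures += 1
        continue

    # Step 3 (metadata): required fields plus an ATT&CK technique tag
    doc = yaml.safe_load(text)
    missing = REQUIRED_FIELDS - doc.keys()
    if missing:
        print(f"META   {path}: missing {sorted(missing)}")
        failures += 1
    if not any(str(t).startswith("attack.t") for t in doc.get("tags", [])):
        print(f"META   {path}: no ATT&CK technique tag")
        failures += 1

sys.exit(1 if failures else 0)  # non-zero exit blocks the merge

Step 2 (duplicate detection) extends naturally: collect each rule's id field across files and fail on collisions.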

Testing Detection Coverage with Atomic Red Team, CALDERA, and Stratus Red Team

Detection rules that have never been tested against real adversary techniques are untested hypotheses. Coverage testing validates that rules actually fire when the techniques they are supposed to detect occur.

Atomic Red Team:

Atomic Red Team (ART), maintained by Red Canary, is the most widely adopted open-source library of adversary technique simulations. Each atomic test is a small, contained script that executes a specific MITRE ATT&CK technique on a target system. The library contains over 1,000 tests covering hundreds of ATT&CK techniques across Windows, macOS, Linux, and cloud platforms.

Atomic tests are defined in YAML, with executors for PowerShell, cmd, sh, and bash depending on platform. The Invoke-AtomicRedTeam PowerShell module provides a management interface:

# Install and import the execution framework (one-time setup)
Install-Module -Name invoke-atomicredteam -Scope CurrentUser
Import-Module Invoke-AtomicRedTeam

# List available atomics for a specific technique
Invoke-AtomicTest T1059.001 -ShowDetailsBrief

# Run all atomics for a technique
Invoke-AtomicTest T1059.001

# Run a specific atomic by test number
Invoke-AtomicTest T1059.001 -TestNumbers 1

# Run with a timeout (seconds)
Invoke-AtomicTest T1059.001 -TimeoutSeconds 120

# Execute the cleanup commands after testing
Invoke-AtomicTest T1059.001 -Cleanup

# Write an execution log for correlating tests with SIEM alerts
Invoke-AtomicTest T1059.001 -ExecutionLogPath .\T1059.001-executions.csv

Atomic tests should run on a dedicated test system with the same monitoring stack as production (same EDR agent version, same Sysmon configuration, same log forwarding pipeline). After running a test, verify in the SIEM or EDR that the expected alert fired within the expected timeframe. Document results in a detection coverage matrix.
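
One lightweight form for that matrix is a CSV in the detection repository that every test run appends to. A sketch, with hypothetical field names:

# coverage_matrix.py - append Atomic Red Team validation outcomes to a CSV
import csv
import datetime

def record_result(technique: str, test_name: str, detected: bool,
                  rule_id: str = "", notes: str = "",
                  path: str = "docs/coverage-matrix.csv") -> None:
    """Record one validation outcome in the detection coverage matrix."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.date.today().isoformat(),
            technique,                            # e.g. T1059.001
            test_name,                            # which atomic was executed
            "detected" if detected else "missed",
            rule_id,                              # the rule that fired, if any
            notes,                                # e.g. "fired after 4 minutes"
        ])

record_result("T1059.001", "Encoded command execution", True, rule_id="win-ps-encoded-001")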

MITRE CALDERA:

CALDERA is MITRE's open-source automated adversary emulation platform. Where Atomic Red Team provides individual technique tests, CALDERA orchestrates multi-step attack chains using configurable adversary profiles. CALDERA agents run on target systems and execute technique sequences autonomously. This makes CALDERA more useful for testing end-to-end detection chains (initial access through lateral movement to persistence) than for validating individual techniques.

Stratus Red Team:

Stratus Red Team (by Datadog) focuses on cloud-native attack technique simulation for AWS, Azure, GCP, and Kubernetes environments. It fills the gap left by ART and CALDERA for cloud threat detection validation. Example usage:

# List available cloud attack techniques
stratus list

# Detonate an AWS credential theft technique
stratus detonate aws.credential-access.ec2-get-password-data

# Clean up after detonation
stratus cleanup aws.credential-access.ec2-get-password-data

For detection programs covering cloud workloads in AWS or Azure, Stratus Red Team is the equivalent of Atomic Red Team for the cloud control plane layer.

Metrics That Define Detection Program Health

Measuring detection program maturity requires metrics that are objectively measurable and operationally meaningful. The following five metrics collectively describe a detection program's health.

1. Mean Time to Detect (MTTD)

MTTD measures the elapsed time from when an attacker technique occurs to when a detection fires. It is the primary operational output metric for a detection program. To measure MTTD accurately requires: a timestamp for when the adversary technique occurred (from a red team exercise, a known compromise timeline in retrospect, or an adversary emulation test), and a timestamp for when the corresponding SIEM or EDR alert fired. MTTD under 1 hour for high-severity techniques indicates a detection program capable of containing attacks before significant damage occurs. MTTD over 24 hours for high-severity techniques indicates a detection program that cannot prevent major compromise even when its rules eventually fire.
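
A minimal sketch of that calculation, assuming paired execution and alert timestamps from an emulation exercise (the data shape is illustrative):

# mttd.py - MTTD from paired technique-execution and alert timestamps
from datetime import datetime
from statistics import mean

# (technique, executed_at, alert_fired_at); None means no detection fired
runs = [
    ("T1059.001", datetime(2025, 1, 10, 9, 0),  datetime(2025, 1, 10, 9, 6)),
    ("T1021.002", datetime(2025, 1, 10, 9, 30), datetime(2025, 1, 10, 10, 15)),
    ("T1048.003", datetime(2025, 1, 10, 10, 0), None),  # coverage gap
]

minutes = [(fired - executed).total_seconds() / 60
           for _, executed, fired in runs if fired is not None]

print(f"MTTD: {mean(minutes):.1f} minutes over {len(minutes)} detected techniques")
print("Undetected:", [t for t, _, fired in runs if fired is None])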

2. Detection Coverage Percentage

The fraction of your priority ATT&CK techniques (those used by threat actors relevant to your industry and attack surface) with active, validated detection rules. Measured by: (number of priority techniques with active validated rules) / (total number of priority techniques). "Validated" means the rule was tested with a real or simulated technique execution and confirmed to fire. A rule that exists in the SIEM but was never tested does not count as validated coverage.

3. False Positive Rate per Rule

The percentage of rule-generated alerts that analysts close as false positives after triage. Measured per rule, not as an aggregate. Rules with a false positive rate above 20% are candidates for tuning or deprecation. Rules with a false positive rate above 50% are consuming more analyst time than they save and should be considered harmful until tuned. Tracking false positive rates per rule requires an alert triage workflow that captures analyst disposition (true positive, false positive, benign true positive) on every alert.
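
Given per-alert dispositions, the per-rule calculation is simple arithmetic; a sketch with an illustrative export format:

# fp_rate.py - per-rule false positive rate from triage dispositions
from collections import Counter

# (rule_id, disposition) pairs exported from the triage workflow;
# dispositions: tp (true positive), fp (false positive), btp (benign true positive)
alerts = [
    ("win-bruteforce-001", "fp"), ("win-bruteforce-001", "fp"),
    ("win-bruteforce-001", "tp"), ("win-ps-encoded-001", "tp"),
    ("win-ps-encoded-001", "btp"),
]

totals, fps = Counter(), Counter()
for rule_id, disposition in alerts:
    totals[rule_id] += 1
    fps[rule_id] += disposition == "fp"

for rule_id in totals:
    rate = 100 * fps[rule_id] / totals[rule_id]
    flag = "  <- tune or deprecate" if rate > 20 else ""
    print(f"{rule_id}: {rate:.0f}% FP across {totals[rule_id]} alerts{flag}")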

4. Rule Deployment Frequency

The number of new or significantly updated detection rules deployed to production per month. This is a velocity metric: it measures whether the detection program is improving over time. A target of 5-20 new or updated rules per month per detection engineer is a reasonable production-quality benchmark. Lower velocity indicates either inadequate detection engineering capacity or process friction that slows rule development and deployment.

5. ATT&CK Technique Coverage Change Rate

The month-over-month change in the number of priority ATT&CK techniques with validated detection coverage. This metric connects deployment velocity to coverage outcomes: a team deploying 20 rules per month but covering the same techniques repeatedly is not improving coverage. Tracking this metric ensures that new rule development is directed at coverage gaps rather than redundant coverage of already-detected techniques.

Building the Maturity Roadmap: What to Do at Each Level to Advance

Advancing detection program maturity requires specific, concrete actions at each level. The following roadmap describes the highest-leverage actions to move from one level to the next.

Advancing from Level 0 to Level 1 (2-4 months):

The first priority is reducing false positive volume from vendor-default rules. Identify the 10 highest-volume rules by alert count and tune them with documented exclusions. A rule that generates 200 alerts per day and has a 95% false positive rate, once tuned to 20 alerts per day with a 20% false positive rate, frees enormous analyst capacity. Document each rule: what it detects, why it fires, and what constitutes a false positive. Use a ticketing system to track all rule changes if Git is not yet in place. Reduce overall alert volume by 30-50% without disabling any rules that generate true positive alerts.
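
To illustrate what a documented exclusion looks like in rule form, here is a hypothetical Sigma-style rule for the brute force scenario described earlier. The subnet, ticket reference, and field names are placeholders, and the alert threshold itself would live in the SIEM's scheduling or aggregation layer:

title: Failed Logon Burst (Tuned)
status: stable
description: Failed network logons excluding known-benign helpdesk sources
author: detection-engineering
date: 2025/01/15
tags:
    - attack.credential_access
    - attack.t1110
logsource:
    product: windows
    service: security
detection:
    selection:
        EventID: 4625
        LogonType: 3
    filter_helpdesk:
        # Documented exclusion: helpdesk password-reset subnet generates
        # legitimate failed-logon bursts (see ticket DET-142)
        IpAddress|cidr: '10.20.0.0/16'
    condition: selection and not filter_helpdesk
falsepositives:
    - Password resets from sources not yet in the helpdesk exclusion
level: medium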

Advancing from Level 1 to Level 2 (3-6 months):

Migrate detection content to Git version control. Establish a peer review process for rule changes. Map current detection rules to ATT&CK techniques and produce the first coverage assessment. Identify the top 20 priority ATT&CK techniques based on threat intelligence for your industry (reference CISA advisories, threat actor profiles from your TIP or commercial intel feeds). Write 2-5 custom detection rules per month targeting coverage gaps. Start measuring false positive rates per rule and MTTD for known-good test scenarios.

Advancing from Level 2 to Level 3 (6-12 months):

Implement a CI/CD pipeline for detection rules. Establish a regular Atomic Red Team testing cadence (monthly coverage validation against the top 20 priority techniques). Hire or designate dedicated detection engineers separate from SOC analyst roles. Integrate threat hunting into the detection development cycle: successful hunts must produce a detection rule within two weeks. Publish a monthly detection coverage report to security leadership. Establish an SLA for new detection deployment after a threat intelligence finding identifies a new relevant technique: target 5 business days from intelligence finding to deployed rule.

Advancing from Level 3 to Level 4 (12-18 months from Level 3):

Automate detection coverage testing in the CI/CD pipeline. Integrate threat intelligence feeds into automated detection development task creation (new threat actor technique reported triggers a Jira ticket for coverage assessment). Implement a post-incident detection retrospective process that produces at least one new detection rule within 5 business days of every significant incident. Publish a quarterly detection engineering roadmap tied to the organization's threat model updates. Establish detection program KPIs (MTTD, coverage percentage, false positive rate) as management-level metrics reported to CISO leadership.

Detection Engineering Team Structure and Career Path

Detection engineering as a formal discipline is less than a decade old. Many organizations do not yet have detection engineering as a defined role, and career paths in the space are still being established. This section describes what effective detection engineering team structures look like and how the career path typically develops.

Team structure at different organization sizes:

For organizations with fewer than 50 SOC analysts, a common effective structure is a Detection Engineering pod of 2-4 engineers embedded within the security operations function but with a separate mandate (not alert triage). These engineers report to the SOC manager or VP of Security Operations, have their own Jira or project tracking board for detection development tasks, and hold a weekly sync with SOC analysts to gather false positive data and investigation findings that should generate new detection rules.

For organizations with 50+ SOC analysts, detection engineering typically becomes its own team of 5-15 engineers led by a Detection Engineering Manager or Principal Detection Engineer. This team may split into specializations: one group focused on SIEM rule development, another focused on EDR rule and exclusion management, and a third focused on cloud threat detection. Adversary emulation (running Atomic Red Team and CALDERA testing) may be managed by this team or by a dedicated purple team.

Skills that define detection engineering:

The detection engineer skill set spans security knowledge and software engineering:

  • Deep understanding of MITRE ATT&CK: technique mechanics, telemetry sources that surface each technique, and evasion variations
  • Fluency in at least one query language (KQL, SPL, EQL) and competence in a second
  • Version control (Git), CI/CD pipeline development (GitHub Actions, GitLab CI), and scripting (Python)
  • Data engineering: understanding how logs are parsed, normalized, and indexed in the SIEM, and how to design new log sources for ingestion
  • Adversary emulation: ability to run and interpret Atomic Red Team and CALDERA test results

Career progression:

  • Junior Detection Engineer: writes detection rules under senior review, runs ART tests, manages rule exclusion documentation, analyzes false positive patterns from SOC feedback
  • Detection Engineer: independently develops and deploys detection rules, designs coverage testing programs, contributes to the threat model, mentors junior engineers
  • Senior Detection Engineer: leads coverage strategy, develops CI/CD pipeline tooling, produces threat model updates, and defines detection standards for the team
  • Principal Detection Engineer / Detection Engineering Manager: sets multi-year detection roadmap, liaises with threat intelligence and red team functions, manages team staffing and professional development, reports coverage metrics to CISO

The bottom line

Detection program maturity is not a function of budget or tooling. It is a function of discipline: the discipline to tune rules before deploying them, to store detection content in version control, to test rules against real attack techniques before declaring coverage, and to measure the program's effectiveness with objective metrics rather than alert counts. Most organizations can advance from Level 0 to Level 2 in 6-12 months without adding headcount, by redirecting analyst time from false positive triage to systematic tuning and coverage development. The jump to Level 3 requires dedicated detection engineering roles and a CI/CD investment. Level 4 is where detection programs stop reacting to attackers and start structurally narrowing the window of opportunity they have to operate undetected.

Frequently asked questions

What is detection engineering and how is it different from threat hunting?

Detection engineering is the systematic practice of developing, testing, deploying, and maintaining detection rules in SIEM, EDR, and other monitoring systems so that known threat techniques are automatically detected when they occur. Threat hunting is the proactive, hypothesis-driven investigation of your environment to find evidence of threats that existing detection rules may have missed. The two practices are complementary: threat hunting discovers new attacker techniques in your environment that do not yet have detection rules, and detection engineering takes those hunting findings and converts them into automated detections so the same technique is caught automatically in the future. In mature teams, successful threat hunts always result in a new detection rule; in immature teams, hunting findings are documented but never operationalized, so the same technique must be hunted for repeatedly.

What does a Level 0 detection program look like in practice?

A Level 0 detection program relies entirely on vendor-default rules from the SIEM, EDR, and other security tools without any customization, curation, or testing. Detection rules are never written or modified by the security team. Alerts are triaged reactively with no documented process for converting investigation findings into new detections. There is no version control for detection content, no testing framework, and no measurement of detection coverage. Alert fatigue is common because default rules generate high false positive rates that have never been tuned. When a threat actor uses a technique not covered by vendor defaults (which is most sophisticated attacker activity), detection does not occur. The security team is entirely dependent on the vendor's release cadence for coverage improvements. This is the most common state among enterprise security teams with fewer than 10 SOC analysts.

What is Detection-as-Code and what tools support it?

Detection-as-Code treats detection rules as software artifacts that are version-controlled in Git, peer-reviewed via pull requests, automatically tested in CI/CD pipelines, and deployed programmatically rather than manually. The core tooling stack includes: a Git repository (GitHub or GitLab) for storing detection rule files, a rule format that enables automation (Sigma is the most popular vendor-neutral format; native formats like KQL or SPL are also viable), a CI/CD platform (GitHub Actions, GitLab CI, or Jenkins) that runs tests on rule changes, an automated testing framework (Detection-as-Code pipelines commonly use Atomic Red Team to generate test events and verify that rules fire), and a deployment mechanism (Sentinel deployment via Azure DevOps pipelines, Splunk app deployment via REST API or Splunk packaging tools). The Sigma project's pySigma library provides Python-based rule compilation from Sigma format to platform-specific query languages, which is the foundation for multi-platform Detection-as-Code pipelines.

How do I use Atomic Red Team to test my detection coverage?

Atomic Red Team (ART) is a library of test cases, called atomics, each mapped to a specific MITRE ATT&CK technique. Each atomic provides a small, contained simulation of the technique on a target system. To use ART for detection coverage testing: first identify which ATT&CK techniques you want to validate coverage for, then run the corresponding Atomic tests on a test system with your monitoring stack active (the same agents and log forwarding that your production systems have). After each test, verify in your SIEM or EDR that an alert or detection fired for the expected technique. Document the results: detected, not detected, or detected with tuning needed. ART tests are executed via the Invoke-AtomicRedTeam PowerShell module (`Import-Module Invoke-AtomicRedTeam; Invoke-AtomicTest T1059.001`), which also runs on Linux and macOS under PowerShell Core. Run ART in a dedicated test environment, never in production, as some tests execute real attacker commands.

What metrics should I track to measure detection program maturity?

Five metrics provide the most actionable picture of detection program health. Mean Time to Detect (MTTD), measured as the time from when a threat technique occurs to when an alert fires, quantifies whether your detections are fast enough to matter. Detection coverage percentage, measured as the fraction of your priority ATT&CK techniques with active, validated detection rules, shows whether your program is systematically addressing your threat model. False positive rate per rule, measured as the percentage of alerts that analysts close as false positives after investigation, identifies rules that consume analyst time without producing value. Rule deployment frequency, the number of new or updated detection rules deployed per month, measures the velocity of detection content improvement. ATT&CK technique coverage change rate, the month-over-month change in the number of priority techniques with validated coverage, shows whether new rule development is closing gaps rather than re-covering techniques that are already detected.

How many detection engineers does an enterprise security team need?

The ratio that consistently produces a high-maturity detection program is approximately one detection engineer per five to seven SOC analysts, with a minimum of two detection engineers for organizational redundancy. A 20-analyst SOC should have three to four dedicated detection engineers. In practice, most organizations have no dedicated detection engineers: detection rule development falls to senior analysts as a secondary responsibility alongside alert triage, which means detection content advances slowly and inconsistently. The business case for dedicated detection engineers is straightforward: each detection rule that catches an attacker technique automatically and generates a high-confidence alert replaces hours of manual threat hunting effort per week across the analyst team. A single detection engineer who deploys 20 well-tuned rules per month creates multiplicative leverage across the entire SOC.

What is the difference between detection engineering and security operations?

Security operations (the SOC) is focused on monitoring, triaging, and responding to alerts in real time. SOC analysts work the alert queue, investigate incidents, contain threats, and coordinate remediation. Detection engineering is a development discipline focused on building and maintaining the detection content that the SOC relies on. Detection engineers analyze threat actor techniques, write and test detection rules, validate coverage against a threat model, and continuously improve rule quality based on false positive data from the SOC. The relationship is similar to the relationship between software developers and QA engineers: detection engineers build the tools (rules) that analysts use to do their jobs. In small teams, the same person may do both; in mature programs, these are distinct roles with different skill sets. Detection engineers need software development skills (version control, CI/CD, testing) that are not required for SOC analyst roles.

Sources & references

  1. Palantir: A Practical Model for Conducting Cyber Threat Hunting
  2. Atomic Red Team: Open Library of Adversary Emulation Tests
  3. MITRE CALDERA: Automated Adversary Emulation Platform
  4. Mandiant: M-Trends 2024 Threat Intelligence Report
  5. SANS: Detection Engineering Summit Proceedings 2024
