3,000+
Production-quality Sigma rules in the SigmaHQ repository as of 2026, covering Windows, Linux, cloud, and network log sources
15+
SIEM backends supported by sigma-cli for single-source rule conversion
10x
Faster cross-platform detection deployment when using Sigma versus writing native queries per SIEM
72%
Of detection engineers report maintaining parallel rule sets per SIEM platform as their top detection program bottleneck

Sigma is a vendor-neutral, open standard for writing security detection rules. A Sigma rule written once converts to SPL for Splunk, KQL for Microsoft Sentinel and Elastic, Lucene queries for OpenSearch, and a dozen other SIEM-native query languages via the sigma-cli converter. This portability solves the detection engineering problem that every team with multiple SIEM platforms or a planned migration faces: maintaining parallel rule sets per platform.

This guide is for detection engineers and SOC leads who want to write and deploy production Sigma rules. We cover the full rule structure, detection condition syntax, logsource configuration, the conversion workflow with sigma-cli, tuning strategies, and two annotated real-world examples: PsExec lateral movement detection and Mimikatz LSASS access.

Sigma Rule Anatomy: Every Field Explained

A Sigma rule is a YAML document with a defined schema. Understanding every field is prerequisite to writing rules that convert correctly and fire with appropriate fidelity.

The metadata fields — title, id, status, description, references, author, date, modified, and tags — define what the rule detects and provide context for the analyst receiving the alert (the logsource block, covered next, is a separate top-level element). The title should be specific enough to explain the threat at a glance: 'Mimikatz LSASS Memory Access via OpenProcess' is useful; 'Malware Detected' is not. The status field uses a defined vocabulary: stable (production-ready, well-tested), test (deployed but under validation), experimental (new rule, expect false positives), deprecated (replaced by a better rule), or unsupported (platform-specific limitations). The tags field maps to MITRE ATT&CK technique IDs in the format attack.t1003.001 — this mapping is what enables ATT&CK coverage heatmap generation from your deployed rule set.
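A minimal sketch of the metadata header (the values are illustrative; the UUID and date are placeholders):

    title: Suspicious Scheduled Task Creation via schtasks.exe
    id: 92054a93-0000-4a4b-8f4d-0123456789ab    # placeholder UUID; generate a fresh one per rule
    status: experimental
    description: Detects schtasks.exe creating tasks that execute from user-writable paths
    references:
        - https://attack.mitre.org/techniques/T1053/005/
    author: Jane Analyst
    date: 2026-01-15
    tags:
        - attack.execution
        - attack.t1053.005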

The logsource block specifies which log category the rule operates against. It uses three fields: category (the log type, such as process_creation, network_connection, or file_event), product (the platform, such as windows, linux, or aws), and service (the specific service within the product, such as security, sysmon, or cloudtrail). The logsource block is what sigma-cli uses to select the correct field mappings when converting to a target SIEM.
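For example, a rule over process creation telemetry on Windows declares:

    logsource:
        category: process_creation
        product: windows

At conversion time, the pipeline resolves this pair to the concrete index, sourcetype, or table your deployment actually uses.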

The detection block is the core of the rule. It contains named selection blocks that define the conditions to match, and a condition expression that combines those blocks with logical operators. The falsepositives field documents known legitimate sources of rule triggering — this is critical for analyst context and suppression planning. The level field uses a five-value scale: informational, low, medium, high, or critical — and should reflect the likelihood of true positive combined with potential business impact.
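A skeletal detection block with falsepositives and level attached — the field values and the allowlisted parent process are illustrative placeholders, not recommendations:

    detection:
        selection:
            Image|endswith: '\rundll32.exe'
            CommandLine|contains: 'comsvcs.dll'
        filter_backup_agent:
            ParentImage|endswith: '\backup_agent.exe'    # hypothetical sanctioned process
        condition: selection and not filter_backup_agent
    falsepositives:
        - Administrative memory dumps via comsvcs.dll
    level: high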

Detection Condition Syntax: Keywords, Field Matches, and Aggregations

The detection block syntax supports three matching primitives that combine to cover most detection scenarios.

Keyword matching searches for strings anywhere in the log record without specifying a field name: useful for catching specific command-line strings or error messages that appear in variable fields across log formats. Field-value matching binds a condition to a specific field: CommandLine contains 'mimikatz' limits the match to the CommandLine field, not the entire event. Field-value matching is more precise and produces fewer false positives than keyword matching.
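Both primitives side by side, as a minimal sketch — the keyword list scans the whole event, while the selection binds to one field:

    detection:
        keywords:
            - 'sekurlsa::logonpasswords'        # matched anywhere in the log record
        selection:
            CommandLine|contains: 'mimikatz'    # matched only against CommandLine
        condition: keywords or selection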

Value modifiers extend field-value matching with additional match logic: contains checks for substring presence (CommandLine|contains: 'sekurlsa'); startswith and endswith anchor the match to the start or end of the value; re applies a regular expression; all requires every item in a list to match (the default is any — matching any item in the list is sufficient). Negation uses the not keyword in the condition expression: a selection conventionally named filter_legitimate (the name carries no special meaning) handles allowlisting by defining conditions that, when matched, exclude the event.
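A sketch exercising these modifiers together with the negation pattern; the allowlisted parent in the filter is a hypothetical example:

    detection:
        selection:
            Image|endswith: '\powershell.exe'           # positional match on the value's end
            CommandLine|contains|all:                   # every listed substring must be present
                - 'IEX'
                - 'DownloadString'
        filter_legitimate:
            ParentImage|endswith: '\deploy_runner.exe'  # hypothetical allowlisted automation
        condition: selection and not filter_legitimate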

Aggregation conditions enable threshold-based detection across time windows — essential for detecting techniques like Kerberoasting that are characterized by volume rather than a single event. The legacy (v1) syntax appended aggregation functions (count, sum, min, max, avg) with a by grouping field to the condition, alongside a timeframe attribute — for example selection | count() by Computer > 30 with timeframe: 5m, flagging when more than 30 Kerberos service ticket requests occur from a single host within five minutes. The current specification replaces these inline aggregations with separate Sigma Correlation rules that reference a base detection rule; pySigma-based converters support correlations, while the legacy pipe syntax is deprecated.
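A sketch of that Kerberoasting threshold in the correlation format, assuming a base detection rule carrying the name kerberos_tgs_request (the rule name, grouping field, and threshold are illustrative):

    title: Potential Kerberoasting - Service Ticket Request Volume
    status: experimental
    correlation:
        type: event_count
        rules:
            - kerberos_tgs_request    # references the base rule's name (or id)
        group-by:
            - Computer
        timespan: 5m
        condition:
            gt: 30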

The condition expression combines named selections with and, or, not, and parentheses. The most common pattern is (selection and not filter): define what you want to detect in selection, define known false positive sources in filter, then require the detection to match while the filter does not.


Annotated Example: PsExec Lateral Movement Detection

The following Sigma rule detects PsExec-based lateral movement by identifying the PSEXESVC service installation on a target system. PsExec creates this service on the remote host when executing commands, producing a distinctive sequence of events that this rule captures via Windows System Event ID 7045 (Service Installed, logged by the Service Control Manager).

The rule targets the windows/system logsource (Event ID 7045 — Service Installed), with selection blocks matching either a ServiceName containing 'PSEXESVC' or a service binary path referencing admin shares (ADMIN$ or C$) — the latter also catches renamed PsExec clones staged over a share. Note that the binary path field in System Event ID 7045 is ImagePath; ServiceFileName is the equivalent field in Security Event ID 4697. The filter block excludes known-legitimate deployment and must be populated with your environment's actual allowlist; because Event ID 7045 does not record a source IP, sanctioned use is typically excluded by target host, account, or service path rather than by admin subnet.
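A sketch of the full rule under those assumptions; the filter entry is a placeholder that must be replaced with your environment's sanctioned-use allowlist:

    title: PsExec Service Installation
    id: 42c575ea-0000-4c1f-9876-0123456789ab    # placeholder UUID
    status: test
    description: Detects installation of the PSEXESVC service or service binaries staged over admin shares
    tags:
        - attack.lateral_movement
        - attack.t1021.002
        - attack.s0029
    logsource:
        product: windows
        service: system
    detection:
        selection_name:
            EventID: 7045
            ServiceName|contains: 'PSEXESVC'
        selection_share:
            EventID: 7045
            ImagePath|contains:
                - 'ADMIN$'
                - 'C$'
        filter_sanctioned:
            Computer|startswith: 'IT-ADMIN-'    # placeholder: refine to your sanctioned-use pattern
        condition: 1 of selection_* and not filter_sanctioned
    falsepositives:
        - Sanctioned PsExec use by IT administration
    level: high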

Key tuning notes for this rule: the ServiceName match alone produces very few false positives in environments where PsExec is not a sanctioned administration tool. If PsExec is used legitimately by IT teams, the filter block must enumerate that sanctioned use precisely (specific hosts, accounts, or service paths) rather than being broadened until the rule is effectively disabled. Set status to stable after validating against 30 days of historical data. Tag with attack.t1021.002 (Remote Services: SMB/Windows Admin Shares) and attack.s0029 (PsExec software reference). Level should be high for environments where PsExec is not an authorized tool; medium where it is used legitimately by IT.

Annotated Example: Mimikatz LSASS Memory Access Detection

Mimikatz is the most widely used credential dumping tool in enterprise intrusions. Its primary execution path accesses the Local Security Authority Subsystem Service (LSASS) process memory to extract plaintext credentials, password hashes, and Kerberos tickets. Sysmon Event ID 10 (ProcessAccess) captures this access with the GrantedAccess rights field indicating what permissions were requested.

The detection block for this rule uses two complementary approaches: GrantedAccess value matching for the specific access rights Mimikatz requests (0x1010, 0x1038, 0x143a — hexadecimal values corresponding to PROCESS_VM_READ plus additional access rights used by the sekurlsa and lsadump modules) combined with a TargetImage|endswith match on lsass.exe. Using GrantedAccess values rather than TargetImage alone reduces false positives from legitimate processes that access LSASS for non-credential reasons (security software, backup agents, Windows Defender).

A supplementary detection uses CallTrace field matching for known Mimikatz memory load patterns — specifically the presence of unknown memory regions (indicated by a CallTrace containing 'UNKNOWN' strings) in the call chain from a process accessing LSASS. This catches reflective loading and in-memory execution of Mimikatz variants that do not write to disk.
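A sketch combining both approaches over Sysmon Event ID 10 (logsource category process_access); the security-product filter entry is a placeholder to be populated during tuning:

    title: Mimikatz LSASS Memory Access
    id: 7d2c14e5-0000-4f5e-8a2b-0123456789ab    # placeholder UUID
    status: test
    tags:
        - attack.credential_access
        - attack.t1003.001
    logsource:
        category: process_access
        product: windows
    detection:
        selection_access:
            TargetImage|endswith: '\lsass.exe'
            GrantedAccess:
                - '0x1010'
                - '0x1038'
                - '0x143a'
        selection_calltrace:
            TargetImage|endswith: '\lsass.exe'
            CallTrace|contains: 'UNKNOWN'           # unbacked memory regions from reflective loading
        filter_security_tools:
            SourceImage|endswith: '\MsMpEng.exe'    # placeholder: your endpoint protection processes
        condition: 1 of selection_* and not filter_security_tools
    falsepositives:
        - Endpoint protection and backup agents reading LSASS
    level: high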

Set level to high. False positives are rare: the specific GrantedAccess value combination is characteristic of credential dumping tooling. The main benign sources are endpoint security products that access LSASS for protection purposes — identify these during tuning and add them to the filter block by process name.

Converting and Deploying Rules with sigma-cli

sigma-cli is the official command-line tool for converting Sigma rules to SIEM-native query languages. It replaces the deprecated sigmac tool and supports the current pySigma backend architecture.

Install sigma-cli with pip install sigma-cli. Install the backend for your target SIEM: sigma plugin install splunk installs the Splunk backend; sigma plugin install microsoft365defender installs the Defender/Sentinel backend; sigma plugin install elasticsearch installs the Elastic backend.
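The installation sequence described above, in one block:

    pip install sigma-cli
    sigma plugin install splunk                  # Splunk (SPL)
    sigma plugin install microsoft365defender    # Defender/Sentinel (KQL)
    sigma plugin install elasticsearch           # Elastic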

Convert a single rule to Splunk SPL: sigma convert -t splunk -p splunk_windows rule.yml. The -p flag selects the pipeline, which provides the field name mappings for your specific log source configuration. Pipeline selection is critical: a rule converted with the wrong pipeline produces queries that reference field names that do not exist in your SIEM index, generating no results rather than an error.

For bulk conversion of the entire SigmaHQ repository: clone the sigma repository, then point sigma convert at a directory — directories are traversed for rule files, so sigma convert -t splunk -p splunk_windows rules/windows/ converts all Windows rules. Filter the rule set by status (stable only for production) and level (high and critical for initial deployment) to avoid deploying hundreds of experimental rules simultaneously.
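The two conversion modes side by side; the -s/--skip-unsupported flag tells sigma-cli to skip rules the backend cannot express instead of aborting the batch (check sigma convert --help for the options your version supports):

    # single rule
    sigma convert -t splunk -p splunk_windows rule.yml

    # bulk: directories are traversed for rule files
    git clone https://github.com/SigmaHQ/sigma.git
    sigma convert -t splunk -p splunk_windows -s sigma/rules/windows/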

Integrate Sigma into your CI/CD pipeline by running sigma convert as a pre-deployment step that validates rule syntax and generates platform-specific detection content from a single source-of-truth rule file. Store Sigma rules in git, review changes via pull request, and deploy converted queries to your SIEM via API.
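A minimal sketch of that gate as a GitHub Actions job — the workflow layout, paths, and the deploy step are illustrative assumptions, not a prescribed structure:

    name: sigma-ci
    on:
        pull_request:
            paths: ['rules/**']
    jobs:
        validate-and-convert:
            runs-on: ubuntu-latest
            steps:
                - uses: actions/checkout@v4
                - run: pip install sigma-cli && sigma plugin install splunk
                - run: sigma check rules/    # syntax and schema validation
                - run: sigma convert -t splunk -p splunk_windows -s rules/ -o converted.spl
                # a gated follow-on step would push converted.spl to the SIEM via API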

Tuning Rules and Contributing to SigmaHQ

Production Sigma rules require environment-specific tuning that no community rule can provide out of the box. The tuning process follows a standard pattern: deploy the rule in audit mode (log matches without alerting), collect 14 days of matches, categorize matches as true positives or false positives, build filter selections for known false positive sources, then promote to alerting mode.

The most valuable tuning investment is building environment-specific pipeline files that reflect your actual field names and log source configuration. Sigma's pipeline abstraction means a rule that converts correctly for one Splunk deployment may need field mapping adjustments for another if log source configurations differ.
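A minimal custom pipeline sketch in pySigma's YAML pipeline format; the right-hand field names represent a hypothetical custom schema, not a standard:

    name: custom-field-mapping
    priority: 30
    transformations:
        - id: map_process_fields
          type: field_name_mapping
          mapping:
              CommandLine: process.command_line
              Image: process.executable
              ParentImage: process.parent.executable

Pass the file to the converter with -p, alongside or instead of a built-in pipeline: sigma convert -t splunk -p my_pipeline.yml rule.yml.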

Contributing rules back to SigmaHQ benefits the community and improves rule quality through peer review. The contribution workflow: fork the SigmaHQ/sigma repository on GitHub, add your rule in the appropriate directory following naming conventions, ensure the rule passes sigma-cli validation (sigma check rule.yml), and open a pull request with a description of what the rule detects, why the detection conditions were chosen, and what false positive sources were observed during testing. Rules accepted into the main repository receive broader testing across community deployments, improving confidence and identifying edge cases faster than single-environment testing.
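The same workflow in command form (the branch name and rule path are illustrative):

    git clone https://github.com/<your-username>/sigma.git    # your fork of SigmaHQ/sigma
    cd sigma
    git checkout -b psexec-service-install-rule
    # place the rule in the matching rules/ subdirectory, then validate:
    sigma check rules/windows/builtin/system/win_system_service_install_psexec.yml
    # commit, push, and open a pull request against SigmaHQ/sigma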

The bottom line

Sigma is the correct format for any detection engineering program that operates across multiple SIEM platforms or anticipates future platform changes. Write detection logic once in Sigma, store it in version control, convert to platform-native queries via sigma-cli, and deploy through your existing content management process. Start with the SigmaHQ repository: clone it, filter for stable high-confidence rules matching your environment's log sources, and deploy via bulk conversion rather than writing rules from scratch. Build tuning into the deployment process — no community rule is production-ready without environment-specific allowlisting. Contribute improvements back upstream.

Frequently asked questions

What is the difference between Sigma, YARA, and Snort rules?

Sigma rules detect threats in log data — structured records of events like process creation, authentication, network connections, and file access. YARA rules detect threats in file content — they match byte patterns, strings, and structural characteristics of files and memory to identify malware families. Snort (and Suricata) rules detect threats in network traffic — they inspect packet content, headers, and connection patterns to identify malicious communication. The three formats are complementary: Sigma for log-based behavioral detection, YARA for file-based malware identification, and Snort/Suricata for network-based traffic inspection. Most mature security programs use all three.

Which SIEM backends does sigma-cli support?

sigma-cli supports backends for Splunk (SPL), Microsoft Sentinel and Defender XDR (KQL), Elastic (EQL and Lucene), QRadar (AQL), Chronicle (YARA-L), OpenSearch, InsightIDR, Datadog, Carbon Black, and others through the pySigma plugin architecture. Run sigma plugin list to see currently available backends. Backend support quality varies — Splunk, Elasticsearch, and Sentinel backends are the most mature and actively maintained. Verify conversion output against your SIEM's query engine rather than assuming converted queries are syntactically correct for your specific platform version.

How do I handle field name differences across SIEM platforms?

Field name mapping is handled by sigma-cli pipelines. A pipeline file maps Sigma's generic field names (like CommandLine, Image, ParentImage) to the actual field names in your SIEM's index schema. SigmaHQ maintains standard pipelines for common configurations: Sysmon-ingested-into-Splunk, Windows-events-ingested-into-Elastic, etc. If your field names differ from the standard pipeline (due to custom log parsing configurations), create a custom pipeline file that maps Sigma fields to your actual schema. Without correct field mapping, converted queries reference non-existent fields and return no results.

What log sources do I need to get the most value from Sigma rules?

For Windows environments, the highest-value log sources are Sysmon (covering process creation, network connections, file creation, registry events, and LSASS access) and Windows Security Event Log (covering authentication events, logon types, privilege use, and service installation). Without Sysmon, you lose access to the process-level behavioral data that the majority of high-fidelity Windows Sigma rules require. For cloud environments, CloudTrail (AWS), Azure Monitor activity logs, and GCP Cloud Audit Logs are the foundational sources. Deploy Sysmon with a curated configuration (SwiftOnSecurity or Olaf Hartong's modular config) before deploying Sigma rules that depend on Sysmon event IDs.

How do I measure Sigma rule effectiveness over time?

Track four metrics per rule in production: true positive rate (confirmed threats detected by this rule), false positive rate (benign events matching the rule), alert volume per day (rules generating more than 10 alerts per day without proportional true positives need tuning), and MITRE ATT&CK technique coverage (the share of techniques in your target threat profile covered by at least one deployed rule with a confirmed detection). Review these metrics monthly. Retire rules with zero true positives over six months unless they cover high-priority techniques where absence of detections may indicate lack of attacker activity rather than rule ineffectiveness.

Can Sigma rules be used for threat hunting as well as alerting?

Yes. Sigma rules serve both functions. For alerting, rules run continuously against incoming log data and fire when conditions are met. For threat hunting, rules are converted to queries and run retroactively against historical log data to find evidence of threats that may have occurred before the rule was deployed — or before the threat was known. The SigmaHQ repository includes hunting rules (tagged with status: experimental or with tags indicating hunting focus) designed for retroactive execution rather than continuous monitoring. When a new threat actor TTP is published, convert the associated Sigma rules and run them against 90 days of historical logs before deploying for continuous alerting.

Sources & references

  1. SigmaHQ — Sigma Rule Repository
  2. Sigma Specification — Rule Format Documentation
  3. pySigma — Python Sigma Library
  4. MITRE ATT&CK — Defense Evasion and Credential Access
  5. Nextron Systems — sigma-cli Tool


