80%
of exploited vulnerabilities are in application layer code, not infrastructure, per the Verizon DBIR 2024
6x cheaper
to fix a vulnerability during development than post-production, per NIST research on cost of defect correction
OWASP SAMM
provides 15 security practices across 5 business functions with 3 maturity levels each, for 45 measurable activities total
72%
of organizations run fewer than quarterly application security assessments, leaving months-long windows of exposure, per ESG 2024

Application security is where the modern attack surface lives. Infrastructure security controls, network segmentation, and perimeter defenses are necessary but insufficient when the primary attack vector is the application itself. Injection flaws, broken authentication, insecure direct object references, and vulnerable third-party dependencies represent the attack patterns that breach organizations at scale, and none of them are addressed by a firewall rule. An application security program embeds security into the software development lifecycle so that vulnerabilities are caught during development, not after exploitation.

This guide covers the full stack of a mature application security program: the OWASP SAMM maturity framework for assessment and roadmap building, SAST and SCA for code and dependency analysis, DAST for runtime testing, threat modeling for design-phase risk identification, penetration testing and bug bounty programs for external validation, and developer security training as the force multiplier that makes all other controls more effective. Whether you are building an AppSec program from zero or maturing an existing one, this guide provides the practitioner framework to make structured progress.

AppSec Program Foundations: Scope, Sponsorship, and OWASP SAMM

An application security program without CISO or CTO alignment is a collection of tools without authority to enforce change. Developer teams operating under delivery pressure will consistently deprioritize security findings over feature work unless security gates are established with executive backing and engineering leadership has committed to security quality as a delivery criterion alongside functionality and performance. The business case for AppSec investment is straightforward: it costs six times more to remediate a vulnerability in production than in development, and the cost multiplies further when a production vulnerability leads to a breach. Framing the program as cost avoidance and breach prevention rather than compliance overhead typically resonates with technical leadership.

OWASP SAMM provides the foundational maturity framework for assessing where your program is today and building a prioritized roadmap. The framework covers five business functions (Governance, Design, Implementation, Verification, and Operations) with fifteen security practices and three maturity levels per practice. Conducting a SAMM assessment at program inception gives you a structured baseline against which to measure progress and provides justification for investment priorities. A common finding in initial SAMM assessments is that organizations have implemented some verification practices (ad hoc penetration testing, some SAST tooling) but have significant gaps in governance (no security requirements defined, no training program) and design (no threat modeling, no security architecture review process). The SAMM gap analysis directs investment to the areas with the highest leverage.

Asset inventory and criticality tiering together form the scope definition step that determines which applications receive which level of AppSec investment. Not every application warrants the full program stack. A customer-facing payment processing application that handles PII and financial data warrants SAST in CI/CD, SCA with continuous monitoring, DAST in staging, annual penetration testing, and threat modeling for every major feature. An internal tooling application with no sensitive data exposure might warrant only SCA and basic SAST scanning. Tiering your application portfolio into three to four criticality levels and mapping the AppSec controls required for each tier makes the program scalable and prevents the mistake of applying the same controls uniformly regardless of risk.
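
The tier-to-controls mapping can be kept as simple structured data that tooling and reporting scripts consume. A minimal sketch in Python; the tier names and control identifiers are illustrative, not a standard:

```python
# Illustrative mapping of application criticality tiers to required
# AppSec controls. Tier names and control sets are examples only.
TIER_CONTROLS = {
    "critical": {"sast_ci", "sca_continuous", "dast_staging",
                 "annual_pentest", "threat_modeling"},
    "high":     {"sast_ci", "sca_continuous", "dast_staging"},
    "medium":   {"sast_ci", "sca_continuous"},
    "low":      {"sca_continuous"},
}

def required_controls(tier: str) -> set:
    """Return the control set for a tier, defaulting to the lowest tier."""
    return TIER_CONTROLS.get(tier, TIER_CONTROLS["low"])

def coverage_gaps(tier: str, deployed: set) -> set:
    """Controls the tier requires that the application does not yet have."""
    return required_controls(tier) - deployed

# Example: a payment app (critical tier) with only SCA and SAST deployed.
gaps = coverage_gaps("critical", {"sca_continuous", "sast_ci"})
```

A table like this also doubles as the program health report: iterating over the portfolio and emitting `coverage_gaps` per application gives the tier-coverage metric described above.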

An 18-to-24-month roadmap for reaching OWASP SAMM Level 2 across all business functions typically follows this sequence: months one through three establish governance (policy, training baseline, SAMM assessment), months four through six deploy SCA and secret scanning across all repositories, months seven through twelve add SAST to CI/CD pipelines for critical and high-tier applications, months thirteen through eighteen introduce threat modeling for new feature development and DAST in staging environments, and months nineteen through twenty-four establish formal security requirements, security architecture review for critical systems, and a security champions program. This cadence is aggressive for organizations starting from zero and should be adjusted based on team capacity and existing tooling investments.

SAST: Static Application Security Testing

Static application security testing analyzes source code, compiled bytecode, or binary executables without executing the application. SAST tools parse the code into an abstract syntax tree or data flow graph and apply rules that identify patterns associated with vulnerabilities: unsanitized user input flowing into SQL query construction (SQL injection), memory buffer writes without bounds checking (buffer overflow), hardcoded credentials in configuration files, use of deprecated cryptographic functions, and dozens of other categories that align with OWASP Top 10 and CWE classifications. Because SAST operates on code rather than a running application, it can be integrated into the development workflow at the earliest possible point, providing feedback to developers before code is committed to the repository.
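
The taint-flow pattern described above can be shown concretely. A minimal Python sketch using the standard-library sqlite3 module, contrasting the string-concatenation pattern a SAST data-flow rule would flag with the parameterized remediation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

def find_user_unsafe(name: str):
    # VULNERABLE: user input concatenated into the query string.
    # This is the source-to-sink flow a SAST rule detects: tainted
    # input reaches a SQL execution sink without sanitization.
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Remediation: parameterized query. The driver binds the value,
    # so attacker input can never change the SQL structure.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload that makes the WHERE clause always true.
payload = "' OR '1'='1"
```

Running `find_user_unsafe(payload)` returns every row in the table, while `find_user_safe(payload)` returns nothing, because the bound parameter is treated as a literal string rather than SQL.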

The SAST tool landscape spans both commercial and open-source options. Checkmarx and Veracode are established enterprise platforms with broad language support, compliance reporting features, and integrations with major CI/CD platforms and ticketing systems. Semgrep is a lightweight, fast, and highly customizable SAST tool that supports custom rule writing in a simple YAML format, making it a strong choice for organizations that want to build detection rules tailored to their specific codebase patterns. SonarQube provides a comprehensive code quality and security platform that blends SAST with technical debt tracking. Snyk Code offers developer-centric SAST with IDE integration and contextual remediation guidance. GitHub CodeQL is free for public repositories and available for private repositories through the paid GitHub Advanced Security offering, with a powerful query language for writing custom security analyses.

Integrating SAST into CI/CD pipelines requires a deliberate rollout strategy to avoid developer backlash. The recommended approach is to start SAST as a non-blocking scan on all pull requests, publishing findings as code review comments but not failing the build. This gives developers visibility into findings without blocking workflow. After two to four weeks, analyze the finding types and false positive rates. Tune the rule set to suppress known false positive patterns (test files, generated code, vendor directories) and configure the pipeline to block only on newly introduced findings of critical or high severity. Legacy findings should be tracked in a separate remediation backlog rather than blocking new development, because requiring developers to fix all historical findings before merging new code creates an impossible backlog that leads to teams disabling the tool. SAST coverage should be measured as the percentage of active repositories with scanning enabled and reported monthly as a program health metric.
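
The "block only newly introduced critical and high findings" policy can be sketched as a small gate script that diffs the current scan against a stored baseline. Finding identity here is simplified to (rule, file); real tools use fingerprints that survive line-number shifts:

```python
# Sketch of a "new findings only" CI gate: fail the build only when the
# current scan introduces critical/high findings absent from the baseline.
BLOCKING_SEVERITIES = {"critical", "high"}

def gate(baseline: list, current: list) -> list:
    """Return the findings that should fail the build."""
    known = {(f["rule_id"], f["file"]) for f in baseline}
    return [
        f for f in current
        if f["severity"] in BLOCKING_SEVERITIES
        and (f["rule_id"], f["file"]) not in known
    ]

baseline = [
    {"rule_id": "sqli", "file": "db.py", "severity": "high"},        # legacy debt
]
current = [
    {"rule_id": "sqli", "file": "db.py", "severity": "high"},        # pre-existing
    {"rule_id": "xss", "file": "views.py", "severity": "high"},      # newly introduced
    {"rule_id": "weak-hash", "file": "auth.py", "severity": "low"},  # non-blocking
]
blocking = gate(baseline, current)
```

Only the newly introduced XSS finding blocks the build; the legacy SQL injection finding stays in the remediation backlog and the low-severity finding is informational.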

Developer-facing SAST findings require context to be actionable. A finding that says "SQL Injection at line 147" without explanation of the attack scenario, exploit potential, and remediation approach produces developer frustration rather than vulnerability fixes. Tools like Semgrep and Snyk Code invest heavily in finding context: they explain why the pattern is dangerous, show an example of how it could be exploited, and suggest the specific code change that would remediate it. When evaluating SAST tools, treat finding quality and developer experience as first-class criteria alongside detection coverage and false positive rates.

SCA: Software Composition Analysis and Dependency Risk

Modern applications are composed predominantly of third-party open-source libraries. A typical Java or JavaScript application may contain hundreds of direct dependencies and thousands of transitive dependencies (dependencies of dependencies). When a vulnerability is discovered in any of those libraries, every application that includes it is potentially exposed. Software Composition Analysis tools scan your dependency manifests (package.json, pom.xml, requirements.txt, go.mod) and lock files to identify which versions of which libraries your application uses, then correlate those versions against vulnerability databases including the NVD, GitHub Advisory Database, and vendor-specific advisories.
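
The core correlation step is mechanically simple: parse a manifest, match pinned versions against advisory data. A minimal Python sketch; the advisory entries are real CVE identifiers for illustration, but exact-version matching is a simplification, since real SCA tools resolve semver ranges and transitive lock-file graphs:

```python
import json

# Toy advisory database: package -> list of (vulnerable_version, advisory).
# Real SCA tools resolve version *ranges* against NVD/GitHub Advisory data.
ADVISORIES = {
    "lodash": [("4.17.20", "CVE-2021-23337")],
    "log4j-core": [("2.14.1", "CVE-2021-44228")],
}

def scan_manifest(manifest_json: str) -> list:
    """Return (package, version, advisory) for each known-vulnerable pin."""
    deps = json.loads(manifest_json).get("dependencies", {})
    hits = []
    for pkg, version in deps.items():
        for vuln_version, advisory in ADVISORIES.get(pkg, []):
            if version == vuln_version:
                hits.append((pkg, version, advisory))
    return hits

manifest = '{"dependencies": {"lodash": "4.17.20", "express": "4.18.2"}}'
findings = scan_manifest(manifest)
```

The hard part in production is not this lookup but keeping the advisory data fresh and walking the full transitive dependency graph, which is why continuous monitoring matters more than one-off scans.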

The SCA tool landscape includes Snyk Open Source, which provides continuous dependency monitoring with remediation pull request automation; OWASP Dependency-Check, an open-source CLI tool that integrates with build systems; GitHub Dependabot, which is built into GitHub repositories and automatically opens pull requests for vulnerable dependencies; Mend (formerly WhiteSource), an enterprise platform with strong license compliance features; and Black Duck, widely used in large enterprises for both vulnerability scanning and comprehensive license compliance management. For most organizations starting their SCA program, GitHub Dependabot (if hosted on GitHub) or Snyk Open Source provides the best combination of coverage, automation, and developer experience.

SBOM (Software Bill of Materials) generation is an increasingly important SCA capability. An SBOM is a formal, machine-readable inventory of all components in an application, including open-source libraries, their versions, and their license information. Executive Order 14028 and subsequent CISA guidance established SBOM expectations for federal software suppliers, and SBOMs are increasingly required by enterprise customers in regulated industries as a procurement condition. SCA tools like Snyk, Black Duck, and Mend can generate SBOMs in standard formats (CycloneDX, SPDX) as part of the build process.

Exploitability filtering is the feature that separates mature SCA programs from immature ones. A raw CVE list from an SCA scan on a large application may contain hundreds of findings, most of which are in transitive dependencies that the application never actually calls in a way that exposes the vulnerability. Tools like Snyk provide reachability analysis that identifies whether a vulnerable function is actually called by your application code, dramatically reducing the actionable finding set. Prioritizing CVEs by reachability, CVSS score, and whether exploit code is available in the wild produces a much more actionable remediation queue than treating all CVEs equally.
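
The prioritization logic can be sketched as a sort key that weighs reachability and exploit availability ahead of raw CVSS. The ordering of the key fields is a judgment call, shown here as one reasonable choice rather than a standard:

```python
# Sketch of exploitability-aware CVE prioritization: reachable findings
# with public exploits outrank unreachable findings with higher CVSS.
def priority(cve: dict) -> tuple:
    return (
        cve["reachable"],          # is the vulnerable function actually called?
        cve["exploit_available"],  # does public exploit code exist?
        cve["cvss"],               # tie-breaker: base severity score
    )

def triage_queue(cves: list) -> list:
    return sorted(cves, key=priority, reverse=True)

cves = [
    {"id": "CVE-A", "cvss": 9.8, "reachable": False, "exploit_available": True},
    {"id": "CVE-B", "cvss": 7.5, "reachable": True,  "exploit_available": True},
    {"id": "CVE-C", "cvss": 9.1, "reachable": True,  "exploit_available": False},
]
queue = triage_queue(cves)
```

Note that the CVSS 9.8 finding lands last: it is in an unreachable code path, so the lower-scored but reachable and actively exploited CVE-B goes to the front of the remediation queue.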

DAST: Dynamic Application Security Testing

Dynamic Application Security Testing tests an application by interacting with it as an attacker would: sending crafted HTTP requests, manipulating parameters, attempting authentication bypasses, and probing for injection vulnerabilities in the running application. Unlike SAST, which analyzes code in isolation, DAST sees the application as it actually behaves in a deployed state, including the interaction between application logic, web frameworks, middleware, and the database layer. This makes DAST uniquely capable of finding vulnerabilities that only manifest at runtime: cross-site scripting, server-side request forgery, authentication and session management flaws, and configuration-based vulnerabilities like missing security headers.

The DAST tool landscape includes OWASP ZAP (Zed Attack Proxy), the open-source standard for web application security testing used both by automated pipelines and manual penetration testers; Burp Suite Enterprise, the commercial version of the industry-standard proxy tool with automation and CI/CD integration capabilities; StackHawk, a developer-oriented DAST platform built specifically for CI/CD pipeline integration with strong OpenAPI and GraphQL support; and Invicti (formerly Netsparker) and Acunetix, commercial enterprise DAST platforms with broad coverage of web vulnerability classes and compliance reporting features.

Authenticated scanning is the most important DAST configuration decision. An unauthenticated scan only tests the application's publicly accessible surface, missing the functionality behind login, which typically represents the majority of the application's attack surface. Configuring DAST tools to authenticate with a dedicated test account and maintain valid session state throughout the scan requires upfront effort but dramatically expands coverage. For applications using modern authentication (OAuth 2.0, SAML, JWT-based sessions), DAST authentication configuration can be complex and may require custom scripting. Budget time for authentication configuration in your DAST deployment plan.
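
The session-handling logic a DAST scanner needs can be sketched in miniature: log in once, attach the token to every probe, and re-authenticate when the application invalidates the session mid-scan. The target below is a stub that expires sessions on purpose; real tools like ZAP and StackHawk implement this against live HTTP endpoints:

```python
class StubApp:
    """Fake scan target that expires sessions after a few requests."""
    def __init__(self, session_ttl: int = 3):
        self.ttl = session_ttl
        self.remaining = 0
        self.logins = 0

    def login(self, user: str, password: str) -> str:
        self.logins += 1
        self.remaining = self.ttl
        return f"token-{self.logins}"

    def request(self, path: str, token: str) -> int:
        if self.remaining <= 0:
            return 401  # session expired mid-scan
        self.remaining -= 1
        return 200

def authenticated_scan(app: StubApp, paths: list) -> dict:
    """Probe each path, transparently re-authenticating on 401."""
    token = app.login("dast-svc", "********")
    results = {}
    for path in paths:
        status = app.request(path, token)
        if status == 401:  # session died: re-authenticate and retry once
            token = app.login("dast-svc", "********")
            status = app.request(path, token)
        results[path] = status
    return results

app = StubApp(session_ttl=3)
results = authenticated_scan(app, [f"/page{i}" for i in range(5)])
```

Without the re-authentication branch, every path after session expiry would report 401 and silently vanish from coverage, which is exactly how authenticated scans degrade into unauthenticated ones.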

API security testing requires specific DAST configuration because REST and GraphQL APIs do not have a browsable user interface that DAST tools can crawl. Instead, API DAST requires providing the tool with an API specification (OpenAPI/Swagger, GraphQL schema) that describes the available endpoints and their expected parameters. Tools like StackHawk and Burp Suite Enterprise support OpenAPI import and can generate and fuzz test cases from the specification. Without an API specification, DAST coverage of API endpoints is severely limited. This makes maintaining up-to-date OpenAPI documentation a security requirement, not just a developer convenience.

Threat Modeling: Finding Design Flaws Before Code is Written

Threat modeling is the practice of systematically analyzing how an application can be attacked during the design phase, before code is written. It identifies security requirements, design flaws, and trust boundary issues that cannot be found by code-level tools because they exist at the architectural level. A threat modeling session typically takes two to four hours for a feature or system, involves the development team, a security reviewer, and sometimes an architect, and produces a data flow diagram annotated with trust boundaries and a list of identified threats with mitigations. The most common complaint about threat modeling is that it takes time that teams do not feel they have. The counter-argument is that design flaws are the most expensive vulnerabilities to fix: they require architectural changes rather than code patches, and they often cannot be fully remediated without breaking changes.

The STRIDE methodology is the most widely used threat modeling framework for application security. STRIDE is an acronym for six threat categories: Spoofing (can an attacker impersonate a user or component?), Tampering (can data be modified in transit or at rest without detection?), Repudiation (can a user deny performing an action?), Information Disclosure (can sensitive data be exposed to unauthorized parties?), Denial of Service (can the system be made unavailable?), and Elevation of Privilege (can a user gain permissions they should not have?). For each trust boundary in the data flow diagram, the team works through the STRIDE categories to identify potential threats. Each identified threat is then assessed for likelihood and impact, and mitigations are defined as security requirements for the implementation.
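The mechanical core of a STRIDE pass, working through the six categories for each boundary-crossing data flow, can be sketched as follows. The flows and boundary flags are illustrative; in practice they come from the annotated data flow diagram:

```python
# Sketch of per-boundary STRIDE enumeration: for each data flow that
# crosses a trust boundary, emit one threat prompt per STRIDE category.
STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information Disclosure", "D": "Denial of Service",
    "E": "Elevation of Privilege",
}

def enumerate_threats(flows: list) -> list:
    prompts = []
    for flow in flows:
        if not flow["crosses_trust_boundary"]:
            continue  # only boundary-crossing flows get a full STRIDE pass
        for category in STRIDE.values():
            prompts.append(f"{category}: {flow['source']} -> {flow['dest']}")
    return prompts

flows = [
    {"source": "browser", "dest": "api-gateway", "crosses_trust_boundary": True},
    {"source": "api-gateway", "dest": "cache", "crosses_trust_boundary": False},
]
threats = enumerate_threats(flows)
```

The output is a prompt list, not a threat model: each generated line is a question the team answers in the session, with likelihood, impact, and mitigation recorded for those that apply.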

Threat modeling works best when it is triggered by specific development events: new feature design, significant architecture changes, new third-party integrations, and annual reviews of critical existing systems. Attempting to threat model every change is impractical and dilutes the practice. A tiered trigger model, where only features touching authentication, authorization, payment processing, PII, or external integrations require formal threat modeling while other changes use a lightweight security checklist, provides coverage proportional to risk without creating a bottleneck.

Threat modeling tools reduce the friction of the diagramming and documentation process. OWASP Threat Dragon is a free, open-source web-based tool for creating data flow diagrams and documenting threats. Microsoft Threat Modeling Tool is a free Windows application with built-in STRIDE analysis templates. IriusRisk is a commercial platform that combines threat modeling with requirements tracking and integrates with Jira for finding management. The tool choice matters less than the practice: a threat model documented in a whiteboard photo with a well-structured notes file is more valuable than a beautifully formatted document that never gets created because the tool is too complex.

Spoofing

Can an attacker impersonate a legitimate user, service, or component? Address with strong authentication, token validation, and mutual TLS for service-to-service communication.

Tampering

Can data be modified in transit or at rest without detection? Address with integrity checks, digital signatures, input validation, and encryption at rest.

Repudiation

Can a user deny performing an action? Address with non-repudiation logging, audit trails, and cryptographic signing of critical operations.

Information Disclosure

Can sensitive data be exposed to unauthorized parties? Address with least-privilege data access, encryption, output encoding, and error message sanitization.

Denial of Service

Can the system be made unavailable to legitimate users? Address with rate limiting, resource quotas, graceful degradation, and DDoS mitigation controls.

Elevation of Privilege

Can a lower-privileged user gain access to higher-privilege functions or data? Address with authorization checks on every privileged operation and role-based access control enforcement.

Penetration Testing and Bug Bounty Programs

Automated security testing tools find a significant portion of the known vulnerability class surface, but they miss logic flaws, business logic abuse, chained attack paths, and nuanced authorization issues that require human creativity and contextual understanding to discover. Manual penetration testing fills this gap by having security professionals actively try to attack the application using the same techniques an adversary would use. A web application penetration test scoping document should define the target applications, the authentication credentials and user roles the tester will operate as, whether social engineering is in scope, the rules of engagement (no denial of service, no data exfiltration of real customer data), and the reporting deliverables expected.

Penetration test cadence should be risk-driven. Critical applications handling payment card data, PII, or privileged access management functionality should be tested annually at minimum, and after any significant architecture change or major feature release. Lower-criticality applications can be tested every two years or on change triggers. The common mistake is treating the annual penetration test as the primary security validation mechanism rather than as a complement to the continuous testing pipeline of SAST, SCA, and DAST. When a penetration test consistently finds vulnerabilities that SAST should have caught, it indicates a SAST coverage or tuning problem rather than a penetration test frequency problem.

Bug bounty programs engage external security researchers to test your applications in exchange for monetary rewards for valid vulnerability reports. The program defines scope (which applications and domains are in scope), payout tiers by severity (critical findings typically pay $5,000 to $50,000 depending on program, high findings $1,000 to $10,000, and medium and low proportionally less), and the triage SLA within which the program operator commits to respond to submissions. Platforms including HackerOne, Bugcrowd, and Intigriti provide researcher communities, managed triage services, and legal safe harbor frameworks. A Vulnerability Disclosure Program (VDP) is the appropriate starting point for organizations new to external researcher engagement: a VDP provides a submission channel and safe harbor without monetary payouts, reducing volume and complexity while establishing the operational process for handling external reports.

The organizational prerequisite for a bug bounty is a vulnerability management process capable of triaging and remediating the incoming volume. Bug bounty programs on popular consumer-facing products can generate dozens to hundreds of submissions per month. Without a defined triage SLA, remediation queue, and engineering capacity allocation, submissions pile up, researchers grow frustrated at non-responses, and the program becomes a reputational liability rather than a security asset. Assign clear ownership of the program, including a dedicated triage function (either internal or via the platform's managed triage service), before launching.

Developer Security Training and Secure Coding Standards

All of the tools and processes in an application security program are more effective when developers understand why the vulnerabilities exist and how to avoid creating them in the first place. Developer security training is the force multiplier that shifts the program from reactive detection to proactive prevention. The most effective training model is not the annual compliance course that developers click through in 20 minutes; it is contextual, just-in-time training triggered at the moment a SAST tool identifies a vulnerability in the developer's code. When a SAST finding appears as a code review comment linking to a five-minute explanation of why that pattern is dangerous and how to fix it, the learning is immediately applicable and context-rich.

Secure coding standards define the organization's expectations for how specific vulnerability classes should be handled in code: how to perform parameterized queries instead of string concatenation for database access, how to validate and sanitize user input at the boundary, which cryptographic algorithms are approved for use and which are deprecated, how to handle secrets (never in source code, always via secrets management), and how to implement authentication and session management. Standards should be language-specific rather than generic, because the secure approach to SQL query construction in Python differs mechanically from Java, even though the underlying principle is the same. Publishing standards in a developer-accessible internal wiki or as a linked resource from SAST findings ensures they are accessible at the point of need.
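
The "never in source code, always via secrets management" rule can be illustrated with a short Python sketch: read the credential from the environment (populated by a secrets manager at deploy time) and fail fast when it is missing, rather than falling back to a hardcoded default. The variable name is illustrative:

```python
import os

def get_db_password() -> str:
    """Fetch the database credential from the environment, never from code."""
    password = os.environ.get("DB_PASSWORD")
    if not password:
        # Failing loudly beats a silent hardcoded fallback, which both
        # leaks into version control and masks misconfiguration.
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password

# Simulate the secrets manager injecting the credential at deploy time.
os.environ["DB_PASSWORD"] = "s3cr3t-from-vault"
password = get_db_password()
```

The anti-pattern this replaces, `password = os.environ.get("DB_PASSWORD", "changeme")`, is exactly what secret scanning and SAST default-credential rules are tuned to catch.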

Training platforms for developer security education include Secure Code Warrior, which provides language-specific hands-on coding challenges in a competitive format that developers actually engage with; Snyk Learn, which provides free, short-form security education modules triggered contextually from Snyk findings; and SANS AppSec courses, which provide deeper practitioner-level curriculum for developers who want to develop genuine security expertise. The selection of platform matters less than the incentive structure: training completion rates are consistently higher when participation is tied to career development criteria, when managers champion participation, and when internal recognition programs highlight developers who complete advanced security curricula.

A security champions program amplifies training investment by developing a distributed network of security-aware developers embedded in each engineering team. Champions receive additional training beyond the standard developer curriculum, participate in threat modeling sessions, review pull requests for security issues before they reach the central AppSec team, and serve as the first point of contact for security questions within their team. The program is most effective when champions are given dedicated time (two to four hours per month) for security activities, recognized formally in performance reviews, and connected to each other through a community of practice with regular meetups to share knowledge and emerging threat patterns. Measuring training effectiveness requires tracking pre-assessment and post-assessment scores for training participants, finding recurrence rates (whether the same vulnerability types reappear in the same teams after training), and security champion participation rates over time.

The bottom line

A mature application security program is built incrementally, not deployed all at once. The highest-ROI starting points are SCA (which immediately surfaces known CVEs in third-party dependencies with minimal developer friction) and SAST in CI/CD pipelines (which catches first-party code vulnerabilities before they reach production). Layer in threat modeling for new feature design and DAST in staging environments as the program matures, and invest in developer security training to reduce the rate at which vulnerabilities are introduced in the first place. Most organizations reach OWASP SAMM Level 2 within 18 to 24 months of consistent program investment, provided they have engineering leadership alignment and dedicated AppSec resources.

The measure of AppSec program success is not zero findings. That standard is unachievable and counterproductive, because a program that expands scanning coverage will always find more issues than it eliminates in the short term. The right measure is a decreasing trend in critical and high severity vulnerabilities reaching production over time, a reduction in mean time to remediate, and a developer community that treats security as a shared responsibility rather than an external gate. Programs that achieve those outcomes reduce breach probability in the place where attackers are most active: application code.

Frequently asked questions

What is the difference between SAST, DAST, and SCA?

SAST (Static Application Security Testing) analyzes source code, bytecode, or binaries without executing the application. It looks for vulnerabilities in the code itself, including injection flaws, hardcoded secrets, insecure functions, and patterns that match OWASP Top 10 categories. SAST runs during development and CI/CD pipelines and catches vulnerabilities early, but it cannot detect issues that only manifest at runtime. DAST (Dynamic Application Security Testing) tests the running application by sending attack payloads to its inputs and observing responses. It finds vulnerabilities like authentication flaws, session management issues, and server-side injection that require the application to be executing. DAST runs against a deployed instance in a staging environment and finds issues that SAST misses. SCA (Software Composition Analysis) analyzes the third-party libraries and open-source dependencies your application uses. It identifies known CVEs in those dependencies, flags license compliance issues, and generates a Software Bill of Materials. SCA is critical because modern applications are 70 to 90 percent third-party code by volume. A complete AppSec program uses all three: SAST for first-party code analysis, SCA for dependency risk, and DAST for runtime validation.

Where should I start if I have no AppSec program today?

Start with SCA (Software Composition Analysis) because it delivers the fastest return on investment with the least friction. SCA tools integrate into your existing repository and CI/CD pipeline in hours, immediately surface known CVEs in your third-party dependencies, and provide remediation guidance in the form of version upgrades. Most modern codebases have dozens of vulnerable dependencies that can be fixed with a dependency version bump, requiring no code changes. After SCA is operational, add SAST to your CI/CD pipeline as a non-blocking informational scan initially. Let findings accumulate for two to four weeks, assess the false positive rate and the volume of findings, and then tune the rule set before making SAST a blocking build gate. Starting SAST as a hard gate immediately typically generates developer backlash from high false positive rates. Run an OWASP SAMM assessment in parallel to establish your maturity baseline and build a prioritized 12-month roadmap. This sequence, SCA first, then SAST in observation mode, then threat modeling for new features, then DAST in staging, is the progression that consistently delivers sustainable AppSec programs rather than tool deployments that get disabled after developer friction.

What is OWASP SAMM and how do I use it?

OWASP SAMM (Software Assurance Maturity Model) is an open framework for assessing and improving the maturity of an organization's application security program. It is organized into five business functions: Governance (strategy, policy, education), Design (threat assessment, requirements, architecture), Implementation (secure build, secure deployment, defect management), Verification (architecture assessment, requirements testing, security testing), and Operations (incident management, environment management, operational management). Each function contains three security practices, and each practice has three maturity levels with defined activities and assessment criteria. To use SAMM, conduct assessment interviews with development leads, security teams, and operations to score your current state against each practice's level 1, 2, and 3 criteria. The output is a scorecard that shows where you are strongest and where your largest gaps are. Use the gap analysis to build a prioritized roadmap, focusing on reaching Level 1 across all practices before pursuing Level 2 in any single area. OWASP publishes a free assessment toolkit and guidance on the SAMM project site, and many consultancies offer facilitated SAMM assessments as a program kickoff engagement.

How do I integrate security testing into CI/CD without slowing down developers?

The key to CI/CD integration without developer friction is selective blocking. Not every security tool should be a hard build gate that fails the pipeline. SAST should start as a non-blocking informational scan until the false positive rate is tuned below 10%, then graduate to blocking only on new critical and high findings, not the existing backlog. SCA should block on newly introduced CVEs above a defined severity threshold (CVSS 9.0+) but not on existing dependency vulnerabilities that require a separate remediation track. DAST should run in the staging environment as an asynchronous job, not in the primary developer pipeline, because DAST scans take 15 to 60 minutes and blocking deployments on DAST completion creates unacceptable pipeline latency. Secret scanning is the one control that should be a hard gate from day one with near-zero exceptions: committed credentials should never reach the repository. Shift-left means meeting developers where they work, including IDE plugins that surface SAST findings inline as the developer writes code, before code is committed. Tools like Semgrep and Snyk Code have IDE integrations that provide feedback at development time, which is far less disruptive than failing a CI job after a commit.

Should I run a bug bounty program?

A bug bounty program is appropriate when your application security program is mature enough to handle the incoming vulnerability reports, not as a substitute for internal AppSec practices. Running a bug bounty before you have SAST and regular penetration testing in place typically results in a flood of findings that your team cannot triage and remediate quickly, which frustrates researchers and creates reputational risk if critical findings are not addressed promptly. The prerequisite for a successful bug bounty is having a mature vulnerability management process, a defined triage SLA (typically 24 to 48 hours for critical findings), and remediation capacity to close findings within your defined SLA windows. Start with a Vulnerability Disclosure Program (VDP) rather than a paid bug bounty: a VDP provides a legal safe harbor and submission mechanism for researchers who find issues without paying for reports, which generates lower volume and lower researcher expectations. Graduate to a paid bug bounty on a scoped set of applications once the triage and remediation process is proven. Platforms including HackerOne, Bugcrowd, and Intigriti provide program management, researcher vetting, and triage support services.

What is a security champions program?

A security champions program embeds security expertise directly in engineering teams by identifying and developing individual developers who are passionate about security, training them in AppSec skills beyond the baseline developer curriculum, and giving them a formal role as the security touchpoint for their team. Security champions are not security engineers; they remain full-time developers. But they serve as the first line of security review for their team's code, participate in threat modeling sessions, evangelize secure coding practices within their team, and act as a bridge between the central security team and development teams. The program works because security issues are often caught earlier and resolved faster when the person reviewing the code is embedded in the team and understands the codebase, rather than being an external security team member reviewing changes in isolation. A successful security champions program requires dedicated training time (typically two to four hours per month), recognition and career progression incentives for champions, regular community meetups to share knowledge across champions in different teams, and explicit executive support for the time investment. Organizations with active security champions programs consistently show lower rates of critical findings reaching production than those relying solely on centralized AppSec teams.

How do I measure the effectiveness of an AppSec program?

AppSec program effectiveness is measured through a combination of finding trends, process compliance metrics, and developer behavior indicators. The primary outcome metric is the rate at which critical and high severity vulnerabilities reach production: a mature program should show a decreasing trend in production vulnerabilities over time, not necessarily zero findings but fewer findings of high severity. Track mean time to remediate by severity tier (critical findings should be remediated within 24 to 72 hours, high within 30 days) and the percentage of findings remediated within SLA. Process compliance metrics include SAST pipeline coverage (what percentage of repositories have SAST enabled), SCA coverage, and DAST scan frequency for production-facing applications. Developer behavior indicators include finding recurrence rate (are the same vulnerability types recurring in the same codebases, which indicates training is not landing?), security champion participation rates, and pre/post assessment scores from security training. For executive reporting, frame the metrics as risk reduction: an AppSec program that reduced critical production vulnerabilities by 60% over 18 months represents a quantifiable reduction in breach probability and potential regulatory exposure. Avoid presenting raw finding counts as the primary metric, because finding counts increase as scanning coverage expands, which makes a maturing program look like it is getting worse.
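The MTTR and SLA-compliance metrics above reduce to a straightforward computation over closed findings. A hedged sketch, assuming hypothetical finding records with `severity` and `days_to_remediate` fields; the SLA windows are illustrative values consistent with the tiers discussed above:

```python
from statistics import mean

# Illustrative remediation SLA windows in days (critical within 72 hours,
# high within 30 days, as discussed above; medium is an assumed value).
SLA_DAYS = {"critical": 3, "high": 30, "medium": 90}

def appsec_metrics(closed_findings):
    """Return (MTTR by severity in days, percent of findings closed within SLA)."""
    by_sev = {}
    for f in closed_findings:
        by_sev.setdefault(f["severity"], []).append(f["days_to_remediate"])
    mttr = {sev: mean(days) for sev, days in by_sev.items()}
    within_sla = [f for f in closed_findings
                  if f["days_to_remediate"] <= SLA_DAYS[f["severity"]]]
    sla_pct = 100.0 * len(within_sla) / len(closed_findings)
    return mttr, sla_pct
```

Reporting the trend of these two numbers per quarter, segmented by severity, is more defensible for executives than raw finding counts, for the coverage-expansion reason noted above.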

Sources & references

  1. OWASP Software Assurance Maturity Model (SAMM)
  2. OWASP Top 10
  3. NIST Secure Software Development Framework
  4. Microsoft Security Development Lifecycle (SDL)
  5. CISA Secure by Design

Eric Bang
Author

Founder & Cybersecurity Evangelist, Decryption Digest

Cybersecurity professional with expertise in threat intelligence, vulnerability research, and enterprise security. Covers zero-days, ransomware, and nation-state operations for 50,000+ security professionals weekly.