- 85% of GCP security incidents involve misconfigured IAM permissions or exposed service account keys, per the Google Cloud Threat Horizons Report 2024.
- Security Command Center Standard tier is free for all Google Cloud organizations; the Premium tier with Event Threat Detection costs approximately $0.065 per asset per month.
- Zero service account keys is the target state Google recommends; Workload Identity Federation eliminates the need for downloaded JSON key files.
- 200+ built-in Organization Policy constraints are available in GCP for preventive security controls.

Google Cloud Platform has evolved its security model significantly over the past several years, moving from project-centric resource organization toward a comprehensive hierarchy with organization-level controls, centralized threat detection, and native secrets management. Despite these improvements, the Google Cloud Threat Horizons Report consistently identifies misconfigured IAM permissions and exposed service account keys as the root cause of the overwhelming majority of GCP security incidents. The platform provides strong security primitives, but those primitives require deliberate configuration.

This guide covers the configuration controls that matter most in practice: IAM least-privilege enforcement and the migration away from service account keys, VPC network architecture and service perimeter design, data protection through customer-managed encryption and secrets management, monitoring through Security Command Center and Cloud Audit Logs, governance through Organization Policy, container security in GKE, and security integration in CI/CD pipelines. Each section focuses on practitioner-level decisions rather than surface-level documentation of what features exist.

IAM and Access Control: Least Privilege in Google Cloud

Google Cloud IAM has three tiers of roles: basic roles (Owner, Editor, Viewer), predefined roles (service-specific roles like roles/storage.objectAdmin or roles/bigquery.dataViewer), and custom roles. Basic roles grant extremely broad permissions: the Editor role grants create, update, and delete permissions on most resources across the project, and the Owner role additionally grants IAM policy management. Assigning Owner or Editor to service accounts or even human users on production projects is a high-severity misconfiguration that gives any compromised identity broad blast radius across the project.

The correct approach is to use predefined roles scoped to the specific resource a principal needs to access. A Cloud Run service that needs to read from a specific Cloud Storage bucket should be granted roles/storage.objectViewer on that specific bucket, not on the project. A data pipeline that writes to BigQuery should be granted roles/bigquery.dataEditor on the specific dataset, not the Editor role on the project. IAM Recommender analyzes actual API usage over a rolling 90-day window and generates recommendations to replace overly permissive role bindings with minimum-permission alternatives. Running IAM Recommender across all projects in an organization regularly is one of the highest-return IAM hygiene activities available.
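
As a concrete sketch of bucket-scoped rather than project-scoped access, the snippet below uses the google-cloud-storage Python client; the project, bucket, and service account names are placeholders.

```python
from google.cloud import storage

# Hypothetical project, bucket, and workload service account.
client = storage.Client(project="my-project")
bucket = client.bucket("app-data-bucket")

# Read the current bucket IAM policy (version 3 supports conditional bindings).
policy = bucket.get_iam_policy(requested_policy_version=3)

# Grant read-only object access to one workload identity on this bucket only,
# instead of a project-wide role such as Editor or storage.objectAdmin.
policy.bindings.append(
    {
        "role": "roles/storage.objectViewer",
        "members": {"serviceAccount:run-app@my-project.iam.gserviceaccount.com"},
    }
)
bucket.set_iam_policy(policy)
```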

For human access, use groups in Google Workspace or Cloud Identity rather than individual user IAM bindings. Group-based access management scales better, simplifies access reviews, and ensures that role assignments are automatically revoked when a user leaves the organization and is removed from the group. Audit IAM bindings at the organization, folder, and project level on a quarterly cadence using the Cloud Asset Inventory API, looking for allAuthenticatedUsers and allUsers bindings (which grant access to any Google account or any internet user respectively), primitive role assignments on production projects, and service accounts granted high-privilege roles on the organization or folder level.
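
A sketch of that audit using the Cloud Asset Inventory Python client (google-cloud-asset), assuming a placeholder organization ID; the query syntax follows the search API's policy search fields.

```python
from google.cloud import asset_v1

client = asset_v1.AssetServiceClient()
scope = "organizations/123456789012"  # hypothetical organization ID

# Search every IAM policy in the organization for public principals.
request = {
    "scope": scope,
    "query": "policy:allUsers OR policy:allAuthenticatedUsers",
}
for result in client.search_all_iam_policies(request=request):
    # Each result names the bound resource and the project it belongs to.
    print(result.resource, result.project)
```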

Service account hygiene requires specific attention because service accounts authenticate both as identities (they can be impersonated by humans) and as resources (they can be granted roles like any other principal). The two most common high-risk patterns are service accounts with downloaded JSON key files and service accounts granted the Service Account Token Creator or Service Account Admin role at the project or organization level, which allows impersonating or managing any service account in scope. The target state is zero downloaded key files for GCP-native workloads (replaced by Workload Identity for GKE and attached service accounts for VM-based workloads) and zero downloaded key files for external workloads (replaced by Workload Identity Federation).

VPC Security: Firewall Rules, Private Access, and VPC Service Controls

GCP VPC firewall rules control traffic at the network level and are applied to VM instances based on network tags or service account membership rather than subnet boundaries (unlike AWS security groups, which are applied to network interfaces, or Azure NSGs, which are applied to subnets or NICs). Every VPC network carries two implied rules at the lowest priority: deny all ingress and allow all egress. The default network created automatically in new projects additionally includes permissive allow rules for common ports (SSH, RDP, ICMP) that should be removed in production environments.

Hierarchical firewall policies, configured at the organization or folder level, allow network security teams to define baseline rules that apply across all VPCs in scope; they are evaluated before project-level firewall rules and cannot be overridden by them. This is the network-layer counterpart to Organization Policy: a security team can define organization-level rules that deny all inbound management port traffic from the internet and allow specific egress patterns, then project teams operate within that baseline. Project-level firewall rules can add more permissive rules within the allowed space defined by hierarchical policies.

Private Google Access enables VM instances without external IP addresses to reach Google APIs and services (Cloud Storage, BigQuery, Pub/Sub, etc.) using internal IP addresses rather than routing through the internet. For the majority of Compute Engine workloads, assigning external IP addresses is unnecessary and creates attack surface. Use Private Google Access in combination with no external IP assignment as the default for all VMs, reserving external IPs only for resources that explicitly require direct internet connectivity. For GKE nodes, configure private cluster mode to eliminate external IP addresses from both nodes and the control plane.

VPC Service Controls are the strongest available preventive control for data exfiltration from GCP managed services. By creating a service perimeter around projects containing sensitive data, you prevent calls to in-scope APIs (like Cloud Storage or BigQuery) from succeeding if they originate from outside the perimeter, even with valid credentials. This prevents an attacker with stolen credentials from accessing BigQuery data from an external location or copying Cloud Storage data to a different project. Implementing VPC Service Controls in dry-run mode first is critical: the initial scan will surface legitimate cross-perimeter flows (CI/CD pipelines, data sharing with partner organizations, BigQuery data transfers) that require explicit ingress and egress rules before enforcement mode is activated.


Data Protection: CMEK, Secret Manager, and DLP

Google Cloud encrypts all data at rest by default using AES-256 with Google-managed encryption keys. Customer-managed encryption keys (CMEK) via Cloud Key Management Service (Cloud KMS) replace Google-managed keys with keys that you control in your GCP project or in an external key manager. CMEK provides two primary security benefits: you can revoke a key to prevent decryption of associated data even by Google infrastructure, and you have an audit log of every key operation including wrapping and unwrapping operations used during encryption and decryption of data. CMEK is supported for Cloud Storage, BigQuery, Compute Engine persistent disks, Cloud SQL, GKE Secrets, Pub/Sub, and Cloud Spanner, among other services. The key rotation policy should be set to 90 days or less for keys protecting regulated data.
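
For illustration, a minimal sketch of creating a Cloud KMS key with a 90-day rotation schedule using the google-cloud-kms Python client; the project, location, key ring, and key names are placeholders.

```python
import time
from google.cloud import kms

client = kms.KeyManagementServiceClient()
# Hypothetical project, location, and key ring names.
key_ring = client.key_ring_path("my-project", "us-central1", "data-keys")

key = {
    "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
    # Rotate automatically every 90 days; schedule the first rotation one day out.
    "rotation_period": {"seconds": 60 * 60 * 24 * 90},
    "next_rotation_time": {"seconds": int(time.time()) + 60 * 60 * 24},
}
created = client.create_crypto_key(
    request={"parent": key_ring, "crypto_key_id": "bq-cmek-key", "crypto_key": key}
)
# This resource name is what you reference as the CMEK for BigQuery, GCS, etc.
print(created.name)
```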

Secret Manager is the correct storage location for application secrets, database credentials, API keys, and certificates in GCP environments. It provides versioning (allowing zero-downtime secret rotation by creating a new version before deprecating the previous), IAM-controlled access with per-secret binding granularity, audit logging of all secret access operations, and optional automatic replication across regions. Storing secrets in environment variables, hardcoded in application code, or in Cloud Storage buckets in plaintext are all patterns that Secret Manager replaces. Applications running on GCP services should access secrets at startup via the Secret Manager API using the attached service account's credentials (for Compute Engine and Cloud Run) or Workload Identity (for GKE pods), never storing the secret value beyond the running process memory.
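
A minimal access pattern with the google-cloud-secret-manager client, assuming a placeholder project and secret name; the value lives only in process memory.

```python
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
# Latest enabled version of a hypothetical secret; credentials come from the
# attached service account or Workload Identity, not a key file.
name = "projects/my-project/secrets/db-password/versions/latest"

response = client.access_secret_version(request={"name": name})
db_password = response.payload.data.decode("UTF-8")
# Use the value directly; never write it to disk, environment files, or logs.
```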

Cloud Data Loss Prevention (Cloud DLP) API provides discovery and inspection capabilities for identifying sensitive data in Cloud Storage buckets, BigQuery tables, and Datastore instances. Organizations handling PII, payment card data, or health information should run periodic Cloud DLP inspection jobs across their data stores to identify data that has migrated to locations outside of expected security controls. Cloud DLP also provides de-identification transformations (tokenization, masking, pseudonymization) that can be applied to sensitive fields before data is made available to lower-privilege users or analytics pipelines. The Cloud DLP findings can be exported to Security Command Center, creating a unified view of data risk alongside infrastructure misconfigurations and threat detections.
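
As a small illustration of the inspection API, the sketch below uses the google-cloud-dlp client against an inline string; in practice inspection jobs target Cloud Storage or BigQuery, and the project and info types shown are examples.

```python
from google.cloud import dlp_v2

dlp = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"  # hypothetical project

item = {"value": "Contact jane.doe@example.com, card 4111-1111-1111-1111"}
inspect_config = {
    "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "CREDIT_CARD_NUMBER"}],
    "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
}
response = dlp.inspect_content(
    request={"parent": parent, "item": item, "inspect_config": inspect_config}
)
for finding in response.result.findings:
    # Each finding reports the detected info type and a likelihood score.
    print(finding.info_type.name, finding.likelihood)
```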

Preventing public GCS bucket exposure requires both preventive and detective controls. The storage.publicAccessPrevention Organization Policy constraint is the most reliable preventive control, blocking public access configuration at the org or folder level regardless of individual bucket settings. For buckets that legitimately need to serve public content (static websites, public software distribution), the explicit exception to this policy should be documented and reviewed periodically. Security Command Center Standard automatically surfaces public buckets as findings, providing detective coverage for environments where the preventive org policy cannot be applied.
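
A detective sweep can also be scripted; the sketch below, assuming the google-cloud-storage client and a placeholder project, flags any bucket whose IAM policy includes a public principal.

```python
from google.cloud import storage

client = storage.Client(project="my-project")  # hypothetical project
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

for bucket in client.list_buckets():
    policy = bucket.get_iam_policy(requested_policy_version=3)
    # A binding is public if any of its members is one of the public principals.
    public = [b for b in policy.bindings if PUBLIC_MEMBERS & set(b["members"])]
    if public:
        print(f"PUBLIC: {bucket.name} -> {[b['role'] for b in public]}")
```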

Monitoring: Security Command Center, Cloud Audit Logs, and Chronicle

Security Command Center provides the unified security monitoring foundation for GCP environments. At the organization level, SCC aggregates findings from Security Health Analytics (misconfiguration detection), Event Threat Detection (real-time threat detection using Google's threat intelligence applied to Cloud Audit Logs), Container Threat Detection (eBPF-based runtime threat detection for GKE pods), VM Threat Detection (memory-based malware detection for Compute Engine instances), and Web Security Scanner (automated vulnerability scanning for HTTP-accessible applications). SCC Premium is licensed per asset, making cost predictable and proportional to environment size.

Cloud Audit Logs generate three log types that together provide comprehensive visibility into GCP resource activity. Admin Activity logs capture API calls that create, modify, or delete resources (including IAM policy changes) and are always enabled and cannot be disabled. Data Access logs capture API calls that read resource configurations or user-provided data; these are disabled by default and must be explicitly enabled for each service, with particular priority on BigQuery, Cloud Storage, Cloud SQL, Cloud KMS, and Secret Manager for environments handling sensitive data. System Event logs capture GCP infrastructure activity and are always enabled. Retaining these logs for at least one year (90 days in hot storage plus archive) is required by many compliance frameworks and allows investigation of incidents discovered weeks or months after they occurred.

Exporting logs to BigQuery for long-term retention and analysis is the standard pattern for organizations building security analytics capabilities on GCP. Log sinks can be configured at the organization level to export all audit logs from all projects to a centralized BigQuery dataset in a dedicated logging project, providing a single query surface for cross-project investigation. Google Chronicle (now part of Google Security Operations) ingests these logs and applies Google-scale threat intelligence and detection rules built by Google's Mandiant team, providing SIEM and SOAR capabilities purpose-built for cloud environments. Pub/Sub can serve as an intermediate export target when logs need to be consumed by external SIEMs or SOAR platforms in real time.

Log-based alerts configured in Cloud Monitoring allow teams to trigger notifications from specific log entries without waiting for a scheduled SIEM rule evaluation cycle. High-value alert configurations include alerts on IAM policy changes at the organization or project level (using the protoPayload.methodName of SetIamPolicy), changes to Organization Policy constraints, creation or deletion of VPC firewall rules, disabling of audit logging, and creation of service account keys. These alerts serve as a fast-path notification channel for the highest-severity administrative actions while longer-correlation SIEM rules process the full log stream.
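
For reference, the sketch below uses the google-cloud-logging client with a placeholder project to run the same SetIamPolicy filter a log-based alert would match against the Admin Activity log.

```python
from google.cloud import logging  # google-cloud-logging client, not stdlib logging

client = logging.Client(project="my-project")  # hypothetical project

# Admin Activity entries recording IAM policy changes.
log_filter = (
    'logName:"cloudaudit.googleapis.com%2Factivity" '
    'AND protoPayload.methodName="SetIamPolicy"'
)
for entry in client.list_entries(
    filter_=log_filter, order_by=logging.DESCENDING, max_results=20
):
    print(entry.timestamp, entry.log_name)
```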

Compliance and Governance: Org Policy and Assured Workloads

The Organization Policy Service provides over 200 built-in constraints that enforce preventive security controls across all projects in a GCP organization. Unlike IAM policies which control who can take actions, Organization Policies control what actions can be taken by anyone, including project owners. High-priority constraints to enable at the organization level include compute.skipDefaultNetworkCreation (prevent automatic creation of the permissive default VPC network in new projects), compute.requireOsLogin (require OS Login for SSH to Compute Engine instances, eliminating project-wide SSH keys), iam.disableServiceAccountKeyCreation (prevent creation of service account JSON keys organization-wide), storage.publicAccessPrevention (prevent public access to Cloud Storage buckets), compute.vmExternalIpAccess (restrict external IP assignment to specific VM instances or prevent it entirely), and sql.restrictPublicIp (prevent Cloud SQL instances from being assigned public IP addresses).
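
As a sketch of applying one such constraint programmatically, assuming the google-cloud-org-policy client (orgpolicy_v2) and a placeholder organization ID; the same change is commonly made with Terraform or gcloud instead.

```python
from google.cloud import orgpolicy_v2

client = orgpolicy_v2.OrgPolicyClient()
org = "organizations/123456789012"  # hypothetical organization ID
constraint = "iam.disableServiceAccountKeyCreation"

# Enforce the boolean constraint organization-wide; folders and projects
# inherit it unless an explicit override is created lower in the hierarchy.
policy = {
    "name": f"{org}/policies/{constraint}",
    "spec": {"rules": [{"enforce": True}]},
}
created = client.create_policy(request={"parent": org, "policy": policy})
print(created.name)
```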

The GCP resource hierarchy (Organization, Folder, Project) is the primary mechanism for policy inheritance and blast radius containment. A well-designed hierarchy separates production, non-production, and shared infrastructure workloads into separate folders, applies environment-appropriate policy constraints at the folder level, and uses project-level isolation to limit the impact of any single compromised service account or misconfiguration. Production projects should have stricter Organization Policy constraints (deny service account key creation, deny public GCS buckets, deny public SQL IPs) while development folders may relax some constraints to preserve developer productivity.

Assured Workloads is Google's managed compliance solution for regulated industries, providing certified control packages for FedRAMP Moderate, FedRAMP High, HIPAA, PCI DSS, ITAR, and IL4 workloads. Assured Workloads configures a set of Organization Policy constraints, data residency requirements, and access control configurations appropriate for the selected compliance regime, and monitors the environment for control drift. For organizations with formal compliance certification requirements in these frameworks, Assured Workloads reduces the configuration burden of implementing the required controls while providing continuous compliance monitoring.

Security Health Analytics findings in Security Command Center are mapped to compliance framework controls for CIS Google Cloud Foundations Benchmark, PCI DSS, NIST 800-53, and ISO 27001, providing a continuous gap assessment against each framework. The compliance posture view in SCC Premium shows the percentage of controls met, controls with active findings, and controls with no findings, allowing compliance teams to track remediation progress over time and export evidence for audit purposes.

Container Security: GKE Hardening and Binary Authorization

Google Kubernetes Engine (GKE) clusters require deliberate hardening because the defaults prioritize ease of deployment over security. The most impactful cluster-level security configurations are enabling private cluster mode (which removes external IP addresses from nodes and routes API server access through a private endpoint), enabling Workload Identity (which replaces node-level service account tokens with per-pod short-lived credential tokens mapped to specific GCP service accounts), enabling the GKE network policy provider (Calico or Dataplane V2) to enforce NetworkPolicy resources for pod-to-pod traffic control, and disabling the legacy metadata server endpoint to prevent pods from querying node metadata for service account tokens.
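
A rough sketch of these cluster-level settings using the container_v1 Python client follows; the project, region, CIDR, and cluster names are placeholders, and production settings such as node pools, master authorized networks, and release channel are omitted.

```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
parent = "projects/my-project/locations/us-central1"  # hypothetical project/region

cluster = container_v1.Cluster(
    name="hardened-cluster",
    initial_node_count=1,
    # Private nodes: no external IPs; control plane reached via private endpoint.
    private_cluster_config=container_v1.PrivateClusterConfig(
        enable_private_nodes=True,
        enable_private_endpoint=True,
        master_ipv4_cidr_block="172.16.0.0/28",
    ),
    # Workload Identity: per-pod credentials mapped to GCP service accounts.
    workload_identity_config=container_v1.WorkloadIdentityConfig(
        workload_pool="my-project.svc.id.goog",
    ),
    # Dataplane V2 enforces Kubernetes NetworkPolicy without a separate add-on.
    network_config=container_v1.NetworkConfig(
        datapath_provider=container_v1.DatapathProvider.ADVANCED_DATAPATH,
    ),
)
operation = client.create_cluster(request={"parent": parent, "cluster": cluster})
print(operation.name)
```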

Binary Authorization is a GCP deploy-time security control that enforces a policy requiring all container images deployed to GKE to have attestations from approved signers before deployment is permitted. In practice, this means that a CI/CD pipeline vulnerability scanner must sign the image after passing a vulnerability scan, and the Binary Authorization policy on the cluster will reject any image without that signature. This prevents unvetted or tampered container images from being deployed to production even if an attacker gains write access to the container registry or the deployment pipeline. Binary Authorization policies can be set to dry-run mode for initial deployment to identify images that lack required attestations before enforcement begins.

Artifact Registry (replacing the older Container Registry) provides vulnerability scanning of stored container images through integration with Container Analysis. Vulnerability findings are categorized by severity and CVE identifier, and can be used as inputs to Binary Authorization attestation policy: images with Critical or High severity unfixed vulnerabilities fail the attestation step in CI/CD. Node auto-upgrade should be configured on all GKE node pools to apply Google-released node OS images automatically as they roll out through the cluster's release channel, ensuring that kernel patches and OS-level security fixes are applied without requiring manual operator action. This is particularly important for GKE Standard mode clusters where node management is not handled automatically by Google.

Pod Security Admission (the successor to Pod Security Policy, which was removed in Kubernetes 1.25) enforces security standards at the namespace level using three built-in profiles: Privileged (unrestricted), Baseline (prevents most known privilege escalation paths), and Restricted (strongly hardened: pods must run as non-root, drop capabilities, and use a seccomp profile). Production namespaces should be configured with the Baseline or Restricted profile enforced, with exceptions documented and approved for workloads with legitimate needs for elevated permissions. Container Threat Detection, part of Security Command Center Premium, provides runtime threat detection at the pod level using eBPF-based sensors.

DevSecOps: Shifting Security Left in GCP

Cloud Build is the native CI/CD service for GCP and runs as a service account that should be provisioned with the minimum required permissions. By default, Cloud Build runs with a default service account that carries broad project-level permissions (in many projects, the Editor role), which is excessively permissive: a compromised build pipeline or malicious dependency can modify IAM policies, access secrets, or exfiltrate data. Replace the default Cloud Build service account with a custom service account granted only the specific permissions the build pipeline requires: typically roles/artifactregistry.writer for pushing images, roles/run.developer for deploying Cloud Run services, and specific resource-level permissions for any GCP resources the pipeline needs to configure.

Cloud Build steps that include container image builds should integrate with Artifact Registry vulnerability scanning and Binary Authorization attestation signing as part of the build pipeline. A security gate step that queries the Container Analysis API for the scanned image's vulnerability findings and fails the build if Critical or High unfixed vulnerabilities are present ensures that only scanned, compliant images advance to deployment. Combining this with Binary Authorization enforcement on the target GKE cluster creates a two-point enforcement: the pipeline will not produce a signed attestation for a vulnerable image, and the cluster will not deploy an image without a valid signed attestation.
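
A hypothetical gate step might look like the sketch below, using the google-cloud-containeranalysis client; the project, image digest, and severity policy are placeholders, and field names follow the Grafeas v1 API.

```python
import sys

from google.cloud.devtools import containeranalysis_v1

# Hypothetical project and image digest produced earlier in the pipeline.
project = "projects/my-project"
image_url = "https://us-docker.pkg.dev/my-project/app/api@sha256:<digest>"

client = containeranalysis_v1.ContainerAnalysisClient()
grafeas = client.get_grafeas_client()

# Pull vulnerability occurrences recorded for this exact image digest.
occurrences = grafeas.list_occurrences(
    request={
        "parent": project,
        "filter": f'kind="VULNERABILITY" AND resourceUrl="{image_url}"',
    }
)
blocking = [
    o
    for o in occurrences
    if o.vulnerability.severity.name in ("CRITICAL", "HIGH")
    and not o.vulnerability.fix_available
]
if blocking:
    print(f"{len(blocking)} unfixed Critical/High CVEs; failing the build")
    sys.exit(1)
```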

Infrastructure as code scanning should be integrated into pull request workflows for all Terraform, Pulumi, or Google Cloud Deployment Manager configurations targeting GCP resources. tfsec and Checkov both include GCP-specific check libraries that identify misconfigurations including storage buckets without uniform bucket-level access, GKE clusters without private nodes, Cloud SQL instances without SSL required, and IAM bindings using basic roles. The Security Command Center API can be queried programmatically to check the security posture of existing infrastructure and surface findings as part of deployment pipeline quality gates.
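
As an illustration of that programmatic check, a posture query against the Security Command Center API with the google-cloud-securitycenter client; the organization ID and filter are placeholders.

```python
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()
# "-" selects every finding source under the (hypothetical) organization.
parent = "organizations/123456789012/sources/-"

finding_filter = 'state="ACTIVE" AND severity="HIGH"'
for result in client.list_findings(request={"parent": parent, "filter": finding_filter}):
    finding = result.finding
    # Category and resource name identify what is misconfigured and where.
    print(finding.category, finding.resource_name)
```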

Web Security Scanner, part of SCC Premium, provides automated OWASP-category vulnerability scanning for applications deployed on App Engine, Cloud Run, or accessible via GKE Ingress. It identifies common web vulnerabilities including XSS, mixed content, outdated JavaScript libraries, and insecure authentication configurations without requiring code access. Running Web Security Scanner scans on a weekly schedule against staging and production endpoints provides continuous detection of web-tier vulnerabilities that would otherwise require manual penetration testing to surface. Findings from Web Security Scanner appear in the Security Command Center findings interface alongside infrastructure misconfigurations, providing a unified security posture view from code through to running applications.

The bottom line

GCP security is built on a foundation of IAM discipline and preventive Organization Policy controls. Eliminating service account keys through Workload Identity and Workload Identity Federation, restricting primitive roles to non-production environments, and applying the highest-priority org policy constraints (no service account keys, no public GCS buckets, no external SQL IPs, require OS Login) address the root causes of the majority of GCP security incidents before they occur.

The monitoring layer through Security Command Center and Cloud Audit Logs provides continuous visibility into both the configuration posture and runtime threats. Organizations that enable Security Command Center at the organization level, activate Data Access audit logs for sensitive services, and establish alerting on high-severity IAM and Organization Policy changes are operating with a materially stronger security posture than those relying on periodic manual review. Start with the preventive controls, then build the monitoring layer, then extend into container security and DevSecOps integration as organizational maturity increases.

Frequently asked questions

What is Google's Cloud Security Command Center and is it worth enabling?

Google Security Command Center (SCC) is Google's native security and risk management platform for GCP, providing asset discovery, vulnerability detection, threat detection, and compliance monitoring across your Google Cloud organization. The Standard tier is free and provides Security Health Analytics findings (identifying misconfigurations like public GCS buckets, firewall rules allowing unrestricted ingress, and service accounts with excessive permissions), asset inventory, and basic vulnerability detection. The Premium tier adds Event Threat Detection (real-time threat detection using Google's threat intelligence applied to Cloud Audit Logs and other log sources), Container Threat Detection for GKE workloads, Virtual Machine Threat Detection for Compute Engine instances, and Web Security Scanner for App Engine and GKE HTTP-accessible applications. For any organization running workloads in GCP beyond a personal development project, enabling Security Command Center at the organization level is worth doing as a first step, because the Standard tier findings alone typically surface several high-severity misconfigurations in newly assessed environments at no cost. The Premium tier provides the threat detection capabilities necessary for a production security monitoring program.

What is Workload Identity Federation and why should I stop using service account keys?

Workload Identity Federation allows external workloads running outside Google Cloud (such as GitHub Actions, AWS Lambda, Azure workloads, or on-premises systems) to authenticate to Google Cloud APIs using short-lived tokens derived from their native identity provider, without downloading a Google service account JSON key file. The mechanism works by establishing a trust relationship between Google Cloud and an external identity provider (using OIDC or SAML), then mapping external identities to Google service accounts for permission grants. Service account JSON keys are a significant security risk for several reasons: they are long-lived credentials (valid until explicitly rotated or revoked), they can be downloaded and stored anywhere without detection, they are frequently committed to source code repositories or embedded in container images, and there is no built-in mechanism for detecting their misuse until after the damage is done. Google's own recommendation is to eliminate all service account keys in favor of Workload Identity Federation for external workloads and attached service accounts with narrowly scoped permissions (Workload Identity for GKE) for workloads running inside GCP. The GCP IAM Recommender tool surfaces service accounts with unused permissions and identifies accounts with downloaded keys that have not been used recently.
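
For illustration, client code using a federated credential configuration looks the same as any other authenticated call; the configuration file path, project, and bucket below are placeholders.

```python
import google.auth
from google.cloud import storage

# Credential configuration file generated for Workload Identity Federation
# (for example, via `gcloud iam workload-identity-pools create-cred-config`).
# No service account JSON key is involved at any point.
credentials, _ = google.auth.load_credentials_from_file("wif-credential-config.json")

client = storage.Client(project="my-project", credentials=credentials)
for blob in client.list_blobs("app-data-bucket"):
    print(blob.name)
```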

What are VPC Service Controls and when should I use them?

VPC Service Controls create security perimeters around Google Cloud managed services like Cloud Storage, BigQuery, Cloud SQL, Cloud KMS, and Secret Manager, preventing data from being exfiltrated to resources outside the perimeter even by authenticated principals. Without VPC Service Controls, a user with IAM permissions to access BigQuery data could potentially exfiltrate that data to a different Google Cloud project under their control in another organization. VPC Service Controls enforce context-based access policies: access to APIs within the perimeter is permitted only from within the perimeter itself or from defined access levels (such as corporate IP ranges or specific device policies). VPC Service Controls are most valuable for environments handling regulated or highly sensitive data where data exfiltration is a material risk, such as healthcare data covered by HIPAA, payment card data under PCI DSS, or government data requiring FedRAMP controls. The tradeoff is operational complexity: Service Controls require explicit ingress and egress rules for every legitimate cross-perimeter data flow, and misconfigured perimeters commonly break workflows that crossed project or service boundaries. Start with a dry-run mode that logs violations without blocking them before enforcing controls.

How do I prevent accidental public GCS bucket exposure?

The most effective preventive control is applying the storage.publicAccessPrevention Organization Policy constraint at the organization or folder level, which prevents any Cloud Storage bucket within scope from having public access enabled regardless of bucket-level IAM settings or ACLs. This policy constraint is the GCP equivalent of the Azure storage account setting that disables Allow Blob Public Access at the account level, and it is the recommended approach over relying on bucket-level configuration discipline. For existing environments before applying the org policy, audit current bucket access by listing buckets with gcloud storage buckets list and checking each with gcloud storage buckets get-iam-policy, identifying any bucket with allUsers or allAuthenticatedUsers in its IAM bindings. Security Command Center Standard tier also automatically flags public GCS buckets as high-severity Security Health Analytics findings. Additionally, enable uniform bucket-level access on all buckets (which disables legacy ACLs and enforces IAM-only access control), and configure object versioning and bucket lock for buckets storing compliance-sensitive data to prevent deletion and provide audit trail preservation.

What are the most critical GCP misconfigurations attackers target?

The most consistently exploited GCP misconfigurations across incident response investigations and threat research include: service account keys committed to public GitHub repositories or stored in plaintext in compute metadata, which are actively scanned for by automated credential harvesters within minutes of exposure; primitive roles (Owner, Editor, Viewer) granted at the project level to service accounts rather than purpose-built predefined roles, giving attackers full project access if a service account is compromised; Cloud Storage buckets with allUsers read access containing sensitive data, application configurations, or credentials; firewall rules allowing SSH (22) or RDP (3389) ingress from 0.0.0.0/0 on Compute Engine instances with external IP addresses; and GKE clusters with the legacy metadata server enabled, allowing pods to query the instance metadata API for node service account tokens. Secondary misconfigurations include Cloud SQL instances with public IP addresses and no authorized networks restriction, Secret Manager secrets with overly broad IAM bindings, and projects with the Cloud Audit Logs Data Access audit log type disabled, creating blind spots in monitoring for sensitive API calls.

How do I achieve CIS Google Cloud Foundations Benchmark compliance?

The CIS Google Cloud Foundations Benchmark provides prescriptive configuration recommendations organized across IAM, logging, networking, virtual machines, storage, Cloud SQL, and BigQuery. Security Command Center Premium tier's Security Health Analytics automatically evaluates many CIS Benchmark controls and surfaces findings in the compliance report view, providing a gap assessment without manual evaluation of each control. To work through the benchmark systematically, enable Security Command Center at the organization level, run a Security Health Analytics scan, export findings to a spreadsheet, and map them to the CIS control identifiers. Prioritize the IAM section first (CIS 1.x controls) because misconfigured identity and access controls represent the majority of incident risk, followed by the logging controls (CIS 2.x) to ensure comprehensive audit trail coverage, then networking controls (CIS 3.x) to eliminate unnecessary external exposure. Organization Policy constraints address many of the preventive controls in the benchmark and apply consistently across all projects in the organization. For formal compliance certification, an independent auditor will require evidence beyond the automated tool output, including documented processes for access reviews, incident response procedures, and change management.

How does GCP security compare to AWS and Azure?

All three major cloud platforms provide equivalent security capabilities at the control level, but they differ significantly in default settings, tooling architecture, and operational experience. GCP's IAM model is more fine-grained at the resource level than AWS IAM in some service areas, particularly for BigQuery and Cloud Storage where permissions can be set at the dataset, table, or object level. GCP's Organization Policy Service provides a broader set of preventive org-wide constraints than Azure Policy's built-in definitions, though both accomplish similar goals. Google Security Command Center is comparable in function to Microsoft Defender for Cloud (CSPM plus CWPP) and AWS Security Hub with GuardDuty, though the threat detection depth of each platform's native tooling reflects different investment levels over time. One default-posture difference is that custom-mode GCP VPC networks start from an implied deny-all ingress rule with no pre-created allow rules, although the auto-created default network still includes broad SSH, RDP, and ICMP allow rules that should be removed or prevented via Organization Policy. The most operationally distinct GCP capability is VPC Service Controls, which has no direct equivalent in AWS or Azure and provides uniquely strong data exfiltration prevention for managed services. Teams operating across multiple cloud environments should implement cloud-specific native security tooling on each platform rather than attempting to standardize on a single third-party CSPM tool, as native tools provide deeper API-level visibility and faster response to new service launches.

Sources & references

  1. Google Cloud Security Best Practices
  2. IAM Best Practices
  3. VPC Service Controls Overview
  4. Security Command Center Documentation
  5. CIS Google Cloud Foundations Benchmark
