Kubernetes Security Hardening Guide for Enterprise Deployments
Kubernetes deployments that leave default configurations intact are running with excessive permissions, no network isolation between workloads, secrets stored as base64-encoded plaintext, and often with the Kubernetes API server exposed to the internet. Attackers have automated scanners running 24/7 against Kubernetes API servers — an exposed, misconfigured cluster is typically discovered and compromised within minutes. This guide covers the security controls that reduce Kubernetes attack surface to an enterprise-acceptable level, following the NSA/CISA Kubernetes Hardening Guide and CIS Kubernetes Benchmark.
API Server Hardening: The Control Plane Attack Surface
The Kubernetes API server is the cluster's control plane — every administrative action routes through it. Securing the API server is the highest-priority cluster hardening step.
No public API server exposure
The Kubernetes API server (default port 6443) should never be internet-accessible. Place the API server on a private network; access should require VPN or zero trust network access. Exposed API servers are trivially discoverable via Shodan and are actively exploited.
Authentication with strong identity
Disable anonymous authentication (--anonymous-auth=false). Use OIDC integration for user authentication rather than static token files or long-lived client certificates where avoidable. OIDC integrates with enterprise identity providers (Okta, Entra ID) and allows MFA enforcement.
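The flags above can be sketched as an excerpt from a kube-apiserver static pod manifest. The issuer URL, client ID, and claim names are placeholders for your identity provider's configuration:

```yaml
# kube-apiserver static pod manifest (excerpt) — values are placeholders
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --anonymous-auth=false
        - --oidc-issuer-url=https://idp.example.com   # must serve OIDC discovery
        - --oidc-client-id=kubernetes
        - --oidc-username-claim=email
        - --oidc-groups-claim=groups
```

Managed Kubernetes services expose the same settings through their own control plane configuration rather than direct flag edits.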
Admission controllers
Enable key admission controllers: NodeRestriction (limits each kubelet to modifying only its own Node object and the Pods bound to it), AlwaysPullImages (forces an image pull on every pod start, so a pod cannot reuse a cached private image without presenting registry credentials), and LimitRanger (enforces resource limits). Disable admission controllers that weaken security, such as AlwaysAdmit.
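These are enabled via an API server flag; as a manifest excerpt (plugin list shown is the minimum from this section, not an exhaustive recommendation):

```yaml
# kube-apiserver command args (excerpt)
- --enable-admission-plugins=NodeRestriction,AlwaysPullImages,LimitRanger
```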
Audit logging
Enable Kubernetes audit logging with a policy that captures authentication events, authorization decisions, and changes to sensitive resources (Secrets, RBAC roles, PodSecurityPolicies/admission). Ship audit logs to an external SIEM — logs stored only on the cluster are destroyable by an attacker with cluster access.
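A minimal audit policy sketch along these lines captures full request/response bodies for Secrets and RBAC changes while keeping everything else at metadata level to control log volume:

```yaml
# Audit policy sketch — tune rules and levels to your logging budget
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Full bodies for the most sensitive resources
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["secrets"]
      - group: "rbac.authorization.k8s.io"
        resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
  # Metadata only (who, what, when) for everything else
  - level: Metadata
```

The policy file is referenced via the kube-apiserver --audit-policy-file flag, with --audit-log-path or an audit webhook forwarding events off-cluster.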
etcd security
etcd stores all cluster state including Secrets. Encrypt etcd at rest using a KMS provider; do not store encryption keys on the same host as etcd. Restrict etcd access to control plane nodes only — direct etcd access bypasses Kubernetes RBAC entirely.
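Encryption at rest is configured through an EncryptionConfiguration passed to the API server. A sketch using a KMS v2 plugin (the plugin name and socket path are deployment-specific):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - kms:
          apiVersion: v2
          name: cloud-kms                               # plugin name, deployment-specific
          endpoint: unix:///var/run/kmsplugin/socket.sock
          timeout: 3s
      - identity: {}   # fallback so pre-existing unencrypted data remains readable
```

After enabling, rewrite existing Secrets (e.g., `kubectl get secrets -A -o json | kubectl replace -f -`) so they are re-stored encrypted.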
RBAC: Least Privilege for Cluster Access
Role-Based Access Control (RBAC) is Kubernetes' authorization system. Default Kubernetes RBAC configurations are significantly overprivileged. The most common Kubernetes privilege escalation paths exploit excessive RBAC permissions.
Avoid cluster-admin bindings
The cluster-admin ClusterRole grants unrestricted cluster access. Audit all ClusterRoleBindings and RoleBindings for cluster-admin assignment and remove any that are not operationally required. Developers should not have cluster-admin access in any environment.
Namespace-scoped roles over cluster-scoped
Prefer namespace-scoped Roles over cluster-scoped ClusterRoles where possible. A namespace-scoped binding limits blast radius if credentials are compromised — an attacker with namespace-scoped access cannot affect other namespaces.
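A namespace-scoped grant can be sketched as a Role plus RoleBinding; the namespace, role name, and group are hypothetical examples:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-operator
  namespace: payments            # hypothetical application namespace
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-operator-binding
  namespace: payments
subjects:
  - kind: Group
    name: payments-team          # group claim from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-operator
  apiGroup: rbac.authorization.k8s.io
```

Even if these credentials are stolen, the attacker can touch Deployments in one namespace and nothing else.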
Audit dangerous permissions
Certain RBAC permissions enable privilege escalation regardless of other controls: create/update ClusterRoleBindings, create Pods in privileged namespaces, exec into pods, access secrets across namespaces, and create webhooks. Audit for these permissions across all service accounts and user bindings.
Service account token projection
Use projected service account tokens with short expiration and audience binding instead of long-lived default service account tokens. Disable automatic service account token mounting for pods that do not need API server access (automountServiceAccountToken: false in pod spec).
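The pattern can be sketched as a pod spec that disables the default token mount and, where API access is genuinely needed, mounts a short-lived, audience-bound projected token instead (image name and audience are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker                   # hypothetical workload
spec:
  automountServiceAccountToken: false   # no long-lived default token
  containers:
    - name: worker
      image: registry.example.com/worker:1.0
      volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: api-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 600     # kubelet rotates before expiry
              audience: internal-api     # rejected by any other audience
```

Pods that never call the API server should simply set automountServiceAccountToken: false and omit the projected volume entirely.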
Regular RBAC auditing
RBAC configurations drift over time as developers add permissions to unblock immediate needs and never remove them. Run RBAC auditing tools (kubectl-who-can, rakkess, rbac-tool) quarterly to identify permission accumulation.
Pod Security Standards: Restricting Workload Capabilities
Pod Security Standards (PSS) replaced PodSecurityPolicy, which was deprecated in Kubernetes 1.21 and removed in 1.25. PSS defines three policy levels that restrict what pods can do, enforced per namespace by the built-in Pod Security Admission controller.
Privileged policy
No restrictions — any pod capability is permitted. Appropriate for infrastructure namespaces (kube-system) and security tooling that requires host-level access (node agents). Do not apply to application namespaces.
Baseline policy
Prevents known privilege escalation techniques while remaining compatible with most containerized applications. Blocks hostPath mounts, hostPID/hostNetwork, and privileged containers. This is the minimum standard for all application namespaces.
Restricted policy
Hardened against known container breakout techniques. Requires a non-root user, drops all capabilities (optionally re-adding NET_BIND_SERVICE), blocks privilege escalation, and mandates a seccomp profile. The target for all application workloads that can meet the requirements.
Implementation
Apply Pod Security Standards via namespace labels: pod-security.kubernetes.io/enforce: restricted. Use warn and audit modes before enforce to identify non-compliant workloads without breaking existing deployments. Enforce restricted on new namespaces immediately; migrate existing namespaces incrementally.
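The staged rollout can be sketched as namespace labels: enforce the level the workloads already meet, while warning and auditing against the stricter target (namespace name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments                 # hypothetical application namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline   # blocks violations now
    pod-security.kubernetes.io/warn: restricted    # warns on kubectl apply
    pod-security.kubernetes.io/audit: restricted   # records violations in audit log
```

Once warn/audit show no violations, flip enforce to restricted.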
Network Policies: Micro-Segmentation for Pods
By default, Kubernetes allows all pod-to-pod communication within a cluster. This is equivalent to placing all workloads on the same flat network with no firewall rules. NetworkPolicy resources define allowed traffic flows.
Default deny posture
Start with a default-deny NetworkPolicy in every namespace that blocks all ingress and egress traffic. Then explicitly allow required communication flows. This is the zero trust principle applied to pod networking.
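A default-deny policy is a NetworkPolicy with an empty pod selector and both policy types, applied once per namespace (namespace name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments            # apply one per namespace
spec:
  podSelector: {}                # empty selector matches every pod
  policyTypes:
    - Ingress
    - Egress
```

With this in place, any traffic not matched by a subsequent allow policy is dropped.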
Ingress policies
Define which other pods and external sources can send traffic to each pod. A web application pod should only accept ingress from the ingress controller namespace; a database pod should only accept ingress from the application namespace pods with matching labels.
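An ingress allow rule along these lines admits only traffic from the ingress controller namespace to web pods (labels, namespace, and port are hypothetical; the kubernetes.io/metadata.name label is set automatically on namespaces in current Kubernetes versions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-from-ingress
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: web                   # hypothetical pod labels
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
```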
Egress policies
Define which destinations each pod can reach. Most application pods should only reach specific services within the cluster and specific external endpoints — not the entire internet. Egress controls catch supply chain compromises attempting to beacon to external C2.
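An egress policy for the same hypothetical web pods might permit only cluster DNS and the database tier; everything else, including the open internet, stays blocked by the default-deny policy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-egress
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes: ["Egress"]
  egress:
    - to:                        # cluster DNS in kube-system
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
    - to:                        # database pods in the same namespace
        - podSelector:
            matchLabels:
              app: postgres      # hypothetical database labels
      ports:
        - protocol: TCP
          port: 5432
```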
CNI plugin requirement
NetworkPolicies are defined in Kubernetes but enforced by the CNI (Container Network Interface) plugin. Not all CNI plugins enforce NetworkPolicies: Flannel does not; Calico, Cilium, Weave Net, and Antrea do. Verify your CNI enforces NetworkPolicies or they will have no effect.
Secrets Management: Beyond base64
Kubernetes Secrets are not secret by default — they are base64-encoded, stored in plaintext in etcd, and accessible to any workload with namespace access. Enterprise Kubernetes deployments require external secrets management.
External Secrets Operator
The External Secrets Operator (ESO) pulls secrets from external secret stores (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, GCP Secret Manager) and creates Kubernetes Secret objects synchronized to the external source. Applications consume standard Kubernetes Secrets; the actual values are managed externally with proper access controls and audit logging.
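The sync can be sketched as an ExternalSecret resource; the store name, remote path, and keys are placeholders, and the API version may differ across ESO releases:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: payments
spec:
  refreshInterval: 1h            # re-sync cadence from the external store
  secretStoreRef:
    name: vault-backend          # a SecretStore configured separately
    kind: SecretStore
  target:
    name: db-credentials         # Kubernetes Secret that ESO creates and syncs
  data:
    - secretKey: password        # key inside the Kubernetes Secret
      remoteRef:
        key: database/prod       # path in the external store
        property: password
```

Applications reference the resulting Secret normally; rotating the value in Vault propagates on the next refresh.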
etcd encryption at rest
Enable etcd encryption for Secrets using a KMS provider (AWS KMS, GCP Cloud KMS, Azure Key Vault). This protects Secret values if etcd backup files or snapshots are compromised. The encryption key must be stored outside the cluster.
RBAC restriction on Secret access
Secrets should only be accessible to the specific service accounts that need them. Audit for overly broad Secret access in RBAC — a role that can read all Secrets in a namespace is a single misconfiguration away from full credential exposure.
Secret rotation
Implement automated secret rotation using the external secret store's rotation capability. Secrets that never rotate accumulate exposure risk over time. Short-lived credentials (AWS IAM roles for service accounts, Vault dynamic secrets) eliminate rotation complexity by making credentials inherently temporary.
Runtime Security: Detecting Attacks in Running Containers
Preventive controls like PSS and NetworkPolicy reduce attack surface; runtime security detects attacks that succeed despite preventive controls.
Falco
Falco (CNCF) is the dominant open source Kubernetes runtime security tool. It uses eBPF or kernel module instrumentation to monitor system calls from containers in real time, alerting on suspicious behavior: shell spawned in a container, unexpected file writes to sensitive directories, network connections to unexpected destinations, credential file access. Falco rules are written in YAML and are highly customizable.
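A Falco rule for the shell-in-container case might look like the sketch below; it assumes the `spawned_process` and `container` macros from Falco's default rule set are loaded:

```yaml
# Falco custom rule sketch — assumes default macros are available
- rule: Shell Spawned in Application Container
  desc: Detect an interactive shell starting inside a container
  condition: >
    spawned_process and container and
    proc.name in (bash, sh, zsh)
  output: >
    Shell in container (user=%user.name container=%container.name
    command=%proc.cmdline image=%container.image.repository)
  priority: WARNING
  tags: [container, shell]
```

In practice you would add exceptions for legitimate debug workflows (e.g., specific namespaces or images) to keep the alert high-fidelity.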
Commercial runtime security
Aqua Security, Palo Alto Prisma Cloud, and Sysdig Secure provide commercial runtime security with managed rule libraries, threat intelligence integration, and centralized alerting. They add container image scanning, RBAC auditing, and compliance reporting to runtime detection capabilities.
Immutable containers
Configure containers with read-only root filesystems (readOnlyRootFilesystem: true in securityContext). Attackers who gain code execution in a container with a read-only filesystem cannot persist malware, modify application files, or create new executables. This single control significantly raises the cost of post-exploitation.
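The control can be sketched as a securityContext, with an emptyDir mounted for the writable scratch space most applications still need (image and names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                      # hypothetical workload
spec:
  containers:
    - name: app
      image: registry.example.com/web:1.0
      securityContext:
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
        runAsNonRoot: true
      volumeMounts:
        - name: tmp              # writable scratch space
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir: {}
```

An attacker with code execution in this container cannot write outside /tmp, and even /tmp is wiped when the pod restarts.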
Behavioral anomaly detection
Establish behavioral baselines for each container type: which processes run, which files are accessed, which network connections are made. Deviations from baseline — a web server container spawning bash, or a database container making outbound internet connections — are high-fidelity alerts that warrant immediate investigation.
The bottom line
Kubernetes security hardening is a layered discipline: secure the API server, implement least-privilege RBAC, enforce Pod Security Standards, apply network micro-segmentation, externalize secrets management, and monitor runtime behavior. No single control is sufficient — a compromised container with a read-only filesystem is contained; the same container on a flat network with cluster-admin service account token mounted is catastrophic. The CIS Kubernetes Benchmark and NSA/CISA Hardening Guide provide audit-ready checklists for measuring compliance against each layer.
Frequently asked questions
What are the most critical Kubernetes security misconfigurations?
The highest-impact Kubernetes misconfigurations are: (1) publicly exposed API server without authentication, (2) workloads running as root with cluster-admin service account tokens mounted, (3) no NetworkPolicies (flat pod network), (4) Secrets stored only as base64 in etcd without encryption at rest, and (5) privileged containers with hostPath mounts to sensitive node directories. The OWASP Kubernetes Top 10 and NSA/CISA Kubernetes Hardening Guide cover these in detail.
What replaced PodSecurityPolicy in Kubernetes?
PodSecurityPolicy (PSP) was deprecated in Kubernetes 1.21 and removed in 1.25. It was replaced by Pod Security Standards (PSS) and Pod Security Admission, which implement three policy levels (Privileged, Baseline, Restricted) enforced via namespace labels. For more granular policy control, Open Policy Agent (OPA) Gatekeeper and Kyverno are policy engines that provide PSP-equivalent functionality and more.
How does Kubernetes RBAC privilege escalation work?
Kubernetes RBAC privilege escalation exploits permissions that allow an attacker to grant themselves higher privileges. Key escalation paths: if a role can create or modify ClusterRoleBindings, it can grant cluster-admin to itself; if a role can create Pods in kube-system, it can run a privileged pod and escape to the host; if a role can read Secrets across namespaces, it can steal service account tokens with elevated privileges. Auditing for these dangerous permissions with tools like kubectl-who-can identifies escalation paths before attackers do.
What is Falco and how does it work?
Falco is an open source CNCF runtime security tool for Kubernetes and Linux. It uses eBPF or a kernel module to intercept system calls from running containers and applies rules to detect suspicious behavior in real time — shells spawned inside containers, sensitive file accesses, unexpected network connections, and privilege escalation attempts. Falco generates alerts that can be shipped to a SIEM or SOAR for investigation and response. It is the de facto standard for Kubernetes runtime threat detection.
Should Kubernetes Secrets be used for sensitive credentials?
Default Kubernetes Secrets are not secure for sensitive credentials — they are base64-encoded (not encrypted) and stored in etcd accessible to any workload with appropriate RBAC access. For sensitive credentials, use an external secrets manager (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) integrated via the External Secrets Operator or CSI driver. Enable etcd encryption at rest as an additional layer. Use short-lived credentials (IAM roles for service accounts, Vault dynamic secrets) wherever possible to minimize the impact of any exposure.
What is the CIS Kubernetes Benchmark?
The CIS Kubernetes Benchmark is a consensus-based security configuration guide published by the Center for Internet Security. It covers API server configuration, etcd, kubelet settings, RBAC, network policies, and pod security. The benchmark assigns each control a level (L1 for essential, L2 for defense-in-depth) and provides audit commands to verify compliance. Tools like kube-bench (Aqua Security) automate CIS Kubernetes Benchmark assessment against a running cluster.
How should network policies be implemented in a new Kubernetes cluster?
Start with a default-deny NetworkPolicy in every namespace that blocks all ingress and egress traffic. Then add explicit allow rules for required communication flows: ingress from the ingress controller to web tier pods, from web tier to application tier, from application tier to database tier, and egress to specific external services. Verify your CNI plugin enforces NetworkPolicies (Calico, Cilium, and Antrea do; Flannel does not). Use network visualization tools like Hubble (Cilium) or Calico flow logs to map actual traffic before writing policies, avoiding false positives that block legitimate traffic.
Founder & Cybersecurity Evangelist, Decryption Digest
Cybersecurity professional with expertise in threat intelligence, vulnerability research, and enterprise security. Covers zero-days, ransomware, and nation-state operations for 50,000+ security professionals weekly.