
The 'IngressNightmare' vulnerabilities in the Kubernetes Ingress NGINX Controller: Overview, detection, and remediation

March 25, 2025


Key points and observations

  • On March 24, 2025, researchers disclosed a set of five vulnerabilities, collectively known as "IngressNightmare," affecting ingress-nginx.
  • CVE-2025-1974 is considered the most serious of the five and has been assigned a CVSS score of 9.8 (critical). When chained with one of the lower severity vulnerabilities, it allows for unauthenticated remote code execution.
  • Exploitation relies on an attacker's ability to reach the ingress controller's admission webhook endpoint. If the ingress controller's admission webhook is exposed to the internet, any remote attacker can compromise it. In the more common case where the admission webhook is exposed internally, it still allows for privilege escalation from any pod, because pods can communicate with each other by default.
  • Kubernetes has already responded publicly to the disclosure of CVE-2025-1974, encouraging users to install patches released by the ingress-nginx team that remediate CVE-2025-1974.

How to know if your cluster is affected

You are vulnerable if the ingress-nginx version is:

  • Earlier than v1.11.0,
  • v1.11.0 through v1.11.4 (inclusive), or
  • v1.12.0

We developed a small Python script to help you determine whether you're affected by these vulnerabilities.

Sample script output on a vulnerable cluster.
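If you can't run the script, a rough shell equivalent of its core check (a sketch, not the actual script, and assuming the controller image tag reflects the running ingress-nginx version) looks like this:

# Print the controller image tags running in the cluster, then compare them against
# the vulnerable ranges: earlier than v1.11.0, v1.11.0 through v1.11.4, or v1.12.0.
kubectl get pods -A \
  --selector app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/component=controller \
  -o jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}' \
  | grep -oE 'v[0-9]+\.[0-9]+\.[0-9]+[^@ ]*' \
  | sort -u

The sections below walk through the same checks step by step.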

To check whether your cluster is running ingress-nginx, run the following command under an account that has cluster-wide read permissions:

kubectl get pods --selector app.kubernetes.io/name=ingress-nginx --all-namespaces

If ingress-nginx is running, the vulnerable admission webhook is most likely also present in the cluster. You can confirm this by running:

kubectl get ValidatingWebhookConfiguration ingress-nginx-admission -o yaml

If it is, start by determining whether the ingress-nginx-controller-admission service is exposed publicly. In the example below, the service is exposed through a private ClusterIP and isn't directly exposed to the internet:

$ kubectl get service ingress-nginx-controller-admission -n ingress-nginx

NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
ingress-nginx-controller-admission   ClusterIP   10.100.239.110   <none>        443/TCP   2m4s

Even if the vulnerable admission controller isn't exposed to the internet, you'll want to remediate the vulnerability. Next, check which version of ingress-nginx is running:

kubectl get pods --selector app.kubernetes.io/name=ingress-nginx --selector app.kubernetes.io/component=controller -A -o yaml | grep image:

Sample result:

image: registry.k8s.io/ingress-nginx/controller:v1.12.0-beta.0@sha256:9724476b928967173d501040631b23ba07f47073999e80e34b120e8db5f234d5

You are vulnerable if the Ingress NGINX Controller version is earlier than v1.11.0, is between v1.11.0 and v1.11.4 (inclusive), or is v1.12.0.

How to remediate affected clusters

If your cluster is affected, you should upgrade ingress-nginx to one of the following patched versions:

  • v1.12.1 or later (Helm chart version 4.12.1 or later)
  • v1.11.5 or later (Helm chart version 4.11.5 or later)
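If you installed ingress-nginx through its Helm chart, the upgrade can look like the following sketch (the release name and namespace are assumptions; match them to your own installation):

# Upgrade an existing Helm release of ingress-nginx to a patched chart version,
# reusing the values the release was originally installed with.
helm upgrade ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --version 4.12.1 \
  --reuse-values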

In addition, or if you're not able to upgrade, ensure that the admission webhook is not publicly accessible. It should be at least private, and ideally only accessible from the control plane (API server).
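If network-level restrictions are an option in your environment, a NetworkPolicy along the following lines can limit who reaches the webhook port. This is only a sketch: 8443 is the default webhook targetPort, 10.0.0.0/8 is a placeholder for your control-plane or node CIDR, and, depending on your CNI and control-plane topology, API-server-to-webhook traffic may not be subject to NetworkPolicy at all. Validate in a non-production cluster first.

# Allow proxy traffic (80/443) from anywhere, but only allow the webhook port (8443)
# from a placeholder control-plane/node CIDR. Adjust ports, labels, and CIDR to your setup.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-admission-webhook
  namespace: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: controller
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
  - from:
    - ipBlock:
        cidr: 10.0.0.0/8
    ports:
    - protocol: TCP
      port: 8443
EOF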

About the "IngressNightmare" vulnerabilities

On March 24, 2025, researchers published a set of five vulnerabilities in the Ingress NGINX Controller for Kubernetes:

  1. CVE-2025-24513: ingress-nginx controller - auth secret file path traversal vulnerability (Medium)
  2. CVE-2025-24514: ingress-nginx controller - configuration injection via unsanitized auth-url annotation (High)
  3. CVE-2025-1097: ingress-nginx controller - configuration injection via unsanitized auth-tls-match-cn annotation (High)
  4. CVE-2025-1098: ingress-nginx controller - configuration injection via unsanitized mirror annotations (High)
  5. CVE-2025-1974: ingress-nginx admission controller RCE escalation (Critical)

Vulnerabilities 1 to 4 allow an attacker who has access to the API server and permission to create or update Ingress objects to inject a malicious NGINX configuration, potentially leaking sensitive information such as the underlying pod's service account token.

Vulnerability 5, CVE-2025-1974, is an unauthenticated remote code execution (RCE) vulnerability in the admission controller component of the Ingress NGINX Controller for Kubernetes. In the rest of this post, we'll focus on this vulnerability.

What is ingress-nginx?

ingress-nginx is an ingress controller for Kubernetes that allows users to make their applications available via an NGINX reverse proxy. The controller consumes Ingress resources that allow users to map traffic to different application backends (Services) within the cluster in a protocol-aware manner. This allows external HTTP and HTTPS traffic to be routed to services within the cluster, providing functionality like load balancing, SSL/TLS termination, and name-based virtual hosting.
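As an illustration, a minimal Ingress that routes traffic for a single hostname to an in-cluster Service might look like the following (the name, hostname, and backend Service here are purely illustrative):

# Illustrative only: route HTTP traffic for app.example.com to a Service named "web" on port 80.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
EOF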

Vulnerabilities in ingress controllers are particularly severe: because these components handle traffic entering the cluster, they can serve as a valuable entry point for threat actors wishing to gain access to it. Furthermore, the use of ingress controllers is a common design pattern for applications built around Kubernetes, and ingress-nginx is one of the most popular ingress controllers. It's even suggested as an ingress controller in the official Kubernetes Ingress documentation.

This isn't the first time an impactful vulnerability has been discovered in ingress-nginx. Back in 2023, CVE-2023-5044, a similar high-severity vulnerability, was discovered in the controller, allowing code injection and the exfiltration of credentials. That vulnerability highlighted how threat actors can access credentials residing within the cluster if they successfully exploit an ingress controller.

What is admission control?

If you're a regular Security Labs reader, you might have read Kubernetes security fundamentals: Admission Control in the past. When ingress-nginx is installed, the cluster is configured with a ValidatingWebhookConfiguration that instructs the API server to call a specific web service when an Ingress resource is created or updated.

In a default configuration, we can see that the ingress-nginx validating webhook configuration instructs the API server to call the service ingress-nginx-controller-admission in the namespace ingress-nginx:

kubectl get ValidatingWebhookConfiguration ingress-nginx-admission -o yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.12.0-beta.0
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
      port: 443
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
    scope: '*'

The following steps are shown in the accompanying diagram:

  1. When the API server receives a creation or mutation request for an Ingress object, it performs a POST request with a predefined JSON body containing the resource to validate.

  2. This HTTP request is handled by an in-cluster service (ingress-nginx-controller-admission).

  3. If the validation is successful, the API server persists the Ingress object in etcd, Kubernetes' persistent storage.

  4. The ingress controller component is notified that a new or modified Ingress object is available, and consumes it to ensure that the generated NGINX configuration is up to date.


The ingress-nginx GitHub repository contains the implementation of both the admission webhook and the controller itself in the same codebase, which facilitates code reuse.

The CVE-2025-1974 vulnerability

When the admission webhook receives a validation request, it calls IngressAdmission.Checker.CheckIngress, which calls NGINXController.testTemplate. This last function essentially writes the user-provided configuration to disk, and runs nginx -t against it to make sure it is valid.
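Conceptually, the validation step boils down to something like the following simplification (this is not the actual ingress-nginx code; render_config_from_ingress is a hypothetical stand-in for the Go template rendering):

# Simplified view of what the admission webhook does for each candidate Ingress:
# render the resulting NGINX configuration to a temporary file, then ask NGINX to validate it.
TMP_CONF="$(mktemp)"
render_config_from_ingress > "$TMP_CONF"   # hypothetical helper
nginx -t -c "$TMP_CONF"

The important point is that nginx -t is executed against a configuration an attacker can influence.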

Researchers found a way to cause nginx -t to run arbitrary code if it was passed a malicious NGINX configuration. One way to generate such a malicious configuration is to leverage one of the other lower-severity vulnerabilities, providing an unauthenticated attacker with an injection point to craft arbitrary NGINX instructions that are passed to nginx -t.

Exploitation requirements

For this vulnerability to be exploitable, a threat actor must be able to access the admission webhook service.

This access can occur:

  • If the admission webhook service is exposed externally. Although this isn't a standard configuration, some clusters make the admission webhook service publicly accessible. In this case, the service becomes directly exploitable because, by design, it doesn't require authentication.
  • Even if the admission webhook service is only accessible internally. By default, the service is accessible to any pod running in the cluster through the following address: https://ingress-nginx-controller-admission.ingress-nginx.svc.cluster.local.

In both cases, the vulnerability is exploitable and leads to privilege escalation, because the service account attached to the ingress-controller pods can access all secrets in the cluster through a ClusterRole in the default setup.
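To check whether the webhook is reachable from an arbitrary pod in your own cluster, a quick test (a sketch, assuming you're allowed to run the curlimages/curl image) is:

# Any HTTP response, even an error status, means the webhook is reachable from the pod network.
kubectl run webhook-reachability-test --rm -i --restart=Never --image=curlimages/curl -- \
  curl -sk -o /dev/null -w '%{http_code}\n' \
  https://ingress-nginx-controller-admission.ingress-nginx.svc.cluster.local/validate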

Sample exploitation scenarios

  • A public-facing web application that is running in a Kubernetes pod is compromised. Because all pods in a Kubernetes cluster can communicate with each other by default, the attacker is then able to exploit the ingress controller's admission webhook and gain code execution in its context. From there, the attacker can steal the service account token in the ingress controller pod and read all secrets in the cluster.
  • A dangerously configured cluster exposes the ingress controller's admission webhook to the internet. Attackers scanning the internet can identify, based on the HTTP response, that they can communicate with the admission webhook. They can then exploit the webhook, gain code execution, and, if the API server is also internet-facing, escalate their privileges within the cluster.

A note on managed clusters

Many organizations operating Kubernetes in cloud environments use managed distributions, such as Amazon EKS, Azure Kubernetes Service (AKS) or Google Kubernetes Engine (GKE).

These environments typically do not ship with the Ingress NGINX Controller installed by default. However, if you have installed a vulnerable version of the Ingress NGINX Controller in these environments, you are also vulnerable. That said, it is less likely that organizations using a managed distribution would expose the admission webhook to the internet, because the managed API server can already reach it by default without additional network flows being opened.

For more information, refer to the security advisories published by your managed Kubernetes provider.

Reproducing a purposely vulnerable environment

If you're a security researcher or you'd like to reproduce what a vulnerable setup looks like, you can follow these steps.

First, create a kind cluster locally by running:

kind create cluster

Then, deploy a vulnerable version of the Ingress NGINX Controller Helm chart, such as version 4.12.0:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --version 4.12.0

Then, wait for the deployment to be available:

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s

You can now create a port-forward to the admission webhook:

kubectl port-forward svc/ingress-nginx-controller-admission -n ingress-nginx 8443:443

From there, you can confirm that you're able to reach the validation endpoint by submitting a sample payload:

curl -vk https://localhost:8443/validate -H "Content-Type: application/json" -d'{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "kind": {
      "group": "networking.k8s.io",
      "version": "v1",
      "kind": "Ingress"
    },
    "resource": {
      "group": "",
      "version": "v1",
      "resource": "namespaces"
    },
    "operation": "CREATE",
    "object": {
      "metadata": {
        "name": "sample-ingress"
      },
      "spec": {
        "ingressClassName": "nginx",
        "rules": [
          {
            "host": "example.com",
            "http": {
              "paths": [
                {
                  "path": "/",
                  "pathType": "Prefix",
                  "backend": {
                    "service": {
                      "name": "kubernetes",
                      "port": {
                        "number": 80
                      }
                    }
                  }
                }
              ]
            }
          }
        ]
      }
    }
  }
}'

In a working setup, you should see a response that looks like:

< HTTP/1.1 200 OK
< Date: Tue, 25 Mar 2025 18:47:44 GMT
< Content-Length: 1114
< Content-Type: text/plain; charset=utf-8
<
{
  "kind": "AdmissionReview",
  "apiVersion": "admission.k8s.io/v1",
   ...
  "response": {
    "uid": "",
    "allowed": true
  }
}

You can access the logs of the admission webhook by running:

$ kubectl logs -l app.kubernetes.io/name=ingress-nginx -n ingress-nginx -f
...
I0325 18:47:44.509544      11 main.go:107] "successfully validated configuration, accepting" ingress="/"

Identifying potential exploitation attempts

If you suspect you may have exposed a vulnerable configuration, you can review the logs of the ingress-nginx pods to understand when the admission webhook validated an Ingress object. While this may be a tedious task if you're using Ingresses heavily, it's a good way of understanding whether an attacker may have been attempting to exploit your ingress-nginx deployment. In legitimate cases, these events should correlate with a creation or mutation event for the same Ingress object on the API server.
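One way to pull these validation events out of the controller logs (the label selectors and namespace assume a default Helm installation, and only logs still retained by the running pods are searched) is:

# List admission webhook validation events from the last 24 hours of retained logs.
kubectl logs -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/component=controller \
  --since=24h --tail=-1 --prefix \
  | grep "successfully validated configuration"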

As an example, you can see the following log event when the ingress-nginx admission webhook successfully validates an object:

successfully validated configuration, accepting

This should always correlate with a create or update event for an Ingress, which may look like:

{
	"objectRef": {
		"uid": "19176441-1dc0-414b-b2a3-6944c711bbb9",
		"apiGroup": "networking.k8s.io",
		"apiVersion": "v1",
		"resource": "ingresses",
		"namespace": "default",
		"name": "example-ingress"
	},
	"userAgent": "nginx-ingress-controller/v1.12.1 (linux/amd64) ingress-nginx/51c2b819690bbf1709b844dbf321a9acf6eda5a7",
	"requestURI": "/apis/networking.k8s.io/v1/namespaces/default/ingresses",
	"stageTimestamp": "2025-03-25T20:59:00.830123Z",
	"http": {
		"url_details": {
			"path": "/apis/networking.k8s.io/v1/namespaces/default/ingresses"
		},
		"status_code": 200,
		"method": "update",
		"useragent": "nginx-ingress-controller/v1.12.1 (linux/amd64) ingress-nginx/51c2b819690bbf1709b844dbf321a9acf6eda5a7",
	}
}

If you don't find such an API server event in your logs, it may mean that someone is directly accessing the admission webhook and may be attempting to exploit it.

When an attacker successfully exploits the IngressNightmare vulnerability, the ingress-nginx pod produces the following logs:

ENGINE_by_id("/proc/XXX/fd/YYY") failed (SSL: error:1280006A:DSO support routines::could not bind to the requested symbol name:symname(bind_engine): Symbol not found: bind_engine error:1280006A:DSO support routines::could not bind to the requested symbol name error:13000068:engine routines::DSO failure error:13000074:engine routines::no such engine:id=/proc/XXX/fd/YYY)
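A quick way to sweep retained controller logs for this signature (again assuming a default Helm installation) is:

# Search the last 7 days of retained controller logs for the error signature above.
kubectl logs -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/component=controller \
  --since=168h --tail=-1 \
  | grep "could not bind to the requested symbol name"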

If you suspect that your ingress-nginx pod may have been compromised, you can review actions performed against the API server by the Kubernetes service account system:serviceaccount:ingress-nginx:ingress-nginx-admission and look for anomalies such as:

  • Increased API activity
  • Unusual user agents
  • Activity from external IP addresses

If you're using Datadog Log Management, the related queries are:

# Successful Ingress validation events
pod_name:ingress-nginx-controller* service:controller "successfully validated configuration" 

# Failed validation events indicating exploitation attempts
pod_name:ingress-nginx-controller* service:controller "bind_engine error:1280006A:DSO"
source:kubernetes.audit @http.method:(update OR create) @objectRef.resource:ingresses
source:kubernetes.audit @usr.id:"system:serviceaccount:ingress-nginx:ingress-nginx-admission" 

How Datadog can help

Cloud Security Management (CSM) Threats provides real-time, eBPF-powered threat detection across your hosts and containers. The out-of-the-box (OOTB) rules Post compromise shell detected and Unfamiliar process created by web application can detect post-exploitation activity, such as unauthorized shell access inside critical pods, or shell utilities, HTTP utilities, and shells spawned by a web server. Detailed system events can be reviewed in the Workload Protection Events Explorer using the query image_name:*ingress-nginx/controller* @process.ancestors.executable.name:nginx.

Infrastructure Monitoring provides complete visibility into infrastructure performance and security with easy deployment, minimal maintenance, and unmatched breadth of coverage.

With a Kubernetes Explorer query, customers can identify whether they are running a vulnerable image.

Conclusion

The “IngressNightmare” vulnerabilities, and particularly CVE-2025-1974, underscore the critical role ingress controllers play in Kubernetes security—and the severe impact that misconfigurations or unpatched components can have. With a CVSS score of 9.8, CVE-2025-1974 represents a rare case of unauthenticated remote code execution via a component commonly trusted within Kubernetes clusters.

While public exposure of the admission webhook significantly increases risk, even clusters following standard configurations remain vulnerable, because any pod can reach the service internally by default. Remediating this issue requires immediate action: either upgrade to a patched version of ingress-nginx, or enforce strict network-level access controls around the admission service.

Because Kubernetes is the backbone of many production environments, defenders should treat ingress-related components as high-value targets and continuously monitor for suspicious activity. Tools like Datadog Cloud Security Management can provide an additional layer of protection by detecting post-exploitation behavior in real time. Ultimately, this disclosure serves as yet another reminder that the Kubernetes ecosystem’s flexibility comes with the responsibility of careful configuration, proactive patching, and layered defense.
