About
In the default configuration, pods running on an EKS cluster can often access the instance metadata service (IMDS) of the worker nodes. This allows an attacker to steal the AWS credentials of the worker node, which can lead to privilege escalation and unauthorized access to AWS resources.
Understanding Impact
Business Impact
An attacker with the ability to compromise a single public-facing application in your EKS cluster can use this vulnerability to escalate their privileges and access sensitive data or resources in your AWS account.
Technical Impact
When an attacker compromises a pod, for instance through a remote code execution (RCE) or server-side request forgery (SSRF) vulnerability, they can call the underlying worker node's IMDS to retrieve the AWS credentials. These credentials can then potentially be used to escalate privileges and access sensitive resources in your AWS account. By default, the role attached to EKS worker nodes contains the ability to pull all container images in the account and enumerate the compute infrastructure.
An attacker can steal the underlying worker node's AWS credentials by querying the metadata service. Listing /latest/meta-data/iam/security-credentials/ returns the name of the node's IAM role; appending that name to the path returns the credentials themselves:
curl "169.254.169.254/latest/meta-data/iam/security-credentials/eks-worker-node-role"
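Note that enforcing IMDSv2 alone does not stop this attack if the hop limit allows responses to reach the pod: the attacker simply requests a session token first. A sketch of the full sequence (the role name is discovered dynamically rather than assumed):

```shell
# Request an IMDSv2 session token. This succeeds even when HttpTokens=required,
# as long as the hop limit lets the PUT response reach the pod.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Discover the IAM role attached to the worker node
ROLE=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/")

# Retrieve the temporary credentials (AccessKeyId, SecretAccessKey, Token)
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE"
```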
Identify affected resources
In the AWS console, browse to the Compute tab of your EKS cluster and note the launch template name under Node groups. Then, use the following command to retrieve the launch template configuration:
aws ec2 describe-launch-template-versions \
  --launch-template-name "eksctl-datadog-pde-test-eks-cluster-us-east-1-nodegroup-main-2" \
  --versions '$Latest' \
  --query 'LaunchTemplateVersions[0].LaunchTemplateData.MetadataOptions'
Your launch template is secure only if the output of this command is exactly:
{
    "HttpTokens": "required",
    "HttpPutResponseHopLimit": 1
}
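To audit every launch template in a region rather than checking them one at a time, a small loop over the AWS CLI works. This is a sketch that assumes default credentials and region are configured:

```shell
# Print the metadata options of the latest version of every launch template
for name in $(aws ec2 describe-launch-templates \
    --query 'LaunchTemplates[].LaunchTemplateName' --output text); do
  echo "== $name"
  aws ec2 describe-launch-template-versions \
    --launch-template-name "$name" --versions '$Latest' \
    --query 'LaunchTemplateVersions[0].LaunchTemplateData.MetadataOptions'
done
```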
Alternatively, you can check your EKS cluster for network policies blocking access to the metadata service:
kubectl get networkpolicy --all-namespaces
To make sure that your cluster is secure, you can also use the Managed Kubernetes Auditing Toolkit (MKAT):
$ mkat eks test-imds-access
_ _
_ __ ___ | | __ __ _ | |_
| '_ ` _ \ | |/ / / _` | | __|
| | | | | | | < | (_| | | |_
|_| |_| |_| |_|\_\ \__,_| \__|
2024/07/03 11:08:52 Connected to EKS cluster mkat-cluster
2024/07/03 11:08:52 Testing if IMDSv1 and IMDSv2 are accessible from pods by creating a pod that attempts to access it
2024/07/03 11:08:58 IMDSv2 is not accessible to pods in your cluster: unable to establish a network connection to the IMDS
2024/07/03 11:09:00 IMDSv1 is not accessible to pods in your cluster: able to establish a network connection to the IMDS, but no credentials were returned
Remediate vulnerable resources
Enforce IMDSv2 on all worker nodes and set the http-put-response-hop-limit to 1. Here's an example assuming your EKS worker nodes are managed by a launch template:
resource "aws_launch_template" "worker-nodes" {
  name = "eks-worker-nodes"

  metadata_options {
    http_tokens                 = "required"
    http_put_response_hop_limit = 1
  }
}
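Launch template changes only apply to newly launched nodes. For worker nodes that are already running, the same settings can be enforced directly on the instance; this is a sketch, and the instance ID is a placeholder to replace with your own:

```shell
# Enforce IMDSv2 with a hop limit of 1 on a running instance
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-put-response-hop-limit 1
```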
In addition, you can use the region-level instance metadata defaults to ensure these settings are applied by default to future instances. Note that enforcing IMDSv2 alone is not enough: the http-put-response-hop-limit must also be set to 1, so that IMDSv2 token responses cannot traverse the extra network hop introduced by containerized workloads.
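These region-level defaults can be set with the AWS CLI. A sketch; modify-instance-metadata-defaults applies per region, to instances launched without explicit metadata options of their own:

```shell
# Make IMDSv2 with a hop limit of 1 the regional default for new instances
aws ec2 modify-instance-metadata-defaults \
  --http-tokens required \
  --http-put-response-hop-limit 1
```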
For a second layer of defense, you can also use a Kubernetes NetworkPolicy to block pod access to the metadata service.
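For example, a namespace-wide egress policy can allow all traffic except the link-local metadata address. This is a sketch: it assumes a CNI that enforces NetworkPolicy (such as Calico, Cilium, or the VPC CNI with the network policy agent enabled), and it must be applied in each namespace you want to protect:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-imds
  namespace: default
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32   # block the instance metadata service
```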
Finally, the Managed Kubernetes Auditing Toolkit (MKAT) allows you to validate that your configuration is properly blocking pod access to the metadata service.
How Datadog can help
Cloud Security Management
Datadog Cloud Security Management detects this vulnerability using out-of-the-box rules.
References
Restrict access to the instance profile assigned to the worker node (AWS documentation)
Limit IMDS access (AWS documentation)
Attacking and securing cloud identities in EKS (securitylabs.datadoghq.com)