Lab 05 - Hardening K8s
Warning
Do NOT modify any deployment configuration .yaml files, unless explicitly specified in the lab material. Otherwise, you might be required to start the lab again.
Part I: Interacting with the cluster
Generally, to interact with the cluster, the kubectl tool is used. For example, you can view the nodes in your cluster with the command:
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane 5m48s v1.28.3
What is the purpose of each field?
NAME: This is the name of the node, in this case, "minikube". This is the default name for the node since we are using Minikube to create the cluster.
STATUS: This column shows the status of the node. "Ready" means that the node is healthy and ready to accept pods; other values such as "NotReady" or "Unknown" indicate a problem. More detailed node conditions (e.g. "MemoryPressure", "DiskPressure", "NetworkUnavailable") are reported by kubectl describe node.
ROLES: This is the role assigned to the node. The "control-plane" role means this node hosts the control-plane components that manage the Kubernetes cluster. Worker nodes, which actually run the applications, would not have this role.
AGE: This is how long ago the node was created.
VERSION: This is the version of Kubernetes that is running on the node.
Let us list all the available namespaces in the cluster.
kubectl get namespaces
Output:
NAME STATUS AGE
default Active 7m1s
kube-node-lease Active 7m1s
kube-public Active 7m1s
kube-system Active 7m1s
Namespaces in Kubernetes are intended for environments with many users spread across multiple teams or projects.
They provide a scope for names and are a way to divide cluster resources between multiple users.
Not all installations will have the same namespaces, as some Kubernetes software may add additional namespaces. In our case, we have 4 default namespaces that come with a fresh Minikube install:
default: This is the default namespace that gets created when Kubernetes is installed. If no namespace is specified with a kubectl command, the 'default' namespace is used. User-created pods and other resources often go here.
kube-system: This namespace is for objects created by the Kubernetes system itself. This is where resources that are part of the Kubernetes system are usually located, like system-level components that are necessary for Kubernetes to function.
kube-public: This namespace is automatically created and is readable by all users (including those not authenticated). While this namespace is mostly reserved for cluster usage, in practice it is rarely used. Some clusters use it to hold public configuration details such as ConfigMaps.
kube-node-lease: This namespace contains lease objects associated with each node which improves the performance of the node heartbeats as the cluster scales.
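To get a feel for what lives in each of these namespaces, you can list the resources inside one of them; for example (the pod list below is only indicative and depends on the Minikube version):
kubectl get pods -n kube-system
# typically shows core components such as etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy and coredns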
Tasks:
a) As the first task of the lab, list all the pods across all namespaces.
Solution
kubectl get pods -A
b) What is the address of the cluster's control plane? And the address of the DNS service?
Solution
kubectl cluster-info
Expected output:
Kubernetes control plane is running at https://192.168.49.2:8443
CoreDNS is running at https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
c) What is the resource capacity of the 'minikube' node? And the operating system version of the node?
Solution
kubectl describe node minikube
Expected output:
[...]
OS Image: Ubuntu 22.04.3 LTS
Operating System: linux
Architecture: amd64
[...]
Capacity:
cpu: 5
ephemeral-storage: 71077508Ki
hugepages-2Mi: 0
memory: 10231668Ki
pods: 110
d) Create a new namespace called lab-web. Within it, create a simple pod running an nginx server, called lab-nginx-pd. List the pod to ensure that its status is Running.
What is the IP of the lab-nginx-pd pod?
Can you curl the nginx service hosted in the lab-nginx-pd pod from the host system? Why?
Solution
kubectl create namespace lab-web
kubectl run lab-nginx-pd --image=nginx --namespace=lab-web
kubectl get pods --namespace=lab-web
kubectl describe pod lab-nginx-pd --namespace=lab-web
Kubernetes pods are attached to the cluster's pod network rather than the host's network. This network is only reachable from within the Kubernetes node (in this case, Minikube) and not directly accessible from the host system.
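If you only need the pod IP, a compact alternative to kubectl describe is a jsonpath query (a sketch; the address shown is just an example):
kubectl get pod lab-nginx-pd -n lab-web -o jsonpath='{.status.podIP}'
# e.g. 10.244.0.25 -- an address on the cluster's pod network, hence not reachable from the host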
e) What is the IP of the minikube node?
Access the minikube node. Can you curl the nginx service hosted in the lab-nginx-pd pod from the minikube node? Why?
Return to the host system afterward.
Solution
minikube ip
minikube ssh
curl http://<POD_IP>   # use the pod IP found in the previous task (e.g. 10.244.0.25)
You can curl the nginx service from the Minikube node because the node is part of the cluster's pod network and can communicate directly with the pods.
f) List all services across all the namespaces. Then, list all the services in the lab-web namespace. Is there any nginx service listed?
Solution
kubectl get svc -A
kubectl get svc -n lab-web
# there should be nothing listed
g) Expose the nginx service on port 80. The service should be named nginx-service.
Then, list all the services in the lab-web namespace. Is there any nginx service listed?
Can you now curl the nginx service hosted in the lab-nginx-pd pod from the host system?
Solution
kubectl expose pod lab-nginx-pd --port=80 --type=NodePort -n lab-web --name nginx-service
kubectl get svc -n lab-web
Expected output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service NodePort 10.109.237.234 <none> 80:31977/TCP 2s
Notice the internal and forwarded port.
Now, get the internal IP address:
kubectl describe node minikube | grep -C 1 Addres
Expected output:
Ready True Thu, 18 Jan 2024 10:00:35 +0200 Thu, 18 Jan 2024 09:40:10 +0200 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Based on the port and the internal IP address, one can now curl the nginx-service from the host:
curl http://192.168.49.2:31977
<!DOCTYPE html>
[...]
Notice that the internal IP and the forwarded port might be different for different students.
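As a convenience, Minikube can also assemble the NodePort URL for you; the following is equivalent to combining the internal IP and the forwarded port by hand (output is an example):
minikube service nginx-service -n lab-web --url
# e.g. http://192.168.49.2:31977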
Part II: Exploring Dangerous Config Attributes
In this section, we are going to explore the default configuration of the Kubernetes cluster and identify potential security risks, such as anonymous authentication and unrestricted API server access.
1. Risks of privileged: true
Access the vuln_pods lab folder. Within it, you will see a deployment config file called privileged_enabled.yaml. Let's see its contents.
$ cat privileged_enabled.yaml
apiVersion: v1
kind: Pod
metadata:
name: priv-enabled
namespace: lab-web
spec:
containers:
- name: priv-enabled
image: ubuntu
command: [ "/bin/sh", "-c", "--" ]
args: [ "while true; do sleep 40; done;" ]
securityContext:
privileged: true
As you can see, privileged: true is present. Upon deploying the pod, it will be available in the lab-web namespace. Let's deploy the pod.
$ kubectl apply -f privileged_enabled.yaml
pod/priv-enabled created
Let's list the pods within the namespace to ensure that the pod is running.
$ kubectl get pods --namespace=lab-web
NAME READY STATUS RESTARTS AGE
lab-nginx-pd 1/1 Running 1 (110m ago) 19h
priv-enabled 1/1 Running 0 12m
Now, let's get into the created pod.
$ kubectl exec --namespace lab-web priv-enabled -it -- /bin/bash
Upon successful execution of this command, a bash shell within the pod should be presented to us. Let us list the available filesystems:
root@priv-enabled:/# df | grep /dev
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 65536 0 65536 0% /dev
/dev/sda1 71077508 50247396 17193788 75% /etc/hosts
shm 65536 0 65536 0% /dev/shm
The output may differ on other systems, but in our case the filesystem of interest is /dev/sda1. This is the block device backing the root filesystem of the host machine that runs the cluster; it is also the largest filesystem. Let's attempt to mount it in a /tmp directory.
root@priv-enabled:/# mkdir /tmp/host-fs
root@priv-enabled:/# mount /dev/sda1 /tmp/host-fs/
root@priv-enabled:/# cd /tmp/host-fs
root@priv-enabled:/# ls
bin cdrom dev home initrd.img.old lib32 libx32 media opt reports run snap svr tmp var vmlinuz.old
boot core etc initrd.img lib lib64 lost+found mnt proc root sbin srv sys usr vmlinuz workspace
You now have access to the filesystem of the system that is running under your cluster! As you can see, the privileged: true container-level security context breaks down almost all the isolation that containers are supposed to provide.
What's the worst an attacker could do, assuming they manage to obtain access to a privileged pod? A few examples (see the enumeration sketch after this list):
An attacker can look for kubeconfig files on the host filesystem. With luck, they will find a cluster-admin config with full access to everything.
An attacker can access the tokens from all pods on the node. Using something like kubectl auth can-i --list, one can check whether any pod has a token that grants more permissions than the current ones. An attacker would typically look for tokens that allow getting secrets or creating pods and deployments in kube-system, or that allow creating clusterrolebindings.
An attacker can add their own SSH key. If the attacker also gains network access to SSH to the node, they can add their own public key to the node and SSH in for full interactive access.
An attacker can crack hashed passwords by grabbing the hashes in /etc/shadow. If the brute-force attack yields something useful, the attacker can attempt lateral movement by trying those cracked credentials against other nodes.
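For illustration, a rough sketch of what such enumeration could look like from inside the privileged pod, after mounting the host filesystem (the paths are common locations and may not all exist on this particular host):
# kubeconfig files that may hold powerful credentials
ls /tmp/host-fs/root/.kube/ /tmp/host-fs/home/*/.kube/ 2>/dev/null
ls /tmp/host-fs/etc/kubernetes/ 2>/dev/null
# password hashes for offline cracking
head -n 5 /tmp/host-fs/etc/shadow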
Tasks:
a) Describe the lab-nginx-pd pod. Is there any privileged: true setting?
Solution
kubectl get pod lab-nginx-pd -n lab-web -o yaml | grep priv
Expected output: One should observe no privileged setting enabled by default.
b) Access the lab-nginx-pd pod, then attempt to mount the host fs again on this pod. Are you able to do it?
Solution
kubectl exec --namespace lab-web lab-nginx-pd -it -- /bin/bash
mkdir /tmp/host-fs
mount /dev/sda1 /tmp/host-fs/
Expected output:
mount: /tmp/host-fs: permission denied.
dmesg(1) may have more information after failed mount system call.
2. Risks of hostPID: true
Access the vuln_pods lab folder. Within it, you will see a deployment config file called hostpid_enabled.yaml. Let's see its contents.
$ cat hostpid_enabled.yaml
apiVersion: v1
kind: Pod
metadata:
name: hostpid-enabled
namespace: lab-web
spec:
hostPID: true
containers:
- name: hostpid-enabled
image: ubuntu
command: [ "/bin/bash", "-c", "--" ]
args: [ "apt-get update && apt-get install -y curl; while true; do sleep 30; done;" ]
As you can see, hostPID: true is present in the config. Upon deploying the pod, it will be available in the lab-web namespace. Let's deploy the pod.
$ kubectl apply -f hostpid_enabled.yaml
pod/hostpid-enabled created
Let's list the pods within the namespace to ensure that the pod is running.
$ kubectl get pods --namespace=lab-web
NAME READY STATUS RESTARTS AGE
hostpid-enabled 1/1 Running 0 94m
lab-nginx-pd 1/1 Running 1 (3h19m ago) 20h
priv-enabled 1/1 Running 0 27m
Now, let's get into the created pod.
$ kubectl exec --namespace lab-web hostpid-enabled -it -- /bin/bash
Upon successful execution of this command, a bash shell within the pod should be presented to us. Let us explore the available processes:
root@hostpid-enabled:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 18672 11532 ? Ss 07:05 0:00 /sbin/init
root 242 0.0 0.0 15416 9200 ? Ss 07:05 0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root 727 0.1 0.5 1652840 61092 ? Ssl 07:06 0:20 /usr/bin/containerd
root 951 1.1 1.0 2717728 105340 ? Ssl 07:06 2:31 /usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docke
root 1180 0.8 0.4 755748 48124 ? Ssl 07:06 1:49 /usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.k8s.io/pause:3.9 --network-plugin=cni --hairpin-mod
root 1590 1.5 1.0 1784264 105140 ? Ssl 07:06 3:12 /var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml -
root 1782 0.0 0.0 719848 9696 ? Sl 07:06 0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 60f08b2fa3051fbebc8a1cf93e9860eec0572f2cb1c198ee3a7cd16b0226d96f -address /run/containerd/c
root 1803 0.0 0.1 719848 10372 ? Sl 07:06 0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id eb283f88078a6bcf3c0bbd8e971339a3d51803cad1f85de61815b109980ac2b0 -address /run/containerd/c
root 1806 0.0 0.0 719848 9452 ? Sl 07:06 0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id c3f442408dca707d74faced83d000de043bddc1a3264244bf41244750165094a -address /run/containerd/c
root 1829 0.0 0.0 719592 9452 ? Sl 07:06 0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4c33f08bb1d5472971e268c7519b7d39b16771dc25374142835eab3c6a4fe0ba -address /run/containerd/c
65535 1887 0.0 0.0 1024 4 ? Ss 07:06 0:00 /pause
[...]
Upon closer inspection, we notice some interesting processes:
root 204398 0.0 0.0 11388 7360 ? Ss 10:36 0:00 nginx: master process nginx -g daemon off;
101 204432 0.0 0.0 11852 2800 ? S 10:36 0:00 nginx: worker process
101 204433 0.0 0.0 11852 2800 ? S 10:36 0:00 nginx: worker process
101 204434 0.0 0.0 11852 2800 ? S 10:36 0:00 nginx: worker process
101 204435 0.0 0.0 11852 2800 ? S 10:36 0:00 nginx: worker process
101 204436 0.0 0.0 11852 2800 ? S 10:36 0:00 nginx: worker process
Might these be, by any chance, the processes pertaining to the nginx-service service, hosted by the lab-nginx-pd pod?
They are. Let's attempt to kill them and see what happens.
root@hostpid-enabled:/# while true; do kill -9 $(ps aux | grep '[n]ginx' | awk '{print $2}') &>/dev/null; done
Let's unpack the command:
The ps command lists all the processes.
The kill -9 command sends the SIGKILL signal to a process, terminating it immediately.
The grep filters the list based on the search string; the [n] trick avoids matching the grep process itself.
The awk selects the second field of each line, which is the PID.
The $(x) construct executes the command x, then substitutes its output into the outer command.
The while true; do <cmd>; done construct runs <cmd> in an infinite loop.
The &>/dev/null redirects both standard output and error output to /dev/null.
The processes are terminated. We have successfully stopped the nginx service, essentially causing a DoS (Denial of Service) attack. Let us list the pods in the lab-web namespace to see their status:
NAME READY STATUS RESTARTS AGE
hostpid-enabled 1/1 Running 0 117m
lab-nginx-pd 0/1 CrashLoopBackOff 5 (15s ago) 20h
priv-enabled 1/1 Running 1 (18m ago) 19m
Depending on the moment when the command is executed, the status of the lab-nginx-pd pod should be either CrashLoopBackOff or Error. curl-ing the nginx server should fail.
$ curl http://192.168.49.2:30265
curl: (7) Failed to connect to 192.168.49.2 port 30265: Connection refused
NOTE: To obtain the internal IP of the minikube node, run:
$ kubectl describe node minikube | grep -C 1 Addres
To obtain the forwarded nginx port, run:
$ kubectl get svc -n lab-web
Beyond processes from other pods, one can also list all processes on the host itself. Attempting to kill Kubernetes-associated processes, for example, will likely disrupt the entire cluster and may cause data loss.
Tasks:
c) Stop the while loop. How does the pod status change? Does the nginx service become available immediately?
Solution
kubectl get pods --namespace lab-web
Expected output: the lab-nginx-pd pod should have the Running status after about 30s.
d) Describe the priv-enabled pod. Is there any hostPID: true setting?
Solution
kubectl get pod priv-enabled -n lab-web -o yaml | grep hostPID
# One should observe no hostPID setting enabled by default
e) Access the priv-enabled pod, then attempt to list the nginx processes pertaining to the lab-nginx-pd pod. Are you able to reproduce the attack above?
Solution
kubectl exec --namespace lab-web priv-enabled -it -- /bin/bash
ps aux | grep nginx
Expected output: no nginx process should be listed.
3. Risks of hostNetwork: true
To illustrate the risks of hostNetwork, let's consider the situation of a worker that uses a token to connect to an HTTP-based API (./vuln_pods/worker_curl.yaml):
apiVersion: v1
kind: Pod
metadata:
name: worker-curl
namespace: lab-web
spec:
containers:
- name: worker-curl
image: curlimages/curl
command: ["/bin/sh"]
args: ["-c", "while true; do sleep 3; curl -o /dev/null -s -w '%{http_code}\n' http://nginx-service.lab-web.svc.cluster.local -H 'Authorization: Bearer MY_ULTRA_SECRET_TOKEN'; done"]
Note
Normally, secrets should not be present in the deployment configuration file.
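A safer pattern, sketched below, is to store the token in a Kubernetes Secret and reference it from the pod spec instead of hard-coding it (the names api-token and API_TOKEN are illustrative, not part of the lab files):
apiVersion: v1
kind: Secret
metadata:
  name: api-token
  namespace: lab-web
type: Opaque
stringData:
  token: MY_ULTRA_SECRET_TOKEN
# ...then, in the container spec:
#   env:
#   - name: API_TOKEN
#     valueFrom:
#       secretKeyRef:
#         name: api-token
#         key: token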
Let's start the worker-curl pod:
$ kubectl apply -f worker_curl.yaml
pod/worker-curl created
What happens if a pod that has hostNetwork: true is captured by an attacker?
Consider the following pod config (./vuln_pods/hostnetwork_enabled.yaml):
apiVersion: v1
kind: Pod
metadata:
name: network-sniffer
namespace: lab-web
spec:
hostNetwork: true
containers:
- name: network-sniffer
image: ubuntu
command: [ "/bin/bash", "-c", "--" ]
args: [ "apt-get update && apt-get install -y tcpdump; ; while true; do sleep 30; done;"]
Let's create the network-sniffer pod:
$ kubectl apply -f hostnetwork_enabled.yaml
pod/network-sniffer created
# wait until tcpdump is installed, then press ctrl+c:
$ kubectl logs -f network-sniffer --namespace=lab-web
[...]
Setting up libpcap0.8:amd64 (1.10.1-4build1) ...
Setting up tcpdump (4.99.1-3ubuntu0.1) ...
Processing triggers for libc-bin (2.35-0ubuntu3.6) ...
^C
Now, let's get into the network-sniffer pod and see what an attacker can do from it.
$ kubectl exec --namespace lab-web network-sniffer -it -- /bin/bash
root@minikube:/#
Let's try capturing all the HTTP packets on port 80 and the nginx-service port:
root@minikube:/# tcpdump -i any -U -w /dev/stdout port 80 or port 30265 | grep -A 50 -a 'GET '
[...]
GET / HTTP/1.1
Host: nginx-service.lab-web.svc.cluster.local
User-Agent: curl/8.5.0
Accept: */*
Authorization: Bearer MY_ULTRA_SECRET_TOKEN
[...]
An attacker would be able to sniff traffic directed towards nginx-service from another pod! Then, by capturing various tokens and secrets, they could attempt to move laterally to other pods.
Tasks:
f) Delete the network-sniffer pod.
Solution
kubectl delete pod network-sniffer -n lab-web
g) Remove the hostNetwork section from vuln_pods/hostnetwork_enabled.yaml and re-deploy the pod. Are you able to capture traffic from other pods now?
Solution
vim hostnetwork_enabled.yaml
# remove the hostNetwork section
kubectl apply -f hostnetwork_enabled.yaml
kubectl exec --namespace lab-web network-sniffer -it -- /bin/bash
tcpdump -i any -U -w /dev/stdout port 80 or port 30265 | grep -A 50 -a 'GET '
Expected output: grep matches nothing, as the only traffic that can now be intercepted is the traffic directed towards the network-sniffer pod itself.
Part III: Exploring Default Configurations
1. Default service account token
By default, Kubernetes creates a default service account in each namespace and mounts its token into every Pod. This can be a security issue if a Pod doesn't need to interact with the Kubernetes API but can still do so with the default account.
To exemplify this risk, let us attempt to access this token and use it to perform actions on the cluster.
First, let's confirm that the default service account exists for our lab-web namespace:
$ kubectl get serviceaccounts -n lab-web
NAME SECRETS AGE
default 0 24h
Now, let's assume that the default service account has permission to retrieve logs in the current namespace. To simulate this, we need to do the following:
Create a role that allows access to pod logs within the namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: lab-web
name: pod-log-reader
rules:
- apiGroups: [""]
resources: ["pods", "pods/log"]
verbs: ["get", "list", "watch"]
Bind this role to the default service account:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pod-logs
namespace: lab-web
subjects:
- kind: ServiceAccount
name: default
namespace: lab-web
roleRef:
kind: Role
name: pod-log-reader
apiGroup: rbac.authorization.k8s.io
Task:
a) Create two files named log_reader_role.yaml and log_reader_binding.yaml containing the Role and RoleBinding above. Apply these policies to the cluster.
Solution
kubectl apply -f log_reader_role.yaml
kubectl apply -f log_reader_binding.yaml
So far, there is nothing inherently wrong security-wise. Having permission to access logs can be a legitimate requirement for:
Debugging: If you're debugging an application that's running inside a pod, it might be useful to be able to read the logs of other pods in the same namespace.
Log Aggregation: If you're running a log aggregation tool inside your cluster that needs to collect logs from all pods, you might want to give the service account that this tool uses the ability to read logs.
Monitoring and Alerting: If you have a monitoring and alerting tool that needs access to the logs to monitor for specific events or errors, then it would need permissions to read logs.
The issue appears when an attacker manages to get into a pod that was created without disabling the automount of the default service account token, which, as mentioned earlier, is enabled by default.
To disable this default, one needs to configure automountServiceAccountToken: false within a deployment config file. If we take a look at hostpid_enabled.yaml, we see that this is not the case. Let's see how an attacker would abuse this.
Start by accessing the hostpid-enabled pod:
$ kubectl exec --namespace lab-web hostpid-enabled -it -- /bin/bash
root@hostpid-enabled:/#
Get the default service account token that is automatically mounted into the container due to the default configuration:
root@hostpid-enabled:/# TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
We now have everything we need. Let's build the request to the Kubernetes API that allows us to read the logs of another pod:
Note
The KUBERNETES_SERVICE_HOST
and KUBERNETES_SERVICE_PORT
environment variables are automatically created and made available to the pod, pointing to the Kubernetes API server. These variables combined typically represent the Kubernetes API server address.
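You can verify these variables from inside the pod; the values below are typical Minikube defaults and may differ in your cluster:
root@hostpid-enabled:/# env | grep KUBERNETES_SERVICE
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_HOST=10.96.0.1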
root@hostpid-enabled:/# curl -sk https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/lab-web/pods/lab-nginx-pd/log -H "Authorization: Bearer $TOKEN"
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
[...]
2024/01/18 12:30:51 [notice] 1#1: nginx/1.25.3
2024/01/18 12:30:51 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2024/01/18 12:30:51 [notice] 1#1: OS: Linux 4.15.0-213-generic
2024/01/18 12:30:51 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/01/18 12:30:51 [notice] 1#1: start worker processes
2024/01/18 12:30:51 [notice] 1#1: start worker process 28
[...]
10.244.0.1 - - [18/Jan/2024:13:08:51 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.58.0" "-"
10.244.0.1 - - [18/Jan/2024:13:11:46 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.58.0" "-"
10.244.0.1 - - [18/Jan/2024:13:12:25 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.58.0" "-"
[...]
Depending on what the application logs, the entries might contain sensitive information like usernames, IP addresses, or even passwords. An attacker who gains access to these logs might gain further access to the system. This would not have been possible if the pod had been hardened. Let's test that.
Tasks:
b) Copy hostpid_enabled.yaml to a new configuration file called hardened-pod.yaml. Remove hostPID: true and disable the default service account token automount. Name the pod hardened-worker. Ensure that the pod is part of the lab-web namespace.
Solution
cd vuln_pods/
cp hostpid_enabled.yaml hardened-pod.yaml
vim hardened-pod.yaml
cat hardened-pod.yaml
Expected output:
apiVersion: v1
kind: Pod
metadata:
name: hardened-worker
namespace: lab-web
spec:
automountServiceAccountToken: false
containers:
- name: hardened-worker
image: ubuntu
command: [ "/bin/bash", "-c", "--" ]
args: [ "apt-get update && apt-get install -y curl; while true; do sleep 30; done;" ]
c) Start the pod and attempt to access the logs of the lab-nginx-pd pod again. Is this still possible?
Solution
kubectl apply -f hardened-pod.yaml
kubectl exec --namespace lab-web hardened-worker -it -- /bin/bash
cat /var/run/secrets/kubernetes.io/serviceaccount/token
Expected output:
cat: /var/run/secrets/kubernetes.io/serviceaccount/token: No such file or directory
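With no token mounted, the API request used earlier can no longer authenticate; repeating it without the Authorization header is treated as an anonymous request and should be rejected (a sketch; the exact error body may differ):
curl -sk https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/lab-web/pods/lab-nginx-pd/log
# ... "reason": "Forbidden", "code": 403 ...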
2. Default network permissions
By default, pods are non-isolated and accept traffic from any source. Let's prove this.
For starters, let's get the IP of the lab-nginx-pd pod, where the nginx-service runs.
$ kubectl get pods -l run=lab-nginx-pd -o jsonpath='{.items[0].status.podIP}' --namespace=lab-web
10.244.0.25
NOTE: You might observe a different IP.
Then, let's get into the hardened-worker pod.
kubectl exec -it hardened-worker --namespace=lab-web -- /bin/bash
root@hardened-worker:/#
As the nginx-service listens on port 80, let's attempt to curl it.
root@hardened-worker:/# curl http://10.244.0.25
<!DOCTYPE html>
[...]
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
As mentioned earlier, there is no network isolation between pods by default. We will explore how to enforce network policies in Part IV of the lab.
Part IV: Network Policies
For this section, we will create and apply Network Policies to restrict pod-to-pod communication.
Let's start by creating a new namespace called lab-network:
$ kubectl create namespace lab-network
namespace/lab-network created
Within this namespace, let's deploy two apps:
$ kubectl run nginx-pod --image=nginx --port=80 --namespace=lab-network
pod/nginx-pod created
$ kubectl run hello-app-pod --image=gcr.io/google-samples/hello-app:1.0 --port=8080 --namespace=lab-network
pod/hello-app-pod created
$ kubectl get pods -n lab-network
NAME READY STATUS RESTARTS AGE
hello-app-pod 1/1 Running 0 6s
nginx-pod 1/1 Running 0 11s
Let's also expose the ports of the apps, to make them accessible from other pods:
$ kubectl expose pod hello-app-pod --type=NodePort --port=8080 --namespace=lab-network --name=hello-app-service
service/hello-app-service exposed
$ kubectl expose pod nginx-pod --type=NodePort --port=80 --namespace=lab-network --name=nginx-service
service/nginx-service exposed
$ kubectl get svc -n lab-network
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-app-service NodePort 10.108.236.32 <none> 8080:30825/TCP 12s
nginx-service NodePort 10.105.160.89 <none> 80:31578/TCP 7s
Now that everything is in place, let's verify that there are no network policies active by default, by accessing the apps from a pod within another namespace:
$ kubectl exec --namespace lab-web worker-curl -it -- /bin/sh
~ $
~ $ curl hello-app-service.lab-network:8080
Hello, world!
Version: 1.0.0
Hostname: hello-app-pod
~ $
~ $ curl nginx-service.lab-network
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
[...]
As there are no network policies in place, the apps are accessible from any pod within the cluster. Let's change that.
Tasks:
a) Deny All Traffic
Apply a NetworkPolicy that denies all ingress and egress traffic in this namespace. Try to access your application from another pod and observe the result.
Finally, remove the "deny-all" policy.
Solution
$ cat deny-all.yaml
Expected output:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-all
namespace: lab-network
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
This NetworkPolicy selects all pods in the lab-network namespace (due to the empty podSelector) and denies all ingress and egress traffic (due to the policyTypes field containing both Ingress and Egress).
Then apply the policy:
$ kubectl apply -f deny-all.yaml
networkpolicy.networking.k8s.io/deny-all created
After applying this policy, if you try to access your applications from another pod, you should not be able to reach them.
$ kubectl exec --namespace lab-web worker-curl -it -- /bin/sh
~ $ curl hello-app-service.lab-network:8080
Expected output:
curl: (28) Failed to connect to hello-app-service.lab-network port 8080 after 130657 ms: Couldn't connect to server
Remove the deny-all policy using the following command:
$ kubectl delete networkpolicy deny-all -n lab-network
b) Allow Ingress Traffic From Certain Pods
Try to access the hello-app-service from the nginx-pod and also from a pod outside the namespace. Observe the results.
Apply a NetworkPolicy that allows ingress traffic to one application only from the other application (Hint: you can use labels for this).
Try again to access the hello-app-service from the nginx-pod and also from a pod outside the namespace. Observe the results.
Finally, remove the network policy.
Solution
As suggested, let's make use of labels to configure a NetworkPolicy that allows traffic only from one application to another. Let's assume we want to allow traffic only from nginx-pod to hello-app-pod.
First, let's ensure that our pods have labels that we can use in our NetworkPolicy.
$ kubectl label pod nginx-pod app=nginx-network -n lab-network
$ kubectl label pod hello-app-pod app=hello-network -n lab-network
Next, we will create a NetworkPolicy that allows ingress traffic to hello-app-pod only from nginx-pod. Here is a YAML for such a policy:
$ cat allow-nginx-to-hello.yaml
Expected output:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-nginx-to-hello
namespace: lab-network
spec:
podSelector:
matchLabels:
app: hello-network
ingress:
- from:
- podSelector:
matchLabels:
app: nginx-network
This NetworkPolicy selects the pod with label app=hello-network and allows ingress traffic only from the pod with label app=nginx-network.
Apply the policy:
$ kubectl apply -f allow-nginx-to-hello.yaml
networkpolicy.networking.k8s.io/allow-nginx-to-hello created
After applying this policy, if you try to access hello-app-pod from nginx-pod, you should be able to reach it:
$ kubectl exec --namespace lab-network nginx-pod -it -- /bin/sh
~ $ curl hello-app-service.lab-network:8080
Expected output:
Hello, world!
Version: 1.0.0
Hostname: hello-app-pod
~ $
However, if you try to reach hello-app-pod from a pod outside of the lab-network namespace, you should not be able to establish a connection:
$ kubectl exec --namespace lab-web worker-curl -it -- /bin/sh
~ $ curl hello-app-service.lab-network:8080
Expected output:
curl: (28) Failed to connect to hello-app-service.lab-network port 8080 after 130657 ms: Couldn't connect to server
Finally, remove the allow-nginx-to-hello policy using the following command:
$ kubectl delete networkpolicy allow-nginx-to-hello -n lab-network
c) Allow Traffic To/From Specific Ports
Apply a NetworkPolicy that allows ingress traffic to both apps only on their specific ports.
Try to connect to your applications on that port and also on any other port. Observe the results.
Finally, remove the network policy.
Solution
Let's start with a NetworkPolicy that allows ingress traffic to the hello app only on port 8080. Here is a YAML for such a policy:
$ cat allow-ingress-on-8080.yaml
Expected output:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-port-8080
namespace: lab-network
spec:
podSelector:
matchLabels:
app: hello-network
ingress:
- ports:
- protocol: TCP
port: 8080
This NetworkPolicy selects the pod with label app=hello-network and allows ingress traffic only on port 8080.
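The task asks for both apps, so a second, analogous policy would cover the nginx pod on its port; a sketch is shown below (the name allow-port-80 is illustrative, and the walkthrough that follows only exercises the hello-app policy):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-port-80
  namespace: lab-network
spec:
  podSelector:
    matchLabels:
      app: nginx-network
  ingress:
  - ports:
    - protocol: TCP
      port: 80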
Apply the policy:
$ kubectl apply -f allow-ingress-on-8080.yaml
networkpolicy.networking.k8s.io/allow-port-8080 created
After applying this policy, if you try to access hello-app-pod from nginx-pod on port 8080, you should be able to reach it:
$ kubectl exec --namespace lab-network nginx-pod -it -- /bin/sh
~ $ curl hello-app-service.lab-network:8080
Expected output:
Hello, world!
Version: 1.0.0
Hostname: hello-app-pod
~ $
However, if you try to reach hello-app-pod from nginx-pod on any other port, you should not be able to establish a connection:
$ kubectl exec --namespace lab-network nginx-pod -it -- /bin/sh
~ $ curl hello-app-service.lab-network:80
Finally, remove the allow-port-8080 policy using the following command:
$ kubectl delete networkpolicy allow-port-8080 -n lab-network
d) Combine Multiple Policies
Apply a NetworkPolicy that denies all ingress and egress traffic in the namespace, and another policy that allows traffic only between the two applications.
Try to access one application from the other and also from a pod outside the namespace. Observe the results.
e) Default Deny All Egress Traffic
Apply a NetworkPolicy that denies all egress traffic from this namespace.
Try to access an external resource (like a public website) from your application and observe the result.
Part V: Role-Based Access Control (RBAC)
Part VI: Logging and Threat Detection
Part VII: Audit IaC
YAML Exercise Files
hardened-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: hardened-worker
namespace: lab-web
spec:
automountServiceAccountToken: false
containers:
- name: hardened-worker
image: ubuntu
command: [ "/bin/bash", "-c", "--" ]
args: [ "apt-get update && apt-get install -y curl; while true; do sleep 30; done;" ]
hostnetwork_enabled.yaml
apiVersion: v1
kind: Pod
metadata:
name: network-sniffer
namespace: lab-web
spec:
hostNetwork: true
containers:
- name: network-sniffer
image: ubuntu
command: [ "/bin/bash", "-c", "--" ]
args: [ "apt-get update && apt-get install -y tcpdump; ; while true; do sleep 30; done;"]
hostpid_enabled.yaml
apiVersion: v1
kind: Pod
metadata:
name: hostpid-enabled
namespace: lab-web
spec:
hostPID: true
containers:
- name: hostpid-enabled
image: ubuntu
command: [ "/bin/bash", "-c", "--" ]
args: [ "apt-get update && apt-get install -y curl; while true; do sleep 30; done;" ]
privileged_enabled.yaml
apiVersion: v1
kind: Pod
metadata:
name: priv-enabled
namespace: lab-web
spec:
containers:
- name: priv-enabled
image: ubuntu
command: [ "/bin/sh", "-c", "--" ]
args: [ "while true; do sleep 40; done;" ]
securityContext:
privileged: true