Lab 01: Running Workloads Imperatively
In this lab, you will begin to run actual workloads on your Kubernetes cluster.
Enabling Bash Completion
Bash Completion
While not technically necessary, this step will help you a lot while working with Kubernetes. Enable k8s command completion for the current session using the following command:
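The standard command documented by kubectl for this (assuming kubectl is already installed and bash-completion is available on the system) is:

source <(kubectl completion bash)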
Persistent Bash Completion
To enable bash completion for all future sessions, add the command to your .bashrc file:
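For example:

echo 'source <(kubectl completion bash)' >> ~/.bashrc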
Test Bash Completion
Try it out - list all pods in all namespaces, this time using tab completion.
Examples:
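For instance (the <TAB> marker shows where the Tab key is pressed; the exact completions offered depend on your cluster):

kubectl get pods --all-name<TAB>                 # should complete the flag to --all-namespaces
kubectl -n kube-system describe pod kube-<TAB>   # should offer the kube-... pod names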
Taking a Look Around
List all the namespaces in your cluster.
Hint
Namespaces are a resource just like everything else in k8s.
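For example, they can be listed with the usual get verb:

kubectl get namespaces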
Solution
List all the pods in each namespace.
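One possible command:

kubectl get pods --all-namespaces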
Solution
Right now, the only running pods are inside the kube-system namespace. List the pods, including the node on which every pod is currently running.
Hint
The -o wide option for kubectl get pods will help.
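Putting the hint together, something like:

kubectl get pods -n kube-system -o wide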
Solution
Do you notice a pattern in the pod-to-node distribution?
Solution
There are actually several patterns:
Some pods (like the kube-proxy ones) run a copy of the pod on each node in the cluster. These are part of so-called DaemonSets (see the command sketch after this list).
Other pods (like the controller-manager) only run on the master node.
Yet other pods (like the user workloads, which we will soon see) are not normally scheduled on the master - only on the worker nodes.
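If you want to confirm which of these pods are controlled by a DaemonSet, one option is:

kubectl get daemonsets -n kube-system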
Running a Simple Pod
Run a simple pod based on the alpine image (a very small, very lightweight Linux image). Use the following options, without specifying anything else (a command sketch follows the list):
pod name: alpine1
image name: alpine
interactive mode, keep tty open (-it)
command to run: sh
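One possible form of the command (on recent kubectl versions, kubectl run starts a single pod):

kubectl run alpine1 --image=alpine -it -- sh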
Hint
Solution
You should end up at a command-line prompt inside the pod. Verify this by checking the hostname of the machine you are now on.
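For instance:

hostname    # inside the pod, this should print the pod name (alpine1)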
Note
Keep in mind that connecting and running commands directly inside a container is NOT a best practice! We are just using it in the lab to illustrate various behaviours (also, it can be very useful for troubleshooting container issues).
Solution
What is the IP address of this pod?
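One way to check from inside the pod, assuming the busybox tools that ship with alpine:

ip addr show eth0    # or: hostname -i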
Solution
Exit the interactive session, leaving the pod running.
Hint
Press Ctrl-P Ctrl-Q.
Display a list of pods. What is the status of the pod? On which host is the pod running?
Solution
Get the IP address of the pod, as it is known by Kubernetes.
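For example, using the wide output format:

kubectl get pod alpine1 -o wide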
Hint
Solution
Ping the pod's IP address. Can you reach it? (Optional: also try to ping it from another node).
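For example, from one of the nodes (POD_IP is a placeholder for the address found above):

ping -c 3 POD_IP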
Note
We will get back to this in a future lesson, but this is the "magic" of Kubernetes networking - all pods are reachable from all nodes, on the same (private) IP address!
Solution
Running nginx
Run a new pod, with the following settings:
name: nginx1
image name: nginx
Hint
This is a "normal" pod - there is no need to run it in interactive mode. Also, since the nginx image will automatically start the nginx daemon, there is no need to specify a command to run.
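One possible command, following the same pattern as before:

kubectl run nginx1 --image=nginx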
Solution
Check that the pod is running. On what host has it been scheduled?
Note
Chances are it has been scheduled on the other node (not the one on which the alpine pod is running). Kubernetes does its best to balance the workload among worker nodes.
Solution
What is the IP address of the pod?
Hint
Solution
Connect to the IP address by using curl. You should see the default nginx welcome page.
Solution
curl POD_IP
Getting Inside a Running Pod
Run a bash shell inside the nginx pod.
Hint
kubectl exec -it POD_NAME -- COMMAND
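For example, with the pod created above (the standard nginx image is Debian-based and includes bash):

kubectl exec -it nginx1 -- bash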
Solution
Change the contents of the default nginx index.html file: /usr/share/nginx/html/index.html.
Hint
You could use a text editor or the following command:
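One possibility (the exact text is up to you; this is just an illustration):

echo "Hello from nginx1" > /usr/share/nginx/html/index.html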
Exit the shell (type exit or press Ctrl-D). Is the pod still running? Why or why not?
Note
When the main process inside a container is stopped, the entire container stops. However, the shell was not the main process inside the container - the main process was (and still is) nginx!
Solution
From inside the pod:
From the host (node):
Using curl, connect to the web server once again. You should receive the new contents of the index.html file.
Deleting a Pod
Time to do some cleanup… delete the alpine1 pod.
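For example:

kubectl delete pod alpine1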
Solution