Lab 04: Networking
In this lab, we will work with Kubernetes services, and use them to allow the outside world to access our pods.
Running a Custom Pod
We will use a custom webserver for this lab, just so you can see the inner workings of Kubernetes. Use kubectl run to start a pod (or, on older kubectl versions, a deployment) based on the bogd/python-webserver image.
The custom webserver (which is really just a small Python program serving the results of a shell script) listens on port 8080. Get the pod IP address, and connect to it using curl. You should see a webpage listing the hostname and IP addresses of the pod.
Solution
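One way to do it - a sketch, assuming a pod name of pws (any name will do):

```bash
# Start the webserver (newer kubectl versions create a single pod;
# older versions create a deployment)
kubectl run pws --image=bogd/python-webserver

# Find the pod IP address (shown in the IP column)
kubectl get pods -o wide

# Connect to the pod on port 8080 (replace 10.244.1.15 with your pod's IP)
curl http://10.244.1.15:8080
```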
Delete the deployment
Solution
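A sketch, assuming the pws name used above:

```bash
# If kubectl run created a deployment (older kubectl versions):
kubectl delete deployment pws

# If it created a standalone pod (newer kubectl versions):
kubectl delete pod pws
```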
Create a YAML file (16-python-pod.yaml) that describes a pod based on this image. The pod name should be pws-01.
Create a pod based on the YAML file. Make sure that you can connect to the pod IP address using curl.
Solution
16-python-pod.yaml:
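A minimal manifest matching the requirements above (the container name is an arbitrary choice):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pws-01
spec:
  containers:
  - name: webserver        # container name is arbitrary
    image: bogd/python-webserver
    ports:
    - containerPort: 8080
```

Create it with kubectl apply -f 16-python-pod.yaml, then check the pod IP with kubectl get pods -o wide and test it with curl.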
Delete the pod and recreate it from the YAML file. Try to connect to the same IP address again.
What happens?
Solution
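A sketch of the steps:

```bash
kubectl delete pod pws-01
kubectl apply -f 16-python-pod.yaml

# Compare the IP column with the previous pod IP
kubectl get pods -o wide
```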
Note
Most likely, the IP address has changed, even though the pod is absolutely identical to the previous one. It is never a good practice to rely on pod IP addresses in order to connect!
Creating a Service
OK then - we will create a service. Write a YAML file (17-service.yaml) describing a service:
The service name should be webservice-01
The service should match on selector app: webserver-01
The service should use TCP port 8080
Create a service based on this file. Verify that the service has been created successfully, and note the IP address allocated to the service.
Hint
Solution
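A minimal 17-service.yaml matching the requirements above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webservice-01
spec:
  selector:
    app: webserver-01
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
```

Create it and note the CLUSTER-IP:

```bash
kubectl apply -f 17-service.yaml
kubectl get svc webservice-01
```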
Using curl, try to connect to the service IP address. What happens? Why? We still have a pod running - why isn’t this working?
Hint
Take a closer look at the service, using kubectl describe svc. Look under "Endpoints".
Solution
There is no answer because there is no actual endpoint to respond. No pod in the system has a label matching the service selector.
Let us fix the issue - label the existing pod with the correct label, so that it will be correctly associated with the service.
Hint
The pod label has to match the service selector.
Solution
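A sketch using kubectl label:

```bash
# Add the label that the service selector expects
kubectl label pod pws-01 app=webserver-01
```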
Look at the service again. The pod should have automatically been added to the list of endpoints.
Solution
Connect to the service again. This time, you should receive a response from the server.
Solution
Modify the 16-python-pod.yaml file and add the label to the pod. This way, every time you create a pod based on this YAML file, it will automatically be labeled and picked up by the service.
Solution
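A sketch of the updated 16-python-pod.yaml - the only change is the labels field under metadata:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pws-01
  labels:
    app: webserver-01   # must match the service selector
spec:
  containers:
  - name: webserver
    image: bogd/python-webserver
    ports:
    - containerPort: 8080
```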
Delete the existing pod, and recreate it from the YAML file. Connect via curl to the service IP. You should still receive a response, even though the pod has changed (different pod IP).
Solution
Load Balancing
We will now create a new pod, identical to the first one.
Copy the 16-python-pod.yaml file to 18-python-pod2.yaml.
Edit 18-python-pod2.yaml, changing the pod name to pws-02. Create a pod based on this new file.
Solution
18-python-pod2.yaml:
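Identical to 16-python-pod.yaml, except for the pod name - a sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pws-02
  labels:
    app: webserver-01
spec:
  containers:
  - name: webserver
    image: bogd/python-webserver
    ports:
    - containerPort: 8080
```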
Look at the service endpoints - what do you see?
Solution
Connect to the service IP using curl, repeatedly. What do you notice?
Solution
The service automatically load balances between the servers. You will receive some replies from the first pod, others from the second one.
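A quick way to see this, assuming a service IP of 10.96.45.10 (replace it with your own service IP):

```bash
# Send several requests in a row - the reported hostname/IP will vary
for i in $(seq 1 10); do curl -s http://10.96.45.10:8080; done
```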
Services are OK for accessing data within the cluster - but what happens when we need to expose our applications to the outside world? Well… we'll still use a service - just a different type!
Copy the 17-service.yaml to 19-service-nodeport.yaml
Edit the new file:
Change the service name to webservice-02
Specify the service type (under .spec.type) as NodePort
Create a service based on this file.
Solution
19-service-nodeport.yaml:
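A sketch of the NodePort service (Kubernetes allocates the external port automatically unless you set nodePort explicitly):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webservice-02
spec:
  type: NodePort
  selector:
    app: webserver-01
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
```

Create it with kubectl apply -f 19-service-nodeport.yaml.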
Look at the list of services to find out the external port allocated for this service (by default it is a random port from a range above 30000).
Since the Kubernetes nodes are behind a NAT device, and using private IP addresses, we cannot connect to them directly. However, we can use the proxy to connect to them. Open a web browser on your laptop, and try connecting to one of the Kubernetes nodes, on the external port.
Note
You will need to look up the private IP addresses of your nodes (kubectl describe node). Once you have them, use a URL like the following:
http://proxy.labs.sass.ro/http:10.1.123.107:30554
(replace 10.1.123.107 with the IP address of your node, and 30554 with the service external port)
When a NodePort-type service is created, we can connect to the service using the allocated port on the IP address of every Kubernetes node!
Try refreshing your browser several times. What do you see?
Solution
The service automatically load balances between the two pods - you will see the IP address and hostname change, depending on which pod has been selected to answer your request.
Try connecting (via proxy) to the IP address of another Kubernetes node, on the same service port. What happens?
Solution
The result is exactly the same - the service behaves exactly the same, no matter what node you are connecting to, and what node the pod is actually running on!
Delete all services and pods.
Solution
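One way to do it, using the names from this lab:

```bash
kubectl delete svc webservice-01 webservice-02
kubectl delete pod pws-01 pws-02
```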
Challenge: A Replicated Web Server
If you want a real challenge, try to implement this task - a complete, end-to-end deployment of replicated web servers, all serving the same static content.
Hint
You already have all the necessary pieces to implement this. You just need to put them together.
Create a deployment of multiple web servers, all serving the same content. All the information necessary for this should be in a single YAML file.
The deployment should start with 4 replicas (but we can easily scale it up or down)
The webservers should use the nginx image
The webservers should mount an NFS volume from 10.10.80.23:/volume1/NFS_STUDENTS
The volume should be mounted under /usr/share/nginx/html (the default nginx web root), so that nginx will automatically serve static content from the NFS volume
The same YAML file should also create a service that exposes this server pool on a NodePort
In the end, you should be able to access your servers from the outside (via the proxy server).
Hint
For the NFS volume, you can use PV/PVCs, or you can map the volume directly to the pod. Your choice!
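A possible starting point - a single manifest containing both the Deployment and the NodePort service, with the NFS volume mapped directly into the pods (the names web-deployment and web-service, and the app: webapp label, are arbitrary choices):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 4                      # easy to scale up or down later
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: web-content
          mountPath: /usr/share/nginx/html   # default nginx web root
      volumes:
      - name: web-content
        nfs:
          server: 10.10.80.23
          path: /volume1/NFS_STUDENTS
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```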
Cleaning Up
Delete all deployments, replicasets, services and pods.
Solution
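A sketch, assuming the names from the challenge sketch above:

```bash
# Deleting the deployment also removes its replicaset and pods
kubectl delete deployment web-deployment
kubectl delete svc web-service

# Verify that your resources are gone (the built-in kubernetes service remains)
kubectl get deployments,replicasets,services,pods
```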