"We've got this project on kubernetes that you'll be helping maintain"
This started my journey with kubernetes. I found a great tutorial from freeCodeCamp, and this post is an attempt to set up some personal projects using what I learned. It's best to read the original though, as it's more in-depth.
I wanted to serve an index.html file for a first project, and a pod would be used for this. A pod is the smallest deployable unit in kubernetes, and it can contain one or more containers (e.g. docker containers). It's an isolated environment for running a docker image, providing storage and networking.
To create a pod, I can create one manually or use a workload resource, which provides extra features like replication, recreating pods when one stops working, and more. Workload resources have a pod template, which describes how to create the pods we want.
A deployment is a workload resource that creates replicas of the pod. I wanted 3 pods serving the index.html file, so I'd use a deployment.
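To make the pod/template distinction concrete, here's a minimal sketch of what a standalone pod manifest would look like. This file is purely illustrative (the names and image tag mirror the ones used later in this post); the deployment's pod template contains essentially the same spec section.

```yaml
# pod.yml -- illustrative only; in practice the deployment creates pods for us
apiVersion: v1
kind: Pod
metadata:
  name: static-website-pod
  labels:
    app: static-website-pod
spec:
  containers:
    - name: static-website-container
      image: static:0.1.0
      ports:
        - containerPort: 80
```

A bare pod like this isn't recreated if it dies, which is exactly what a workload resource adds on top.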
To start off, I installed packages to help me play with kubernetes locally:
```
sudo pacman -S minikube kubectl docker
sudo systemctl start docker
minikube config set driver docker
minikube start
```
I also created a Dockerfile that serves the index.html file using nginx.
```
# tagged static:0.1.0
FROM nginx:1.21.1
COPY index.html /usr/share/nginx/html/index.html
```
I created a yml file that defined the deployment resource that would be created in minikube.
```yaml
# static_deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-website-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: static-website-pod
  # pod template section
  template:
    metadata:
      labels:
        app: static-website-pod
    spec:
      containers:
        - name: static-website-container
          image: static:0.1.0
          ports:
            - containerPort: 80
```
This creates a deployment workload resource with the name static-website-deployment. The deployment ensures that there are 3 pods running the docker image at any one time (defined in replicas). The selector.matchLabels.app field defines which pods are being managed, and it must match the metadata.labels.app found in the template section; this metadata is applied to each pod that is created. The pod template defines how the pods are created: each of the 3 pods will run a container named static-website-container and expose port 80.
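Assuming the deployment above is applied, the same label can be used with kubectl to inspect only the pods it manages, and the replica count can be changed on the fly. A sketch (the pod name in the last command is a placeholder):

```shell
# list only the pods carrying the label the deployment selects on
kubectl get pods -l app=static-website-pod

# change the replica count without editing the manifest
kubectl scale deployment static-website-deployment --replicas=5

# deleting a pod demonstrates self-healing: the deployment recreates it
kubectl delete pod <one-of-the-pod-names>
```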
To run the above:
```
minikube start
eval $(minikube -p minikube docker-env) # ensure docker images are built in minikube context
docker build -t static:0.1.0 -f Dockerfile_static_content .
kubectl apply -f k8s/deployment.yml
```
To check that things are running as expected:
```
╰─$ kubectl get deployment
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
static-website-deployment   3/3     3            3           17s
╰─$ kubectl get pods
NAME                                         READY   STATUS    RESTARTS   AGE
static-website-deployment-57bdbf7d94-7ngwt   1/1     Running   0          4s
static-website-deployment-57bdbf7d94-9l5cv   1/1     Running   0          4s
static-website-deployment-57bdbf7d94-gj59k   1/1     Running   0          4s
```
I wanted to use curl to verify the pods were serving content, but the kubernetes environment is isolated from the host. To deal with this, kubernetes has services, which provide a means of exposing a set of pods. I set up a LoadBalancer service, which provides an IP address and a port that can be used to access the pods.
```yaml
# static_load_balancer.yml
apiVersion: v1
kind: Service
metadata:
  name: static-load-balancer
spec:
  selector:
    app: static-website-pod
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
```
and ran the following:
```
╰─$ kubectl apply -f static_load_balancer.yml
service/static-load-balancer unchanged
╰─$ kubectl get services
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes             ClusterIP      10.96.0.1       <none>        443/TCP        6d5h
static-load-balancer   LoadBalancer   10.105.222.19   <pending>     80:30133/TCP   116s
╰─$ curl $(minikube ip):30133
<!DOCTYPE html>
<html lang="en">
<head>
```
minikube ip provides the IP address of the minikube cluster, and the port is the second number in the PORT(S) column of the static-load-balancer service (the NodePort, 30133 here).
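As an aside, minikube can also print a ready-made URL for a service, which avoids assembling the IP address and NodePort by hand (assuming the service above exists):

```shell
# prints the reachable URL for the service, e.g. http://<minikube-ip>:<node-port>
minikube service static-load-balancer --url
```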
Since what we're exposing is HTTP traffic, I could also use an Ingress object, which defines rules for exposing HTTP and HTTPS routes to services in the cluster; an ingress controller is what fulfils those rules. Other advantages include SSL termination and name-based virtual hosting. I first needed to enable ingress in minikube with:
```
minikube addons enable ingress
```
An ingress routes traffic to a service, so we could point it at the LoadBalancer service previously created.
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: static-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: static-load-balancer
                port:
                  number: 80
```
However, it doesn't make much sense to have both an ingress object and a load balancer pointing to the same pods. Another service type I could use is ClusterIP, which provides an IP address internal to the cluster. This way there's only one entry point into minikube.
```yaml
# static_ingress.yml
---
apiVersion: v1
kind: Service
metadata:
  name: static-clusterip
spec:
  selector:
    app: static-website-pod
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: static-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: static-clusterip
                port:
                  number: 80
```
And now running:
```
╰─$ kubectl get services
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes             ClusterIP      10.96.0.1       <none>        443/TCP        6d6h
static-clusterip       ClusterIP      10.109.227.9    <none>        80/TCP         108s
static-load-balancer   LoadBalancer   10.105.222.19   <pending>     80:30133/TCP   81m
╰─$ kubectl get ingress
NAME             CLASS    HOSTS   ADDRESS     PORTS   AGE
static-ingress   <none>   *       localhost   80      25m
╰─$ curl $(minikube ip)
<!DOCTYPE html>
<html lang="en">
<head>
.
.
```
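When done experimenting, everything can be torn down with the same manifests (a sketch; the file names match the ones used earlier in this post):

```shell
# delete the resources each manifest created
kubectl delete -f static_ingress.yml
kubectl delete -f static_load_balancer.yml
kubectl delete -f k8s/deployment.yml

# stop the local cluster
minikube stop
```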
Having the basics of kubernetes down (workload resources, services and ingress), I tried to set up a django project. I'll add a link to it here when it's ready.