Kubernetes Essentials

“Welcome to the Orchestration of Containers”

In this article I am going to tell you about "Kubernetes Essentials". If you are completely new to the world of Kubernetes, this article is a good starting point. It is also useful if you already know some Kubernetes and want a quick, high-level refresher.

This article will teach you how to set up a 3-node Kubernetes cluster, run a simple Deployment and Service on the cluster, and explain the basics of microservices.

What is Kubernetes: At a very high level, Kubernetes is a container orchestrator: it manages containerized applications (using runtimes such as Docker, rkt, or containerd). The cluster provides high availability and scalability for the containerized application, and makes frequent code deployments easier. [Containers are lightweight units, often compared to small virtual machines, that package the whole application and its libraries together so it can run on any infrastructure.] More about Kubernetes: https://kubernetes.io

Setting up a 3-node cluster: Now that you have some idea of what Kubernetes is, let's jump into setting up the 3-node cluster. You need three servers provisioned in any cloud environment; installing Docker is the first step.

Step 1: Spin up servers and install Docker. Create three small Ubuntu 18.04 machines, tag one as master and the other two as node1 and node2, and install Docker with the following commands:
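A minimal sketch of the Docker installation on Ubuntu 18.04, pinning version 18.06.1 as used in the rest of this setup. The exact docker-ce package version string is an assumption; verify what is available with `apt-cache madison docker-ce` before installing. Run this on all three servers:

```shell
# Add Docker's official GPG key and apt repository (standard Docker CE setup)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Install and pin Docker 18.06.1 so apt upgrades do not break the cluster
# (version string 18.06.1~ce~3-0~ubuntu is an assumption for this release)
sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
sudo apt-mark hold docker-ce
```

Pinning and holding the package matters because kubeadm validates the Docker version, and an unattended upgrade could otherwise move the nodes to an unsupported runtime version.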

Docker 18.06.1 is now installed on the master and the worker nodes. Next we will install the Kubernetes components: kubeadm, which automates a large part of bootstrapping the cluster; kubelet, the node agent required to run containers; and kubectl (kube control), the command-line tool for interacting with the cluster. Install all three on all three servers.

Step 2: Install the Kubernetes components.

```
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

sudo apt-get update
sudo apt-get install -y kubelet=1.12.2-00 kubeadm=1.12.2-00 kubectl=1.12.2-00
sudo apt-mark hold kubelet kubeadm kubectl
```

Note: we add the GPG key, update the sources list, and then install a specific version of all three Kubernetes components. To verify the installation, run "kubeadm version".

Step 3: Bootstrap the Kubernetes cluster. Initialize the cluster with kubeadm, setting the CIDR block for pod networking:

```
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

To check the installation, run "kubectl version". Note: when kubeadm init finishes, it prints a join command containing a token; copy it and run it on both worker nodes so they become part of the cluster:

```
sudo kubeadm join {{ipaddress}}:6443 --token {{token}} --discovery-token-ca-cert-hash {{hash}}
```

Now the cluster is set up and the worker nodes have joined the master. The last step is configuring the cluster networking with Flannel. Kubernetes supports multiple networking models, which you can find in the documentation, but here I will go with Flannel.

Step 4: Set up Flannel networking.

```
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```

Note: this lets bridged traffic pass through iptables; writing it to /etc/sysctl.conf makes the setting persist across restarts, and sysctl -p applies it immediately. Now apply the Flannel configuration file provided by the CoreOS team, on the master node only:

```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
```

After this, "kubectl get nodes" shows all the nodes in Ready state, and "kubectl get pods -n kube-system" shows the system pods and the Flannel pods in Running state, which means the networking setup was successful. The Kubernetes cluster is now set up.

Kubernetes Concepts :

Containers and pods: Pods are the smallest unit in a Kubernetes cluster and are where the containers actually run. A pod has its own unique IP address within the cluster and can run one or more containers. For testing, create a simple YAML file called test.yml with the contents below, which spins up a pod using the nginx image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
```

Now run "kubectl apply -f test.yml" and it will create the pod inside the Kubernetes cluster:

[Image: the pod is up and running]

To get full information about the pod, run "kubectl describe pod nginx". It prints all the details about the pod and is very useful for debugging, especially the Events section, where most errors are captured. To delete the pod, run "kubectl delete pod nginx".

Clusters and nodes: A cluster is a set of nodes, consisting of the master node and the worker nodes. The master node is the control server that hosts the Kubernetes API; you can have multiple control servers as well. The worker nodes are where the pods actually run. "kubectl get nodes" lists all the nodes, and "kubectl describe node {{node-name}}" shows complete information about a node.

Networking in Kubernetes: A Kubernetes cluster has its own virtual network shared across the whole cluster, and all pods, regardless of which node they run on, can communicate with each other. Let's understand this with an example. Create two Deployments: one for nginx and one for busybox (from which we can run curl commands). Deployment 1, nginx: deploy1.yml (run kubectl apply -f deploy1.yml)

[Image: kubectl apply -f deploy1.yml]
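A minimal sketch of what deploy1.yml could look like for this example: a single-replica nginx Deployment. The name and labels are assumptions, not the article's original file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx          # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```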

Deployment 2, busybox: deploy2.yml (run kubectl apply -f deploy2.yml)

[Image: kubectl apply -f deploy2.yml]
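Likewise, deploy2.yml could be sketched as a busybox Deployment. A plain busybox image does not ship curl, so this sketch assumes an image that bundles it (radial/busyboxplus:curl) plus a long sleep to keep the pod running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox        # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: radial/busyboxplus:curl   # assumption: a busybox variant that includes curl
        command: ["sh", "-c", "sleep 3600"]   # keep the container alive for exec/curl tests
```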

Once both Deployments are created, you can see the IP addresses of the pods and where exactly they are running with "kubectl get pods -o wide".

Now exec into the busybox pod and curl the nginx pod, which is running as a different pod (possibly on a different node):

```
kubectl exec busybox -- curl {{nginx pod ip}}
```

It returns the "Welcome to nginx!" HTML page content. This is how networking works in Kubernetes.

Kubernetes Architecture:

[Image: Kubernetes architecture diagram]

As you can see above, a Kubernetes cluster has several components. Most of them run as pods within the control plane, and some, like the kubelet, run as a service.

  • etcd: the data store for the cluster, holding information about running pods, cluster state, and more.
  • kube-apiserver: the backbone of the cluster and the single entry point for interacting with it; everything that happens in the cluster starts at the API server.
  • kube-controller-manager: runs several controllers in the background for the cluster.
  • kube-scheduler: responsible for deciding what to run and where, assigning pods to nodes.
  • kubelet: runs as a service on each node; it talks to the API server and runs pods on the node when requested.
  • kube-proxy: runs on all the nodes to handle pod-to-pod and node-to-node communication.

Kubernetes Deployments: A Deployment is a Kubernetes object that handles running and scaling pods. For a single pod you can spin one up manually, but to run more complex pod structures, to roll out updates to running pods (such as an image change), or to make sure pods stay available even if one is deleted accidentally, you should use a Deployment. Below is a sample Deployment YAML for the nginx image with replicas set to 2: it spins up two nginx pods, and if you delete one manually, a new one is created automatically.

[Image: deployment YAML]
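A sketch of a Deployment like the one described, with replicas set to 2. The name nginx-deployment matches the commands used later; the labels are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2          # two nginx pods; a deleted pod is recreated automatically
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```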

You can list the Deployments with "kubectl get deployments", describe one to get full information about it with "kubectl describe deployment nginx-deployment", and delete a running pod manually to watch another one come up in its place.

Kubernetes Services: Services are also very important for deploying apps. A Service lets you access pods without addressing them directly. Pods keep changing, so if you want stable access to them, you need a Service. It is an abstraction layer: you communicate with the pods through the Service instead of directly. It acts like a load balancer that automatically redirects requests to healthy pods.

[Image: kubectl apply -f service.yml]
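A sketch of what service.yml might contain: a NodePort Service selecting app: nginx and exposing node port 30080, as described in the text. The Service name is an assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service   # assumed name
spec:
  type: NodePort
  selector:
    app: nginx          # must match the Deployment's pod labels
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080     # exposed on every node; reachable at {{serverip}}:30080
```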

Above is a sample Service for the Deployment we created earlier. The selector decides where the Service redirects traffic; here it is app: nginx, the same label used when you created the Deployment. The type here is NodePort, but there are other Service types you can use as well. To give a feel for what a Service does, NodePort is used and port 30080 is exposed. After deploying it, "curl localhost:30080" returns the nginx HTML page response, and {{serverip}}:30080 gives the same result. This is how to use a Service in Kubernetes.

Microservices: A microservices architecture means your application is split into different modules rather than being a single application doing all the work. With a monolithic application, deployments are difficult because it is big and tightly coupled. Splitting the application into modules helps in several ways: each module can be written independently in the language best suited to it, different modules can be scaled independently based on usage, and the code is easier and cleaner to maintain. Since there are many modules, handling them all becomes a big challenge, and this is where Kubernetes comes into the picture: it handles the scaling, deployment, and orchestration of all the modules. Once you reach this point, you can take any open-source microservices application, deploy it, and see how things work.

Sample Microservice Application : https://github.com/saiyam1814/HelidonProject

That is all for Kubernetes Essentials.

Happy Learning!
Saiyam Pathak
https://www.linkedin.com/in/saiyam-pathak-97685a64/
https://twitter.com/SaiyamPathak