
Sunday, May 13, 2018

Kubernetes - DaemonSets

Kubernetes has the concept of ReplicaSets, which allow you to scale pods based on your
requirements. Those pods can be created on any node, without any special conditions. But what
if we need to run a pod on every node that satisfies a special condition, or run a pod on every
node to perform a specific job, such as log collectors and monitoring agents?

DaemonSets in Kubernetes do exactly that job. For example, we can create a DaemonSet to tell
Kubernetes that any time a node carrying the label app=frontend-node joins the cluster, an nginx
pod should run on it automatically.

DaemonSets share similar functionality with ReplicaSets; both create pods that are expected to
be long-running services, and both ensure that the desired state and the observed state of the
cluster match. So what is the difference between a ReplicaSet and a DaemonSet?

A ReplicaSet can be used when we have an application that is completely decoupled from the
node and can run multiple copies on any given node without special conditions. A DaemonSet,
on the other hand, can be used when we want our application pod to run on a subset of nodes in
the cluster, selected by a condition such as the label app=frontend-node.
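
For contrast, here is a minimal ReplicaSet sketch (the name frontend-rs and the replica count
are purely illustrative, and it assumes the apps/v1 API is available in your cluster); it asks for
a fixed number of copies and lets the scheduler place them on any node:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs          # illustrative name, not part of the walkthrough below
spec:
  replicas: 3                # run three copies on whichever nodes the scheduler picks
  selector:
    matchLabels:
      app: frontend-webserver
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      containers:
        - name: webserver
          image: nginx
          ports:
            - containerPort: 80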

Create a DaemonSet
[root@manja17-I13330 kubenetes-config]# cat basic-daemonset.yml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: frontend
spec:
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
        - name: webserver
          image: nginx
          ports:
            - containerPort: 80

In the above YAML file we are creating a DaemonSet named “frontend”. The spec has two main
parts: the node selector and the container spec. The container spec is already familiar; the node
selector tells Kubernetes which nodes satisfy the condition and should run the containers. In
other words, we are using the label “app=frontend-node” to target certain nodes in the cluster.
Once a node is labeled with frontend-node, the pod with the webserver container will
automatically run on it.
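
Note that the extensions/v1beta1 API group used above was current when this was written; on
newer Kubernetes versions the same DaemonSet would be written against apps/v1, which also
requires an explicit selector matching the template labels. A rough equivalent sketch:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend-webserver    # must match the pod template labels
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node       # only nodes with this label run the pod
      containers:
        - name: webserver
          image: nginx
          ports:
            - containerPort: 80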

First, label the nodes with “app=frontend-node”:
kubectl label node work-node1 app=frontend-node
kubectl label node work-node2 app=frontend-node

In the above commands I am labeling my machines work-node1 and work-node2 with the
label “app=frontend-node”. Once that is done, check the labels using
“kubectl get nodes --show-labels”.
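
If the full --show-labels output is too noisy, kubectl can also filter by label directly; only the
nodes carrying the label should be listed:

kubectl get nodes -l app=frontend-node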


Create the DaemonSet using the above YAML file
[root@manja17-I13330 kubenetes-config]# kubectl create -f basic-daemonset.yml
daemonset.extensions "frontend" created
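
To confirm the DaemonSet picked up the node selector and started scheduling pods, describe it;
the output (omitted here) shows the selector, the pod status counts, and the scheduling events:

kubectl describe ds frontend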

Check the DaemonSets
[root@manja17-I13330 kubenetes-config]# kubectl get ds
NAME       DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR       AGE
frontend   2         2         0         2            0           app=frontend-node   4s

Now when we list the pods, we see two frontend pods automatically being created:
[root@manja17-I13330 kubenetes-config]# kubectl get pods
NAME             READY     STATUS              RESTARTS   AGE
frontend-7sc22   0/1       ContainerCreating   0          7s
frontend-v6xvq   0/1       ContainerCreating   0          7s
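
Adding -o wide to the pod listing shows which node each pod landed on; there should be exactly
one frontend pod per labeled node:

kubectl get pods -o wide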

Since we have two nodes matching the condition “app=frontend-node”, we see two pods being
created. Let's change the label on one of the nodes and see what happens:
[root@manja17-I13330 kubenetes-config]# kubectl label node work-node1 --overwrite app=backend
node "work-node1" labeled

[root@manja17-I13330 kubenetes-config]# kubectl get pods
NAME             READY     STATUS    RESTARTS   AGE
frontend-7sc22   1/1       Running   0          18m

We can see that one of the pods was automatically deleted.
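
The DaemonSet controller removed the pod from work-node1 because that node no longer
matches the nodeSelector. Re-applying the label brings the pod back, and deleting the DaemonSet
removes the remaining pods; for example:

kubectl label node work-node1 --overwrite app=frontend-node
kubectl delete ds frontend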
