Wednesday, May 23, 2018

Kubernetes - Volumes

Volumes in Kubernetes are a directory accessible to all containers running in a Pod. A volume also provides the additional guarantee that data is preserved across container restarts. A volume can be backed by AWS, NFS, Azure or even a location on the host. Below are the different types of volumes,
   Node local volumes such as emptyDir or hostPath
   Network volumes such as nfs, glusterfs or cephfs
   Cloud volumes such as AWS Elastic Block Store, Azure Disk and GCE Persistent Disk
   Special volumes such as secret or gitRepo

In this article we will see how we can use the node local volumes such as emptyDir and hostPath.

emptyDir - A volume type that provides an empty directory that containers in the pod can read from and write to. When the pod is removed from the node, the data in the emptyDir is deleted forever. As the name says, it starts out empty. This volume type can be backed by the storage of the node machine itself or, optionally, by a RAM disk for higher performance. Since the data is lost on machine reboots, emptyDir is useful when we need some temporary space, or when one container processes data in the emptyDir and hands it off to another container. To create an emptyDir volume, define a pod manifest as,
[root@manja17-I13330 kubenetes-config]# cat emptyDir-mysql.yml
apiVersion: v1
kind: Pod
metadata:
 name: "mysql"
 labels:
  name: "lbl-k8s-mysql"
spec:
 containers:
  - image: docker.io/jagadesh1982/testing-service
    name:  "testing"
    imagePullPolicy: IfNotPresent
    ports:
     - containerPort: 3306
    volumeMounts:
      - mountPath: /data-mount
        name: data
 volumes:
   - name: data
     emptyDir: {}

Now when we log in to the container, we can check the mount point,
[root@manja17-I13330 kubenetes-config]# kubectl exec mysql -c testing -it bash
root@mysql:/usr/src/app# cd /data-mount
root@mysql:/data-mount# pwd
/data-mount
root@mysql:/data-mount# touch hello
root@mysql:/data-mount# echo "hello world" >> ./hello
root@mysql:/data-mount# exit
exit

We can see that a mount point /data-mount is available inside the container.
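As mentioned above, a common use for emptyDir is one container processing data and handing it off to another container in the same pod. A minimal sketch of that pattern (the pod, container and file names here are illustrative, not from the setup above) could be,

```yaml
apiVersion: v1
kind: Pod
metadata:
 name: shared-data-pod
spec:
 containers:
   - name: writer
     image: centos:7
     # writes a result into the shared volume, then stays alive
     command: ["/bin/sh", "-c", "echo processed > /data-mount/out.txt; sleep 3600"]
     volumeMounts:
       - mountPath: /data-mount
         name: data
   - name: reader
     image: centos:7
     # the same emptyDir is visible here, so /data-mount/out.txt can be read
     command: ["/bin/sh", "-c", "sleep 3600"]
     volumeMounts:
       - mountPath: /data-mount
         name: data
 volumes:
   - name: data
     emptyDir: {}
```

Both containers mount the same volume name, so files written by one are immediately visible to the other.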

emptyDir with memory - Let's create the same emptyDir as above, but with the medium set to Memory,
[root@manja17-I13330 kubenetes-config]# cat emptyDir-memory-pod.yml
apiVersion: v1
kind: Pod
metadata:
 name: "mysql"
 labels:
  name: "lbl-k8s-mysql"
spec:
 containers:
  - image: docker.io/jagadesh1982/testing-service
    name:  "testing"
    imagePullPolicy: IfNotPresent
    ports:
     - containerPort: 3306
    volumeMounts:
      - mountPath: /data-mount
        name: data
 volumes:
   - name: data
     emptyDir:
       medium: Memory

There is a new element here, medium, which says this volume is backed by memory (a tmpfs). Start the container and see what is present,
[root@manja17-I13330 kubenetes-config]# kubectl exec mysql -c testing -it bash
root@mysql:/# ls
bin   data-mount  etc lib media  opt root sbin sys                 tmp var
boot  dev      home lib64 mnt    proc run srv testing_service.py  usr
root@mysql:/# cd data-mount/
root@mysql:/data-mount# pwd
/data-mount
root@mysql:/data-mount# cd ..
root@mysql:/# exit
exit
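One caveat with medium: Memory is that the tmpfs consumes the node's RAM, so it is a good idea to cap its size. On clusters that support the emptyDir sizeLimit field, the volume definition could be extended like this (the 64Mi value is just an example),

```yaml
 volumes:
   - name: data
     emptyDir:
       medium: Memory
       # cap the tmpfs so a runaway writer cannot exhaust node memory
       sizeLimit: 64Mi
```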

hostPath - A hostPath volume mounts a file or directory from the host node's file system into your pod. To attach a node local directory to the pod we can use,
[root@manja17-I13330 kubenetes-config]# cat hostpath-mount.yml
apiVersion: v1
kind: Pod
metadata:
 name: testing-service
spec:
 containers:
  - image: docker.io/jagadesh1982/testing-service
    name:  testing
    imagePullPolicy: IfNotPresent
    ports:
     - containerPort: 80
    volumeMounts:
      - mountPath: /usr/share/nginx/html
        name: data
 volumes:
   - name: data
     hostPath:
       path: /tmp

In the above we are attaching the node's /tmp location to the pod's /usr/share/nginx/html location. So whatever files are available in the /tmp location of the node will be seen in the /usr/share/nginx/html directory inside the pod.
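hostPath also takes an optional type field that tells the kubelet to validate (or create) the path before mounting. A sketch, assuming a Kubernetes version that supports hostPath.type and using an illustrative path,

```yaml
 volumes:
   - name: data
     hostPath:
       path: /tmp/app-data
       # create the directory on the node if it does not exist yet
       type: DirectoryOrCreate
```

Other accepted values include Directory, File and FileOrCreate, which make the pod fail to start if the path is not of the expected kind.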

More to Come, Happy learning :-)


Monday, May 21, 2018

Kubernetes - Container Life Cycle

Kubernetes lets us attach life cycle events to the containers it creates. We can attach handlers to these events so that they execute along with the containers.

There are 2 events, called postStart and preStop. Kubernetes sends the postStart event immediately after the container is created. The postStart handler runs asynchronously relative to the Container's code, but Kubernetes' management of the container blocks until the postStart handler completes. The Container's status is not set to RUNNING until the postStart handler completes.

Kubernetes sends the preStop event immediately before the container is terminated. The container's termination is blocked until the preStop handler completes.

Let's create a pod manifest with postStart and preStop events as,
[root@manja17-I13330 kubenetes-config]# cat lifecycle-pod.yml
apiVersion: v1
kind: Pod
metadata:
 name: testing-service
spec:
 containers:
   - name: test-ser
     image: docker.io/jagadesh1982/testing-service
     ports:
     - containerPort: 9876
     lifecycle:
         postStart:
           exec:
             command: ["/bin/sh", "-c", "touch /tmp/hello"]
         preStop:
           exec:
              command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /tmp/hello"]

In the above manifest file, we have 2 lifecycle events defined: the postStart handler creates a file hello in /tmp, and the preStop event is kicked off when the container is about to terminate.

The life cycle events can use the same handler types as the health checks we discussed earlier, including httpGet, tcpSocket and exec.
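For instance, a preStop handler using httpGet instead of exec might look like the fragment below. The path and port here are illustrative and would need to match an endpoint the application actually serves,

```yaml
     lifecycle:
         preStop:
           # Kubernetes performs an HTTP GET against the container itself
           # and blocks termination until the request completes
           httpGet:
             path: /shutdown
             port: 9876
```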

Now lets create the pod using "kubectl" command and when we login to the pod using,
[root@manja17-I13330 kubenetes-config]# kubectl exec  testing-service -it -- bash
root@testing-service:/usr/src/app# ls /tmp/
hello
root@testing-service:/usr/src/app# exit
exit

We can see that the file is created. This file was created as a part of the postStart event, which runs immediately after the container is created.

More to come, Happy learning :-)

Kubernetes - Init Containers

In Kubernetes, a Pod can have multiple containers running apps. Besides these application containers, we can also have init containers, which run before the app containers are started.

So let's take a use case: we want an application container running in a Pod which has a volume attached. Our requirement is that before the application container starts, some files must already be present in the volume, so that once the application container is up and running the data is ready. This is the use case where we can use init containers.

Create a pod manifest file using,
[root@manja17-I13330 kubenetes-config]# cat initContainer.yml
apiVersion: v1
kind: Pod
metadata:
 name: testing-service
spec:
 containers:
   - name: test-ser
     image: docker.io/jagadesh1982/testing-service
     ports:
     - containerPort: 9876
 initContainers:
   - name:         counter
     image:        centos:7
     imagePullPolicy: IfNotPresent
     command:
         - "bin/bash"
         - "-c"
         - "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done"

Once we create the pod using the kubectl command, we can describe the pod using,
[root@manja17-I13330 kubenetes-config]# kubectl describe pod testing-service
Name:         testing-service
Namespace:    default
Node:         manja17-i14021/10.131.36.181
Start Time:   Mon, 21 May 2018 03:51:01 -0400
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.38.0.2
Init Containers:
 counter:
   Container ID:  docker://9b70db7b56681d380002666e69485b375ca707eca728675d27a3cf2bbc892226
   Image:         centos:7
   Image ID:      docker-pullable://docker.io/centos@sha256:989b936d56b1ace20ddf855a301741e52abca38286382cba7f44443210e96d16
   Port:          <none>
   Host Port:     <none>
   Command:
     bin/bash
     -c
     for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done
   State:          Terminated
     Reason:       Completed
     Exit Code:    0
     Started:      Mon, 21 May 2018 03:51:03 -0400
     Finished:     Mon, 21 May 2018 03:51:03 -0400

   Ready:          True
   Restart Count:  0
   Environment:    <none>
   Mounts:
     /var/run/secrets/kubernetes.io/serviceaccount from default-token-fx8mm (ro)
Containers:
 test-ser:
   Container ID:   docker://245fcadefece115124d225a2400f4d63da93ee2d59a6665c1aa3b03d55902775
   Image:          docker.io/jagadesh1982/testing-service
   Image ID:       docker-pullable://docker.io/jagadesh1982/testing-service@sha256:fa0894c592b7891c177a22bc61eb38ca23724aa9ff9b8ea3b713b32586a75c3d
   Port:           9876/TCP
   Host Port:      0/TCP
   State:          Running
     Started:      Mon, 21 May 2018 03:51:07 -0400
   Ready:          True
   Restart Count:  0
   Environment:    <none>
   Mounts:
     /var/run/secrets/kubernetes.io/serviceaccount from default-token-fx8mm (ro)
Conditions:
 Type           Status
 Initialized    True
 Ready          True
 PodScheduled   True
Volumes:
 default-token-fx8mm:
   Type:        Secret (a volume populated by a Secret)
   SecretName:  default-token-fx8mm
   Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:
 Type    Reason               Age From        Message
 ----    ------               ---- ----        -------
 Normal  SuccessfulMountVolume  59s kubelet, manja17-i14021  MountVolume.SetUp succeeded for volume "default-token-fx8mm"
 Normal  Pulled               58s kubelet, manja17-i14021  Container image "centos:7" already present on machine
 Normal  Created               58s kubelet, manja17-i14021  Created container
 Normal  Started               57s kubelet, manja17-i14021  Started container
 Normal  Pulling               57s kubelet, manja17-i14021  pulling image "docker.io/jagadesh1982/testing-service"
 Normal  Pulled               54s kubelet, manja17-i14021  Successfully pulled image "docker.io/jagadesh1982/testing-service"
 Normal  Created               53s kubelet, manja17-i14021  Created container
 Normal  Started               53s kubelet, manja17-i14021  Started container
 Normal  Scheduled              30s default-scheduler        Successfully assigned testing-service to manja17-i14021

This is how we will be using an init container.
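Tying this back to the volume use case described earlier, an init container can pre-populate a shared emptyDir before the app container starts. A sketch of that pattern (the pod name, init image, and file name are illustrative),

```yaml
apiVersion: v1
kind: Pod
metadata:
 name: init-volume-demo
spec:
 containers:
   - name: app
     image: docker.io/jagadesh1982/testing-service
     volumeMounts:
       - mountPath: /data
         name: workdir
 initContainers:
   - name: seed-data
     image: centos:7
     # runs to completion before the app container starts,
     # leaving seed.txt in the shared volume
     command: ["/bin/sh", "-c", "echo seed > /data/seed.txt"]
     volumeMounts:
       - mountPath: /data
         name: workdir
 volumes:
   - name: workdir
     emptyDir: {}
```

Because both containers mount the same emptyDir, the app container finds /data/seed.txt already in place when it starts.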

Sunday, May 20, 2018

Kubernetes - Config Map

While working with applications running inside containers, there will always be a need for config files. Applications need to connect to backends, and these and other config values are usually supplied as configuration files. When we have our application in a container, the dependencies and configurations also need to be added as a part of the image.

Kubernetes provides us a different way of passing configuration, by using ConfigMaps. A ConfigMap is one of the two ways to provide configuration to your application (the other being Secrets). A ConfigMap is a simple key/value store, which can hold anything from simple values to entire files.

A ConfigMap is a way to decouple application-specific artifacts from the container image, thereby enabling better portability and externalization. Let's create a ConfigMap with the configuration data,
[root@manja17-I13330 kubenetes-config]# cat configMap.yml
apiVersion: v1
kind: ConfigMap
metadata:
 name: testing-service-staging
 labels:
   name: testing-service-staging
data:
 config: |-
   ---
   :verbose: true
   :environment: test
   :logfile: log/sidekiq.log
   :concurrency: 20
   :queues:
     - [default, 1]
   :dynamic: true

   :timeout: 300

I created a ConfigMap which includes my configuration for the application.
[root@manja17-I13330 kubenetes-config]# kubectl create -f configMap.yml
configmap "testing-service-staging" created

[root@manja17-I13330 kubenetes-config]# kubectl get configmaps
NAME                      DATA      AGE
testing-service-staging   1         4s

Now create a pod with the config map as,
[root@manja17-I13330 kubenetes-config]# cat configmap-file-mount.yml
apiVersion: v1
kind: Pod
metadata:
 name: testing-service
spec:
 containers:
 - name: testing
   image: docker.io/jagadesh1982/testing-service
   volumeMounts:
     - mountPath: /etc/sidekiq/config
       name: testing-service-staging
 volumes:
   - name: testing-service-staging
     configMap:
        name: testing-service-staging
        items:
         - key: config
           path: testing-stage.yml

We can see that the ConfigMap is mounted as a volume. In the above case we referenced the ConfigMap "testing-service-staging" that we created earlier and mounted it at /etc/sidekiq/config. The key config is projected as the file testing-stage.yml, containing the config data from the ConfigMap.

Now access the configuration file that we mounted,
[root@manja17-I13330 kubenetes-config]# kubectl exec testing-service -c testing -it bash
root@testing-service:/usr/src/app# cd /etc/sidekiq/
root@testing-service:/etc/sidekiq# ls
config
root@testing-service:/etc/sidekiq# cat config/
cat: config/: Is a directory
root@testing-service:/etc/sidekiq# cd config/
root@testing-service:/etc/sidekiq/config# ls
testing-stage.yml
root@testing-service:/etc/sidekiq/config# cat testing-stage.yml
---
:verbose: true
:environment: test
:logfile: log/sidekiq.log
:concurrency: 20
:queues:
 - [default, 1]
:dynamic: true
:timeout: 300root@testing-service:/etc/sidekiq/config# exit
exit

We can see that when we access the config file testing-stage.yml, it has the same config data as the ConfigMap we created. More to Come, Happy Learning :-)
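As an aside, a ConfigMap key can also be injected as an environment variable instead of a file. A sketch against the same ConfigMap (the variable name APP_CONFIG is illustrative),

```yaml
 containers:
 - name: testing
   image: docker.io/jagadesh1982/testing-service
   env:
     - name: APP_CONFIG
       valueFrom:
         configMapKeyRef:
           # the ConfigMap and key created earlier in this article
           name: testing-service-staging
           key: config
```

With this approach the container reads the value from its environment rather than from a mounted file; note that env vars are captured at container start and do not update if the ConfigMap changes.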

