
Wednesday, May 23, 2018

Kubernetes - Volumes with EmptyDir and Memory

There will always be cases where we need to save data to a drive or disk. When a pod stops, the data stored inside its containers is destroyed, and node reboots also clear any data from RAM-backed disks. There are also many cases where we need some shared space, or have containers that process data and hand it off to another container before they die.

Kubernetes provides us a way to manage storage for containers. A Volume is a directory accessible to all containers running in the pod, with a guarantee that its data is preserved across container restarts. In this article we will see how we can create volumes and attach them to pods. In a later article we will use NFS as an additional drive.

Depending on where the data is stored, we differentiate the types of volumes as,
  1. Node-local volumes, such as emptyDir or hostPath
  2. Generic networked volumes, such as nfs, glusterfs, or cephfs
  3. Cloud provider–specific volumes, such as awsElasticBlockStore,  azureDisk, or gcePersistentDisk
  4. Special-purpose volumes, such as secret or gitRepo
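
As a point of comparison with the emptyDir examples below, a node-local hostPath volume is declared by pointing at a directory on the node. A minimal sketch (the volume name and the path /var/log are my own illustration, not from this article):

```yaml
volumes:
  - name: host-data
    hostPath:
      path: /var/log    # a directory that already exists on the node
```

Unlike emptyDir, a hostPath volume outlives the pod, but it ties the pod to whatever happens to be on that particular node.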

Empty directory - One of the easiest ways to achieve improved persistence amid container crashes, and to share data within a pod, is to use the emptyDir volume. This volume type can be backed either by the storage of the node machine itself or by an optional RAM disk for higher performance.

An emptyDir volume is created when the pod is assigned to a node and exists as long as the pod runs on that node. As the name says, the volume is initially empty, and the pod's containers can read and write its contents.

Volume with Directory Type -
[root@manja17-I13330 kubenetes-config]# cat emptyDir-mysql.yml
apiVersion: v1
kind: Pod
metadata:
  name: "mysql"
  labels:
   name: "lbl-k8s-mysql"
spec:
  containers:
   - image: docker.io/jagadesh1982/testing-service
     name:  "testing"
     imagePullPolicy: IfNotPresent
     ports:
      - containerPort: 3306
     volumeMounts:
       - mountPath: /data-mount
         name: data
  volumes:
    - name: data
      emptyDir: {}

Log in to the container and try it out,
[root@manja17-I13330 kubenetes-config]# kubectl exec mysql -c testing -it bash
root@mysql:/usr/src/app# cd /data-mount
root@mysql:/data-mount# pwd
/data-mount
root@mysql:/data-mount# touch hello
root@mysql:/data-mount# echo "hello world" >> ./hello
root@mysql:/data-mount# exit
exit
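The same emptyDir volume can be mounted by more than one container in the pod, which is how one container hands data off to another. A hedged sketch of that pattern (the pod name, container names, and busybox image are my own choices, not from this article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-scratch
spec:
  containers:
    # First container writes into the shared volume
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo produced > /scratch/out; sleep 3600"]
      volumeMounts:
        - mountPath: /scratch
          name: scratch
    # Second container sees the same directory and can read /scratch/out
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /scratch
          name: scratch
  volumes:
    - name: scratch
      emptyDir: {}
```

Both containers mount the same `scratch` volume, so a file written by one is immediately visible to the other.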

Volume with Memory Type - By default, emptyDir volumes are stored on whatever medium is backing the node. If that is a disk (such as an SSD), node restarts will still have the data available. There is another backing medium, "Memory", which tells Kubernetes to mount a tmpfs (a RAM-backed file system). While this is fast, any node restart clears the contents.

Lets create a pod whose volume uses the Memory medium,
[root@manja17-I13330 kubenetes-config]# cat emptyDir-memory-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: "mysql"
  labels:
   name: "lbl-k8s-mysql"
spec:
  containers:
   - image: docker.io/jagadesh1982/testing-service
     name:  "testing"
     imagePullPolicy: IfNotPresent
     ports:
      - containerPort: 3306
     volumeMounts:
       - mountPath: /data-mount
         name: data
  volumes:
    - name: data
      emptyDir:
        medium: Memory

[root@manja17-I13330 kubenetes-config]# kubectl exec mysql -c testing -it bash
root@mysql:/# cd data-mount/
root@mysql:/data-mount# pwd
/data-mount
root@mysql:/# exit
exit

In the next article on volumes we will see how to mount an NFS location into a container.

Monday, May 21, 2018

Kubernetes - kubectl Cheat Sheet



kubectl label nodes work-node1 label=node1 : Attach the label "label=node1" to node "work-node1"

kubectl exec testing-service -c test-ser -it -- bash : Connect to the container test-ser in the pod testing-service

kubectl exec testing-service -c test-ser free : Run a command in the container "test-ser" in pod "testing-service"


kubectl create secret docker-registry docker-secret --docker-server=https://index.docker.io/v1/ --docker-email=jagadesh.manchala@gmail.com --docker-username=<User Name> --docker-password=<password> : Create a docker-registry secret for pulling images from Docker Hub

kubectl get componentstatus : Get the kubernetes component status

Get all images pulled and being used: kubectl get pods --all-namespaces -o jsonpath="{..image}" |\
> tr -s '[[:space:]]' '\n' |\
> sort |\
> uniq -c

     1 docker.io/rajpr01/myapp
     1 docker.io/rajpr01/myapp:latest
     3 docker.io/weaveworks/weave-kube:2.3.0
     3 docker.io/weaveworks/weave-npc:2.3.0
     2 k8s.gcr.io/etcd-amd64:3.1.12
     2 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
     2 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
     2 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
     2 k8s.gcr.io/kube-apiserver-amd64:v1.10.2
     2 k8s.gcr.io/kube-controller-manager-amd64:v1.10.2
     6 k8s.gcr.io/kube-proxy-amd64:v1.10.2
     2 k8s.gcr.io/kube-scheduler-amd64:v1.10.2
     3 weaveworks/weave-kube:2.3.0
     3 weaveworks/weave-npc:2.3.0

List Containers by Pod : kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |sort
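The tr/sort/uniq pipeline above works on any whitespace-separated list, not just kubectl output. A quick local demonstration with made-up sample data (the image names here are illustrative, not from a real cluster):

```shell
# Simulate the jsonpath output with a fixed sample string, then apply
# the same pipeline: split on whitespace, sort, and count duplicates
printf 'nginx:1.14 redis:4.0 nginx:1.14\n' |
  tr -s '[[:space:]]' '\n' |
  sort |
  uniq -c
```

The counts on the left show how many pods reference each image, which is exactly what the cheat-sheet one-liner produces against a live cluster.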




Kubernetes - Container Life Cycle

Kubernetes lets us attach life cycle event handlers to the containers we create, so that they execute at specific points in the container's life.

There are 2 events, called postStart and preStop. Kubernetes sends the postStart event immediately after the container is created. The postStart handler runs asynchronously relative to the container's code, but Kubernetes' management of the container blocks until the postStart handler completes. The container's status is not set to Running until the postStart handler completes.

Kubernetes sends the preStop event immediately before the container is terminated. The container termination is blocked until the preStop handler completes.

Lets create a pod manifest with postStart and preStop events as,
[root@manja17-I13330 kubenetes-config]# cat lifecycle-pod.yml
apiVersion: v1
kind: Pod
metadata:
 name: testing-service
spec:
 containers:
   - name: test-ser
     image: docker.io/jagadesh1982/testing-service
     ports:
     - containerPort: 9876
     lifecycle:
         postStart:
           exec:
             command: ["/bin/sh", "-c", "touch /tmp/hello"]
         preStop:
           exec:
             command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /tmp/hello"]

In the above manifest file, we define 2 lifecycle handlers: postStart creates a file hello in /tmp, and preStop is kicked off when the container is about to terminate.

The life cycle events can use handlers similar to the health checks that we discussed earlier, such as exec and httpGet.
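For example, the postStart handler in the manifest above could use httpGet instead of exec. A sketch against the same container (the /startup path is hypothetical, chosen only to illustrate the shape):

```yaml
lifecycle:
  postStart:
    httpGet:
      path: /startup    # hypothetical endpoint the app would expose
      port: 9876
```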

Now lets create the pod using the "kubectl" command and log in to it,
[root@manja17-I13330 kubenetes-config]# kubectl exec  testing-service -it -- bash
root@testing-service:/usr/src/app# ls /tmp/
hello
root@testing-service:/usr/src/app# exit
exit

We can see that the file is created. This file is created as a part of the postStart event, which runs immediately after the container is created.

More to come , Happy learning :-)

Kubernetes - Init Containers

In kubernetes, a Pod can have multiple containers running apps. Besides these application containers, we can also have init Containers, which run before the app containers are started.

So lets take a use case: we want an application container running in a Pod which has a Volume attached. Our requirement is that before the application container starts, we need to have some files in our volume, so that once the application container is up and running the data is ready. This is the kind of use case where we can use init Containers.
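That use case can be sketched as an init container that writes into a shared emptyDir volume before the app container starts. A hedged sketch (the pod name, the seed file, and the mount paths are illustrative, not from this article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-seed-data
spec:
  initContainers:
    # Runs to completion before the app container starts
    - name: seed
      image: centos:7
      command: ["/bin/sh", "-c", "echo 'seed data' > /work/seed.txt"]
      volumeMounts:
        - mountPath: /work
          name: workdir
  containers:
    # Sees /data/seed.txt already in place when it starts
    - name: app
      image: docker.io/jagadesh1982/testing-service
      volumeMounts:
        - mountPath: /data
          name: workdir
  volumes:
    - name: workdir
      emptyDir: {}
```

The init container and the app container mount the same volume, so whatever the init container writes is already present when the application starts.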


Create a pod manifest file using,

[root@manja17-I13330 kubenetes-config]# cat initContainer.yml
apiVersion: v1
kind: Pod
metadata:
 name: testing-service
spec:
 containers:
   - name: test-ser
     image: docker.io/jagadesh1982/testing-service
     ports:
     - containerPort: 9876
 initContainers:
   - name:         counter
     image:        centos:7
     imagePullPolicy: IfNotPresent
     command:
         - "bin/bash"
         - "-c"
         - "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done"

Once we create the pod using the kubectl command, we can describe the pod using,
[root@manja17-I13330 kubenetes-config]# kubectl describe pod testing-service
Name:         testing-service
Namespace:    default
Node:         manja17-i14021/10.131.36.181
Start Time:   Mon, 21 May 2018 03:51:01 -0400
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.38.0.2
Init Containers:
 counter:
   Container ID:  docker://9b70db7b56681d380002666e69485b375ca707eca728675d27a3cf2bbc892226
   Image:         centos:7
   Image ID:      docker-pullable://docker.io/centos@sha256:989b936d56b1ace20ddf855a301741e52abca38286382cba7f44443210e96d16
   Port:          <none>
   Host Port:     <none>
   Command:
     bin/bash
     -c
     for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done
   State:          Terminated
     Reason:       Completed
     Exit Code:    0
     Started:      Mon, 21 May 2018 03:51:03 -0400
     Finished:     Mon, 21 May 2018 03:51:03 -0400

   Ready:          True
   Restart Count:  0
   Environment:    <none>
   Mounts:
     /var/run/secrets/kubernetes.io/serviceaccount from default-token-fx8mm (ro)
Containers:
 test-ser:
   Container ID:   docker://245fcadefece115124d225a2400f4d63da93ee2d59a6665c1aa3b03d55902775
   Image:          docker.io/jagadesh1982/testing-service
   Image ID:       docker-pullable://docker.io/jagadesh1982/testing-service@sha256:fa0894c592b7891c177a22bc61eb38ca23724aa9ff9b8ea3b713b32586a75c3d
   Port:           9876/TCP
   Host Port:      0/TCP
   State:          Running
     Started:      Mon, 21 May 2018 03:51:07 -0400
   Ready:          True
   Restart Count:  0
   Environment:    <none>
   Mounts:
     /var/run/secrets/kubernetes.io/serviceaccount from default-token-fx8mm (ro)
Conditions:
 Type           Status
 Initialized    True
 Ready          True
 PodScheduled   True
Volumes:
 default-token-fx8mm:
   Type:        Secret (a volume populated by a Secret)
   SecretName:  default-token-fx8mm
   Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:
 Type    Reason               Age From        Message
 ----    ------               ---- ----        -------
 Normal  SuccessfulMountVolume  59s kubelet, manja17-i14021  MountVolume.SetUp succeeded for volume "default-token-fx8mm"
 Normal  Pulled               58s kubelet, manja17-i14021  Container image "centos:7" already present on machine
 Normal  Created               58s kubelet, manja17-i14021  Created container
 Normal  Started               57s kubelet, manja17-i14021  Started container
 Normal  Pulling               57s kubelet, manja17-i14021  pulling image "docker.io/jagadesh1982/testing-service"
 Normal  Pulled               54s kubelet, manja17-i14021  Successfully pulled image "docker.io/jagadesh1982/testing-service"
 Normal  Created               53s kubelet, manja17-i14021  Created container
 Normal  Started               53s kubelet, manja17-i14021  Started container
 Normal  Scheduled              30s default-scheduler        Successfully assigned testing-service to manja17-i14021

This is how we use an init Container.

Sunday, May 20, 2018

Kubernetes - Config Map

While working with applications running inside containers, there will always be a need for configuration files. Applications need to know which backends to connect to, and such configuration values are usually supplied as configuration files. When we have our application in a container, the dependencies and configuration also need to be added as part of the image.

Kubernetes provides us a different way of passing configuration, by using ConfigMaps. A ConfigMap is one of the two ways to provide configuration to your application. A ConfigMap is a simple key/value store, which can hold anything from simple values to entire files.

ConfigMaps are a way to decouple the application-specific artifacts from the container image, thereby enabling better portability and externalization. Lets create a ConfigMap with the configuration data,
[root@manja17-I13330 kubenetes-config]# cat configMap.yml
apiVersion: v1
kind: ConfigMap
metadata:
 name: testing-service-staging
 labels:
   name: testing-service-staging
data:
 config: |-
   ---
   :verbose: true
   :environment: test
   :logfile: log/sidekiq.log
   :concurrency: 20
   :queues:
     - [default, 1]
   :dynamic: true

   :timeout: 300

The ConfigMap above includes the configuration for my application. Create it,
[root@manja17-I13330 kubenetes-config]# kubectl create -f configMap.yml
configmap "testing-service-staging" created

[root@manja17-I13330 kubenetes-config]# kubectl get configmaps
NAME                      DATA      AGE
testing-service-staging   1         4s

Now create a pod with the config map as,
[root@manja17-I13330 kubenetes-config]# cat configmap-file-mount.yml
apiVersion: v1
kind: Pod
metadata:
 name: testing-service
spec:
 containers:
 - name: testing
   image: docker.io/jagadesh1982/testing-service
   volumeMounts:
     - mountPath: /etc/sidekiq/config
       name: testing-service-staging
 volumes:
   - name: testing-service-staging
     configMap:
        name: testing-service-staging
        items:
         - key: config
           path: testing-stage.yml

We can see that the ConfigMap is mounted as a volume. In the above pod we reference the ConfigMap "testing-service-staging" that we created earlier and mount it at /etc/sidekiq/config. The items/path mapping exposes the "config" key as a file named testing-stage.yml, containing the config data from the ConfigMap.

Now access the configuration file that we mounted,
[root@manja17-I13330 kubenetes-config]# kubectl exec testing-service -c testing -it bash
root@testing-service:/usr/src/app# cd /etc/sidekiq/
root@testing-service:/etc/sidekiq# ls
config
root@testing-service:/etc/sidekiq# cat config/
cat: config/: Is a directory
root@testing-service:/etc/sidekiq# cd config/
root@testing-service:/etc/sidekiq/config# ls
testing-stage.yml
root@testing-service:/etc/sidekiq/config# cat testing-stage.yml
---
:verbose: true
:environment: test
:logfile: log/sidekiq.log
:concurrency: 20
:queues:
 - [default, 1]
:dynamic: true
:timeout: 300root@testing-service:/etc/sidekiq/config# exit
exit

We can see that when we read the mounted file testing-stage.yml, it has the same config data as the ConfigMap we created. More to Come, Happy Learning :-)
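Besides file mounts, a ConfigMap key can also be injected as an environment variable. A sketch against the same ConfigMap (the variable name CONFIG_DATA is hypothetical; for a multi-line key like ours, a file mount is usually the better fit):

```yaml
env:
  - name: CONFIG_DATA          # hypothetical variable name
    valueFrom:
      configMapKeyRef:
        name: testing-service-staging
        key: config            # the key from the ConfigMap above
```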

