
Monday, July 30, 2018

RunC - Container Tool

RunC
runc is a command-line tool for spawning and running containers according to the OCI specification. It is the container format and runtime that Docker donated to the OCI.

What is the OCI?
The Open Container Initiative (OCI) defines specifications for building tools that build, transport and prepare container images to run.

The OCI consists of two specifications:
Runtime Specification (runtime-spec) defines how to run a filesystem bundle that is available on disk. Generally, an OCI implementation first downloads an OCI image and unpacks it into an OCI runtime filesystem bundle. At that point the bundle can be run by an OCI runtime.
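For reference, a runtime bundle is just a directory holding the runtime configuration next to the unpacked root filesystem, roughly like this:

test-container/
    config.json    <- runtime configuration (created by the runc spec command)
    rootfs/        <- the unpacked image filesystem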

Image Specification (image-spec) defines how to create an OCI image. The image is created by a build system, which produces an image manifest, a filesystem and an image configuration. The manifest describes the contents of the filesystem and the dependencies of the image, such as links to other filesystems, which together make up the final image.

The image configuration holds the application arguments, environment variables and so on. All of these combine to form an OCI image.
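As a quick illustration (assuming Docker is installed), we can peek at these pieces of an existing image:

docker pull busybox
docker inspect --format '{{json .Config.Env}}' busybox     # image configuration: env variables
docker inspect --format '{{json .RootFS.Layers}}' busybox  # filesystem layer digests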

How to use runc to run containers?
1. Download the runc binary for your platform from the GitHub releases page using,
wget https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64
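The downloaded file is not executable by default, so mark it executable before using it:
chmod +x runc.amd64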

2. Create a directory structure
mkdir runC
cd runC
mkdir test-container
cd test-container

3. Download a busybox Docker container image and export it into a rootfs directory,
mkdir rootfs
docker export $(docker create busybox) | tar -C rootfs -xvf -

We will now see a directory named rootfs with multiple files and directories inside it.

4. Run the runc spec command using the downloaded binary,
[root@manja17-I13330 test-container]# /root/runc/runc.amd64 spec
[root@manja17-I13330 test-container]# ll
total 4
-rw-r--r--  1 root root 2614 Jul 26 07:27 config.json
drwxr-xr-x 12 root root  137 Jul 26 07:09 rootfs

A spec file named config.json is created. Check the file to see the configuration details for the image.

[root@manja17-I13330 test-container]# cat config.json
{
        "ociVersion": "1.0.0",
        "process": {
                "terminal": true,
                "user": {
                        "uid": 0,
                        "gid": 0
                },
                "args": [
                        "sh"
                ],
                "env": [
                        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                        "TERM=xterm"
                ],
                "cwd": "/",
                "capabilities": {
                        "bounding": [
                                "CAP_AUDIT_WRITE",
                                "CAP_KILL",
                                "CAP_NET_BIND_SERVICE"
                        ],
                        "effective": [
                                "CAP_AUDIT_WRITE",
                                "CAP_KILL",
                                "CAP_NET_BIND_SERVICE"
                        ],
                        ... (output truncated)
If we check config.json, we can see what this container does and how it will run. Run the container using the runc command as,
[root@manja17-I13330 test-container]# /root/runc/runc.amd64 run container1
/ # ps ux
PID   USER     TIME  COMMAND
    1 root      0:00 sh
    6 root      0:00 ps ux
/ # exit

Run the container in the background using,
[root@manja17-I13330 test-container]# /root/runc/runc.amd64 run container1 &
[root@manja17-I13330 test-container]# /root/runc/runc.amd64 list
We will see the running containers listed.

All the commands that we run are based on the container ID. Let's run some more commands,
[root@manja17-I13330 runc]# ./runc.amd64 ps container1
UID        PID   PPID   C STIME TTY          TIME    CMD
root     21033 21025  0 00:20 ?         00:00:00 sh

[root@manja17-I13330 runc]#
./runc.amd64 exec container1 free
             total       used       free     shared    buffers     cached
Mem:       8175444    8013024     162420          0       2776    5760124
-/+ buffers/cache:    2250124    5925320
Swap:            0          0          0
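When we are done with the container, it can be stopped and its state removed; a quick sketch using the same binary:

./runc.amd64 kill container1 KILL     # send SIGKILL to the container's init process
./runc.amd64 delete container1        # remove the container's saved state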

More to Come, Happy Learning :-)



Friday, July 20, 2018

Kubernetes Volumes

There will always be cases where we need to save data onto a drive or disk. When a container is stopped, the data stored inside it is destroyed. Kubernetes provides a way to manage storage for containers: it uses volumes to add additional storage to pods. In this article we will see how to create volumes and attach them to pods. For the demo we will use NFS as our additional drive.

The important elements for running stateful containers are,
PersistentVolume - a low-level representation of a storage volume
PersistentVolumeClaim - a binding between a Pod and a PersistentVolume
Volume Driver - the code used to communicate with the backend storage
StorageClass - allows dynamic provisioning of PersistentVolumes

I used this link for creating the NFS share and accessing it from the worker nodes where our pods will run.

PersistentVolume - let's create a persistent volume for our Pod.
[root@manja17-I13330 kubenetes-config]# cat nfs-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
 name: test-nfs
spec:
 capacity:
   storage: 1Mi
 accessModes:
   - ReadWriteMany
 nfs:
   server: 10.131.175.138
   path: "/nfsshare"

In the above config, we are using an NFS volume from the IP address “10.131.175.138”, where an NFS share was already created under /nfsshare. Run this and see how it works,

[root@manja17-I13330 kubenetes-config]# kubectl create -f nfs-pv.yml
persistentvolume "test-nfs" created
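A quick check of the new volume (the STATUS column should show Available until a claim binds to it):

kubectl get pv test-nfs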

If we check the mount command on our worker nodes, we see the output below,

[root@manja17-I14021 nfsshare]# mount | grep nfsshare
10.131.175.138:/nfsshare on /mnt/nfsshare type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,
hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.131.36.181,
local_lock=none,addr=10.131.175.138)

We can see that /nfsshare from 10.131.175.138 is mounted onto /mnt/nfsshare on the worker nodes. In the above case we are creating a persistent volume of size 1Mi.

PersistentVolumeClaim - now create a claim that requests that volume.
[root@manja17-I13330 kubenetes-config]# cat nfs-pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: test-nfs
spec:
 accessModes:
   - ReadWriteMany
 resources:
   requests:
     storage: 1Mi

The PersistentVolumeClaim works in a different way. In the above configuration we are asking Kubernetes to provide us a volume that has 1Mi of storage. We are not referring to the persistent volume directly; rather, we create a PersistentVolumeClaim and give Kubernetes the job of identifying a volume with the size we requested. If it finds a PV that satisfies the claim, it binds the claim to that volume and makes it available to the pod.
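Create the claim and confirm that it binds to the PV we created earlier (the STATUS should change to Bound):

kubectl create -f nfs-pvc.yml
kubectl get pvc test-nfs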

Sample Pod Configuration -
[root@manja17-I13330 kubenetes-config]# cat nfs-pod.yml
apiVersion: v1
kind: ReplicationController
metadata:
 name: nfs-web
spec:
 replicas: 1
 selector:
   role: web-frontend
 template:
   metadata:
     labels:
       role: web-frontend
   spec:
     containers:
     - name: web
       image: nginx
       ports:
         - name: web
           containerPort: 80
       volumeMounts:
           # name must match the volume name below
           - name: test-nfs
             mountPath: "/usr/share/nginx/html"
     volumes:
     - name: test-nfs
       persistentVolumeClaim:
         claimName: test-nfs

In the above pod configuration, the main elements are volumes and volumeMounts. The volumes element references the PersistentVolumeClaim we created earlier. The volumeMounts element says that, if a volume satisfying our claim is available, it should be mounted at /usr/share/nginx/html inside the container.

Run the Container and see how it works -
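First create the ReplicationController from the configuration above (the generated pod name in the output below will vary):

kubectl create -f nfs-pod.yml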
[root@manja17-I13330 kubenetes-config]# kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
mysql             1/1       Running   0          1d
nfs-web-qk2qn     1/1       Running   0          4m

Now log in to the nfs-web container and see if we can access the NFS drive,
[root@manja17-I13330 kubenetes-config]# kubectl exec -it nfs-web-qk2qn -- bash
root@nfs-web-qk2qn:/# cd /usr/share/nginx/html/
root@nfs-web-qk2qn:/usr/share/nginx/html# touch container.txt
root@nfs-web-qk2qn:/usr/share/nginx/html# cat container.txt
root@nfs-web-qk2qn:/usr/share/nginx/html# echo "this is from container" >> container.txt
root@nfs-web-qk2qn:/usr/share/nginx/html# cat container.txt
this is from container
root@nfs-web-qk2qn:/usr/share/nginx/html# exit
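To confirm that the write really landed on the NFS share, we can check the export on the NFS server itself (assuming the /nfsshare path used earlier):

cat /nfsshare/container.txt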

More to Come, Happy learning :-)

Union File System for Containers

As we all know, containers run from images, which are made up of layers. The implementation is based on union filesystems. UnionFS is a filesystem that operates by creating layers, making containers very lightweight and fast. There are multiple implementations and related storage backends, such as AUFS, btrfs, vfs and device mapper.

Basically, union mounting is a way of combining multiple directories into one that appears to contain their combined contents. A UnionFS takes an existing filesystem and overlays it on a newer filesystem. It allows files and directories of separate filesystems, known as branches, to be transparently overlaid, forming a single coherent filesystem. Contents of directories that have the same path within the merged branches will be seen together in a single merged directory within the new virtual filesystem.

Let's see an implementation of UnionFS.
1. Install aufs-tools on an Ubuntu machine.
2. Create a dir in the home location: mkdir /home/home_dir
3. Create a dir in the tmp location: mkdir /tmp/tmp_dir
4. Create the mount point: mkdir /tmp/aufs-root
5. Mount both dirs using aufs as, mount -t aufs -o br=/tmp/tmp_dir:/home/home_dir none /tmp/aufs-root/
We have mounted /tmp/tmp_dir and /home/home_dir onto a single filesystem, /tmp/aufs-root.
Now, once we create a file in each of the /tmp/tmp_dir and /home/home_dir directories and check /tmp/aufs-root, we can see,
root@work-node2:/tmp/aufs-root# ll
-rw-r--r-- 1 root root    0 Jul 17 01:38 home_dir_file
-rw-r--r-- 1 root root    0 Jul 17 01:38 tmp_dir_file

So we have basically mounted the two directories as layers on /tmp/aufs-root. Changes made to these files through the union mount are also reflected in the original branches. This is the basis for Docker's container technology.
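A quick way to see where writes go (a sketch; with the mount options above, the first branch /tmp/tmp_dir is the writable one by default):

echo hello > /tmp/aufs-root/new_file
ls /tmp/tmp_dir/     # new_file appears in the writable top branch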

Docker with Ansible

Ansible, as we know, is one of the best automation tools available today, and Docker is the industry-leading container framework. In this article we will see how to integrate Ansible and Docker, and how to use Ansible to create Docker containers.

Install the necessary packages for using Docker with Ansible. The docker-py package is required for working with Docker from Ansible. To install this package, use the python pip command as,
pip install docker-py

Docker Pull - Ansible contains modules that help us work with Docker. Let's create an Ansible playbook for pulling an image.

[root@manja17-I13330 docker-ansible]# cat docker-pull.yml
---
- hosts: localhost
  tasks:
  - name: Pull Ubuntu image
    docker_image:
      name: ubuntu

In the above playbook, we are pulling an Ubuntu image with the docker_image module. Run the playbook as below,

[root@manja17-I13330 docker-ansible]# ansible-playbook docker-pull.yml
 [WARNING]: Could not match supplied host pattern, ignoring: all

 [WARNING]: provided hosts list is empty, only localhost is available

PLAY [localhost] ***************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************
ok: [localhost]

TASK [Pull Ubuntu image] *******************************************************************************************************
changed: [localhost]

PLAY RECAP *********************************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0

Once the playbook runs successfully, we can see that the Ubuntu image has been downloaded. This can be verified with the “docker images” command.
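For example,

docker images ubuntu     # the freshly pulled image should be listed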

Build the Docker Image - Ansible provides a way to build Docker images from a playbook. Below is a simple example of building an image,

[root@manja17-I13330 ansible-docker]# cat docker-build.yml
---
- hosts: localhost
  tasks:
  - name: Build Nginx image
    docker_image:
      path: .
      name: my-nginx
      tag: 0.1

In the above snippet, we are building an image from the Dockerfile available in the current location. Here is the Dockerfile that we are using,

[root@manja17-I13330 ansible-docker]# cat Dockerfile
FROM ubuntu
MAINTAINER Jagadish Manchala <jagadish.manchala@gmail.com>
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get install -y nginx

Once we run the playbook, we will have a my-nginx image built with tag 0.1.
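Run it the same way as the pull playbook and verify the result:

ansible-playbook docker-build.yml
docker images my-nginx    # should show the my-nginx image with tag 0.1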

Starting a Container - in order to start a container, we can use the below playbook,

[root@manja17-I13330 docker-ansible]# cat docker-start-container.yml
---
- hosts: localhost
  tasks:
   - name: Create another Nginx container
     docker_container:
       name: my-nginx
       image: my-nginx:0.1
       ports:
         - "18880:80"
       env:
         KEY: value
       command: sleep infinity

In the above playbook, we are running the my-nginx container image that we built earlier. We start the container and make sure the “sleep infinity” command runs so that the container stays up and running.

[root@manja17-I13330 docker-ansible]# ansible-playbook docker-start-container.yml
 [WARNING]: Could not match supplied host pattern, ignoring: all

 [WARNING]: provided hosts list is empty, only localhost is available

PLAY [localhost] *************************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************
ok: [localhost]

TASK [Create another Nginx container] ****************************************************************************************
changed: [localhost]

PLAY RECAP *******************************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0

Let's check docker ps and see if our container is running,
[root@manja17-I13330 docker-ansible]# docker ps | grep nginx
e132f5acdcca        my-nginx:0.1                             "sleep infinity"         10 seconds ago      Up 8 seconds        0.0.0.0:18880->80/tcp     my-nginx

We can see that the container is up and running now. For the example yml files, check the GitHub repo here.

Thursday, July 12, 2018

Namespaces

With the advent of containers, it has become very easy to isolate Linux processes into their own little environments. With this we can run a whole range of applications on a single Linux machine, where each container works independently and no two containers interfere with each other. But how does this happen? What happens under the hood?

There are many libraries and components of the Linux kernel that are used while working with containers. Some of these are available by default and some need to be installed.

We already discussed the chroot command, which captures the basic idea of a namespace. Just as chroot allows a process to see any directory as the root of the system (independent of the rest of the processes), Linux namespaces allow other aspects of the operating system, such as the network, processes, mount points and inter-process communication, to be independently modified.

Namespaces in Use - When we are working on a single-user computer everything is fine, but on a multi-user computer where multiple services are running, it is very important to handle the security of those services. In many cases, when one service is attacked, it can lead to an attack on the whole system. Namespaces isolate the environment in which each service runs.

Consider a site like TopCoder or HackerRank, which provides an environment for developers to write and test code. How does malicious code not affect the system? These sites provide a secure environment with its own resources: memory, network, CPU, disk and so on. When a developer executes code, it runs in an environment that cannot impact the other environments running code. Each environment has its own area with its own resources (memory, CPU, disk, network) and does not interfere with the other environments. This is what we call a namespace.

This is what containers use: they run in their own namespaces, where they have their own resources and do not interfere with other containers.

Unshare - unshare is a Linux command that allows us to run a program with some namespaces unshared from the parent. Let's see how to use the unshare command,

[root@dev jail]# hostname 
dev.testing.com

[root@dev jail]# unshare -u /bin/sh 
sh-4.1# hostname my-new-hostname
sh-4.1# hostname
my-new-hostname
sh-4.1# exit
exit

[root@dev jail]# hostname
dev.testing.com

Looking at the session above, the hostname is initially “dev.testing.com”. The unshare -u command creates a new UTS namespace and starts a shell inside it. In that separate environment we changed the hostname to “my-new-hostname”, and once we exited, the hostname was back to the original.
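The unshare command can isolate other namespaces too. For example, a new PID namespace (a sketch; run as root):

unshare --pid --fork --mount-proc /bin/sh
ps aux     # inside the new namespace only this shell and ps are visible, starting from PID 1
exit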

More to Come, Happy learning :-)
