Friday, January 29, 2016

Ansible – Playbooks

In our previous articles we saw how to use Ansible to run basic commands on a remote machine. In this article we will look at playbooks, a very different way of using Ansible.

Playbooks are configuration files, written in YAML, that describe the actions to be performed on remote machines. To execute them, Ansible needs no additional software on the remote machine other than Python.

Now let's create a sample playbook for this article. As a scenario, we will create a playbook that runs an echo command on the remote machine. To run such a command ad hoc, we can use

[root@vx111a ansible]# ansible dev -m command -a "/bin/echo hello Sample"
172.16.202.96 | success | rc=0 >>
hello Sample

'dev' is a group that we configure in our inventory file with a list of IP addresses. Looking at the above command, we used the command module to run a command on the remote machine. Now we will create a sample playbook for the same command in YAML format.

Now let's create an ansible directory for this article. The next step is to tell Ansible about the remote machines we need to talk to. For this we create an Ansible hosts file, also called an inventory file. It contains IP addresses organized into groups that we can reference when running Ansible commands. By default this file is located at /etc/ansible/hosts. That is a global file, but Ansible also lets you create your own hosts file and pass it to the ansible command.

When we run the ansible command, it always checks for an ansible.cfg file in the directory it is run from. If the file is found, its values override the global configuration.

Once the directory is created, create an ansible.cfg file with the values as,
[root@vx111a ansible]# cat ansible.cfg
[defaults]
hostfile=hosts

We have defined the hostfile configuration option, within the [defaults] group, with the value hosts.

Now let's define the hosts file like this,
[root@vx111a ansible]# cat hosts
[dev]
172.16.202.96 ansible_ssh_user=root

In the hosts file we have defined a "dev" group with an IP address and the user we need to log in as to perform actions on that machine.

Once the ansible.cfg and hosts files are configured, we can test the setup with a basic command,

[root@vx111a ansible]# ansible -m ping 'dev'
172.16.202.96 | success >> {
    "changed": false,
    "ping": "pong"
}

It was a success: we pinged the servers defined in the hosts file under the "dev" group. You can compare the IP in the above command output with the one in the hosts file.
Now we will define a playbook for the same command execution.

[root@vx111a ansible]# cat sample-playbook.yml
---
- hosts: dev
  tasks:
    - name: run echo Command
      command: /bin/echo Hello Sample PlayBook

The above is the sample playbook that we need to write. The hosts: dev declaration at the top tells Ansible that we are targeting the dev host group. Next comes the list of tasks. In the above example we have one task, named "run echo Command"; the name is simply a description that helps users understand what the task does. Finally, command: /bin/echo Hello Sample PlayBook uses the command module to run /bin/echo with the arguments "Hello Sample PlayBook".

Now let's run the playbook,

[root@vx111a ansible]# ansible-playbook sample-playbook.yml

PLAY [dev] ********************************************************************

GATHERING FACTS ***************************************************************
ok: [172.16.202.96]

TASK: [run echo Command] ******************************************************
changed: [172.16.202.96]

PLAY RECAP ********************************************************************
172.16.202.96              : ok=2    changed=1    unreachable=0    failed=0  

Once the above playbook completes, we can see the status as ok. The most important thing to notice is that the playbook does not print the output of the module.

Now let's write another playbook where we add some debug statements for the output that is generated. To the same playbook we add a register:

[root@vx111a ansible]# cat sample-playbook1.yml
---
- hosts: dev
  tasks:
    - name: Echo a Command
      command: /bin/echo Hello
      register: out

    - debug: var="{{ out.stdout }}"
    - debug: var="{{ out.stderr }}"


In the above playbook, besides name and command, we have a register field. We registered the command's result in a variable named out, and then passed out.stdout (the output of the command) and out.stderr to the debug module. When we execute the playbook we see something like this,

[root@vx111a ansible]# ansible-playbook sample-playbook1.yml

PLAY [dev] ********************************************************************

GATHERING FACTS ***************************************************************
ok: [172.16.202.96]

TASK: [Echo a Command] ********************************************************
changed: [172.16.202.96]

TASK: [debug var="{{ out.stdout }}"] ******************************************
ok: [172.16.202.96] => {
    "var": {
        "Hello": "Hello"
    }
}

TASK: [debug var="{{ out.stderr }}"] ******************************************
ok: [172.16.202.96] => {
    "var": {
        "": ""
    }
}

PLAY RECAP ********************************************************************
172.16.202.96              : ok=4    changed=1    unreachable=0    failed=0  


We can see the stdout as "Hello".
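The debug module also accepts a msg argument, which lets us format the registered values into a single message instead of dumping variables. A sketch of the same play using msg (an untested variant of the example above; out.rc is the command's return code):

```yaml
---
- hosts: dev
  tasks:
    - name: Echo a Command
      command: /bin/echo Hello
      register: out

    - name: Print stdout and the return code in one message
      debug: msg="stdout was '{{ out.stdout }}' (rc={{ out.rc }})"
```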

This was a basic introduction to playbooks in Ansible. Hope this helps. In the next articles we will see more advanced examples of Ansible playbooks.

Thursday, January 21, 2016

Ansible – More Modules and Commands


In this article we will see more in-depth uses of the ansible command, using the same hosts file.

Ping the Local Machine
[root@vx111a docker]# ansible all -i 'localhost,' -c local -m ping
localhost | success >> {
    "changed": false,
    "ping": "pong"
}

List all the Hosts Defined in the Inventory File
[root@vx111a docker]# ansible -i hosts all --list-hosts
    172.16.202.96

Execute a Command on the Remote Machine defined in the Hosts File
[root@vx111a docker]# ansible -i hosts all  -m shell -a "uptime" --user vagrant
172.16.202.96 | success | rc=0 >>
 15:41:20 up 4 days,  5:10,  1 user,  load average: 0.00, 0.00, 0.00

Execute the Command on the Local machine
[root@vx111a docker]# ansible all -i 'localhost,' -c local -m shell -a "uptime"
localhost | success | rc=0 >>
 20:21:00 up 13 days, 15 min,  4 users,  load average: 0.22, 0.14, 0.14

Copy a File
[root@vx111a docker]# ansible -i hosts all -m copy -a "src=/work/docker/Dockerfile dest=/tmp" --user vagrant
172.16.202.96 | success >> {
    "changed": true,
    "checksum": "591af5f15567c89b7ebcaf4e7d9ca4657ea63135",
    "dest": "/tmp/Dockerfile",
    "gid": 500,
    "group": "vagrant",
    "md5sum": "2ec05713f50f4c2026aa514329111722",
    "mode": "0664",
    "owner": "vagrant",
    "size": 132,
    "src": "/home/vagrant/.ansible/tmp/ansible-tmp-1453200901.47-275728752624352/source",
    "state": "file",
    "uid": 500
}


Set File Permissions
[root@vx111a docker]# ansible -i hosts all -m file -a "dest=/tmp/Dockerfile mode=664" --user vagrant
172.16.202.96 | success >> {
    "changed": false,
    "gid": 500,
    "group": "vagrant",
    "mode": "0664",
    "owner": "vagrant",
    "path": "/tmp/Dockerfile",
    "size": 132,
    "state": "file",
    "uid": 500
}

Create a Directory
[root@vx111a docker]# ansible -i hosts all -m file -a "dest=/tmp/pump mode=755 state=directory" --user vagrant
172.16.202.96 | success >> {
    "changed": true,
    "gid": 500,
    "group": "vagrant",
    "mode": "0755",
    "owner": "vagrant",
    "path": "/tmp/pump",
    "size": 4096,
    "state": "directory",
    "uid": 500
}

Delete a Directory
[root@vx111a docker]# ansible -i hosts all -m file -a "dest=/tmp/pump state=absent" --user vagrant
172.16.202.96 | success >> {
    "changed": true,
    "path": "/tmp/pump",
    "state": "absent"
}

Get Package Status
[root@vx111a docker]# ansible -i hosts all -m yum -a "name=wget state=present" --user vagrant
172.16.202.96 | success >> {
    "changed": false,
    "msg": "",
    "rc": 0,
    "results": [
        "wget-1.12-1.4.el6.x86_64 providing wget is already installed"
    ]
}

Get Service Status
[root@vx111a docker]# ansible -i hosts all -m service -a "name=auditd state=started" --user vagrant
172.16.202.96 | success >> {
    "changed": false,
    "name": "auditd",
    "state": "started"
}

Setup – One of the most useful things we commonly do is gather the details of a machine. setup is the module that retrieves details such as environment variables, memory configuration, etc. from a remote machine. To get these details for the local machine we can use

[root@vx111a docker]# ansible all -i 'localhost,' -c local -m setup
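The facts that setup gathers can also be referenced directly in a playbook; a small sketch (ansible_hostname and ansible_memtotal_mb are standard facts collected by setup):

```yaml
---
- hosts: all
  tasks:
    - name: Show a couple of gathered facts
      debug: msg="{{ ansible_hostname }} has {{ ansible_memtotal_mb }} MB of RAM"
```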

More to Come, Happy Learning

Getting Ansible to Work


In the previous article we saw what Ansible is and how to install it. In this article we will see how to start working with Ansible; it covers the basic usage and the commands available. For testing purposes we will create a Vagrant virtual machine and then use Ansible against that machine.

1) Create a Vagrant Machine - For more details on configuring Vagrant, check here. For the virtual box that we configure, we use the below Vagrantfile,

[root@vx111a docker]# cat Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "CentOS2"
  config.vm.box_url = "https://saleseng.s3.amazonaws.com/boxfiles/CentOS-6.3-x86_64-minimal.box"
  config.vm.host_name = "dev.foohost.vm"
  config.vm.network "private_network", ip: "172.16.202.96"

 config.vm.provider :virtualbox do |vb|
     vb.name = "foohost"
 end
end

Once the file is saved as Vagrantfile, run the "vagrant up" command to start the virtual machine. Once it is up and running, the machine is assigned the IP address "172.16.202.96".

2) Generate SSH keys - Once the virtual machine is up and running, let's check that it is reachable using the ping module. For this we will use Ansible.

Before Ansible can act on the remote machine, we need to configure SSH keys so that the host machine can connect to it. For this, run the "ssh-keygen" command.

[root@vx111a docker]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
8e:d3:a5:5d:f6:a4:7f:5b:b0:e1:1e:5e:f3:3c:16:63 root@vx111a.jas.com
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|                 |
|        S . o +  |
|       + + o = E |
|      o + . . *.=|
|       .     + *=|
|              =o=|
+-----------------+

Once this is done, we have the public and private keys available. The public key is in the ~/.ssh/id_rsa.pub file.

3) Copy the SSH key - Copy the contents of that file and add them to the authorized_keys file on the remote machine (the Vagrant virtual machine). For this, use the "vagrant ssh" command, which lets you log in to the running virtual machine that we created.
Create an authorized_keys file under ~/.ssh/ (if not already present) and copy the public key contents into it.

4) Ping the remote machine – Now that the host machine and remote machine have SSH set up, we can start using Ansible. First, create a hosts definition file. The default hosts definition is located at /etc/ansible/hosts. You can use a custom hosts definition (located outside /etc, for example) by defining it elsewhere and passing the -i [host-file location] parameter to the ansible command. We will create a sample hosts file as

[root@vx111a docker]# cat hosts
[servers]
172.16.202.96

[dev]
172.16.202.96

Now in the hosts file we have defined the IP address of the server that we want to manage (in this case the Vagrant virtual box).

Once the hosts file is configured, run the ansible command as,

[root@vx111a docker]# ansible -i $PWD/hosts all -m ping -u vagrant
172.16.202.96 | success >> {
    "changed": false,
    "ping": "pong"
}

Let's take a look at the command that we ran above:
  • ansible is the command, which runs one task at a time.
  • all tells Ansible to run this task on all the hosts in the inventory (hosts) file.
  • -m means "use this Ansible module", and ping is the name of the module. The ping module contacts the host and proves that it's listening.
  • -u means "run as the user passed after -u", in this case vagrant.

So we ran an Ansible command against the remote machine to get the ping status, as the user vagrant.
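The same check can also be written as a small playbook, where remote_user takes the place of the -u flag (a sketch, reusing the hosts file above):

```yaml
---
- hosts: all
  remote_user: vagrant
  tasks:
    - name: Verify that the host is reachable
      ping:
```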

More to Come in the next articles. Happy learning 

Getting Started With Ansible


Now, in a world of ever more virtual machines, containers and cloud environments, consider the work of administering them: keeping them all updated, pushing changes out to them, copying files, and so on. These things get complicated and time-consuming when we have a large server fleet.

One of the common problems in managing multiple servers is keeping their configuration identical; with a large server fleet this is very complex. We tweak the configuration by hand on one server, set up the same thing on another, forget the changes we made on the first, and a different ecosystem is born.

What may also happen is that you have to reinstall a server. Say you've had your primary name-server set up and running for a year, but then you want to (or have to) switch hosts. How on earth did you install that name-server again? You probably have to start all over and Google your way around. When switching hosts, this is not something you want to think about. These common problems can be solved with automation tools.

Automation tools are based on the idea of "describe the desired state of the system and the tool will make it happen". Tools like Chef, Puppet, Ansible and SaltStack are all built on this principle, and they make sure a desired state change is applied only when all the required steps succeed.

In this article we will explore an automation tool called "Ansible". Ansible is a helpful tool that lets you define a group of machines and describe how those machines should be configured or what actions should be taken on them. Most importantly, all of this can be done from one central location.

Ansible uses plain SSH, so nothing needs to be installed on the client machines we are targeting. Other automation tools like Puppet or Chef need an agent installed on the target machine to perform actions on it.

Chef, for comparison, uses a master-agent model; in addition to the master server it also requires a workstation to control the master. Agents need to be installed on the target machines, and from the workstation the "knife" tool uses SSH for deployment and other automation work. Certificates handle authentication with the master, and the agents continuously check in with the master for changes.

Ansible, on the other hand, is quite different. First released in early 2012, it is written in Python and only requires the Python libraries to be present on the servers that need to be configured. Python is available on almost all Linux distros.

Ansible is lightweight, relatively easy to use, and fast to deploy compared to other configuration-management tools. It does away with the need for agents; all communication with managed machines is handled either via standard SSH commands or via the Paramiko module, which provides a Python interface to SSH2. It also inherits the excellent built-in security of SSH.

At this moment Ansible only supports Linux and does not support Windows machines, and its GUI is quite poor compared with other tools.

Installing Ansible

Installing Ansible is quite easy. It can be installed with the yum command on RHEL/CentOS machines, whereas on Ubuntu/Debian machines we need certain third-party packages to run Ansible.

Use "yum install ansible" on RHEL/CentOS machines.

Another way of installing Ansible is with pip, Python's tool for installing and managing packages. On most Linux machines this works as "pip install ansible".

One more method of installing Ansible is to download the source code from the GitHub repository and run the setup command.

More to Come, Happy Learning

Docker – Data Volumes

In the previous articles we have seen how to create containers from various images and how to build a basic image of our own. But if we remove a container, everything inside it is gone. What if we want to write data from inside a container? Consider an application running inside a container that needs to store data. Even if the container is removed and we later launch another container, we would like the data to still be available.

In other words, what we are trying to do here is separate the container lifecycle from the data. We want to keep these separate so that the generated data is not destroyed or tied to the container lifecycle and can thus be reused.

There are 2 ways in which we can manage data in Docker:
Data volumes
Data volume containers

A data volume is a specially designated directory for the container. It is not tied to the container lifecycle: it is initialized during container creation, but it is not deleted when the container is removed, and no garbage collection of this location happens.
We can share this data location with another container, optionally in read-only mode.

In this article we will see how we can create a data volume and use it.

1) Create a Data Volume
[root@vx111a SampleTest]# docker run -it -v /data --name container1 busybox

We pass the "-v /data" parameter on the command line. Once the container is up and running, log in and look at the data location:

/ # ls
bin   data  dev   etc   home  proc  root  run   sys   tmp   usr   var

/ # cd data/
/data # touch hello.txt

/data # ls
hello.txt

/data # exit

We created a text file named "hello.txt" and exited the container after creating it. Now run "docker ps -a" to see the status of the containers:
  
[root@vx111a SampleTest]# docker ps -a
CONTAINER ID    IMAGE     COMMAND     CREATED     STATUS     PORTS    NAMES
92ae3eb0281a    busybox   "sh"              minute ago   Exited                    container1

We can see the busybox container that we started is in the Exited state.

2) Inspect the Container - Now let's inspect the container using the inspect command

[root@vx111a SampleTest]# docker inspect container1

Once we run the command, we see many lines of JSON output, of which the data volumes are our interest; we look for the Volumes attribute in the output.

"Volumes": {
            "/data": {}
        },

 "Mounts": [
        {
            "Name": "3cda62a8f710cef37d7ad11843a19186470afa2e1c29a5b082bf969070b14118",
            "Source": "/var/lib/docker/volumes/3cda62a8f710cef37d7ad11843a19186470afa2e1c29a5b082bf969070b14118/_data",
            "Destination": "/data",
            "Driver": "local",
            "Mode": "",
            "RW": true
        }
    ],

We can see both Volumes and Mounts in the output. Volumes tells us which volumes are defined ("/data") and Mounts tells us the host locations where the volumes are mounted. In this case "/data" is mounted at "/var/lib/docker/volumes/3cda62a8f710cef37d7ad11843a19186470afa2e1c29a5b082bf969070b14118/_data".

Now we can go to that location and see the file we created inside the container,

[root@vx111a SampleTest]# cd /var/lib/docker/volumes/3cda62a8f710cef37d7ad11843a19186470afa2e1c29a5b082bf969070b14118/_data

[root@vx111a _data]# ll
total 0
-rw-r--r-- 1 root root 0 Jan 13 09:26 hello.txt

We can also see that the volume's RW mode is set to true, i.e. read and write.

3) Restart the Container – Now let's start the container again, which we stopped previously.

[root@vx111a _data]# docker restart container1
container1

When we attach back to container1, we can see the data is still available
[root@vx111a _data]# docker attach container1

/ # ls
bin   data  dev   etc   home  proc  root  run   sys   tmp   usr   var
/ # cd data/
/data # ls
hello.txt

4) Remove the Container Completely – We can now stop the container and remove it using "docker rm <container ID>". Even once the container is removed, we can still see the data:

[root@vx111a _data]# cd /var/lib/docker/volumes/3cda62a8f710cef37d7ad11843a19186470afa2e1c29a5b082bf969070b14118/_data
[root@vx111a _data]# ll
total 0
-rw-r--r-- 1 root root 0 Jan 13 09:26 hello.txt
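A volume can also be declared at image-build time rather than on the command line, using the VOLUME instruction in a Dockerfile. A minimal sketch:

```dockerfile
FROM busybox
# Equivalent to passing "-v /data" when the container is run
VOLUME /data
```

Any container built from this image gets a /data volume automatically.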

More to Come, Happy Learning

Monday, January 11, 2016

Docker – Tomcat with Docker

We have seen multiple articles on how to run applications inside Docker containers. In this article we will see how to run a Tomcat server in a Docker container. We will also expose ports so that we can access the Tomcat application.

Search for available Tomcat images using the command "docker search tomcat". Once we have chosen the tomcat image to download and use, we can write the Dockerfile for this purpose.

Create a DockerFile with the below contents
[root@vx111a testTomcat]# cat Dockerfile
FROM tomcat
MAINTAINER tomcat/ub jagadish
COPY ./myApp.war /usr/local/tomcat/webapps/
CMD ["catalina.sh","start"]

As we can see, we use the "tomcat" image as our base. The second line holds the MAINTAINER details. The third line copies the application myApp.war from the current location to /usr/local/tomcat/webapps/ inside the container. The last step runs the catalina.sh command, passing the argument start to it.

Let's build the image using the Dockerfile we created,
[root@vx111a testTomcat]# docker build -t jagadish/tomcat .
Sending build context to Docker daemon 4.096 kB
Step 0 : FROM tomcat
Trying to pull repository docker.io/library/tomcat ... latest: Pulling from library/tomcat
6d1ae97ee388: Pull complete
8b9a99209d5c: Pull complete
2e05a52ffd47: Pull complete
9fdaeae348bb: Pull complete
67d05086af43: Pull complete
2e9d1ec89d66: Pull complete
1afb0d51eee0: Pull complete
5cb24a57fa37: Pull complete
110c2f290b04: Pull complete
966dcd51a14f: Pull complete
8a57ce404f1b: Pull complete
e1b97b980d07: Pull complete
548f21c48132: Pull complete
3e93be06ad38: Pull complete
3e2882dd7e87: Pull complete
4ef5a14c7b39: Pull complete
fca011d2612a: Pull complete
119ddf0db1a7: Pull complete
1b8329afb263: Pull complete
Digest: sha256:6880839ca278600ea2a853455dd73c8ec8db9c0860d4aafc4a2b8b4d23dcdd85
Status: Downloaded newer image for docker.io/tomcat:latest
 ---> 1b8329afb263
Step 1 : MAINTAINER tomcat/ub jagadish
 ---> Running in e0588ccfdb59
 ---> c5829c11b42b
Removing intermediate container e0588ccfdb59
Step 2 : COPY ./myApp.war /usr/local/tomcat/webapps/
 ---> b3765f3df7c3
Removing intermediate container 091028d853b9
Step 3 : CMD catalina.sh start
 ---> Running in 0aba2a5f35a4
 ---> ca997a0f848e
Removing intermediate container 0aba2a5f35a4
Successfully built ca997a0f848e

Once the Tomcat image is downloaded and built, we can see it using "docker images",
[root@vx111a testTomcat]# docker images
REPOSITORY          TAG               IMAGE ID          CREATED           VIRTUAL SIZE
jagadish/tomcat      latest            ca997a0f848e     6 minutes ago    350 MB

Now that we have the Tomcat image available, we can run a Docker container from the above image as,

[root@vx111a testTomcat]# docker run -t -t -p 8080:8080 jagadish/tomcat /usr/local/tomcat/bin/catalina.sh run

Using CATALINA_BASE:   /usr/local/tomcat
Using CATALINA_HOME:   /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
****************
08-Jan-2016 13:51:26.925 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
08-Jan-2016 13:51:26.925 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 547 ms

We ran the catalina.sh command, passing the "run" argument to it. Once we ran the command, the catalina.sh logs are shown on the screen.

We also made sure that port 8080 is exposed on the Tomcat container. Now we can access the container at http://localhost:8080, which shows the Tomcat landing page.
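One caveat worth noting: we had to override the image's CMD because "catalina.sh start" launches Tomcat in the background, so a container started with it would exit immediately. Using "run" in the Dockerfile keeps Tomcat in the foreground, and the container can then be started without extra arguments; a sketch of that variant:

```dockerfile
FROM tomcat
MAINTAINER tomcat/ub jagadish
COPY ./myApp.war /usr/local/tomcat/webapps/
# "run" keeps Tomcat in the foreground so the container stays alive
CMD ["catalina.sh", "run"]
```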

Docker – basics

We have been working with virtual machines until now, but containers are rapidly pushing virtualization technology aside. In this article we will look at a container-based technology called Docker.

Docker is an open-source project that helps in creating and managing Linux-based containers. Containers are lightweight VM-like environments that allow code to run in isolation from other containers. Importantly, a container shares host resources such as the kernel, network, disk and memory, making it lighter weight, less CPU-intensive and lower in memory use.

Since containers use the host machine's features, booting them up is very fast. Docker provides a CLI that lets you do almost everything you could want to do with containers.

In other words, containers allow creating multiple isolated, secure Linux environments that run on the same physical server without conflicts between applications. FreeBSD Jails were an early container technology, letting us put apps and servers into one jail (which we would call a container) building on chroot, and OpenVZ later brought container technology to Linux.

What exactly can Docker do for us?
Docker solves many problems that we see with virtualization.
1) Containers use fewer resources
2) They don't need a separate hardware abstraction layer for running.
3) They share the same host kernel, so there is no need to install a separate kernel for every container
4) They isolate the application dependencies
5) Containers are shared as images
6) Creating ready to start applications that are easily distributable
7) Allowing easy and fast scaling of instances
8) Testing out applications and disposing them afterwards

Docker Terms
Before moving on to Docker usage, there are certain terms we need to learn.

Images – Images in Docker are something like a snapshot of a virtual machine, but much lighter weight. An image lets us replicate containers: if we have an image, we can start identical containers from it. Most images are available publicly, and if the image we need is not available, Docker lets us create our own. Images are named like ubuntu:latest, ubuntu:precise, django:1.6, django:1.7, etc., which means we can download a lightweight image of, say, Ubuntu and create a container based on it.

Containers - From images you can create containers; this is the equivalent of creating a VM from a snapshot, but far more lightweight. Containers are the things that actually run your applications. Each container has a unique ID and a unique human-readable name. Containers need to expose services, so Docker allows you to expose specific ports of a container.

Volumes – Volumes are how you persist your data beyond the lifespan of the container. They are spaces attached to the container that store data outside of it, allowing us to destroy the container without touching the data. Docker lets you define which parts are your application and which parts are your data, and gives you the tools to keep them separated.

Links – Whenever a container is started, a random private IP is assigned to it, so that other containers can talk to it using that IP address. This matters for two reasons: first, it provides a way for containers to talk to each other; second, containers share a local network. Links allow, for example, one container running a web application to connect to another container running a database.

Installing Docker
Installing Docker is pretty straightforward. Docker packages have been added to the main repositories of most Linux flavors. On CentOS we can install the docker package directly.

Once Docker and all the necessary packages are installed, we can start it using "service docker start". As I said earlier, Docker has a CLI that allows you to do almost everything you could want with a container. Check the installation by running the version command as

[root@vx111a work]# docker version
Client:
Version: 1.8.2-el7.centos
API version: 1.20
Package Version: docker-1.8.2-10.el7.centos.x86_64
Go version: go1.4.2
Git commit: a01dc02/1.8.2
Built:
OS/Arch: linux/amd64
Server:
Version: 1.8.2-el7.centos
API version: 1.20
Package Version:
Go version: go1.4.2
Git commit: a01dc02/1.8.2
Built:
OS/Arch: linux/amd64

How does this Work?
A container is a form of operating-system-level virtualization that allows us to create multiple isolated user spaces instead of just one. This isolation builds on chroot. According to Wikipedia, "chroot on Unix operating systems is an operation that changes the apparent root directory for the current running process and its children. A program that is run in such a modified environment cannot name (and therefore normally cannot access) files outside the designated directory tree."


When we create multiple virtual machines, the operating system and virtualized hardware are duplicated for each guest; when we create multiple containers, only the operating-system distribution-related folders are created from scratch, while the parts related to the Linux kernel are shared between containers.

Generally, in a virtual machine setup, even when we configure different distributions that share the same kernel, the guest OS is duplicated for each VM.

But when we configure things using Docker or any other container tool, the operating-system level is shared across containers, and only the bins and libs are created from scratch for each container. The Docker engine takes care of orchestrating these containers.

More to Come, Happy Learning