Docker Swarm
Docker Swarm is a clustering and scheduling tool for Docker containers. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system. Swarm mode is also built natively into Docker Engine, the layer between the OS and container images; Docker Engine 1.12 integrates the orchestration capabilities of Docker Swarm directly into the engine. In this article, we will see how to set up swarm mode with Docker containers, but before that let's understand some of the terminology of Docker swarm mode.
Why do we want a Container Orchestration System?
Imagine that you had to run hundreds of containers. You can easily see that if they are running in a distributed mode, there are multiple features you will need from a management angle to make sure that the cluster is up and running, is healthy, and more.
Some of these necessary features include (see the example commands after this list):
- Health checks on the containers
- Launching a fixed set of containers for a particular Docker image
- Scaling the number of containers up and down depending on the load
- Performing rolling updates of software across containers
Clustering - Clustering is an important feature for container technology because it creates a cooperative group of systems that can provide redundancy, enabling Docker Swarm failover if one or more nodes experience an outage.
What does Swarm provide - A Docker Swarm cluster also provides administrators and developers with the ability to add or subtract container iterations as computing demands change.
Swarm manager - An IT administrator controls Swarm through a swarm
manager, which orchestrates and schedules containers. The swarm manager allows
a user to create a primary manager instance and multiple replica instances in
case the primary instance fails. In Docker Engine's swarm mode, the user can
deploy manager and worker nodes at runtime. This enables multiple machines running Docker Engine to participate in a cluster, called a swarm. The Docker Engines contributing to a swarm are said to be running in swarm mode. Machines enter swarm mode either by initializing a new swarm or by joining an existing swarm.
Manager node - The manager node performs
cluster management and orchestration while the worker nodes perform tasks
allocated by the manager.
Node - A Docker engine participating in a swarm is called a node. A node can either be a manager node or a worker node. A manager node itself, unless configured otherwise, is also a worker node.
Service - The central entity in the Docker Swarm infrastructure is called a service. A Docker swarm executes services. The user submits a service to the manager node to deploy and execute.
Task - A service is made up of many tasks. A task is the most basic work unit in a swarm. Tasks are allocated to the worker nodes by the manager node.
Services can be scaled at runtime to handle extra load. The swarm manager natively supports internal load balancing to distribute tasks across the participating worker nodes. The manager also supports ingress load balancing to control the exposure of Docker services to the external world, and it supports service discovery by automatically assigning a DNS entry to every service.
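For example (a sketch only; the overlay network "my-net", the service name "web" and the port numbers are placeholders), a published service is reachable through any node of the swarm via the ingress routing mesh, and other containers on the same overlay network can reach it by its DNS name:

docker network create --driver overlay my-net
docker service create --name web --network my-net --replicas 2 --publish 8080:80 nginx

After this, port 8080 on any swarm node routes to one of the "web" tasks (ingress load balancing), and a container attached to "my-net" can simply connect to the hostname "web" (service discovery).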
Let's set up swarm mode and see how things work.
1. Create a Swarm Manager
Obtain your local host address so that the swarm can be initialized.
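For example, on the machine that will become the manager (illustrative output; use whatever address your own host reports):
[root@ip-10-149-66-36]# hostname -I
10.149.66.36 172.17.0.1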
[root@ip-10-149-66-36]# docker swarm init --advertise-addr 10.149.66.36
Swarm initialized: current node (4f2j4n02r0p8bs4mcu65h9dt7) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-0rxc91z9zbyg9pevtpr1s3f2jdpkhuwcgcbn1m7i4x15ku9y6f-3xn94z48fsjr0tbu1y6vzvv5v \
    10.149.66.36:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
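If you lose this output, the same worker join command can be printed again at any time from the manager with:
[root@ip-10-149-66-36]# docker swarm join-token worker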
The Docker swarm manager is now started. Once the swarm is initialized, run the "docker info" command to confirm that swarm mode is active.
docker info
 Swarm: active
  NodeID: 4f2j4n02r0p8bs4mcu65h9dt7
  Is Manager: true
  ClusterID: 7vdyrotpbxp4peonia1xp6hby
  Managers: 1
  Nodes: 1
  Orchestration:
   Task History Retention Limit: 5
  Raft:
   Snapshot Interval: 10000
   Heartbeat Tick: 1
   Election Tick: 3
  Dispatcher:
   Heartbeat Period: 5 seconds
  CA Configuration:
   Expiry Duration: 3 months
  Node Address: 10.149.66.36
Now that the docker swarm is initialized, we need to add worker nodes to the swarm so that services deployed to the swarm can run their containers on those worker nodes.
2. Go to the remote machine and run the swarm join command (make sure Docker Engine is installed on the remote machine).
[root@ip-10-149-66-123]# hostname -I
10.149.66.123 172.17.0.1 172.18.0.1
[root@ip-10-149-66-123 centos]# docker swarm join --token
SWMTKN-1-0rxc91z9zbyg9pevtpr1s3f2jdpkhuwcgcbn1m7i4x15ku9y6f-3xn94z48fsjr0tbu1y6vzvv5v
10.149.66.36:2377
This node joined a swarm as a worker.
Once the above command runs successfully, the worker node is joined to the manager node.
3. Once the swarm is initialized and the worker node is added, we need to confirm the details. On the manager node run the "docker node ls" command as
[root@ip-10-149-66-36 yum.repos.d]# docker node ls
ID               HOSTNAME                STATUS  AVAILABILITY  MANAGER STATUS
4f2j4n02r0p8b *  ip-10-149-66-36.ca.com  Ready   Active        Leader
6p440yuk3g44k    ip-10-149-66-123        Ready   Active
From the above output we can see that the node with IP address "10.149.66.36" is the manager (leader), and the node with IP address "10.149.66.123" is added as a worker node.
4. Deploy a Service
Now let's deploy a ping service.
[root@ip-10-149-66-36]# docker service create --replicas 1 --name helloworld alpine ping docker.com
e00irbaijlk9n4h6yz1219mz2
A Service with the name "helloworld"
is deployed. In order to check if the service is running, use the "docker
service ls" command as below
[root@ip-10-149-66-36 yum.repos.d]# docker service ls
ID            NAME        REPLICAS  IMAGE   COMMAND
e00irbaijlk9  helloworld  0/1       alpine  ping docker.com
In order to get more details about the service, use the "docker service inspect" command with the service ID as
[root@ip-10-149-66-36 yum.repos.d]# docker service inspect --pretty e00irbaijlk9
ID:      e00irbaijlk9n4h6yz1219mz2
Name:    helloworld
Mode:    Replicated
 Replicas:    1
Placement:
UpdateConfig:
 Parallelism:  1
 On failure:   pause
ContainerSpec:
 Image:   alpine
 Args:    ping docker.com
Resources:
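Without the --pretty flag, the same command returns the full service definition as JSON, which is useful for scripting:
[root@ip-10-149-66-36 yum.repos.d]# docker service inspect e00irbaijlk9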
Now let's check the status of the service using the "docker service ps" command as
[root@ip-10-149-66-36 yum.repos.d]# docker service ps e00irbaijlk9
ID    NAME          IMAGE   NODE              DESIRED STATE  CURRENT STATE           ERROR
9s7*  helloworld.1  alpine  ip-10-149-66-123  Running        Running 43 seconds ago
In this case, the one instance of the helloworld service is running on the worker node (ip-10-149-66-123). You may instead see the service running on your manager node; by default, manager nodes in a swarm can execute tasks just like worker nodes.
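If you prefer the manager not to run any tasks, its availability can be set to drain (a sketch; the node name is taken from the "docker node ls" output above):
[root@ip-10-149-66-36]# docker node update --availability drain ip-10-149-66-36.ca.com
Tasks on a drained node are rescheduled onto the remaining active nodes; setting the availability back to "active" lets it receive new tasks again.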
Check the same details on the worker
node as
[root@ip-10-149-66-123 centos]# docker ps
CONTAINER ID  IMAGE          COMMAND            CREATED        STATUS        PORTS  NAMES
a827c2d976d5  alpine:latest  "ping docker.com"  2 minutes ago  Up 2 minutes         helloworld.1.9s7cyf913h6dsneh41cgsfy7i
Now the swarm manager is up and running, the worker node is added to the swarm manager, and a service with the name "helloworld" is deployed; we can see that it is running on the worker node.
5. Scale the Service
Once you have deployed
a service to a swarm, you are ready to use the
Docker CLI to scale the number of containers in the service. Containers running
in a service are called “tasks.”
[root@ip-10-149-66-36]# docker service
scale e00irbaijlk9=3
e00irbaijlk9 scaled to 3
Now the service is scaled from 1 container to 3 containers. These containers can run on either the manager node or the worker node; even though the swarm manager is a manager node, it can still run containers. The details of the running containers can be checked using,
[root@ip-10-149-66-36 yum.repos.d]# docker service ps helloworld
ID                         NAME          IMAGE   NODE                    DESIRED STATE  CURRENT STATE            ERROR
9s7cyf913h6dsneh41cgsfy7i  helloworld.1  alpine  ip-10-149-66-123        Running        Running 3 minutes ago
48bj29pr4dzvtm6odt72brp4c  helloworld.2  alpine  ip-10-149-66-36.ca.com  Running        Starting 46 seconds ago
bapvp1km9bwrof66ujup79eld  helloworld.3  alpine  ip-10-149-66-36.ca.com  Running        Starting 46 seconds ago
From the above output we can see that the service is up and running with 3 containers. Two containers are running on the manager node and one is running on the worker node.
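Scaling works in both directions, so the same command can shrink the service back down, for example:
[root@ip-10-149-66-36]# docker service scale helloworld=1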
6. Delete the Service
A service can be deleted using the "docker service rm" command. This is done on the manager node as
[root@ip-10-149-66-36 yum.repos.d]# docker
service rm helloworld
helloworld
[root@ip-10-149-66-36 yum.repos.d]# docker
service inspect helloworld
[]
Error: no such service: helloworld
At the same time, running "docker ps" on the worker node will no longer show any containers for the service.
This is a small introduction to Docker swarm and how to implement it. More to come. Happy learning :-)