Sunday, September 3, 2017

Jenkins - Project matrix based strategy

In this next article on Jenkins authorization, we will see how project-based matrix authorization can be implemented in Jenkins. The difference between the two strategies is that with matrix-based authorization, users with permissions can perform certain actions across all of Jenkins, while with project-based authorization, permissions can also be granted per job. Let's see how this works.
1. Choose "Project-based Matrix Authorization Strategy" under "Configure Global Security" in "Manage Jenkins". There will now be a matrix and a text box to add users. Add the admin user and the worker user as before. Give all permissions to the admin user; for the worker, give only the Overall Read permission.
2. Save, and go to the specific project where you want your users to work. Once authorization is switched to project-based, every project will have an option called "Enable project-based security" in the project configuration, as below,
In the above case, I have added the worker user and given him the necessary permissions. This needs to be done by a user who has admin access to the job, so that he can grant access to other users on the job.
3. Now log out and log in as worker to verify that he has execute permission on the job.


Jenkins authorization - Matrix based Strategy

Controlling what users can do when they log in to a Jenkins server is quite important when running Jenkins in production. Jenkins provides multiple ways of authorization. When we go to "Manage Jenkins" -> "Configure Global Security", under the Authorization section we can see the available authorization strategies as below,
In this article, we will see how Matrix-based security works and how it can be implemented. Matrix-based security allows users to be configured in such a way that only users with specific permissions can perform certain actions.
1. In "Manage Jenkins" -> "Configure Global Security", choose "Matrix-based security". We will see a matrix with "User/group" details, and below it a text box to add users.
2. Now add the admin user first and give full permissions by selecting all checkboxes. This makes sure we have one user who has all permissions to modify things in the future and who can act as the admin user.

3. Now go to "Manage Jenkins" -> "Manage Users" and create a user "worker" for our session.
4. Now go back to "Manage Jenkins" -> "Configure Global Security" and, in the matrix-based security section, add the "worker" user and grant only the access that is necessary. In the above case, we have given the worker user Read access in the Job section. We also need to grant Overall Read so that the user has overall read permission. Now save the configuration and log out.
5. Once we log in as worker, we can only see the jobs the user has read permission on. We will see something like below,
We can see the jobs for which read permission was given. Even if we try to execute a job, it won't run, as the user does not have enough permissions.

This is how matrix-based security works.


Saturday, September 2, 2017

Jenkins - Unix users/Groups

Authentication is one of the internal features available in Jenkins. Jenkins has its own internal database for holding user names and passwords. Jenkins also provides a way to allow login using the underlying operating system's user names, which lets users log in with their system login credentials. This uses Pluggable Authentication Modules (PAM), and also works fine with NIS.

1. In order to do this, first change the permissions on the /etc/shadow file so that the jenkins user can read it. This is needed because the Jenkins server runs under the jenkins user ID.

chmod g+r /etc/shadow
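As a side note, the effect of g+r can be illustrated on a scratch file (the file name below is made up for the demonstration; don't experiment on the real /etc/shadow):

```shell
# Demonstrate what g+r adds, using a throwaway file as a stand-in
touch shadow.example
chmod 600 shadow.example        # owner read/write only, like a locked-down file
chmod g+r shadow.example        # grant read access to the owning group
stat -c '%A' shadow.example     # shows -rw-r-----
```

Any user in the file's group can now read it, which is exactly what we want for jenkins once it is in the shadow file's group.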
2. Now add the jenkins user to the root group

[root@ip-10-149-198-16 jenkins]# usermod -a -G root jenkins
[root@ip-10-149-198-16 jenkins]# groups jenkins
jenkins : jenkins root

3. Once the above changes are done, log in to the Jenkins server with the admin account and, under “Manage Jenkins” -> “Configure Global Security”, select “Unix user/group database” as the security realm.
Once done, log out; you can now log in using a user name and password from the Unix machine.
This is how we can use our existing system credentials to log in to the Jenkins server.

Sunday, August 27, 2017

Jenkins – Role based Strategy

Jenkins provides its own user database for login, but it does not have the facility to create groups/roles for users.
If we want groups in Jenkins, we have a few options:
1. Use OpenLDAP with Jenkins
2. Use Active Directory with Jenkins
3. Use the Role-based Authorization Strategy plugin in Jenkins
The default behavior (i.e., no way to create groups) exists because Jenkins' own user database is used for the security realm.

To verify this, log in to Jenkins as admin, go to “Manage Jenkins”, click on “Configure Global Security”, and look under the “Access Control” section: if you’ve selected “Jenkins’ own user database” as the “Security Realm”, then you can only create users, not groups.

There are 2 ways by which we can implement project-based authorization:
1. Project-based matrix authorization strategy
2. Role-based strategy
"Project-based Matrix Authorization Strategy" is pre-installed and easier to use for individual projects. "Role-based Strategy" is preferred when the number of projects in Jenkins is very large, as it uses patterns to match project names.

Install the Role based authorization strategy plugin
Login to Jenkins with your admin account -> Click on “Manage Jenkins” -> Click on “Manage Plugins” -> Click on “Available” tab -> Search for “role” in the Filter text box.

You’ll see “Role-based Authorization Strategy” in the results. Click on the check-box in front of it to select the item, then click the “Install without restart” button at the bottom.

Change the Jenkins Authorization method
Once the plugin is installed, the next step is to change the default Jenkins authorization method to use the role-based plugin.

For this, go to “Manage Jenkins”, click on “Configure Global Security”, under the “Access Control” section, for the “Authorization”, click on “Role-Based Strategy”.

Manage and Assign Role Options

Now, if we go to “Manage Jenkins”, we will see a “Manage and Assign Roles” entry.
Create a new global role - 
Click on “Manage Roles”, from where we can create global roles that will be applicable to objects in Jenkins. The roles can be “admin”, “developer”, “devops”, etc. To add a global role, enter the role name in the “Role to add” text field and add it. Once added, grant permissions to the role as below. Permissions for agents, jobs, and views are also available. Grant the admin role the full set of permissions so it has full control.
The following are the permissions available to be assigned to your new global role. 
Overall – Administer, ConfigureUpdateCenter, Read, RunScripts, UploadPlugins 
Credentials – Create, Delete, ManageDomains, Update, View 
Agent – Build, Configure, Connect, Create, Delete, Disconnect, Provision 
Job – Build, Cancel, Configure, Create, Delete, Discover, Move, Read, Workspace 
Run – Delete, Replay, Update 
View – Configure, Create, Delete, Read 
SCM – Tag 

Project Roles
Besides global roles, we can also create roles that apply only to certain projects (jobs). For example, we can create a project role “web” which applies only to projects whose names start with “web”. Creating project roles with a matching pattern lets us allow certain users to access only the matching jobs.

Some notable points regarding project roles: the pattern is a regular expression, so “web.*” will match all the Jenkins jobs whose names start with “web”. If you want case-insensitive matching, add “(?i)” to the pattern; for example, “(?i)web.*” will match jobs starting with both “web” and “Web”. Once the project role is added, select the permissions that you want to assign to it. 
Below are the permissions available 
Credentials – Create, Delete, ManageDomains, Update, View 
Job – Build, Cancel, Configure, Create, Delete, Discover, Move, Read, Workspace 
Run – Delete, Replay, Update 
SCM – Tag
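As a quick illustration of how such patterns behave (the job names below are made up, and grep's PCRE mode stands in for the Java regex engine the plugin actually uses):

```shell
# Check some hypothetical job names against a Role Strategy-style pattern.
# The plugin matches the pattern as a regular expression against the full
# job name; grep -P behaves the same way for this simple case.
pattern='(?i)web.*'
for job in web-frontend Web-backend api-service; do
  if echo "$job" | grep -qP "^${pattern}$"; then
    echo "$job matches"
  else
    echo "$job does not match"
  fi
done
```

Here both web-frontend and Web-backend match because of the (?i) flag, while api-service does not.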

Assigning Users to Roles or Groups
After creating roles with permissions, we need to assign the roles to users. Click “Assign Roles” under the “Manage and Assign Roles” link in “Manage Jenkins” and assign each user the appropriate roles based on their responsibilities.
In the Item Roles section, add the users who can access the projects matched by the project roles.
Even within those matched projects, users can only perform certain activities based on the permissions that we assigned for that particular project role.

Saturday, August 26, 2017

NodeJS – Linting

The process of checking source code for programmatic as well as stylistic errors is called linting. Linting tools provide static code analysis: they don’t run the code but inspect it, looking for typos and anti-patterns.
Bug detection and better maintainability are some of the benefits we get from linting tools.

ESLint – ESLint is an open source project whose goal is to provide a pluggable linting utility for JavaScript. It analyses your code and points out any issues it finds: bugs, potential problem areas, poor coding style, and stylistic issues. Best of all, it’s highly configurable. ESLint was chosen as the linting tool for Node.js applications in Cloud automation.

Some notable points about ESLint:
·         ECMAScript is a Standard for scripting languages. Languages like JavaScript are based on the ECMAScript standard.
·         ESLint is a tool for identifying and reporting on patterns found in ECMAScript/JavaScript code, with the goal of making code more consistent and avoiding bugs. 
·         Easily Pluggable and Extensible.
·         Written in JavaScript on Node.js
·         Supports custom reporters
·         Supports ES6 (the new ECMAScript standard)
·         All rules are plugins, more can be added at runtime.
·         Different parsers can be used (Esprima, Espree or Babel are currently compatible).
·         Integrates with editors, build systems, command line tools, and more!
Eslint can be installed either locally or globally.
Install Eslint locally as,
     Install eslint : npm install eslint --save-dev
     Set up a configuration file : ./node_modules/.bin/eslint --init
     Run ESLint on Source code as :  ./node_modules/.bin/eslint yourfile.js

Install Eslint globally as
     Install eslint : npm install -g eslint
     Set up a configuration file :  eslint --init
     Run ESLint on any file or directory :  eslint yourfile.js

Note - eslint --init is intended for setting up and configuring ESLint on a per-project basis and will perform a local installation of ESLint and its plugins in the directory in which it is run. If you prefer using a global installation of ESLint, any plugins used in your configuration must also be installed globally.

Getting started with ESLint –
To get started with ESLint, first create a configuration file using

./node_modules/.bin/eslint --init 

This will present you with options:
  1. Answer a few questions about your coding style; the generated config file will be based on your answers
  2. Use a community-developed style guide, available as a shared ESLint config that can be installed and used
  3. Use the strictest configuration (enabling the most rules)
Follow the steps below to get started with ESLint.
Once the configuration file is created, run a test on the sample code below:

function test() {
  var myVar = 'Hello, World'; // unused variable - ESLint will flag this
}
We will get an ESLint response like the one below.
Shareable configuration files can also be used. We have chosen the “google” configuration file and tweaked its rules according to the project.
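As a sketch of what such a tweaked configuration might look like (the rule overrides below are illustrative, not the project's actual ones, and eslint-config-google must be installed for "extends": "google" to resolve):

```shell
# Write a minimal .eslintrc.json extending the google shareable config
cat > .eslintrc.json <<'EOF'
{
  "extends": "google",
  "env": {
    "node": true,
    "es6": true
  },
  "rules": {
    "no-unused-vars": "warn",
    "max-len": ["error", {"code": 100}]
  }
}
EOF
```

ESLint picks this file up automatically when run from the project directory.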

Docker - Running docker registry locally

There are cases in big organizations where internet access is restricted. With Docker, most containers are created locally from an original image downloaded from the internet. Most of the images developers work on are downloaded from the internet and modified accordingly; the modified images are then saved to a local Docker registry, from which other teams consume them. In this article we will see how we can create our own local Docker registry to push and pull images.

1. Install the docker-distribution rpm using yum
yum install docker-distribution

2. In order to run the Docker registry, specific ports need to be opened. The registry runs on port 5000, so configure firewalld or iptables to open port 5000.

3. Run the docker-registry service
service docker-distribution restart

The docker-distribution configuration file is available at /etc/docker-distribution/registry/config.yml

Don't make any modifications for now; proceed with the existing configuration.

4. Download an image from the online Docker Hub registry.
docker run --name myhello hello-world
hello-world is a minimal Docker image.

5. Once the container image is downloaded and run, we can make the necessary changes to the Docker configuration file to change the registry location.

Make the changes below in the file /etc/sysconfig/docker. Make sure to uncomment the relevant lines and configure the registry as below:

ADD_REGISTRY='--add-registry localhost:5000'
INSECURE_REGISTRY='--insecure-registry localhost:5000'

Once the changes are done, restart the docker engine.

6. Now tag and push the hello-world image to local registry as
[root@ip-10-149-66-36 init.d]# docker tag hello-world localhost:5000/hello-me:latest

[root@ip-10-149-66-36 init.d]# docker push localhost:5000/hello-me:latest
The push refers to a repository [localhost:5000/hello-me]
45761469c965: Pushed
latest: digest: sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f size: 524

We are pushing the hello-world image as hello-me to the local registry. Once the image is pushed, we can check the images with

[root@ip-10-149-66-36 init.d]# docker images
REPOSITORY                TAG     IMAGE ID      CREATED      SIZE
hello-world               latest  1815c82652c0  9 weeks ago  1.84 kB
localhost:5000/hello-me   latest  1815c82652c0  9 weeks ago  1.84 kB

We can see that two images are available: hello-world and localhost:5000/hello-me. Remove both images so that we can download the image again from our local registry.

[root@ip-10-149-66-36 init.d]# docker pull hello-me
Using default tag: latest
Trying to pull repository localhost:5000/hello-me ...
latest: Pulling from localhost:5000/hello-me

Digest: sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f

Once the "docker pull hello-me" command has run, we can see that Docker pulls hello-me from the local registry. Once the pull is done, we can run "docker images" and see that the image is available locally.

This is how we can run a local docker registry to push and pull images.

Friday, August 25, 2017

Jenkins - Sonar Qube Integration

Testing is one of the important ways to identify the various issues that can occur while running code. At the same time, it is necessary to examine code quality before moving code to production.

Static analysis, also called static code analysis, is a method of debugging done by examining the code without executing it. It gives developers a better understanding of the code structure and helps ensure that the code adheres to industry standards. The main advantage of static analysis is that it can reveal errors that would otherwise not surface until months or years of running the application. That said, static analysis is only a first step in a comprehensive software quality-control regime. Sonar is one such tool that provides static code analysis.

Sonar is an open source web-based application to manage code quality which covers seven axes of code quality: architecture and design, comments, duplications, unit tests, complexity, potential bugs, and coding rules. It is developed in Java and can cover projects in Java, Flex, PHP, PL/SQL, and Visual Basic 6. It is very easy to navigate, offers visual reporting, and lets you follow and combine the evolution of your project's metrics.

In this article we will see how we can install the Sonar tool and use it.

1. Download SonarQube from here

2. Extract the tar file to /opt/sonarqube.
Once extracted, move to /opt/sonarqube/bin/linux-x86-64 and run “./sonar.sh start”.

That’s all that is needed to start SonarQube. Access the SonarQube console at “localhost:9000”, where we can see the web console as below,

The default credentials for login are admin / admin.

3. In the Jenkins server, install the SonarQube plugin using Manage Plugins. Configure SonarQube under “Configure System” as
Since this is a community version, we don't need to add any credential details.

4. Download the sonar-runner on the slave machine and extract it.

The sonar-runner is the tool that actually scans the source code for issues and passes the results to the SonarQube server for display on its web page.

5. Now go to the “Global Tool Configuration” section and configure the sonar-runner under SonarQube Scanner.

6. Create a Maven job and, as a pre-build step, choose the “Execute SonarQube Scanner” option.
Fill in the details as above. In the path to the project properties, pass the location where the analysis properties file exists in the project source code. Naturally this will be at the root level.

We can either add the file to the source code or paste the properties into the Analysis Properties field.
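If you add the file to the source code, a minimal sonar-project.properties might look like the following sketch (the keys are standard SonarQube analysis parameters; the values are placeholders for your project):

```properties
# Placeholders - replace with your project's real identifiers
sonar.projectKey=my-app
sonar.projectName=My App
sonar.projectVersion=1.0
# Directory containing the sources to analyse, relative to this file
sonar.sources=src
```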

In the Maven build arguments, pass the "mvn clean install" goals and see the results in the SonarQube web application as below,
More to come , Happy learning :-)

Docker - Orchestrating Containers using Swarm

Docker Swarm is a clustering and scheduling tool for Docker containers. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system. Swarm mode also exists natively in Docker Engine, the layer between the OS and container images; Docker Engine 1.12 integrates the orchestration capabilities of Docker Swarm into the engine itself. In this article, we will see how we can create a swarm using Docker containers, but before that let's understand some of the terminology of Docker swarm mode.

Why do we want a Container Orchestration System?
Imagine that you had to run hundreds of containers. You can easily see that if they are running in a distributed mode, there are multiple features you will need from a management angle to make sure that the cluster is up and running, healthy, and more.

Some of these necessary features include:
  • Health Checks on the Containers
  • Launching a fixed set of Containers for a particular Docker image
  • Scaling the number of Containers up and down depending on the load
  • Performing rolling update of software across containers
Clustering - Clustering is an important feature for container technology because it creates a cooperative group of systems that can provide redundancy, enabling Docker Swarm failover if one or more nodes experience an outage.

What does Swarm provide - A Docker Swarm cluster also gives administrators and developers the ability to add or remove container instances as computing demands change.

Swarm manager - An IT administrator controls Swarm through a swarm manager, which orchestrates and schedules containers. The swarm manager allows a user to create a primary manager instance and multiple replica instances in case the primary instance fails. In Docker Engine's swarm mode, the user can deploy manager and worker nodes at runtime. This enables multiple machines running Docker Engine to participate in a cluster, called a swarm. The Docker engines contributing to a swarm are said to be running in swarm mode.

Machines enter into the Swarm mode by either initializing a new swarm or by joining an existing swarm.

Manager node - The manager node performs cluster management and orchestration while the worker nodes perform tasks allocated by the manager.

Node - A node is an instance of the Docker engine participating in the swarm. A node can be either a manager node or a worker node. A manager node itself, unless configured otherwise, is also a worker node.

Service - The central entity in the Docker Swarm infrastructure is called a service. A Docker swarm executes services. The user submits a service to the manager node to deploy and execute.

Task - A service is made up of many tasks. A task is the most basic work unit in a swarm. Tasks are allocated to worker nodes by the manager node.

Services can be scaled at runtime to handle extra load. The swarm manager natively supports internal load balancing to distribute tasks across the participating worker nodes, as well as ingress load balancing to control exposure of Docker services to the external world. The manager node also supports service discovery by automatically assigning a DNS entry to every service.
Let's create a swarm and see how things work.

1. Create a swarm manager. Obtain your local host address so that the swarm can be initialized with it.

[root@ 10-149-66-36]# docker swarm init --advertise-addr
Swarm initialized: current node (4f2j4n02r0p8bs4mcu65h9dt7) is now a manager.

To add a worker to this swarm, run the following command: 
    docker swarm join \
    --token SWMTKN-1-0rxc91z9zbyg9pevtpr1s3f2jdpkhuwcgcbn1m7i4x15ku9y6f-3xn94z48fsjr0tbu1y6vzvv5v \

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The swarm manager is now started. Once the swarm is initialized, run the "docker info" command to confirm that swarm mode is active.

docker info
Swarm: active
 NodeID: 4f2j4n02r0p8bs4mcu65h9dt7
 Is Manager: true
 ClusterID: 7vdyrotpbxp4peonia1xp6hby
 Managers: 1
 Nodes: 1
  Task History Retention Limit: 5
  Snapshot Interval: 10000
  Heartbeat Tick: 1
  Election Tick: 3
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address:

Now that the swarm is initialized, we need to add worker nodes to it so that services deployed to the swarm can have their containers run on those worker nodes.

2. Go to the remote machine and run the swarm join command (make sure Docker Engine is installed on the remote machine).

[root@ip-10-149-66-123]# hostname -I

[root@ip-10-149-66-123 centos]# docker swarm join --token SWMTKN-1-0rxc91z9zbyg9pevtpr1s3f2jdpkhuwcgcbn1m7i4x15ku9y6f-3xn94z48fsjr0tbu1y6vzvv5v
This node joined a swarm as a worker.

Once the above command ran successfully, the worker node is joined to the manager node.

3. Once the swarm is initialized and the worker node is added, we need to confirm the details. On the manager node, run the "docker node ls" command as

[root@ip-10-149-66-36 yum.repos.d]# docker node ls
ID               HOSTNAME          STATUS  AVAILABILITY  MANAGER STATUS
4f2j4n02r0p8b *  ip-10-149-66-36   Ready   Active        Leader
6p440yuk3g44k    ip-10-149-66-123  Ready   Active

From the above output we can see that node "4f2j4n02r0p8b" (this machine) is the manager, or leader, and node "6p440yuk3g44k" (ip-10-149-66-123) has been added as a worker node.

4. Deploy a Service
Now let's deploy a ping service.

[root@ip-10-149-66-36]# docker service create --replicas 1 --name helloworld alpine ping

A service with the name "helloworld" is deployed. To check whether the service is running, use the "docker service ls" command as below

[root@ip-10-149-66-36 yum.repos.d]# docker service ls
ID               NAME        REPLICAS  IMAGE   COMMAND
e00irbaijlk9  helloworld  0/1          alpine     ping

To get more details about the service, use the "docker service inspect" command with the service ID as

[root@ip-10-149-66-36 yum.repos.d]# docker service inspect --pretty e00irbaijlk9
ID:             e00irbaijlk9n4h6yz1219mz2
Name:           helloworld
Mode:           Replicated
 Replicas:      1
 Parallelism:   1
 On failure:    pause
 Image:         alpine
 Args:          ping

Now let's check the status of the service using the "docker service ps" command as

[root@ip-10-149-66-36 yum.repos.d]# docker service ps e00irbaijlk9
ID       NAME           IMAGE   NODE            DESIRED STATE  CURRENT STATE           ERROR
9s7*   helloworld.1  alpine    ip-10-149-66-123 Running        Running 43 seconds ago

In this case, the one instance of the helloworld service is running on the worker node. You may also see the service running on your manager node; by default, manager nodes in a swarm can execute tasks just like worker nodes.

Check the same details on the worker node as

[root@ip-10-149-66-123 centos]# docker ps
CONTAINER ID     IMAGE         COMMAND             CREATED            STATUS      PORTS       NAMES 
a827c2d976d5     alpine:latest "ping"   2 minutes ago       Up 2 minutes                            helloworld.1.9s7cyf913h6dsneh41cgsfy7i

Now the swarm manager is up and running, the worker node has been added to the swarm, and a service named "helloworld" is deployed; we can see that it is running on the worker node.

5. Scale the Service
Once you have deployed a service to a swarm, you can use the Docker CLI to scale the number of containers in the service. Containers running in a service are called "tasks."

[root@ip-10-149-66-36]# docker service scale e00irbaijlk9=3
e00irbaijlk9 scaled to 3

Now the service is scaled from 1 container to 3 containers, which run on either the manager node or the worker node. Although the swarm manager is a manager node, it can still run containers itself. The details of the running containers can be checked using,

[root@ip-10-149-66-36 yum.repos.d]# docker service ps helloworld
ID                         NAME          IMAGE   NODE              DESIRED STATE  CURRENT STATE            ERROR
9s7cyf913h6dsneh41cgsfy7i  helloworld.1  alpine  ip-10-149-66-123  Running        Running 3 minutes ago
48bj29pr4dzvtm6odt72brp4c  helloworld.2  alpine  ip-10-149-66-36   Running        Starting 46 seconds ago
bapvp1km9bwrof66ujup79eld  helloworld.3  alpine  ip-10-149-66-36   Running        Starting 46 seconds ago

From the above output we can see that the service is up and running in 3 containers: two on the manager node and one on the worker node.
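As a side note, the same scaled service can be described declaratively and deployed with "docker stack deploy" on a manager node. The following is only a sketch: it assumes an engine with Compose v3 support, and the ping target shown is an example value.

```yaml
version: "3"
services:
  helloworld:
    image: alpine
    # example target for the ping command; substitute your own
    command: ping docker.com
    deploy:
      replicas: 3
```

Running "docker stack deploy -c helloworld.yml hello" on the manager would create the same three replicas that the scale step produced.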

6. Delete the Service
A service can be deleted using the "docker service rm" command. Run this on the manager node as
[root@ip-10-149-66-36 yum.repos.d]# docker service rm helloworld

[root@ip-10-149-66-36 yum.repos.d]# docker service inspect helloworld
Error: no such service: helloworld

At the same time, running "docker ps" on the worker node will no longer show any service containers.

This is a small introduction to Docker Swarm and how to implement it. More to come. Happy learning :-) 