Thursday, August 10, 2017

BlazeMeter integration with Jenkins

BlazeMeter provides an enterprise performance-testing and capacity-planning SaaS that supports multiple types of application platforms, ranging from web apps and websites to mobile apps and web services.

Some of the notable features are
1. 100% Apache JMeter compatible, so you can run any JMeter script or use any JMeter plugin.
2. Easily simulate up to 100,000 concurrent users.
3. Create load from both the public cloud and from inside your own firewall.

BlazeMeter provides not just server-side data but also client-side details such as
1. The number of virtual users hitting the service
2. Real client-side response time, latency and throughput
3. Detailed error and response messages from the client side

BlazeMeter provides a plugin that can be integrated into our Jenkins platform to run automated load tests as part of a CI/CD pipeline. Follow the steps below to install and configure the BlazeMeter plugin.

1. Install the BlazeMeter plugin from “Jenkins -> Manage Plugins”.

2. Configure the BlazeMeter cloud location in “Jenkins -> Configure System”.
3. Create a credential to connect to the BlazeMeter cloud.
Enter a description, and for the API key log in to “blazemeter.com” and go to Personal Settings, where the current key is shown under “API Key”.
Copy the key and paste it into the API Key section of the credential box. Click “Test BlazeMeter API key” to verify.

Note – somehow the newly generated keys do not pass verification from Jenkins; use the legacy key.

Save once the verification is done.

4. Now create a job and, under “Add Build Step”, select BlazeMeter. In the BlazeMeter configuration, select the API key, which will display all the BlazeMeter tests available.

We can save the JTL and JUnit reports to a location for further processing.

5. Run the job; it will take time depending on the duration configured for the BlazeMeter test.
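
The same test can also be triggered from a Jenkins Pipeline job. The sketch below is illustrative only: it assumes the plugin exposes a blazeMeterTest pipeline step, and the credential ID and test ID are placeholders (use the Pipeline Syntax snippet generator to get the exact step and parameters for your plugin version).

pipeline {
    agent any
    stages {
        stage('load-test') {
            steps {
                // placeholder IDs - pick the real values from the snippet generator
                blazeMeterTest credentialsId: 'blazemeter-api-key',
                               testId: 'my-blazemeter-test-id'
            }
        }
    }
}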

Understanding Jenkins Pipeline

One of the added features of Jenkins 2 is the introduction of Jenkins Pipeline. A Jenkins Pipeline allows you to define an entire application life cycle as code. This enables users to implement a project's entire build/test/deploy pipeline in a Jenkinsfile that is stored alongside the code.

The default interaction model with Jenkins is to use the web UI to manually create jobs, configure plugins and fill in the necessary details. A manual effort is needed to create and manage jobs. In addition, this model keeps the configuration of the build/test/deploy job separate from the actual code being built, tested and deployed.

With the introduction of the “Pipeline” plugin, users can now configure their project's entire build/test/deploy phases in a pipeline file called a “Jenkinsfile”, stored alongside their original code. The pipeline file is treated as code committed into source control. Jenkins can look for the Jenkinsfile in the repositories and use that code to build, test and deploy.

The Pipeline plugin was inspired by the Build Flow plugin that was used to create build pipelines in previous versions of Jenkins. Some of the features of Jenkins Pipeline are
1. A job can be suspended and resumed in between
2. The pipeline file can be committed to a repository, checked out and executed as the build flow

One of the huge benefits of using a Pipeline is that the job itself is durable and will survive planned or even unplanned restarts of the Jenkins master.

Pipeline Vocabulary
Each pipeline generally consists of three things: stages, steps and nodes.

A step can be thought of as a “build step”, a single task that we want Jenkins to execute.

A node is a special step that schedules the contained steps to run by adding them to the Jenkins build queue; as a bonus, a node leverages the Jenkins distributed build system.


Stages are for setting up logical divisions within pipelines. The Pipeline Visualization plugin available in Jenkins will display each stage as a separate segment. Using stages, we can define each phase with a name such as “dev”, “test” etc. A stage will look like the fragment below.
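
Here is a minimal illustrative fragment; the stage name “dev” and its echo step are placeholders.

stage('dev') {
    steps {
        echo 'running the dev phase'    // placeholder step
    }
}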

A sample Jenkinsfile looks like this:

pipeline {
    agent {
        node {
            label 'linux-slave'
        }
    }

    stages {
        stage('check-out') {
            steps {
                git 'git@github-isl-01.ca.com:manja17/funniest.git'
            }
        }

        stage('lint') {
            steps {
                echo "linting Success, Generating Report"
            }
        }
    }
}

In the above snippet, we create a pipeline script which is included in the Jenkinsfile. The script performs the steps below:
1.    Run the job on the slave node named “linux-slave”, thus using the Jenkins distributed build system.
2.    It has two stages, “check-out” and “lint”. In “check-out” we check out the code from the GitHub repository; in “lint” we just run an echo command.
A more extended version of the pipeline script can be written to cover all build, test and deploy phases, as sketched below.
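
As an illustration only (the shell scripts named here are hypothetical placeholders), such a pipeline might look like:

pipeline {
    agent {
        node {
            label 'linux-slave'
        }
    }

    stages {
        stage('build') {
            steps {
                sh './build.sh'         // hypothetical build script
            }
        }
        stage('test') {
            steps {
                sh './run-tests.sh'     // hypothetical test script
            }
        }
        stage('deploy') {
            steps {
                sh './deploy.sh'        // hypothetical deploy script
            }
        }
    }
}
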
Using Scripting – Jenkins Pipeline supports executing shell or batch scripts just like freestyle jobs, for example:
stage('lint') {
    steps {
        sh 'sleep 10'           // on *nix agents
        // bat 'timeout /t 10'  // use bat instead of sh on Windows agents
    }
}

Tool Installation – Jenkins Pipeline has the core capability of adding tools or installing them whenever necessary. We need to define the pipeline script as
def mvnHome = tool 'M3'
sh "${mvnHome}/bin/mvn -B verify"

Variables – The env global variable allows accessing environment variables available on the node, for example
echo env.PATH
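
A small declarative sketch showing the same idea (APP_ENV here is a hypothetical variable added just for illustration):

pipeline {
    agent any
    environment {
        APP_ENV = 'dev'   // hypothetical variable for illustration
    }
    stages {
        stage('env-demo') {
            steps {
                echo "PATH on this node is ${env.PATH}"
                echo "APP_ENV is ${env.APP_ENV}"
            }
        }
    }
}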

Integration with Plugins – The pipeline file provides support for invoking plugin steps in a declarative way, for example:
steps {
    /* `make check` returns non-zero on test failures,
     * using `true` to allow the Pipeline to continue nonetheless
     */
    sh 'make check || true'
    junit '**/target/*.xml'
}

In the above snippet we run the tests and use the JUnit plugin to display the results.
As another example, we can run the tests in Python and publish the report in HTML using the publishHTML plugin:

steps {
    sh "pytest --cov ./ --cov-report html --verbose"
    publishHTML(target:
        [allowMissing: false,
         alwaysLinkToLastBuild: false,
         keepAll: false,
         reportDir: 'htmlcov',
         reportFiles: 'index.html',
         reportName: 'Test Report',
         reportTitles: ''])

    echo "Testing Success"
}

An email can be sent using the Email Extension plugin as
stage('mail'){
        steps{
            emailext attachLog: true, body: 'Jenkins Build - Status Report', subject: 'Build Report', to: 'jagadesh.manchala@gmail.com'
        }
    }  
Parallel execution – Jenkins Pipeline has built-in functionality for executing portions of a Scripted Pipeline in parallel:

parallel 'task1': {}, 'task2': {}
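
A slightly fuller sketch in a scripted pipeline (the branch names and echo steps are placeholders; the 'linux-slave' label is reused from the earlier example):

node('linux-slave') {
    stage('parallel-tests') {
        parallel(
            'unit-tests': {
                echo 'running unit tests'            // placeholder step
            },
            'integration-tests': {
                echo 'running integration tests'     // placeholder step
            }
        )
    }
}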

Handling Approvals – Jenkins Pipeline (formerly Workflow) supports approvals, manual or automated, through the input step:

input 'Are you Sure?'

Once the job is run, Jenkins pauses after the check-out stage and shows the question, waiting for someone to proceed or abort.
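
In a declarative pipeline the same prompt can sit inside a stage; a minimal sketch (the stage name is a placeholder) looks like:

stage('approve-deploy') {
    steps {
        // the build pauses here until a user clicks "Proceed" or "Abort" in the Jenkins UI
        input 'Are you Sure?'
    }
}
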
Waiting – A waiting feature is also available through “waitUntil”:
waitUntil {
    // the block must return a boolean; retry until input.sh exits with status 0
    sh(script: './input.sh', returnStatus: true) == 0
}

waitUntil repeatedly runs the block until it returns true, so this waits until the script input.sh succeeds (returns a zero exit status). This can be used while waiting for automated test cases that generally take some time to complete.

Since Jenkins Pipeline is based on Groovy scripting, most Groovy features are available in the Jenkinsfile, such as exception handling, timing, stashing etc.
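
For instance, a scripted sketch combining exception handling with a timeout (run-tests.sh is a hypothetical script):

node {
    timeout(time: 10, unit: 'MINUTES') {         // abort this block if it runs too long
        try {
            sh './run-tests.sh'                  // hypothetical test script
        } catch (err) {
            echo "Tests failed: ${err}"
            currentBuild.result = 'UNSTABLE'     // mark the build unstable instead of failing
        }
    }
}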

More to come, Happy Learning :-)


Jenkins – Integration with Active Directory

Install the Active Directory plugin from Jenkins -> Manage Jenkins -> Manage Plugins.
Now configure the Active Directory plugin from Jenkins -> Manage Jenkins -> Configure Global Security.

Under the Access Control – Security Realm section, choose Active Directory. Once you select Active Directory, click the “Add Domain” button.
Enter the details -
Domain Name – nova.com
Domain Controller – “<ServerName>:3268” (enter the server where the Active Directory server runs and the port on which it listens)
Bind DN – enter a user name. The bind DN is basically the credential used to authenticate against LDAP; it usually comes with an associated password.
Bind Password – enter the password for the above user ID

Once the domain is configured, run the test; once it succeeds, we can log out and log back in using the Active Directory user name and password.

Note – the first login may take some time. Configure the cache in the same Active Directory plugin settings to speed this up.

More to come, Happy Learning

Wednesday, February 22, 2017

Docker compose



Docker Compose is an orchestration tool for Docker that allows you to define a set of containers and their interdependencies in the form of a YAML file. Once the YAML file is created, we can use the docker-compose commands to create the whole application stack. Besides this, we can track the application output and various other things.

docker-compose is a tool for defining and running multi-container applications. We create a compose file to configure our application's services; then, with a single command, we build all the images the containers need and start the containers. We then just access the application running inside the container.

Using Compose is basically a three-step process.
  1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
  2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  3. Lastly, run docker-compose up and Compose will start and run your entire app.

1. Let's write a basic example of running a Java application inside a container. Below is the directory structure of my first docker-compose example

[puppet@root$:/test/docker]$  tree docker-compose/
docker-compose/
├── compose
│   ├── docker-compose.yml
│   ├── ping
│   └── PingPong
│       ├── Dockerfile
│       └── PingPong.java
└── PingPong

2. Let's write our Java code for the PingPong.java class

[puppet@root$]$  cat PingPong.java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

public class PingPong {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(19090), 0);
        server.createContext("/ping", new MyHandler());
        server.setExecutor(null);
        server.start();
    }

    static class MyHandler implements HttpHandler {
        @Override
        public void handle(HttpExchange t) throws IOException {
            String response = "pong\n";
            t.sendResponseHeaders(200, response.length());
            OutputStream os = t.getResponseBody();
            os.write(response.getBytes());
            os.close();
        }
    }
}

The above Java code is a simple HTTP listener that responds to requests made on port 19090. When the user sends a request to the “/ping” path, the code responds with “pong”.

3. Besides the PingPong.java class, we have a Dockerfile with the contents,

[puppet@root$]$  cat Dockerfile
FROM java:8
MAINTAINER Jagadish Manchala <jagadishm@example.com>
WORKDIR /
COPY PingPong.java /
RUN javac PingPong.java
EXPOSE 19090
CMD ["java","PingPong"]

The instructions are self-explanatory. We are using a Java 8 based container. The working directory is / and we copy PingPong.java into that container location “/”. We then compile the Java class and run the code, exposing port 19090.

4. Now come out of the directory, and we have the compose file as,
[puppet@root$]$  cat docker-compose.yml
version: '2'
services:
 PingPong:
          build: ./PingPong
          image: pingpong:1
          ports:
          - "5000:19090"
          volumes:
          - ../PingPong:/code

We start off with the line “version: '2'”, which tells Docker Compose we are using the newer compose file syntax. We define a single service called PingPong, which runs from an image called pingpong:1. It exposes port 5000 on the Docker host, which maps to port 19090 inside the container.

The line “build: ./PingPong” instructs Docker Compose to enter the compose/PingPong directory, run a docker build there, and tag the resulting image as pingpong:1.

Without that line, running “docker-compose up” would fail with a complaint that pingpong:1 could not be found, because Compose would look for an image with that name on Docker Hub, and we haven't created it yet. The build instruction is the recipe that creates this container image locally.

Now run the docker-compose up --force-recreate command, and we see the output below,

[puppet@root$:/test/docker/docker-compose/compose]$  docker-compose up --force-recreate
Creating network "compose_default" with the default driver
Building PingPong
Step 1 : FROM java:8
---> d23bdf5b1b1b
Step 2 : MAINTAINER Jagadish Manchala <jagadishm@example.com>
---> Running in 278b9e78e8ac
---> db3eeedd3862
Removing intermediate container 278b9e78e8ac
Step 3 : WORKDIR /
---> Running in cbfa16414694
---> 2ee46f20f539
Removing intermediate container cbfa16414694
Step 4 : COPY PingPong.java /
---> dc039d682eca
Removing intermediate container 2771d925c2d9
Step 5 : RUN javac PingPong.java
---> Running in 792a5fbfdc57
---> 3e9a9ddac467
Removing intermediate container 792a5fbfdc57
Step 6 : EXPOSE 19090
---> Running in 2efa5017a79e
---> 8164eb703ff4
Removing intermediate container 2efa5017a79e
Step 7 : CMD java PingPong
---> Running in ef7347c01dee
---> bd15166ab1a7
Removing intermediate container ef7347c01dee
Successfully built bd15166ab1a7
WARNING: Image for service PingPong was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating compose_PingPong_1
Attaching to compose_PingPong_1

In the above output, the docker-compose command stays attached after the “Attaching to compose_PingPong_1” line. At this point, if we list the Docker images we can see a pingpong image created, and when we run “docker ps” we can see the container running as below,

[puppet@root$:/test/docker/docker-compose/compose]$  docker ps
CONTAINER ID   IMAGE        COMMAND           CREATED              STATUS              PORTS                     NAMES
078f287da46c   pingpong:1   "java PingPong"   About a minute ago   Up About a minute   0.0.0.0:5000->19090/tcp   compose_PingPong_1

Now we can test the Java Code running inside the Container using

[puppet@root$:/test/docker/docker-compose/compose]$  wget http://localhost:5000/ping
--2017-02-21 08:13:22--  http://localhost:5000/ping
Resolving localhost (localhost)... ::1
Connecting to localhost (localhost)|::1|:5000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5
Saving to: ‘ping’

100%[============================================================================================================>] 5        --.-K/s   in 0s          

2017-02-21 08:13:22 (1.52 MB/s) - ‘ping’ saved [5/5]

Or we can open a browser and go to the URL “http://localhost:5000/ping”.

To stop the application, simply type Ctrl-C at the terminal prompt and Docker Compose will stop the container and exit. You can go ahead and change the code in the PingPong directory, add new code or modify existing code, and test it out using “docker-compose up” again.

To run it in the background: docker-compose up -d
To tail the container's standard output: docker-compose logs -f
To stop and remove the containers and volumes: docker-compose down -v
To force recreate: docker-compose up --force-recreate