
Thursday, March 21, 2019

Anchore - Container Image Scan Engine

The Anchore Engine is an open source project that inspects, analyzes and certifies Docker images. Anchore performs static analysis on container images and applies user-defined acceptance policies to allow automated container image validation and certification. We can use Anchore to gain deep insight into the OS and non-OS packages contained in an image, and also to build governance around the artifact and its contents via customizable policies.

Anchore analysis tools inspect your container image and generate a detailed manifest of the image, a virtual 'bill of materials' that includes official operating system packages, unofficial packages, configuration files, and language modules and artifacts such as NPM, Pip, Gem, and Java archives.

The Anchore Engine is available as a Docker image that can run standalone or on an orchestration platform such as Kubernetes. Anchore is also available as a Jenkins plugin, allowing you to integrate container image scanning into the CI/CD workflow. Identified packages are matched against vulnerability data from the Anchore hosted feed service, and the details are provided after the scan.

Installing Anchore using Docker Compose - We will run Anchore on a Docker host using Docker Compose as shown below:
jagadishAvailable$Thu Mar 21@ mkdir anchore
jagadishAvailable$Thu Mar 21@ cd anchore
jagadishAvailable$Thu Mar 21@ curl https://raw.githubusercontent.com/anchore/anchore-engine/master/scripts/docker-compose/docker-compose.yaml > docker-compose.yaml
jagadishAvailable$Thu Mar 21@ mkdir config
jagadishAvailable$Thu Mar 21@ curl https://raw.githubusercontent.com/anchore/anchore-engine/master/scripts/docker-compose/config.yaml > config/config.yaml
jagadishAvailable$Thu Mar 21@ mkdir db
jagadishAvailable$Thu Mar 21@ docker-compose up -d
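
The commands that follow use the anchore-cli client, which is not included in the compose stack above. A minimal sketch of installing it on the host with pip (the PyPI package is named anchorecli; adjust for your Python setup):

# Install the Anchore CLI client on the host; Python and pip are assumed to be available
pip install --user anchorecli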

Confirm that the Anchore containers are up and running:
jagadishAvailable$Tue Mar 21@ anchore-cli --u admin --p foobar --url http://127.0.0.1:8228/v1 system status

Service analyzer (dockerhostid-anchore-engine, http://anchore-engine:8084): up
Service policy_engine (dockerhostid-anchore-engine, http://anchore-engine:8087): up
Service catalog (dockerhostid-anchore-engine, http://anchore-engine:8082): up
Service apiext (dockerhostid-anchore-engine, http://anchore-engine:8228): up
Service kubernetes_webhook (dockerhostid-anchore-engine, http://anchore-engine:8338): up
Service simplequeue (dockerhostid-anchore-engine, http://anchore-engine:8083): up

Engine DB Version: 0.0.9
Engine Code Version: 0.3.3


Once started, the Anchore Engine listens on localhost port 8228. In order to scan an image, we first need to push the image to a public registry. In this case I have created an image named scanimage and pushed it to my Docker Hub account. Once the image is pushed, we add it to the local engine using the following command:

jagadishAvailable$Tue Mar 21@ anchore-cli --u admin --p foobar --url http://127.0.0.1:8228/v1 image add docker.io/jagadesh1982/scanimage:latest

Image Digest: sha256:f38bfbd1722ea6131a6cacf9c176bbe3ea9048f1079df304691b592a463c1676
Parent Digest: sha256:f38bfbd1722ea6131a6cacf9c176bbe3ea9048f1079df304691b592a463c1676
Analysis Status: not_analyzed
Image Type: docker
Image ID: b9e49720be226946983101f5e59546548f0a92dff31462ce7856c461065c47ef
Dockerfile Mode: None
Distro: None
Distro Version: None
Size: None
Architecture: None
Layer Count: None

Full Tag: docker.io/jagadesh1982/scanimage:latest


The image will be pulled and then queued for analysis by the Anchore Engine. The credentials "admin" and "foobar" are the username and password configured for the engine.

Check the status of the scan using,
jagadishdocker-github$Tue Mar 19@ anchore-cli --u admin --p foobar --url http://127.0.0.1:8228/v1 image list

Full Tag                 Image ID                  Analysis Status
scanimage:latest         b9e49720be226946983101    analyzing


We can see that the image is still in the analyzing state in the output above. Run the same command until the status changes to analyzed, or use the wait subcommand shown below.
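
Depending on the anchore-cli version, an image wait subcommand may also be available that blocks until analysis completes; a minimal sketch, assuming it is present (same credentials and endpoint as above):

anchore-cli --u admin --p foobar --url http://127.0.0.1:8228/v1 image wait docker.io/jagadesh1982/scanimage:latest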

Get the image details once the status changes to analyzed:
jagadishdocker-github$Tue Mar 21@ anchore-cli --u admin --p foobar --url http://127.0.0.1:8228/v1 image get docker.io/jagadesh1982/scanimage:latest

Obtain the results of the vulnerability scan for operating system packages in the analyzed image:
jagadishdocker-github$Tue Mar 21@ anchore-cli --u admin --p foobar --url http://127.0.0.1:8228/v1 image vuln docker.io/jagadesh1982/scanimage:latest os


List operating system packages present in an image:
jagadishdocker-github$Tue Mar 21@ anchore-cli --u admin --p foobar --url http://127.0.0.1:8228/v1 image content docker.io/jagadesh1982/scanimage:latest os

os: available
files: available
npm: available
gem: available
python: available
java: available
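
Since policy-based certification was mentioned earlier, the analyzed image can also be evaluated against the active policy bundle; a minimal sketch using the evaluate subcommand (same credentials and endpoint as above):

anchore-cli --u admin --p foobar --url http://127.0.0.1:8228/v1 evaluate check docker.io/jagadesh1982/scanimage:latest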


This is an introduction to the Anchore Engine. Hope this helps in getting started.

Tuesday, March 19, 2019

Container Security - Capabilities

The most common security adjustment we make to containers is capability dropping, a technique in which a privileged process revokes a subset of the privileges it is endowed with.

Root is the most powerful user: it has access to everything and can do anything to anything. Running processes inside a container as root is therefore always risky, and it would be better if we had a way to limit the power of the root user. This is where capabilities come into the picture. The Linux kernel can break down the privileges of the root user into units called capabilities. For example, CAP_CHOWN allows a process to make arbitrary changes to file UIDs and GIDs, while CAP_DAC_OVERRIDE allows it to bypass kernel permission checks on file read, write and execute operations. Almost all powers of the root user can be broken down into individual capabilities. Breaking privileges down this way lets us:
  1. Remove individual capabilities from the root account, making it less powerful
  2. Add capabilities to non-root users at a very granular level
There are file-based and thread-based capabilities. File capabilities allow users to execute programs with elevated privileges, while thread capabilities track the current state of capabilities in running programs. Capabilities can be managed in three ways:
  1. Run containers as root with a large set of capabilities and try to manage capabilities within your container manually.
  2. Run containers as root with limited capabilities and never change them within a container.
  3. Run containers as an unprivileged user with no capabilities.
Option 2 is usually the most realistic, option 3 is the ideal, and option 1 should be avoided wherever possible.

To drop a capability from the root account of a container, start the container with: docker run --rm -it --cap-drop <capability> alpine sh

To add a capability to the root account of the container, start the container with: docker run --rm -it --cap-add <capability> alpine sh
To drop all capabilities and then add back individual capabilities to the root account of the container, use: docker run --rm -it --cap-drop ALL --cap-add <capability> alpine sh

Capabilities in the Linux kernel are prefixed with "CAP_", for example CAP_CHOWN and CAP_SETUID. Docker capabilities drop the "CAP_" prefix and are written as plain strings such as "chown"; Docker takes care of mapping them to the kernel capabilities.

Let's see some examples of how capabilities work with Docker.
Start a new container and confirm that the container's root account can change the ownership of a file:
jagadishAvailable$Tue Mar 19@ docker run --rm -it alpine chown nobody /

The command returns no output, indicating that the root account can change the ownership of files. We did not add any capabilities, but the root user comes with a default capability set that includes chown (CAP_CHOWN).

Start a new container, drop all capabilities, add back only the chown capability and confirm that the root account in the container can still change the ownership of files:
jagadishAvailable$Tue Mar 19@ docker run --rm -it --cap-drop ALL --cap-add CHOWN alpine chown nobody /

This also returns no output, indicating that the command ran successfully.

Finally, let's run a container with the chown capability dropped from the root account and check whether it can change the ownership of files:
jagadishAvailable$Tue Mar 19@ docker run --rm -it --cap-drop CHOWN alpine chown nobody /
chown: /: Operation not permitted

This throws an error saying that the chown operation is not permitted, which confirms that the chown capability has been dropped.

pscap - By now we know how to add and remove capabilities, but we also need a way to check which capabilities are currently set. Linux provides a tool called pscap, available in the libcap-ng-utils package, which reports the capabilities of running processes, as sketched below.
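
A minimal sketch of installing and using pscap (package and tool names as shipped with libcap-ng; the exact output columns vary by version):

# Install the libcap-ng utilities (package name may differ slightly by distro)
sudo apt-get install -y libcap-ng-utils    # Debian/Ubuntu
# sudo yum install -y libcap-ng-utils      # CentOS/RHEL

# List running processes along with the capabilities they currently hold
pscap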

Container Security - AppArmor

AppArmor is a Linux kernel security module that can be used to restrict the capabilities of processes running on the host operating system. AppArmor is similar to SELinux, which is used by default on Red Hat Enterprise Linux and CentOS; AppArmor is used by default on Ubuntu. Both AppArmor and SELinux provide mandatory access control (MAC) security. In effect, AppArmor lets you restrict what actions processes can take.

Each process can have its own security profile. The profile defines rules that allow or deny specific actions the process can perform, ranging from network access to file read/write/execute permissions.

Note - as mentioned above, AppArmor is the default mandatory access control system on Ubuntu, so it may not be available on CentOS or RHEL flavored operating systems.

On an Ubuntu machine, check whether AppArmor is available by running "sudo apparmor_status". This shows the status of AppArmor; if it says "apparmor module is loaded", AppArmor profiles are available.

Docker automatically generates and loads a default profile for containers named docker-default. The Docker daemon generates this profile and loads it when a container starts.
Let's run a container that uses the docker-default profile by default:
root@spinnaker-machine:/home/vagrant# docker run -it --rm centos bash -i
[root@69fe95789e43 /]# cat /proc/sysrq-trigger
cat: /proc/sysrq-trigger: Permission denied
[root@69fe95789e43 /]# exit
exit


Here we started a CentOS container and tried to read the /proc/sysrq-trigger file. We get a permission denied error because the default profile denies access to that path.

Custom profile - As discussed, custom profiles can also be applied to a Docker container. A custom profile is selected at container start or run time by passing the --security-opt apparmor=<profile-name> argument, which overrides the docker-default policy. Let's create a custom profile with the example below.

root@spinnaker-machine:/home/vagrant# cat > /etc/apparmor.d/no_raw_net <<EOF
#include <tunables/global>

profile no-ping flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  network inet tcp,
  network inet udp,
  network inet icmp,

  deny network raw,
  deny network packet,

  file,
  mount,
}
EOF


I'm not going to go deep into how to write profiles, but the profile above denies raw network access, which prevents ping from working; I named it no-ping. Once the profile is written, load it into AppArmor using the command below:
root@spinnaker-machine:/home/vagrant# /sbin/apparmor_parser --replace --write-cache /etc/apparmor.d/no_raw_net

Once the profile is loaded without issues, we can use it by starting a container with:
root@spinnaker-machine:/home/vagrant# docker run --rm -i --security-opt apparmor=no-ping centos ping -c3 8.8.8.8
ping: socket: Permission denied

We started a container with the no-ping profile and tried to ping an IP address. The container throws the error "ping: socket: Permission denied", which shows that our profile is working. If we need to check that the correct profile is loaded, we can run the container first and then inspect the /proc filesystem as below:
root@spinnaker-machine:/home/vagrant# docker exec 502bac2749f3 cat /proc/1/attr/current
no-ping (enforce)


We can see that the no-ping profile is loaded (and enforced) inside the container. To unload a profile from AppArmor, use: apparmor_parser -R /path/to/profile
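
For comparison, AppArmor confinement can also be switched off entirely for a single container (not recommended outside of debugging); a minimal sketch repeating the same ping test:

# Run the same test with AppArmor confinement disabled for this container
docker run --rm -i --security-opt apparmor=unconfined centos ping -c3 8.8.8.8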

Saturday, March 16, 2019

Jenkins - Integrating Source Clear with Jenkins


As we already know, SourceClear is a SaaS-based application. We scan the source code on our local machine and the results are sent to the SourceClear website, where they can be viewed with the account we registered.

Integrating software composition analysis tools with continuous integration tools is very important. Here we will integrate SourceClear with Jenkins and perform a source code scan as part of the build, and then view the results on the SourceClear website.

1. Create a Secret text credential in Jenkins using the SourceClear token obtained when registering the account.


I have created the credential with an ID and description so that it can be referenced from build jobs.

2. Inside a Maven job, first create a secret text binding with a variable. In the Build Environment section, tick "Use secret text(s) or file(s)", click Add and choose Secret text. Provide the variable name and select the credential ID created in the first step.

3. Once the binding is done, in the Post Steps section of the job select "Execute shell" and enter the following command (see the sketch after these steps): curl -sSL https://download.sourceclear.com/ci.sh | sh

4. Run the job; the results appear on the SourceClear website.
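
For reference, a minimal sketch of the "Execute shell" post step, assuming the secret text binding from step 2 exposes the token as SRCCLR_API_TOKEN (the variable name the SourceClear CI script conventionally reads; treat it as an assumption and match it to your binding):

# SRCCLR_API_TOKEN is assumed to be injected by the Jenkins credentials binding above
# Download and run the SourceClear CI script against the job workspace
curl -sSL https://download.sourceclear.com/ci.sh | sh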

More to Come, Happy learning :-)

Docker - Multi Stage Build

Containers are running instances of an image. While working with Docker and other container technologies, large images consume a lot of disk space; if we keep images in a local registry or in /var/lib/docker, that space has to be reclaimed frequently.

Having larger images also means cleaning them up more often. Docker 17.05 introduced a feature that helps in creating thin Docker images, in other words smaller images: multi-stage builds. This feature allows one stage to reuse artifacts produced by an earlier stage, which helps keep the final image small.

The old way - Let's say I want to create a Docker image for one of my Java applications. The current Dockerfile looks something like this:
jagadishsampleUtils$Fri Mar 15@ cat old-Dockerfile
FROM maven:3.5.2
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
VOLUME /Users/jagadish/.m2
RUN mvn -f /usr/src/app/pom.xml package
EXPOSE 8080
ENTRYPOINT ["java", "-classpath", "/usr/src/app/target/sampleUtils-1.0-SNAPSHOT.jar", "com.sample.core.bank.App"]
The Dockerfile is simple. All I want to run is the application using "java -classpath /usr/src/app/target/sampleUtils-1.0-SNAPSHOT.jar com.sample.core.bank.App".

In this Dockerfile I am using the maven:3.5.2 base image, copying in the source code and then running "mvn package". This in turn downloads many dependency libraries, and once the command succeeds it produces the jar file that I use to run my application. But notice that all we need to run the application is that jar file, while the image also contains the source code, the downloaded dependency jars and the whole Maven toolchain. This takes the image size up to about 748 MB.

What if we could build the image so that only the jar file ends up in it, rather than everything else? Multi-stage builds help in exactly this case.

The new way - The multi-stage build copies only the generated jar file out of the first stage. Here is how the Dockerfile looks:
jagadishsampleUtils$Fri Mar 15@ cat Dockerfile
FROM maven:3.5.2 AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
VOLUME /Users/jagadish/.m2
RUN mvn -f /usr/src/app/pom.xml package
 
FROM openjdk:8
COPY --from=build /usr/src/app/target/sampleUtils-1.0-SNAPSHOT.jar /usr/app/sampleUtils-1.0-SNAPSHOT.jar
EXPOSE 8080
# Run the application from the copied jar (same entrypoint as the single-stage image)
ENTRYPOINT ["java", "-classpath", "/usr/app/sampleUtils-1.0-SNAPSHOT.jar", "com.sample.core.bank.App"]

In the Dockerfile above, the first stage copies in the source code, runs the Maven build and is named build. The first FROM line declares this: FROM maven:3.5.2 AS build

Once the build stage has run, I use it as a reference and copy the generated jar file from that stage into the second image using:

FROM openjdk:8
COPY --from=build /usr/src/app/target/sampleUtils-1.0-SNAPSHOT.jar /usr/app/sampleUtils-1.0-SNAPSHOT.jar

The resulting image contains only the jar file and a Java runtime, which reduces the image size. In this case the final image comes out at around 624 MB, roughly 120 MB smaller than the old approach.
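
To see the difference, build both images and compare their sizes; a minimal sketch, assuming the files are named old-Dockerfile and Dockerfile as above (the image tags are illustrative):

# Build the single-stage image from the old Dockerfile
docker build -f old-Dockerfile -t sampleutils:old .

# Build the multi-stage image; only the final stage ends up in the tagged image
docker build -t sampleutils:new .

# Compare the resulting image sizes
docker images | grep sampleutils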

More to Come. Happy Learning :-)

Friday, March 15, 2019

Security - Software Composition Analysis

Security is everyone's job. With that in mind, security shifts left into the first phase: development. Writing secure code is always necessary, and the code we write can be analysed during the build phase to identify potential vulnerabilities before going live. But what about the libraries that we import and use in our applications?
When writing code, most of the time we use libraries from external parties. These libraries provide additional functionality to whatever we are developing. So how can we make sure that these libraries are secure? How can we make sure they do not include any vulnerabilities? This is where software composition analysis tools come into the picture.

Software Composition Analysis (SCA) is the process of automating visibility into open source components for the purposes of risk management, security and license compliance. SourceClear is one such tool.

SourceClear helps developers by scanning their projects and telling them which open source components they are using, who created them, what they do in the application and which components have vulnerabilities. It allows scanning to be part of the developer workflow and examines the security risk of open source code and libraries in real time, reporting details about their origin, creation and impact on the application.

Installation - SourceClear is a SaaS-based application, which means all we need to download and install locally are the SourceClear tools used to scan the code. Once a scan is done, the results are sent to the SourceClear web application, for which we need an account. Register for a SourceClear account at "https://www.sourceclear.com". Once the account is created, run the following commands on a Mac to install the SourceClear components and activate them:

brew tap srcclr/srcclr
brew install srcclr
srcclr activate

When we run the last command it asks for an activation token. The token is something like a license and is provided by SourceClear when the account is created. Once activation is done we can scan our code. The activation step looks like this:
jagadishconsul$Wed Mar 13@ srcclr activate
Activation Token:

Now, scan a remote or local repository:
 `srcclr  scan --url https://github.com//`
 `srcclr  scan ~/path`


Once we activate the token with the srcclr command on our local machine, the tool is linked to SourceClear. Whenever we run a scan locally, the results are sent to the SourceClear website, where we can log in with our credentials to see them.

Run a scan - Check out your GitHub source code and run the scan inside the project directory:
[root@ip-172-31-36-247 SampleTest]# /usr/local/bin/srcclr scan .
SourceClear scanning engine ready
Running the Maven scanner
Scanning completed
Found 64 lines of code
Processing results...
Processing results complete

Summary Report
Scan ID                                   dc9ee77c-7335-493d-8ba4-a25a40551e46
Scan Date & Time                     Mar 14 2019 02:58AM UTC
Account type                            PRO
Scan engine                             3.2.4 (latest 3.2.4)
Analysis time                            30 seconds
User                                      root
Project                                   /home/centos/SampleTest
Package Manager(s)                 Maven

Open-Source Libraries
Total Libraries                         17
Direct Libraries                       13
Transitive Libraries                  4
Vulnerable Libraries                 4
Third Party Code                     99.9%

Security
With Vulnerable Methods         0
High Risk Vulnerabilities           1
Medium Risk Vulnerabilities      5
Low Risk Vulnerabilities           0

Vulnerabilities - Public Data
CVE-2014-0114                             High Risk       Arbitrary Code Execution Through The Class Parameter Passed To The GetClass     Apache Commons BeanUtils 1.8.3

CVE-2014-3596                             Medium Risk     Man In The Middle (MitM) Attacks Are Possible With Spoofed SSL Servers          Axis Web Services 1.4

CVE-2012-5784                             Medium Risk     Man In The Middle (MitM) Attacks Are Possible With Spoofed SSL Servers          Axis Web Services 1.4

CVE-2018-8032                             Medium Risk     Cross-Site Scripting (XSS)                                                      Axis Web Services 1.4

Vulnerabilities - Premium Data
NO-CVE                                    Medium Risk     Remote Code Execution (RCE) Via Java Object Deserialization                     Apache Commons IO 2.4

NO-CVE                                    Medium Risk     Potential Remote Code Execution Via Java Object Deserialization                 Apache Commons Collections 3.2.1

Licenses
Unique Library Licenses             5
Libraries Using GPL                   1
Libraries With No License           3
Libraries With Multiple Licenses  1

Issues
Issue ID    Issue Type          Severity    Description                                                                                   Library Name & Version In Use

13696119    Vulnerability       7.5         CVE-2014-0114: Arbitrary Code Execution Through The Class Parameter Passed To The GetClass    Apache Commons BeanUtils 1.8.3

13696120    Vulnerability       5.1         NO-CVE: Remote Code Execution (RCE) Via Java Object Deserialization                           Apache Commons IO 2.4

13696121    Vulnerability       5.1         NO-CVE: Potential Remote Code Execution via Java Object Deserialization                       Apache Commons Collections 3.2.1

13696122    Vulnerability       4.3         CVE-2018-8032: Cross-Site Scripting (XSS)                                                     Axis Web Services 1.4

13696123    Vulnerability       5.8         CVE-2014-3596: Man In The Middle (MitM) Attacks Are Possible With Spoofed SSL Servers         Axis Web Services 1.4

13696124    Vulnerability       5.8         CVE-2012-5784: Man in the Middle (MitM) Attacks are Possible with Spoofed SSL Servers         Axis Web Services 1.4

13696125    Outdated Library    3.0         Latest version at scan: 1.9.3                                                                 Apache Commons BeanUtils 1.8.3

13696126    Outdated Library    3.0         Latest version at scan: 3.3.3                                                                 ZXing Core 2.0

13696127    Outdated Library    3.0         Latest version at scan: 1.5.0-b01                                                             JavaMail API (compat) 1.4.7

13696128    Outdated Library    3.0         Latest version at scan: 1.1.1                                                                 JSON.simple 1.1

13696129    Outdated Library    3.0         Latest version at scan: 2.6                                                                   Apache Commons IO 2.4

13696130    Outdated Library    3.0         Latest version at scan: 3.2.2                                                                 Apache Commons Collections 3.2.1

13696131    Outdated Library    3.0         Latest version at scan: 3.0-alpha-1                                                           JavaServlet(TM) Specification 2.5

Full Report Details                       https://technik.sourceclear.io/teams/wddUo70/scans/6031542

Once the scan is done, the results give us a lot of data, including the number of libraries we are using, vulnerability data describing each type of vulnerability, license information and so on.

It also gives us a link to the same full report in the web user interface. When we log in to SourceClear with our credentials, we can see the risk score along with the project inventory, as well as the vulnerability information for each library.

Software composition analysis tools are very important when writing code. We routinely use third-party libraries to extend our functionality, so it is essential to understand how safe those libraries are. More to Come, Happy Learning :-)