Thursday, March 24, 2016

CGroup Case 2 – Device Whitelisting

The devices subsystem in cgroups provides fine-grained control over system devices. An admin can define cgroups that restrict access to particular devices and define which users or groups can access them, thus providing security and data protection.

In this article we will see how we can whitelist a device using Cgroups.

Add the Configurations
The first thing we need to do is add the configuration to the /etc/cgconfig.conf file:

group blockDevice {
    devices {
        # Deny access to /dev/sda2
        devices.deny = "b 8:2 mrw";
    }
}
 
In the above snippet we have blocked access to the device /dev/sda2. The devices subsystem accepts a parameter “devices.deny” whose value contains the device type, the major and minor numbers of the device, and the access modes. Let’s break down the value used above:

b – Type of the device. There are 3 types:
  • a — applies to all devices, both character devices and block devices
  • b — specifies a block device
  • c — specifies a character device

8:2 – Major and minor numbers of the device. These can be found using

[root@vx111a dev]# ls -l sda2
brw-rw---- 1 root disk 8, 2 Mar 15 14:24 sda2

[Note: Major for /dev/sda2 is 8 and minor is 2]
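Besides ls -l, the stat command can print the device numbers directly; note that its %t/%T format specifiers print them in hexadecimal (for small numbers like 8 and 2 the hex and decimal forms coincide). A quick sketch using /dev/null, which exists on every Linux system:

```shell
# Print the major/minor numbers of a device node (output is in hex).
# /dev/null is used here because it exists everywhere; it is char device 1:3.
stat -c 'major=%t minor=%T' /dev/null
```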

mrw – access is a sequence of one or more of the following letters:
  • r — allows tasks to read from the specified device
  • w — allows tasks to write to the specified device
  • m — allows tasks to create device files that do not yet exist

devices.deny - specifies devices that tasks in a cgroup cannot access. The syntax of entries is identical to that of devices.allow.
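Entries like this can also be applied at runtime by writing into the devices.deny file under the cgroup mount; when scripting that, it helps to build the entry string ("type major:minor access") from variables. A trivial sketch of the entry format (8 and 2 are the /dev/sda2 numbers from above):

```shell
# Compose a device-rule entry: type (a/b/c), major:minor, access letters (r/w/m).
# At runtime such a string would be written (as root) to the cgroup, e.g.
#   echo "b 8:2 mrw" > /sys/fs/cgroup/devices/blockDevice/devices.deny
printf 'b %d:%d mrw\n' 8 2
```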

Once the cgconfig file is configured, we move on to the cgred configuration, which assigns processes to the subsystems.


[root@vx111a tmp]# cat /etc/cgrules.conf 
*:bash           devices      blockDevice/

Now I have added bash to the cgrules file. This means that commands run from a bash prompt that try to access /dev/sda2 will be restricted.
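As a quick way to see which cgroups a given shell has been placed in (independent of cgred), every process exposes its cgroup membership under /proc; a minimal check, runnable on any modern Linux box:

```shell
# Show the cgroup membership of the current shell.
# Each line has the form hierarchy-id:subsystems:path, e.g. "4:devices:/blockDevice"
# once cgred has applied the rule above (the paths shown depend on your system).
cat /proc/$$/cgroup
```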

Start the Service
Start both services using the commands,
service cgconfig restart
service cgred restart

Testing
In order to test the cgroup we first need to make sure that the PID of the bash prompt is present in the cgroup we created.

Find the PID of the current bash prompt using
[root@vx111a docker]# ps -p $$
  PID TTY          TIME CMD
 8966 pts/2    00:00:00 bash

Check lscgroup and make sure that the devices subsystem is active:
[root@vx111a docker]# lscgroup | grep block
devices:/blockDevice

Check the PID – once the subsystem is active, we need to check that the PID obtained above is present in the tasks file. This is a special file that contains all the PIDs attached to the subsystem. To check, go to the location “/sys/fs/cgroup/devices/blockDevice” and inspect the tasks file:

[root@vx111a blockDevice]# cat tasks
8966
9096
9638

We can see that 8966 is listed. Now check where /dev/sda2 is mounted:

[root@vx111a tmp]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda2      xfs        49G  207M   49G   1% /test

So the device is mounted on /test. Now, if we run a command from the bash prompt that accesses /dev/sda2, the cgroup should not allow it. We can check using

[root@vx111a tmp]# dd if=/dev/sda2 of=/dev/null bs=512 count=1
dd: failed to open ‘/dev/sda2’: Operation not permitted

We can see that the current bash prompt does not have access to /dev/sda2.

More to come, Happy learning

CGroup Case 1 - I/O throttling

In the first use case we will see how we can manage I/O operations on disk devices. This is done with the blkio subsystem available in cgroups, which moderates I/O operations to the specified block devices. In this example we will see how we can restrict the read operations performed on a drive.

For this we use the “blkio.throttle.read_bps_device” parameter, which specifies an upper limit on the rate at which a device can be read. The rate is specified in bytes per second. The parameter accepts values of the form major:minor bytes_per_second, where

major & minor – the device type and node numbers as specified in Linux
bytes_per_second – the upper limit on the rate at which read operations can be performed.

Now let’s limit the read operations on the device /dev/sda to 1 MB per second. For this we first need to find the major and minor values for the device. These can be found using

[root@vx111a dev]# ls -l sd*
brw-rw---- 1 root disk 8, 0 Mar 15 14:24 sda
brw-rw---- 1 root disk 8, 1 Mar 15 14:24 sda1
brw-rw---- 1 root disk 8, 2 Mar 15 14:24 sda2
brw-rw---- 1 root disk 8, 3 Mar 15 14:24 sda3
brw-rw---- 1 root disk 8, 4 Mar 15 14:24 sda4
brw-rw---- 1 root disk 8, 5 Mar 15 14:24 sda5
brw-rw---- 1 root disk 8, 6 Mar 15 14:24 sda6
[Note: Major for /dev/sda1 is 8 and minor is 1]

OR

[root@vx111a ~]# cat /proc/partitions | grep sda
   8        0  488386584 sda
   8        1   81920000 sda1
   8        2   51200000 sda2
   8        3   51200000 sda3
   8        4          1 sda4
   8        5   10240000 sda5
   8        6   10240000 sda6

In this case, for /dev/sda, we have the major number 8 and the minor number 0. Let’s run the hdparm command first without cgroups and see the disk read rate for the drive /dev/sda.

NOTE - hdparm is the tool to use when it comes to tuning your hard disk or DVD drive, but it can also measure read speed, deliver valuable information about the device, change important drive settings, and even erase SSDs securely.

[root@vx111a ~]# hdparm --direct -t /dev/sda
/dev/sda:
 Timing O_DIRECT disk reads: 368 MB in  3.00 seconds = 122.42 MB/sec

We can see that the disk read rate is about 122 MB per second. Now we want to restrict it to 1 MB per second.
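To get a feel for what such a cap means: the limit we are about to configure is 1048576 bytes per second, i.e. 1 MiB/s, so the 368 MB that hdparm read in 3 seconds above would take roughly 368 seconds under the throttle. A quick back-of-the-envelope check:

```shell
# Seconds needed to read 368 MB at a 1048576 bytes/sec (1 MiB/s) throttle
awk 'BEGIN{ printf "%d\n", (368 * 1024 * 1024) / 1048576 }'
```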

Now create a group in the /etc/cgconfig.conf file as

group limitIO {
    blkio {
        blkio.throttle.read_bps_device = "8:0 1048576";
    }
}
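The 1048576 in the rule is simply 1 MiB expressed in bytes, the unit blkio.throttle.read_bps_device expects; shell arithmetic makes such values easy to derive if you want a different cap:

```shell
# bytes-per-second values for blkio.throttle.read_bps_device
echo $(( 1 * 1024 * 1024 ))    # 1 MiB/s
echo $(( 10 * 1024 * 1024 ))   # 10 MiB/s
```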

We have defined a limitIO group using the blkio subsystem. Now let’s configure the cgrules.conf file by adding the below line to the end of the file,

*:hdparm      blkio    limitIO/

This says that operations performed by the hdparm command need to be added to the blkio subsystem and limited by the group limitIO.

Now restart both the services and run the lssubsys command to check the configuration,

[root@vx111a /]#lssubsys
cpuset:/
perf_event:/
hugetlb:/
blkio:/
blkio:/limitIO
memory:/
memory:/testOOM
net_cls:/

We can see the limitIO group is associated with the blkio subsystem. Once the services are restarted, run hdparm again and check the values.

Testing
Now test the cgroup using the hdparm command as,

[root@vx111a ~]# hdparm --direct -t /dev/sda
/dev/sda:
 Timing O_DIRECT disk reads:   4 MB in  4.00 seconds = 1023.38 kB/sec

We can see that the read rate is now limited to about 1 MB per second, matching the configured value of 1048576 bytes per second.

More to come, Happy learning

CGroups

Resource exhaustion is one of the most common issues when running production machines. There are cases where servers crash because another process is using too much memory or running CPU-intensive code. It is always good to have a way to control resource usage. On larger systems, kernel resource controllers (also called control groups, or cgroups) can be useful to help priority applications get the necessary resources by limiting the resources available to other applications.

In this article we will see how we can use Cgroups to manage resources. According to kernel documentation, Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behavior.

CGroups is a Linux kernel feature to limit, account for, and isolate the resource usage (CPU, memory, disk I/O) of process groups. Using it we gain control over allocating, prioritizing, managing and monitoring system resources. Cgroups can also be thought of as a generic framework into which resource controllers can be plugged, which are then used to manage the different resources of the system.

The resource controller can be a memory, CPU, network or disk I/O controller. As the name suggests, each controller performs the corresponding function; the memory controller, for example, manages the memory of the processes.

The types of resources that can be managed by cgroups include the following:
  • blkio (Storage) - limits total input and output access to storage devices (such as hard disks, USB drives, and so on)
  • cpu (Processor scheduling) - assigns the amount of access a cgroup has to CPU scheduling
  • memory - limits memory usage by task; it also creates reports on memory resources used
  • cpuacct (Process accounting) - reports on CPU usage; this information can be leveraged to charge clients for the amount of processing power they use
  • cpuset (CPU assignment) - on systems with multiple CPU cores, assigns a task to a particular set of processors and associated memory
  • freezer (Suspend/resume) - suspends and resumes cgroup tasks
  • net_cls (Network bandwidth) - limits network access for selected cgroup tasks

There are other resources that can be managed by cgroups as well; check the docs for more details.

Installation
The easiest way to work with cgroups is to install the libcgroup package, which contains the necessary tools and utilities for using cgroups. libcgroup is a library that abstracts the control group file system in Linux.

[root@vx111a work]# yum list installed | grep libcgroup
libcgroup.x86_64                       0.41-8.el7                      @anaconda
libcgroup-tools.x86_64               0.41-8.el7                      @anaconda

Install the libcgroup library and we can start from there.

Configuration
Cgroups are implemented using a file system-based model—just as you can traverse the /proc tree to view process-related information, you can examine the hierarchy at /cgroup to examine current control group hierarchies, parameter assignments, and associated tasks.

Once the libcgroup package is installed we get 2 services:
cgconfig
cgred

cgconfig – The cgconfig service installed with the libcgroup package provides a convenient way to create hierarchies, attach subsystems to the hierarchies, and manage cgroups within those hierarchies. The service is not started by default. It reads the file /etc/cgconfig.conf and, depending on the contents of the file, creates hierarchies, mounts the necessary file systems, creates cgroups, and sets the subsystem parameters.

The default /etc/cgconfig.conf file installed with the libcgroup package creates and mounts an individual hierarchy for each subsystem, and attaches the subsystems to these hierarchies. In other words this is used to define control groups, their parameters and also mount points.

Once the cgconfig service is started, a virtual file system is mounted. This can be either /cgroup or /sys/fs/cgroup.

[root@vx111a conf.d]# ll /sys/fs/cgroup/
total 0
drwxr-xr-x 2 root root  0 Mar 16 13:35 blkio
lrwxrwxrwx 1 root root 11 Mar 15 14:24 cpu -> cpu,cpuacct
lrwxrwxrwx 1 root root 11 Mar 15 14:24 cpuacct -> cpu,cpuacct
drwxr-xr-x 5 root root  0 Mar 16 13:36 cpu,cpuacct
drwxr-xr-x 2 root root  0 Mar 15 14:24 cpuset
drwxr-xr-x 4 root root  0 Mar 15 14:24 devices
drwxr-xr-x 2 root root  0 Mar 15 14:24 freezer
drwxr-xr-x 2 root root  0 Mar 15 14:24 hugetlb
drwxr-xr-x 2 root root  0 Mar 16 13:36 memory
drwxr-xr-x 2 root root  0 Mar 15 14:24 net_cls
drwxr-xr-x 2 root root  0 Mar 15 14:24 perf_event
drwxr-xr-x 4 root root  0 Mar 15 14:24 systemd

The configuration file contains group elements; the resources that need to be managed are defined inside them. A simple configuration looks as

group group1 {
    cpu {
        cpu.shares = "800";
    }
    cpuacct {
        cpuacct.usage = "0";
    }
    memory {
        memory.limit_in_bytes = "4G";
        memory.memsw.limit_in_bytes = "6G";
    }
}

In the above snippet we defined a group with the name group1, in which we defined the subsystems that we want to manage: the CPU and memory subsystems.

The cpu.shares parameter determines the relative share of CPU time available to the tasks in the cgroup.

The memory.limit_in_bytes parameter specifies the amount of memory this group has access to; processes associated with this group are limited to 4 GB of memory.
The memory.memsw.limit_in_bytes parameter specifies the total amount of memory and swap space processes may use.
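Note that cgconfig accepts suffixed values like "4G", but the kernel file itself stores plain bytes; converting by hand is straightforward if you ever need to echo a value directly into memory.limit_in_bytes:

```shell
# 4 GiB and 6 GiB expressed in bytes, as stored in memory.limit_in_bytes
# and memory.memsw.limit_in_bytes respectively
echo $(( 4 * 1024 * 1024 * 1024 ))
echo $(( 6 * 1024 * 1024 * 1024 ))
```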

cgred is a service that moves tasks into cgroups according to parameters set in the /etc/cgrules.conf file. This file contains a list of rules which assign a control group in a subsystem to a defined group or user.

Entries in the configuration file have the form
user    subsystems    control_group

A sample entry would look like
*:java    memory  group1

In the above snippet we have defined a rule such that every java process that starts is added to the memory subsystem under group1. So all processes started as java will have the 4G memory limit we defined in the /etc/cgconfig.conf file.

Start the Service
Start both services using the commands,
service cgconfig restart
service cgred restart

Once the services are started we can check our configuration using the lscgroup command as

[root@vx111a conf.d]# lscgroup -g cpu:/
cpu,cpuacct:/
cpu,cpuacct:/group1
cpu,cpuacct:/browsers
memory:/group1
devices:/blockDevice

lssubsys - the lssubsys command lists all subsystems present in the system (with -am, mounted ones are shown with their mount point):

[root@vx111a docker]# lssubsys
cpuset
cpu,cpuacct
memory
devices
freezer
net_cls
blkio
perf_event
hugetlb

In the next article we will see some use cases for cgroups. More to come. Happy learning

Monday, February 22, 2016

Ansible vault

While working with automation it is necessary to have a secure system to store various details like passwords, variables, SSH keys etc. Ansible provides a facility called Vault which helps sysadmins store sensitive data and use it while running playbooks on remote machines. In this article we will see how we can use Ansible Vault to secure things. We will see how:

1) To encrypt data using Vault
2) To use Ansible Vault while running playbooks

While using Ansible as a configuration management system or an orchestration engine, it is necessary to pass certain data like passwords, keys etc. to run the playbooks. These details are often common and used multiple times. An automated system that prompts the operator for passwords all the time is not very efficient. To maximize the power of Ansible, secret data has to be written to a file that Ansible can read and use. Stored unprotected, however, that data could be compromised.

For this, Ansible provides a facility to protect your data at rest. That facility is Vault, which allows encrypting text files so that they are stored "at rest" in encrypted format. Without the key or a significant amount of computing power, the data is indecipherable. The ansible-vault command is what Ansible provides for securing things.

Encrypt Data
1) Create a Sample file to encrypt

[root@vx111a vault]# ansible-vault create sample_passwd.yml
Vault password:
Confirm Vault password:

We created a file sample_passwd.yml; the command asks for a vault password. If we check the file type and contents,

[root@vx111a vault]# file sample_passwd.yml
sample_passwd.yml: ASCII text

[root@vx111a vault]# cat sample_passwd.yml
$ANSIBLE_VAULT;1.1;AES256
31643134323761626464336334333461636135656435333161636538326132356166303536353838
3661376665336163613139613836313765633836333838320a626262323565653837363735336163
36396639336239383566306439396262383965623338613664383434663765366639636534393634
6239376138303763610a363165323630326231626334633931323732316564316135643033383730
62666630333233366234633366623331326266633932363166656130373164333335

We see that the file is an ASCII file with encrypted contents.
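That $ANSIBLE_VAULT;1.1;AES256 header line is also a convenient way to test, in scripts, whether a file is vault-encrypted before deciding how to handle it. A small sketch (the file path and sample contents are illustrative):

```shell
# Detect a vault-encrypted file by its header line.
printf '$ANSIBLE_VAULT;1.1;AES256\n3164...\n' > /tmp/maybe_vaulted.yml  # illustrative sample
if head -n1 /tmp/maybe_vaulted.yml | grep -q '^\$ANSIBLE_VAULT;'; then
    echo encrypted
else
    echo plaintext
fi
```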

2) Edit the encrypted file to change its contents
In order to edit the encrypted file, we need to use the edit command of ansible-vault.
[root@vx111a vault]# ansible-vault edit sample_passwd.yml
Vault password:

When you try to edit the contents, it will ask for the vault password.

3) Decrypt the file contents
[root@vx111a vault]# ansible-vault decrypt sample_passwd.yml
Vault password:
Decryption successful

Once decrypted we can see the contents of the file as,
[root@vx111a vault]# cat sample_passwd.yml
password: vagrant

We can then use the encrypt command to encrypt the contents again as,
[root@vx111a vault]# ansible-vault encrypt sample_passwd.yml
Vault password:
Confirm Vault password:
Encryption successful

Ansible also provides a rekey facility to change the vault password using
[root@vx111a vault]# ansible-vault rekey sample_passwd.yml
Vault password:
New Vault password:
Confirm New Vault password:
Rekey successful

Using vault with Playbooks
Until now we have seen how we can use Ansible Vault to encrypt data. Encrypted data is of no use unless it can be consumed. Now we will see how we pass the encrypted data to Ansible playbooks when running them on a remote machine.

1) Create a sample yml file with 2 variables as
[root@vx111a vault]# cat main.yml
version: 8.0.32
http_port: 8084

Now encrypt the file using ansible-vault,

[root@vx111a vault]# ansible-vault encrypt main.yml
Vault password:
Confirm Vault password:
Encryption successful

Check the encryption
[root@vx111a vault]# cat main.yml
$ANSIBLE_VAULT;1.1;AES256
34333132653430616138313235366435613232653662653865663264346632616664666665356437
6133633063396434326538373531326231623536373465360a326334303832366530373535356334
32376162336164643561343462643063326366653039303433666439633064383364633064303939
3139663363366336610a336566353664613536643933633166356536336634363734626664363261
34373865613831646339333866613564373937326262643432353866316339306263346430643434
3030633864663837613934663166616630623966653533383733

Now once the file is encrypted write a playbook as,
[root@vx111a vault]# cat sample-playbook.yml
---
- hosts: cent
  vars_files:
     - main.yml
 
  tasks:
    - name: run echo Command
      local_action: shell echo {{ http_port }}
      register: local_process

    - debug: msg="{{ local_process.stdout }}"


Now we have written a playbook which includes the main.yml file (encrypted using ansible-vault) containing the variables. When we run the playbook as

[root@vx111a vault]# ansible-playbook sample-playbook.yml
ERROR: A vault password must be specified to decrypt /work/vault/main.yml

It clearly says that we need to pass the vault password to run the playbook. The correct way to run the playbook is

[root@vx111a vault]# ansible-playbook sample-playbook.yml --ask-vault-pass
Vault password:

PLAY [cent] *******************************************************************

GATHERING FACTS ***************************************************************
ok: [172.16.202.96]

TASK: [run echo Command] ******************************************************
changed: [172.16.202.96 -> 127.0.0.1]

TASK: [debug msg="{{ local_process.stdout }}"] ********************************
ok: [172.16.202.96] => {
    "msg": "8084"
}

PLAY RECAP ********************************************************************
172.16.202.96              : ok=3    changed=1    unreachable=0    failed=0  

We can see that Ansible asks for the vault password in order to run the playbook. Once the password is provided, it runs the playbook, and the port 8084 is substituted with the value from the main.yml file.

This is how we can use Ansible Vault with playbooks.

Password File
Though we can provide the vault password every time we run a playbook, there are cases where we need to run the playbook without any manual intervention. Ansible provides an option to create a password file containing the vault password and pass that file to the Ansible command line as an argument.

1) Use the same main.yml file from above and encrypt it
[root@vx111a vault]# ansible-vault encrypt main.yml
Vault password:
Confirm Vault password:
Encryption successful

Now create a password file containing the password “redhat”. Save it in the home location and give it restrictive permissions.
[root@vx111a vault]# echo "redhat" > ~/.vault_password
[root@vx111a vault]# chmod 600 ~/.vault_password
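A slightly safer variant is to set a restrictive umask before creating the file, so the password is never readable by other users even for an instant between the echo and the chmod. A sketch (the path is illustrative):

```shell
# Create the vault password file with 600 permissions from the start.
( umask 077
  rm -f /tmp/.vault_password_demo
  echo "redhat" > /tmp/.vault_password_demo
  stat -c '%a' /tmp/.vault_password_demo )
```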

Now run the playbook by passing the password file as argument as,

[root@vx111a vault]# ansible-playbook sample-playbook.yml --vault-password-file ~/.vault_password

PLAY [cent] *******************************************************************

GATHERING FACTS ***************************************************************
ok: [172.16.202.96]

TASK: [run echo Command] ******************************************************
changed: [172.16.202.96 -> 127.0.0.1]

TASK: [debug msg="{{ local_process.stdout }}"] ********************************
ok: [172.16.202.96] => {
    "msg": "8084"
}

PLAY RECAP ********************************************************************
172.16.202.96              : ok=3    changed=1    unreachable=0    failed=0  

This is how we can use the Ansible vault and secure data.

Ansible Monitor-Alert

At the infrastructure and application level we need several monitoring metrics configured, from CPU and memory up to application-level metrics such as heap usage and number of database connections. Also, from an infrastructure automation point of view, you need to make sure you start a strong feedback loop by integrating automation with your monitoring and alerting systems through constant reporting.

In this article we will see how we can configure monitoring and alerting when a playbook runs. Let’s write a sample playbook as,

---
- hosts: cent

  tasks:
    - name: run echo Command
      command: /bin/echo Hello Sample PlayBook

    - name: get the IP address
      shell: hostname
      register: host_name

    - name: Send Mail Alert
      local_action: mail
                    host="127.0.0.1"
                    subject="[Ansible] Testing Mail"
                    body="Hello the Play book ran successfully on {{ host_name.stdout }}"
                    to="jagadesh.manchala@gmail.com"
                    from="root"     

In the above playbook we have multiple tasks: running a command on the remote machine and getting the hostname of the remote machine.

The last one is the important one, as it sends the mail. We have used the local_action element, which executes the mail command on the local machine. We could also configure our playbook to execute the mail command based on the responses of the commands above. I just got the hostname and added it to the mail body. Now once you run the playbook we can see,


[root@vx111a mail]# ansible-playbook sample-playbook.yml

PLAY [cent] *******************************************************************

GATHERING FACTS ***************************************************************
ok: [172.16.202.96]

TASK: [run echo Command] ******************************************************
changed: [172.16.202.96]

TASK: [get the IP address] ****************************************************
changed: [172.16.202.96]

TASK: [Send Mail Alert] *******************************************************
ok: [172.16.202.96 -> 127.0.0.1]

PLAY RECAP ********************************************************************
172.16.202.96              : ok=4    changed=2    unreachable=0    failed=0  

Now if we check our mail account, we can see that we received a mail once the playbook ran. We can add various logic to extract details from the remote machine and add them to our alerting mechanism. In the next article we will see how we can use other monitoring and alerting methods provided by Ansible.

Ansible delegation

Ansible by default runs a task on all of the machines being configured at once. What if a task depends on the status of a certain command on another machine? Say you are patching a package on a machine and you need to wait until a certain file is available on another machine. This is done in Ansible using the delegation option.

By using the Ansible delegate option we can run a task on a different host than the one being configured, using the delegate_to key. The module will still run once for every machine, but instead of running on the target machine, it will run on the delegated host. The facts available will be the ones applicable to the current host. Let’s write a basic playbook using Ansible delegation.

[root@vx111a delegate]# cat sample-playbook.yml
---
- hosts: cent
 
  tasks:
    - name: Install zenity
      action: yum name=zenity state=installed
    - name: configure zenity
      action: template src=hosts dest=/tmp/zenity.conf
    - name: Tell Master
      action: shell echo "{{ansible_fqdn}} done" >> /tmp/list
      delegate_to: 172.16.202.97

We have defined multiple tasks: install zenity, configure zenity, and finally run an echo command on a delegate machine with a different IP address. ansible-playbook will delegate the last task to the IP address 172.16.202.97. Once we run the playbook we can see


[root@vx111a delegate]# ansible-playbook sample-playbook.yml

PLAY [cent] *******************************************************************

GATHERING FACTS ***************************************************************
ok: [172.16.202.96]

TASK: [Install zenity] ********************************************************
changed: [172.16.202.96]

TASK: [configure zenity] ******************************************************
changed: [172.16.202.96]

TASK: [Tell Master] ***********************************************************
changed: [172.16.202.96 -> 172.16.202.97]

PLAY RECAP ********************************************************************
172.16.202.96              : ok=4    changed=3    unreachable=0    failed=0  

Now check the remote machine 172.16.202.97 for the file created.

root@ubuntu:/home/vagrant# cat /tmp/list
dev.foohost.vm done


More to Come. Happy learning

Ansible Tags

Tags are another feature in Ansible that can be used to run only a subset of tasks/roles. When we have tasks in a playbook and we execute the playbook, all tasks are executed. But by using tags we can make the playbook execute only a subset of tasks, by marking them with the tags attribute. In this article we will see how we can create tags and use them to execute a subset of tasks in a playbook.

1) Lets create a basic sample playbook as
[root@vx111a tags]# cat sample-playbook.yml
---
- hosts: cent
 
  tasks:
    - name: run echo Command
      command: /bin/echo Hello Sample PlayBook
      tags:
        - sample

    - name: Create Sub Directories
      file:
         dest: "/tmp/html"
         state: directory
         mode: 755
      tags:
        - create

If you check the above example, I have defined 2 tasks with 2 different tags. Now when we execute the playbook all the tasks are executed, but if we want only a specific tag to be executed we can run the playbook as,

[root@vx111a tags]# ansible-playbook sample-playbook.yml --tags sample

PLAY [cent] *******************************************************************

GATHERING FACTS ***************************************************************
ok: [172.16.202.96]

TASK: [run echo Command] ******************************************************
changed: [172.16.202.96]

PLAY RECAP ********************************************************************
172.16.202.96              : ok=2    changed=1    unreachable=0    failed=0  
In the above example we ran only the sample tag by passing the argument --tags sample on the command line. In the below example I ran only the create tag.

[root@vx111a tags]# ansible-playbook sample-playbook.yml --tags create

PLAY [cent] *******************************************************************

GATHERING FACTS ***************************************************************
ok: [172.16.202.96]

TASK: [Create Sub Directories] ************************************************
changed: [172.16.202.96]

PLAY RECAP ********************************************************************
172.16.202.96              : ok=2    changed=1    unreachable=0    failed=0  


We can add the register attribute so that we can see what exactly is changing. By default Ansible runs as if ‘--tags all’ had been specified. That’s it about tags in Ansible. More to come. Happy learning