
Monday, March 24, 2014

Solaris P-commands

As we know, everything in Unix-like systems is a file: whatever we open or execute, we deal with files. With the /proc virtual file system, even processes can be treated like files.

/proc (or procfs) is a virtual file system that allows us to examine processes like files. This means that /proc allows us to use file-like operations and intuitions when looking at processes. /proc does not occupy disk space; it is located in working memory.

Under /proc we see a lot of numbered directories, which are nothing but the PIDs (process IDs) of the running processes. Under each PID directory there are further subdirectories and files that give many details about that process. In Solaris, we can get the same details using the process commands, or P-commands.
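On Linux the same information can be pulled out of /proc with ordinary file commands. A minimal sketch, using the current shell's own PID:

```shell
#!/bin/sh
# Every running process has a numbered directory under /proc.
pid=$$                              # this shell's own PID
ls /proc/"$pid" | head              # status, cmdline, fd/, maps, ...
# A couple of file-like reads that mirror what the P-commands report:
cat /proc/"$pid"/cmdline | tr '\0' ' '; echo   # the command line
grep '^State' /proc/"$pid"/status              # current process state
```

Because these are just files, the usual tools (ls, cat, grep) work on them directly.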

The available P-commands in Solaris are:
  • pcred: Displays process credentials (e.g. EUID/EGID, RUID/RGID, saved UIDs/GIDs).
  • pfiles: Reports fstat() and fcntl() information for all open files, including the inode number, file system, ownership and size.
  • pflags: Prints the tracing flags, pending and held signals, and other /proc status information for each LWP.
  • pgrep: Finds processes matching certain criteria.
  • pkill: Kills specified processes.
  • pldd: Lists dynamic libraries linked to the process.
  • pmap: Prints process address space map.
  • prun: Starts stopped processes.
  • prstat: Display process performance-related statistics.
  • ps: List process information.
  • psig: Lists signal actions.
  • pstack: Prints a stack trace for each LWP in the process.
  • pstop: Stops the process.
  • ptime: Times the command; does not time children.
  • ptree: Prints process genealogy.
  • pwait: Wait for specified processes to complete.
  • pwdx: Prints process working directory.
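Several of these commands can be approximated on Linux straight from /proc. A sketch (not the Solaris implementations) emulating pwdx and pldd for the current shell:

```shell
#!/bin/sh
pid=$$
# pwdx-style: the cwd symlink points at the working directory
readlink /proc/"$pid"/cwd
# pldd-style: shared objects mapped into the address space
awk '$NF ~ /\.so/ { print $NF }' /proc/"$pid"/maps | sort -u
```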

Here are some snippets using the above P-commands.

Display Process credentials
oracle@solaris_11X:~/Downloads/apache-tomcat-7.0.52/bin# pcred 1289
1289:   e/r/suid=0  e/r/sgid=0
        groups: 0 1 2 3 4 5 6 7 8 9 12

Display the files opened by a process
oracle@solaris_11X:~# pfiles 1289
1289:   /usr/bin/java -Djava.util.logging.config.file=/home/oracle/Downloads/a
  Current rlimit: 65536 file descriptors
   0: S_IFCHR mode:0666 dev:551,0 ino:25690120 uid:0 gid:3 rdev:49,2
      O_RDONLY|O_LARGEFILE
      /devices/pseudo/mm@0:null
      offset:0
   1: S_IFREG mode:0644 dev:174,65544 ino:5047 uid:0 gid:0 size:2872
      O_WRONLY|O_APPEND|O_CREAT|O_LARGEFILE
      /home/oracle/Downloads/apache-tomcat-7.0.52/logs/catalina.out
      offset:2872


Display the flags
oracle@solaris_11X:~# pflags 1289
1289:   /usr/bin/java -Djava.util.logging.config.file=/home/oracle/Downloads/a
        data model = _ILP32  flags = ORPHAN|MSACCT|MSFORK
 /1:     flags = ASLEEP  lwp_wait(0x2,0x8046cb4)
 /2:     flags = ASLEEP  accept(0x32,0xce2ae320,0xce2ae380,0x1)
        sigmask = 0x00000004,0x00000000,0x00000000


Display the dynamic libraries linked into a process
oracle@solaris_11X:~# pldd 1289
1289:   /usr/bin/java -Djava.util.logging.config.file=/home/oracle/Downloads/a
/lib/libthread.so.1
/usr/jdk/instances/jdk1.6.0/jre/lib/i386/jli/libjli.so
/lib/libdl.so.1
/usr/lib/libc/libc_hwcap3.so.1
/usr/jdk/instances/jdk1.6.0/jre/lib/i386/client/libjvm.so
/lib/libsocket.so.1
/usr/lib/libsched.so.1
/lib/libm.so.1
/usr/lib/libCrun.so.1
/lib/libdoor.so.1
/lib/libm.so.2
/lib/libnsl.so.1
/lib/libmd.so.1
/lib/libmp.so.2
/usr/jdk/instances/jdk1.6.0/jre/lib/i386/libverify.so
/usr/jdk/instances/jdk1.6.0/jre/lib/i386/libjava.so
/usr/jdk/instances/jdk1.6.0/jre/lib/i386/native_threads/libhpi.so
/usr/jdk/instances/jdk1.6.0/jre/lib/i386/libzip.so
/usr/lib/locale/en_US.UTF-8/en_US.UTF-8.so.3
/usr/lib/locale/common/methods_unicode.so.3
/usr/jdk/instances/jdk1.6.0/jre/lib/i386/libmanagement.so
/usr/jdk/instances/jdk1.6.0/jre/lib/i386/libj2pkcs11.so
/usr/lib/libpkcs11.so.1
/lib/libcryptoutil.so.1
/usr/lib/security/pkcs11_softtoken.so.1
/usr/lib/libsoftcrypto.so.1
/lib/libgen.so.1
/usr/jdk/instances/jdk1.6.0/jre/lib/i386/libnet.so


Display the process address space of a process
oracle@solaris_11X:~# pmap 1289
1289:   /usr/bin/java -Djava.util.logging.config.file=/home/oracle/Downloads/a
08045000      12K rwx--    [ stack ]
08050000      44K r-x--  /usr/jdk/instances/jdk1.6.0/bin/java
0806A000       4K rwx--  /usr/jdk/instances/jdk1.6.0/bin/java
0806B000    7064K rwx--    [ heap ]

prstat: the top command equivalent in Solaris
prstat
 PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 719 oracle     84M   57M sleep   46    0   0:02:23  10% Xorg/3
 933 oracle     90M   20M sleep   54    0   0:00:30 6.7% gnome-terminal/2

Print the signal actions of a process
oracle@solaris_11X:~# psig 1289
1289:   /usr/bin/java -Djava.util.logging.config.file=/home/oracle/Downloads/a
HUP     caught  UserHandler     RESTART HUP,INT,QUIT,ILL,TRAP,ABRT,EMT,FPE,BUS,SEGV,SYS,PIPE,ALRM,TERM,USR1,USR2,CLD,PWR,WINCH,URG
INT     ignored


Print the stack of each thread in a process
oracle@solaris_11X:~/Downloads/apache-tomcat-7.0.52/bin# pstack 1661
1661:   /usr/bin/java -Djava.util.logging.config.file=/home/oracle/Downloads/a
-----------------  lwp# 1 / thread# 1  --------------------
 ceaa21a7 lwp_wait (2, 8046ce4)
 cea9988c _thrp_join (2, 0, 8046d50, 1) + 63
 cea999db thr_join (2, 0, 8046d50) + 23


Stop a process
oracle@solaris_11X:~/Downloads/apache-tomcat-7.0.52/bin# pstop 1661
oracle@solaris_11X:~/Downloads/apache-tomcat-7.0.52/bin# ps aex 1661
   PID TT       S  TIME COMMAND
1661 pts/2    T  0:05 /usr/bin/java
-Djava.util.logging.config.file=/home/oracle/Downloads/apache-tomcat-7.0.52/conf/logging.properties

Note the state “T”, which indicates the process is stopped.

Start the Process
oracle@solaris_11X:~/Downloads/apache-tomcat-7.0.52/bin# prun 1661
oracle@solaris_11X:~/Downloads/apache-tomcat-7.0.52/bin# ps aex 1661
   PID TT       S  TIME COMMAND
1661 pts/2    R  0:05 /usr/bin/java
-Djava.util.logging.config.file=/home/oracle/Downloads/apache-tomcat-7.0.52/conf/logging.properties

Note the state “R”, which indicates the process is running again.


More to come, happy learning!

Tuesday, March 18, 2014

Route

As we have seen, IP addresses in a local network are resolved with ARP. If we need to reach IP addresses outside the network, we need something called a gateway or router. A gateway is simply a machine that connects to more than one network and can therefore take packets transmitted within one network and re-transmit them on the other networks it is connected to. The Linux command “route” allows us to view and edit routing information.

[root@vx111a ~]# route -n
Kernel IP routing table
Destination     Gateway          Genmask         Flags Metric Ref  Use Iface
172.16.100.0    0.0.0.0          255.255.254.0   U     0      0      0 eth0
169.254.0.0     0.0.0.0          255.255.0.0     U     1002   0      0 eth0
0.0.0.0         172.16.100.254   0.0.0.0         UG    0      0      0 eth0

What does this mean? Simply that packets for the “172.16.100” network belong to the local network and will be delivered directly using ARP, while packets for machines outside “172.16.100” will be sent to the gateway machine “172.16.100.254”, which does the rest of the job: finding the machine with that IP address and forwarding the packet to it.

The above is an example of the routing table, which helps determine the gateway or host to send a packet to for a given address. An address pattern is specified by combining an address with a subnet mask. A subnet mask is a bit pattern, usually represented in dotted-quad notation, that tells the kernel which bits of a destination to treat as the network address and which remaining bits to treat as the host part.

A subnetwork (subnet) is a range of logical addresses within the address space assigned to an organization. Subnetting is a hierarchical partitioning of an organization's network address space into several subnets.

We can add a route through a gateway using the route command:
route add -net 216.109.0.0 netmask 255.255.0.0 gw 192.168.2.2

$ route -n
Kernel IP routing table
Destination    Gateway        Genmask        Flags Metric Ref  Use Iface
169.254.0.0    192.168.2.2    255.255.0.0    UG    0      0      0 eth0
216.239.0.0    192.168.2.3    255.255.0.0    UG    0      0      0 eth0


So if we see something like this in our routing table, we can say that packets for 169.254.* will be routed via the 192.168.2.2 gateway. We can use "route del" to delete an entry from the routing table.
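The kernel's routing table can also be read directly from /proc/net/route, which is roughly what route -n parses. A sketch: the Destination and Gateway columns are little-endian hex IPs (so 0202A8C0 is 192.168.2.2 read byte-reversed), and a destination of 00000000 marks the default route.

```shell
#!/bin/sh
# Print each default route's interface and gateway (in hex) straight
# from the kernel's own table.
awk 'NR > 1 && $2 == "00000000" { print $1, "gw(hex):", $3 }' /proc/net/route
```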

More to come, happy learning!

Hot Plug a CPU in Linux

There are cases where an application needs more performance during a certain period. At such times it is good to have the option of adding extra processing capacity without a system reboot. In this article we will see how to bring an additional CPU online alongside the existing ones without rebooting the system.

Some systems include Instant Capacity on Demand, where extra CPUs are present in the machine but aren't activated. This is useful for customers who predict growth, and therefore the need for more computing power, but cannot afford the extra capacity at the time of purchase.

Check how many CPUs are currently up and running:

[root@vx111a cpu]# grep "processor" /proc/cpuinfo
processor : 0
processor : 1
processor : 2
processor : 3

We can see that CPUs 0-3 are up and running. Now suppose we want to disable CPU 3:

[root@vx111a ~]# cd /sys/devices/system/cpu

[root@vx111a cpu]# ll
total 0
drwxr-xr-x. 8 root root    0 Mar 11 17:56 cpu0
drwxr-xr-x. 8 root root    0 Mar 11 17:56 cpu1
drwxr-xr-x. 8 root root    0 Mar 11 17:56 cpu2
drwxr-xr-x. 8 root root    0 Mar 11 17:56 cpu3
drwxr-xr-x. 3 root root    0 Mar 11 18:00 cpufreq
drwxr-xr-x. 2 root root    0 Mar 11 18:00 cpuidle
-r--r--r--. 1 root root 4096 Mar 11 18:00 kernel_max
-r--r--r--. 1 root root 4096 Mar 11 18:00 offline
-r--r--r--. 1 root root 4096 Mar 11 18:00 online
-r--r--r--. 1 root root 4096 Mar 11 18:00 possible
-r--r--r--. 1 root root 4096 Mar 11 18:00 present
-rw-r--r--. 1 root root 4096 Mar 11 17:56 sched_smt_power_savings

Here we can see the directories cpu0 to cpu3. To disable a CPU (consider cpu3):

[root@vx111a cpu]# cd  cpu3/
[root@vx111a cpu3]# ll
total 0
drwxr-xr-x. 6 root root    0 Mar 11 17:56 cache
drwxr-xr-x. 3 root root    0 Mar 11 17:56 cpufreq
drwxr-xr-x. 6 root root    0 Mar 11 18:08 cpuidle
-r--------. 1 root root 4096 Mar 11 18:08 crash_notes
drwxr-xr-x. 2 root root    0 Mar 11 18:08 microcode
-rw-r--r--. 1 root root 4096 Mar 11 17:56 online
drwxr-xr-x. 2 root root    0 Mar 11 18:08 thermal_throttle
drwxr-xr-x. 2 root root    0 Mar 11 17:56 topology

We can see a file called online; when we check its value we see
[root@vx111a cpu3]# cat online
1

Now write ‘0’ to the same file:

[root@vx111a cpu3]# echo 0 > ./online

If we now check the available CPUs again, we see
[root@vx111a cpu]# grep "processor" /proc/cpuinfo
processor : 0
processor : 1
processor : 2

CPU 3 is now disabled, and no reboot was needed. To enable or disable a CPU, we just write 1 or 0 to the online file that exists in each cpu directory.
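A quick way to confirm the result is via the aggregate mask files the kernel keeps next to the cpuN directories (the online/offline/present entries visible in the listing above). A sketch:

```shell
#!/bin/sh
# The kernel summarizes CPU hotplug state as ranges like "0-3".
cd /sys/devices/system/cpu
echo "online:  $(cat online)"    # CPUs currently running
echo "offline: $(cat offline)"   # CPUs taken down (empty if none)
echo "present: $(cat present)"   # all CPUs the kernel knows about
```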


More to come, happy learning!

Wednesday, March 5, 2014

Trouble Shooting High I/O In Linux

I/O analysis in Linux is very important when the system is performing slowly. First we need to confirm that the slowness is actually due to high I/O.

Disk I/O encompasses the input/output operations on a physical disk. If you’re reading data from a file on disk, the processor must wait for the file to be read. During this process the access time is very important: the time taken for a request to be serviced, including all the operations involved, such as fetching the data from the disk.

In this article we will see how to analyze disk I/O performance and troubleshoot performance issues.

I/O wait is a very important factor when there is system slowness. It is the percentage of time your processors spend waiting on the disk.

Finding whether I/O is causing the system Slowness

To confirm that the system slowness is due to high I/O, we can use the “top” command provided by Linux.

When we run the “top” command, we see

top - 04:00:14 up 26 days, 10:17,  2 users,  load average: 4.91, 2.69, 2.65
Tasks: 546 total,   1 running, 542 sleeping,   2 stopped,   1 zombie
Cpu(s): 11.3%us, 0.5%sy, 9.0%ni, 78.9%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem:  65831008k total, 63023472k used,  2807536k free,  3177444k buffers
Swap:  4194296k total,      208k used,  4194088k free, 10945140k cached

0.0%wa – This is the important value: it shows the percentage of time the CPU spends waiting for I/O operations to complete. The higher the value, the more CPU time is lost waiting on I/O.

Once we find that the I/O wait is high and is causing the system to perform slowly, we need to find the disk that is actually struggling with its I/O operations.

Finding out which Disk is Having Issues in performing I/O Operations

Now we need to determine which disk is having trouble with I/O operations. For this, Linux provides the “iostat” command, which reports I/O statistics.

If we run the iostat command as

Dev:vx1379:hello-~ $ iostat -x 2 5
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.83    0.12    0.53    3.01    0.14   95.37

Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
xvda       0.04    2.84   0.12   3.37    4.64   45.91    14.50     0.05  15.05  14.14   4.93
xvda1      0.00    0.00   0.00   0.00    0.01    0.00    37.17     0.00 123.64  59.65   0.00

This displays the current I/O statistics. We can also see %iowait as 3.01, matching the value from top. The above iostat command reports every 2 seconds, 5 times in total.

The first report printed covers the statistics since the system's last reboot; every subsequent report covers only the interval since the previous one.

The important attribute to check is the %util column at the end. It tells you how busy the disk is; a consistently high percentage means the disk is struggling to keep up with its I/O operations.

The iostat command gives a large amount of information on how the disk is being used, such as merged read and write requests per second (rrqm/s & wrqm/s), reads and writes per second (r/s & w/s), and plenty more.

The other three important columns to check are avgqu-sz, await and svctm:

Service time (svctm, the storage wait) is the time it takes to actually send the I/O request to the storage and get an answer back – the time the storage system itself needs to handle the I/O. In the sample above it is around 14 ms.

Average wait (await) is the time an I/O spends in the host I/O queue, including the service time.

Average queue size (avgqu-sz) is the average number of I/Os waiting in the queue.

Assume a storage system handles each I/O in exactly 10 milliseconds and there are always 10 I/O’s in the host queue. Then the Average Wait will be 10 x 10 = 100 milliseconds.
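This back-of-the-envelope relation (queue depth × per-I/O service time ≈ average wait) can be checked with a line of awk; the 10 ms and 10 I/Os figures are the assumed example values from above:

```shell
# await ≈ avgqu-sz * svctm when the queue is steadily full
awk 'BEGIN { svctm = 10; queue = 10; printf "%d ms\n", svctm * queue }'
# prints: 100 ms
```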

Now let's analyze the iostat output more deeply.

Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
xvda       0.04    2.84   0.12   3.37    4.64   45.91    14.50     0.05  15.05  14.14   4.93

The above is the iostat output for a specific device, xvda.

r/s and w/s—the number of read and write requests issued by processes to the device
rsec/s and wsec/s — sectors read/written (as each sector is 512 bytes).
rkB/s and wkB/s — kilobytes read/written.
avgrq-sz – average sectors per request, computed as
(rsec/s + wsec/s) / (r/s + w/s) = (4.64 + 45.91) / (0.12 + 3.37) ≈ 14.5

If you want it in kilobytes, divide by 2 (each sector is 512 bytes).
If you want it separately for reads and writes, do your own math using rkB/s and wkB/s.

avgqu-sz — average queue length for the device.
await — the time an I/O spends in the host I/O queue, including service time.
%util — percentage of elapsed time during which I/O requests were being issued to the device. The formula is

%util = (r/s + w/s) * svctm / 10 = (0.12 + 3.37) * 14.14 / 10 = 4.93
(dividing by 10 converts milliseconds of busy time per second into a percentage)
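Plugging the xvda sample values into both formulas reproduces iostat's own columns (small differences come from iostat rounding the values it prints):

```shell
# avgrq-sz = (rsec/s + wsec/s) / (r/s + w/s)
awk 'BEGIN { printf "avgrq-sz = %.2f\n", (4.64 + 45.91) / (0.12 + 3.37) }'
# prints: avgrq-sz = 14.48   (iostat reported 14.50)

# %util = (r/s + w/s) * svctm / 10
awk 'BEGIN { printf "%%util = %.2f\n", (0.12 + 3.37) * 14.14 / 10 }'
# prints: %util = 4.93
```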

Once we know which disk is having issues, we need to find the process causing the high I/O.

Find process Causing High I/O

The iotop command tells us exactly which process is causing the high I/O, but it may not be available on many Linux systems or may not have been installed by the system admin. If we run it we see:

Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN    IO>     COMMAND
    1  be/4  root      0.00 B/s    0.00 B/s   0.00 %   0.00 %  init
    2  be/4  root      0.00 B/s    0.00 B/s   0.00 %   0.00 %  [kthreadd]

We need to check the “IO>” column, which shows the percentage of time each process spends waiting on I/O.

But as I said, iotop is not available on every machine. For similar information we can use the “ps” command, which is available on all Linux machines.

Every process in Linux has a state, one of:

D uninterruptible sleep (usually IO)
R running or runnable (on run queue)
S interruptible sleep (waiting for an event to complete)
T stopped, either by a job control signal or because it is being traced.
W paging (not valid since the 2.6.xx kernel)
X dead (should never be seen)
Z defunct ("zombie") process, terminated but not reaped by its parent.
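A quick way to see how many processes are currently in each state (a sketch using ps's standard -eo output):

```shell
# Count processes per state letter; D-state entries are the I/O waiters.
# cut -c1 keeps only the leading state letter (ps may append flags like "s").
ps -eo state= | cut -c1 | sort | uniq -c | sort -rn
```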

A process in uninterruptible sleep (the “D” state) is usually waiting for I/O. Just run this loop on the command line:

for x in `seq 1 1 10`; do ps -eo state,pid,cmd | grep "^D"; echo "----"; sleep 5; done

The above loop runs 10 times, every 5 seconds, and prints the processes that are in the “D” state. Once we have narrowed down those processes, we can get more details from the /proc file system.

Take the PID of a process in the “D” state and check its I/O counters:

Dev:vx1379:hello-~ $ cat /proc/<PID>/io
rchar: 58378821
wchar: 25851061
syscr: 154249
syscw: 13221
read_bytes: 32673792
write_bytes: 3541221376
cancelled_write_bytes: 16384

Here we can see the amounts of data being read and written.
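The same counters can be used to rank all processes by how much they have actually written to storage. A sketch that skips any /proc entries we lack permission to read:

```shell
#!/bin/sh
# Top writers by write_bytes from /proc/<pid>/io.
for d in /proc/[0-9]*; do
    [ -r "$d/io" ] || continue
    awk -v pid="${d#/proc/}" '/^write_bytes:/ { print $2, pid }' "$d/io"
done | sort -rn | head -5
```

Run as root this covers every process; as an ordinary user it only sees your own.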

Finding which files are being written to heavily
Once we have the process causing the high I/O, we need to find out what it is doing to generate those operations.

For this we can use the “lsof” command to see which files are currently open and being read or written. We can list the files held open by the PID of the process in the “D” state.

Some more snippets:

Get the I/O wait time using
iostat -c|awk '/^ /{print $4}'
3.01

Grep the top output for the CPU status:
Dev:vx1379:hello-~ $ top -p 1 -b -d 1 | awk '
>     /^top/{a=$0}
>     /^Cpu/{
>         sub(/top - /,"Time:",a);
>         sub(/up.*$/,"",a);
>         printf "%s %s\n",a,$0;a=""}'
Time:04:05:43  Cpu(s):  0.8%us,  0.5%sy,  0.1%ni, 95.4%id,  3.0%wa,  0.0%hi,  0.0%si,  0.1%st
Time:04:05:44  Cpu(s):  0.0%us,  0.5%sy,  0.0%ni, 99.0%id,  0.5%wa,  0.0%hi,  0.0%si,  0.0%st
... (further samples follow, one per second)


More to come, happy learning!

Monday, March 3, 2014

Uptime explained

When we talk about high load on a system, we are talking about its load average. In this article we will see how the load average can be used to identify issues. The load average can be obtained by running the uptime command:

omhq199e:dwls999-~ $ uptime
 01:03:24 up 15 days, 14:30,2 users,  load average: 1.30, 1.32, 0.89

Or we can get the same details using the top command:

omhq199e:dwls999-~ $ top
top - 01:03:26 up 15 days, 14:30,  2 users,  load average: 1.20, 1.30, 0.88

In both the above cases, the load average can be seen as
load average: 1.20, 1.30, 0.88

These numbers represent the system's load average over the last 1, 5 and 15 minutes. They show how the system is handling the processes that need CPU time: the average number of processes that are running or waiting for the CPU over each of those intervals.

If the load is 0, the system is idle.
If the load is 1, then on average one process is using or waiting for the CPU.
If the load is 2, then on average two processes are using or waiting for the CPU.

In more detail: consider that I am a bridge operator and I want cars to move smoothly across the bridge. If I say the load is 0, there are no cars on the bridge. A load between 0 and 1 means traffic is flowing normally. A load of exactly 1 means the bridge is full but still flowing; if more cars arrive, the load increases.

If I say the load is 2, it means there is one bridge-load of cars crossing and another bridge-load waiting to cross.

The cars here are like processes in Linux: the load rises when processes are waiting for CPU time, so ideally the load should stay below 1.0.

So is 1.00 the ideal load? What load should we consider serious? The answer depends on how many CPUs and cores the system has.

For a single-core system, a load average up to 1.0 is considered normal; for a dual-core system, up to 2.0. So a quad-core system with a load average of 4.0 is working at capacity, and anything above that means the system is overloaded and we need to find the reasons.
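This per-core rule of thumb can be scripted: divide the 1-minute load by the CPU count. A sketch assuming Linux's /proc/loadavg and the nproc utility:

```shell
#!/bin/sh
load=$(cut -d' ' -f1 /proc/loadavg)   # 1-minute load average
cores=$(nproc)                        # logical CPUs available
awk -v l="$load" -v c="$cores" \
    'BEGIN { printf "load %.2f on %d CPUs = %.0f%% of capacity\n", l, c, 100 * l / c }'
```

Anything consistently above 100% of capacity is worth investigating.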

Average to Consider?
When we run the uptime command we see the load average for 1, 5 and 15 minutes; which one should we consider?

We need to concentrate on the 5- and 15-minute averages: a spike in the 1-minute value is acceptable, but if the 5- or 15-minute average is high we should treat it as a server issue and find the cause.

How do I find out how many cores are available?

[root@vx148d tmp]$ grep 'model name' /proc/cpuinfo | wc -l
2

[root@vx148d tmp]$ egrep -e "core id" -e ^physical /proc/cpuinfo|xargs -l2 echo|sort -u
physical id : 0 core id : 0
physical id : 1 core id : 0
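Note that counting “model name” lines counts logical processors (hyperthreads included), not physical cores. Some simpler alternatives, as a sketch:

```shell
#!/bin/sh
nproc                               # logical CPUs usable by this process
getconf _NPROCESSORS_ONLN           # logical CPUs currently online
grep -c ^processor /proc/cpuinfo    # same count, straight from /proc
```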


More to come, happy learning!