
Monday, February 4, 2013

Here Document & Here String


Here Document

We are pretty much aware of redirection in Linux. When we use

# echo 'hello world' >output
# cat <output

The first line writes "hello world" to the file "output"; the second reads it back and writes it to standard output (the terminal).

Then there are "here" documents:

# cat <<EOF
> hello
> world
> EOF

A "here" document is essentially a temporary, nameless file that is used as input to a command, here the "cat" command.

A less commonly seen form of the here document is the "here" string:
# cat <<<'hello world'

In this form the string following the "<<<" becomes the content of the "here" document.

This type of redirection tells the shell to read input from the current source until a line containing only the delimiter word (here, EOF) is seen. The delimiter word itself is not subjected to variable or parameter expansion, arithmetic expansion, pathname expansion, or command substitution. All of the lines read up to that point are then used as the standard input for the command. Files processed in this manner are commonly called here documents.
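
Whether the body of a here document is expanded depends on whether the delimiter word itself is quoted. A quick sketch:

name="world"
# unquoted delimiter: the body undergoes parameter expansion
cat <<EOF
hello $name
EOF
# quoted delimiter: the body is taken literally
cat <<'EOF'
hello $name
EOF

The first cat prints "hello world"; the second prints "hello $name" verbatim.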

Here Docs

wc -w <<EOF
> This is a test.
> Apple juice.
> 100% fruit juice and no added sugar, colour or preservative.
> EOF

Here Strings
command <<< "$word"
wc -w <<< "This is a test."
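
Here strings are also a tidy way to feed a variable to a command or to split words with read. A small sketch (the variable names are just for illustration):

sentence="This is a test."
wc -w <<< "$sentence"
# read can split a here string into separate variables
read first second rest <<< "one two three four"
echo "$first | $second | $rest"

This prints 4, then "one | two | three four".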

File Descriptors In Linux


As we all know, everything in Linux is a file. Any program that runs on a Linux machine has access to something called a "File Descriptor Table". This table acts as a map, providing the process access to files, directories, unnamed pipes, named pipes, sockets and kernel-level data structures. This table exists for each process.

There are 3 standard file descriptors accessible inside the bash shell. They are:
0 - Input - Keyboard (stdin) - Standard input
1 - Output - Screen (stdout) - Standard output
2 - Error - Screen (stderr) - Standard error

The above three numbers are standard POSIX numbers, also known as file descriptors. Every Linux command opens at least these streams to talk with users or other system programs. All three are character devices. Character devices provide a mechanism to send a stream of characters or bytes. A stream provides sequential access, that is, it delivers output in the order in which it was received; this is also known as a FIFO (First-In First-Out) pipe.
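
On Linux you can see these descriptors for yourself by listing a process's fd directory. A quick sketch (here /proc/self refers to the ls process itself, which inherits the shell's descriptors):

# 0, 1 and 2 appear as symlinks to the terminal device
ls -l /proc/self/fd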

stdin (0) - Read input from a file (the default is the keyboard):
cat < filename
stdout (1) - Send data to a file (the default is the screen):
date > outputfile
cat outputfile
stderr (2) - Send all error messages to a file (the default is the screen):
rm /tmp/filename 2> error.txt
cat error.txt
Using the above FD numbers, 2> redirects file descriptor 2, i.e. standard error. &n is the syntax for redirecting to a specific open file descriptor: because 0 (stdin), 1 (stdout) and 2 (stderr) are file descriptors, the shell requires an ampersand in front of them for redirection. This duplicates the file descriptor, effectively merging the two streams of information. If you wrote just "1" with no ampersand, the shell would create a file named "1" and redirect stderr output to it.

Now, when you execute the command below, it gives an error:
Dev:vx1423:djbs002-jas $ ls -l myfile.txt > test.txt
ls: myfile.txt: No such file or directory

We can redirect the error to a file like this:
Dev:vx1423:djbs002-jas $ ls -l myfile.txt 2> test.txt
Check the contents of the error file:
Dev:vx1423:djbs002-jas $ cat test.txt
ls: myfile.txt: No such file or directory


For example, 2>&1 redirects 2 (standard error) to 1 (standard output); if 1 has been redirected to a file, 2 goes there too.
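
Because redirections are processed left to right, the order matters. A minimal sketch:

# stdout is pointed at the file first, then 2>&1 duplicates it,
# so both streams end up in out.log
ls nosuchfile > out.log 2>&1

# here 2>&1 copies stdout while it still points at the terminal,
# so errors stay on screen and only stdout goes to the file
ls nosuchfile 2>&1 > out.log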

Character   Action
>           Redirect standard output
2>          Redirect standard error
2>&1        Redirect standard error to standard output
<           Redirect standard input
|           Pipe standard output to another command
>>          Append standard output to a file
2>&1 |      Pipe standard output and standard error to another command
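
As the last row shows, to pipe both streams you duplicate stderr into stdout before the pipe. A quick sketch:

# both output and error messages travel through the pipe to grep
ls nosuchfile /etc 2>&1 | grep nosuchfile

# newer versions of bash (4.0 and later) also accept |& as a shorthand for 2>&1 |
ls nosuchfile /etc |& grep nosuchfile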

/dev/null & /dev/zero
All data written to the /dev/null or /dev/zero special files is discarded by the system. Use /dev/null to throw away any unwanted output from a program or command.

command >/dev/null
This syntax redirects the command's standard output to /dev/null, where it is discarded.

command 2>/dev/null
This syntax redirects the command's error output to /dev/null, where it is discarded.

command &>/dev/null
This syntax redirects both standard output and error output to /dev/null, where they are discarded.

As an example:
Localhost:Root$ grep root /etc/passwd && echo "Found" || echo "Not Found"
root:x:7282:1566:Nothing:/privdir/root:/bin/bash
Found

This command shows the standard output as well as the result. If you need only the result, then send the output to /dev/null:

Localhost:Root $ grep root /etc/passwd > /dev/null && echo "Found" || echo "Not Found"
Found

An example of standard error redirection:
command-name 2>error.log

LocalHost:Root$ find /home -name .profile 2>/tmp/error
LocalHost:Root $ cat /tmp/error
find: /home/lost+found: Permission denied
find: /home/dsig999: Permission denied

Redirect Script Errors

You can redirect script errors to a log file called scripts.err:
./script.sh 2>scripts.err
/path/to/example.pl 2>scripts.err

Append To Error Log
You can append standard error to the end of the error.log file using the >> operator:
command-name 2>>error.log
./script.sh 2>>error.log
/path/to/example.pl 2>>error.log

2>&1

The 1 denotes standard output (stdout). The 2 denotes standard error (stderr).

command-name >/dev/null 2>&1

So 2>&1 says to send standard error to wherever standard output is being redirected. Since that destination is /dev/null here, this amounts to ignoring all output.

Input redirection can be useful if you have written a program which expects input from the terminal and you want to provide it from a file.

$ myprog < inputfile > outputfile

To redirect standard error and output to different files (note that grouping is not necessary in Bourne shell):

$ cat myfile > outputfile 2> errorfile

Besides 0, 1 and 2, there are also file descriptors from 3 to 1023 which are free to use. For each one, a symlink in /dev/fd is created as soon as it is initialized.

To create a new FD, we need to use the "exec" command.

The internal exec command replaces the current shell process with the specified command. Normally, when you run a command, a new process is spawned. The exec command does not spawn a new process; instead, the current process is overlaid with the new command. When exec is given only redirections and no command, those redirections are applied to the current shell itself, which is what makes it useful for managing file descriptors.
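
To see the replacement behavior without losing your login shell, try it in a subshell. A minimal sketch:

# exec overlays the subshell with date, so the echo never runs
( exec date; echo "this line is never reached" )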

You can assign a file descriptor to an output file with the following syntax:
exec fd> output.txt
where fd >= 3

Dev:vx1423:djbs002-jas $ exec 4> /config/jboss/ews/1.0/domains/jas/noter
Dev:vx1423:djbs002-jas $ echo "This is a test" >&4
Dev:vx1423:djbs002-jas $ cat noter
This is a test
exec 4<&-

Assign a file descriptor to a file for input:
exec fd< input.txt
exec 3< /etc/resolv.conf
# Execute the cat command and read input from
# file descriptor (fd) 3, i.e. read input from the /etc/resolv.conf file
cat <&3
# Close fd # 3
exec 3<&-

To create a read/write FD for a file, use:

exec 3<> file
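
A read/write descriptor pairs nicely with bash's /dev/tcp pseudo-files (a bash-specific feature). A sketch, assuming outbound network access to example.com:

# open a TCP connection to example.com port 80 on fd 3, for both reading and writing
exec 3<> /dev/tcp/example.com/80
# write an HTTP request to the descriptor...
printf 'HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3
# ...then read the response back from the same descriptor
cat <&3
# close fd 3
exec 3>&-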

You can use exec to perform I/O redirection commands. The following examples illustrate the use of exec for redirection purposes.

exec 3< inputfile # Opens inputfile with file descriptor 3 for reading.
exec 4> outputfile # Opens outputfile with file descriptor 4 for writing.
exec 5<&0 # Makes fd 5 a copy of fd 0 (standard input).
exec 6>&p # Attach fd 6 to co-process (ksh co-process syntax).


Controlling Resources


Many times as a sysadmin, I see users create scripts or processes that cause a system failure. Often this happens without the user understanding what the scripts or processes actually do.

One of the dangers associated with the thin client model is that a runaway process might eat up all of the available system memory and/or CPU on the host system. When this happens, performance on that system can degrade, resulting in system hangs, freezes, and a host of other generally undesirable consequences.

sysctl.conf ( System Level )
The number of concurrently open file descriptors throughout the system can be changed via the /etc/sysctl.conf file under Linux operating systems.

So you can increase the maximum number of open files by setting a new value in the kernel variable /proc/sys/fs/file-max as follows (log in as root):

sysctl -w fs.file-max=100000

We can also edit the sysctl.conf file and add:
fs.file-max = 100000

Save and close the file. Users need to log out and log back in again for the changes to take effect, or just type the following command:
# sysctl -p

Check the changes using:
sysctl fs.file-max
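
To see how many file handles are actually in use against that maximum, read /proc/sys/fs/file-nr, which reports three numbers: allocated handles, free allocated handles, and the system maximum:

cat /proc/sys/fs/file-nr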

Limits.conf ( User Level )
limits.conf is a file located in /etc/security. This file provides the ability to specify user- and group-level limits on specific resources like memory, CPU, etc. Limits set in this file are applied on a per-user or per-group basis.

The basic syntax for limits.conf consists of individual lines with values of the following types:
(domain) (type) (item) (value)
where domain is a user or group, type refers to a hard or soft limit, item refers to the resource being limited, and value is the value associated with the limit being set. For example, setting the following value:

me hard priority 20

places a hard limit on the priority with which jobs are scheduled for a user named 'me'. In this case, user 'me' is always scheduled at the lowest possible priority.

Some more examples include:

guest hard cpu 10 : limits the 'guest' user to a maximum of 10 minutes of CPU time
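
Putting the syntax together, here is a hypothetical fragment of /etc/security/limits.conf (the group name 'developers' is just for illustration; the @ prefix denotes a group):

# domain        type   item      value
@developers     soft   nproc     100
@developers     hard   nproc     150
guest           hard   cpu       10
me              hard   priority  20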

Ulimit
In Linux, there is a way to restrict what a user is allowed to do. This can be achieved using the 'ulimit' command. Because it is available as part of the shell, you can include it in an individual user's profile and have it apply to them, or place your restrictions in /etc/security/limits.conf and have them apply to all users.

One important thing is that ulimit can't be used to manage file space. It is specifically used to manage process resources, and we can limit them in three ways:
Unlimited (the default)
Hard limit (the user cannot exceed this)
Soft limit (the user may exceed this, up to the hard limit)

These are selected with the -H and -S parameters that we pass. A hard limit cannot be increased, while a soft limit can be increased until it reaches the value of the hard limit.

When we run the 'ulimit -a' command

LocalHost:Root -~ $ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) 1073741824
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 32832
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) 1073741824
open files (-n) 8192
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 32832
virtual memory (kbytes, -v) 1073741824
file locks (-x) unlimited

If there are no hard or soft restrictions set, the response returned will simply be "unlimited". This does not mean there are no limits.

Here is an explanation of the output:

CPU TIME - Limits the number of CPU seconds a process can use (not clock seconds). CPU time is the amount of time the CPU actually spends executing processor instructions, and is often much less than the total program run time.

VIRTUAL MEMORY SIZE - The most useful of the memory-related limitations, because it includes all types of memory, including the stack, the heap, and memory-mapped files. Specified in kilobytes, per process.

DATA SEGMENT SIZE - Limits the amount of memory that a process can allocate on the heap. Specified in kilobytes, per process.

STACK SIZE - Limits the amount of memory a process can allocate on the stack. Specified in kilobytes.

FILE SIZE - Limits the maximum size of any one file a process can create. Specified in 512-byte blocks.

MAX USER PROCESSES - limits the number of processes the current user is permitted to have in the process table at one time. Attempts to start additional processes will fail.

OPEN FILES - limits the number of file descriptors belonging to a single process. "File descriptors" includes not only files but also sockets for Internet communication. Attempts to open additional file descriptors will fail.

LOCKED MEMORY - This parameter limits the maximum amount of memory that can be "locked down" to a specific address in physical memory by a given process. In practice, only the root user can lock memory in this fashion. Specified in kilobytes in the bash shell, as found in Linux and many other systems.

MAX RESIDENT SET SIZE - this parameter limits the amount of memory that can be "swapped in" to physical RAM on behalf of any one process. Specified in kilobytes.

MAX TOTAL SOCKET BUFFER SIZE - this parameter limits the total amount of memory that may be taken up by buffers holding network data on behalf of a given process. Specified in bytes.

PIPE SIZE - when two Unix processes communicate via a pipe or FIFO (first in first out) buffer, as in the simple case of paging through a directory listing with the command "ls | more", the output of the first command is buffered before transmission to the second. The size of this buffer, in bytes, is the pipe size. This is typically not an adjustable parameter, except at kernel compile time.

CORE FILE SIZE - Limits the size of a "core" file left behind when a process encounters a segmentation fault or other unexpected fatal error. Core files are images of the entire memory space used by the process at the time of the crash, and can be used to examine the state of the process. Specified in 512-byte blocks.

Changing the Values
These values can be changed using 'ulimit' itself.

For example, to reduce the number of processes a user can have from the default to 100, you can use the command

ulimit -u 100

and then try 'ulimit -a' for the updated result.

To set the hard limit and soft limit, use:

Local host-~ $ ulimit -S -u 100
Local host-~ $ ulimit -S -u
100

Local host-~ $ ulimit -H -u 8192
Local host-~ $ ulimit -H -u
8192

To set a value to unlimited, use the word itself: ulimit -u unlimited .

To see only one value, specify that parameter. For example, to see the soft value of user processes, enter: ulimit -Su
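
The same pattern works for any parameter. For example, to compare the soft and hard limits on open files:

ulimit -Sn    # soft limit on open files
ulimit -Hn    # hard limit on open files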


Happy Learning