Red Hat Linux maintains an RPM (Red Hat Package Manager) database that is accessed whenever an rpm command is issued and modified whenever a new package is installed.
The RPM database is located in /var/lib/rpm. The files in this directory are binary files holding information about the installed packages.
In this article we will see how to back up the RPM database on Red Hat Linux. Before taking a backup we need to make sure that no process is currently using any file within this directory:
lsof | grep /var/lib/rpm
Any process currently using files in this location should be stopped before continuing.
Once we are sure that no files are being accessed, the next step is to remove any stale lock files. These lock files are left behind by a process that attempted to access the RPM database. To list them, use:
ll /var/lib/rpm
Lock files begin with a double underscore followed by db (__db). If there are any lock files we can remove them using:
rm -f /var/lib/rpm/__db*
Once the lock files are removed we can make a backup of the RPM database using:
tar czvf /root/Desktop/rpmDatabase.tar.gz /var/lib/rpm/
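The steps above can be rolled into one small sketch. The default paths mirror the article; the function name is mine, and both paths are parameters so the sketch can be pointed at any directory:

```shell
#!/bin/sh
# Sketch of the backup procedure: check for users of the database
# directory, clear stale lock files, then archive the directory.
backup_rpmdb() {
    dbdir="${1:-/var/lib/rpm}"                      # RPM database directory
    out="${2:-/root/Desktop/rpmDatabase.tar.gz}"    # backup destination

    # refuse to continue while any process still has files open there
    if lsof +D "$dbdir" >/dev/null 2>&1; then
        echo "processes are still using $dbdir; stop them first" >&2
        return 1
    fi

    # remove stale lock files (__db.001, __db.002, ...)
    rm -f "$dbdir"/__db*

    # archive the database directory
    tar czf "$out" "$dbdir" 2>/dev/null && echo "backup written to $out"
}
```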
In this article, we will see how to configure a service to start after the boot process is completed. We use RHEL 5 as the OS and Tomcat as the service to start.
For a service to start automatically after the boot process is completed, the script that starts the process must be available in /etc/init.d. First we write a script which starts the process. The script looks like this:
#!/bin/bash
#
# tomcat        This shell script takes care of starting and stopping Tomcat.
# chkconfig: 2345 90 10
# description: Tomcat Server
# processname: startup.sh
#
# Source function library.
. /etc/rc.d/init.d/functions
RETVAL=0
prog="Tomcat"
start() {
    # Start daemons.
    echo -n $"Starting $prog: "
    su - root -c "/usr/1tmc/bin/startup.sh &" > /usr/1tmc/logs/catalina.out 2>&1
    if [ $? -eq 0 ]; then
        success
    else
        failure
    fi
    echo
    RETVAL=$?
}
stop() {
    # Stop daemons.
    echo -n $"Shutting down $prog: "
    su - root -c "/usr/1tmc/bin/shutdown.sh &" > /usr/1tmc/logs/catalina.out 2>&1
    if [ $? -eq 0 ]; then
        success
    else
        failure
    fi
    echo
    RETVAL=$?
}
restart() {
stop
start
}
# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart}"
        RETVAL=1
esac
exit $RETVAL
The script is now ready; it can start, stop, and restart Tomcat. Save it as tomcat.
The next step is to copy this tomcat script to /etc/init.d and make it executable. Once the service is registered with chkconfig, symlinks are created in the respective run-level directories. Since the current run level is 5, I can see the symlink created in /etc/rc5.d:
[root@vx111a rc5.d]# ll | grep tom
lrwxrwxrwx 1 root root 16 Feb 2 17:22 S90tomcat -> ../init.d/tomcat
Now that the script is copied, the next step is to add this service so that it starts after the boot process. This can be done with the chkconfig command:

chkconfig --add tomcat
chkconfig tomcat on
This ensures that the tomcat service is on in run levels 2, 3, 4 and 5. If we want to enable the service only at specific levels we can use:
chkconfig --level 35 tomcat on
This makes sure the service gets started in run levels 3 and 5. With that, tomcat is set to start after a reboot. We can start the service manually using:
service tomcat start
Now reboot your system to verify that tomcat starts automatically.
This article explains how to configure a cluster using multiple Tomcat instances and perform session replication between them. We will be using in-memory session replication. For this demo I have chosen RHEL 5.4 and Tomcat 6.
To configure clustering in Tomcat, we follow this sequence:
1. What is Multicast?
2. Test Whether Your Kernel supports Multicast.
3. Configure Apache Http Server with mod_jk.
4. Configure Multiple Instances of Tomcat.
5. Configure Clustering in Tomcat.
6. Make your application distributable.
7. Session Replication Rules
8. Deploy the sample application.
9. Test.
10. Basic Issues.
1. What is Multicast?
When the same information must be sent to multiple hosts on the Internet or a LAN, that is multicasting (typically carried over UDP).
This is in contrast to unicast, where there is one sender and only one receiver.
As we know, IP addresses are divided into classes based on the high-order bits of the 32-bit address:

|0|       Class A   0.0.0.0   - 127.255.255.255
|1 0|     Class B   128.0.0.0 - 191.255.255.255
|1 1 0|   Class C   192.0.0.0 - 223.255.255.255
|1 1 1 0| Class D   224.0.0.0 - 239.255.255.255 (multicast)
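The class ranges above make it easy to check whether an address is multicast by looking only at the first octet; a small sketch (the function name is mine):

```shell
#!/bin/sh
# Return success if a dotted-quad IPv4 address falls in the class D
# (multicast) range, i.e. its first octet is between 224 and 239.
is_multicast() {
    first_octet=${1%%.*}    # everything before the first dot
    [ "$first_octet" -ge 224 ] && [ "$first_octet" -le 239 ]
}
```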
2. Test Whether Your Kernel Supports Multicast.
Run ifconfig eth0 and note the MULTICAST attribute in the third line of the eth0 properties. If it is not present, it's possible that your kernel has not been compiled with multicast support.
Once we are sure that our kernel supports multicasting, the next step is to add a multicast route, which can be done with:
route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
Once this is done, you can verify by pinging a multicast address, for example:
ping 224.0.0.8
You can also check the multicast route using:
[root@vx111a bin]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
224.0.0.0       0.0.0.0         240.0.0.0       U     0      0        0 eth0
3. Configure Apache Http Server with mod_jk.
In httpd.conf, add the following line at the bottom:
Include conf/mod_jk.conf
With this, the configuration on the Http Server is done: Apache is now set up with mod_jk (load-balancing support).
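The article refers to a worker.properties file but does not show one. Below is a minimal sketch consistent with the AJP ports (11009/12009) and jvmRoute names (web1/web2) used later in this article; the hostnames and the balancer worker name are assumptions:

```properties
# worker.properties - two AJP workers plus a load balancer (sketch)
worker.list=loadbalancer

worker.web1.type=ajp13
worker.web1.host=localhost
worker.web1.port=11009

worker.web2.type=ajp13
worker.web2.host=localhost
worker.web2.port=12009

# sticky sessions keep each session on the node named in its JSESSIONID
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=web1,web2
worker.loadbalancer.sticky_session=1
```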
4. Configure Multiple Instances of Tomcat.
The next step is configuring the Tomcat instances; the steps are summarized below.
Download Tomcat to a location.
Export CATALINA_HOME and CATALINA_BASE and add these to /root/.bashrc:
export CATALINA_HOME=/usr/jboss-ews-1.0/tomcat6;
export CATALINA_BASE=/usr/jboss-ews-1.0/tomcat6;
Once added, use the source command to load the changes: source /root/.bashrc
Now create 2 directories in another location (I have created tmc1 and tmc2 in /usr/local).
Once both locations are created, copy the directories bin, conf, lib, logs, temp, webapps, and work into each.
In the bin directory, remove all files except startup.sh and shutdown.sh (on Windows, keep the respective .bat files).
Clean out startup.sh and shutdown.sh and replace their contents with the following.
startup.sh (shown for the tmc2 instance; use /usr/local/tmc1 in the first):
export CATALINA_BASE=/usr/local/tmc2
export CATALINA_HOME=/usr/jboss-ews-1.0/tomcat6
/usr/jboss-ews-1.0/tomcat6/bin/startup.sh
And shutdown.sh
export CATALINA_BASE=/usr/local/tmc2
export CATALINA_HOME=/usr/jboss-ews-1.0/tomcat6
/usr/jboss-ews-1.0/tomcat6/bin/shutdown.sh
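Since the two wrappers differ only in CATALINA_BASE, they can be generated; a sketch (the helper name is mine, and both paths are parameters):

```shell
#!/bin/sh
# Write startup.sh/shutdown.sh wrappers for one Tomcat instance.
#   $1 = shared CATALINA_HOME   (e.g. /usr/jboss-ews-1.0/tomcat6)
#   $2 = instance CATALINA_BASE (e.g. /usr/local/tmc1)
make_instance_wrappers() {
    home="$1"
    base="$2"
    mkdir -p "$base/bin"
    for cmd in startup shutdown; do
        # unquoted EOF: $base and $home expand now, so the generated
        # scripts contain literal paths
        cat > "$base/bin/$cmd.sh" <<EOF
export CATALINA_BASE=$base
export CATALINA_HOME=$home
$home/bin/$cmd.sh
EOF
        chmod +x "$base/bin/$cmd.sh"
    done
}
```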
Once these changes are done, go to the conf directory and modify server.xml according to your port needs. The 2 main ports to change in server.xml are the shutdown port and the HTTP port, but it is good practice to change all ports so the instances do not clash.
Now we have 2 instances of Tomcat running. For clustering we need to set the ports as follows; these are the changes I made in both Tomcat instances:
            Shutdown   HTTP   AJP Connector   jvmRoute
Tomcat 1    8005       8080   11009           web1
Tomcat 2    8105       8180   12009           web2
These changes are made in each instance's conf/server.xml and must match what is configured in the worker.properties file (ports, jvmRoute, ...).
The jvmRoute attribute of the Engine element allows the load balancer to match a request to the exact JVM responsible for its session. This works by appending the name of the JVM to the JSESSIONID of the request and matching it against the worker name provided in the worker.properties file.
With this we are done configuring multiple instances of Tomcat. The next step is to configure the cluster.
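The suffix matching works because session cookie values take the form <session-id>.<jvmRoute>; a sketch of extracting the route (the function name is mine):

```shell
#!/bin/sh
# Print the jvmRoute suffix of a JSESSIONID value such as
# "A1B2C3D4.web1"; fail if the value carries no route suffix.
jvmroute_of() {
    case "$1" in
        *.*) echo "${1##*.}" ;;   # text after the last dot
        *)   return 1 ;;
    esac
}
```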
5. Configure Clustering in Tomcat.
For configuring the cluster we need to add a Cluster element inside the Engine element in tomcat{1,2}/conf/server.xml, and then restart the Tomcat instances.
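The article does not show the Cluster element itself; the snippet below is along the lines of the default SimpleTcpCluster configuration from the Tomcat 6 clustering documentation, trimmed down. Treat the multicast address/port and the receiver port as values to adapt, not as fixed:

```xml
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
  <!-- DeltaManager: replicate sessions to all members -->
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <!-- membership via multicast heartbeats -->
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4" port="45564"
                frequency="500" dropTime="3000"/>
    <!-- receiver port must differ per instance on the same host -->
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto" port="4000" autoBind="100"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
  </Channel>
  <!-- the ReplicationValve filters requests out of replication -->
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
```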
Since we are going for in-memory session replication, we need to configure 2 elements:
Cluster: this element is responsible for the actual session replication, which includes sending new session information to the group, receiving incoming session information, and managing group membership.
Replication valve: this element helps reduce session traffic by filtering certain requests out of session replication.
SimpleTcpCluster is the implementation available in Tomcat for in-memory replication. It uses Apache Tribes for communication and a heartbeat mechanism to determine group membership.
The SimpleTcpCluster provides us with 2 managers:
DeltaManager: replicates sessions across all Tomcat instances.
BackupManager: replicates each session from its primary node to a single backup node.
We can configure a manager depending on our needs.
Apache Tribes, the group-communication layer used by SimpleTcpCluster, establishes and maintains membership, handles server crash and recovery, offers guaranteed message delivery between members, and carries out the replication of session data to all members.
6. Make Your Applications Distributable
For your applications to work in a clustered environment, they need the <distributable/> element added to their web.xml file.
The other option is to simply add the distributable attribute to the relevant application's Context element, as follows:
<Context distributable="true"/>
7. Session Replication Rules
Session attributes must implement java.io.Serializable.
HttpSession.setAttribute() must be called any time changes are made to an object that belongs to a session, so that the session replicator can distribute the changes across the cluster.
The sessions must not be so big that they overload the server with traffic when copied.
8. Deploy the sample application.
Now that the cluster configuration is done, we need to deploy a sample application.
Create a web application. Download the JSP file from here and add it to the web application. Make sure you add the <distributable/> element to its web.xml file.