If the application is taking longer and longer to execute, or the Operating System is getting progressively slower, there may be an Out of Memory condition. In other words, memory is being allocated but is not being returned when it is no longer used or needed. Eventually the application or the system will run out of memory, causing the application to terminate abnormally.
Out of Memory
Normally an Out of Memory error is thrown when there is insufficient space available to satisfy an allocation in the Java heap or in a particular area of the heap. The garbage collector cannot make any further space available for new objects, and the heap cannot be expanded any further at this point.
Whenever an OOM is thrown, a stack trace is also printed. To diagnose an Out of Memory condition, it is good to analyze the stack trace and the exact error message that was generated.
After an OutOfMemoryError condition, the JVM can be in an unknown or unreliable state. Even if the JVM appears to be operating normally, it is recommended that you restart the Java application to ensure a clean environment.
OOM: Out of Heap Space
This message says that an object could not be allocated in the Java heap. There are a couple of possible cases here, and at this point it is not necessarily certain that there is a memory leak. The out-of-heap-space condition can also be due to a wrong configuration where too small a heap was configured for the application. It is always good to tune the application (load test) before finalizing the heap size for the application.
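For example (the jar name here is purely hypothetical), the heap is sized with the -Xms and -Xmx options, and the chosen values should be validated under load before going to production:
java -Xms512m -Xmx1024m -jar myapp.jar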
In other cases, when an application runs for a long time, it may hold references to objects that can never be garbage collected. In these cases the memory used by those objects cannot be reclaimed by the garbage collector, which causes memory usage to rise until an Out of Memory error occurs.
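As a hedged illustration of this pattern (all class and method names here are made up, not taken from any real application): a static collection that only ever grows keeps every added object reachable, so the garbage collector can never reclaim them.

import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // The static list lives as long as the class is loaded, so everything
    // added to it stays strongly reachable and can never be collected.
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    public static void remember(byte[] data) {
        CACHE.add(data); // references are added but never removed
    }

    public static void main(String[] args) {
        while (true) {
            // Eventually fails with java.lang.OutOfMemoryError: Java heap space
            remember(new byte[1024 * 1024]);
        }
    }
}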
The APIs that are called by the application could also hold object references, causing memory leaks. Finalize methods can cause out-of-memory conditions as well. When a class has a finalize method, objects of that type are not reclaimed immediately during garbage collection; instead they are added to a finalization queue that is processed by a finalizer daemon thread after garbage collection. If the finalizer thread cannot keep up with the finalization queue, the heap fills with objects awaiting finalization and Out of Heap Space errors can follow.
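A minimal sketch of the finalizer case (the class name is invented for illustration): because the class overrides finalize(), each unreachable instance must first pass through the finalization queue, and a slow finalizer lets that queue grow until the heap is exhausted.

public class SlowFinalizer {
    @Override
    protected void finalize() throws Throwable {
        // The single finalizer thread runs this for every dead instance;
        // sleeping makes it fall behind the allocation rate.
        Thread.sleep(100);
    }

    public static void main(String[] args) {
        while (true) {
            new SlowFinalizer(); // garbage is produced faster than it can be finalized
        }
    }
}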
Find out objects available for finalization
Dev:vx1423:djbs002-webapps $ jmap -finalizerinfo 32576
Number of objects pending for finalization: 0
A heap dump is created in the directory where the jmap command is issued.
- Sun JDK 1.5: jmap -heap:format=b <JAVA_PID>
- Sun JDK 1.5 64-bit: jmap -J-d64 -heap:format=b <JAVA_PID>
- JRockit: jrcmd <PID> hprofdump filename=/tmp/abc.hprof
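On more recent Sun/Oracle JDKs (6 and later), the equivalent heap dump command is the following (the file path here is only an example):
jmap -dump:format=b,file=/tmp/heap.hprof <JAVA_PID>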
The heap dump files can be quite large.
OOM: Out of Perm Space
Perm space is an area that does not belong to the heap space. It is a separate area configured with the -XX:MaxPermSize option. The perm space stores class and method objects, so whenever an object of a class is created in the heap, the corresponding class and its metadata are loaded into the perm space. An Out of Perm Space error occurs when there is not enough space available to allocate the class metadata needed for a newly created object in the heap.
This may be due to causes such as:
- The application has many classes (e.g. many JSPs).
- There are many static member variables in classes.
- There is heavy use of String.intern() (see the sketch after this list).
- There is a classloader leak. If an object outside the classloader holds a reference to an object loaded by the classloader, the classloader cannot be collected.
- There are many isolated classloaders.
- The CMS low pause collector by default does not collect the permanent generation space.
- Singleton patterns where a class has a static reference to the class itself, making the class unloadable.
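As a small sketch of the String.intern() cause (assuming an older JVM, before Java 7, where interned Strings are stored in the permanent generation):

public class InternLeak {
    public static void main(String[] args) {
        long i = 0;
        while (true) {
            // Each distinct interned String is kept in PermGen on older JVMs,
            // eventually producing java.lang.OutOfMemoryError: PermGen space.
            String.valueOf(i++).intern();
        }
    }
}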
Perm space information can be obtained using
jmap -permstat <PID> >> /tmp/dump
Once we open the file we can see something like this:
44496 intern Strings occupying 4507848 bytes.
class_loader    classes  bytes     parent_loader  alive?               type
0xd5764d68      46       257496    0xd57adba0     dead                 ContextTypeMatchClassLoader
0xd47bcbe8      8        16720     0xd4618170     dead                 ContextOverridingClassLoader@0xc52b4ef0
0xe5060348      3        6944      0xe4cad700     live                 ContextOverridingClassLoader@0xd5d823a8
total = 321     10000    41331344  N/A            alive=114, dead=207  N/A
For each class loader object, the following details are printed:
1. The address of the class loader object at the snapshot when the utility was run.
2. The number of classes loaded (defined by this loader with the method java.lang.ClassLoader.defineClass).
3. The approximate number of bytes consumed by metadata for all classes loaded by this class loader.
4. The address of the parent class loader (if any).
5. A “live” or “dead” indication of whether the loader object will be garbage collected in the future.
6. The class name of this class loader.
OOM: GC Overhead limit exceeded
“GC overhead limit exceeded” is generally thrown by the garbage collectors used by the JVM, typically the serial or parallel collectors. The issue arises when an excessive amount of time is spent doing garbage collection while less than 2% of the heap is recovered.
This may be due to applications that run for a very long time, or to threads that are stuck. Because of such threads, the objects that were loaded are never reclaimed and are held by these stuck threads for a long time.
The serial or parallel collectors throw this exception, and this feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.
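For example, assuming a hypothetical main class MyApp, the flag is simply appended to the launch command:
java -Xmx256m -XX:-UseGCOverheadLimit MyApp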
If the new generation size is explicitly defined with JVM options, decrease the size or remove the relevant JVM options entirely to un-constrain the JVM and provide more space in the old generation for long-lived objects.
If there is unintended object retention, we need to check the code for changes. If the retention looks normal and it is a load issue, the heap size needs to be increased.
Note: The new generation size is specified by -XX:NewSize=n.
Set this value to a multiple of 1024 that is greater than 1 MB. As a general rule, set -XX:NewSize to one-fourth the size of the maximum heap. Increase the value of this option for larger numbers of short-lived objects.
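As a rough worked example (MyApp is again a hypothetical main class): with a 512 MB maximum heap, one-fourth is 128 MB:
java -Xmx512m -XX:NewSize=128m -XX:MaxNewSize=128m MyApp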
Be sure to increase the new generation size as you increase the number of processors.
To find what values were given to the JVM, use jmap, for example:
$ jmap -heap 9977
Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize      = 268435456 (256.0MB)
   NewSize          = 1048576 (1.0MB)
   MaxNewSize       = 4294901760 (4095.9375MB)
   OldSize          = 4194304 (4.0MB)
   NewRatio         = 8
   SurvivorRatio    = 8
   PermSize         = 16777216 (16.0MB)
   MaxPermSize      = 134217728 (128.0MB)
OOM: Requested array size exceeds VM limit
This message says that the application tried to allocate an array in the heap whose size cannot be supported. In most cases this arises from a wrong memory configuration or from a bug in the application code that creates an array of that size.
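A tiny hedged example: depending on the JVM and the configured heap, a request like the one below typically fails with either “Requested array size exceeds VM limit” or a plain heap-space error, because the requested length is at (or beyond) what the VM can represent for an array.

public class HugeArray {
    public static void main(String[] args) {
        // Most HotSpot VMs cap array length slightly below Integer.MAX_VALUE,
        // so this allocation may be rejected before the heap is even consulted.
        int[] huge = new int[Integer.MAX_VALUE];
        System.out.println(huge.length);
    }
}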
OOM: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
Possible causes:
- not enough swap space left, or
- kernel parameter MAXDSIZ is very small.
This message says that the JVM could not allocate native memory for the operation. The message indicates the size in bytes (32756) of the request that failed and the reason for the memory request. In most cases when this type of error occurs, the JVM invokes its fatal error handling mechanism, which generates a fatal error log file containing information such as threads, heap, and process details.
This can be considered a problem with the Operating System, which cannot allocate enough native swap space, either because other processes are using it heavily or because the configured swap space is simply insufficient. In these cases we need to rely on Operating System tools to analyze the issue.
OOM: Unable to create new native thread, StackOverflowError, or Attempt to unguard stack red zone failed
A StackOverflowError occurs when the memory required by the stack of a Java program is greater than the stack size set by the Java Runtime Environment.
These occur mainly due to deeply nested calls or infinite recursion created in error by the programmer. Graphics-intensive programs commonly require larger Java stacks than the default value set by the Runtime Environment.
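A minimal sketch of the infinite-recursion case (the names are illustrative): each call adds a stack frame, and with no base case the thread's stack is eventually exhausted.

public class DeepRecursion {
    private static long depth = 0;

    private static void recurse() {
        depth++;
        recurse(); // no termination condition: frames pile up until the stack is full
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("Stack overflowed after " + depth + " calls");
        }
    }
}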
Variables stored on stacks are only visible to the owner thread, while objects created in the heap are visible to all threads. In other words, stack memory is a kind of private memory for each Java thread, while heap memory is shared among all threads.
In most cases we can use the -Xss JVM flag to lower the amount of stack size that each thread gets.
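For example (MyApp is a hypothetical main class), a smaller per-thread stack leaves more native memory for additional threads, while a larger one accommodates deep call chains:
java -Xss512k MyApp
java -Xss2m MyApp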
When I see an OutOfMemoryError, does that mean that the Java program will exit?
Not always. Java programs can catch the error thrown when an OutOfMemory condition occurs and, possibly after freeing up some of the allocated objects, continue to run.
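A small sketch of that behaviour (sizes and names are illustrative only): the error is caught, the references are dropped so the collector can reclaim the memory, and the program carries on.

public class CatchOom {
    public static void main(String[] args) {
        byte[][] blocks = new byte[1024][];
        try {
            for (int i = 0; i < blocks.length; i++) {
                blocks[i] = new byte[64 * 1024 * 1024]; // keep allocating until the heap gives out
            }
        } catch (OutOfMemoryError e) {
            blocks = null; // release the references so the memory can be reclaimed
            System.out.println("Recovered from OutOfMemoryError");
        }
        System.out.println("Still running");
    }
}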