Java EE Support Patterns: OutOfMemoryError

9.21.2012

OutOfMemoryError: unable to create new native thread – Problem Demystified

As you may have seen from my previous tutorials and case studies, Java Heap Space OutOfMemoryError problems can be complex to pinpoint and resolve. One of the common problems I have observed in Java EE production systems is OutOfMemoryError: unable to create new native thread, an error thrown when the HotSpot JVM is unable to create any more Java threads.

This article will revisit this HotSpot VM error and provide you with recommendations and resolution strategies. 

If you are not familiar with the HotSpot JVM, I first recommend that you review a high level view of its internal memory spaces. This knowledge is important in order for you to understand OutOfMemoryError problems related to the native (C-Heap) memory space.


OutOfMemoryError: unable to create new native thread – what is it?

Let’s start with a basic explanation. This HotSpot JVM error is thrown when the internal JVM native code is unable to create a new Java thread. More precisely, it means that the JVM native code was unable to obtain a new “native” thread from the OS (Solaris, Linux, Mac OS, Windows...).

We can clearly see this logic in the OpenJDK 1.6 and 1.7 implementations: the JVM native code throws this OutOfMemoryError when its request to the OS for a new native thread fails.

Unfortunately, at this point you won’t get more detail than this error message, with no indication of why the JVM was unable to create a new thread from the OS…


HotSpot JVM: 32-bit or 64-bit?

Before you go any further in the analysis, one fundamental fact that you must determine from your Java or Java EE environment is which version of the HotSpot VM you are using, i.e. 32-bit or 64-bit.

Why is it so important? What you will learn shortly is that this JVM problem is very often related to native memory depletion; either at the JVM process or OS level. For now please keep in mind that:

  • A 32-bit JVM process is in theory allowed to grow up to 4 GB (even much lower on some older 32-bit Windows versions).
  • For a 32-bit JVM process, the C-Heap is in a race with the Java Heap and PermGen space e.g. C-Heap capacity = 2-4 GB – Java Heap size (-Xms, -Xmx) – PermGen size (-XX:MaxPermSize); see the quick example below.
  • A 64-bit JVM process is in theory allowed to use most of the OS virtual memory available, or up to 16 EB (16 million TB).

As you can see, if you allocate a large Java Heap (2 GB+) for a 32-bit JVM process, the native memory space capacity is reduced automatically, opening the door for JVM native memory allocation failures.
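
To make the math concrete, find below a quick back-of-the-envelope calculation. Please note that the JVM arguments and the 4 GB figure are hypothetical; your actual 32-bit process limit may be closer to 2 GB or 3 GB depending on your OS.

public class CHeapCapacityEstimate {
    public static void main(String[] args) {
        // Hypothetical 32-bit HotSpot JVM started with: -Xmx2048m -XX:MaxPermSize=256m
        long processAddressSpaceMb = 4096; // theoretical 32-bit limit; often only 2-3 GB in practice
        long javaHeapMb = 2048;            // -Xmx
        long permGenMb = 256;              // -XX:MaxPermSize

        // Whatever is left over must hold the C-Heap: thread stacks, JIT code cache,
        // NIO buffers, JNI allocations, etc.
        long approxCHeapMb = processAddressSpaceMb - javaHeapMb - permGenMb;
        System.out.println("Approximate C-Heap capacity (MB): " + approxCHeapMb); // ~1792 MB here
    }
}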

For a 64-bit JVM process, your main concern, from a JVM C-Heap perspective, is the capacity and availability of the OS physical, virtual and swap memory.


OK great, but how does native memory affect Java thread creation?

Now back to our primary problem. Another fundamental JVM aspect to understand is that Java threads created from the JVM require native memory from the OS. You should now start to see the source of your problem…

The high level thread creation process is as per below:

  • A new Java thread is requested from the Java program & JDK
  • The JVM native code then attempts to create a new native thread from the OS
  • The OS then attempts to create a new native thread according to attributes that include the thread stack size. Native memory is then allocated (reserved) from the OS to the Java process native memory space, assuming the process has enough address space (e.g. 32-bit process) to honour the request
  • The OS will refuse any further native thread & memory allocation if the 32-bit Java process size has depleted its memory address space e.g. 2 GB, 3 GB or 4 GB process size limit
  • The OS will also refuse any further thread & native memory allocation if the virtual memory of the OS is depleted (including Solaris swap space depletion, since thread access to the stack can then generate a SIGBUS error and crash the JVM; see http://bugs.sun.com/view_bug.do?bug_id=6302804)

In summary:

  • Java thread creation requires native memory available from the OS, for both 32-bit & 64-bit JVM processes
  • For a 32-bit JVM, Java thread creation also requires memory available from the C-Heap, i.e. the process address space
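
If you want to observe this failure in a controlled environment (never on a shared or production server), find below a minimal and intentionally abusive sketch that keeps creating threads which never terminate. On a 32-bit JVM, or once an OS limit such as ulimit -u is reached, it will eventually die with OutOfMemoryError: unable to create new native thread.

// NativeThreadExhaustion.java - illustrative demo only, not production code.
// Each java.lang.Thread pins a native thread and its stack in the process
// native memory space, so this program eventually fails with:
// java.lang.OutOfMemoryError: unable to create new native thread
public class NativeThreadExhaustion {
    public static void main(String[] args) {
        int count = 0;
        while (true) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(Long.MAX_VALUE); // park the thread forever
                    } catch (InterruptedException ignored) {
                        // exit quietly
                    }
                }
            });
            t.start();
            count++;
            if (count % 500 == 0) {
                System.out.println("Threads created so far: " + count);
            }
        }
    }
}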

Problem diagnostic

Now that you understand native memory and JVM thread creation a little better, it is time to look at your problem. As a starting point, I suggest that you follow the analysis approach below:

  1. Determine if you are using HotSpot 32-bit or 64-bit JVM
  2. When the problem is observed, take a JVM thread dump and determine how many threads are active
  3. Monitor closely the Java process size utilization before and during the OOM problem replication
  4. Monitor closely the OS virtual memory utilization before and during the OOM problem replication; including the swap memory space utilization if using Solaris OS

Proper data gathering as per above will give you the data points needed to perform the first level of investigation. The next step will be to look at the possible problem patterns and determine which one applies to your problem case.
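
As a complement to JVM thread dumps, a simple in-process probe based on the standard java.lang.management API (sketch below) can report how many threads are live while you replicate the problem:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Minimal sketch: print the live and peak thread counts every 5 seconds.
// Combine this data with OS-level process size monitoring (ps, prstat, Task Manager...).
public class ThreadCountMonitor {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        while (true) {
            System.out.println("Live threads: " + threads.getThreadCount()
                    + " | peak: " + threads.getPeakThreadCount());
            Thread.sleep(5000);
        }
    }
}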

Problem pattern #1 – C-Heap depletion (32-bit JVM)

From my experience, OutOfMemoryError: unable to create new native thread is quite common for 32-bit JVM processes. This problem is often observed when too many threads are created relative to the available C-Heap capacity.
JVM Thread Dump analysis and Java process size monitoring will allow you to determine if this is the cause.

Problem pattern #2 – OS virtual memory depletion (64-bit JVM)

In this scenario, the OS virtual memory is fully depleted. This could be due to a few 64-bit JVM processes taking a lot of memory e.g. 10 GB+ and / or other high memory footprint rogue processes. Again, Java process size & OS virtual memory monitoring will allow you to determine if this is the cause.

Also, please verify that you are not hitting an OS-level threshold such as ulimit -u or NPROC (max user processes). Default limits are usually low and may prevent you from creating, say, more than 1024 threads per Java process.

Problem pattern #3 – OS virtual memory depletion (32-bit JVM)

The third scenario is less frequent but can still be observed. The diagnostic can be a bit more complex, but the key analysis point will be to determine which processes are causing a full OS virtual memory depletion. Your 32-bit JVM processes could be either the source or the victim, e.g. rogue processes using most of the OS virtual memory and preventing your 32-bit JVM processes from reserving more native memory for thread creation.

Please note that this problem can also manifest itself as a full JVM crash (as per the sample below) when running out of OS virtual memory or swap space on Solaris.

#
# A fatal error has been detected by the Java Runtime Environment:
#
# java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
#
#  Internal Error (allocation.cpp:166), pid=2290, tid=27
#  Error: ChunkPool::allocate
#
# JRE version: 6.0_24-b07
# Java VM: Java HotSpot(TM) Server VM (19.1-b02 mixed mode solaris-sparc )
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x003fa800):  JavaThread "CompilerThread1" daemon [_thread_in_native, id=27, stack(0x65380000,0x65400000)]

Stack: [0x65380000,0x65400000],  sp=0x653fd758,  free space=501k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
………………


Native memory depletion: symptom or root cause?

You now understand your problem and know which problem pattern you are dealing with. You are now ready to provide recommendations to address the problem…are you?

Your work is not done yet. Please keep in mind that this JVM OOM event is often just a “symptom” of the actual root cause of the problem. The root cause is typically much deeper, so before providing recommendations to your client I recommend that you perform a deeper analysis first. The last thing you want to do is simply address and mask the symptoms. Solutions such as increasing OS physical / virtual memory or upgrading all your JVM processes to 64-bit should only be considered once you have a good view of the root cause and of the production environment capacity requirements.

The next fundamental question to answer is: how many threads were active at the time of the OutOfMemoryError? In my experience with Java EE production systems, the most common root cause is actually the application and / or Java EE container attempting to create too many threads at a given time when facing non happy paths, such as threads stuck in remote IO calls, thread race conditions, etc. In this scenario, the Java EE container can start creating too many threads when attempting to honour incoming client requests, increasing the pressure on the C-Heap and native memory allocations. Bottom line: before blaming the JVM, please perform your due diligence and determine if you are dealing with an application or Java EE container thread tuning problem as the root cause.

Once you understand and address the root cause (source of thread creations), you can then work on tuning your JVM and OS memory capacity in order to make it more fault tolerant and better “survive” these sudden thread surge scenarios.
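
For example, if your analysis reveals that your own application code is spawning unbounded threads, one possible mitigation (a sketch only; the pool and queue sizes below are arbitrary placeholders that must come from your own capacity planning) is to funnel that work through a bounded thread pool using the standard java.util.concurrent API:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedWorkerPool {
    public static void main(String[] args) {
        // At most 50 worker threads and 500 queued tasks; beyond that the submitting
        // thread runs the task itself instead of triggering yet another native thread.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 50, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(500),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 1000; i++) {
            final int taskId = i;
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println("Processing task " + taskId);
                }
            });
        }
        pool.shutdown();
    }
}

The CallerRunsPolicy simply throttles the submitting thread instead of creating more native threads, which keeps the pressure on the C-Heap bounded.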

Recommendations:

  • First, quickly rule out any obvious OS memory (physical & virtual memory) & process capacity (e.g. ulimit -u / NPROC) problem.
  • Perform a JVM Thread Dump analysis and determine the source of all the active threads vs. an established baseline. Determine what is causing your Java application or Java EE container to create so many threads at the time of the failure.
  • Please ensure that your monitoring tools closely monitor both your Java VM process size & OS virtual memory. This crucial data will be required in order to perform a full root cause analysis. Please remember that a 32-bit Java process size is limited to between 2 GB and 4 GB depending on your OS.
  • Look at all running processes and determine if your JVM processes are actually the source of the problem or the victim of other processes consuming all the virtual memory.
  • Revisit your Java EE container thread configuration & JVM thread stack size. Determine if the Java EE container is allowed to create more threads than your JVM process and / or OS can handle.
  • Determine if the Java Heap size of your 32-bit JVM is too large, preventing the JVM from creating enough threads to fulfill your client requests. In this scenario, you will have to consider reducing your Java Heap size (if possible), vertical scaling, or upgrading to a 64-bit JVM.
Capacity planning analysis to the rescue

As you may have seen from my past article on the Top 10 Causes of Java EE Enterprise Performance Problems, lack of capacity planning analysis is often the source of the problem. Any comprehensive load and performance testing exercise should also properly determine the Java EE container thread, JVM & OS native memory requirements for your production environment, including impact measurements of "non-happy" paths. This approach will allow your production environment to stay away from this type of problem and lead to better system scalability and stability in the long run.

Please provide any comment and share your experience with JVM native thread troubleshooting.

3.05.2012

OutOfMemoryError: Out of swap space - Problem Patterns

Today we will revisit a common Java HotSpot VM problem that you have probably already encountered at some point when troubleshooting the JVM on Solaris OS, especially with a 32-bit JVM.

This article will provide you with a description of this particular type of OutOfMemoryError, the common problem patterns and the recommended resolution approach.

If you are not familiar with the different HotSpot memory spaces, I recommend that you first review the article Java HotSpot VM Overview before going any further in this reading.

java.lang.OutOfMemoryError: Out of swap space? – what is it?

This error message is thrown by the Java HotSpot VM (native code) following a failure to allocate native memory from the OS to the HotSpot C-Heap or to dynamically expand the Java Heap. This problem is very different from a standard OutOfMemoryError (normally due to an exhaustion of the Java Heap or PermGen space).

A typical error found in your application / server logs is:

Exception in thread "main" java.lang.OutOfMemoryError: requested 53459 bytes for ChunkPool::allocate. Out of swap space?

Also, please note that depending on the OS that you use (Windows, AIX, Solaris etc.), some OutOfMemoryError occurrences due to C-Heap exhaustion may not give you a detail such as “Out of swap space”. In that case, you will need to review the OOM error stack trace, identify the computing task that triggered the OOM, and determine which OutOfMemoryError problem pattern your problem is related to (Java Heap, PermGen or native heap exhaustion).

Ok, so can I increase the Java Heap via -Xms & -Xmx to fix it?

Definitely not! This is the last thing you want to do, as it will make the problem worse. As you learned from my other article, the Java HotSpot VM is split between 3 memory spaces (Java Heap, PermGen, C-Heap). For a 32-bit VM, all these memory spaces compete with each other for memory. Increasing the Java Heap space will further reduce the capacity of the C-Heap and reserve more memory from the OS.

Your first task is to determine if you are dealing with a C-Heap depletion or OS physical / virtual memory depletion.

Now let’s see the most common patterns of this problem.

Common problem patterns

There are multiple scenarios which can lead to a native OutOfMemoryError. I will share with you what I have seen in my past experience as the most common patterns.

-        Native Heap (C-Heap) depletion due to too many Java EE applications deployed on a single 32-bit JVM (combined with a large Java Heap e.g. 2 GB) * most common problem *
-        Native Heap (C-Heap) depletion due to a non-optimal Java Heap size e.g. a Java Heap too large for the application(s) needs on a single 32-bit JVM
-        Native Heap (C-Heap) depletion due to too many created Java threads e.g. allowing the Java EE container to create too many threads on a single 32-bit JVM
-        OS physical / virtual memory depletion preventing the HotSpot VM from allocating native memory to the C-Heap (32-bit or 64-bit VM)
-        OS physical / virtual memory depletion preventing the HotSpot VM from expanding its Java Heap or PermGen space at runtime (32-bit or 64-bit VM)
-        C-Heap / native memory leak (third party monitoring agent / library, JVM bug etc.)

Troubleshooting and resolution approach

Please keep in mind that each HotSpot native memory problem can be unique and requires its own troubleshooting & resolution approach.

Find below a list of high level steps you can follow in order to further troubleshoot:

-        First, determine if the OOM is due to C-Heap exhaustion or OS physical / virtual memory depletion. For this task, you will need to perform close monitoring of your OS memory utilization and Java process size. For example on Solaris, a 32-bit JVM process size can grow to about 3.5 GB (technically a 4 GB limit); beyond that you can expect some native memory allocation failures. The Java process size monitoring will also allow you to determine if you are dealing with a native memory leak (growing over time / several days…)

-        The OS vendor and version that you use are important as well. For example, some versions of Windows (32-bit) by default support a process size of up to 2 GB only (leaving you with minimal flexibility for Java Heap and native heap allocations). Please review your OS documentation and determine what the maximum process size is, e.g. 2 GB, 3 GB, 4 GB or more (64-bit OS)

-        Like the OS, it is also important that you review and determine if you are using a 32-bit VM or 64-bit VM. Native memory depletion for a 64-bit VM typically means that your OS is running out of physical / virtual memory

-        Review your JVM memory settings. For a 32-bit VM, a Java Heap of 2 GB+ can really start to put pressure on the C-Heap, depending on how many applications you have deployed, how many Java threads are created, etc. In that case, please determine if you can safely reduce your Java Heap by about 256 MB (as a starting point) and see if it helps improve your JVM memory “balance”.

-        Analyze the verbose GC output or use a tool like JConsole to determine your Java Heap footprint. This will allow you to determine if you can reduce your Java Heap in a safe manner or not

-        When the OutOfMemoryError is observed, generate a JVM thread dump and determine how many threads are active in your JVM; the more threads, the more native memory your JVM will use. You will then be able to combine this data with the OS, Java process size and verbose GC data, allowing you to determine where the problem is
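
As a small complement to the OS-level monitoring above, the JVM itself can report physical and swap memory figures via the com.sun.management extension of OperatingSystemMXBean. Please treat the sketch below as a HotSpot-specific convenience, not a replacement for proper OS monitoring:

import java.lang.management.ManagementFactory;

public class OsMemoryProbe {
    public static void main(String[] args) {
        // HotSpot-specific MXBean extension; the cast below is not available on every JVM.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();

        long mb = 1024L * 1024L;
        System.out.println("Free physical memory (MB): " + os.getFreePhysicalMemorySize() / mb);
        System.out.println("Free swap space (MB):      " + os.getFreeSwapSpaceSize() / mb);
        System.out.println("Committed virtual (MB):    " + os.getCommittedVirtualMemorySize() / mb);
    }
}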

Once you have a clear view of the situation in your environment and root cause, you will be in a better position to explore potential solutions as per below:

-        Reduce the Java Heap (if possible / after close monitoring of the Java Heap footprint) in order to give that memory back to the C-Heap / OS
-        Increase the physical RAM / virtual memory of your OS (only applicable if depletion of the OS memory is observed; especially for a 64-bit OS & VM)
-        Upgrade your HotSpot VM to 64-bit (for some Java EE applications, a 64-bit VM is more appropriate) or segregate your applications across different JVMs (this increases demand on your hardware but reduces C-Heap utilization per JVM)
-        Native memory leaks are trickier and require deeper analysis, such as analysis of the Solaris pmap / AIX svmon data and a review of any third party libraries (e.g. monitoring agents). Please also review the Oracle Sun bug database and determine if the HotSpot version you use is exposed to known native memory problems

Still struggling with this problem? Don’t worry, simply post a comment / question at the end of this article. I also encourage you to post your problem case to the root cause analysis forum.

1.20.2012

GC overhead limit exceeded: Understand your JVM

For JVM related problems and tuning, I always recommend that application support individuals improve their basic JVM skills. The OpenJDK HotSpot implementation (C++ code) allows you to understand, at a lower level, the JVM business logic and memory conditions triggering errors such as GC overhead limit exceeded.

This article will attempt to dissect the JVM business logic so you can better understand this severe Java memory condition.

java.lang.OutOfMemoryError: GC overhead limit exceeded

Before going any deeper into this problem and the HotSpot implementation, I first recommend that you review my first two articles on the subject, as they will provide you with an overview of this problem along with common patterns and recommendations.


HotSpot implementation – GC overhead limit exceeded logic

Find below a JVM C++ code snippet showing you both the pseudo code and the implementation of the JVM logic leading to the GC overhead limit exceeded error message.

The primary criteria and conditions leading to GC overhead limit exceeded are as per below:

1)     A Full GC is triggered / observed
2)     The GC cost e.g. average time spent in GC is higher than the default limit. This means the time spent in GC is way too high
3)     The amount of memory available is smaller than the limit e.g. this means your JVM is very close to full depletion (OldGen + YoungGen)
4)     The GC overhead limit count is reached e.g. a limit of 5 consecutive collections is typically the default

Now please find below a snippet of the OpenJDK HotSpot implementation (C++). You can download the full open source HotSpot implementation from the OpenJDK project.

  bool print_gc_overhead_limit_would_be_exceeded = false;
  if (is_full_gc) {
    if (gc_cost() > gc_cost_limit &&
      free_in_old_gen < (size_t) mem_free_old_limit &&
      free_in_eden < (size_t) mem_free_eden_limit) {
      // Collections, on average, are taking too much time, and
      //      gc_cost() > gc_cost_limit
      // we have too little space available after a full gc.
      //      total_free_limit < mem_free_limit
      // where
      //   total_free_limit is the free space available in
      //     both generations
      //   total_mem is the total space available for allocation
      //     in both generations (survivor spaces are not included
      //     just as they are not included in eden_limit).
      //   mem_free_limit is a fraction of total_mem judged to be an
      //     acceptable amount that is still unused.
      // The heap can ask for the value of this variable when deciding
      // whether to thrown an OutOfMemory error.
      // Note that the gc time limit test only works for the collections
      // of the young gen + tenured gen and not for collections of the
      // permanent gen.  That is because the calculation of the space
      // freed by the collection is the free space in the young gen +
      // tenured gen.
      // At this point the GC overhead limit is being exceeded.
      inc_gc_overhead_limit_count();
      if (UseGCOverheadLimit) {
        if (gc_overhead_limit_count() >=
            AdaptiveSizePolicyGCTimeLimitThreshold){
          // All conditions have been met for throwing an out-of-memory
          set_gc_overhead_limit_exceeded(true);
          // Avoid consecutive OOM due to the gc time limit by resetting
          // the counter.
          reset_gc_overhead_limit_count();
        } else {
          // The required consecutive collections which exceed the
          // GC time limit may or may not have been reached. We
          // are approaching that condition and so as not to
          // throw an out-of-memory before all SoftRef's have been
          // cleared, set _should_clear_all_soft_refs in CollectorPolicy.
          // The clearing will be done on the next GC.
          bool near_limit = gc_overhead_limit_near();
          if (near_limit) {
            collector_policy->set_should_clear_all_soft_refs(true);
            if (PrintGCDetails && Verbose) {
              gclog_or_tty->print_cr("  Nearing GC overhead limit, "
                "will be clearing all SoftReference");
            }
          }
        }
      }
      // Set this even when the overhead limit will not
      // cause an out-of-memory.  Diagnostic message indicating
      // that the overhead limit is being exceeded is sometimes
      // printed.
      print_gc_overhead_limit_would_be_exceeded = true;

    } else {
      // Did not exceed overhead limits
      reset_gc_overhead_limit_count();
    }
  }
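
To tie this implementation back to something observable, find below a small illustrative sketch. Run it with a deliberately small heap (e.g. -Xmx64m): keeping the heap nearly full while allocating continuously is exactly the condition tested above, and depending on timing you will see either GC overhead limit exceeded or Java heap space.

import java.util.HashMap;
import java.util.Map;

// Run with a deliberately small heap, e.g. -Xmx64m -XX:+UseGCOverheadLimit.
// The map keeps every entry strongly referenced, so successive full GCs reclaim
// almost nothing while GC time keeps climbing: the condition tested above.
public class GcOverheadDemo {
    public static void main(String[] args) {
        Map<Integer, String> retained = new HashMap<Integer, String>();
        int i = 0;
        while (true) {
            retained.put(i, String.valueOf(i));
            i++;
        }
    }
}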


Please feel free to post any comment or ask me any question.

11.28.2011

java.lang.OutOfMemoryError: Java heap space - Analysis and resolution approach

The java.lang.OutOfMemoryError: Java heap space problem is one of the most complex problems you can face when supporting or developing complex Java EE applications.

This article will provide you with a description of this HotSpot JVM OutOfMemoryError message and how you should attack this problem until it is resolved.

For a quick help guide on how to determine which type of OutOfMemoryError you are dealing with, please consult the related posts found in this Blog. You will also find tutorials on how to analyze a JVM Heap Dump and identify potential memory leaks. A troubleshooting guide video for beginners is also available from our YouTube channel.

java.lang.OutOfMemoryError: Java heap space – what is it?

This error message is typically what you will see in your middleware server logs (Weblogic, WAS, JBoss etc.) following a JVM OutOfMemoryError condition:

  • It is generated from the actual Java HotSpot VM native code
  • It is triggered as a result of a Java Heap (Young Gen / Old Gen spaces) memory allocation failure (due to Java Heap exhaustion)

The JVM HotSpot implementation of this condition lives in the OpenJDK source file collectedHeap.inline.hpp: when a Java Heap allocation request still cannot be satisfied after garbage collection has been attempted, the VM native code throws the OutOfMemoryError: Java heap space condition.


I strongly recommend that you download the HotSpot VM source code from OpenJDK yourself for your own benefit and future reference.

Ok, so my application Java Heap is exhausted…how can I monitor and track my application Java Heap?

The simplest way to properly monitor and track the memory footprint of your Java Heap spaces (Young Gen & Old Gen spaces) is to enable verbose GC from your HotSpot VM. Please simply add the following parameters within your JVM start-up arguments:

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:<app path>/gc.log

You can also follow this tutorial, which will help you understand how to read and analyze your HotSpot Java Heap footprint.

Ok thanks, now I can see that I have a big Java Heap memory problem…but how can I fix it?

There are multiple scenarios which can lead to Java Heap depletion such as:

  • Java Heap space too small vs. your application traffic & footprint
  • Java Heap memory leak (OldGen space slowly growing over time…); see the sketch right after this list
  • Sudden and / or rogue thread(s) consuming a large amount of memory in a short amount of time, etc.
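
Find below a hypothetical sketch of the second scenario (slow Java Heap leak); the class and field names are made up for illustration only.

import java.util.ArrayList;
import java.util.List;

// Hypothetical leak pattern: a static "cache" that only ever grows.
public class LeakyRequestCache {
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    // Called on every request; nothing ever removes entries, so the OldGen
    // footprint climbs until java.lang.OutOfMemoryError: Java heap space.
    public static void onRequest() {
        CACHE.add(new byte[512 * 1024]); // ~512 KB retained per request
    }

    public static void main(String[] args) {
        while (true) {
            onRequest();
        }
    }
}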

Find below a list of high level steps you can follow in order to further troubleshoot:

  • If not done already, enable verbose GC >> -verbose:gc
  • Analyze the verbose GC output and determine the memory footprint of the Java Heap for each of the Java Heap spaces (YoungGen & OldGen)
  • Analyze the verbose GC output or use a tool like JConsole to determine if your Java Heap is leaking over time. This can be observed via monitoring of the HotSpot OldGen space.
  • Monitor your middleware threads and generate JVM thread dumps on a regular basis, especially when a sudden increase of Java Heap utilization is observed. Thread dump analysis will allow you to pinpoint potential long running thread(s) allocating a lot of objects on your Java Heap in a short amount of time, if any
  • Add the following parameter within your JVM start-up arguments: -XX:+HeapDumpOnOutOfMemoryError. This will allow your HotSpot VM to generate a heap dump in binary (HPROF) format. A binary heap dump is critical data that allows you to analyze your application memory footprint and / or the source(s) of a Java Heap memory leak

From a resolution perspective, my recommendation to you is to analyze your Java Heap memory footprint using the generated heap dump. The binary heap dump (HPROF format) can be analyzed using the free Memory Analyzer tool (MAT). This will allow you to understand your Java application footprint and / or pinpoint the source(s) of a possible memory leak. Once you have a clear picture of the situation, you will be able to resolve your problem by increasing your Java Heap capacity (via the -Xms & -Xmx arguments) or by reducing your application memory footprint and / or eliminating the memory leaks from your application code. Please note that memory leaks are often found in middleware server code and the JDK as well.

I tried everything I could but I am still struggling to pinpoint the source of my OutOfMemoryError

Don’t worry, simply post a comment / question at the end of this article or email me directly at [email protected]. I currently offer free IT / Java EE consulting. Please provide me with your problem description along with your generated data, such as a download link to your Heap Dump, Thread Dump data, server logs etc., and I will analyze your problem for you.

11.18.2011

java.lang.OutOfMemoryError – Weblogic Session size too big

A major problem was brought to our attention recently following a migration of a Java EE Portal application from Weblogic 8.1 to Weblogic 11g.

This case study will demonstrate how you can analyze an IBM JRE Heap Dump (.phd format) in order to determine the memory footprint of your application HttpSession objects.

Environment specifications (case study)

-         Java EE server: Oracle Weblogic Server 11g
-         Middleware OS: AIX 5.3
-         Java VM: IBM JRE 1.6.0
-         Platform type: Portal application

Monitoring and troubleshooting tools

-         Memory Analyzer 1.1 via IBM Support Assistant (IBM JRE Heap Dump analysis)

Step #1 – Heap Dump generation

A Heap Dump file was generated following an OutOfMemoryError. The IBM JRE Heap Dump format is as per below (the phd extension stands for Portable Heap Dump).

* Please note that you can also manually generate an IBM JRE Heap Dump by either using the dumpHeap JMX operation via JConsole or by adding IBM_HEAPDUMP=true to your Java environment variables along with the kill -QUIT command *

// Portable Heap Dump generated on 2011-09-22
heapdump.20110922.003510.1028306.0007.phd

Step #2 – Load the Heap Dump file in MAT

We used the Memory Analyzer (MAT) tool from IBM Support Assistant in order to load our generated Heap Dump. The Heap Dump parsing process can take quite some time. Once the processing is completed, you will be able to see a Java Heap overview along with a Leak Suspect report.

Step #3 – Locate the Weblogic Session Data objects

In order to understand the runtime footprint of your application HttpSession (user session) objects, you first need to locate the Weblogic HttpSession internal data structure. When using memory session persistence, this internal data structure is identified as:

weblogic.servlet.internal.session.MemorySessionData

Unless these objects already show up in the Leak Suspects report, the best way to locate them is to simply load the Histogram as per below and sort the Class Names. You should be able to easily locate the MemorySessionData class name and determine how many object instances exist. One instance of MemorySessionData corresponds to one user session of one of your Java EE Web applications.

Step #4 – HttpSession memory footprint analysis

It is now time to analyze the memory footprint of each of your HttpSession data objects. Simply right click on weblogic.servlet.internal.session.MemorySessionData and select: List objects > with incoming references


For our case study, large HttpSession (MemorySessionData) objects were found using up to 52 MB for a single session object, which explained why our Java Heap was depleted under heavy load.

At this point, you can explore the session data and dig within a single instance of MemorySessionData. This will allow you to look at all your application session data attributes and determine the source(s) of memory allocation. Simply right click on one MemorySessionData instance and select: List objects > with outgoing references.


Using this approach, our development team was able to identify the source of high memory allocation within our application HttpSession data and fix the problem.
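
For illustration purposes, find below a hypothetical anti-pattern that produces this kind of oversized session data: caching a full result set directly in the HttpSession (class and attribute names are made up).

import java.util.List;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

// Hypothetical anti-pattern: caching a full result set inside the user session.
// Every active user then retains the whole list for the session lifetime,
// which is exactly the kind of footprint MAT exposes under MemorySessionData.
public class SearchResultController {
    public void handle(HttpServletRequest request, List<byte[]> fullResultSet) {
        HttpSession session = request.getSession();
        session.setAttribute("cachedSearchResults", fullResultSet); // can easily reach tens of MB per user
    }
}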

Conclusion

I hope this tutorial, along with our case study, has helped you see how useful and powerful Heap Dump analysis can be for understanding and identifying your application HttpSession memory footprint.

Please don’t hesitate to post any comment or question.