Java EE Support Patterns: Heap Dump
Showing posts with label Heap Dump.

11.22.2012

Java Heap Dump: Are you up to the task?

If you are as enthusiastic as I am about Java performance, heap dump analysis should not be a mystery to you. If it is, then the good news is that you have an opportunity to increase your Java troubleshooting skills and JVM knowledge.

The JVM has now evolved to the point where it is much easier today to generate and analyze a JVM heap dump than in the old JDK 1.0 – JDK 1.4 days.

That being said, JVM heap dump analysis should not be seen as a replacement for profiling & JVM analysis tools such as JProfiler or Plumbr, but rather as complementary to them. It is particularly useful when troubleshooting Java heap memory leaks and java.lang.OutOfMemoryError problems.

This post will provide you with an overview of a JVM heap dump and what to expect out of it. It will also provide recommendations on how and when you should spend time analyzing a heap dump. Future articles will include tutorials on the analysis process itself.

Java Heap Dump overview

A JVM heap dump is basically a “snapshot” of the Java heap memory at a given time. It is quite different from a JVM thread dump, which is a snapshot of the threads.

Such a snapshot contains low-level details about the Java objects and classes allocated on the Java heap, such as:

  • Java objects such as Class, fields, primitive values and references
  • Classloader related data including static fields (important for classloader leak problems)
  • Garbage collection roots or objects that are accessible from outside the heap (System classloader loaded resources such as rt.jar, JNI or native variables, Threads, Java Locals and more…)
  • Thread related data & stacks (very useful for sudden Java heap increase problems, especially when combined with thread dump analysis)
Please note that it is usually recommended to generate a heap dump following a full GC in order to eliminate unnecessary “noise” from non-referenced objects.
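On HotSpot 1.6+ you can also trigger such a dump programmatically through the HotSpotDiagnosticMXBean; passing live=true forces a full GC first so that only reachable objects land in the dump, which achieves exactly this noise reduction. A minimal sketch (the output file name is just an example):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {

    // Dump the Java heap to the given HPROF file. With live = true,
    // the JVM first performs a full GC so only reachable objects
    // end up in the dump (less "noise" from garbage).
    public static void dumpHeap(String outputFile, boolean live) throws Exception {
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        diagnostic.dumpHeap(outputFile, live);
    }

    public static void main(String[] args) throws Exception {
        // Example output path; adjust for your environment.
        dumpHeap("myapp-heap.hprof", true);
        System.out.println("Heap dump written to myapp-heap.hprof");
    }
}
```

Note that dumpHeap fails if the target file already exists, so pick a fresh file name (e.g. with a timestamp) for each snapshot.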

Analysis reserved for the Elite?

One common misconception I have noticed over the last 10 years working with production support teams is the impression that deeper analysis tasks such as profiling, heap dump or thread dump analysis are reserved for the “elite” or for the product vendor (Oracle, IBM…).

I could not disagree more.

As a Java developer, you write code potentially running in a highly concurrent thread environment, managing hundreds and hundreds of objects on the JVM. You have to worry not only about concurrency issues but also about garbage collection and the memory footprint of your application(s). You are in the best position to perform this analysis since you are the expert on the application.



Find below typical questions you should be able to answer:

  • How many concurrent threads are needed to run my application as per the load forecast? How much memory is each active thread consuming before it completes its task?
  • What is the static memory footprint of my application? (libraries, classloader footprint, in-memory cache data structures etc.)
  • What is the dynamic memory footprint of my application under load? (sessions footprint etc.)
  • Have I profiled my application for memory leaks?
Load testing, profiling your application and analyzing Java heap dumps (e.g. captured during a load test or a production problem) will allow you to answer the above questions. You will then be in a position to achieve the following goals:

  • Reduce risk of performance problems post production implementation
  • Add value to your work and your client by providing extra guidance & facts to the production and capacity management team; allowing them to take proper IT improvement actions
  • Analyze the root cause of memory leak(s) or footprint problem(s) affecting your client IT production environment
  • Increase your technical skills by learning these performance analysis principles and techniques
  • Increase your JVM skills by improving your understanding of the JVM, garbage collection and Java object life cycles
The last thing you want to reach is a skill “plateau”. If you are not comfortable with this type of analysis, then my recommendations are as per below:

  • Ask a more senior member of your team to perform the heap dump analysis and shadow their work and approach
  • Once you are more comfortable, volunteer to perform the same analysis (on a different problem case) and this time ask a more experienced member to shadow your analysis work
  • Eventually the student (you) will become the mentor
When to use

Analyzing JVM heap dumps should not be done every time you are facing a Java heap problem such as an OutOfMemoryError. Since this can be a time-consuming analysis process, I recommend it for the scenarios below:

  • The need to understand & tune the memory footprint of your application, the surrounding APIs or the Java EE container itself
  • Java heap memory leak troubleshooting
  • Java classloader memory leaks
  • Sudden Java heap increase problems or trigger events (has to be combined with thread dump analysis as a starting point)

 Now find below some limitations associated with heap dump analysis:

  • JVM heap dump generation is an intensive computing task which will hang your JVM until completed. Proper due diligence is required in order to reduce impact to your production environment
  • Analyzing the heap dump will not give you the full Java process memory footprint, e.g. the native heap. You will need to rely on other tools and OS commands for that purpose
  • You may face problems opening & parsing heap dumps generated from older JDK versions such as 1.4 or 1.5
Heap dump generation techniques

JVM heap dumps are typically generated as a result of two actions:

  • Auto-generated or triggered as a result of a java.lang.OutOfMemoryError (e.g. Java Heap, PermGen or native heap depletion)
  • Manually generated via the usage of tools such as jmap, VisualVM (via JMX) or OS level command
# Auto-triggered heap dumps

If you are using the HotSpot Java VM 1.5+ or JRockit R28+, then you will need to add the following parameter at your JVM start-up:

-XX:+HeapDumpOnOutOfMemoryError

The above parameter will enable the HotSpot VM to automatically generate a heap dump following an OOM event. The heap dump format for those JVM types is HPROF (*.hprof).

If you are using the IBM JVM 1.4.2+, heap dump generation as a result of an OOM event is enabled by default. The heap dump format for the IBM JVM is PHD (*.phd).
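To see the auto-triggered heap dump in action, you can run a deliberately leaking program such as the hypothetical sketch below with a small heap, e.g. java -Xmx64m -XX:+HeapDumpOnOutOfMemoryError LeakDemo 1000; the HotSpot VM will write a java_pid&lt;pid&gt;.hprof file before the OutOfMemoryError surfaces:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {

    // Static collection holding strong references: a classic Java
    // heap leak pattern. Nothing added here is ever garbage collected.
    private static final List<byte[]> LEAK = new ArrayList<byte[]>();

    // Retains 'chunks' blocks of 1 MB each; with a large enough count
    // (or a small enough -Xmx) this ends in an OutOfMemoryError.
    public static void leak(int chunks) {
        for (int i = 0; i < chunks; i++) {
            LEAK.add(new byte[1024 * 1024]);
        }
    }

    public static int retainedChunks() {
        return LEAK.size();
    }

    public static void main(String[] args) {
        int chunks = args.length > 0 ? Integer.parseInt(args[0]) : 10;
        leak(chunks);
        System.out.println("Retained " + retainedChunks() + " MB of leaked data");
    }
}
```

In the resulting dump, a tool like MAT will point straight at the static LEAK list as the dominator of the retained byte[] data.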

# Manually triggered heap dumps

Manual JVM heap dump generation can be achieved as per below:

  • Usage of jmap for HotSpot 1.5+
  • Usage of VisualVM for HotSpot 1.6+ * recommended *
** Please do your proper due diligence for your production environment since JVM heap dump generation is an intrusive process which will hang your JVM process until completion **

If you are using the IBM JVM 1.4.2, you will need to set the following environment variables before your JVM start-up:

export IBM_HEAPDUMP=true
export IBM_HEAP_DUMP=true


For the IBM JVM 1.5+ you will need to add the following argument at the Java start-up:

-Xdump:heap

Example:
java -Xdump:none -Xdump:heap:events=vmstop,opts=PHD+CLASSIC
JVMDUMP006I Processing Dump Event "vmstop", detail "#00000000" - Please Wait.
JVMDUMP007I JVM Requesting Heap Dump using
 'C:\sdk\jre\bin\heapdump.20050323.142011.3272.phd'
JVMDUMP010I Heap Dump written to
 C:\sdk\jre\bin\heapdump.20050323.142011.3272.phd
JVMDUMP007I JVM Requesting Heap Dump using
 'C:\sdk\jre\bin\heapdump.20050323.142011.3272.txt'
JVMDUMP010I Heap Dump written to
 C:\sdk\jre\bin\heapdump.20050323.142011.3272.txt
JVMDUMP013I Processed Dump Event "vmstop", detail "#00000000".


Please review the Xdump documentation for the IBM JVM 1.5+.

For Linux and AIX®, the IBM JVM heap dump signal is sent via kill -QUIT or kill -3. This OS command will trigger JVM heap dump generation (PHD format).

I recommend that you review the MAT summary page on how to acquire JVM heap dump via various JVM & OS combinations.

Heap dump analysis tools

My primary recommended tool for opening and analyzing a JVM heap dump is Eclipse Memory Analyzer (MAT). This is by far the best tool out there with contributors such as SAP & IBM. The tool provides a rich interface and advanced heap dump analysis capabilities, including a “leak suspect” report. MAT also supports both HPROF & PHD heap dump formats.

I recommend my earlier post for a quick tutorial on how to use MAT and analyze your first JVM heap dump. I also have a few heap dump analysis case studies useful for your learning process.



Final words

I really hope that you will enjoy JVM heap dump analysis as much as I do. Future articles will provide you with generic tutorials on how to analyze a JVM heap dump and where to start. Please feel free to provide your comments.

11.18.2011

java.lang.OutOfMemoryError – Weblogic Session size too big

A major problem was brought to our attention recently following a migration of a Java EE Portal application from Weblogic 8.1 to Weblogic 11g.

This case study will demonstrate how you can analyze an IBM JRE Heap Dump (.phd format) in order to determine the memory footprint of your application HttpSession objects.

Environment specifications (case study)

-         Java EE server: Oracle Weblogic Server 11g
-         Middleware OS: AIX 5.3
-         Java VM: IBM JRE 1.6.0
-         Platform type: Portal application

Monitoring and troubleshooting tools

-         Memory Analyzer 1.1 via IBM support assistant (IBM  JRE Heap Dump analysis)

Step #1 – Heap Dump generation

A Heap Dump file was generated following an OutOfMemoryError. The IBM JRE Heap Dump format is as per below (the phd extension stands for Portable Heap Dump).

* Please note that you can also manually generate an IBM JRE Heap Dump either by using the dumpHeap JMX operation via JConsole or by adding IBM_HEAPDUMP=true to your Java environment variables along with the kill -QUIT command *

// Portable Heap Dump generated on 2011-09-22
heapdump.20110922.003510.1028306.0007.phd

Step #2 – Load the Heap Dump file in MAT

We used the Memory Analyzer (MAT) tool from IBM Support Assistant in order to load our generated Heap Dump. The Heap Dump parsing process can take quite some time. Once the processing is completed, you will be able to see a Java Heap overview along with a Leak Suspect report.




Step #3 – Locate the Weblogic Session Data objects

In order to understand the runtime footprint of your application HttpSession (user session) objects, you first need to locate the Weblogic HttpSession internal data structure. When using in-memory session persistence, this internal data structure is identified as:

weblogic.servlet.internal.session.MemorySessionData

Unless these objects show up in the Leak Suspects report, the best way to locate them is to simply load the Histogram as per below and sort the class names. You should be able to easily locate the MemorySessionData class name and determine how many object instances exist. One instance of MemorySessionData corresponds to one user session of one of your Java EE Web applications.










Step #4 – HttpSession memory footprint analysis

It is now time to analyze the memory footprint of each of your HttpSession data objects. Simply right-click over weblogic.servlet.internal.session.MemorySessionData and select: List objects > with incoming references


For our case study, large HttpSession (MemorySessionData) objects were found, using up to 52 MB for a single session object, which explained why our Java Heap was depleted under heavy load.

At this point, you can dig within a single instance of MemorySessionData to explore the session data. This will allow you to look at all your application session attributes and determine the source(s) of memory allocation. Simply right-click on one MemorySessionData instance and select: List objects > with outgoing references.


Using this approach, our development team was able to identify the source of high memory allocation within our application HttpSession data and fix the problem.

Conclusion

I hope this tutorial along with our case study has helped you understand how useful and powerful a Heap Dump analysis can be in order to understand and identify your application HttpSession memory footprint.

Please don’t hesitate to post any comment or question.

11.05.2011

Memory Analyzer download

Memory Analyzer (MAT) is an extremely useful tool that analyzes Java Heap Dump files with hundreds of millions of objects, quickly calculates the retained sizes of objects, shows you who is preventing the Garbage Collector from collecting objects, and runs a report to automatically extract leak suspects.

Why do I need it?

OutOfMemoryError problems are quite complex and require proper analysis. The tool is a must for any Java EE production support person or developer who needs to investigate memory leaks and/or analyze their application memory footprint.

Where can I find tutorials and case studies?

You will find on this Blog several case studies and tutorials on how to use this tool to pinpoint Java Heap memory leaks. Find below a few examples:

http://javaeesupportpatterns.blogspot.com/2011/09/gc-overhead-limit-exceeded-java-heap.html
http://javaeesupportpatterns.blogspot.com/2011/02/ibm-sdk-heap-dump-httpsession-footprint.html

Where can I download it?

Memory Analyzer can be downloaded for free as a standalone software from the Eclipse site or integrated within the IBM Support Assistant tool.

## Eclipse MAT

## MAT via IBM Support Assistant tool (select Memory Analyzer from the Tools add-ons)

HPROF – Memory leak analysis tutorial

This article will provide you with a tutorial on how you can analyze a JVM memory leak problem by generating and analyzing a Sun HotSpot JVM HPROF Heap Dump file.

A real life case study will be used for that purpose: Weblogic 9.2 memory leak affecting the Weblogic Admin server.

Environment specifications

·         Java EE server: Oracle Weblogic Server 9.2 MP1
·         Middleware OS: Solaris 10
·         Java VM: Sun HotSpot 1.5.0_22
·         Platform type: Middle tier

Monitoring and troubleshooting tools

·         Quest Foglight (JVM and garbage collection monitoring)
·          jmap (hprof / Heap Dump generation tool)
·         Memory Analyzer 1.1 via IBM support assistant (hprof Heap Dump analysis)

Step #1 – WLS 9.2 Admin server JVM monitoring and leak confirmation

The Quest Foglight Java EE monitoring tool was quite useful to identify a Java Heap leak from our Weblogic Admin server. As you can see below, the Java Heap memory is growing over time.

If you are not using any monitoring tool for your Weblogic environment, my recommendation to you is to at least enable verbose:gc of your HotSpot VM. Please visit my Java 7 verbose:gc tutorial on this subject for more detailed instructions.


Step #2 – Generate a Heap Dump from your leaking JVM

Following the discovery of a JVM memory leak, the goal is to generate a Heap Dump file (binary format) by using the Sun JDK jmap utility.

** please note that jmap Heap Dump generation will cause your JVM to become unresponsive so please ensure that no more traffic is sent to your affected / leaking JVM before running the jmap utility **

<JDK HOME>/bin/jmap -heap:format=b <Java VM PID>


This command will generate a Heap Dump binary file (heap.bin) of your leaking JVM. The size of the file and the elapsed time of the generation process will depend on your JVM size and machine specifications / speed. (Note: on HotSpot 1.6+, the equivalent command is jmap -dump:format=b,file=heap.bin &lt;Java VM PID&gt;.)

For our case study, a binary Heap Dump file of ~ 2 GB was generated in about 1 hour elapsed time.

A Sun HotSpot 1.5/1.6/1.7 Heap Dump file will also be generated automatically as a result of an OutOfMemoryError when -XX:+HeapDumpOnOutOfMemoryError is added to your JVM start-up arguments.


Step #3 – Load your Heap Dump file in Memory Analyzer tool

It is now time to load your Heap Dump file in the Memory Analyzer tool. The loading process will take several minutes depending on the size of your Heap Dump and the speed of your machine.




Step #4 – Analyze your Heap Dump

The Memory Analyzer provides you with many features, including a Leak Suspect report. For this case study, the Java Heap histogram was used as a starting point to analyze the leaking objects and the source.


For our case study, java.lang.String and char[] data were found to be the leaking objects. The next question is: what is the source of the leak, i.e. what is referencing those leaking objects? Simply right-click over your leaking objects and select: List Objects > with incoming references



As you can see, javax.management.ObjectName objects were found to be the source of the leaking String & char[] data. The Weblogic Admin server communicates with and pulls stats from its managed servers via MBeans / JMX, which creates a javax.management.ObjectName for any MBean object type. The next question was why Weblogic 9.2 was not properly releasing such objects…
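The leak pattern can be sketched as follows. Note that this is an illustrative simulation, not the actual Weblogic internals; the MBean domain and key names below are made up:

```java
import java.util.ArrayList;
import java.util.List;
import javax.management.ObjectName;

public class ObjectNameLeakSketch {

    // Simulates the leak pattern: every polling cycle builds fresh
    // ObjectName instances and retains them instead of reusing or
    // releasing them. The String / char[] data held inside each
    // ObjectName is exactly what shows up in the heap dump histogram.
    private static final List<ObjectName> RETAINED = new ArrayList<ObjectName>();

    public static int poll(int managedServers, int cycle) throws Exception {
        for (int i = 0; i < managedServers; i++) {
            // Hypothetical MBean name; grows the retained set each cycle.
            RETAINED.add(new ObjectName(
                    "com.example:type=ServerRuntime,name=ms" + i + ",cycle=" + cycle));
        }
        return RETAINED.size();
    }

    public static void main(String[] args) throws Exception {
        for (int cycle = 0; cycle < 3; cycle++) {
            System.out.println("Retained ObjectName count: " + poll(4, cycle));
        }
    }
}
```

In a heap dump of such a process, the incoming-references view on the leaking char[] instances leads back to the retained ObjectName collection, just as it did in this case study.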

Root cause: Weblogic javax.management.ObjectName leak!

Following our Heap Dump analysis, a review of the Weblogic known issues was performed, which revealed the following Weblogic 9.2 bug:

·         Weblogic Bug ID: CR327368
·         Description: Memory leak of javax.management.ObjectName objects on the Administration Server used to cause OutOfMemory error on the Administration Server.
·         Affected Weblogic version(s): WLS 9.2
·         Fixed in: WLS 10 MP1

This finding was quite conclusive given the perfect match between our Heap Dump analysis, our WLS version and this known problem description.

Conclusion

I hope this tutorial along with its case study has helped you understand how you can pinpoint the source of a Java Heap leak using jmap and the Memory Analyzer tool.
Please don’t hesitate to post any comment or question.
I also provide free Java EE consultation, so please simply email me and provide me with a download link to your Heap Dump file so I can analyze it for you and create an article on this Blog describing your problem, root cause and resolution.

2.20.2011

IBM SDK Heap Dump HttpSession footprint analysis

A JVM Heap Dump is a crucial collection of information that provides a full view of your Java EE application memory footprint. This article provides a step by step tutorial on how you can analyze an AIX IBM SDK Heap Dump in order to identify the Java Heap memory session data footprint of your Java EE Web application.

Please note that we will use a real production system as an example following a session data footprint analysis we did on an older Weblogic 8.1 Web ordering application.

Environment specifications

·         Java EE server: Weblogic 8.1 SP6
·         Hardware: IBM,9117-MMA - PowerPC_POWER6
·         OS: AIX 5.3
·         JDK: IBM AIX SDK 1.4.2 SR9
·         Platform type: Ordering Portal


Monitoring and troubleshooting tools

·         JVM Heap Dump (IBM AIX format)
·         Memory Analyzer 0.6.0.2 (via IBM Support Assistant 4.1)

Problem overview

An increase of the Java Heap memory footprint was observed following a major release of our application. As part of our capacity planning process, it was agreed to delay the production release in order to identify the root cause along with a resolution.

Memory Analyzer background

The Memory Analyzer is one of the best tools available that allows you to load and analyze both HotSpot (xyz.hprof format) and IBM SDK (heapdump.xyz.phd format) Java VM Heap Dump files.

This tool can be downloaded as a plug-in within the IBM Support Assistant (ISA) tool:

http://www-01.ibm.com/software/support/isa/

Heap Dump analysis

The first step is to download the Heap Dump file from your server to your workstation then please follow the instructions below:

1) Open ISA > Launch Activity > Analyze Problem


2) Launch Memory Analyzer > Browse > Remote Artifact > Select your file and click Next


3) Wait until parsing is completed; this may take several minutes depending on Heap Dump size and specifications of your system


4) Once parsing is completed, select Leak Suspects and click Finish



At this point you should get a pie chart with a list of potential leak suspects. In our example, the tool found one leak suspect, as per below. This does not necessarily mean that you are facing a memory leak. The tool is simply presenting you with possible leak candidates or components using a larger portion of your Java Heap memory.



5) Leak suspect analysis

The next step is to scroll down and have a closer look at your leak suspects. As you can see in our example below, the primary problem is that our Weblogic Session Data (client HttpSession object) is showing a footprint between 4.5 MB and 11.5 MB, much larger than the original size before this application change (~ 1 MB).

6) Memory footprint deep dive

The next step is to do a deep dive and better understand why our session data size is so big, i.e. what the main application contributors are.

For this exercise, click on the Overview tab and left-click the portion of the pie chart you are interested in. You will see a menu with a list of options; select List objects > with outgoing references. A new tab will then open, allowing you to inspect this object data further. At this point, you just need to expand the object references and navigate through all object relationships from top to bottom.



In our example, we chose to inspect a Weblogic session data object of 11.5 MB. The idea is to navigate through the object references until we find the root cause. A Weblogic session data object is basically a Hashtable that contains a list of attribute objects. Each such object instance normally represents an attribute of your application HttpSession data. The analysis found several object instances from our application using close to 1 MB of data footprint.
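As a complementary, rough cross-check outside of MAT, you can approximate the footprint of individual session attributes at development time by measuring their serialized size, since HttpSession attributes generally need to be Serializable for session replication anyway. The helper class and attribute names below are purely illustrative:

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class SessionFootprint {

    // Approximate the footprint of a session attribute by measuring
    // its serialized form. This is not the exact heap size (retained
    // size in MAT is more precise), but it is good enough to spot
    // 1 MB+ outliers among session attributes.
    public static int serializedSize(Serializable attribute) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(attribute);
        out.close();
        return bytes.size();
    }

    public static void main(String[] args) throws Exception {
        // Simulated session attributes (names are illustrative only).
        Map<String, Serializable> session = new HashMap<String, Serializable>();
        session.put("userProfile", "small attribute");
        session.put("cachedCatalog", new byte[1024 * 1024]); // ~1 MB outlier

        for (Map.Entry<String, Serializable> entry : session.entrySet()) {
            System.out.println(entry.getKey() + " -> "
                    + serializedSize(entry.getValue()) + " bytes");
        }
    }
}
```

Running a sweep like this over your real session attributes quickly surfaces which ones deserve a closer look in the heap dump.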



Root cause and resolution

The Heap Dump analysis revealed some very clear problems, with a few of our application client session attribute objects using too much Java Heap memory. This analysis allowed us to fix the problem without any further need to profile our application.

Additional JVM Heap Dump snapshots were captured post code fix and were analyzed in order to confirm the problem resolution.

Conclusion

This Heap Dump analysis clearly demonstrates how powerful such Heap Dump data is when combined with a tool like Memory Analyzer.

Heap Dump files allow you to perform a fast analysis of your application footprint without any need to install additional application profilers. There are some scenarios in which you will need a real application profiler, but Heap Dump analysis is quite sufficient in many scenarios and is available out-of-the-box with most JVM vendors.