/ Java EE Support Patterns

5.02.2013

HotSpot 32-bit to 64-bit upgrade: what to look for

This short post will test your knowledge of the JVM and of project delivery skills, especially regarding JVM upgrades. I’m looking forward to your comments and answers on how to approach this type of project in order to stay away from performance problems.

Background

I was recently involved in a problem case affecting a production environment running on Weblogic 10 and the HotSpot JVM 1.6 @32-bit. Given recent challenges and a forecast of increased load, the decision was taken to upgrade the HotSpot JVM 1.6 from 32-bit to 64-bit.

Please note that no change was applied to the JVM arguments.

After a few weeks of functional testing and planning, the upgrade was deployed successfully to the production environment. The next day, however, the support team observed a major performance degradation, including thread lock contention, forcing the deployment team to roll back the upgrade.

The root cause was eventually found and the upgrade will be re-attempted in the near future.

Question:

Based on the above background, provide a list of possible root causes that may explain this performance degradation.

Propose a list of improvements to the project delivery and recommendations on how to properly manage and de-risk this type of upgrade.

Answer:

I often hear the assumption that switching from a 32-bit JVM to a 64-bit JVM will automatically bring a performance improvement. This is only partially true. Performance improvements will only be observed if you were dealing with an existing memory footprint problem prior to the upgrade, such as excessive GC or java.lang.OutOfMemoryError: Java heap space conditions, and if you performed proper tuning & Java heap sizing.

Unfortunately, we often overlook the fact that in a 64-bit JVM process, every native pointer takes up 8 bytes instead of 4. This can result in an increased memory footprint for your application, leading to more frequent GC and performance degradation.

Here is the official explanation from Oracle:

What are the performance characteristics of 64-bit versus 32-bit VMs?

"Generally, the benefits of being able to address larger amounts of memory come with a small performance loss in 64-bit VMs versus running the same application on a 32-bit VM.  This is due to the fact that every native pointer in the system takes up 8 bytes instead of 4.  The loading of this extra data has an impact on memory usage which translates to slightly slower execution depending on how many pointers get loaded during the execution of your Java program.  The good news is that with AMD64 and EM64T platforms running in 64-bit mode, the Java VM gets some additional registers which it can use to generate more efficient native instruction sequences.  These extra registers increase performance to the point where there is often no performance loss at all when comparing 32 to 64-bit execution speed.  
The performance difference comparing an application running on a 64-bit platform versus a 32-bit platform on SPARC is on the order of 10-20% degradation when you move to a 64-bit VM.  On AMD64 and EM64T platforms this difference ranges from 0-15% depending on the amount of pointer accessing your application performs."   

Now back to our original problem case: this memory footprint increase was found to be significant and at the root of our problem. Depending on the GC policy that you are using, an increase in major GC collections will lead to higher JVM & thread pause times, opening the door for thread lock contention and other problems. As you can see below, the upgrade to the 64-bit JVM increased our existing application static memory footprint (tenured space) by 45%.

Java Heap footprint ~900 MB after major collections (32-bit)



 Java Heap footprint ~1.3 GB after major collections (64-bit)



** 45% increase of the application Java heap memory footprint (retained tenured space). Again, this is due to the expanded size of managed native pointers.

The other part of the problem is that no performance and load testing was performed prior to the production implementation, only functional testing. Also, since no change or tuning was applied to the JVM settings, the upgrade automatically triggered an increased frequency of major collections and longer JVM pause times. The final solution involved increasing the Java heap capacity from 2 GB to 2.5 GB and using the Compressed Oops option available with HotSpot JDK 6u23.
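For reference, here is a minimal sketch of the kind of HotSpot start-up arguments involved in such tuning; the values below are illustrative only and not the exact production settings:

java -Xms2560m -Xmx2560m -XX:+UseCompressedOops -verbose:gc -XX:+PrintGCDetails ...

Please note that as of HotSpot JDK 6u23, compressed oops is enabled by default for 64-bit JVMs with Java heap sizes below ~32 GB, so the explicit flag mainly serves to document the intent.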

Now that you understand the root cause, find below my recommendations when performing this type of upgrade:

  • Execute performance and load testing cycles and compare your application memory footprint and GC behaviour before (32-bit) and after (64-bit).
  • Ensure you spend enough time tuning your GC settings and the Java heap size in order to minimize the JVM pause time.
  • If you are using HotSpot JVM 1.6 6u23 and later, take advantage of the Compressed Oops tuning parameter. Compressed oops will allow the HotSpot JVM to represent many (not all) managed pointers as 32-bit object offsets from the 64-bit Java heap base address; resulting in a reduced memory footprint following the upgrade.
  • Perform proper capacity planning of the hardware hosting your JVM processes. Ensure that you have enough physical RAM and CPU capacity to handle the extra memory and CPU footprint associated with this upgrade.
  • Develop a low-risk implementation strategy by upgrading only a certain percentage of your production environment to the 64-bit JVM, e.g. 25%-50%. This will also allow you to compare the behavior with the existing 32-bit JVM processes and make sure performance is aligned with your performance & load testing results.

4.16.2013

java.lang.NullPointerException Minecraft


I have recently been receiving many comments and emails related to Minecraft Java errors such as java.lang.NullPointerException.

I recommend looking at the following YouTube video:




While this Blog is dedicated to general Java and Java EE troubleshooting, I’m still willing to help non-Java programmers and Minecraft gamers who are facing problems such as java.lang.NullPointerException when developing Mods, and to help them understand what null means in Java.
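For readers new to Java: a java.lang.NullPointerException simply means that your code tried to use an object reference that was never assigned (null). Here is a minimal, hypothetical example, unrelated to any specific Minecraft Mod:

public class NullExample {

    public static void main(String[] args) {
        String playerName = null;                 // reference not assigned to any object
        System.out.println(playerName.length());  // throws java.lang.NullPointerException
    }
}

The stack trace generated by such an error points to the exact line where the null reference was used, which is why providing the complete stack trace is so important.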

If you are in fact facing such problem, please follow the steps below:

  • Provide me with the Java error and complete stack trace from Minecraft and/or your code.
  • Post the complete error as a comment of this post (Post a Comment section below).
I also encourage you to search and post your problems to the Minecraft community forum.

4.15.2013

HotSpot GC Thread CPU footprint on Linux


The following question will test your knowledge of garbage collection and high CPU troubleshooting for Java applications running on the Linux OS. This troubleshooting technique is especially crucial when investigating excessive GC and/or CPU utilization.

It assumes that you do not have access to advanced monitoring tools such as Compuware dynaTrace or even JVisualVM. Tutorials using such tools will be presented in the future, but please ensure that you first master the basic troubleshooting principles.

Question:

How can you monitor and calculate how much CPU % each of the Oracle HotSpot or JRockit JVM garbage collection (GC) threads is using at runtime on Linux OS?

Answer:

On the Linux OS, Java threads are implemented as native threads, with each Java thread mapped to a separate Linux task (lightweight process). This means that you are able to monitor the CPU % of any Java thread created by the HotSpot or JRockit JVM using the top -H command (threads toggle view).

That said, depending on the GC policy that you are using and your server specifications, the HotSpot & JRockit JVMs will create a certain number of GC threads that perform the young and old space collections. Such threads can easily be identified by generating a JVM thread dump. As you can see below in our example, the Oracle JRockit JVM created 4 GC threads identified as "(GC Worker Thread X)”.

===== FULL THREAD DUMP ===============
Fri Nov 16 19:58:36 2012
BEA JRockit(R) R27.5.0-110-94909-1.5.0_14-20080204-1558-linux-ia32

"Main Thread" id=1 idx=0x4 tid=14911 prio=5 alive, in native, waiting
    -- Waiting for notification on: weblogic/t3/srvr/T3Srvr@0xfd0a4b0[fat lock]
    at jrockit/vm/Threads.waitForNotifySignal(JLjava/lang/Object;)Z(Native Method)
    at java/lang/Object.wait(J)V(Native Method)
    at java/lang/Object.wait(Object.java:474)
    at weblogic/t3/srvr/T3Srvr.waitForDeath(T3Srvr.java:730)
    ^-- Lock released while waiting: weblogic/t3/srvr/T3Srvr@0xfd0a4b0[fat lock]
    at weblogic/t3/srvr/T3Srvr.run(T3Srvr.java:380)
    at weblogic/Server.main(Server.java:67)
    at jrockit/vm/RNI.c2java(IIIII)V(Native Method)
    -- end of trace

"(Signal Handler)" id=2 idx=0x8 tid=14920 prio=5 alive, in native, daemon

"(GC Main Thread)" id=3 idx=0xc tid=14921 prio=5 alive, in native, native_waiting, daemon

"(GC Worker Thread 1)" id=? idx=0x10 tid=14922 prio=5 alive, in native, daemon

"(GC Worker Thread 2)" id=? idx=0x14 tid=14923 prio=5 alive, in native, daemon

"(GC Worker Thread 3)" id=? idx=0x18 tid=14924 prio=5 alive, in native, daemon

"(GC Worker Thread 4)" id=? idx=0x1c tid=14925 prio=5 alive, in native, daemon
………………………
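As a side note, the number of parallel GC worker threads is normally derived from the number of available CPU cores. On the HotSpot JVM it can also be set explicitly via a start-up argument such as the one below; the value shown is for illustration only:

-XX:ParallelGCThreads=4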

Now let’s put all of these principles together via a simple example.

Step #1 - Monitor the GC thread CPU utilization

The first step of the investigation is to:

  • Identify the native thread id for each GC worker thread shown via the Linux top -H command.
  • Identify the CPU % for each GC worker thread.



Step #2 – Generate and analyze JVM Thread Dumps

At the same time as the Linux top -H capture, generate 2 or 3 JVM thread dump snapshots via kill -3 <Java PID>.

  • Open the JVM Thread Dump and locate the JVM GC worker threads.
  • Now correlate the "top -H" output data with the JVM Thread Dump data by looking at the native thread id (tid attribute).



As you can see in our example, this analysis allowed us to determine that each of our GC worker threads was using around 20% CPU. This was due to major collections happening at that time. Please note that it is also very useful to enable -verbose:gc, as it will allow you to correlate such CPU spikes with minor and major collections and determine how much the JVM GC process is contributing to the overall server CPU utilization.
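One practical note when performing this correlation against the HotSpot JVM: HotSpot thread dumps expose the native thread id via the nid attribute in hexadecimal format, while top -H displays it in decimal. Below is a trivial, hypothetical helper illustrating the conversion (JRockit dumps, as shown above, already expose the decimal tid, so no conversion is required there):

public class ThreadIdConverter {

    // Convert the decimal PID column reported by "top -H" into the
    // hexadecimal nid=0x... format found in HotSpot thread dumps.
    public static void main(String[] args) {
        int linuxThreadId = Integer.parseInt(args.length > 0 ? args[0] : "14922");
        System.out.println("nid=0x" + Integer.toHexString(linuxThreadId));
    }
}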

3.13.2013

OpenJPA: Memory Leak Case Study

This article will provide the complete root cause analysis details and resolution of a Java heap memory leak (Apache OpenJPA leak) affecting an Oracle Weblogic server 10.0 production environment.

This post will also demonstrate the importance of following the Java Persistence API best practices when managing the javax.persistence.EntityManagerFactory lifecycle.

Environment specifications

  • Java EE server: Oracle Weblogic Portal 10.0
  • OS: Solaris 10
  • JDK: Oracle/Sun HotSpot JVM 1.5 32-bit @2 GB capacity
  • Java Persistence API: Apache OpenJPA 1.0.x (JPA 1.0 specifications)
  • RDBMS: Oracle 10g
  • Platform type: Web Portal

Troubleshooting tools

  • Foglight for Java (JVM & Java heap monitoring)
  • Java verbose GC logs
  • HotSpot 1.5 jmap utility (heap dump generation)
  • Eclipse Memory Analyzer Tool (heap dump analysis)

Problem description & observations

The problem was initially reported by our Weblogic production support team following production outages. An initial root cause analysis exercise revealed the following facts and observations:

  • Production outages were observed on a regular basis after ~2 weeks of traffic.
  • The failures were due to Java heap (OldGen) depletion, e.g. java.lang.OutOfMemoryError: Java heap space errors found in the Weblogic logs.
  • A Java heap memory leak was confirmed after reviewing the Java heap OldGen space utilization over time from the Foglight monitoring tool along with the Java verbose GC historical data.



Following the discovery of the above problems, the decision was taken to move to the next phase of the RCA and perform a JVM heap dump analysis of the affected Weblogic (JVM) instances.

JVM heap dump analysis

** A video explaining the following JVM Heap Dump analysis is now available here.

In order to generate a JVM heap dump, the support team used the HotSpot 1.5 jmap utility, which generated a heap dump file (heap.bin) of about ~1.5 GB. The heap dump file was then analyzed using the Eclipse Memory Analyzer Tool (MAT). Now let’s review the heap dump analysis so we can understand the source of the OldGen memory leak.
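For reference, on JDK 6 and later a binary heap dump can typically be captured with a jmap invocation similar to the one below (the JDK 1.5 jmap syntax used by the support team differs slightly):

jmap -dump:format=b,file=heap.bin <Java PID>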

MAT provides an initial Leak Suspects report which can be very useful to highlight your high memory contributors. For our problem case, MAT was able to identify a leak suspect contributing almost 600 MB, or 40% of the total OldGen space capacity.




At this point we found one instance of java.util.LinkedList using almost 600 MB of memory, loaded by one of our application parent class loaders (@ 0x7e12b708). The next step was to understand the leaking objects along with the source of retention.

MAT allows you to inspect any class loader instance of your application, providing you with capabilities to inspect the loaded classes & instances. Simply search for the desired object by providing the address e.g. 0x7e12b708 and then inspect the loaded classes & instances by selecting List Objects > with outgoing references.




As you can see from the above snapshot, the analysis was quite revealing. What we found was one instance of org.apache.openjpa.enhance.PCRegistry at the source of the memory retention; more precisely the culprit was the _listeners field implemented as a LinkedList.

For your reference, the Apache OpenJPA PCRegistry is used internally to track the registered persistence-capable classes. Find below a snippet of the PCRegistry source code from Apache OpenJPA version 1.0.4 exposing the _listeners field.


/**
 * Tracks registered persistence-capable classes.
 *
 * @since 0.4.0
 * @author Abe White
 */
public class PCRegistry {
    // DO NOT ADD ADDITIONAL DEPENDENCIES TO THIS CLASS

    private static final Localizer _loc = Localizer.forPackage
        (PCRegistry.class);

    // map of pc classes to meta structs; weak so the VM can GC classes
    private static final Map _metas = new ConcurrentReferenceHashMap
        (ReferenceMap.WEAK, ReferenceMap.HARD);

    // register class listeners
    private static final Collection _listeners = new LinkedList();
……………………………………………………………………………………

Now the question is: why is the memory footprint of this internal data structure so big, and potentially leaking over time? The next step was to deep dive into the _listeners LinkedList instance in order to review the leaking objects.




We finally found that the leaking objects were actually the JDBC & SQL mapping definitions (metadata) used by our application in order to execute various queries against our Oracle database. A review of the JPA specifications, the OpenJPA documentation and the source code confirmed that the root cause was associated with a wrong usage of javax.persistence.EntityManagerFactory, such as a lack of closure of newly created EntityManagerFactory instances.




If you look closely at the above code snippet, you will realize that the close() method is indeed responsible for cleaning up any recently used metadata repository instance. It also raised another concern: why are we creating such Factory instances over and over…

The next step of the investigation was to perform a code walkthrough of our application code, especially around the life cycle management of the JPA EntityManagerFactory and EntityManager objects.

Root cause and solution

A code walkthrough of the application code revealed that the application was creating a new instance of EntityManagerFactory on each request and not closing it properly.

import javax.annotation.Resource;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.persistence.PersistenceUnit;
import javax.transaction.UserTransaction;

public class Application {

       @Resource
       private UserTransaction utx = null;

       // Initialized on each application request and never closed!
       @PersistenceUnit(unitName = "UnitName")
       private EntityManagerFactory emf = Persistence.createEntityManagerFactory("PersistenceUnit");

       public EntityManager getEntityManager() {
             return this.emf.createEntityManager();
       }

       public void businessMethod() {

             // Create a new EntityManager instance from the newly created EntityManagerFactory instance
             // Do something...
             // Close the EntityManager instance
       }
}

This code defect and improper use of the JPA EntityManagerFactory was causing a leak, or accumulation, of metadata repository instances within the OpenJPA _listeners data structure, as demonstrated by the earlier JVM heap dump analysis.

The solution to the problem was to centralize the management & life cycle of the thread-safe javax.persistence.EntityManagerFactory via the Singleton pattern. The final solution was implemented as per below:

  • Create and maintain only one static instance of javax.persistence.EntityManagerFactory per application class loader, implemented via the Singleton pattern.
  • Create and dispose of new instances of EntityManager for each application request.
Please review this discussion from Stackoverflow, as the solution we implemented is quite similar; a minimal sketch is also shown below.
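The sketch below assumes a standalone JPA 1.0 bootstrap; the class name and persistence unit name are illustrative only and not the actual application code:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class JPAFactoryHolder {

    // One EntityManagerFactory per application class loader:
    // the factory is thread safe and expensive to create.
    private static final EntityManagerFactory EMF =
        Persistence.createEntityManagerFactory("PersistenceUnit");

    private JPAFactoryHolder() {
    }

    // EntityManager instances are cheap but NOT thread safe:
    // create one per request and close it once the work is done.
    public static EntityManager createEntityManager() {
        return EMF.createEntityManager();
    }
}

Each application request then obtains its own EntityManager via JPAFactoryHolder.createEntityManager() and closes it in a finally block once the work is completed.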

Following the implementation of the solution in our production environment, no further Java heap OldGen memory leak has been observed.

Please feel free to provide your comments and share your experience on the same.

3.06.2013

Web and Java learning application


You may have noticed that we started to release training and tutorial videos in 2013.

In order to further improve and simplify your learning process, we will be releasing a free Java and Java EE troubleshooting learning platform (application) in the following weeks.

This application will be platform independent (Tomcat, JBoss, Weblogic, WAS etc.) and allow you to:

  • Improve your Java & Java EE troubleshooting skills such as JVM monitoring & tuning, thread dump analysis, common problem patterns etc.
  • Improve your knowledge on Java programming and troubleshooting.
  • Improve your knowledge on some key design patterns and emerging Web & language technologies.
  • Simulate common performance & stability problems.
  • Practice and learn via programming puzzles.
Some of the future articles and videos available from this Blog will be using this new learning platform for an enhanced learning experience.

We will also leverage this application along with other Web load testing and profiling technologies in order to conduct some future product reviews such as the memory leak analysis tool Plumbr.

I’m looking forward to your comments and suggestions.

Thank you.
Pierre-Hugues Charbonneau
Your author and teacher on Java EE Support Patterns.

2.21.2013

ClassCastException and ClassLoader Puzzle

The following question and puzzle will test your knowledge of Java class loaders and, more precisely, of one of the Java language specification rules. It will also help you better troubleshoot problems such as java.lang.NoClassDefFoundError.

I highly suggest that you do not look at the explanation and solution until you review the code and come up with your own explanation.

You can download the Java program source code and binaries (compiled with JDK 1.7) here. In order to run the program, simply use the following command:

<JDK 1.7 HOME>\bin\java -classpath MainProgram.jar org.ph.javaee.training9.ChildFirstCLPuzzle

** Make sure that you also download the 3 JAR files below before you run the program.

  • MainProgram.jar contains the main program along with super class ProgrammingLanguage.
  • ProgrammingLanguage.jar contains the super class ProgrammingLanguage.
  • JavaLanguage.jar contains the implementation class JavaLanguage, which extends ProgrammingLanguage.

Question (puzzle)

Review closely the program source and packaging along with the diagram below reflecting the class loader delegation model used for this program.



Why can’t we cast (ChildFirstCLPuzzle.java @line 53) the Object javaLanguageInstance of type JavaLanguage, into ProgrammingLanguage?

...............................                   
// Finally, cast the object instance into ProgrammingLanguage
/** Question: why is the following code failing with ClassCastException given the fact JavaLanguage is indeed a ProgrammingLanguage?? **/
ProgrammingLanguage programmingLanguage = (ProgrammingLanguage)javaLanguageInstance;

............................... 

Propose a solution to allow the above cast to succeed without changing the original source code. Hint: look again at the class loader delegation model, packaging and diagram.

Answer & solution

The Java program is attempting to demonstrate a Java language specification rule related to class loaders: two classes loaded by different class loaders are considered to be distinct and hence incompatible.

If you review closely the program source, packaging and diagram, you will realize the following facts:

  • Our main program is loaded to the parent class loader e.g. $AppClassLoader.
  • The super class ProgrammingLanguage is also loaded to the parent class loader since it is referenced by our main program at line 53.
  • The implementation class JavaLanguage is loaded to our child class loader e.g. ChildFirstClassLoader which is following a “child first” delegation model.
  • Finally, the super class ProgrammingLanguage is also loaded to our child class loader.

The key point to understand is that the super class is loaded by 2 different class loaders. As per the Java language specification, two classes loaded by different class loaders are considered to be distinct and hence incompatible. This means that the ProgrammingLanguage class loaded from the “child first” class loader is different from, and not compatible with, the ProgrammingLanguage class loaded from the parent class loader. This is why the cast attempted at line 53 failed with the error below:

ChildFirstCLPuzzle execution failed with ERROR: java.lang.ClassCastException: org.ph.javaee.training9.JavaLanguage cannot be cast to org.ph.javaee.training9.ProgrammingLanguage
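To see this class identity rule in isolation, find below a minimal, self-contained sketch (independent from the downloadable puzzle program) where the same class, loaded by two sibling class loaders, is reported as two distinct and incompatible types:

import java.net.URL;
import java.net.URLClassLoader;

public class DistinctClassDemo {

    public static void main(String[] args) throws Exception {
        // Hypothetical location of the JAR containing ProgrammingLanguage
        URL[] path = { new URL("file:ProgrammingLanguage.jar") };

        // Two sibling class loaders; neither delegates the class to a common parent
        ClassLoader loaderA = new URLClassLoader(path, null);
        ClassLoader loaderB = new URLClassLoader(path, null);

        Class<?> classA = loaderA.loadClass("org.ph.javaee.training9.ProgrammingLanguage");
        Class<?> classB = loaderB.loadClass("org.ph.javaee.training9.ProgrammingLanguage");

        // Same fully qualified name, but two distinct runtime classes
        System.out.println(classA == classB);                // false
        System.out.println(classA.isAssignableFrom(classB)); // false: a cast between their instances would fail
    }
}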

Now how can we fix this problem without actually changing the source code? Well please keep in mind that we are using a “child first” delegation model here. This is why we end up with 2 versions of the same ProgrammingLanguage class. The solution can be visualized as per below.


In order to fix this problem, we can simply ensure that we have only one version of ProgrammingLanguage loaded. Since our main program is referencing ProgrammingLanguage directly, the solution is to remove the ProgrammingLanguage.jar file from the child class loader. This will force the child class loader to look for the class from the parent class loader, problem solved! In order to test the solution, simply remove the ProgrammingLanguage.jar from your testing folder and re-run the program.

I hope you appreciated this puzzle related to the “child first” class loader delegation model and class loader rules. This understanding is especially important when you are dealing with complex Java EE deployments involving many class loaders, which expose you to this type of problem at runtime.

Please do not hesitate to post any comment or question on this puzzle.

2.01.2013

Java 8: From PermGen to Metaspace

As you may be aware, the JDK 8 Early Access is now available for download. This allows Java developers to experiment with some of the new language and runtime features of Java 8.

One of these features is the complete removal of the Permanent Generation (PermGen) space, which Oracle announced at the time of the JDK 7 release. Interned strings, for example, were already moved out of the PermGen space as of JDK 7. The JDK 8 release finalizes its decommissioning.

This article will share the information that we found so far on the PermGen successor: Metaspace. We will also compare the runtime behavior of the HotSpot 1.7 vs. HotSpot 1.8 (b75) when executing a Java program “leaking” class metadata objects.

The final specifications, tuning flags and documentation around Metaspace should be available once Java 8 is officially released.

Metaspace: A new memory space is born

The JDK 8 HotSpot JVM now uses native memory for the representation of class metadata; this new memory space is called the Metaspace, similar to the approach of the Oracle JRockit and IBM JVMs.

The good news is that it means no more java.lang.OutOfMemoryError: PermGen space problems and no need for you to tune and monitor this memory space anymore…not so fast. While this change is invisible by default, we will show you next that you will still need to worry about the class metadata memory footprint. Please also keep in mind that this new feature does not magically eliminate class and classloader memory leaks. You will need to track down these problems using a different approach and by learning the new naming convention.

I recommend that you read the PermGen removal summary and comments from Jon on this subject. 

In summary:

PermGen space situation

  • This memory space is completely removed.
  • The PermSize and MaxPermSize JVM arguments are ignored and a warning is issued if present at start-up.

Metaspace memory allocation model

  • Most allocations for the class metadata are now allocated out of native memory.
  • The klasses that were used to describe class metadata have been removed.

Metaspace capacity

  • By default, class metadata allocation is limited by the amount of available native memory (the capacity will of course depend on whether you use a 32-bit or 64-bit JVM, along with the OS virtual memory availability).
  • A new flag is available (MaxMetaspaceSize), allowing you to limit the amount of native memory used for class metadata. If you don’t specify this flag, the Metaspace will dynamically re-size depending on the application demand at runtime.

Metaspace garbage collection

  • Garbage collection of the dead classes and classloaders is triggered once the class metadata usage reaches the “MaxMetaspaceSize”.
  • Proper monitoring & tuning of the Metaspace will obviously be required in order to limit the frequency or delay of such garbage collections. Excessive Metaspace garbage collections may be a symptom of a class / classloader memory leak or of inadequate sizing for your application.

Java heap space impact

  • Some miscellaneous data has been moved to the Java heap space. This means you may observe an increase of the Java heap space following a future JDK 8 upgrade.

Metaspace monitoring

  • Metaspace usage is available from the HotSpot 1.8 verbose GC log output.
  • Jstat & JVisualVM have not been updated at this point based on our testing with b75 and the old PermGen space references are still present.

Enough theory now, let’s see this new memory space in action via our leaking Java program…

PermGen vs. Metaspace runtime comparison

In order to better understand the runtime behavior of the new Metaspace memory space, we created a class metadata leaking Java program. You can download the source here.
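For readers who prefer to see the idea rather than download the binaries, find below a minimal sketch of one possible way to leak class metadata; it assumes the javassist library is available on the classpath and may differ from the actual downloadable program:

import java.util.HashMap;
import java.util.Map;

import javassist.ClassPool;
import javassist.CtClass;

public class ClassMetadataLeakSimulator {

    // Keeping the generated classes reachable pins their metadata in PermGen / Metaspace
    private static final Map<String, Class<?>> LOADED_CLASSES = new HashMap<String, Class<?>>();
    private static final int ITERATIONS = 50000; // matches the 50K iterations used in our test runs

    public static void main(String[] args) {
        ClassPool pool = ClassPool.getDefault();
        try {
            for (int i = 0; i < ITERATIONS; i++) {
                String className = "org.ph.javaee.generated.GeneratedClass" + i;
                CtClass ctClass = pool.makeClass(className);      // generate a brand new class definition
                LOADED_CLASSES.put(className, ctClass.toClass()); // load it into the JVM
            }
        } catch (Throwable any) {
            System.out.println("ERROR: " + any);
        }
        System.out.println("Done!");
    }
}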

The following scenarios will be tested:

  • Run the Java program using JDK 1.7 in order to monitor & deplete the PermGen memory space set at 128 MB.
  • Run the Java program using JDK 1.8 (b75) in order to monitor the dynamic increase and garbage collection of the new Metaspace memory space.
  • Run the Java program using JDK 1.8 (b75) in order to simulate the depletion of the Metaspace by setting the MaxMetaspaceSize value at 128 MB.

JDK 1.7 @64-bit – PermGen depletion

  • Java program with 50K configured iterations
  • Java heap space of 1024 MB
  • Java PermGen space of 128 MB (-XX:MaxPermSize=128m)


As you can see from JVisualVM, the PermGen depletion was reached after loading about 30K+ classes. We can also see this depletion from the program and GC output.

Class metadata leak simulator
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com
ERROR: java.lang.OutOfMemoryError: PermGen space


Now let’s execute the program using the HotSpot JDK 1.8 JRE.

JDK 1.8 @64-bit – Metaspace dynamic re-size

  • Java program with 50K configured iterations
  • Java heap space of 1024 MB
  • Java Metaspace space: unbounded (default)





As you can see from the verbose GC output, the JVM Metaspace expanded dynamically from 20 MB up to 328 MB of reserved native memory in order to honor the increased class metadata memory footprint from our Java program. We could also observe garbage collection events as the JVM attempted to destroy any dead class or classloader objects. Since our Java program is leaking, the JVM had no choice but to dynamically expand the Metaspace memory space.

The program was able to run its 50K iterations with no OOM event and loaded 50K+ classes.

Let's move to our last testing scenario.

JDK 1.8 @64-bit – Metaspace depletion

  • Java program with 50K configured iterations
  • Java heap space of 1024 MB
  • Java Metaspace space: 128 MB (-XX:MaxMetaspaceSize=128m)





As you can see from JVisualVM, the Metaspace depletion was reached after loading about 30K+ classes, very similar to the run with JDK 1.7. We can also see this from the program and GC output. Another interesting observation is that the reserved native memory footprint was twice the maximum size specified. This may indicate some opportunity to fine-tune the Metaspace re-size policy, if possible, in order to avoid native memory waste.

Now find below the Exception we got from the Java program output.

Class metadata leak simulator
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com
ERROR: java.lang.OutOfMemoryError: Metadata space
Done!

As expected, capping the Metaspace at 128 MB, as we did for the baseline run with JDK 1.7, did not allow us to complete the 50K iterations of our program. A new OOM error was thrown by the JVM from the Metaspace following a memory allocation failure.

#metaspace.cpp



Final words

I hope you appreciated this early analysis of and experiment with the new Java 8 Metaspace. The current observations definitely indicate that proper monitoring & tuning will be required in order to stay away from problems such as excessive Metaspace GC or the OOM conditions triggered in our last testing scenario. Future articles may include performance comparisons in order to identify potential performance improvements associated with this new feature.

Please feel free to provide any comment.