Java EE Support Patterns: JDBC

5.09.2011

SocketInputStream hang defaultRowPrefetch tuning

This case study provides background on this type of problem and demonstrates how you can improve the performance of some of your SQL queries by tuning the Oracle JDBC driver defaultRowPrefetch value.

Environment specifications (case study)

·         Java EE server: Oracle Weblogic Server 10.3.3
·         Hardware: HP DL360 – 2 x Intel Xeon Processor x5670 (6 cores) @2.93 GHz & 72 GB RAM
·         OS: 64-bit Red Hat Enterprise Linux (RHEL) version 5.5
·         JDK: Sun 64-bit JDK/JRE 1.6.0_11+
·         RDBMS: Oracle 10gR2
·         Database Persistence API: Oracle Toplink version 11.1.1.3.0
·         Platform type: Middle tier platform


Monitoring and troubleshooting tools

-          UNIX / Linux traceroute command
-          Oracle AWR Report (to assess the health of the database and SQL response time)

Problem overview

-          Problem type: Oracle JDBC driver Socket read hang and slower SQL response time from one regional location of our Weblogic application server

Higher response times were observed from one of our Weblogic application server locations. Our environment is built as per below:

-          The application is deployed across 2 different regional locations (region #1 & region #2) with one Weblogic cluster deployed in each region
-          The primary Oracle database server is deployed to the region #1
-          The cold standby Oracle database (used for fail-over only) is deployed to the region #2
-          The Weblogic cluster #1 is deployed to the region #1
-          The Weblogic cluster #2 is deployed to the region #2

Major slowdown was observed for SQL transactions executed from region #2 (cross regional communication with the database).

Gathering and validation of facts

As usual, a Java EE problem investigation requires gathering technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:

·         What is the client impact? MEDIUM (higher response time observed in an intermittent manner; depending on which region the user is hitting)
·         Is slowdown affecting all transactions? No, slowdown was isolated to transactions executed from Weblogic cluster #2; involved in a cross regional communication with our Oracle database
·         Did DBA investigate the health of the database and SQL performance? Yes, no problem was found with the database and no slow running SQL was found from the Oracle AWR report

-          Conclusion #1: The problem appears to be isolated to transactions executed from Weblogic cluster #2, which communicates with the primary Oracle database server located in region #1. The decision was then taken to assess any network latency problem between region #1 and region #2


Network latency analysis

Network latency between the Weblogic clusters and the Oracle database server was measured at high level via the traceroute command.

Latency between Weblogic cluster #1 (region #1) and database server (region #1)

bash-2.05$ traceroute <server hostname>

traceroute: Warning: Multiple interfaces found; using xyz.xyz.xyz.xyz @ eri0
traceroute to <server hostname> (xyz.xyz.xyz.xyz), 30 hops max, 40 byte packets
1  <server hostname> (xyz.xyz.xyz.xyz)  0.228 ms  0.151 ms  0.335 ms



** Latency of ~0.3 ms with no HOP in between **

Latency between Weblogic cluster #2 (region #2) and database server (region #1)
$ traceroute <db server hostname>
traceroute: Warning: Multiple interfaces found; using xyz.xyz.xyz.xyz @ eri0
traceroute to <db server hostname> (xyz.xyz.xyz.xyz), 30 hops max, 40 byte packets
1  <router hostname> (xyz.xyz.xyz.xyz)  1.854 ms  1.786 ms  1.460 ms
2  <router hostname> (xyz.xyz.xyz.xyz)  0.298 ms  0.284 ms  0.322 ms
3  <router hostname> (xyz.xyz.xyz.xyz)  0.648 ms  0.389 ms  0.414 ms
4  <router hostname> (xyz.xyz.xyz.xyz)  0.294 ms  0.562 ms  0.330 ms
5  <router hostname> (xyz.xyz.xyz.xyz)  0.695 ms  0.837 ms  0.695 ms
6  <router hostname> (xyz.xyz.xyz.xyz)  10.180 ms  10.173 ms  10.039 ms
7  <router hostname> (xyz.xyz.xyz.xyz)  20.091 ms  19.767 ms  20.157 ms
8  <router hostname> (xyz.xyz.xyz.xyz)  9.732 ms  9.908 ms  9.464 ms
9  <db server hostname> xyz.xyz.xyz.xyz  9.814 ms  10.238 ms  9.765 ms


** Latency of ~50 ms with 8 HOPS (router, switch etc.) in between **


Graphical view of the network latency between region #1 and region #2

-          Conclusion #2: A higher network latency was confirmed between Weblogic cluster #2 and the database server (region #1). The next question was to determine why such latency was causing so much impact on the SQL response time


Thread Dump analysis

A few Thread Dump snapshots were taken from Weblogic cluster #2, which revealed the following bottleneck:

"ExecuteThread: '61' for queue: 'weblogic.kernel.Default'" daemon prio=5 tid=0x00970d58 nid=0x51 runnable [a8b7e000..a8b7fc28]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at oracle.net.ns.Packet.receive(Unknown Source)
at oracle.net.ns.NetInputStream.getNextPacket(Unknown Source)
at oracle.net.ns.NetInputStream.read(Unknown Source)
at oracle.net.ns.NetInputStream.read(Unknown Source)
at oracle.net.ns.NetInputStream.read(Unknown Source)
at oracle.jdbc.ttc7.MAREngine.unmarshalUB1(MAREngine.java:718)
at oracle.jdbc.ttc7.MAREngine.unmarshalSB1(MAREngine.java:690)
at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:373)
at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1405)
at oracle.jdbc.ttc7.TTC7Protocol.fetch(TTC7Protocol.java:889)
- locked <0xc7660c90> (a oracle.jdbc.ttc7.TTC7Protocol)
at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:242)
- locked <0xbaaa51d0> (a oracle.jdbc.driver.OracleResultSetImpl)
at weblogic.jdbc.wrapper.ResultSet_oracle_jdbc_driver_OracleResultSetImpl.next(Unknown Source)
at oracle.toplink.internal.databaseaccess.DatabaseAccessor.executeCall(Unknown Source)
at oracle.toplink.threetier.ServerSession.executeCall(Unknown Source)
at oracle.toplink.internal.queryframework.CallQueryMechanism.executeCall(Unknown Source)
at oracle.toplink.internal.queryframework.CallQueryMechanism.executeCall(Unknown Source)
at oracle.toplink.internal.queryframework.CallQueryMechanism.executeSelectCall(Unknown Source)
at oracle.toplink.internal.queryframework.CallQueryMechanism.selectAllRows(Unknown Source)
at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.selectAllRowsFromTable(Unknown Source)
at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.selectAllRows(Unknown Source)
at oracle.toplink.queryframework.ReadAllQuery.execute(Unknown Source)
…………………………………………………………………………………………………………………………………………………………………………………………………

As you can see in the above Thread Dump snippet, most of the time was spent by our application / Toplink fetching the data while iterating through the ResultSet object.

The Oracle JDBC driver is configured with a default row prefetch of 10: if the result set of an SQL query contains more than 10 rows, the driver iteratively fetches 10 rows at a time until the full result set is consumed. The fetch size is the number of rows the driver keeps in its buffer for each round trip to the database.

It was found that many of our SQL queries were returning between about 100 and 2000 data rows per execution, so we approximately calculated the network latency impact based on the default JDBC driver fetch settings:

·       Formula:  Query Response Time = Query Execution Time + (network latency * (#rows returned / defaultRowPrefetch))
·       The SQL used for our problem reproduction was returning 1500 data rows with an Oracle SQL execution time of only 100 ms:
·       Total Query response time (WL cluster #2) = 100ms + (50ms * (1500 / 10)) = 7600 ms
·       Total Query response time (WL cluster #1) = 100ms + (0.3ms * (1500 / 10)) = 145 ms

As you can see from the above results, the difference in response time between the two regional Weblogic clusters was drastic and completely dependent on the network latency. The combination of higher network latency from region #2 to region #1 and a low default fetch value was identified as the root cause of the problem.
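
To make the arithmetic explicit, below is a minimal Java sketch of the estimation formula (illustrative only; the class and method names are my own):

public class FetchLatencyEstimator {

    // Estimated response time (ms) = SQL execution time + one network round
    // trip for each fetch of 'defaultRowPrefetch' rows
    static double estimateResponseTimeMs(double sqlExecTimeMs, double latencyMs,
                                         int rowsReturned, int defaultRowPrefetch) {
        double roundTrips = Math.ceil((double) rowsReturned / defaultRowPrefetch);
        return sqlExecTimeMs + (latencyMs * roundTrips);
    }

    public static void main(String[] args) {
        // Case study numbers: 1500 rows returned, 100 ms SQL execution time
        System.out.println(estimateResponseTimeMs(100, 50, 1500, 10));  // 7600.0 ms (cluster #2)
        System.out.println(estimateResponseTimeMs(100, 0.3, 1500, 10)); // 145.0 ms (cluster #1)
    }
}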


Solution

Since network latency could not be improved in our situation, the next logical solution was to increase the default Oracle JDBC fetch size to a higher value (500 in our case) in order to keep the number of round trips to the database to a minimum. This was achieved by adding a Property in the Weblogic Connection Pool settings:
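
In the Weblogic console, this corresponds to adding defaultRowPrefetch=500 to the Properties field of the JDBC Connection Pool. For a standalone JDBC connection, the same setting can be passed as a connection property; the sketch below uses hypothetical credentials and URL:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PrefetchTuningExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "appUser");            // hypothetical credentials
        props.put("password", "appPassword");
        props.put("defaultRowPrefetch", "500");  // Oracle JDBC driver row prefetch
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", props); // hypothetical URL
        try {
            // execute queries; the driver now fetches up to 500 rows per round trip
        } finally {
            conn.close();
        }
    }
}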

·         Find below the SQL response time after increasing defaultRowPrefetch to 500:
·         Total Query response time (WL cluster #2) = 100ms + (50ms * (1500 / 500)) = 250 ms
·         Total Query response time (WL cluster #1) = 100ms + (0.3ms * (1500 / 500)) = 100 ms

As you can see, the above results were quite conclusive: we were able to reduce the impact of the network latency to a minimum by increasing the Oracle JDBC driver defaultRowPrefetch.

Recommendations

I hope this article has helped you understand the analysis approach and how this simple tuning can have such a positive impact on your SQL response time. Find below my general recommendations regarding defaultRowPrefetch tuning:

-          When facing an abnormal slowdown of your SQL response time, review your network latency and the number of rows returned by your application. You may benefit from increasing the Oracle JDBC driver defaultRowPrefetch to a higher value if your SQL queries return 50+ rows
-          Be careful and do not increase the defaultRowPrefetch too high. Keep in mind that the prefetched rows are stored in the JDBC driver memory buffer, so always do proper analysis and capacity planning, and assess any negative impact on your Java memory following any defaultRowPrefetch increase

3.28.2011

Network Adapter could not establish the connection problem using Weblogic 8.1

This case study describes the complete steps from root cause analysis to resolution of an intermittent Weblogic Connection Pool connectivity problem experienced between an Oracle Weblogic 8.1 application server and an Oracle 10g database.

It will also demonstrate the importance for an application support person of mastering basic network troubleshooting skills and techniques in order to perform proper problem isolation and root cause analysis for this type of problem.

Environment specifications

·         Java EE server: Oracle Weblogic Platform 8.1 SP6
·         OS: AIX 5.3 TL9 64-bit
·         JDK: IBM JRE 1.4.2 SR13 32-bit
·         RDBMS: Oracle 10gR2
·         Platform type: Ordering application

Monitoring and troubleshooting tools

·         AIX 5.3 PING command
·         AIX 5.3 TRACEROUTE command
·         Weblogic dbping utility

Problem overview

-          Problem type: The DBMS driver exception was: Io exception: The Network Adapter could not establish the connection

An intermittent connectivity problem was observed in our production environment between our application server and database server. The above Weblogic error was spotted in our logs during problem reproduction.

Initial problem mitigation involved restarting the affected Weblogic managed server(s) almost on a daily basis until a successful connection was established with the remote database server.

Gathering and validation of facts

A Java EE problem investigation requires gathering technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:

·         What is the client impact? Low, problem was intermittent and our platform has proper load balancing and fail-over in place
·         Recent change of the affected platform? No
·         Any recent traffic increase to the affected platform? No
·         Any recent activity or restart of the application server or database server? Yes, the application server is restarted on daily basis. The remote database server was last physically restarted a few weeks ago following a network incident in the server farm
·         How long has this problem been observed? For a few weeks
·         Is the JDBC Connection Pool connectivity problem consistent or intermittent? The problem is intermittent
·         Did a restart of the Weblogic server resolve the problem? No, it is currently only used as a mitigation strategy
·         Did the DBA team find any problem with the Oracle 10g database? No problem was found with the database itself
·         Did the support team analyze the Weblogic logs for errors? Yes, a Weblogic JDBC error was found; as per the Weblogic / JDBC driver documentation, it indicates that the JDBC driver is unable to physically connect to the remote Oracle database

·         Conclusion #1: The problem and error type appear to point to a network / connectivity problem between the Weblogic application server and the remote database server
·         Conclusion #2: The recent network problem and physical restart of the Oracle database server are potential trigger suspects

Weblogic error log review

The error below was found during problem reproduction. Such error prevented the initialization and deployment of our primary application JDBC Data Source and application.
                    
<Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "<App Conn Pool>": Io exception: The Network Adapter could not establish the connection

<Error> <JDBC> <BEA-001150> <Connection Pool "<App Conn Pool>" deployment failed with the following error: 0:Could not create pool connection. The DBMS driver exception was: Io exception: The Network Adapter could not establish the connection.>

<Error> <JDBC> <BEA-001151> <Data Source "<App DS>" deployment failed with the following error: DataSource(App DS) can't be created with non-existent Pool (connection or multi) (App Conn Pool).>

Network health check using PING, TRACEROUTE and other utilities

Given the intermittent behaviour of this problem, the support team decided to perform some additional analysis of the network situation between our application and database server. The AIX PING command was used for that purpose as per below.

# Send 5 packets of 64 bytes to the remote database IP address
ping -c 5 -q -s 64 <IP address>

Network packet loss
# Validate the connectivity and route through the different HOP(s) from the source server (Weblogic application server) to the destination server (Oracle 10g database server)
traceroute <IP address>

Traceroute connectivity problem

As per the above results, ~50% packet loss was found between our application and database server. The intermittent connectivity problem was also reproduced using the traceroute command.

Please note that Weblogic also has a database "ping" utility that you can use to test network connectivity and database listener availability from the Weblogic physical server to the remote DB server. This utility basically simulates the creation of a new JDBC Connection via java.sql.DriverManager.
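
A minimal Java sketch of what this utility does under the hood (the URL and credentials below are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class ManualDbPing {
    public static void main(String[] args) throws Exception {
        // Load the Oracle thin driver, then attempt a physical connection
        Class.forName("oracle.jdbc.driver.OracleDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password");
        try {
            System.out.println("Connection established successfully");
        } finally {
            conn.close(); // always release the physical connection
        }
    }
}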


DB Ping Usage

<JAVA_HOME>/bin/java -classpath <WL_HOME>/<WL_SERVER_HOME>/server/lib/weblogic.jar utils.dbping ORACLE_THIN <dbUserName> <dbPassword> <dbURL>


DB Ping - Other RDBMS Provider Usage

java utils.dbping DB2B          [-d dynamicSections] USER PASS HOST:PORT/DBNAME
java utils.dbping JCONN2        USER PASS HOST:PORT/DBNAME
java utils.dbping JCONN3        USER PASS HOST:PORT/DBNAME
java utils.dbping JCONNECT      USER PASS HOST:PORT/DBNAME
java utils.dbping INFORMIXB     USER PASS HOST:PORT/DBNAME/INFORMIXSERVER
java utils.dbping MSSQLSERVERB  USER PASS HOST:PORT/[DBNAME]
java utils.dbping MYSQL         USER PASS [HOST][:PORT]/[DBNAME]
java utils.dbping ORACLEB       USER PASS HOST:PORT/DBNAME
java utils.dbping ORACLE_THIN   USER PASS HOST:PORT:DBNAME
java utils.dbping POINTBASE     USER PASS HOST[:PORT]/DBNAME
java utils.dbping SYBASEB       USER PASS HOST:PORT/DBNAME



Finally, you can also use the TELNET command for that same purpose:

Telnet Usage
telnet <DB hostname> <DB listener port>
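
If telnet is not available, a basic Java socket test achieves the same port health check; the host, port and timeout below are assumptions:

import java.net.InetSocketAddress;
import java.net.Socket;

public class ListenerPortCheck {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        try {
            // Attempt a raw TCP connection to the DB listener with a 5 second timeout
            socket.connect(new InetSocketAddress("dbhost", 1521), 5000);
            System.out.println("DB listener port is reachable");
        } finally {
            socket.close();
        }
    }
}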

Network sniffer analysis

Following these findings, our application support team engaged a network sniffer team to troubleshoot the problem further. Analysis was done by sniffing the inbound and outbound traffic packets generated by the ping and traceroute commands at the network switch level, between the source and destination servers.

The sniffer team found that the lost packets were actually not coming out of the remote database server, which further isolated the problem to the remote database server itself.

Suspected root cause

The combination of the gathered facts led the application and network support teams to conclude on a routing problem affecting the Oracle 10g database server and causing intermittent but consistent packet loss with our application server.

Given the recent network problem in the server farm and physical reboot of the server, it was suspected that the root cause of the problem was related to an invalid ARP table of the network switch and/or our Oracle database server.

ARP stands for Address Resolution Protocol. To deliver an IP packet to the next hop, the packet is encapsulated in an Ethernet frame. Such a frame must contain a destination address, which is determined by inspecting the ARP cache table. If the table does not have an entry, the switch will issue an ARP request and wait for a response from the next hop. Any problem with this cached table can lead to routing and connectivity problems, requiring a reset.
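
For quick validation, the ARP cache itself can be inspected from the OS. The commands below are a general sketch; exact options vary across AIX / Linux versions and deleting entries typically requires root privileges:

# Display the current ARP cache entries
arp -a

# Delete the cached entry for a given host, forcing a fresh ARP request
arp -d <hostname>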

Resolution and results

As per the root cause analysis, the physical server support team proceeded with a physical reboot of the affected database server, which reset/cleared the ARP tables at both the switch and server level.

The results were quite conclusive post restart: packet loss dropped from ~50% to 0%. The traceroute command also indicated fast connectivity with no delay.

Conclusion and recommendations

·         When facing “The Network Adapter could not establish the connection” problem with Weblogic, do not assume that the database server is down; gather all the data and facts instead and proceed with simple network problem isolation using ping, traceroute and telnet (port health check)
·         Perform your due diligence and problem isolation before engaging a network sniffer team; this will help you speed up the root cause analysis process
·         Please make sure to keep track of key events/deployments inside your environment, including network problem episodes, as those types of events are often triggers of Java EE related problems

3.02.2011

Connection pool leak using Hibernate 3.2.3 and Weblogic 10.3

This case study describes the complete steps from root cause analysis to resolution of a JDBC connection pool leak problem experienced with Oracle Weblogic 10.3 and using the open source Java persistence framework Hibernate 3.2.3.

This case study will also demonstrate the importance of best coding practices when using Hibernate and how a simple code problem can have severe consequences on a production environment.

Environment specifications
 
·         Java EE server: Oracle Weblogic Portal 10.3
·         OS: AIX 5.3 TL9 64-bit
·         JDK: IBM JRE 1.6.0 SR2 64-bit
·         RDBMS: Oracle 11g
·         Persistence API: Hibernate 3.2.3
·         Platform type: Portal application


Monitoring and troubleshooting tools
·         Enterprise internal remote JMX client monitoring tool

Problem overview
 
·         Problem type: JDBC Connection Pool Leak

JDBC Data Source Connection Pool depletion was observed in our production environment following an upgrade from Weblogic Portal 8.1 to Weblogic Portal 10.3.

Initial problem mitigation involved restarting all the Weblogic managed servers almost every hour, until the decision was taken to roll back to the previous Weblogic Portal 8.1 environment.

Gathering and validation of facts

As usual, a Java EE problem investigation requires gathering technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:

·         What is the client impact? HIGH
·         Recent change of the affected platform? Yes, the application was migrated recently from Weblogic Portal 8.1 to Weblogic Portal 10.3. Also, the application code was migrated from EJB Entity Beans to Hibernate for the read and write operations to the Oracle database
·         Any recent traffic increase to the affected platform? No
·         How long has this problem been observed? Since the upgrade project deployment
·         Is the JDBC Connection Pool depletion happening suddenly or over time? It was observed via our internal enterprise JMX client monitoring tool that connection pool utilization increases over time at a fast rate. The current pool capacity is 25
·         Did a restart of the Weblogic server resolve the problem? No, rollback to the previous environment was required

·         Conclusion #1: The problem is related to a JDBC Connection Pool leak of the primary application JDBC data source
·         Conclusion #2: This problem correlates with the Weblogic upgrade and the migration of our application code to Hibernate

JDBC Pool utilization

The history of JDBC Pool utilization was captured using our internal enterprise remote JMX monitoring tool. Such monitoring is achieved by connecting remotely to the Weblogic server and pulling details from the runtime Weblogic MBeans:

// Note: 'adapter' and CanonicalObjectNameObjectCacheFactory are part of our
// internal monitoring tool; the attributes queried below are standard Weblogic
// JDBC data source runtime MBean metrics.
StringBuffer canonicalObjNameBuffer = new StringBuffer();

// MBean query: build the canonical name of the JDBC data source runtime MBean
canonicalObjNameBuffer.append("com.bea:Name=");
canonicalObjNameBuffer.append(jdbcDSName);      // JDBC data source name
canonicalObjNameBuffer.append(",ServerRuntime=");
canonicalObjNameBuffer.append(instanceName);    // Weblogic managed server name

canonicalObjName = CanonicalObjectNameObjectCacheFactory.getInstance().getCanonicalObjectName(canonicalObjNameBuffer.toString(), null);

// JDBC data source MBean metrics extraction
int activeConnectionCount = (Integer) adapter.getJMXService().getJMXConnection().getMBeanAttribute(canonicalObjName, "ActiveConnectionsCurrentCount");
int leakedConnectionCount = (Integer) adapter.getJMXService().getJMXConnection().getMBeanAttribute(canonicalObjName, "LeakedConnectionCount");
long reservedRequestCount = (Long) adapter.getJMXService().getJMXConnection().getMBeanAttribute(canonicalObjName, "ReserveRequestCount");
int connectionDelayTime = (Integer) adapter.getJMXService().getJMXConnection().getMBeanAttribute(canonicalObjName, "ConnectionDelayTime");

The graph below represents a history of the ActiveConnectionsCurrentCount MBean metric. This corresponds to your current connection pool utilization.  


The results were quite conclusive, revealing the connection pool leak along with sudden surges leading to full depletion.

Error log review

The log review revealed the following error during these 2 episodes of connection leak surges. There was a very good correlation between the number of errors found in the logs and the increase in leaked connections observed in the graph. The error was thrown during execution of the Hibernate Session.flush() method due to a null value injection.

Unexpected exception has occured: org.hibernate.PropertyValueException: not-null property references a null or transient value: app.AppComponent.method
               at org.hibernate.engine.Nullability.checkNullability(Nullability.java:72)
               at org.hibernate.event.def.DefaultFlushEntityEventListener.scheduleUpdate(DefaultFlushEntityEventListener.java:263)
               at org.hibernate.event.def.DefaultFlushEntityEventListener.onFlushEntity(DefaultFlushEntityEventListener.java:121)
               at org.hibernate.event.def.AbstractFlushingEventListener.flushEntities(AbstractFlushingEventListener.java:196)
               at org.hibernate.event.def.AbstractFlushingEventListener.flushEverythingToExecutions(AbstractFlushingEventListener.java:76)
               at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:26)
               at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000)
               at app.DaoComponent.insert

Weblogic Profile Connection Leak

The next step was to enable the Weblogic Profile Connection Leak diagnostic; similar to what we did for another JDBC connection pool leak using Weblogic 9.0.


The exercise revealed the source of the leak: one of our application DAOs using Hibernate; the same DAO component which was generating errors in our logs due to null value injection.

Hibernate best practices and code review

Proper coding best practices are important when using Hibernate to ensure proper Session / Connection resource closure. Any Hibernate Session, along with its JDBC resources, must be closed in a finally{} block to properly handle any failure scenario. Such a finally{} block must itself be shielded against any failure condition, by adding a proper try{} catch{} block when applicable, in order to guarantee closure of the Hibernate session.

The code analysis revealed a problem with the Hibernate Session / Connection closure code: execution of the Hibernate session.close() was bypassed whenever an Exception was thrown during execution of session.flush().

finally {
    if (session != null) {
        session.flush();  // PROBLEM: flush() can throw an exception here...
        session.close();  // ...bypassing close() and leaking the JDBC connection
    }
}

Root cause

The root cause of the connection leak was concluded as a combination code defect within our DAO component. The code portion that is taking care of the Hibernate session flush was not embedded in a try{} catch{} block. Any failure of the Session.flush() was bypassing closure of the Hibernate session; triggering a leak of the associated JDBC connection.

Solution and results

A code fix was applied to the problematic DAO component as per below and deployed to our production environment following proper testing.

BEFORE (Hibernate Session Closure Code Problem): the snippet shown above, where session.flush() runs unprotected inside the finally{} block.

AFTER (Hibernate Session Closure Code Fix): see the sketch below.
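
A minimal sketch of the corrected closure logic, as described above (the exception type and logger are assumptions):

finally {
    if (session != null) {
        try {
            session.flush();
        } catch (HibernateException e) {
            // Log and proceed: a flush failure must not prevent session closure
            log.error("Hibernate session flush failed", e);
        } finally {
            session.close(); // always executed, releasing the pooled JDBC connection
        }
    }
}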

The results were quite conclusive and did confirm a complete resolution of the JDBC connection pool leak.

Conclusion and recommendations

·         Perform regular code walkthrough of any new code with an increased focus for any new code dealing with JDBC and third party API implementation such as Hibernate. Always ensure that your JDBC and Hibernate related resource closure code is bullet proof against any negative scenario.
·         Perform regular monitoring of your production Weblogic JDBC Pool utilization and ensure no JDBC Pool leak is present (proactive monitoring). The same exercise should be done in your load testing environment before deploying a major release.

2.28.2011

Enabling Oracle JDBC Driver Debug

Problems with the Oracle JDBC Driver can be hard to pinpoint. Sometimes, a particular SQLException might be a symptom of another problem such as intermittent network latency, packet loss etc. The good news is that Oracle has a “debug” version of the JDBC Driver available which provides a lot of debugging information.

This article will provide you a step-by-step guide on how to set up and activate the “debug” version of the Oracle JDBC Driver within Weblogic 11g.

Environment specifications

·         Java EE server: Weblogic 11g
·         JDK: HotSpot VM 1.6
·         Oracle JDBC Driver: Oracle JDBC Driver version - 11.1.0.6.0-Production
·         Oracle JDBC Driver Library: ojdbc6_g.jar

Library and configuration file location

The debug library is located within your Weblogic 11g home directory as per below:
<WL11g HOME>/wlserver_10.3/server/ext/jdbc/oracle/11g/ojdbc6_g.jar


You will also need to edit and configure the “OracleLog.properties” file as per your desired logging level along with the output log file name and location. 

The default configuration file is located under:
<WL11g HOME>/wlserver_10.3/server/ext/jdbc/oracle/11g/OracleLog.properties

Installation within Weblogic 11g system classpath

In order to use the debug library, you first have to add it within the Weblogic 11g system classpath. You can add it to the commEnv.sh / commEnv.cmd at the beginning of the WEBLOGIC_CLASSPATH or to the Weblogic PRE_CLASSPATH. 

The commEnv script file is located under:
<WL11g HOME>/wlserver_10.3/common/bin/


** Please note that any PRE_CLASSPATH present in setDomainEnv.sh can override entries in the WEBLOGIC_CLASSPATH, leaving your change without effect. In that situation, simply add ojdbc6_g.jar to the PRE_CLASSPATH of your WL11g domain setDomainEnv.sh, or add the library in commEnv.sh via the line below:

# Add ojdbc6_g.jar first in PRE_CLASSPATH (UNIX syntax; on Windows use %WL_HOME% and ';' separators)
PRE_CLASSPATH=${WL_HOME}/server/ext/jdbc/oracle/11g/ojdbc6_g.jar:${PRE_CLASSPATH}
export PRE_CLASSPATH

Debug library activation

The next step is to add 2 Java System Property parameters to your JVM start-up arguments:

-Doracle.jdbc.Trace=true
-Djava.util.logging.config.file=/tempDir/OracleLog.properties

The first –D parameter activates the JDBC debug trace.
The second –D parameter specifies the location of your chosen debug library property file (see details below).
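
For a Weblogic domain, these parameters can simply be appended to the JAVA_OPTIONS variable in setDomainEnv.sh, as sketched below (paths are examples):

# setDomainEnv.sh - append the Oracle JDBC trace parameters
JAVA_OPTIONS="${JAVA_OPTIONS} -Doracle.jdbc.Trace=true -Djava.util.logging.config.file=/tempDir/OracleLog.properties"
export JAVA_OPTIONS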

Once all the above steps are completed, simply shutdown and restart your Weblogic server and monitor your JDBC debug output log for debugging traces.

Find below a snapshot of the OracleLog.properties as a reference.
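
As a rough sketch of such a configuration in java.util.logging format (the logging levels, handler and output path below are assumptions to adapt to your environment):

# OracleLog.properties - illustrative sketch (java.util.logging format)
.level=SEVERE

# Oracle JDBC driver logging level (FINE is detailed; FINEST is very verbose)
oracle.jdbc.level=FINE
oracle.jdbc.handlers=java.util.logging.FileHandler

# Output log file location and formatting
java.util.logging.FileHandler.level=FINE
java.util.logging.FileHandler.pattern=/tempDir/oracle_jdbc_debug.log
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter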

Finally, if you are not using Weblogic, you can still obtain OracleLog.properties from the Oracle JDBC driver download package, within its demo directory.