Saturday, September 29, 2012

Suggestions to increase BPEL performance

The performance improvements suggested in this post are partially based on suggestions from the following book;

If you're interested in how you can improve SOA Suite performance (and of course several other aspects of SOA Suite administration), I suggest reading it.

I've used the Oracle VM, which can be downloaded from .


I've written about some performance suggestions earlier. Recently I've been doing several measurements on different systems, specifically concerning BPEL performance, and have updated the above post to reflect those findings. In this post I will describe some new conclusions.

Test scenarios

I've created 5 test scenarios;
- a dummy scenario (empty BPEL activity)
- a synchronous locally optimized webservice call
- an asynchronous locally optimized webservice call
- a synchronous non-locally optimized webservice call
- a database call

I've tested every scenario in both a blocking invoke and a non-blocking invoke setting. I've performed every test 200 times in a parallel for-each loop and measured the time by saving the start and end time in a database table.
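The statistics were derived from those saved start and end times afterwards. Below is a minimal sketch of that calculation; this is hypothetical illustration code, not the actual test harness (which stored the timestamps in a database table):

```python
from datetime import datetime, timedelta

def elapsed_stats(runs):
    """Compute min/avg/max elapsed milliseconds from (start, end) pairs."""
    elapsed = [(end - start).total_seconds() * 1000.0 for start, end in runs]
    return {
        "min_ms": min(elapsed),
        "avg_ms": sum(elapsed) / len(elapsed),
        "max_ms": max(elapsed),
    }

# Three hypothetical runs of one scenario
t0 = datetime(2012, 9, 29, 12, 0, 0)
runs = [
    (t0, t0 + timedelta(milliseconds=120)),
    (t0, t0 + timedelta(milliseconds=150)),
    (t0, t0 + timedelta(milliseconds=180)),
]
print(elapsed_stats(runs))  # min 120 ms, avg 150 ms, max 180 ms
```

In the real tests the same aggregation was of course done with SQL over the table holding the 200 start/end rows per scenario.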

When I'm talking about 'performance improved' or 'performance worsened', I'm talking about all scenarios. I've not found settings which improved the performance for one test scenario and decreased it for another.

I've found that testing just after a server restart caused performance measurements to worsen. This is most likely due to the connection pools, which need to create their initial database connections (which is slow...). That's why I've done all tests at least 3 times. When testing on the Oracle VM, I've purged the dehydration store after every run in order to get the same initial situation for the tests.

The settings which increased the performance on my systems can of course be system specific. If other people have different experiences concerning some of the settings described in this post, it would be interesting to know. Also keep in mind that the settings I've found to improve performance on my system might worsen performance if other test scenarios are used. You should always test the settings on your own system to make sure they have the desired result and to confirm system stability is not affected. I've not tested in a clustered environment.

I have results concerning the performance improvements gained by reducing audit and log levels. Reducing audit and other logging, however, can make it harder to debug problems. Still, putting BPEL process audit logging on 'Development' in a production environment is in my opinion overkill and should be avoided, or only used for a short period in order to solve issues.
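Besides the engine-wide setting in Enterprise Manager, the audit level can also be overridden per component in composite.xml. A sketch (the component name is a placeholder):

```xml
<component name="MyBPELProcess">
  <implementation.bpel src="MyBPELProcess.bpel"/>
  <!-- Override the engine-wide audit level for this component only;
       valid values include Off, Minimal, Inherit, Development and Production -->
  <property name="bpel.config.auditLevel">Off</property>
</component>
```

This way a single noisy or performance-critical process can be quieted down without touching the audit level of everything else on the engine.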


Locally optimized webservice calls

Locally optimized calls are faster than non-locally optimized calls (non-locally optimized calls have, among other things, SOAP overhead). I've not yet looked at this behavior when working in a cluster with a load balancer. There are indications that local optimization does not work through a load balancer if the server URL differs from the load balancer URL; it would be interesting to test this and try to make it work in clustered environments, since the performance increase from using locally optimized calls is (not statistically tested) significant (about 25%).
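Whether local optimization is attempted can also be influenced per reference in composite.xml. A sketch (the reference name, port and location are placeholders):

```xml
<reference name="SomeService">
  <binding.ws port="http://example.com/#wsdl.endpoint(SomeService/SomeServicePort)"
              location="http://soahost:8001/soa-infra/services/default/SomeService?WSDL">
    <!-- false forces a real SOAP call over the wire even when caller and
         callee are co-located; omit the property to let the engine decide -->
    <property name="oracle.webservices.local.optimization">false</property>
  </binding.ws>
</reference>
```

Forcing the property to false is mostly useful for getting comparable measurements; in production you generally want local optimization whenever it works.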

Asynchronous callbacks

I've seen in my measurements that asynchronous callbacks are a lot slower (about 5 times) than synchronous interactions. This is of course to be expected, since correlation data needs to be saved and incoming messages need to be matched to the correlation data. If performance is important and processes are short-running, I'd advise avoiding asynchronous constructions.


Setting the nonBlockingInvoke property on a partnerlink caused a decrease in performance. This is something I had not expected. In my tests, however, the services were fast and did little. It could be that if slower services were used, an improvement could be measured when using the nonBlockingInvoke option. I have however not tested this.
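For reference, the property is set per partnerlink on the BPEL component in composite.xml. A sketch (component and partnerlink names are placeholders):

```xml
<component name="MyProcess">
  <implementation.bpel src="MyProcess.bpel"/>
  <!-- Spawn a separate thread for invokes over this partnerlink instead of
       blocking the instance thread; the thread handoff itself has a cost -->
  <property name="bpel.partnerLink.SlowServicePL.nonBlockingInvoke">true</property>
</component>
```

The thread handoff overhead is a plausible explanation for the decrease I measured with fast services: the invoke itself was cheaper than dispatching it to another thread.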

Tuning datasources (SOAINFRA schema)

My measurements concerning the performance changes achieved when increasing the connection pool size of the SOALocalTxDataSource and SOADataSource are not consistent. On the Oracle VM I found that increasing the connection pool size improved performance. When trying this setting on a customer system, however, performance worsened. On the Oracle VM, a dedicated XE database is installed which is set up to allow plenty of sessions. On the customer system, the database was not only installed on a (most likely physically) different server, but was also not dedicated; several SOA Suite instances used the same database for their dehydration store. At the customer, the database sessions parameter was found to be limiting in the past. An educated guess is that this caused the performance decrease on that system.

One can conclude from the above that looking only at the JDBC datasource settings on the WebLogic server is not enough to achieve consistent improvement across environments. One should also look at the database settings.
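Checking how much headroom the database actually has is straightforward; a sketch, to be run as a privileged user against the dehydration store database:

```sql
-- Compare these limits against the sum of the maximum connection pool
-- sizes of all datasources (from all instances) targeting this database
select name, value
from   v$parameter
where  name in ('sessions', 'processes');
```

If the combined maximum pool sizes approach or exceed the sessions limit, increasing pool sizes on the WebLogic side will only move the contention to the database.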

JVM settings

The following settings increased performance on all systems;


PORT_MEM_ARGS="-Xms768m -Xmx1536m"
PORT_MEM_ARGS="-Xms1536m -Xmx1536m -Xgcprio:throughput -XX:+HeapDumpOnOutOfMemoryError -XXtlasize:min=16k,preferred=128k,wasteLimit=8k"
The following settings decreased performance;

BPEL Engine tuning

I've tried and measured several settings related to the BPEL engine. There are 4 settings related to the number of threads used by the BPEL Engine;

Increasing them did not lead to performance improvements.

I've also tried playing with the AuditStorePolicy setting. This caused errors or worsened performance when combined with several of the other audit settings I've tried (for example in combination with deferred audit logging as described in the post referred to in the introduction). I got the best results leaving this set to syncSingleWrite.

Setting the QualityOfService from DirectWrite to CacheEnabled led to worsened performance.

Setting the oneWayDeliveryPolicy from async.persist to async.cache improved performance.
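The delivery policy can be changed engine-wide in Enterprise Manager, but also per composite. A sketch (the component name is a placeholder):

```xml
<component name="MyAsyncProcess">
  <implementation.bpel src="MyAsyncProcess.bpel"/>
  <!-- async.cache keeps incoming one-way messages in memory instead of
       persisting them first; faster, but messages can be lost on a crash -->
  <property name="bpel.config.oneWayDeliveryPolicy">async.cache</property>
</component>
```

Keep the reliability trade-off in mind: with async.cache, undelivered messages do not survive a server failure, so it is only appropriate where occasional message loss is acceptable.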

Asynchronous audit logging
It improves performance, but check the log files to verify that the audit store does not encounter unique key constraint violations.

I've not tried altering audit thresholds. This can also lead to performance improvements.

Process improvements

The process of using, monitoring and maintaining the SOA Suite installation is maybe even more important for increasing performance than applying technical tweaks.

Based on the topics discussed in the Administrator's handbook, there are several activities which are considered to be part of the SOA Administrator's responsibilities. This might be interesting for customers to consider. If, for example, a SOA Administrator has little background in application server maintenance or the SOA Suite specifically, there's a high probability that project developers are more knowledgeable in some areas. This often leads to a shift of responsibilities to developers.

Some of the tasks mentioned as part of the SOA Administrator's job are listed below. These usually end up being done by developers.
- automating composite management and structuring composite deployments using ant scripts
- using and managing configuration plans
- administration of services, engines and MDS
- tuning SOA Suite 11g (operating system, application server, SOA infrastructure, database connections, dehydration store)
- monitoring (sometimes 'it's up and running!' is not enough...)
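As an illustration of the first item, composite deployments can be scripted on top of the ant targets shipped with SOA Suite. A sketch (the Oracle home, server URL and composite archive are placeholders; credentials are prompted for or supplied as additional properties):

```xml
<project name="deploy-composites" default="deploy">
  <!-- ant-sca-deploy.xml ships with the SOA Suite installation -->
  <property name="oracle.home" value="/opt/oracle/middleware/Oracle_SOA1"/>
  <target name="deploy">
    <ant antfile="${oracle.home}/bin/ant-sca-deploy.xml"
         target="deploy" inheritall="false">
      <property name="serverURL" value="http://soahost:8001"/>
      <property name="sarLocation" value="deploy/sca_MyComposite_rev1.0.jar"/>
      <property name="overwrite" value="true"/>
    </ant>
  </target>
</project>
```

Wrapping the shipped targets like this makes deployments repeatable across environments, especially combined with configuration plans per environment.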

Other suggestions

The book contains several other suggestions for increasing performance. Also interesting is how the SOAINFRA database/schema can be tweaked. I have however not tested/measured this.

Since BPEL often uses services from different systems, tweaking those services can increase the performance of the process as a whole. Most noteworthy here are of course databases and JDBC settings.