We all work with different processes on a daily basis. Most of the time there are tools available to monitor whether a process ran successfully, but in many cases the performance of these processes isn't measured before they are deployed to the production environment.

We noticed this problem with some of our customers: they had processes that had to handle a large amount of traffic, so it was critical that their performance was as good as possible.

How did we get started?          

  • We identified the different steps in the process
    • Validation
    • XSLT transformations
    • Database transactions
    • Queuing
  • We created SQL scripts to measure how long these process steps took
    • Based on custom audit data from the customer
    • Based on the webMethods audit tables (service execution)
  • We analyzed the results with our custom Excel template
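As a rough sketch of that analysis step: given audit records with a start and end timestamp per process step, the average runtime per step can be computed as below. The record layout and field values here are illustrative, not the actual webMethods audit schema or the customer's data.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative audit records: (step name, start timestamp, end timestamp).
# Real data would come from the webMethods audit tables or custom audit data.
records = [
    ("validation", "2012-01-01 10:00:00.000", "2012-01-01 10:00:00.120"),
    ("validation", "2012-01-01 10:00:05.000", "2012-01-01 10:00:05.180"),
    ("xslt",       "2012-01-01 10:00:00.200", "2012-01-01 10:00:01.400"),
    ("xslt",       "2012-01-01 10:00:05.300", "2012-01-01 10:00:06.100"),
]

FMT = "%Y-%m-%d %H:%M:%S.%f"

def average_runtimes(records):
    """Return the average runtime in seconds per process step."""
    durations = defaultdict(list)
    for step, start, end in records:
        delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
        durations[step].append(delta.total_seconds())
    return {step: sum(d) / len(d) for step, d in durations.items()}

print(average_runtimes(records))
```

In practice the same aggregation was done in SQL and Excel; the idea is simply to group runtimes per step and compare averages across load levels.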

Performance Analysis:

We wanted to see whether there were bottlenecks in our process and whether the load had any effect on it.
We noticed that all the steps ran a bit longer on a heavily loaded server than on a dedicated one, which seemed normal. Abnormal behavior would be a server starting to freeze: the more instances that ran, the longer each one would take. The curve you can see below is a normal curve; the average runtime does not keep growing over time.
We also noticed that the queuing step did not follow a normal curve. After some debugging we found that, although the process was asynchronous, the webMethods triggers were defined as serial. The processing curve improved after changing these triggers to concurrent processing.
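The effect of that trigger setting can be illustrated with a minimal simulation (plain Python threads standing in for trigger threads, with a sleep standing in for the trigger service): a serial trigger processes queued documents one at a time, so total time grows linearly with the queue depth, while concurrent processing handles them in parallel.

```python
import threading
import time

def process_document(doc):
    time.sleep(0.05)  # stand-in for one trigger service invocation

def serial(docs):
    # Serial trigger: documents are processed one at a time, in order.
    start = time.perf_counter()
    for d in docs:
        process_document(d)
    return time.perf_counter() - start

def concurrent(docs):
    # Concurrent trigger: several documents are processed in parallel.
    start = time.perf_counter()
    threads = [threading.Thread(target=process_document, args=(d,)) for d in docs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

docs = list(range(10))
print(f"serial:     {serial(docs):.2f}s")      # roughly 10 x 0.05s
print(f"concurrent: {concurrent(docs):.2f}s")  # roughly 0.05s
```

This is only a sketch of the queuing behavior; in webMethods the choice is a trigger configuration, and concurrent processing is only safe when document ordering does not matter.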
Author: Pieter Van de Broeck