This is the second article in a series about the internals and performance of concurrent managers. In this post, we’ll take a look at three important settings that affect the performance of the concurrent managers: the number of processes, “sleep seconds”, and “cache size”. This article might be a bit on the theoretical side, but it should provide a good understanding of how these settings actually affect the behavior and performance of concurrent managers. Most of the statements in this article build on information from my previous post, The Internal Workflow of e-Business Suite Concurrent Manager Process, so it may be helpful to take a look at it before continuing with this one.
Life cycle of a Concurrent Request
The interesting thing about tuning concurrent managers is that we don’t tune a particular query or a running process; we actually tune the pending time of concurrent requests. The goal of the tuning is to make sure concurrent requests start executing soon enough after the time they were scheduled for. Let’s take a look at the life cycle of a concurrent request:
Based on the diagram above, the pending time of a request is the interval between the time the request was scheduled to start and the time it actually started. This time can be split into two parts:
- Pending for Conflict Resolution Manager (CRM) – Here the CRM checks the incompatibility rules effective for the pending concurrent request against other running requests. The CRM allows the request to execute only when all incompatible requests have completed.
- Pending for Concurrent Manager (CM) – This is the time spent waiting for an available concurrent manager process. It also includes the time the CM process takes to fetch the request from the FND_CONCURRENT_REQUESTS table and start executing it. “Pending for CM” is the interval that can be tuned by altering the number of manager processes and the “sleep seconds” and “cache size” settings. Read more…
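The two-part breakdown above can be sketched as a toy calculation. The timestamps and the `pending_breakdown` helper below are purely illustrative, not part of any EBS API:

```python
from datetime import datetime

def pending_breakdown(requested_start, crm_release, actual_start):
    """Split total pending time into its two phases.

    requested_start -- when the request was scheduled to start
    crm_release     -- when the CRM found no more incompatible running requests
    actual_start    -- when a CM process actually started executing the request
    """
    crm_wait = (crm_release - requested_start).total_seconds()  # Pending for CRM
    cm_wait = (actual_start - crm_release).total_seconds()      # Pending for CM
    return crm_wait, cm_wait

# A request scheduled for 10:00:00, cleared by the CRM at 10:00:30,
# and picked up by a manager process at 10:02:00:
crm, cm = pending_breakdown(datetime(2012, 1, 1, 10, 0, 0),
                            datetime(2012, 1, 1, 10, 0, 30),
                            datetime(2012, 1, 1, 10, 2, 0))
# crm == 30.0 seconds, cm == 90.0 seconds
```

Only the second component, the 90 seconds of “Pending for CM” in this sketch, is what the three settings discussed here can influence.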
Concurrent processing is one of the key elements of any e-Business Suite system. It provides scheduling and queueing functionality for background jobs, and it’s used by most of the application modules. As so many things depend on concurrent processing, it’s important to make sure the configuration is tuned for your requirements and hardware specification.
This is the first article in a series about the performance of concurrent processing. We’ll take a closer look at the internals of concurrent managers, the settings that affect their performance, and the ways of diagnosing performance and configuration issues. Today we’ll start with an overview of the internal workflow of a concurrent manager process. Enjoy the read!
As we all know, proper use of bind variables in SQL statements is a must to make transaction processing applications scalable. So how do we find the queries that don’t use bind variables and have to be hard-parsed each time they are executed? There are a number of ways, but this article is all about the most effective way I know. If you have a better one, please let me know! Read more…
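To illustrate the underlying idea (this is not the method from the full article, just a hedged sketch): statements that differ only in literal values can be collapsed into one signature by normalizing the literals away and grouping, similar in spirit to what the FORCE_MATCHING_SIGNATURE column of V$SQL does on the database side:

```python
import re
from collections import Counter

def normalize(sql):
    """Replace literals with placeholders so statements that differ
    only in literal values collapse to one signature."""
    sql = re.sub(r"'[^']*'", ":s", sql)   # string literals -> :s
    sql = re.sub(r"\b\d+\b", ":n", sql)   # numeric literals -> :n
    return re.sub(r"\s+", " ", sql).strip().lower()

# Three cursors; the first two differ only by a literal value.
statements = [
    "SELECT * FROM emp WHERE empno = 7369",
    "SELECT * FROM emp WHERE empno = 7499",
    "SELECT * FROM emp WHERE ename = 'KING'",
]
signatures = Counter(normalize(s) for s in statements)
# A high count for one signature points at a statement executed
# repeatedly without bind variables.
```

In this toy run, `select * from emp where empno = :n` ends up with a count of 2, flagging it as a bind-variable candidate.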
One of the hot topics at the UKOUG 2011 Technology and E-Business Suite Conference last December was the upcoming release of Oracle e-Business Suite R12.2. The new release will bring us lots of new features, usability improvements, and new versions of technology stack components (Oracle Database 11g R2 and Oracle Fusion Middleware 11g R1 as the application server), but the most important and impressive new feature, of course, will be online patching. Online patching is supposed to change the game completely. All owners of E-Business Suite environments know that patching requires downtime. Although it can be reduced with various techniques (e.g. a staged APPL_TOP), some downtime is still required to apply a number of changes. Online patching will not eliminate downtime completely, but it will reduce it significantly by using “Edition-Based Redefinition” (EBR) at the database level and a secondary applications file system. In fact, all patching activity will be an online operation; downtime will be required only to switch from one version to another. Read more…
“Hello World!” I guess that’s the most appropriate way to start my first blog post under the pythian.com domain. I’m going to start slow, but hopefully I will pick up speed and have at least a couple of posts each month to share with you. I’ve been blogging at http://appsdbalife.wordpress.com until now, and I haven’t decided yet what the future will be for my previous blog; I wouldn’t like it to become some kind of zombie page that’s been long dead but still wanders around the internet.
Enough intros, let’s get down to business! I hope this blog post doesn’t get lost in the huge number of posts related to OOW 2011.
A few days ago I was asked to estimate how much space needed to be added to an ASM diskgroup to handle the database growth for one year without the need to add disks again. Obviously, to estimate the disk space to be added, I had to know what the DB size would be one year from now. It looked like an easy task, as I knew we were recording the size of the database every day. Read more…
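With a daily size history available, one simple way to project a year ahead is a least-squares linear fit. The `forecast_size` helper below is an illustrative sketch under the assumption of roughly linear growth, which real databases rarely follow exactly:

```python
def forecast_size(history, days_ahead):
    """Least-squares linear fit over (day_number, size_gb) samples,
    extrapolated days_ahead past the last sample."""
    n = len(history)
    xs = [d for d, _ in history]
    ys = [s for _, s in history]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (xs[-1] + days_ahead)

# Growing 1 GB/day from 100 GB: a year from the last sample
# the fit projects 102 + 365 = 467 GB.
projected = forecast_size([(0, 100.0), (1, 101.0), (2, 102.0)], 365)
```

The space to add to the diskgroup is then the projected size minus the current size, plus whatever headroom and redundancy overhead the diskgroup configuration demands.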
A while ago I worked on a performance troubleshooting case where frequent, short-lived degradations of IO performance on NetApp storage were suspected to be the root cause. The problem was getting proof, as looking at the averages of IO service time was not alarming enough. I decided to write a tool that could be used to monitor wait times for any DB wait event in short intervals, e.g. so I could get measurements of db file sequential/scattered read performance each second. I thought for a while and figured out the requirements:
- it should be lightweight and easy to set up;
- the results have to be visible in real time;
- it should be possible to spool results to a file;
- it should allow monitoring any wait event;
- it should allow defining the interval length between measurement points.
In the end I came up with a solution: a pipelined function, with the interval size and the name of the wait event as parameters, that “queries” the performance metrics using a simple select statement, making it possible to spool the results into a file and to see them in real time. You can take a look at it in the video below. Continue reading if you’re interested in seeing the source code and some explanations of the key implementation tricks that made this possible.
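The actual implementation is a PL/SQL pipelined function; as a language-neutral sketch of the sampling logic, here is a Python generator that turns cumulative counters (like the totals a row of V$SYSTEM_EVENT exposes) into per-interval deltas. The `sample` and `clock` parameters are illustrative assumptions, not part of the real tool:

```python
import time

def wait_event_deltas(sample, interval=1.0, clock=time.sleep):
    """Yield (delta_waits, delta_time_waited) once per interval.

    sample -- callable returning the cumulative (total_waits, time_waited)
              for one wait event, mirroring a V$SYSTEM_EVENT row
    clock  -- sleep function, injectable so the logic can be tested
    """
    prev_waits, prev_waited = sample()
    while True:
        clock(interval)
        waits, waited = sample()
        # The difference between two snapshots is the activity
        # that happened during this interval only.
        yield waits - prev_waits, waited - prev_waited
        prev_waits, prev_waited = waits, waited

# Example with canned snapshots instead of a live database:
snapshots = iter([(10, 100), (15, 130), (18, 160)])
deltas = wait_event_deltas(lambda: next(snapshots), clock=lambda s: None)
first = next(deltas)   # (5, 30): 5 waits, 30 units waited this interval
second = next(deltas)  # (3, 30)
```

Yielding rows one interval at a time is exactly what makes a pipelined table function a good fit on the PL/SQL side: the client sees each row as it is produced rather than waiting for the full result set.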
Have you ever wondered why a table segment consumes as much space as it does, and how one can tell whether the space allocated to each segment is actually used for storing data and is not mostly empty? Those questions bothered me from time to time, and I was looking for a method that would not require licensing any packs (like the Diagnostic Pack for Segment Advisor, because it requires AWR) and would not do lots of IO by scanning the segments. In the end I found a simple solution for this…