Too Many PDB’s Part 2

Now that our environment is set up with 20 PDBs (27, actually, if you take into account the pre-existing PDBs), we can run through some tests to see how the system behaves.

The initial tests won’t be very scientific. I’ll basically log into Cloud Control, navigate into the instance on node 1 only, and go to various random pages. I’ll then run an AWR report, look at the performance pages from DB Express 12c, and see what comes up.
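For anyone wanting to reproduce this, the AWR report can be driven manually from SQL*Plus with the standard package and script that ship with the database. This is a minimal sketch of the snapshot-plus-report flow, not the exact commands used for these tests:

```sql
-- Take a manual AWR snapshot before the test window
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- ... run the test workload ...

-- Take a closing snapshot after the test window
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Generate the report; the script prompts for the snapshot range and format
@?/rdbms/admin/awrrpt.sql
```

The two snapshots bracket the test so the report covers only the interval of interest rather than the default hourly snapshots.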

So here we go. Please note that this system has no load other than myself and Cloud Control. My initial impression from navigating around Cloud Control with 27 PDBs is that certain pages are slow, but it is usable overall.

Here’s the activity via DB Express:


Here’s the PDB activity:


And CPU….


And here’s a look at my custom Perl tool, which shows a “top”-like view of database activity:
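The tool itself isn’t shown here, but the core of any “top”-like view is just a polling query over the active sessions. A rough sketch of the kind of query such a tool might run each refresh, using only the standard gv$session view (the grouping and column choices are illustrative, not the tool’s actual query):

```sql
-- Sample currently active user sessions across both RAC nodes,
-- grouped by container and activity -- a rough "top" for the CDB.
SELECT s.con_id,
       s.inst_id,
       DECODE(s.state, 'WAITING', s.event, 'ON CPU') AS activity,
       COUNT(*)                                      AS sessions
FROM   gv$session s
WHERE  s.status = 'ACTIVE'
AND    s.type   = 'USER'
GROUP  BY s.con_id, s.inst_id,
          DECODE(s.state, 'WAITING', s.event, 'ON CPU')
ORDER  BY sessions DESC;
```

Run in a loop every few seconds, this gives a live per-PDB breakdown of where the activity is going.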


From the graphs above we can clearly see a lot of horsepower being spent on the overhead of managing 27 PDBs via Cloud Control. The sampling rate of the CPU details masks some of the spikes to 90%, which I found quite surprising.

There is also a bit of I/O in these tests, which may indicate an issue with our cheaper NFS storage tier for a couple of the existing PDBs.

Here are a few interesting snippets from the AWR report:

So the initial test run has highlighted a number of issues that we need to investigate. Findings include:

  • Queries over the cdb_* views are using a DOP (degree of parallelism) of up to 16 on a single node.
  • CPU usage sat close to 50% for a 10-minute block, with spikes of up to 90%.
  • Cloud Control was initially very slow and almost unusable, but got a little better once things were cached(?).
  • The following sql_id was a poor performer and consumed a fair amount of I/O.
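To follow up on the parallel-query finding, something along these lines can confirm which statements against the cdb_* views are using parallel servers. This is a hedged sketch over the standard v$sql view; the filter on the SQL text is a crude assumption and may need tightening:

```sql
-- Find recent statements touching cdb_* views that used parallel execution,
-- along with how much work they did.
SELECT sql_id,
       px_servers_executions,
       executions,
       buffer_gets,
       disk_reads
FROM   v$sql
WHERE  UPPER(sql_text) LIKE '%CDB\_%' ESCAPE '\'
AND    px_servers_executions > 0
ORDER  BY px_servers_executions DESC;
```

Sorting by disk_reads instead would surface the poor-performing, I/O-heavy sql_id mentioned above.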

So there we have it. Initial tests done, and lots of little things to investigate.
