
Audio semi-Visual Presentation on Clustering Data in Oracle November 12, 2014

Posted by mwidlake in performance, Presenting, SQL.

I suppose it had to happen eventually but one of my presentations has ended up on YouTube. It’s a recent presentation I did for the Oracle Midlands user group in September.

The topic is (as the title of this blog post hints at!) Boosting select performance by clustering data. The video consists of the slides I presented, changing as the presentation progresses, with my audio over the top. It goes on for a bit, close to an hour, but you could watch a section and then go and do something else before watching a bit more.

I have to say, it is very odd hearing my voice (and the slight touch of the “brummie” {Birmingham} accent coming through) and I do wince at the places where I blather or say something slightly wrong or make a joke that involved a visual element that is lost. Oh well, at least you don’t see me wandering around and jumping up, literally, to point out bits on the slides.

I’m glad to say I will be repeating a slightly more polished version of the presentation at this year’s UKOUG Tech14 conference in December. I was a bit under the weather back on September the 16th, due to having just returned to the Working Life, so with this perfect record of what I did then I should be able to make the next shot at it a cracker… maybe.

On the topic of Oracle Midlands, I like this user group, which is run by Mike Mckay Dirden; the meetings consist of evening presentations in Birmingham with a bit of support from Redgate. This includes half-time samosas to keep you going. The next meeting is described here and is on Tuesday 25th November. It has two presentations by my friend Bjoern Rost, who is an Oracle Ace Director {gasps of appreciation from the audience} and a consummate presenter. I wish I could make it there as I would like to share a pint with Bjoern (well, he’ll have wine or a cocktail I suspect, as he is not a beer fan) as well as some of my other friends up in that part of the country.

Friday Philosophy – Is the CBO becoming way too complex? October 19, 2012

Posted by mwidlake in Friday Philosophy, performance.

I was at the SIOUG annual conference in Slovenia this week (and a very good conference it was too) and I was watching a presentation by Christian Antognini about how the CBO learns from its mistakes. This was of course mostly about adaptive cursor sharing and cardinality feedback. Chris was also able to share a few tid-bits about 12c enhancements in this area. I can’t go into details, but basically it looks like the CBO is going to not only enhance those two features but also gain a new one where the CBO can change the plan on the fly, as the same query progresses.

As I watched the presentation I found I was in two minds about this. Part of me was thinking “Yes, these features do help the CBO realise it has chosen a poor plan and improve things for the next execution – and it will get better at it”. The other part was thinking “Oh dear, yet another source of confusion and of the performance of queries changing when ‘nothing has changed’”.

It also solidified an idea I have for a new presentation, about how what you see in the execution plan may well not be what is actually executed, i.e. there are optional bits in there that do not get run at execution time, depending on the results of prior steps.

But I digress. What I ended up wondering at the end of Chris’s talk was this. Is the CBO getting just too complex? It is all very good that extra abilities are being added so that there is a better chance of a good plan being chosen in the first place, and the various ways we can control the choice of the plan grow and grow. But we are long past the point when you could write even a simple SQL statement and know what the plan will be before you explain it – let alone run it, check the plan and then run it again a few minutes later and be sure the plan will be the same.

Why does this bother me? For three reasons:

1) Performance will not be stable, so the user experience will not be consistent – and so they will be unhappy. Users will generally be happy if they run a report and it takes one minute, if it takes one minute each time. If it takes one minute and then it takes 5 seconds, they want it to always run in 5 seconds, else they feel cheated or that the system is broken. In fact, I am pretty sure a user will be happier if a report always takes 2 minutes rather than it taking 5 seconds or 40 seconds but them never knowing which! (So long, that is, as they never, ever see it run in less than 2 minutes).

2) If performance of a particular SQL statement is absolutely crucial, I need to know how the CBO can and probably will satisfy it so that I can work that into my solution. Otherwise I am reduced to trying out informed options (or maybe random changes :-) ) in order to find the optimal solution – and I cannot be sure that the plan won’t suddenly swap later unless I fix the plan. OK, it might swap to be faster, but I only notice when it swaps to be a lot slower.

3) I’ve said this before, but I am starting to really wonder how anyone new to this starts understanding Oracle performance tuning any more. Reading blogs and books about how Oracle works and what impacts the CBO teaches you a lot about what is possible, but it only makes sense when you do little tests and proofs of concept, i.e. you actually run simple examples and see what happens. But with all these options, considerations and now on-the-fly alterations to plans by the CBO, it is very confusing. Even I, after 20 years of doing performance tuning on and off, am constantly finding myself looking at relatively simple SQL and having to work out why the plan is as it is and satisfying myself that it has not changed. I started with the CBO when it was a lot simpler and I’ve been introduced to the complexities gradually, as they have been introduced, so learning about it has been easier for me.
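On the “fix the plan” point in 2): since 11g one option for that is SQL Plan Management. The sketch below pins the plan currently in the cursor cache as an accepted baseline; the sql_id and plan_hash_value are made-up examples, you would look up your own in V$SQL first.

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- load the statement's current plan from the cursor cache as a baseline
  -- (sql_id and plan_hash_value below are illustrative only)
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id          => '4ztz048yfq32s',
               plan_hash_value => 1188403912);
  DBMS_OUTPUT.PUT_LINE('plans loaded: '||l_plans);
END;
/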

Perhaps I should not worry about this. I mean, the CBO is getting better at doing what it does, that is good. It is getting hard for those younger and smarter than me to learn about it, so my career is protected. And there is more stuff to talk about so I can keep going to conferences and talking about them.

And with that happy idea, I’m off to start the weekend. Where was that corkscrew?

DBMS_APPLICATION_INFO for Instrumentation September 3, 2012

Posted by mwidlake in development, Instrumentation, performance.

I just wanted to put up a post about DBMS_APPLICATION_INFO. This is a fantastic little built-in PL/SQL package that Oracle has provided since Oracle 8 to allow you to instrument your code, i.e. record what it is doing. I’m a big fan of DBMS_APPLICATION_INFO and have used it several times to help identify where in an application time is being spent and how that pattern of time has altered.

Some PL/SQL developers use it and some don’t. It seems to me that its use comes down to where you work, as most PL/SQL developers are aware of it – but not everyone uses it (a friend of mine made the comment recently that “all good PL/SQL developers use it”. I can understand his point but don’t 100% agree).

It is incredibly easy to use. You use the procedures SET_MODULE(module_name,action_name), SET_ACTION(action_name) and SET_CLIENT_INFO(client_info) to set the values of the corresponding columns in V$SESSION for your current session. This is done in memory, there is no inserting or updating of any table rows, so it is incredibly quick and light-weight. The below shows a subset of V$SESSION including the columns that get set:

desc v$session
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 SADDR                                              RAW(8)
 SID                                                NUMBER
 SERIAL#                                            NUMBER
 AUDSID                                             NUMBER
 PADDR                                              RAW(8)
 USER#                                              NUMBER
 USERNAME                                           VARCHAR2(30)
 COMMAND                                            NUMBER
 ...
 MODULE                                             VARCHAR2(64)
 MODULE_HASH                                        NUMBER
 ACTION                                             VARCHAR2(64)
 ACTION_HASH                                        NUMBER
 CLIENT_INFO                                        VARCHAR2(64)
 ...

Just something odd to note. MODULE, ACTION and CLIENT_INFO are shown as VARCHAR2(64) but if you check the documentation you will see that:

MODULE is limited to 48 bytes
ACTION is limited to 32 bytes
CLIENT_INFO can be set to the whole 64 bytes

I have no idea why MODULE and ACTION are limited in this way but it might be historic, backward compatibility with prior versions.
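A quick way to see this for yourself is to pass over-long values in and read them back. On the versions I have tried, the values simply appear truncated to those limits rather than raising an error – but do test on your own version:

DECLARE
  v_module VARCHAR2(64);
  v_action VARCHAR2(64);
BEGIN
  -- pass 60-character values and see what survives
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => RPAD('M',60,'X')
                                  ,action_name => RPAD('A',60,'X'));
  DBMS_APPLICATION_INFO.READ_MODULE(v_module,v_action);
  -- I would expect lengths of 48 and 32 respectively
  DBMS_OUTPUT.PUT_LINE('module length '||LENGTH(v_module)
                     ||', action length '||LENGTH(v_action));
END;
/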

As a quick example of their use:

test_11_2> --demo_dai.sql
test_11_2> -- quick demo of dbms_application_info
test_11_2> COL sid form 9999
test_11_2> col username form a12
test_11_2> col module form a18
test_11_2> col action form a22
test_11_2> col client_info form a15
test_11_2> -- set module and action
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_MODULE(MODULE_NAME=>'SALES_LOAD',ACTION_NAME=>'VALIDATE_ALL_PRICES')
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

SID USERNAME MODULE ACTION CLIENT_INFO
----- ------------ ------------------ ---------------------- ---------------
650 ERIC SALES_LOAD VALIDATE_ALL_PRICES
test_11_2> --
test_11_2> -- Update the action within a module
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_ACTION(ACTION_NAME=>'Update all Prices')
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

SID USERNAME MODULE ACTION CLIENT_INFO
----- ------------ ------------------ ---------------------- ---------------
650 ERIC SALES_LOAD Update all Prices

test_11_2> -- clear them
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_MODULE(null,null)
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

SID USERNAME MODULE ACTION CLIENT_INFO
----- ------------ ------------------ ---------------------- ---------------
650 ERIC

test_11_2> -- you can set just an action
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_action(ACTION_NAME=>'orphan process')
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

SID USERNAME MODULE ACTION CLIENT_INFO
----- ------------ ------------------ ---------------------- ---------------
650 ERIC orphan process

test_11_2> -- set them to something sensible
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_MODULE(MODULE_NAME=>'RETURNS_LOAD',ACTION_NAME=>'MATCH_TO_SALE')
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

SID USERNAME MODULE ACTION CLIENT_INFO
----- ------------ ------------------ ---------------------- ---------------
650 ERIC RETURNS_LOAD MATCH_TO_SALE

test_11_2> -- client_info can be longer
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_client_info(client_info=>'entered this module at '||sysdate)
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

SID USERNAME MODULE ACTION CLIENT_INFO
----- ------------ --------------- --------------- ----------------------------------------
650 ERIC RETURNS_LOAD MATCH_TO_SALE entered this module at 03-SEP-2012 13:07

The intention is that you set the MODULE as you enter a functional chunk of your application (be it a PL/SQL package or a chunk of JAVA, whatever {but be aware of the stateless nature of systems using middle tiers and pooled connections}) and then update the ACTION as you progress. You can set an action with no module, as demonstrated above, but I would recommend against it. On that topic I would make the following recommendations:

  • Set the MODULE at the start, e.g. as you enter a PL/SQL package’s main module.
  • Set the MODULE before you do all the validation – you may wish to see if that validation is as quick as you think.
  • Use SET_ACTION to update the action prudently, i.e. when the activity of the code alters, but not all the time. It will become clearer with use, but what you are after is to be able to judge the relative run durations of parts of the application. Too much detail can mask the overall picture. If you can alter the code you can always add more instrumentation.
  • Clear the MODULE and ACTION explicitly as you exit a package. The activity is lightweight and you do not want any code that lacks this instrumentation to inherit values from calling code.
  • Do not set an ACTION without having set the MODULE earlier in the code.
  • Keep the text of these values short and specific.
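Putting those recommendations together, the shape of the thing is something like the below. This is a sketch only – the procedure name and the commented-out steps are made up:

CREATE OR REPLACE PROCEDURE daily_sales_load IS
BEGIN
  -- flag the module and first activity as we enter
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'SALES_LOAD'
                                  ,action_name => 'VALIDATE');
  -- ...validation work goes here...
  DBMS_APPLICATION_INFO.SET_ACTION(action_name => 'LOAD_PRICES');
  -- ...the main load goes here...
  DBMS_APPLICATION_INFO.SET_ACTION(action_name => 'MATCH_TO_SALE');
  -- ...the matching step goes here...
  -- clear explicitly on the way out so uninstrumented callers
  -- do not inherit our values
  DBMS_APPLICATION_INFO.SET_MODULE(null,null);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_APPLICATION_INFO.SET_MODULE(null,null);  -- clear on error paths too
    RAISE;
END daily_sales_load;
/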

You do not have to look at V$SESSION to see these values, which is good as you may lack the privs needed to do so. Since Oracle 9 there have also been procedures to check what the values are set to, e.g.:

test_11_2> BEGIN
2 DECLARE
3 V_MODULE VARCHAR2 (64);
4 V_ACTION VARCHAR2 (64);
5 begin
6 dbms_application_info.READ_MODULE(V_MODULE,V_ACTION);
7 DBMS_OUTPUT.PUT_LINE('Module IS '||V_MODULE||' Action IS '||V_ACTION);
8 END;
9 END;
10 /

Module IS RETURNS_LOAD Action IS MATCH_TO_SALE
test_11_2>

So as you can see, these values can be set, they can be accessed via the data dictionary or the package itself and, if you believe my wild claims, they are very lightweight to use. However, these values really come into their own with e.g. OEM’s performance screens. I’ll leave examples of that to a second post. Here I just wanted to cover how easy it is to set and get the information.

That does lead me on to what I think is the main reason that DBMS_APPLICATION_INFO is not as widely used as it could {and should} be. Developers often lack access to e.g. OEM and so do not see the full benefit of using it. Also, developers tend to be {note the get-out-clause} more concerned with making the code work and less with monitoring it after it is released. DBA-types tend to be more concerned with monitoring it. Thus the DBA types would want it in the code and the developers would not want to spend the time coding it in. We are all one big team though, so some negotiation and beer-buying (or management enforcement) might well resolve that one.
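As one example of what the DBA types get out of it: since 10g, DBMS_MONITOR can switch on SQL trace for every session running a given module and action, with no code changes at all. The service name below is a placeholder for your own (see DBA_SERVICES):

BEGIN
  -- trace any session executing this module/action combination
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
     service_name => 'MY_SERVICE'    -- placeholder
    ,module_name  => 'SALES_LOAD'
    ,action_name  => 'MATCH_TO_SALE'
    ,waits        => TRUE
    ,binds        => FALSE);
END;
/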

Exclusion of Unioned SQL in Views – Followup August 20, 2012

Posted by mwidlake in database design, performance, SQL.

Last week I put up a post about how Oracle can filter out sections of a union view.

Within the comments I put up another example where the CBO did not filter out all but one of the Union views despite my replicating the exact WHERE clause of one of the unioned statements. Jonathan Lewis posted a followup to say “Oracle can be very touchy about how it plays this game” and made a prediction of how the CBO would handle a slightly different scenario.

This was the extra scenario and I include brief details on creating the unioned view too. NB all on Oracle 11.2.0.2. {non-Exadata :-) }

  1  create table mdw_source
  2  tablespace users
  3  as select * from dba_objects

  1  create or replace view mdw_v1 as
  2  select
  3    '1'              src_type
  4   ,ms.OWNER ,ms.OBJECT_NAME,ms.SUBOBJECT_NAME ,ms.CREATED
  5  from mdw_source ms
  6  where ms.object_name like 'A%'
  7  union all
  8  select
  9    '2'              src_type
 10   ,ms.OWNER ,ms.OBJECT_NAME,ms.SUBOBJECT_NAME ,ms.CREATED
 11  from mdw_source ms
 12  where ms.object_name like 'B%'
 13  UNION ALL
 14  select
 15    '3'              src_type
 16   ,ms.OWNER ,ms.OBJECT_NAME,ms.SUBOBJECT_NAME ,ms.CREATED
 17  from mdw_source ms
 18  where ms.object_name like 'C%'

  1  select count(created)
  2  from mdw_v1 mv
  3  where mv.object_name like 'C%'

Execution Plan
----------------------------------------------------------
-----------------------------------------------------------------------------------
| Id  | Operation            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |            |     1 |    75 |   551   (1)| 00:00:07 |
|   1 |  SORT AGGREGATE      |            |     1 |    75 |            |          |
|   2 |   VIEW               | MDW_V1     |   824 | 61800 |   551   (1)| 00:00:07 |
|   3 |    UNION-ALL         |            |       |       |            |          |
|*  4 |     TABLE ACCESS FULL| MDW_SOURCE |     1 |    34 |   184   (1)| 00:00:03 |
|*  5 |     TABLE ACCESS FULL| MDW_SOURCE |     2 |    68 |   184   (1)| 00:00:03 |
|*  6 |     TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("MS"."OBJECT_NAME" LIKE 'A%' AND "MS"."OBJECT_NAME" LIKE 'C%')
   5 - filter("MS"."OBJECT_NAME" LIKE 'B%' AND "MS"."OBJECT_NAME" LIKE 'C%')
   6 - filter("MS"."OBJECT_NAME" LIKE 'C%')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
       2417  consistent gets

So, as you can see, despite me providing the WHERE clause in my final select as "where mv.object_name like 'C%'", and the WHERE clauses of the unioned view all being similar with one of them being "ms.object_name like 'C%'", the CBO fails to realise it can exclude all but one of the unioned SQL statements – despite the mutually exclusive filter predicates:

   4 - filter("MS"."OBJECT_NAME" LIKE 'A%' AND "MS"."OBJECT_NAME" LIKE 'C%')
   5 - filter("MS"."OBJECT_NAME" LIKE 'B%' AND "MS"."OBJECT_NAME" LIKE 'C%')

I have to confess, in light of recent experience I would have wondered if these filter predicates would prevent the other two SQL statements being fired at execution time but the number of consistent gets confirms that it is not – 2417 compared to 805 or so for when the query exclusion occurs. You can check back to the prior post to verify that.

So, let’s do the tests Jonathan suggested. He suggested altering the view to use equality predicates so I have recreated the view as such:

TEST_DB> create or replace view mdw_v1 as
  2  select
  3    '1'              src_type
  4    ,ms.OWNER ,ms.OBJECT_NAME ,ms.SUBOBJECT_NAME ,ms.CREATED
  5  from mdw_source ms
  6  where substr(ms.object_name,1,1) ='A'
  7  union all
  8  select
  9    '2'              src_type
 10   ,ms.OWNER ,ms.OBJECT_NAME ,ms.SUBOBJECT_NAME ,ms.CREATED
 11  from mdw_source ms
 12  where substr(ms.object_name,1,1) ='B'
 13  UNION ALL
 14  select
 15    '3'              src_type
 16   ,ms.OWNER ,ms.OBJECT_NAME ,ms.SUBOBJECT_NAME ,ms.CREATED
 17  from mdw_source ms
 18  where substr(ms.object_name,1,1) ='C'

View created.

TEST_DB> set autotrace on
TEST_DB> --
TEST_DB> -- below will do a select of a non-indexed column across the view
TEST_DB> select count(created) from mdw_v1

COUNT(CREATED)
--------------
          1288

Execution Plan
-----------------------------------------------------------------------------------
| Id  | Operation            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |            |     1 |     9 |   552   (1)| 00:00:07 |
|   1 |  SORT AGGREGATE      |            |     1 |     9 |            |          |
|   2 |   VIEW               | MDW_V1     |  1677 | 15093 |   552   (1)| 00:00:07 |
|   3 |    UNION-ALL         |            |       |       |            |          |
|*  4 |     TABLE ACCESS FULL| MDW_SOURCE |   559 | 19006 |   184   (1)| 00:00:03 |
|*  5 |     TABLE ACCESS FULL| MDW_SOURCE |   559 | 19006 |   184   (1)| 00:00:03 |
|*  6 |     TABLE ACCESS FULL| MDW_SOURCE |   559 | 19006 |   184   (1)| 00:00:03 |
-----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(SUBSTR("MS"."OBJECT_NAME",1,1)='A')
   5 - filter(SUBSTR("MS"."OBJECT_NAME",1,1)='B')
   6 - filter(SUBSTR("MS"."OBJECT_NAME",1,1)='C')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
       2417  consistent gets


TEST_DB> -- below will do a select of a non-indexed column
TEST_DB> -- specific to one of the Union's in the view via the src_type
TEST_DB> select count(created)
  2  from mdw_v1
  3  where src_type='3'

COUNT(CREATED)
--------------
           129

Execution Plan
------------------------------------------------------------------------------------
| Id  | Operation             | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |            |     1 |    12 |   184   (1)| 00:00:03 |
|   1 |  SORT AGGREGATE       |            |     1 |    12 |            |          |
|   2 |   VIEW                | MDW_V1     |   561 |  6732 |   184   (1)| 00:00:03 |
|   3 |    UNION-ALL          |            |       |       |            |          |
|*  4 |     FILTER            |            |       |       |            |          |
|*  5 |      TABLE ACCESS FULL| MDW_SOURCE |   559 | 19006 |   184   (1)| 00:00:03 |
|*  6 |     FILTER            |            |       |       |            |          |
|*  7 |      TABLE ACCESS FULL| MDW_SOURCE |   559 | 19006 |   184   (1)| 00:00:03 |
|*  8 |     TABLE ACCESS FULL | MDW_SOURCE |   559 | 19006 |   184   (1)| 00:00:03 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(NULL IS NOT NULL)
   5 - filter(SUBSTR("MS"."OBJECT_NAME",1,1)='A')
   6 - filter(NULL IS NOT NULL)
   7 - filter(SUBSTR("MS"."OBJECT_NAME",1,1)='B')
   8 - filter(SUBSTR("MS"."OBJECT_NAME",1,1)='C')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
        807  consistent gets


TEST_DB> -- below will do a select of a non-indexed column
TEST_DB> -- specific to one of the Union's in the view via matching the where clause
TEST_DB> select count(created)
  2  from mdw_v1
  3  where substr(object_name,1,1) ='C'

COUNT(CREATED)
--------------
           129

Execution Plan
------------------------------------------------------------------------------------
| Id  | Operation             | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |            |     1 |    75 |   184   (1)| 00:00:03 |
|   1 |  SORT AGGREGATE       |            |     1 |    75 |            |          |
|   2 |   VIEW                | MDW_V1     |   561 | 42075 |   184   (1)| 00:00:03 |
|   3 |    UNION-ALL          |            |       |       |            |          |
|*  4 |     FILTER            |            |       |       |            |          |
|*  5 |      TABLE ACCESS FULL| MDW_SOURCE |   559 | 19006 |   184   (1)| 00:00:03 |
|*  6 |     FILTER            |            |       |       |            |          |
|*  7 |      TABLE ACCESS FULL| MDW_SOURCE |   559 | 19006 |   184   (1)| 00:00:03 |
|*  8 |     TABLE ACCESS FULL | MDW_SOURCE |   559 | 19006 |   184   (1)| 00:00:03 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(NULL IS NOT NULL)
   5 - filter(SUBSTR("MS"."OBJECT_NAME",1,1)='A')
   6 - filter(NULL IS NOT NULL)
   7 - filter(SUBSTR("MS"."OBJECT_NAME",1,1)='B')
   8 - filter(SUBSTR("MS"."OBJECT_NAME",1,1)='C')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
        807  consistent gets


TEST_DB> -- below will do a select of a non-indexed column
TEST_DB> -- specific to one of the Union's in the view via NOT matching the where clause
TEST_DB> -- but maybe logical exclusion can be managed
TEST_DB> select count(created)
  2  from mdw_v1
  3  where object_name ='C'

COUNT(CREATED)
--------------
             0

Execution Plan
-----------------------------------------------------------------------------------
| Id  | Operation            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |            |     1 |    75 |   551   (1)| 00:00:07 |
|   1 |  SORT AGGREGATE      |            |     1 |    75 |            |          |
|   2 |   VIEW               | MDW_V1     |     3 |   225 |   551   (1)| 00:00:07 |
|   3 |    UNION-ALL         |            |       |       |            |          |
|*  4 |     TABLE ACCESS FULL| MDW_SOURCE |     1 |    34 |   184   (1)| 00:00:03 |
|*  5 |     TABLE ACCESS FULL| MDW_SOURCE |     1 |    34 |   184   (1)| 00:00:03 |
|*  6 |     TABLE ACCESS FULL| MDW_SOURCE |     1 |    34 |   184   (1)| 00:00:03 |
-----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("MS"."OBJECT_NAME"='C' AND
              SUBSTR("MS"."OBJECT_NAME",1,1)='A')
   5 - filter("MS"."OBJECT_NAME"='C' AND
              SUBSTR("MS"."OBJECT_NAME",1,1)='B')
   6 - filter("MS"."OBJECT_NAME"='C' AND
              SUBSTR("MS"."OBJECT_NAME",1,1)='C')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
       2417  consistent gets

TEST_DB> -- below will do a select of a non-indexed column
TEST_DB> -- specific to one of the Union's in the view via NOT matching the where clause
TEST_DB> -- but logically it is the same.
TEST_DB> select count(created)
  2  from mdw_v1
  3  where object_name like 'C%'

COUNT(CREATED)
--------------
           129

Execution Plan
-----------------------------------------------------------------------------------
| Id  | Operation            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |            |     1 |    75 |   551   (1)| 00:00:07 |
|   1 |  SORT AGGREGATE      |            |     1 |    75 |            |          |
|   2 |   VIEW               | MDW_V1     |    24 |  1800 |   551   (1)| 00:00:07 |
|   3 |    UNION-ALL         |            |       |       |            |          |
|*  4 |     TABLE ACCESS FULL| MDW_SOURCE |     8 |   272 |   184   (1)| 00:00:03 |
|*  5 |     TABLE ACCESS FULL| MDW_SOURCE |     8 |   272 |   184   (1)| 00:00:03 |
|*  6 |     TABLE ACCESS FULL| MDW_SOURCE |     8 |   272 |   184   (1)| 00:00:03 |
-----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("MS"."OBJECT_NAME" LIKE 'C%' AND
              SUBSTR("MS"."OBJECT_NAME",1,1)='A')
   5 - filter("MS"."OBJECT_NAME" LIKE 'C%' AND
              SUBSTR("MS"."OBJECT_NAME",1,1)='B')
   6 - filter("MS"."OBJECT_NAME" LIKE 'C%' AND
              SUBSTR("MS"."OBJECT_NAME",1,1)='C')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
       2417  consistent gets

Above, I create the view, do a quick select of count(created) to cause a full scan of all three sections in the union view, then do selects using various WHERE predicates to see which sections of the unioned SQL statements are excluded.

The results are very interesting. The filtering on src_type still works fine, and now the WHERE predicate "where substr(object_name,1,1) ='C'" allows the CBO to exclude sections of the union: the "NULL IS NOT NULL" filters appear and the consistent gets figure is 807. The WHERE predicate "where object_name ='C'", which at first glance you might think to be logically the same but is not (in the view only the first character is checked, whereas here the whole object_name is compared), does not do any exclusion – and I thought it might. There are no OBJECT_NAMEs of 'C', but logically, as a human, you can see that such an object name could not be found by the first two unioned SQL statements in any case.

The last test is the most interesting. The WHERE clause of the SQL select over the view is "where object_name like 'C%'", which is logically the same as the WHERE clause of the third section of the unioned view, "where substr(object_name,1,1) ='C'" – yet no exclusion occurs and again we see 2417 consistent gets.

It would seem the CBO Pixies either have not coded for that or, as Jonathan says Oracle can be “touchy” about this feature. Bottom line, if you are expecting this sort of unioned view SQL exclusion to occur – TEST!

Exclusion of Unioned SQL in Views? August 16, 2012

Posted by mwidlake in performance, SQL.

Question – you have a view definition of the following format:

select 1 as d_type, col_a,col_b,col_c
from TAB_X, TAB_Y, TAB_Z
where {your predicates}
UNION
select 2 as d_type, col_a,col_b,col_c
from TAB_P, TAB_Q, TAB_R
where {your predicates}
UNION
select 3 as d_type, col_a,col_b,col_c
from TAB_X X, TAB_Y, TAB_Z
where {your predicates}

You now select from the view and your code includes a filter predicate such as:

"WHERE D_TYPE = 1"

What will the CBO do? Is it intelligent enough to do what a human would do and exclude the parts of the query that could not possibly return data you want, i.e. exclude the SQL statements where d_type is set to 2 and 3? Or is that functionality not built into the CBO? After all, it is only software, and someone would have had to decide that there could be benefit in looking into a set of unioned SQL selects to see if any could be excluded, and then code that into the CBO.

After all, the CBO is not smart enough to realise it could easily satisfy "select min(col_a), max(col_a) from tab_x" via two fast lookups on a supporting index. It scans the index.
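The classic workaround for that one, for what it’s worth, is to hand the CBO two separate aggregates, each of which can then use the INDEX FULL SCAN (MIN/MAX) access path. A sketch, with illustrative names (tab_x with an index on col_a):

select (select min(col_a) from tab_x) min_col_a
      ,(select max(col_a) from tab_x) max_col_a
from dual;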

So – what is your guess?

I asked 3 people. Not a massive straw poll but I got one vote for it will not exclude, one for it will and one smart answer of “I would expect it to exclude- but as you asked the question I am thinking it does not”.

Let’s do the test.

DB_TEST> @demo_union_view_2
DB_TEST> drop table mdw_source purge;
Table dropped.

  1  create table mdw_source
  2  tablespace users
  3  as select * from dba_objects
  4  /
Table created.

  1  create index mdso_idx on mdw_source(owner,object_name)
  2  tablespace users
  3  /
Index created.

DB_TEST> exec dbms_stats.gather_table_stats(ownname=>USER,tabname=>'MDW_SOURCE',method_opt=>'for all columns size 1')
PL/SQL procedure successfully completed.

  1  select table_name,num_rows from dba_tab_statistics
  2  where owner=USER and table_name='MDW_SOURCE';

TABLE_NAME      NUM_ROWS
--------------- ----------
MDW_SOURCE      55895

  1  create or replace view mdw_v1 as
  2  select
  3    '1'              src_type
  4   ,ms.OWNER
  5   ,ms.OBJECT_NAME
  6   ,ms.SUBOBJECT_NAME
  7   ,ms.CREATED
  8  from mdw_source ms
  9  where ms.object_name like 'A%'
 10  union all
 11  select
 12    '2'              src_type
 13   ,ms.OWNER
 14   ,ms.OBJECT_NAME
 15   ,ms.SUBOBJECT_NAME
 16   ,ms.CREATED
 17  from mdw_source ms
 18  where ms.object_name like 'B%'
 19  UNION ALL
 20  select
 21    '3'              src_type
 22   ,ms.OWNER
 23   ,ms.OBJECT_NAME
 24   ,ms.SUBOBJECT_NAME
 25   ,ms.CREATED
 26  from mdw_source ms
 27  where ms.object_name like 'C%'
 28  /
View created.

So, a simple table is created from DBA_OBJECTS and a view is created over the top of it, consisting of 3 simple selects unioned together, with a static value ('1', '2' or '3') generated for each of the 3 parts as SRC_TYPE.

Now let’s run some tests:

DB_TEST> set autotrace on
DB_TEST> --
DB_TEST> -- below will do a select of a non-indexed column across the view
DB_TEST> select count(created) from mdw_v1
  2  /

COUNT(CREATED)
--------------
          1284

Execution Plan
----------------------------------------------------------
Plan hash value: 4259489107
-----------------------------------------------------------------------------------
| Id  | Operation            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |            |     1 |     9 |   551   (1)| 00:00:07 |
|   1 |  SORT AGGREGATE      |            |     1 |     9 |            |          |
|   2 |   VIEW               | MDW_V1     |  2463 | 22167 |   551   (1)| 00:00:07 |
|   3 |    UNION-ALL         |            |       |       |            |          |
|*  4 |     TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
|*  5 |     TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
|*  6 |     TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("MS"."OBJECT_NAME" LIKE 'A%')
   5 - filter("MS"."OBJECT_NAME" LIKE 'B%')
   6 - filter("MS"."OBJECT_NAME" LIKE 'C%')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
       2417  consistent gets


DB_TEST> -- below will do a select of a non-indexed column across the view
DB_TEST> -- limiting it to "one" of the types
DB_TEST> select count(created)
  2  from mdw_v1
  3  where src_type='3'
  4  /

COUNT(CREATED)
--------------
           129


Execution Plan
----------------------------------------------------------
Plan hash value: 1188403912
------------------------------------------------------------------------------------
| Id  | Operation             | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |            |     1 |    12 |   184   (1)| 00:00:03 |
|   1 |  SORT AGGREGATE       |            |     1 |    12 |            |          |
|   2 |   VIEW                | MDW_V1     |   823 |  9876 |   184   (1)| 00:00:03 |
|   3 |    UNION-ALL          |            |       |       |            |          |
|*  4 |     FILTER            |            |       |       |            |          |
|*  5 |      TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
|*  6 |     FILTER            |            |       |       |            |          |
|*  7 |      TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
|*  8 |     TABLE ACCESS FULL | MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(NULL IS NOT NULL)
   5 - filter("MS"."OBJECT_NAME" LIKE 'A%')
   6 - filter(NULL IS NOT NULL)
   7 - filter("MS"."OBJECT_NAME" LIKE 'B%')
   8 - filter("MS"."OBJECT_NAME" LIKE 'C%')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
        807  consistent gets


DB_TEST> -- below will do a select of an indexed column across the view
DB_TEST> select count(owner) from mdw_v1
  2  /

COUNT(OWNER)
------------
        1284


Execution Plan
----------------------------------------------------------
Plan hash value: 4219520050
------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |     1 |    17 |   221   (1)| 00:00:03 |
|   1 |  SORT AGGREGATE         |          |     1 |    17 |            |          |
|   2 |   VIEW                  | MDW_V1   |  2463 | 41871 |   221   (1)| 00:00:03 |
|   3 |    UNION-ALL            |          |       |       |            |          |
|*  4 |     INDEX FAST FULL SCAN| MDSO_IDX |   821 | 26272 |    74   (2)| 00:00:01 |
|*  5 |     INDEX FAST FULL SCAN| MDSO_IDX |   821 | 26272 |    74   (2)| 00:00:01 |
|*  6 |     INDEX FAST FULL SCAN| MDSO_IDX |   821 | 26272 |    74   (2)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("MS"."OBJECT_NAME" LIKE 'A%')
   5 - filter("MS"."OBJECT_NAME" LIKE 'B%')
   6 - filter("MS"."OBJECT_NAME" LIKE 'C%')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
        989  consistent gets


DB_TEST> --
DB_TEST> -- below will do a select of an indexed column across the view
DB_TEST> -- limiting it to "one" of the types
DB_TEST> select count(owner)
  2  from mdw_v1
  3  where src_type='3'
  4  /

COUNT(OWNER)
------------
         129


Execution Plan
----------------------------------------------------------
Plan hash value: 815942527
-------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |     1 |    20 |    74   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE          |          |     1 |    20 |            |          |
|   2 |   VIEW                   | MDW_V1   |   823 | 16460 |    74   (2)| 00:00:01 |
|   3 |    UNION-ALL             |          |       |       |            |          |
|*  4 |     FILTER               |          |       |       |            |          |
|*  5 |      INDEX FAST FULL SCAN| MDSO_IDX |   821 | 26272 |    74   (2)| 00:00:01 |
|*  6 |     FILTER               |          |       |       |            |          |
|*  7 |      INDEX FAST FULL SCAN| MDSO_IDX |   821 | 26272 |    74   (2)| 00:00:01 |
|*  8 |     INDEX FAST FULL SCAN | MDSO_IDX |   821 | 26272 |    74   (2)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(NULL IS NOT NULL)
   5 - filter("MS"."OBJECT_NAME" LIKE 'A%')
   6 - filter(NULL IS NOT NULL)
   7 - filter("MS"."OBJECT_NAME" LIKE 'B%')
   8 - filter("MS"."OBJECT_NAME" LIKE 'C%')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
        331  consistent gets


DB_TEST> -- below will select against the underlying object we are interested in
DB_TEST> select count(created)
  2  from mdw_source ms
  3  where ms.object_name like 'C%'
  4  /

COUNT(CREATED)
--------------
           129


Execution Plan
----------------------------------------------------------
Plan hash value: 152094671

---------------------------------------------------------------------------------
| Id  | Operation          | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |            |     1 |    34 |   184   (1)| 00:00:03 |
|   1 |  SORT AGGREGATE    |            |     1 |    34 |            |          |
|*  2 |   TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("MS"."OBJECT_NAME" LIKE 'C%')

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        805  consistent gets

Look at the plans at the start.

When my query does not include SRC_TYPE, i.e. the first SQL statement, the plan is simple: it scans the table three times, estimating 821 rows per scan and a cost of 184 for each step. When I include SRC_TYPE = ‘3’ in the next statement, we see that the plan still includes all three full table scans, each still estimated at a cost of 184.
So even when you provide a predicate that identifies the one unioned branch you want, the plan still shows all three. I tried a few further tests and could not force the exclusion of the “unwanted” branches from the plan.

I showed my friend (the one who suggested it would not work when I had asked the question) the results and we agreed that it was a little unfortunate that the Oracle Optimizer Pixies had not coded in the ability to exclude unwanted unioned statements, but we could understand it.

However, for those of you thinking “you are missing something” – you are right. {And, to be fair to my friend, my example was not quite so simplified as what I have posted here}.

I decided to quantify the level of the impact, so I shifted my attention to the buffer gets. Look at the first and second statements again. The cost of the total statement drops from 551 to 184, and the actual consistent gets from executing the SQL drop from 2,417 to 807. Both fall dramatically, to roughly a third.

Now check out the filter predicates at steps 4 and 6: “NULL IS NOT NULL”. That will always be false, so everything “under” that FILTER step in the plan is not executed. The step is still shown in the plan and, as a step, has the same cost, but it is not executed at run time.
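The effect can be modelled in a few lines of Python (a sketch of the idea only; the function name and shape are mine, nothing Oracle exposes): each UNION ALL branch projects a constant SRC_TYPE, the predicate is pushed into every branch, and any branch whose constant contradicts it is fenced off behind an always-false filter.

```python
# Toy model of the CBO's branch pruning over a UNION ALL view.
# Each branch projects a distinct constant src_type ('1', '2', '3').
# A predicate like src_type = '3' is pushed into every branch; where the
# constant differs it folds to a contradiction, which Oracle displays as
# the always-false filter "NULL IS NOT NULL", so that scan never runs.

def branches_executed(branch_constants, predicate_value=None):
    """Return the constants of the branches whose table scans actually run."""
    if predicate_value is None:        # no src_type predicate: all three run
        return list(branch_constants)
    return [c for c in branch_constants if c == predicate_value]

print(branches_executed(['1', '2', '3']))       # all three scans execute
print(branches_executed(['1', '2', '3'], '3'))  # only the third scan executes
```

In the plans above this shows up as three scans either way; the difference is which of them sit under a `NULL IS NOT NULL` filter and so cost nothing when the statement runs.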

The pattern is repeated when the simple SQL statement is on an indexed column and the index is being used (the next two statements) {and honesty forces me to admit I have included more output than I need for this example, as I wanted to confuse the issue slightly and maybe encourage you to not read the plans too carefully}.

So yes, the CBO IS excluding the individual unioned SQL statements, based on the SRC_TYPE predicate and the fixed value stated in each part of the union.

I’m quite impressed by that. Clever Oracle Optimizer Pixies.

To tie this back to reality, I initially looked at this because I am trying to tune some code that hides all the complexity of the application under one massive unioned uber-view over a set of 9 other complex, unioned views. The explain plan runs to close to 700 lines. I had utterly overlooked the filters and filter predicates in that – heck, I missed them initially in my simple test :-) {which was more complex than the one in this post, in my defense}. Trying to work with a very slow statement based on nested, complex views that is used by pretty much the whole application is “fun”.

Also back in reality, the filter clause I see in my real code is along the lines of:

:B17 is null or is not null

This is because a bind variable is being considered, and that is actually one of the drivers behind the original issue in the real world. The level at which the master query needs to run is decided by this bind variable; otherwise the application developer would (should have?) just pointed at the sub-view needed for that level. So the CBO does not know which chunk of the unioned code it can ignore until it sees the bind value, and the plan therefore shows everything that could be executed.

Of course, it could be argued that if you know the type at the time you query the database, rather than use the above union filtering trick to cut down your massively complex view to the part that is required, just query the sub view you need directly. A bit more code but a lot simpler system to look after.

It’s a nice SQL trick but I would keep it to simple examples!

Finally, just for fun, here is the plan I’ve been trying to tune. As I said, it is close to 700 lines and only the highlighted grey section on the left is shown in more detail on the right.

Table High Water Mark and How Empty the Table Is April 30, 2012

Posted by mwidlake in performance, statistics.
Tags: , ,
1 comment so far

On a recent post about reducing the high water mark in a table by shrinking it someone asked if you could tell how much “space” there was in a table that could be recovered by compressing it.

Yes. As pseudo-code:

Calculate number_of_rows * average_row_length. That is how much data there is.
Divide by the block size, to give how many blocks-worth of data you have.
Inflate this by an “overhead” fudge factor of about 20-25% to allow for wasted space (see later).
Compare the result to the number of blocks used by the table.
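Those steps can be sketched as a small Python function (illustrative arithmetic only; the 1.25 multiplier is the 20-25% fudge factor explained further down, and the 8k block size matches this demo):

```python
def pct_space_used(num_rows, avg_row_len, alloc_blocks, block_size=8192,
                   overhead=1.25):
    """Estimate what fraction of a segment is really needed for its data.

    overhead inflates the raw data volume by ~25% to allow for the block
    header, PCTFREE and part-filled blocks (the fudge factor in the text).
    """
    data_bytes = num_rows * avg_row_len             # step 1: raw data volume
    data_blocks = data_bytes / block_size           # step 2: as 8k blocks
    return (data_blocks * overhead) / alloc_blocks  # steps 3 and 4

# The figures from the worked example below (50,000 rows of 213 bytes in
# a 1,664-block segment) give roughly 0.9766, i.e. ~98% used:
print(round(pct_space_used(50000, 213, 1664), 4))
```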

Here is some code to do it, showing some extra data and how we calculate the effective percentage of space used (PCT_SPC_USED). It runs against an example table, HWM.

select uts.blocks                                     blks_used
      ,uts.avg_space
      ,uts.num_rows
      ,uts.avg_row_len
      ,uts.empty_blocks                               empty_blks
      ,usse.blocks                                    alloc_blks
      ,greatest(uts.blocks,1)/greatest(usse.blocks,1) pct_hwm
      ,uts.num_rows*uts.avg_row_len                   data_in_bytes
      ,(uts.num_rows*uts.avg_row_len)/8192            data_in_blks
      ,((uts.num_rows*uts.avg_row_len)/8192)*1.25     mod_data_in_blks
      ,(((uts.num_rows*uts.avg_row_len)/8192)*1.25)/usse.blocks pct_spc_used
from user_tab_statistics uts
    ,user_segments       usse
where uts.table_name='HWM'
and   uts.table_name=usse.segment_name
/
--
 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       1221      50000         213         34       1664   .97957
     10650000   1300.04883       1625.06104    .97659918

I am collecting the data from USER_TAB_STATISTICS and USER_SEGMENTS. For this code to work you must have reasonably good stats against the table.

I’d like to run through some of the columns I have selected:
First of all, columns AVG_SPACE and EMPTY_BLKS are not populated by dbms_stats.gather_table_stats. They are populated by the deprecated ANALYZE command, which you should NOT use to gather table stats since V10 came along. These columns hold values here only because I ran an ANALYZE as well as a dbms_stats gather.
Next, I collect the allocated blocks from USER_SEGMENTS {and for this demo I just ignored the potential for partitioned tables} and compare this to BLKS_USED to get the High Water Mark as a percentage of the table. I do this because EMPTY_BLOCKS is zero if you have never used ANALYZE and, even if you did, the value will not change unless you keep using that deprecated command.
On the second line of output I calculate DATA_IN_BYTES as a simple num_rows * avg_row_len and convert it into blocks {for simplicity I do not collect the block size; I know it is 8k}. I then apply my “overhead” fudge factor. A block has a header of around 100 bytes {I’ve not checked the exact figure for years}; PCTFREE can be varied but defaults to 10%; and as only whole rows fit, on average half a row’s worth of space is empty in each “full” block. Thus I reduce the space available by 20-25%. In this case 25%, as my rows are large.
Finally, I compare this modified data volume to the allocated blocks to get the effective percentage of space used.

Below I run through creating some test data, looking at the stats and my calculated High Water Mark and pct_space_used and finally shrink my table to see if my guesstimate is a reasonable guesstimate:


drop table hwm purge;
prompt populate table
set feed on
create table hwm
(id   number(10)
,num1 number(2)
,vc1  varchar2(100)
,vc2  varchar2(100)
)
/
insert into hwm 
select rownum
,trunc(dbms_random.value(1,100))
,lpad('A',100,'A')
,lpad('B',100,'B')
from dual connect by level < 50001
/
50000 rows created.

exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'HWM')

--where is the HWM compared to total segment size

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630          0      50000         210          0       1664   .97957
     10500000   1281.73828       1602.17285   .962844262

NB AVG_SPACE and EMPTY_BLKS are not populated by dbms_stats (they show as 0).
The High Water Mark is 1630 blocks out of 1664 in the segment.
My calculated PCT_SPC_USED is 96%. That is probably close enough.
{Remember, the last used block will be only partly full, accounting for a bit of the difference.}


-- I will use ANALYZE to fill the missing columns
analyze table hwm compute statistics;

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       1150      50000         213         34       1664   .97957
     10650000   1300.04883       1625.06104    .97659918

Now those two columns are populated.
Note the slightly different AVG_ROW_LEN, even though dbms_stats sampled 100% (as the table is so small)
and ANALYZE was a COMPUTE.


-- clear 90% of the data randomly

45461 rows deleted.

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       1150       4539         210         34       1664   .97957
       953190   116.356201       145.445251   .087407002

PCT_HWM is not altered, of course, but PCT_SPC_USED has dropped dramatically.
The table is now only 8.7% used according to my calculations (I deleted roughly
90% of the rows).

The BLKS_USED does not change. AVG_SPACE and EMPTY_BLOCKS are unchanged because I
used dbms_stats to update the statistics and it DOES NOT ALTER the columns it does not
populate. Thus you have no idea how recent those columns are if you use a mixture of commands.


-- clear some blocks completely by deleting a range

2181 rows deleted.


 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       1150       2358        210         34       1664   .97957
       495180   60.4467773       75.5584717   .045407735

Now the PCT_SPC_USED is down to 4.5%


-- has EMPTY_BLOCKS changed if I use ANALYZE?
analyze table hwm compute statistics;

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       7682       2358         213         34       1664   .97957
       502254   61.3103027       76.6378784   .046056417

As you can see, if I use ANALYZE, AVG_SPACE alters. But EMPTY_BLOCKS does not, even though I cleared
a chunk of the table. So there are blocks that can be reused but they are not listed as empty.

I'll just take a quick side-step and show an "oddity" about dbms_stats
--deleting the stats (using dbms_stats)

exec dbms_stats.delete_table_stats(ownname=>user,tabname=>'HWM')

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
                                                              1664

dbms_stats.delete_table_stats clears ALL statistics, even the columns it does not populate

--and now collect them via dbms_stats again

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630          0       2358         210          0       1664   .97957
       495180   60.4467773       75.5584717   .045407735


--now to shrink the table
alter table hwm enable row movement;
alter table hwm shrink space;


 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
        72          0       2358         210          0         80   .90000
       495180   60.4467773       75.5584717   .944480896

So I calculated that there were about 75 blocks of data in that table. Having shrunk it
(to 72 used blocks), I was only a little bit out.

Having run through those examples, we can see that the accuracy of PCT_SPC_USED comes down to the fudge factor employed, but 25% is probably close enough. After all, you are only likely to shrink a table that would very clearly benefit from it.

Friday Philosophy – It’s not “Why Won’t It Work!” it’s “What Don’t I Understand?” April 27, 2012

Posted by mwidlake in Friday Philosophy.
Tags: , ,
add a comment

I had a tricky performance problem to solve this week. Some SQL was running too slow to support the business need. I made the changes and additions I could see were needed to solve the problem and got the code running much faster – but it would not run faster consistently. It would run like a dream, then run slow, then run like a dream again 2 or 3 times and then run like a wounded donkey 3 or 4 times. It was very frustrating.

For many this would provoke the cry of “Why won’t it work!!!”. But I didn’t, I was crying “What don’t I understand???”. {I think I even did a bit of fist-pounding, but only quietly as my boss was sitting on the desk opposite me.}

I think I’ve always been a bit like that in respect of “How Things Work”, but it has been enhanced in me by being blessed to work with, or meet, people for whom understanding why something is not working is more important than fixing it.

I was reminded of this by a thoughtful comment in an email that one of the OakTable members sent to the list. They commented that what they felt was common between members of the OakTable is “that we’re not interested (really) in what the solution is of most of our problems, but actually, what is the underlying issue that really causes the problem?”

It struck a real chord with me. Quite a few people I’ve come across seem fixated on collecting solutions – so that they can look knowledgeable and be “one of the best”. But that’s just stamp collecting really. It’s like one of those ‘games card’ fads that each generation of children has, where you want to collect the best cards so you can win. I never got it as a kid, as there are few rules, tactics or ‘how it works’ to elucidate. What success is there in winning when it’s just down to the cards you have? {And, being candid, I didn’t like them partly as I never had the money to buy many cards and partly as I was rubbish at trading them. No sales skills.}

I know the solve-it-don’t-just-fix-it position is a topic I have touched on before, but I think the attitude of trying to fix problems by understanding how things work is far more satisfying than doing so by knowing a set of solutions. You develop a deeper understanding, one that helps you solve new problems in a way no amount of solution-stamp-collecting ever will. However, another wise voice on the OakTable discussion pointed out that you can be in a work environment where there is no time to investigate and you simply have to try your set of fixes and move on if you hit one that works. Your work environment can strongly influence how you work and, in some ways, the way you think.

I bet some people are wondering what my problem at the start of this post actually was? Well, a nice technical blog about it may appear over the weekend, but the core reason for the toggling of working/not-working was partition swap. We have data coming into the system very fast. We build a new summary of the key data in one table and then swap it into active play via partition swap. On the live system, stats had not been gathered on the “swap” table we had introduced but had on the active table. So, each time the partition swapped, we went from good stats to “empty” stats or the other way around. The empty stats gave a quite, quite dreadful execution plan.

Shrinking Tables to Aid Full Scans April 18, 2012

Posted by mwidlake in performance, SQL.
Tags: ,
14 comments

{This blog is about shrinking tables where the High Water Mark is higher than it needs to be. Prompted by one of the comments, I wrote a follow-up post on finding the High Water Mark and tables that consist mostly of empty space, which would be candidates for shrinking.}

This blog is about one of those things I do as almost an autonomous “not thinking about it” performance housekeeping task, one which I have been meaning to mention for ages.

There can be quite a lot to gain by checking out full scans on “small” tables and seeing if it is as efficient as it can be. Sometimes it is, sometimes it is not. Often it is “good enough”. Occasionally it is awful.

Recently I was just casting an eye over the “top 20” SQL on a system for any code generating a lot of consistent gets. I came across the below:

FRST_LOAD_TIME SQL_ID PRSE EXCS BUFFS DISCS RWS
-------------------- ------------- --------- ----------- -------------- ---------- ------------
CPU_MS ELAPSD_MS SORTS DIR_W OPT_COST
-------------- -------------- ---------- ---------- --------------
SQL_TEXT
----------------------------------------------------------------------------------------------------
2-03-14/18:00:10 bk9b5u6zyvy59 36,262 36,262 90,634,158 7 36,261
320,102.3 326,920.9 0 0 546
SELECT count(*) RUNNING_SESSIONS from SNP_SESSION WHERE SESS_STATUS = :1 and AGENT_NAME=:2

It caught my eye as it struck me as a lot of buffer gets for what I thought was quite a small table, and it is executed quite often; in this case, a few thousand times a day. It takes around 2,500 buffer gets per count(*). All buffered, but the CPU has to chunk through it.

I check, it is a small table:

ABCD01> select count(*) from dev_ODI_XXXX.snp_session

COUNT(*)
----------
73

ABCD01> @tab_lst
Enter value for tab_name: snp_session

TABLE_NAME OWNER NUM_ROWS
------------------------------ ------------------------------ -------------
SNP_SESSION DEV_ODI_XXXX 49

-- and a quick check on those key columns

OWNER COLUMN_NAME NUM_DISTINCT N_NULLS LOW_V HI_V BKTS AVG_L
-------- -------------------- ------------ ---------- --------------- --------------- ---- -----
DEV_ODI_ SESS_STATUS 4 0 D W 4 2
DEV_ODI_ AGENT_NAME 4 6 AAA_ODI_AGENT BBB_ODI_AGENT 4 13

The table IS small, only 73 rows. It is also interesting that the stats are out by quite a percentage, but the table is still seen as small according to the stats. This indicates the table is quite volatile, but consistently small. Another thing to notice is that the number of distinct values for each column in the WHERE clause is only 4, so an index lookup would not be very selective. With such a small number of records and such an unselective index, I “know” that an index lookup would not be of much benefit {BTW, everything that you “know” in this way – do check it in reality every now and again…}. So I don’t look at adding an index; I look at why such a small table needs so many buffer gets.
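The back-of-envelope selectivity arithmetic behind that “know” runs like this (my arithmetic, assuming the optimizer's usual independent-columns model):

```python
# With only 4 distinct values in each WHERE-clause column, a combined
# equality predicate is expected to match rows/(4*4) of the table, so an
# index lookup on a 73-row table buys almost nothing over a scan.
rows, ndv_status, ndv_agent = 73, 4, 4
expected_matches = rows / (ndv_status * ndv_agent)
print(round(expected_matches, 1))   # rows expected per (status, agent) pair
```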

So, why so many buffer gets?

First of all, I need to be careful. The original source of information I was looking at was V$SQL, and that holds cumulative stats for all executions of the same SQL statement. The buffer gets could have been very high a while ago (say, yesterday) and be small now, with the average sitting at 2,500 buffer gets per select even though right now it is running fine. So I verify that the code is still doing a lot of work for such a small number of records:

set autotrace on
SELECT count(*) RUNNING_SESSIONS from dev_ODI_XXXX.SNP_SESSION 
WHERE SESS_STATUS = 'D' --:1 
and AGENT_NAME= 'DEV_ODI_AGENT'--:2

RUNNING_SESSIONS
----------------
1
Elapsed: 00:00:00.99

Execution Plan
----------------------------------------------------------
Plan hash value: 1837310352
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | 546 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 15 | | |
|* 2 | TABLE ACCESS STORAGE FULL| SNP_SESSION | 1 | 15 | 546 (1)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - storage("SESS_STATUS"='D' AND "AGENT_NAME"='DEV_ODI_AGENT')
filter("SESS_STATUS"='D' AND "AGENT_NAME"='DEV_ODI_AGENT')

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2013 consistent gets
0 physical reads

That’s still a lot of consistent gets to scan all of 73 rows and do a count (which is the SORT AGGREGATE step). A second to process and 2,013 consistent gets to look at 73 rows in a skinny table – just trust me, it is a skinny table. And the optimiser costed the full scan step at 546 “equivalent single-block IOs”.
{Oh, if anyone is wondering about the “TABLE ACCESS STORAGE FULL” operation and the storage(“SESS_STATUS”=’D’ AND “AGENT_NAME”=’DEV_ODI_AGENT’) predicate step, it’s because this system is an Exadata box. Just ignore this for now, it is not relevant to this issue. I’ll blog about it *next week*.}

What is the problem?

Well, let’s see how big the table is.

ABCD01> @seg_dets
Enter value for seg_name: snp_session
Enter value for owner: dev_odi%
Any Key>

OWNER SEG_NAME SEG TS_NAME BYTES_K BLOCKS exts INI_K NXT_K
-------- --------------- --- -------- ---------- --------- ---- ------- -------
DEV_ODI_ SNP_SESSION TAB DEV_ODI_ 16,384 2,048 31 64 1024
REPO USER

The table is 2,048 blocks in size and the consistent gets to scan it were 2,013. That’s close.

The problem is that when you full scan a table, Oracle scans up to the High Water Mark of that table, i.e. the highest block that it knows has ever held data. As Oracle inserts new data into a table and extends it, it keeps a note of the highest block data has been put into: the High Water Mark (HWM). When you delete data from a table, Oracle marks individual blocks to be reused as they empty (drop below a certain percentage full), but it does NOT check whether whole swathes of blocks have emptied so that the HWM could be lowered. In fact, there could well be a few records in the first block of the table and then nothing until a few records in the last block ever used, the High Water Mark. Oracle just scans all blocks and counts relevant records up to the HWM. This is actually a very long-standing performance tweak: you can allocate GBs of space to a table, but Oracle knows the very highest block it has put data into for that segment and only scans up to the High Water Mark. Usually, the HWM is useful.

I happen to know that this table used to have several thousand records in it but is now used far less extensively and holds only a few records. They are constantly deleted and new ones created. I have seen {very regularly, at least once a year} some extreme situations where a table has had many millions of rows in it at one point, setting the HWM to several hundred thousand blocks into the table segment, but the table now only holds a few hundred records. But a Full Table Scan scans all the way up to the HWM, taking several minutes to access a few hundred records. Often that table is being inefficiently cleared with a “delete from table_x” statement and then re-populated.
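A toy model of the behaviour just described (the class and numbers are mine, purely to illustrate; real block counts depend on row sizes and PCTFREE): inserts raise the HWM, deletes never lower it, and a full scan always reads up to it until a shrink (or move/truncate) resets it.

```python
import math

class Segment:
    """Toy model of a heap table segment's High Water Mark (HWM)."""
    def __init__(self, rows_per_block=25):
        self.rows = 0
        self.hwm = 0                    # highest block ever used
        self.rows_per_block = rows_per_block

    def insert(self, n):
        self.rows += n
        # inserts format new blocks, pushing the HWM up as needed
        self.hwm = max(self.hwm, math.ceil(self.rows / self.rows_per_block))

    def delete(self, n):
        # deletes free space inside blocks but never lower the HWM
        self.rows -= n

    def full_scan_blocks(self):
        return self.hwm                 # a full scan reads up to the HWM

    def shrink(self):
        # ALTER TABLE ... SHRINK SPACE: compact the rows, reset the HWM
        self.hwm = math.ceil(self.rows / self.rows_per_block)

s = Segment()
s.insert(50000)
s.delete(49927)                         # 73 rows left...
print(s.full_scan_blocks())             # ...but a scan still reads 2000 blocks
s.shrink()
print(s.full_scan_blocks())             # now just 3 blocks
```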

What is the answer? Well, I could truncate the table and set the HWM to zero, but this table actually holds some real data, so I can’t truncate it in this situation. I could MOVE the table, i.e. get Oracle to recreate the segment and move the existing data into it. That would start the new segment’s HWM at block one and raise it only as far as the neatly inserted data needs. The third option, and the one I took, is to SHRINK the table. This basically tells Oracle to shuffle the data down to the start of the table and reset the HWM. You can do this on the fly and I have done it on relatively active systems, but I must give the usual advice: test this yourself, as you may hit locking issues on a busy system that uses the table you are shrinking very hard. Also, you do need to enable row movement on the table for the shrink to work, as is demonstrated below:

ABCD01> alter table dev_ODI_XXXX.snp_session shrink space cascade;
alter table dev_ODI_XXXX.snp_session shrink space cascade
*
ERROR at line 1:
ORA-10636: ROW MOVEMENT is not enabled

ABCD01> alter table dev_ODI_XXXX.snp_session enable row movement;

Table altered.

ABCD01> alter table dev_ODI_XXXX.snp_session shrink space cascade;

Table altered.

Elapsed: 00:00:01.98
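For comparison, the MOVE option mentioned above would look something like the below (a sketch only – the index name is invented for illustration). Note that moving a table changes its ROWIDs, so every index on it is left UNUSABLE and must be rebuilt, which SHRINK avoids:

```sql
-- Recreate the segment compactly; the new HWM rises only as rows go in.
alter table dev_ODI_XXXX.snp_session move;

-- The move invalidates all indexes on the table, so rebuild each one
-- (snp_session_pk is a hypothetical name; repeat for every index).
alter index dev_ODI_XXXX.snp_session_pk rebuild;
```

Until those rebuilds finish, queries relying on the indexes will fail or full scan, which is why SHRINK was the better fit on this live system.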

So, how does my little select perform now?

set autotrace on
SELECT count(*) RUNNING_SESSIONS from dev_ODI_XXXX.SNP_SESSION 
WHERE SESS_STATUS = 'D' --:1 
and AGENT_NAME= 'DEV_ODI_AGENT'--:2

RUNNING_SESSIONS
----------------
1
Elapsed: 00:00:00.04

Execution Plan
----------------------------------------------------------
Plan hash value: 1837310352
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | 546 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 15 | | |
|* 2 | TABLE ACCESS STORAGE FULL| SNP_SESSION | 1 | 15 | 546 (1)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
2 - storage("SESS_STATUS"='D' AND "AGENT_NAME"='DEV_ODI_AGENT')
filter("SESS_STATUS"='D' AND "AGENT_NAME"='DEV_ODI_AGENT')

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
8 consistent gets
0 physical reads

Fabulous. The run time is very short and the number of consistent gets is tiny. You might notice that the Explain Plan still puts the cost of the full scan at 546 (i.e. the equivalent of 546 single-block reads); that is because I had not yet re-gathered statistics. I left the system to do that automatically that night.
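If you cannot wait for the overnight job, re-gathering the statistics by hand is straightforward (adjust the owner, table and options to your own system):

```sql
-- Refresh the optimizer statistics so plan costs reflect the shrunken segment.
begin
  dbms_stats.gather_table_stats(
    ownname => 'DEV_ODI_XXXX',
    tabname => 'SNP_SESSION',
    cascade => true);            -- gather for the indexes as well
end;
/
```

After this the reported full-scan cost should drop to match the handful of blocks actually below the new HWM.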

I’ll just mention one more thing. Shrinking a table is a bit like the old trick of rebuilding indexes to compact them. People misunderstood what an index rebuild achieved and why the indexes were so much larger than they “needed” to be, and would burn CPU and IO rebuilding indexes overnight – only for them to quickly balloon back to their original size (creating a lot of overhead as they did). Shrinking oversized tables can bring great benefit, but keep in mind that if some process pushes the HWM right up every day by shoving lots of data into the table and then clearing it down later, shrinking the table is just a waste of time. It pays off when the table really has shrunk down in size for a sustained period, or only occasionally explodes to a massive size. I have in the past written “regular shrink” scripts for tables I know balloon once in a while (e.g. once a week) but are usually small – but you need to know what the pattern is. Don’t just blindly shrink tables.
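Such a “regular shrink” script need not be complicated. Something along these lines would do (a purely hypothetical sketch – the owner and table names are made up, and you would want to test the locking behaviour on your own system before scheduling it):

```sql
-- Shrink a known set of boom-and-bust tables. Only sensible for tables
-- you KNOW are usually small; run at a quiet time of day.
begin
  for t in (select owner, table_name
            from   dba_tables
            where  owner = 'MY_APP'                       -- hypothetical owner
            and    table_name in ('WORK_QUEUE','STAGE_ROWS')) loop
    execute immediate 'alter table '||t.owner||'.'||t.table_name
                    ||' enable row movement';
    execute immediate 'alter table '||t.owner||'.'||t.table_name
                    ||' shrink space cascade';
  end loop;
end;
/
```

The point is the explicit list of tables: the script encodes knowledge of which tables follow the explode-then-shrink pattern, rather than shrinking everything in sight.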

For me in this situation, the code scanning that table dropped out of my “top 20 most demanding SQL statements” and has stayed out of it for weeks. The code now completes in well under 0.1 seconds as opposed to a whole second – but no one noticed. I’d removed about 5% of the workload of the system – but no one noticed. It took me half an hour to investigate and fix, and I’d freed up a non-trivial chunk of processing resource. In my Performance Tuning heart I know I did a good job that day.

Dropped Tables, Hiding Extents and Slow DBA_FREE_SPACE Queries February 16, 2012

Posted by mwidlake in database design, internals, performance.
Tags: , , ,
8 comments

My last post was on slow dictionary queries caused by having many, many thousands of extents in your database. This post is about a special case of this problem, which I encountered recently. It was on an Exadata box – Exadata is not particularly relevant to the issue, but I’m curious to see if mentioning Exadata will make this post more popular :-)

I was querying the used and free space on a very small database on the client’s X2-2 1/4 rack Exadata machine. The query was really slow, taking about 30 seconds. This is a FAST box, what is going on?

I quickly realised that the problem was specific to one tablespace:

>@SPC_SUM
Enter the tablespace (or leave null)> DATA_01

TS_NAME              ORD      SUM_BLKS        SUM_K  MAX_CHNK_K NUM_CHNK
-------------------- ----- ----------- ------------ ----------- --------
DATA_01              alloc     262,144    2,097,152   2,097,152        1
                     free       63,128      505,024     504,384       11
2 rows selected.
Elapsed: 00:00:00.21

> @SPC_SUM
Enter the tablespace (or leave null)> USERS

TS_NAME              ORD      SUM_BLKS        SUM_K  MAX_CHNK_K NUM_CHNK
-------------------- ----- ----------- ------------ ----------- --------
USERS                alloc     748,320    5,986,560   5,372,160        2
                     free      127,904    1,023,232       6,144    3,058
2 rows selected.

Elapsed: 00:00:26.05

We can see it takes 0.21 seconds for tablespace DATA_01 and 26.05 seconds for USERS. Yes, USERS is larger, but only about three times larger – nowhere near enough to explain a query that is over 100 times slower. What is going on? Well, as the title of this post suggests, it comes down to dropped tables. The below shows my route to that conclusion.
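My SPC_SUM script itself is not shown here, but the sort of query behind that output (a sketch of the idea, not the actual script) sums DBA_DATA_FILES for the allocated space and DBA_FREE_SPACE for the free chunks:

```sql
-- Allocated space for the tablespace...
select tablespace_name, sum(blocks) sum_blks, sum(bytes)/1024 sum_k
from   dba_data_files
where  tablespace_name = 'USERS'
group by tablespace_name;

-- ...and the free chunks: their total, the largest single chunk,
-- and how many separate chunks there are.
select tablespace_name, sum(blocks) sum_blks, sum(bytes)/1024 sum_k
      ,max(bytes)/1024 max_chnk_k, count(*) num_chnk
from   dba_free_space
where  tablespace_name = 'USERS'
group by tablespace_name;
```

It is the second query, against DBA_FREE_SPACE, that crawls when the tablespace is shattered into thousands of pieces.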

What I could see was that for the USERS tablespace the maximum chunk of free space was relatively small compared to the total free space – about 6MB out of 1GB – and that there were a lot of individual pieces of free space: 3,058 of them. This tablespace was shattered into a lot of bits. So, what sort of extent management do we have? How big are the tables and indexes in this tablespace? {NB I already knew we had no partitions, so I did not have to worry about those.}

select tablespace_name,initial_extent,next_extent
      ,extent_management,min_extlen,allocation_type
from dba_tablespaces where tablespace_name ='USERS'

TABLESPACE_NAME                INITIAL_EXTENT NEXT_EXTENT EXTENT_MAN MIN_EXTLEN ALLOCATIO
------------------------------ -------------- ----------- ---------- ---------- ---------
USERS                                   65536             LOCAL           65536 SYSTEM

select tablespace_name,blocks,count(*) from dba_extents
where tablespace_name = 'USERS'
group by tablespace_name,blocks
having count(*) >1
order by blocks desc,tablespace_name

TABLESPACE_NAME                    BLOCKS   COUNT(*)
------------------------------ ---------- ----------
USERS                                2560          2
USERS                                2048          3
USERS                                1536          7
USERS                                1408          5
USERS                                1280          2
USERS                                1248          2
USERS                                1152          2
USERS                                1024        229
USERS                                 896         15
USERS                                 768         21
USERS                                 736          3
USERS                                 720          3
USERS                                 704          2
USERS                                 672          2
USERS                                 640         25
USERS                                 624          2
USERS                                 576          2
USERS                                 512        117
USERS                                 400          2
USERS                                 384         34
USERS                                 360          2
USERS                                 312          2
USERS                                 288          4
USERS                                 256         49
USERS                                 248          2
USERS                                 240          2
USERS                                 192          5
USERS                                 160          4
USERS                                 128       1165
USERS                                   8       1788

30 rows selected.

So we have system-controlled extent allocation, which should be fine. A quick check of the tablespace contents showed 300 or so tables and 200 or so indexes and, as you can see from the check of extents above, very few larger extents and a lot of small ones. {As a little aside, note the relatively large number of 8-block, 128-block, 256-block and 1024-block extents; these are the sizes that LMTs with system-managed extents tend to use, unless a similar-sized chunk is already available, in which case it is used.}

I did some other checks on indexes and segment sizes and it all boiled down to one thing: there was not a lot in this tablespace, what was there was small and… the total of all those table and index blocks was way less than the used space in the tablespace.

That is the major indicator of what is going on here: the difference between the space used in the tablespace and the total of the visible tables and indexes.
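The check itself is simple enough (a sketch; exactly which dictionary views count recyclebin segments varies with version, so treat a large gap as a prompt to look at DBA_RECYCLEBIN rather than hard proof):

```sql
-- Space accounted for by ordinary, non-recyclebin segments...
select sum(bytes)/1024/1024 visible_mb
from   dba_segments
where  tablespace_name = 'USERS'
and    segment_name not like 'BIN$%';

-- ...versus space actually used in the tablespace (allocated minus free).
select ((select sum(bytes) from dba_data_files where tablespace_name='USERS')
      - (select sum(bytes) from dba_free_space where tablespace_name='USERS'))
       /1024/1024 used_mb
from   dual;
```

On a healthy tablespace the two figures should be close; here they were wildly apart.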

The last link in the chain is the recycle bin.

select owner,ts_name,count(*) from dba_recyclebin group by owner,ts_name

OWNER                          TS_NAME                          COUNT(*)
------------------------------ ------------------------------ ----------
USER1                          USERS                              542356
USER1                                                                  2
WEGWEGWEG                      USERS                                  97
KKKUKUYLLX                     USERS                                 149
USOVFPKEKS                     USERS                                   3
....
ERHJTRTTTURT                   USERS                                   4

11 rows selected.

That’s 542,356 objects in the recyclebin for one user, in the one tablespace – my problem tablespace. The penny dropped; something I had already noticed and been a little uncomfortable about fell into place.

The client is using Oracle Data Integrator (ODI) to pull data together and load it into the database. The process they use basically creates a table, loads some data into it, pushes the data on into the target tables and then drops the table. The drop is not done with “DROP TABLE … PURGE;”, so everything goes to the recyclebin. This happens many, many times per load cycle, which is intended to run several times a day.
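If the intermediate tables really are throwaway, the fix is to bypass or clear the recyclebin (the table name below is a hypothetical example):

```sql
-- Drop without going via the recyclebin at all.
drop table my_stage_table purge;

-- Clear existing recyclebin entries for the connected user...
purge recyclebin;

-- ...or, as a suitably privileged user, for a whole tablespace.
purge tablespace users;
```

Of course, PURGE means no FLASHBACK TABLE … TO BEFORE DROP safety net, which is exactly why it is the right choice for transient load tables and the wrong one for real data.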

Something you should always keep in mind with the recyclebin is that the tables and indexes that go into it, and their extents, are not reported in several of the key data dictionary views. I mentioned this in a blog post way back, but maybe a whole post on it is called for.

So, the tablespace was being shattered by the constant creating and dropping of small tables. It was hidden from easy view due to how the recyclebin is (not) exposed in data dictionary views.

It is not good practice to constantly create and drop lots of tables. As well as the admittedly rather odd impact this post is all about, there is a lot of internal work involved for the Oracle database in creating and dropping tables. It is an overhead best avoided. The client had good reasons for their approach, but now they are going to look at the alternatives.

I’ve not actually proven in this post that all those dropped tables are the cause of the slow performance in querying the free space in that tablespace {though the corroborative evidence is very strong}. Neither have I said how we cleared up the mess. I’ll cover both of those in the next post on this mini-thread.

Slow querying of DBA_EXTENTS, DBA_FREE_SPACE and dropping tables February 14, 2012

Posted by mwidlake in performance.
Tags: ,
3 comments

Are you finding queries to identify free space, the size of segments or details of extents in your database are very slow? You could have a database with a very, very large number of extents. If you are on version 9 or up, I would suggest you check the extent management settings for any tablespace holding large segments (see later).

{Caveat – I’m writing this posting based on what I remember of some issues I had on two V10.2 databases 2 and 3 years ago respectively, and those systems are no longer available to me. So please treat everything I say here with some caution – I’ve not checked and verified it to the level I normally would. But I wanted to move this information from another posting I am writing on a recent experience I will verify…}

First, what this article is NOT about. It was believed for a very long time that creating tables with a large number of extents had a negative impact on SQL performance against that table. Except it never really did – it was a myth. Way back, prior to Oracle 7.3, you could only create so many extents per segment, depending on block size, but that was a different issue. It used to be argued that SQL selects against tables with many extents were slower. I did some tests and they were not – unless you were in the strange situation where your extent size was smaller than your multi-block read count, and even then the impact was not huge: a slowdown of a few percent to maybe 25%.

However, dropping such tables, truncating them and querying the dictionary objects that deal with extents and free space could, and still can, become very slow. As we have progressed through the Oracle versions from 8, 9 and 10 this problem has become less common and its impact smaller, mostly thanks to Locally Managed Tablespaces (LMTs) and Automatic Segment Space Management {though the latter helps more with concurrent DML than with selects}.

LMTs in particular have generally removed the issue. If you do not use LMTs and have no very, very pressing reason not to {like 3rd-party support for applications}, then swap to LMTs. LMTs have been the default for user-defined tablespaces since Oracle 9 and offer two options: uniform (fixed) extent sizes, or automatic allocation, where Oracle uses 8-block extents as a new segment starts, then 128-block extents, then 1024-block extents as the segment grows {I might be slightly out on the exact details, but don’t worry about it – I certainly don’t}. You can check the settings for tablespaces as demonstrated below. I create two tablespaces, one with uniform extent sizes and one with auto-managed extent sizes, and check the relevant information (this is on 11.2.0.3):

create tablespace mdw_uni_1m
datafile 'C:\APP\MARTIN\ORADATA\MDW1123\MDW_UNI_1M'
size 100m
extent management local uniform size 1M;

create tablespace mdw_auto
datafile 'C:\APP\MARTIN\ORADATA\MDW1123\MDW_AUTO'
size 100m
extent management local autoallocate;

select tablespace_name,initial_extent,next_extent
      ,extent_management,min_extlen,allocation_type
from dba_tablespaces where tablespace_name like 'MDW%';

TABLESPACE_NAME    INITIAL_EXTENT NEXT_EXTENT EXTENT_MAN MIN_EXTLEN ALLOCATIO
------------------ -------------- ----------- ---------- ---------- ---------
MDW_UNI_1M                1048576     1048576 LOCAL         1048576 UNIFORM
MDW_AUTO                    65536             LOCAL           65536 SYSTEM

As you can see, tablespace MDW_UNI_1M uses uniform extents of 1M and MDW_AUTO has system managed allocation and starts with 64K extents – 8 blocks with my 8k block size.

As a quick demo, I’ll just create and populate two simple tables and see what extent sizes are created for them:

create table mdw_big
tablespace mdw_auto
as select rownum id,lpad('A',1000,'A') pad
from dual connect by level <10000

create table mdw_big_uni
tablespace mdw_uni_1m
as select rownum id,lpad('A',1000,'A') pad
from dual connect by level <10000

select owner,segment_name,blocks,count(*)
from dba_extents
where segment_name like 'MDW%'
group by owner,segment_name,blocks

OWNER           SEGMENT_NAME        BLOCKS   COUNT(*)
--------------- --------------- ---------- ----------
ERIC            MDW_BIG_UNI            128         12
ERIC            MDW_BIG                  8         16
ERIC            MDW_BIG                128         11

3 rows selected.

So, how do issues with large numbers of extents still arise on modern Oracle systems? Well, the two situations I’ve seen on Oracle 10 had the same cause {and, as a teaser for later this week, I’ve seen a variation of this issue on a nice, shiny Exadata X2-2 Oracle 11.2.0.3 box :-) You can read about that here}. What alerted me was slow performance querying the data dictionary, in particular via my scripts for checking free space, the size of segments and how many extents they consist of.

If you create a tablespace with fixed extent sizes of, say, 64K and then create a table in it that is 100GB in size, you will get an awful lot of extents – over 1.6 million of them. Now make it worse: a partitioned table that ends up several hundred GB in size, with all those segments in that tablespace (or similarly defined tablespaces).

Since I hit the problem twice myself, I’ve chatted to others who have had the same issue. The above usually happens because of a mistake: the tablespace(s) in question are set up to hold small tables and then get used for large ones, either through a mistake in stating the exact tablespace to use or by letting the segment default to your default tablespace – which just happens to be one with fixed but small extent sizes.
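A quick check for this mistake (a sketch – the cut-off of 20 rows is arbitrary) is to list the segments carrying the most extents; mis-placed big tables float straight to the top:

```sql
-- Top 20 segments by extent count. On a healthy database the numbers
-- should be modest; six- or seven-figure counts mean trouble.
select * from (
  select owner, segment_name, segment_type, tablespace_name, extents
  from   dba_segments
  order by extents desc)
where rownum <= 20;
```

Cross-check the tablespace_name column against the extent settings in DBA_TABLESPACES and the culprit is usually obvious.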

The end result is a massive number of small extents in these tablespaces, usually with extents for different objects mixed in. Some dictionary queries slow down – in particular, anything to do with looking at extents. At one site I was trying to use my own code, replacing the standard automated job, to gather statistics on tables. It’s fairly “smart” code and chooses a sample size based on the size of each segment. Only, the data dictionary was performing so slowly that just getting the segment-size information took over 5 seconds – longer than some of the stats gathers themselves.

You can logically understand why dropping or truncating such a table is slow: Oracle has to sort out all those extents and remove the information from the data dictionary. This is not helped by the fact that part of the data dictionary itself is being slowed down by all those pesky records…

You MAY be able to get some relief from this situation by gathering fixed object statistics. I did so at one site, where the queries against free space and segment sizes sped up by around 80%. I have no demonstrated proof of this – it is just what I saw in one situation – so feel free to try it, but don’t sue me if it does not help. Also, it took over 3 hours to gather the fixed object stats, and you only do that sort of thing, untested, on a production system if you are already in a bad place.
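For reference, the gather itself is a one-liner, though note it needs a suitably privileged account and, as I say, can take hours:

```sql
-- Gather statistics on the X$ fixed tables that underpin dictionary
-- views such as DBA_EXTENTS and DBA_FREE_SPACE.
exec dbms_stats.gather_fixed_objects_stats;
```

As with any untested change on production, try it on a non-critical system with the same symptoms first if you possibly can.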

{update – I just tested this on a private 11.2 db that was taking 10.3 seconds to count all extents, all 12,742 of them. Gathering fixed object stats made no difference at all.}

However, the real answer is to laboriously rebuild those segments in tablespaces with correctly specified uniform extent sizes. Which we did, over several weeks, and it made a difference.

If I were doing this task today, and could get the outage to do it, I would create COPIES of the segments that were in the wrong tablespaces, re-name them and re-apply any constraints, move the other, smaller tables and indexes to a new tablespace – and then drop the old tablespaces including contents. Why? Because dropping a table with lots and lots of small extents seemed to take a very long time (many minutes per partition, and we had thousands of them). Again, my proof is lost in the mists of time, but that’s what I would aim to do.
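That copy-and-swap approach would look roughly like this (a hypothetical sketch – all the names are invented, and indexes, constraints, grants and triggers all need carrying across by hand too):

```sql
-- 1) Build a compact copy in a tablespace with sensible uniform extents.
create table big_table_new
tablespace data_uni_64m
as select * from big_table;

-- 2) Swap the names (then re-create indexes, constraints and grants).
rename big_table to big_table_old;
rename big_table_new to big_table;

-- 3) Once everything worth keeping has been moved out, drop the
--    shattered tablespace in one go rather than segment by segment.
drop tablespace old_data including contents and datafiles;
```

The whole point of step 3 is to sidestep the slow one-extent-at-a-time dictionary clean-up that makes individual drops so painful.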
