
Merry Christmas and Happy New Year December 25, 2009

Posted by mwidlake in Private Life.
Tags:
3 comments

I just wanted to wish a Merry Christmas and Happy New Year to everyone who reads my blog, especially those who add comments and help make it a better source of information and, I hope, occasionally amusement.

Martin

What were you doing 10 years ago? December 24, 2009

Posted by mwidlake in Perceptions.
Tags: , ,
4 comments

It is coming towards the end of December 2009. What were you doing 10 years ago today? If you were at school or college I don't want to know, it just depresses me. You might have been doing last-minute Christmas things, you could have been travelling to see friends, loved ones or maybe {and often less pleasurably} relatives. If, however, you were working in IT I probably know what you were doing:

You were somehow involved in preparing for "The Y2K bug!!!" (Cue dramatic drum roll, ominous music, thunder-and-lightning video and the quiet shrill laughter of consultancy firms running all the way to the bank).

Remember that? I'm a little surprised not to have seen anything much in the media yet celebrating it {I've not seen it in the UK, anyway}, which is odd as it was such a very big deal back then. You remember? All the nuclear power plants were going to blow up, air traffic control systems were going to go haywire, bank systems were going to crash and generally the whole of modern civilisation was going to crumble.

It did not, of course. Its biggest impacts seemed to be, firstly, to give old Cobol and Fortran programmers a financial boost to help them bolster the pension fund and, secondly, that so much time, effort and planning was spent on Y2K preparation that 75% of other IT programs were shut down to cope. There certainly seemed to be a little less work to be had in the immediate aftermath.

I never decided who was more to blame for the hype and the fear: the media, who can never pass up a chance to boost revenue by outrageous scare tactics, or business/IT consultancies, who can never pass up a chance to boost revenue by… I had better stop there, in case I ever decide to go back to working for a consultancy.

I personally learnt a couple of things.

One was to prepare. In my particular case, I had planned a big Y2K party with a bunch of friends, hired a big house to hold a dozen of us plus kids and found somewhere to buy big fireworks. All in 1998. And for 18 months before the event I told anyone I went to work for that I would not be available for that particular week. I put it in my contract. Of the two or three companies I picked up contracts with during that period, none of them batted an eyelid when I mentioned this. Of course, this meant nothing. With 3 months to go before Y2K, the missive came rolling out from top management that no one, absolutely no one in IT, was being allowed to take New Year's Eve off.
I said to my boss "except me". No, no exceptions. "It's in my contract; I stated when I joined that I was not available that week". No exceptions. "Bye then". Huh? "Well, I said so at the time and I am sorry to upset you, but you see, this is a job, we had an agreement, and what I have organised is my life, so, well, you lose". I was a little more diplomatic about it, but I insisted. After all, we had fully Y2K tested the app I was responsible for and I had an agreement.
I had the week off (with a mobile phone by my side, admittedly, but I was not in a fit state to do much by the time midnight came around). I learnt that if you have an agreement and you calmly refuse to capitulate, and you negotiate a little, you can avoid “no exceptions”. {My friend Nasty Mike took the more direct approach of swearing loud defiance. He won also, but maybe with more bad feeling…}

The other thing I learnt was that companies will not pay less than they expect for a job. The five of us had written this app and it used four digit year dates all the way through the system. It was on Oracle 8. It worked. But no, Top Management wanted the system Y2K proving. So they asked a company to test it. This company wanted something like £50,000 to test it and it was to come out of our development budget. Ouch. That was pretty much half the budget.
So one of the team put forward a proposal to Y2K test the system via their company, for about £5,000. This was refused; it was too cheap to be believed.
So we put exactly the same proposal forward through another of our companies for £15,000 plus expenses and an exorbitant hourly rate if extra work was needed.
This proposal was accepted.
So we did the work: we ran all the tests we had specified, rolled the system past Y2K, repeated the tests, then… did a full refresh of the O/S, Oracle and the app and recovered a full backup from before the initial tests. We were delayed by 24 hours as central IT screwed up the full Oracle restore, so we got to charge the exorbitant hourly rate.
We handed the test results pack to the central IT team and their one question was "Why had we refreshed the O/S and re-installed Oracle?". Well, we said, how do you know that going past Y2K had not set some internal variables within the O/S or the database that just setting back the system clock would not fix? The O/S is a complex thing.
The head of central IT looked ever so worried. No one had mentioned that before. And they had spent a lot on external Y2K testing consultancy…

Isn’t business odd at times?

Using AWR data for OS information December 22, 2009

Posted by mwidlake in performance.
Tags: ,
5 comments

This is a demo of using AWR data to get some OS performance information.

This posting was actually a comment by (and is the work of) Bernard Polarski on a recent post about command_types I did a week or two back. I've lifted it into a separate posting as (a) I thought it might be useful to people and it would not get as much exposure as a comment as it would as a post, (b) I want to try and lay it out better for Bernard (WordPress likes to garble code in comments, which it did to Bernard's efforts) and (c) I strongly believe in nicking other people's code, if they are happy to put it "out there". Why re-code what someone has already done and is happy to share :-)

I tried the code (I added the SYS. ownership on tables, so it will work even if you can’t log in as sys but have access to the objects) and it worked fine for me.

I should just add, use of AWR data is covered by an Oracle licence, the Diagnostic Pack. If you do not have a licence for it, Oracle deem the WRH$ views as not yours to view, even though they are on your box, in the database you paid for, containing data gathered by your CPUs by code that is damned tricky to turn off and is on by default. Sheesh.
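
{An aside of mine, not part of Bernard's demo: if you are on 11g, the database at least records what it thinks you are allowed to use in an initialization parameter, so you can check what the instance is set to permit. The parameter does not exist in 10g:}

-- 11g onwards only: returns NONE, DIAGNOSTIC or DIAGNOSTIC+TUNING
select value
from   v$parameter
where  name = 'control_management_pack_access';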

Any mistakes or typos, blame me not Bernard. I took the liberty of tweaking some of the English; hope that is OK, Bernard.

*************************************
Okay, a challenge: let me try to surprise you with something improbable. Some months ago, while investigating an unusual load on a Linux box, the unix sysadmin told us he could not go back more than 2 weeks with 'sar' history. Since Oracle also gathers system stats, I quickly enriched smenu with an Oracle sar.
The example below gives the sar stats for the last 10 snaps; it is stored in shortcut sts and called with param '-sar'.


/tmp: sts -sar 10

+ sqlplus -s '/ as sysdba'
+
set feed off verify off lines 190 pages 66
col inst  new_value inst noprint;
col id1      head 'Idle time(%)'    justify c
col usr1     head 'User time(%)'    justify c
col sys1     head 'Sys time(%)'     justify c
col io1      head 'Io Wait time(%)' justify c
col nice1    head 'Nice time (%)'   justify c
col snap_len head 'Interval| (Secs)' justify c
col num_cpus   new_value p_num_cpus  head 'Number of CPU';
col a1         new_value secs noprint;
col SNAP_BEGIN format a20 head 'Snap begin' justify c
col SNAP_END   format a20 head 'Snap end'   justify c

 select instance_number inst from v$instance;
 SELECT value num_cpus 
 FROM v$osstat WHERE stat_name = 'NUM_CPUS';
 prompt
 prompt Negative values correspond to shutdowns:
 prompt
select  snap_id, snap_len, round(id1 /snap_len *100,2) id1,
                       round(usr1 /snap_len *100,2) usr1,
                       round(sys1 /snap_len *100,2) sys1,
                       round(io1 /snap_len *100,2) io1,
                       round(nice1 /snap_len *100,2) nice1 
                     , snap_begin, snap_end
from (
     select  snap_id,  id1, usr1,sys1, io1, nice1
           , snap_begin, snap_end ,
             round( extract( day from diffs) *24*60*60+
                    extract( hour from diffs) *60*60+
                    extract( minute from diffs )* 60 +
                    extract( second from diffs )) snap_len 
             -- above is the exact length of the snapshot in seconds
     from ( select /*+ at this stage, each row shows the cumulative value.
                       r1    7500  8600
                       r2    7300  8300
                       r3    7200  8110
                    we use [max(row) - lag(row)] to have the difference 
                    between [row and row-1], to obtain differential values:
                       r1    200   300
                       r2    100   190
                       r3    0       0
                    */
        a.snap_id,
        (max(id1)    - lag( max(id1))   over (order by a.snap_id))/100        id1 ,
        (max(usr1)   - lag( max(usr1))  over (order by a.snap_id))/100        usr1,
        ( max(sys1)  - lag( max(sys1))  over (order by a.snap_id))/100        sys1,
        ( max(io1)   - lag( max(io1))   over (order by a.snap_id))/100        io1,
        ( max(nice1) - lag( max(nice1)) over (order by a.snap_id))/100        nice1,
          -- for later display
          max(to_char(BEGIN_INTERVAL_TIME,' YYYY-MM-DD HH24:mi:ss'))          snap_begin,
          -- for later display
          max(to_char(END_INTERVAL_TIME,' YYYY-MM-DD HH24:mi:ss'))            snap_end,
          -- exact len of snap used for percentage calculation
        ( max(END_INTERVAL_TIME)-max(BEGIN_INTERVAL_TIME))                    diffs
        from ( /*+  perform a pivot table so that the 5 values selected appear 
                    on one line. The CASE distributes col a.value among 5 new 
                    columns, but creates a row for each. We will use the group 
                    by (snap_id) to condense the 5 rows into one. If you don't 
                    see how this works, just remove the group by and max function, 
                    then re-add it and you will see the use of max(). 
                    Here is what you will see:
        Raw data :   1000     2222
                     1000     3333
                     1000     4444
                     1000     5555
                     1000     6666
        The SELECT CASE populates our inline view with this structure:
                       ID    IDLE    USER   SYS   IOWAIT   NICE
                     1000    2222
                     1000             3333
                     1000                   4444
                     1000                          5555
                     1000                                  6666
         the group by(1000) condenses the rows in one:
                       ID    IDLE    USER   SYS   IOWAIT  NICE
                      1000   2222    3333   4444   5555   6666
               */
               select a.snap_id,
                  case b.STAT_NAME
                       when 'IDLE_TIME' then a.value / &p_num_cpus
                   end  id1,
                  case b.STAT_NAME
                       when 'USER_TIME' then a.value / &p_num_cpus
                  end usr1 ,
                  case b.STAT_NAME
                       when 'SYS_TIME' then  a.value / &p_num_cpus
                  end sys1 ,
                  case b.STAT_NAME
                       when 'IOWAIT_TIME' then  a.value / &p_num_cpus
                  end io1,
                  case b.STAT_NAME
                       when 'NICE_TIME' then  a.value / &p_num_cpus
                  end nice1
                  from  sys.WRH$_OSSTAT a,  sys.WRH$_OSSTAT_NAME b
                       where
                            a.dbid      = b.dbid       and
                            a.STAT_ID   = b.stat_id    and
                            instance_number = &inst    and
                            b.stat_name in ('IDLE_TIME','USER_TIME','SYS_TIME','IOWAIT_TIME','NICE_TIME') and
                            a.snap_id > (( select max(snap_id) from  sys.WRH$_OSSTAT) - 10  -1 )
                   order by 1 desc
              ) a,  sys.wrm$_snapshot s
         where  a.snap_id = s.snap_id
         group by a.snap_id
         order by snap_id desc
        )
   )where rownum < (10+1);





Number of CPU
-------------
            2

Negative values correspond to shutdowns:


            Interval
   SNAP_ID   (Secs)   Idle time(%) User time(%) Sys time(%) Io Wait time(%) Nice time (%)      Snap begin            Snap end
---------- ---------- ------------ ------------ ----------- --------------- ------------- -------------------- --------------------
      2212       3603        87.07         9.43        2.84           16.19             0  2009-12-15 14:00:19  2009-12-15 15:00:21
      2211       3613        90.15         7.31        2.14           18.97             0  2009-12-15 13:00:06  2009-12-15 14:00:19
      2210       3593        92.58         5.91         1.4            7.91             0  2009-12-15 12:00:13  2009-12-15 13:00:06
      2209       3574        91.24         6.78        1.73            9.37             0  2009-12-15 11:00:38  2009-12-15 12:00:13
      2208       3620        90.04          7.8        1.96           11.81             0  2009-12-15 10:00:19  2009-12-15 11:00:38
      2207       3594         89.8         6.89        2.37           12.99             0  2009-12-15 09:00:25  2009-12-15 10:00:19
      2206       3611        91.75         5.81        1.87            8.29             0  2009-12-15 08:00:14  2009-12-15 09:00:25
      2205       3599        92.38         5.63        1.68            8.57             0  2009-12-15 07:00:15  2009-12-15 08:00:14
      2204       3605        91.72         6.51        1.58           10.79             0  2009-12-15 06:00:11  2009-12-15 07:00:15
      2203       3601        86.36        10.35        2.94           10.62             0  2009-12-15 05:00:10  2009-12-15 06:00:11
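
{A quick aside from me rather than Bernard: the OS-side check this mimics, where the sar history does still exist, would be something like the below. The file path assumes the usual Linux sysstat layout and the 15th of the month:}

/tmp: sar -u -f /var/log/sa/sa15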

Ensuring Correlated Partition Exclusion #2 December 20, 2009

Posted by mwidlake in performance, Uncategorized, VLDB.
Tags: , ,
2 comments

<Previous Post

A few days ago {oh dear, it is now two weeks!} I showed how you could better control Correlated Partition Exclusion by creating a lookup table that showed, for each partition, the minimum and maximum ID (the partition key) and the minimum and maximum CRE_DATETIME (what you want to limit your query on). Using this range table in the way I described, you did not have to take an educated guess as to what range of IDs you used to include the partitions for the date range you were interested in, you could check the table.

But this only worked reliably where the ID and CRE_DATETIME increase directly in proportion to each other. It was a special case.

What about the more normal case, where there is a correlation between the two columns but it is not a perfect correlation? This would happen, for example, if the CRE_DATETIME is entered from another system, or from paper records, where the order of the records is not enforced. So some records from today get put into the system before some from yesterday. Or the correlation may be even looser than this, e.g. you are looking at orders for new customers. You "know" there are going to be no orders for these customers from 5 years ago, but you are not sure how far back in the orders table you should go to find what you want.

You can still use the lookup table method. The lookup table in effect becomes a meta-index – an index of the segments where you will find data but not actually the rows.

To demonstrate this, I created a table where the ID and CRE_DATETIME increase in order:

-- create a partitioned table test_p4
create table test_p4
(id           number(10) not null
,cre_datetime date not null
,status        number(1) not null
,num_1         number(4) not null -- random 20
,num_2         number(4) -- random 500
,num_3         number(5) -- cycle smoothly
,num_4         number(5) -- Random 10000
,vc_1          varchar2(10)
,vc_2          varchar2(10)
,vc_pad        varchar2(2000))
tablespace users
partition by range (id)
(partition id_01k values less than (1000)
tablespace users
,partition id_02k values less than (2000)
tablespace users
...
,partition id_45k values less than (45000)
tablespace users
,partition id_max values less than (maxvalue)
tablespace users
)
--
-- local partitioned indexes on the table
 IND_NAME           TAB_NAME           PSN       COL_NAME
 ------------------ ------------------ --------- ------------
 TP4_CRE_DT         TEST_P4            1         CRE_DATETIME

 TP4_PK             TEST_P4            1         ID
--
-- populate the table with data
insert into test_p4(id,cre_datetime,status,num_1,num_2,num_3,num_4
                  ,vc_1,vc_2,vc_pad)
select rownum
,to_date('01-JUL-2009','DD-MON-YYYY')+(rownum/360)
,decode(mod(rownum,100),0,1
              ,0)
,trunc(dbms_random.value(1,20)) 
,trunc(dbms_random.value(1,50)) 
,mod(rownum,10)+1
,trunc(dbms_random.value(1,10000))
,dbms_random.string('U',5)
,lpad(chr(mod(rownum,6)+65),5,chr(mod(rownum,6)+65) )
,lpad('A',1000,'A')
from dba_objects
where rownum < 43000
/

I then messed with the data, updating 10% of records and setting the CRE_DATETIME to plus or minus a random amount up to 2 days different, so data in partitions would overlap. I then created a range table in the same way as I did for the previous post.

I ended up with a range table like the below:

MIN_CRE_DATI      MAX_CRE_DATI          MIN_ID     MAX_ID
----------------- ----------------- ---------- ----------
29-JUN-2009 20:35 05-JUL-2009 14:24          1        999
02-JUL-2009 04:50 07-JUL-2009 12:56       1000       1999
05-JUL-2009 02:34 10-JUL-2009 08:31       2000       2999
08-JUL-2009 05:23 13-JUL-2009 03:32       3000       3999
11-JUL-2009 08:07 15-JUL-2009 18:41       4000       4999
14-JUL-2009 00:14 18-JUL-2009 18:27       5000       5999
16-JUL-2009 08:58 21-JUL-2009 14:18       6000       6999
19-JUL-2009 01:28 24-JUL-2009 11:21       7000       7999
22-JUL-2009 08:02 27-JUL-2009 07:01       8000       8999
24-JUL-2009 22:06 30-JUL-2009 05:37       9000       9999
28-JUL-2009 04:59 01-AUG-2009 10:57      10000      10999
...
24-SEP-2009 01:52 28-SEP-2009 18:36      31000      31999
26-SEP-2009 16:49 01-OCT-2009 01:26      32000      32999
29-SEP-2009 13:20 04-OCT-2009 13:43      33000      33999
02-OCT-2009 08:40 07-OCT-2009 10:11      34000      34999
05-OCT-2009 04:29 10-OCT-2009 04:09      35000      35999
08-OCT-2009 02:04 12-OCT-2009 17:34      36000      36999
10-OCT-2009 20:03 15-OCT-2009 08:39      37000      37999
13-OCT-2009 15:09 18-OCT-2009 12:01      38000      38999
16-OCT-2009 06:49 21-OCT-2009 03:53      39000      39999
18-OCT-2009 20:16 23-OCT-2009 21:01      40000      40999
21-OCT-2009 12:10 26-OCT-2009 07:13      41000      41999
25-OCT-2009 01:29 29-OCT-2009 20:56      42000      42999

You can see that the MIN_CRE_DATI-MAX_CRE_DATI ranges overlap from record to record (partition to partition) but generally increase.

How do you find the start and end of the ID range to cover a date period you are interested in? I always have to sit down with pen and paper to work this out. It is the classic "overlapping ranges" check, but my brain cannot hold on to it. So here goes. I want all records between the 1st of October and the 6th of October, this year.

You want to find the partition with the lowest ID range which has a record that falls into the date range you want, i.e. where the maximum CRE_DATETIME is as late as or later than the start of the range you are interested in. If the maximum CRE_DATETIME is less than the date range you are interested in, no records for that range will be in the partition.

Here is the code to find the lowest partition:

select min(min_id)
from tp4_range 
where MAX_CRE_DATI>=to_date('01-OCT-2009','DD-MON-YYYY')
MIN(MIN_ID)
-----------
      32000

Execution Plan
----------------------------------------------------------
| Id  | Operation                    | Name           | Rows  | Bytes | Cost 
----------------------------------------------------------
|   0 | SELECT STATEMENT             |                |     1 |    12 |     2
|   1 |  SORT AGGREGATE              |                |     1 |    12 |         
|   2 |   TABLE ACCESS BY INDEX ROWID| TP4_RANGE      |    12 |   144 |     2
|*  3 |    INDEX RANGE SCAN          | TP4R_MACD_MIAD |    12 |       |     1
----------------------------------------------------------

To cover the whole of the partition of interest you need to look for records with an ID greater than the minimum ID in that partition, thus the selection of the min(min_id).

Similarly, you only want partitions where the minimum CRE_DATETIME is before the end of the date range you want. If all records in the partition have a CRE_DATETIME beyond the range, you are not interested in it.

select max(max_id)
             from tp4_range where MIN_CRE_DATI<=to_date('06-OCT-2009','DD-MON-YYYY')
MAX(MAX_ID)
-----------
      35999

Execution Plan
----------------------------------------------------------
| Id  | Operation                    | Name           | Rows  | Bytes | Cost 
----------------------------------------------------------
|   0 | SELECT STATEMENT             |                |     1 |    13 |     2
|   1 |  SORT AGGREGATE              |                |     1 |    13 |         
|   2 |   TABLE ACCESS BY INDEX ROWID| TP4_RANGE      |    37 |   481 |     2
|*  3 |    INDEX RANGE SCAN          | TP4R_MICD_MIID |    37 |       |     1
----------------------------------------------------------

Then you put these sub-queries into the query that selects the data you want. Below I show the "original" code, which does not use the partition range table, and then the version using it, to prove the same number of records come back and to see the plan and statistics change:

select count(*) from test_p4
where cre_datetime between to_date('01-OCT-2009','DD-MON-YYYY')
                   and     to_date('06-OCT-2009','DD-MON-YYYY')
and num_2 = 5

  COUNT(*)
----------
        48

Execution Plan
-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |            |     1 |    11 |   356   (1)| 00:00:02 |       |       |
|   1 |  SORT AGGREGATE                     |            |     1 |    11 |            |          |       |       |
|   2 |   PARTITION RANGE ALL               |            |    36 |   396 |   356   (1)| 00:00:02 |     1 |    46 |
|*  3 |    TABLE ACCESS BY LOCAL INDEX ROWID| TEST_P4    |    36 |   396 |   356   (1)| 00:00:02 |     1 |    46 |
|*  4 |     INDEX RANGE SCAN                | TP4_CRE_DT |  1788 |       |    52   (0)| 00:00:01 |     1 |    46 |
-------------------------------------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        676  consistent gets
          0  physical reads

Note the 676 consistent gets and the Pstart/Pstop of 1-46

select count(*) from test_p4
where cre_datetime between to_date('01-OCT-2009','DD-MON-YYYY')
                   and     to_date('06-OCT-2009','DD-MON-YYYY')
and id> (select min(min_id)
             from tp4_range where MAX_CRE_DATI
                 >=to_date('01-OCT-2009','DD-MON-YYYY'))
and id < (select max(max_id)
             from tp4_range where MIN_CRE_DATI
                <=to_date('06-OCT-2009','DD-MON-YYYY'))
and num_2 = 5

  COUNT(*)
----------
        48

Execution Plan
-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name           | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                |     1 |    16 |    66   (5)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE                      |                |     1 |    16 |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR           |                |     1 |    16 |    62   (5)| 00:00:01 |   KEY |   KEY |
|*  3 |    TABLE ACCESS BY LOCAL INDEX ROWID |" TEST_P4        |     1 |    16 |    62   (5)| 00:00:01 |   KEY |   KEY |
|   4 |     BITMAP CONVERSION TO ROWIDS      |                |       |       |            |          |       |       |
|   5 |      BITMAP AND                      |                |       |       |            |          |       |       |
|   6 |       BITMAP CONVERSION FROM ROWIDS  |                |       |       |            |          |       |       |
|   7 |        SORT ORDER BY                 |                |       |       |            |          |       |       |
|*  8 |         INDEX RANGE SCAN             | TP4_PK         |  1788 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
|   9 |          SORT AGGREGATE              |                |     1 |    12 |            |          |       |       |
|  10 |           TABLE ACCESS BY INDEX ROWID| TP4_RANGE      |    12 |   144 |     2   (0)| 00:00:01 |       |       |
|* 11 |            INDEX RANGE SCAN          | TP4R_MACD_MIAD |    12 |       |     1   (0)| 00:00:01 |       |       |
|  12 |          SORT AGGREGATE              |                |     1 |    13 |            |          |       |       |
|  13 |           TABLE ACCESS BY INDEX ROWID| TP4_RANGE      |    37 |   481 |     2   (0)| 00:00:01 |       |       |
|* 14 |            INDEX RANGE SCAN          | TP4R_MICD_MIID |    37 |       |     1   (0)| 00:00:01 |       |       |
|  15 |       BITMAP CONVERSION FROM ROWIDS  |                |       |       |            |          |       |       |
|  16 |        SORT ORDER BY                 |                |       |       |            |          |       |       |
|* 17 |         INDEX RANGE SCAN             | TP4_CRE_DT     |  1788 |       |    52   (0)| 00:00:01 |   KEY |   KEY |
-------------------------------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        350  consistent gets
          0  physical reads

The plan is admittedly more complex and the SQL can be tricky to understand if you are not used to code that looks for overlaps of ranges. But consistent gets are down to 350 and the Pstart/Pstop values are KEY-KEY. The CBO cannot tell you WHAT the ranges will be when the code is parsed {I think it should check but, in 10.2.0.3 at least, the CBO is not that smart at parse time}, but it knows there will be a start and stop.

I am using tiny partitions for my example and only 45 of them for the table. When the partitions are for millions of rows and there are a couple of thousand of them, excluding partitions in a manner you can rely on is a powerful tool. Which leads onto a final word of caution.

Nothing is enforcing that the range table is maintained.

You could do things with triggers or regular re-calculation of the ranges table but this will be something you need to consider if you use ranges tables to help partition exclusion. Flipping tablespaces to read-only can help the worry go away though… :-)
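
{As a sketch of the trigger approach, my suggestion rather than anything from the example above, using the table names from this post: a row-level trigger that widens the matching range row whenever a record is inserted or its CRE_DATETIME is changed. Note it only ever widens ranges, never shrinks them on delete, and it serializes DML against the range row:}

create or replace trigger tp4_range_maint
after insert or update of cre_datetime on test_p4
for each row
begin
  -- widen the range row for whichever ID range holds this record
  update tp4_range
  set    min_cre_dati = least(min_cre_dati, :new.cre_datetime)
        ,max_cre_dati = greatest(max_cre_dati, :new.cre_datetime)
  where  :new.id between min_id and max_id;
end;
/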

As an example of the issue of needing to maintain the ranges table, and also a demonstration that the ranges table does work correctly if maintained, I'll update a record well outside of the "expected window", show that it does not appear in my range-check controlled code, then update the ranges table and try again.

update test_p4
set cre_datetime=to_date('02-OCT-2009 11:15','DD-MON-YYYY HH24:MI')
,num_2=5
where id=22100
/

select count(*) from test_p4
where cre_datetime between to_date('01-OCT-2009','DD-MON-YYYY')
                   and     to_date('06-OCT-2009','DD-MON-YYYY')
and id> (select min(min_id)
             from tp4_range where MAX_CRE_DATI>=to_date('01-OCT-2009','DD-MON-YYYY'))
and id < (select max(max_id)
             from tp4_range where MIN_CRE_DATI<=to_date('06-OCT-2009','DD-MON-YYYY'))
and num_2 = 5

  COUNT(*)
----------
        48
-- the updated record is missed; the ranges table still thinks the
-- partition holding ID 22100 has no October data

-- so maintain the ranges table, widening the row covering IDs 22000-22999
update tp4_range
set max_cre_dati=to_date('02-OCT-2009 11:15','DD-MON-YYYY HH24:MI')
where min_id=22000
/

-- and repeat the range-check controlled select from above

  COUNT(*)
----------
        49
-- success

I wonder if I will do the next posting on this topic in less than two weeks!

The Evenings are Drawing Out December 14, 2009

Posted by mwidlake in Perceptions.
Tags: ,
1 comment so far

Today, sunset was later than yesterday. In London it was 15:51 and 50ish seconds. Tomorrow, the sun will resolutely stay in the sky until 15:52 {and a few seconds}. The days are drawing out at last.

English Sunset by Angie Tianshi

But it is not the shortest day of the year {I should say daytime really, all days are the same length, give or take odd leap-seconds}.

What is, I hear you all cry?

The shortest day is December 21st

{or December 22nd, depending on how long ago the last leap-year was}. And you would be right, the date with the shortest period of daylight is the 21st/22nd of December. And everyone knows that the shortest day will also be the day when the sun sets earliest; it makes sense.

Except it does not quite work like that.

We probably all remember from our school days that the earth goes around the sun at an angle from the "vertical", if vertical is taken as 90 degrees to the plane of the orbit the planet traces around the sun. Think of it like an old man sitting in a rocking chair. He is rocked back in his chair, head pointing back away from the sun in the Northern Hemisphere's winter and feet pointing slightly towards the sun. Come Midsummer's Day, he has rocked forward, head pointing towards the sun for the Northern Hemisphere summer. One rocking motion takes a year.

Well, old earth is also slumped slightly to one side in his chair. This results in a slight skew between when sunset starts drawing out and when sunrise starts drawing in. Sunrise will continue to get later until we hit the New Year (Western New Year, not Chinese :-) ). It just so happens that, until the 21st, sunset gets later at a slower rate than sunrise does, so the days keep shortening until we hit the Shortest Day.

To be fair, I missed the boat slightly; the point at which the evenings started to stretch out was actually the 13th or 14th of December, but I did not have time to blog until today.

The tables below help make this situation clearer; I include one for the UK and one for Australia. The jolly nice site they link to allows you to change the location to wherever you are in the world (well, the nearest capital).

Table of sunrise/sunset times for London

Table of sunrise/sunset for Sydney, Australia

What has this to do with Oracle, Performance and VLDBs? Nothing much, except to highlight that the obvious is not always correct, just as it is with Databases and IT in general.

I'll finish with a sunset picture from Oz. Ahhhh.

Outback sunset from ospoz.wordpress.com

Friday Philosophy – What Was My Job Again? December 11, 2009

Posted by mwidlake in Management.
Tags: , ,
2 comments

How much control do we have over what job we are in?

I know I have mentioned this before, but I did not choose to work with Oracle – I fell into it by accident. I found myself in a job which I was no longer enjoying, for a boss who had issues with me and I had issues with a salary (mine, of course). So I decided to apply for a job with a company a friend of mine worked for:

For those of you who are not UK-born or have seen fewer than three decades go by: in 1990 "Oracle" was a teletext company that provided information on your TV screen for the TV company Channel 4. Basic news, entertainment, sports etc via chunky text and chunkier graphics thrown up in glorious low-res.

I honestly thought I was going for an interview with that company, as opposed to the other “Oracle”, some database company with the same name which went on to pretty much conquer the corporate database and applications world. That was a pretty lucky break for me and I made a conscious decision to stick with this database stuff.

Many years later I was on a contract doing database performance tuning. Someone, a manager, came and asked me about how they could save space, not in the database world but in the Unix filesystem world. So I made the suggestion that they check out the man page on compress. We compressed some big files and he went away happy, leaving me to get on with my database stuff. My big mistake was, when he came back a week later and asked how he could get the compressed data out, I promptly showed him. I revealed too much knowledge.

I came in to work the next Monday morning and my desk had gone. There was an oblong of dust and hula-hoop crumbs, nothing else. Even my pile of "documents to get back to" from under the desk was gone. Had I been sacked? No, I had been put in the Unix system administration team. My desk had been picked up and physically moved across the room to the Unix Corral, along with everything on, under or next to it.

No one had asked me, it had not been mentioned to me at all; the managers had just realised on the Friday that half the existing team (contractors) had left at the end of the week and no replacements had been found. That devious manager I had helped had told the others I was a whizz at Unix and so my fate was sealed. I was not a whizz at Unix, I was barely competent at basic shell programming. But I learnt a bit before deciding I wanted to stick at the database stuff and went off to a contract doing that again. I still kind of wish I'd done the Unix a little longer though.

The final shift I’ll mention, and is probably more commonly echoed in other people’s experience, is coming in one day to find you are a manager. This had happened to me small-scale a couple of times, taking on a contract where I ended up in charge of a team, but in this particular case I was a permanent employee managing a team of 4 DBAs and my boss left. Within the week I found that I was being treated as the manager of 5 or 6 teams, totalling about 30 people. More by them than by upper management, but upper management cottoned on and asked me to do the job. Long story cut short, I resisted the move upwards but it happened anyway. Not, at that time, what I wanted at all.

I've told the above story a few times when doing presentations on management-related topics and many people, a surprising number to me, have said afterwards that the same sort of thing happened to them. I am also now chair of the Management and Infrastructure Special Interest Group of the UK Oracle User Group. That SIG is full of people with a similar story.

What is the point of this particular Friday Philosophy? Well, these experiences have made me realise that a lot of people are probably doing jobs they just found themselves in, or in the case of managers, just got pushed into.

If you did not choose your job, you are unlikely to be a good fit, especially to start with.

I'm sure most of us have experienced this and, looking back, can see that initially we lacked the skills, the background, even the inclination for the role. But we either got on, moved on, or became morose and bitter.

This also means that we are all probably encountering a lot of people in that exact situation, all the time: people doing a job they just found themselves in. So, if someone seems not to be doing a job as well as they could, check how long they have been doing it. If it is a recent change, remember your own experience and cut them some slack. You would have appreciated it when you were them.

It also explains Morose and Bitter Geoff who manages Accounts too, doesn’t it?

COMMAND_TYPE Values December 10, 2009

Posted by mwidlake in internals.
Tags: , ,
18 comments

A follow-up post on finding most of the COMMAND_TYPES in the data dictionary is here, with thanks to Christian Antognini, whose comment led me in that direction.

Spoiler – See end for getting a full list of COMMAND_TYPE values.

If you are in the habit of looking at V$SQLAREA and V$SQL to see what is going on in the database {for those of us who occasionally step out of the "GUI DBA Tool" environment} you will see a lot of stuff in there. I tend to look for the most demanding SQL, so I am not interested in the PL/SQL itself, I just want to see the SQL executed via it.

To do this, you can filter out those entries in V$SQL/V$SQLAREA with "COMMAND_TYPE=47".

This COMMAND_TYPE is useful. 3=SELECT, the most common thing you probably see and look at; 6=UPDATE; 7=DELETE; 50=EXPLAIN.

If you google/bing/whatever COMMAND_TYPE you get many hits that list some of the commands, all looking very similar. But they are very partial lists.

So I decided to amalgamate these partial lists, verify which COMMAND_TYPE links to which command in the databases I have access to, and publish this fuller, verified list. I quickly found some commands do not stay in the SGA after being issued, so I could not confirm them {"drop index" being one}.

I got this far:

-- mdw 10/12/09
-- check for cmd types not seen before, as lists on web are not complete
-- * = I have verified
col sql_txt form a60
select command_type,sql_id,substr(sql_text,1,60) sql_txt
from gv$sqlarea
where command_type not in (
 1 --  create table  *
,2 --  INSERT        *
,3 --  SELECT        *
,6 --  UPDATE        *
,7 --  DELETE        *
,9 --  create index  *
,11 -- ALTER INDEX   *
,26 -- LOCK table    *
,42 -- ALTER_SESSION (NOT ddl)
--two postings suggest 42 is alter session
,44 -- COMMIT
,45 -- rollback
,46 -- savepoint
,47 -- PL/SQL BLOCK' or begin/declare *
,48 -- set transaction   *
,50 -- explain           *
,62 -- analyze table     *
,90 -- set constraints   *
,170 -- call             *
,189 -- merge            *
)
and rownum < 20
/

Please feel free to add to it.

Or save yourself the bother and check out the list provided in the Oracle documentation. Not under the descriptions for V$SQL or V$SQLAREA, which would have been the obvious place, Mr Larry Ellison, thank you, but under V$SESSION. Further, the fact that in the V$SESSION view the column is called just COMMAND and not COMMAND_TYPE does not assist in locating this information (don't google COMMAND and ORACLE, you get many millions of hits…). Just click the below.

This is the full table of values and meanings.

(This is the listing for 10.2 and 11.1 is very similar).

But it does not include 189 – MERGE, so that is one up for practical testing over just reading the manual :-)
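
{If you would rather pull the names out of the database itself, the data dictionary lists most of the command codes, which is the direction Christian's comment pointed me in for the follow-up post linked at the top. Something along these lines should do it:}

-- AUDIT_ACTIONS maps most command codes to their names
select action, name
from   audit_actions
order by action;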

Back to the day job…

Scalar Functions Not Behaving as Scalar Functions December 8, 2009

Posted by mwidlake in performance, Uncategorized.
Tags: ,
3 comments

My thanks go to Graeme Hobbs, who I currently have the pleasure of working with, for explaining this to me.

This is all about scalar functions that do not act like scalar functions.

Where I am spending much of my time at the moment, we have an issue with a very simple SQL statement. It takes on average 2 buffer gets per execution. How could that be an issue? Well, it is being executed a few thousand times an hour. By "a few thousand" read 50,000 times. Even a simple statement being executed 15 times a second is not good if it is not needed. Especially as, in this case, there is a context switch per execution.

I got a look at the code and it is doing something like this:

HAVING SUM( col.vol_gbp ) >=
my_user.pkg_application_settings.get_number_value( 'rp157_eric' )

I quickly batted an email to my esteemed colleague Mr Hobbs saying “but this is a scalar. It will be executed once for the statement and the value passed into each execution of the whole query, it can’t be the source of the 50,000 executions an hour”.

Oh woe, for my ignorance has undone me. But it’s OK, Graeme not only took time to explain, but knocked up the following nice demo of WHY this is a problem. And said I could put it on my Blog. What a nice chap.

From Graeme:
******************************************************

Consider package:

CREATE OR REPLACE PACKAGE BODY ghtest
IS
   l_called_count NUMBER := 0;

   -- Function and procedure implementations
   FUNCTION f_get_called_count
      RETURN NUMBER
   IS
   BEGIN
      RETURN( l_called_count );
   END;

   FUNCTION f_call
      RETURN NUMBER
   IS
   BEGIN
      l_called_count := l_called_count + 1;
      RETURN 1;
   END;

   FUNCTION f_reset
      RETURN NUMBER
   IS
   BEGIN
      l_called_count := 0;
      RETURN 1;
   END;
END ghtest;
/
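
{Note from me: only the package body is shown; to compile it you also need the specification. A minimal one matching the body would be:}

CREATE OR REPLACE PACKAGE ghtest
IS
   FUNCTION f_get_called_count RETURN NUMBER;
   FUNCTION f_call RETURN NUMBER;
   FUNCTION f_reset RETURN NUMBER;
END ghtest;
/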

-- nothing up my sleeve
SELECT ghtest.f_get_called_count
FROM DUAL;

F_GET_CALLED_COUNT
------------------
                 0
ie no calls made yet

SQL> SELECT   id
           , COUNT( * )
        FROM ( SELECT *
                FROM MY_USR.tbl_SOME_DATA
               WHERE ROWNUM < 10 )
    GROUP BY id
      HAVING COUNT( * ) > ghtest.f_call;

        ID   COUNT(*)
---------- ----------
1624361594          3

SELECT ghtest.f_get_called_count
FROM DUAL;

F_GET_CALLED_COUNT
------------------
7

NOTE the counter has gone up to 7 as there were 7 IDs found by that code: 6 IDs with one record each and one ID with 3.

SELECT ghtest.f_reset
FROM DUAL;

   F_RESET
----------
         1

That just blanks the counter (see the package).

-- nothing up my sleeve
SELECT ghtest.f_get_called_count
FROM DUAL;

F_GET_CALLED_COUNT
------------------
0

SELECT   id
          , COUNT( * )
       FROM ( SELECT *
               FROM my_usr.tbl_some_data
              WHERE ROWNUM < 10 )
   GROUP BY id
     HAVING COUNT( * ) > (SELECT ghtest.f_call FROM dual);

ID     COUNT(*)
---------- ----------
1624361594 3

SELECT ghtest.f_get_called_count
FROM DUAL;

F_GET_CALLED_COUNT
------------------
1

You can see that the scalar subquery
HAVING COUNT( * ) > (SELECT ghtest.f_call FROM dual);
calls f_call once.

Whereas
HAVING COUNT( * ) > ghtest.f_call;
results in it being called 7 times, the number of rows returned by the group by.

That is precisely the problem.
******************************************************
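
So, presumably, the fix for the original statement, though I have not yet run this against the real code, is to wrap the function call in a scalar subquery so it is executed once per statement:

HAVING SUM( col.vol_gbp ) >=
       ( SELECT my_user.pkg_application_settings.get_number_value( 'rp157_eric' )
           FROM dual )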

Apparently our developers should know this, as it is a “known issue”. But not known to me until now (unless I knew it and forgot it)

So, hats off to Graeme.

{apologies, wordpress utterly screwed the layout/truncated text when I pasted in the example, have fixed now}

Ensuring Correlated Partition Exclusion December 7, 2009

Posted by mwidlake in performance, Uncategorized, VLDB.
Tags: , ,
9 comments

<Previous Post…Next Post >
I’ve posted a couple of times recently about a tuning method with partition exclusion where you infer a relationship between the value you want to limit a query on and the partition key. It takes a while to explain the theory so I am going to give it a name, in the vain hope it catches on {forgive me if someone has already done this – in fact, just flame me with pointers if you already know a name for this}. At the very least, for my own postings, I can use this name from now on and link back to a single posting explaining it.

I’m going to call it Correlated Partition Exclusion. In essence, you have partitioned the table on a key, in my case the primary key ID, which is an ascending numeric probably sourced from a sequence.
You have a second column, in my case CRE_DATETIME, which increases in line with the PK. If you limit your query on the CRE_DATETIME, partition exclusion is not possible as there is no guarantee which CRE_DATETIME values appear in which partition. But, as a human, you understand that if you create 10,000 records a day and you want to look at the last week's data you can use:

WHERE ID > MAX_ID-(7*10000)

to exclude partitions with an id more than 7 days worth ago.

So, you have your Correlated Partition Exclusion.

How can you be sure that going back 70,000 IDs is going to safely cover one week of data and how can you maximise your efficiency of including only partitions that cover the date range? {note, I am using a numeric ID as my partition range key and a datetime as my correlated column, this principle works just as well if you partition on datetime and want to correlate to a (generally) ascending numeric key}

Here is my test table :-

create table test_p3
(id number(10) not null
,cre_datetime date not null
,status number(1) not null
,num_1 number(4) not null -- random 20
,num_2 number(4) -- random 500
,num_3 number(5) -- cycle smoothly
,num_4 number(5) -- Random 10000
,vc_1 varchar2(10)
,vc_2 varchar2(10)
,vc_pad varchar2(2000))
tablespace users
partition by range (id)
(partition id_01k values less than (1000)
tablespace users
,partition id_02k values less than (2000)
tablespace users
,partition id_03k values less than (3000)
tablespace users
,partition id_04k values less than (4000)
tablespace users
...
,partition id_45k values less than (45000)
tablespace users
--
,partition id_max values less than (maxvalue)
tablespace users
)
/
--@ind_cols
 IND_NAME           TAB_NAME           PSN       COL_NAME
 ------------------ ------------------ --------- ------------
 TP3_PK             TEST_P3            1         ID

 TP_CRE_DT          TEST_P3            1         CRE_DATETIME

If I want to look for all records between two dates I could use code like the below (based on that suggested by Bernard Polarski; any mistakes are mine).

with get_min_id as
     (select min(id) min_id from test_p3
      where cre_datetime >= TO_DATE('18-OCT-2009','DD-MON-YYYY') )
    ,get_max_id as
     (select max(id) max_id from test_p3
      where cre_datetime <= TO_DATE('20-OCT-2009','DD-MON-YYYY') )
select count(*)
from test_p3
    ,get_min_id
    ,get_max_id
where id > get_min_id.min_id
and   id < get_max_id.max_id
and   cre_datetime between TO_DATE('18-OCT-2009','DD-MON-YYYY')
                   and     TO_DATE('20-OCT-2009','DD-MON-YYYY')
/
  COUNT(*)
----------
       721

1 row selected.

ie find the minimum ID for the start date and the maximum ID for the end date and query between them.

This works fine. Unfortunately, you get a plan like the below

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name             | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                  |     1 |    39 |   887   (4)| 00:00:04 |       |       |
|   1 |  SORT AGGREGATE                      |                  |     1 |    39 |            |          |       |       |
|   2 |   NESTED LOOPS                       |                  |     2 |    78 |   887   (4)| 00:00:04 |       |       |
|   3 |    NESTED LOOPS                      |                  |     1 |    26 |   717   (5)| 00:00:03 |       |       |
|   4 |     VIEW                             |                  |     1 |    13 |   505   (5)| 00:00:02 |       |       |
|*  5 |      FILTER                          |                  |       |       |            |          |       |       |
|   6 |       SORT AGGREGATE                 |                  |     1 |    26 |            |          |       |       |
|*  7 |        VIEW                          | index$_join$_001 | 38393 |   974K|   505   (5)| 00:00:02 |       |       |
|*  8 |         HASH JOIN                    |                  |       |       |            |          |       |       |
|   9 |          PARTITION RANGE ALL         |                  | 38393 |   974K|   418   (6)| 00:00:02 |     1 |    46 |
|* 10 |           INDEX RANGE SCAN           | TP_CRE_DT        | 38393 |   974K|   418   (6)| 00:00:02 |     1 |    46 |
|  11 |          PARTITION RANGE ALL         |                  | 38393 |   974K|   152   (4)| 00:00:01 |     1 |    46 |
|  12 |           INDEX FAST FULL SCAN       | TP3_PK           | 38393 |   974K|   152   (4)| 00:00:01 |     1 |    46 |
|  13 |     VIEW                             |                  |     1 |    13 |   212   (5)| 00:00:01 |       |       |
|  14 |      SORT AGGREGATE                  |                  |     1 |    26 |            |          |       |       |
|* 15 |       VIEW                           | index$_join$_002 |  2972 | 77272 |   212   (5)| 00:00:01 |       |       |
|* 16 |        HASH JOIN                     |                  |       |       |            |          |       |       |
|  17 |         PARTITION RANGE ALL          |                  |  2972 | 77272 |    59   (6)| 00:00:01 |     1 |    46 |
|* 18 |          INDEX RANGE SCAN            | TP_CRE_DT        |  2972 | 77272 |    59   (6)| 00:00:01 |     1 |    46 |
|  19 |         PARTITION RANGE ALL          |                  |  2972 | 77272 |   152   (4)| 00:00:01 |     1 |    46 |
|  20 |          INDEX FAST FULL SCAN        | TP3_PK           |  2972 | 77272 |   152   (4)| 00:00:01 |     1 |    46 |
|  21 |    PARTITION RANGE ITERATOR          |                  |     2 |    26 |   170   (1)| 00:00:01 |   KEY |   KEY |
|* 22 |     TABLE ACCESS BY LOCAL INDEX ROWID| TEST_P3          |     2 |    26 |   170   (1)| 00:00:01 |   KEY |   KEY |
|* 23 |      INDEX RANGE SCAN                | TP_CRE_DT        |   707 |       |    48   (0)| 00:00:01 |   KEY |   KEY |
---------------------------------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        892  consistent gets
          0  physical reads

If you look at the plan, there are two checks on the TP_CRE_DT index that scan all partitions of the index – Pstart/Pstop are 1-46. This is the CBO looking for the partitions where the stated CRE_DATETIME records occur across the whole table. The cost is 892 consistent gets, much of which is the checking of local index partitions that will hold no relevant entries.

Bernard also spotted that the code was considering all partitions and was not as efficient as it could be, but did not know how to get around it except with a global index, which brings its own issues.

One way is to just say "well, I know how many records per day I get so I will fake up the ID limits based on this". The problem, and this is why the CBO cannot make the decision for you, is that there is no guarantee, no fixed rule, saying that CRE_DATETIME will always increment with the ID. In fact, there is nothing stopping you altering a record which has an ID from yesterday to have a CRE_DATETIME from 10 years ago {but forget I said that until tomorrow's post}. Now, in my example the CRE_DATETIME is going to increment with ID. We will use this special case for now; I know it is flawed and will address that flaw {tomorrow}.

So, to ensure you do not miss data, you end up making your ID window pretty large. Say you want all records for the last day; you process 1,500 records a day, so you consider a window covering all samples with an ID within the last 10,000. It will still be more efficient than scanning all partitions and should be safe. Fairly safe.

The way out of this is to have a ranges table. Something that tells you, for each partition, the minimum and maximum CRE_DATETIME and the IDs covered by that range. You can then use that to identify the partitions that cover the date range you are interested in.

Here is an example. I will create a simple table to hold the ranges:

create table tp3_range
(min_cre_dati date
,max_cre_dati date
,min_id number
,max_id number)

Now you have to populate it.
The below code will create the first record for the first partition.

insert into tp3_range
SELECT MIN(CRE_DATETIME)
,MAX(CRE_DATETIME)
,MIN(ID)
,MAX(ID) FROM TEST_P3 PARTITION (ID_01K)
/
Execution Plan
----------------------------------------------------------
--------------------------------------------------------------------------------------------------
| Id  | Operation               | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT        |         |     1 |    12 |   102   (1)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE         |         |     1 |    12 |            |          |       |       |
|   2 |   PARTITION RANGE SINGLE|         |   999 | 11988 |   102   (1)| 00:00:01 |     1 |     1 |
|   3 |    TABLE ACCESS FULL    | TEST_P3 |   999 | 11988 |   102   (1)| 00:00:01 |     1 |     1 |
--------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          1  recursive calls
          3  db block gets
        195  consistent gets
          0  physical reads

As you can see from the plan and cost, this is not very efficient as it has to scan the whole partition. Maybe not a problem if you do this once, but there are indexes on both these columns; can't this be done more efficiently? Yes, if you split up the code into four in-line selects (if you want more detail on it being more performant to do MIN and MAX on their own than in one statement, see this post on the topic):

insert into tp3_range
SELECT (SELECT MIN(CRE_DATETIME) FROM TEST_P3 PARTITION (ID_04K))
,(SELECT MAX(CRE_DATETIME) FROM TEST_P3 PARTITION (ID_04K))
,(SELECT MIN(id ) FROM TEST_P3 PARTITION (ID_04K))
,(SELECT MAX(ID ) FROM TEST_P3 PARTITION (ID_04K))
FROM DUAL

PLAN
---------------------------------------------------------
SORT AGGREGATE
 PARTITION RANGE SINGLE cst:2 rws:1000
  INDEX FULL SCAN (MIN/MAX) TP_CRE_DT cst:2 rws:1000

Statistics
----------------------------------------------------------
          1  recursive calls
          5  db block gets
          9  consistent gets
          0  physical reads

{You may wonder why the Explain Plan section above has a different look. This is because there seems to be a bug in 10.2.0.3 where the autotrace plan for the insert statement comes out wrong, as a FAST DUAL access, so I had to Explain the statement in a different way}

You would run one of the above statements against each partition. Probably, the “Best Practice” way would be to generate a SQL script from a query against DBA_TAB_PARTITIONS.
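
{For example, and this is just a sketch, adjust the owner and table name to suit: spool the output of something like the below and run the spooled file as your insert script.}

select 'insert into tp3_range'
     ||' select (select min(cre_datetime) from test_p3 partition ('||partition_name||'))'
     ||', (select max(cre_datetime) from test_p3 partition ('||partition_name||'))'
     ||', (select min(id) from test_p3 partition ('||partition_name||'))'
     ||', (select max(id) from test_p3 partition ('||partition_name||'))'
     ||' from dual;'
from dba_tab_partitions
where table_owner = user
and   table_name  = 'TEST_P3'
order by partition_position;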

To populate my table I cheated – I just assumed all my partitions are of the same size (1000 records) and used:-
insert into tp3_range
select min(cre_datetime),max(cre_datetime),min(id),max(id)
from test_p3
-- where id between 10000 and 20000
group by trunc(id/1000)
/

Note I left in a commented line. You could run the above against the whole table once and then limit it to just new partitions as you add them. This is a tad risky though; you are relying on the partitioning being perfect and it would not scale to hundreds of very large partitions. You would be better off with the partition-by-partition methods above.

You end up with a table like the below:-

select * from tp3_range order by min_cre_dati


MIN_CRE_DATI      MAX_CRE_DATI          MIN_ID     MAX_ID
----------------- ----------------- ---------- ----------
01-JUL-2009 00:04 03-JUL-2009 18:36          1        999
03-JUL-2009 18:40 06-JUL-2009 13:16       1000       1999
06-JUL-2009 13:20 09-JUL-2009 07:56       2000       2999
09-JUL-2009 08:00 12-JUL-2009 02:36       3000       3999
12-JUL-2009 02:40 14-JUL-2009 21:16       4000       4999
14-JUL-2009 21:20 17-JUL-2009 15:56       5000       5999
17-JUL-2009 16:00 20-JUL-2009 10:36       6000       6999
20-JUL-2009 10:40 23-JUL-2009 05:16       7000       7999
23-JUL-2009 05:20 25-JUL-2009 23:56       8000       8999
26-JUL-2009 00:00 28-JUL-2009 18:36       9000       9999
28-JUL-2009 18:40 31-JUL-2009 13:16      10000      10999
31-JUL-2009 13:20 03-AUG-2009 07:56      11000      11999
...
14-OCT-2009 13:20 17-OCT-2009 07:56      38000      38999
17-OCT-2009 08:00 20-OCT-2009 02:36      39000      39999
20-OCT-2009 02:40 22-OCT-2009 21:16      40000      40999
22-OCT-2009 21:20 25-OCT-2009 15:56      41000      41999
25-OCT-2009 16:00 28-OCT-2009 10:36      42000      42999

Add the two indexes below:
create index tp3r_micd_miid on tp3_range(min_cre_dati,min_id);
create index tp3r_macd_miad on tp3_range(max_cre_dati,max_id);

And now you can find the upper and lower ID bounds for a date range with the following code:

select min_id from tp3_range
where min_cre_dati = (select max(min_cre_dati)
                      from tp3_range
                      where min_cre_dati < TO_DATE('18-OCT-2009','DD-MON-YYYY')
                     )
    MIN_ID
----------
     39000

Execution Plan
---------------------------------------------------------
| Id  | Operation                     | Name           | Rows  | Bytes | Cost 
---------------------------------------------------------
|   0 | SELECT STATEMENT              |                |     1 |    22 |     2
|*  1 |  INDEX RANGE SCAN             | TP3R_MICD_MIID |     1 |    22 |     1
|   2 |   SORT AGGREGATE              |                |     1 |     9 |        
|   3 |    FIRST ROW                  |                |    40 |   360 |     1
|*  4 |     INDEX RANGE SCAN (MIN/MAX)| TP3R_MICD_MIID |    40 |   360 |     1
---------------------------------------------------------
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads

select max_id from tp3_range
where max_cre_dati = (select min(max_cre_dati)
                      from tp3_range
                      where max_cre_dati > to_date('20-OCT-2009','DD-MON-YYYY')
                     )

    MAX_ID
----------
     39999

Execution Plan
----------------------------------------------------------
| Id  | Operation                     | Name           | Rows  | Bytes | Cost 
----------------------------------------------------------
|   0 | SELECT STATEMENT              |                |     1 |    22 |     2
|*  1 |  INDEX RANGE SCAN             | TP3R_MACD_MIAD |     1 |    22 |     1
|   2 |   SORT AGGREGATE              |                |     1 |     9 |        
|   3 |    FIRST ROW                  |                |     4 |    36 |     1
|*  4 |     INDEX RANGE SCAN (MIN/MAX)| TP3R_MACD_MIAD |     4 |    36 |     1
----------------------------------------------------------
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads

Put it all together into one statement and let’s see how much it costs:

with get_min_id as
     (select min_id from tp3_range
      where min_cre_dati = (select max(min_cre_dati)
                            from tp3_range
                            where min_cre_dati < TO_DATE('18-OCT-2009','DD-MON-YYYY')
                           ) )
    ,get_max_id as
     (select max_id from tp3_range
      where max_cre_dati = (select min(max_cre_dati)
                            from tp3_range
                            where max_cre_dati > to_date('20-OCT-2009','DD-MON-YYYY')
                           ) )
select count(*)
from test_p3
    ,get_min_id
    ,get_max_id
where id > get_min_id.min_id
and   id < get_max_id.max_id
and   cre_datetime between TO_DATE('18-OCT-2009','DD-MON-YYYY')
                   and     TO_DATE('20-OCT-2009','DD-MON-YYYY')

  COUNT(*)
----------
       721
1 row selected.

Execution Plan
----------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name           | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                |     1 |    57 |    67   (5)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE                     |                |     1 |    57 |            |          |       |       |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID | TEST_P3        |     2 |    26 |    65   (5)| 00:00:01 |       |       |
|   3 |    NESTED LOOPS                     |                |     2 |   114 |    65   (5)| 00:00:01 |       |       |
|   4 |     MERGE JOIN CARTESIAN            |                |     1 |    44 |     2   (0)| 00:00:01 |       |       |
|*  5 |      INDEX RANGE SCAN               | TP3R_MICD_MIID |     1 |    22 |     1   (0)| 00:00:01 |       |       |
|   6 |       SORT AGGREGATE                |                |     1 |     9 |            |          |       |       |
|   7 |        FIRST ROW                    |                |    40 |   360 |     1   (0)| 00:00:01 |       |       |
|*  8 |         INDEX RANGE SCAN (MIN/MAX)  | TP3R_MICD_MIID |    40 |   360 |     1   (0)| 00:00:01 |       |       |
|   9 |      BUFFER SORT                    |                |     1 |    22 |     1   (0)| 00:00:01 |       |       |
|* 10 |       INDEX RANGE SCAN              | TP3R_MACD_MIAD |     1 |    22 |     1   (0)| 00:00:01 |       |       |
|  11 |        SORT AGGREGATE               |                |     1 |     9 |            |          |       |       |
|  12 |         FIRST ROW                   |                |     4 |    36 |     1   (0)| 00:00:01 |       |       |
|* 13 |          INDEX RANGE SCAN (MIN/MAX) | TP3R_MACD_MIAD |     4 |    36 |     1   (0)| 00:00:01 |       |       |
|  14 |     PARTITION RANGE ITERATOR        |                |       |       |            |          |   KEY |   KEY |
|  15 |      BITMAP CONVERSION TO ROWIDS    |                |       |       |            |          |       |       |
|  16 |       BITMAP AND                    |                |       |       |            |          |       |       |
|  17 |        BITMAP CONVERSION FROM ROWIDS|                |       |       |            |          |       |       |
|  18 |         SORT ORDER BY               |                |       |       |            |          |       |       |
|* 19 |          INDEX RANGE SCAN           | TP3_PK         |   707 |       |     6   (0)| 00:00:01 |   KEY |   KEY |
|  20 |        BITMAP CONVERSION FROM ROWIDS|                |       |       |            |          |       |       |
|  21 |         SORT ORDER BY               |                |       |       |            |          |       |       |
|* 22 |          INDEX RANGE SCAN           | TP_CRE_DT      |   707 |       |    48   (0)| 00:00:01 |   KEY |   KEY |
-------------------------------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        133  consistent gets
          0  physical reads

If you look way back up this post, you will see that the number of records selected by the above is the same as from the code suggested by Bernard (721 records) and for a lot less cost – 133 consistent gets compared to 892.

So I have hopefully explained the issue (having to visit all the index partitions to check the date range), shown how a ranges table can help, given some simple and a less-simple-but-more-efficient examples of code to populate the table and finally shown that using the range table can be more efficient.

And I arrogantly gave it a name – Correlated Partition Exclusion :-)

Some of you may remember that this example is a special case, as there is no overlap between the dates; the relationship between the CRE_DATETIME and ID is perfect. You are unlikely to have that in real life. Also, some of you may be tired of reading this post by now. So I will cover the general case in the NEXT post.

More on Assisting Partition Exclusion December 6, 2009

Posted by mwidlake in performance, VLDB.
Tags: , ,
2 comments

A couple of weeks ago I posted about how you could help the Oracle CBO carry out partition exclusion and thus aid performance. In essence, you have a table partitioned on ID (based on an ascending numeric) and you also have a date column (CRE_DATETIME, indexed using a locally partitioned index), which you often want to limit queries on.
The ID and the CRE_DATETIME both increase over time, so as a human you can infer that a date-created range will match an ID range that you can limit any queries on and so get partition exclusion. The CBO does not understand this relationship and so can't help. If you want to look at records for the last week and you know that you generate 10,000 records a day maximum, you can add a where clause of {and this is pseudocode}:
WHERE ID BETWEEN (MAX_ID-(10000*7))
AND (MAX_ID)
I’m afraid you will have to look back at the referenced post if this does not make sense.

The issue I want to address is how you can be sure that your fake ID range covers the date range.

A simple real-world example of this is when someone (a business objects user or helpdesk user seems to be the usual source) wants to look up the details of a fairly new customer on the datawarehouse. They don’t know the ID of the record, but they know the new account was created this week and some other filtering data. So they run a query saying:

WHERE CRE_DATETIME >SYSDATE-7
AND COL_A = ‘XXXX’
AND COL_B = ‘YYYY’

This visits every partition in the table, even if there is a locally partitioned index on CRE_DATETIME. See this post for details of this problem. If you are really unlucky, and this seems to be the usual situation, there is no index on the CRE_DATETIME either, and a full scan of the billion-row table is initiated, tying up one CPU and giving that expansive {sorry, typo, expensive – though both are true} storage sub-system something to work on for a few hours.

However, in this situation, do this: select max(ID) {the primary key} from the partitioned table, which will take less time than it does to type "select". Then you say to the user:

 “OK, the max ID is currently 987,000,000. Add this to the WHERE clause:

AND ID > 980000000

If that does not find your record now change that additional clause to:

AND ID BETWEEN 970000000 AND 980000000

and just keep going down by 10 million each time.”
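
{To make this concrete, here is the sort of thing the user ends up running. The table and column names are made up for illustration:}

-- max(id) came back as 987,000,000
select *
from   big_customer_table
where  cre_datetime > sysdate-7
and    col_a = 'XXXX'
and    col_b = 'YYYY'
and    id > 980000000;  -- the limit on the partition key allows partition exclusion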

By doing this, the user will be limiting the query on the partition key, the ID, so partition exclusion will be possible and each query will come back quickly. The user will usually find the record they want in the first couple of attempts {or will be kept quiet and less demanding of the system until they get bored and come back to tell you "you are an idiot", having scanned back over several tens of millions of records, but then you know they lied about the record being recent, so you need to help them in some other way}.

This post is actually to remind you of where I had got to on this thread, my being so unforgivably slow in keeping it running. Tomorrow (HONEST!) I will cover HOW you ensure your inferred ID range covers the CRE_DATETIME range you are interested in.
