
Table High Water Mark and How Empty the Table Is April 30, 2012

Posted by mwidlake in performance, statistics.

On a recent post about reducing the high water mark in a table by shrinking it, someone asked if you could tell how much “space” there was in a table that could be recovered by such a shrink.

Yes. As pseudo-code:

Select num_rows * avg_row_len. That is how much data there is.
Divide by the block size to give how many blocks-worth of data you have.
Scale this up by an “overhead” fudge factor of about 20-25% to allow for wasted space (see later).
Compare to the number of blocks used by the table.

Here is some code to do it, showing some extra data and how we calculate the effective percentage of space used (PCT_SPC_USED). It runs against an example table, HWM:

select uts.blocks                                     blks_used
      ,uts.avg_space
      ,uts.num_rows
      ,uts.avg_row_len
      ,uts.empty_blocks                               empty_blks
      ,usse.blocks                                    alloc_blks
      -- high water mark as a fraction of the allocated segment
      ,greatest(uts.blocks,1)/greatest(usse.blocks,1) pct_hwm
      ,uts.num_rows*uts.avg_row_len                   data_in_bytes
      -- 8192 is my block size - adjust to suit your tablespace
      ,(uts.num_rows*uts.avg_row_len)/8192            data_in_blks
      -- 1.25 is the "overhead" fudge factor discussed below
      ,((uts.num_rows*uts.avg_row_len)/8192)*1.25     mod_data_in_blks
      ,(((uts.num_rows*uts.avg_row_len)/8192)*1.25)/usse.blocks pct_spc_used
from user_tab_statistics uts
    ,user_segments       usse
where uts.table_name='HWM'
and   uts.table_name=usse.segment_name
/
--
 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       1221      50000         213         34       1664   .97957
     10650000   1300.04883       1625.06104    .97659918

I am collecting the data from USER_TAB_STATISTICS and USER_SEGMENTS. For this code to work you must have reasonably good stats against the table.
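
If you are unsure how good the stats are, a quick sanity check is to look at when they were last gathered (a minimal sketch against the standard dictionary view):

-- how fresh are the stats on our example table?
select table_name, num_rows, last_analyzed
from   user_tab_statistics
where  table_name = 'HWM';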

I’d like to run through some of the columns I have selected:
First of all, the columns AVG_SPACE and EMPTY_BLOCKS are not populated by dbms_stats.gather_table_stats. They are populated by the deprecated ANALYZE command, which you should NOT use to gather table stats since V10 came along. These columns hold values here only because I ran an ANALYZE as well as a dbms_stats gather.
Next, I collect the allocated blocks from USER_SEGMENTS {for this demo I just ignored the potential for partitioned tables} and I compare this to the used blocks to get the High Water Mark as a percentage of the segment. I do this because EMPTY_BLOCKS is zero if you have never used ANALYZE and, even if you did, the value will not change unless you keep using that deprecated command.
On the second line of output I calculate DATA_IN_BYTES as a simple num_rows*avg_row_len and convert it into blocks {for simplicity I do not collect the block size, I know it is 8k}. I then apply my “overhead” fudge factor. A block has a header using around 100 bytes {I’ve not checked the exact figure for years}; PCTFREE can be varied but defaults to 10%; and, as only whole rows fit, an average of half a row of space is left empty in each “full” block. Thus I inflate the calculated data volume by 20-25%. In this case 25%, as my rows are large.
Finally, I compare this modified data volume to the allocated blocks to get the effective percentage of space used, PCT_SPC_USED.
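
If you would rather not hard-code the 8k block size and the 1.25 factor, the calculation can be parameterised with SQL*Plus substitution variables (a sketch; the blksz and fudge variables are my additions, not part of the original script):

define blksz = 8192
define fudge = 1.25

select (uts.num_rows*uts.avg_row_len)/&&blksz                         data_in_blks
      ,((uts.num_rows*uts.avg_row_len)/&&blksz)*&&fudge               mod_data_in_blks
      ,(((uts.num_rows*uts.avg_row_len)/&&blksz)*&&fudge)/usse.blocks pct_spc_used
from user_tab_statistics uts
    ,user_segments       usse
where uts.table_name='HWM'
and   uts.table_name=usse.segment_name
/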

Below I run through creating some test data, looking at the stats and my calculated high water mark and PCT_SPC_USED, and finally shrink my table to see if my guesstimate is reasonable:


drop table hwm purge;
prompt populate table
set feed on
create table hwm
(id   number(10)
,num1 number(2)
,vc1  varchar2(100)
,vc2  varchar2(100)
)
/
insert into hwm 
select rownum
,trunc(dbms_random.value(1,100))
,lpad('A',100,'A')
,lpad('B',100,'B')
from dual connect by level < 50001
/
50000 rows created.

exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'HWM')

--where is the HWM compared to total segment size

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630          0      50000         210          0       1664   .97957
     10500000   1281.73828       1602.17285   .962844262

NB AVG_SPACE and EMPTY_BLKS are zero - dbms_stats does not populate them.
The high water mark is 1630 blocks out of 1664 in the segment.
My calculated PCT_SPC_USED is 96%. That is probably close enough
{remember, the last used block will be only partly used, accounting for a bit of the difference}.


-- I will use ANALYZE to fill the missing columns
analyze table hwm compute statistics;

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       1150      50000         213         34       1664   .97957
     10650000   1300.04883       1625.06104    .97659918

Now those two columns are populated.
Note the slightly different AVG_ROW_LEN, even though dbms_stats used 100% (as the table is so small)
and ANALYZE was COMPUTE.


-- clear 90% of the data randomly

45461 rows deleted.

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       1150       4539         210         34       1664   .97957
       953190   116.356201       145.445251   .087407002

PCT_HWM is not altered, of course, but PCT_SPC_USED has dropped dramatically.
The table is now only 8.7% used, according to my calculations
{and indeed I deleted 90% of the data}.

The BLKS_USED does not change. The AVG_SPACE and EMPTY_BLOCKS are also unchanged, as I
used dbms_stats to update the statistics and it DOES NOT ALTER the columns that it does not
populate. Thus, if you use a mixture of commands, you have no idea how recent those columns are.


-- clear some blocks completely by deleting a range

2181 rows deleted.


 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       1150       2358        210         34       1664   .97957
       495180   60.4467773       75.5584717   .045407735

Now the PCT_SPC_USED is down to 4.5%


-- has EMPTY_BLOCKS changed if I use ANALYZE?
analyze table hwm compute statistics;

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       7682       2358         213         34       1664   .97957
       502254   61.3103027       76.6378784   .046056417

As you can see, if I use ANALYZE, AVG_SPACE alters. But EMPTY_BLOCKS does not, even though I cleared
a chunk of the table. So there are blocks that can be reused but they are not listed as empty.
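
If you want a precise picture of the reusable space, rather than my fudge-factor estimate, DBMS_SPACE.SPACE_USAGE reports how free each block in an ASSM segment is (a sketch, assuming the table lives in an ASSM tablespace):

set serveroutput on
declare
  l_unf   number; l_unf_b  number;
  l_fs1   number; l_fs1_b  number;  -- blocks 0-25% free
  l_fs2   number; l_fs2_b  number;  -- blocks 25-50% free
  l_fs3   number; l_fs3_b  number;  -- blocks 50-75% free
  l_fs4   number; l_fs4_b  number;  -- blocks 75-100% free
  l_full  number; l_full_b number;
begin
  dbms_space.space_usage(user,'HWM','TABLE'
    ,l_unf,l_unf_b,l_fs1,l_fs1_b,l_fs2,l_fs2_b
    ,l_fs3,l_fs3_b,l_fs4,l_fs4_b,l_full,l_full_b);
  dbms_output.put_line('mostly empty (75-100% free) blocks: '||l_fs4);
  dbms_output.put_line('full blocks                       : '||l_full);
end;
/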

I'll just take a quick side-step and show an "oddity" about dbms_stats.
--deleting the stats (using dbms_stats)

exec dbms_stats.delete_table_stats(ownname=>user,tabname=>'HWM')

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
                                                              1664

dbms_stats.delete_table_stats clears ALL the statistics, even the ones it does not populate.

--and now collect them via dbms_stats again

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630          0       2358         210          0       1664   .97957
       495180   60.4467773       75.5584717   .045407735


--now to shrink the table
alter table hwm enable row movement;
alter table hwm shrink space;


 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
        72          0       2358         210          0         80   .90000
       495180   60.4467773       75.5584717   .944480896

So I calculated that there were about 75 blocks of data in that table. Having shrunk it, I see I was
only a little bit out: 72 blocks are now used.

Having run through those examples, we can see that the accuracy of PCT_SPC_USED comes down to the fudge factor employed, but at 25% it is probably close enough. After all, you are only likely to shrink a table that very clearly would benefit from it.

DBA_TAB_MODIFICATIONS July 2, 2010

Posted by mwidlake in internals, performance, statistics.

I’ve mentioned DBA_TAB_MODIFICATIONS (and there is also of course ALL_TAB_MODIFICATIONS and USER_TAB_MODIFICATIONS) in a couple of posts, but mostly buried in long, rambling posts about other things. So here is an overview and demonstration of DBA_TAB_MODIFICATIONS. Skip to the end if you just want to see an example of how it works.

The below example is from 10.2.0.3, but for V11 it is pretty much the same, as is V9, except where I say otherwise.

SYS.DBA_TAB_MODIFICATIONS is used by Oracle internally to track how many inserts, updates and deletes have been done to a table, table partition or table subpartition since the segment last had stats gathered on it with dbms_stats. It also records whether the segment has been truncated since the last stats gather. Why since the last stats gather? Because Oracle uses it internally to detect how much a segment has changed and, if it is 10% or more, Oracle will deem the stats stale for gathering purposes {under 11 you can alter the stale percentage}. Note I said SYS.DBA_TAB_MODIFICATIONS: on 10 at least the public synonym for it is missing, so unlike most other DBA_ views you need to state the owner (or create a synonym).
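
A simple query of the view looks something like this sketch (note the SYS. prefix), giving output like the anonymised sample below:

select table_owner||'.'||table_name
      ,inserts, updates, deletes
      ,timestamp, truncated
from sys.dba_tab_modifications
order by timestamp
/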

OWNER.TABLE_NAME     INSERTS    UPDATES    DELETES TIMESTAMP         TRU
------------------- ---------- ---------- ---------- ----------------- ---
XXXXXXX.YYYYYYYYYYY   22598264          0          1 01-MAR-2010 02:08 NO
XXXXXXX.YYYYYYYYYYY          5          0          0 19-SEP-2007 01:47 NO
XXXXXXX.YYYYYYYYYYY     888766          0          0 01-MAR-2010 02:08 NO
XXXXXXX.YYYYYYYYYYY       3191       1486          0 27-NOV-2009 05:11 NO
XXXXXXX.YYYYYYYYYYY      34742          0          0 08-MAR-2010 15:16 NO
XXXXXXX.YYYYYYYYYYY          0       7192          0 02-JUL-2010 05:00 NO
XXXXXXX.YYYYYYYYYYY          0          1          0 10-MAR-2010 15:16 NO
XXXXXXX.YYYYYYYYYYY          8          8          8 26-JAN-2010 08:05 NO
XXXXXXX.YYYYYYYYYYY    1533536          0          2 01-MAR-2010 02:08 NO
XXXXXXX.YYYYYYYYYYY        281          0          0 11-SEP-2009 03:00 NO 

Under Oracle 9 you have to mark a table as MONITORING before this information is gathered. Under 10 and 11 all tables/partitions/subpartitions are monitored and you can’t turn that monitoring off {you can try, but Oracle ignores you :-) }.
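
For V9 the registration is the MONITORING clause of ALTER TABLE (a sketch; MY_TAB is a placeholder name):

alter table my_tab monitoring;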

The information can be very useful for checking how volatile a segment is and whether it has been changed a lot since the last time stats were gathered on it. You can also add the values held in DBA_TAB_MODIFICATIONS to the NUM_ROWS held for the segment and get a very accurate estimate of the current number of rows. It is a lot, lot faster than actually counting them!
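
As a sketch of that fast row-count estimate (flush the monitoring information first so the figures are current; TEST_1 is just my demo table from below):

exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

select t.table_name
      ,t.num_rows + nvl(m.inserts,0) - nvl(m.deletes,0) est_current_rows
from user_tables t
    ,sys.dba_tab_modifications m
where m.table_owner(+) = user
and   m.table_name(+)  = t.table_name
and   t.table_name     = 'TEST_1'
/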

The information on inserts, updates and deletes is gathered for pretty much all DML against tables (see an up-coming post for an example of this not being true). Direct-path SQL*Loader and other direct-load activity can skip being recorded, but insert-append using the /*+ append */ hint is recorded correctly {I suspect this was not true for V9 and 10.1 but am no longer sure}. This information is initially held in memory and only later pushed into DBA_TAB_MODIFICATIONS, so you may not see the latest information. Under Oracle 9 this information is flushed down every 15 minutes I believe; under 10.1 it is every 3 hours; from 10.2 onwards it is only flushed down when stats are gathered against the segment OR you manually flush the information down to the database.

Flushing the latest information down is achieved with DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO. It is relatively quick to run, normally taking just a few seconds.

When statistics are gathered on a segment, any corresponding row in DBA_TAB_MODIFICATIONS is deleted, not updated to zeros. It is recreated only when relevant inserts, updates, deletes or truncates next occur on the segment.

The DBA_TAB_MODIFICATIONS view sits on top of sys.mon_mods_all$ as well as obj$, user$ and the usual suspects. sys.mon_mods_all$ does not contain any more information than the view exposes.

desc sys.dba_tab_modifications
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------
 TABLE_OWNER                                                    VARCHAR2(30)
 TABLE_NAME                                                     VARCHAR2(30)
 PARTITION_NAME                                                 VARCHAR2(30)
 SUBPARTITION_NAME                                              VARCHAR2(30)
 INSERTS                                                        NUMBER
 UPDATES                                                        NUMBER
 DELETES                                                        NUMBER
 TIMESTAMP                                                      DATE
 TRUNCATED                                                      VARCHAR2(3)
 DROP_SEGMENTS                                                  NUMBER

--View description
TEXT
-----------------------------------------------------------------------
SYS                            DBA_TAB_MODIFICATIONS                  9
select u.name, o.name, null, null,
       m.inserts, m.updates, m.deletes, m.timestamp,
       decode(bitand(m.flags,1),1,'YES','NO'),
       m.drop_segments
from sys.mon_mods_all$ m, sys.obj$ o, sys.tab$ t, sys.user$ u
where o.obj# = m.obj# and o.obj# = t.obj# and o.owner# = u.user#
union all
select u.name, o.name, o.subname, null,
       m.inserts, m.updates, m.deletes, m.timestamp,
       decode(bitand(m.flags,1),1,'YES','NO'),
       m.drop_segments
from sys.mon_mods_all$ m, sys.obj$ o, sys.user$ u
where o.owner# = u.user# and o.obj# = m.obj# and o.type#=19
union all
select u.name, o.name, o2.subname, o.subname,
       m.inserts, m.updates, m.deletes, m.timestamp,
       decode(bitand(m.flags,1),1,'YES','NO'),
       m.drop_segments
from sys.mon_mods_all$ m, sys.obj$ o, sys.tabsubpart$ tsp, sys.obj$ o2,
     sys.user$ u
where o.obj# = m.obj# and o.owner# = u.user# and
      o.obj# = tsp.obj# and o2.obj# = tsp.pobj#

-- underlying sys.$ table
desc sys.mon_mods_all$
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------
 OBJ#                                                           NUMBER
 INSERTS                                                        NUMBER
 UPDATES                                                        NUMBER
 DELETES                                                        NUMBER
 TIMESTAMP                                                      DATE
 FLAGS                                                          NUMBER
 DROP_SEGMENTS                                                  NUMBER

Here is a demo of it in action:

TDB>-- clear down the test table.
TDB>-- if you do not have access to DBA_TAB_MODIFICATIONS change to ALL_TAB_MODIFICATIONS
TDB>drop table test_1 purge;

Table dropped.

TDB>create table test_1
  2  as select * from all_objects where rownum<1
  3  /

Table created.

TDB>select count(*) from test_1;

  COUNT(*)                                                                      
----------                                                                      
         0                                                                      

1 row selected.

TDB>-- WHAT IS IN DBA_TAB_MODIFICATIONS
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

no rows selected

TDB>-- OK, let's flush down the information
TDB>--
TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>-- Still nothing as no activity has occurred on the table.
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

no rows selected

TDB>-- Now create some data
TDB>insert into test_1
  2  select * from dba_objects where rownum <= 100
  3  /

100 rows created.

TDB>commit;

Commit complete.

TDB>select count(*) from test_1;

  COUNT(*)                                                                      
----------                                                                      
       100                                                                      

1 row selected.

TDB>-- WHAT IS IN DBA_TAB_MODIFICATIONS
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

no rows selected

TDB>-- OK, let's flush down the information
TDB>--
TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>--
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP                    
------------ ---------- ---------- ---------- ---- -----------------            
TEST_1              100          0          0 NO   02-JUL-2010 10:21            

1 row selected.

TDB>-- The information in DBA_TAB_MODIFICATIONS is used by Oracle to detect if a table
TDB>-- (or partition) is stale - changed by 10% or more.
TDB>-- Gathering statistics on an object DELETES the record from DBA_TAB_MODIFICATIONS
TDB>-- rather than setting all the values to zero.
TDB>--
TDB>exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'TEST_1')

PL/SQL procedure successfully completed.

TDB>--
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

no rows selected

TDB>-- now do some activity again and flush it down to the dictionary
TDB>insert  into test_1
  2  select * from dba_objects where rownum <= 150
  3  /

150 rows created.

TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP                    
------------ ---------- ---------- ---------- ---- -----------------            
TEST_1              150          0          0 NO   02-JUL-2010 10:21            

1 row selected.

TDB>-- Direct inserts via insert-append are also captured (I think this might have changed)
TDB>insert /*+ append */ into test_1
  2  select * from dba_objects where rownum <= 170
  3  /

170 rows created.

TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP                    
------------ ---------- ---------- ---------- ---- -----------------            
TEST_1              320          0          0 NO   02-JUL-2010 10:21            

1 row selected.

TDB>-- now a mixed bag of inserts, updates and deletes
TDB>insert into test_1
  2  select * from dba_objects where rownum <= 75
  3  and owner not in ('SYS','SYSTEM')
  4  /

75 rows created.

TDB>--
TDB>update test_1 set created=sysdate
  2  where object_type !='TABLE'
  3  /

289 rows updated.

TDB>delete from test_1
  2  where object_type='SEQUENCE'
  3  /

10 rows deleted.

TDB>commit;

Commit complete.

TDB>--
TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP                    
------------ ---------- ---------- ---------- ---- -----------------            
TEST_1              395        289         10 NO   02-JUL-2010 10:21            

1 row selected.

If you want to see what else I said about DBA_TAB_MODIFICATIONS, the end of this {way too long} post shows how to use it to get a very fast count of the rows in a table, and if you have issues with permissions when flushing the database monitoring information, see this link about the permissions needed.

Question: as the data gets flushed from memory to a data dictionary table, it survives the database being bounced. What happens to the in-memory information about table changes when there is a controlled shutdown, and when the database crashes?

dbms_stats.set_table_stats “defaults” June 21, 2010

Posted by mwidlake in internals, statistics.

What happens if you call dbms_stats.set_table_stats without passing in any of the values to set?

I know, why would you do it anyway? Well, I did so by accident. If I want to gather quick stats on a test table I execute something like:

exec dbms_stats.gather_table_stats(OWNNAME =>user,TABNAME =>'TEST_TAB1')
If I am feeling generous I might state ESTIMATE_PERCENT too.
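
i.e. something like this (a sketch; 10% is a typical figure I might pick):

exec dbms_stats.gather_table_stats(OWNNAME =>user,TABNAME =>'TEST_TAB1',ESTIMATE_PERCENT =>10)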

I was doing some testing work and was gathering stats and also setting stats manually. Then I started to see that several of my test tables all had 2000 rows and were 100 blocks in size - at least according to the stats. I knew this was not possible. It turned out to be Cut ’n’ Paste finger trouble and what I was issuing was:

exec dbms_stats.set_table_stats(OWNNAME =>user,TABNAME =>'TEST_TAB1')

If the table did not already have stats this set the stats on the table to default values of 2000 rows, 100 blocks. If the table already had stats then they were left as they were.

If those figures ring a bell, that is because they are the default values used if a table has no stats and you have no dynamic sampling. See this table of defaults.

Anyway, below is a little worked example of these default values being set. Oh, version is 10.2.0.3.

TDB> drop table TEST1 purge
  2  /

TDB> select sysdate from dual;
SYSDATE
-----------------
21-JUN-2010 16:52


TDB> CREATE TABLE TEST1
  2  AS SELECT ROWNUM ID
  3    ,OBJECT_NAME OBJ_NAME
  4  FROM DBA_OBJECTS
  5  /

TDB> SELECT TABLE_NAME,NUM_ROWS,BLOCKS,SAMPLE_SIZE
  2  FROM DBA_TABLES
  3  WHERE OWNER=USER AND TABLE_NAME = 'TEST1'
  4  /

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1

-- New table, no stats yet gathered, the columns hold null

-- Call dbms_stats.SET_TABLE_STATS, setting nothing
TDB> EXEC dbms_stats.set_table_stats(ownname => user,tabname => 'TEST1')

TDB> SELECT TABLE_NAME,NUM_ROWS,BLOCKS,SAMPLE_SIZE
  2  FROM DBA_TABLES
  3  WHERE OWNER=USER AND TABLE_NAME = 'TEST1'

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1                                2000        100        2000

-- The columns are set to defaults

-- Gather proper stats
TDB> exec dbms_stats.gather_table_stats(ownname =>user,tabname =>'TEST1',estimate_percent=>10)

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1                              205430        956       20543


-- Now use SET_TABLE_STATS as intended, setting numrows to a value
TDB> EXEC dbms_stats.set_table_stats(ownname => user,tabname => 'TEST1',numrows=>5000)

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1                                5000        956       20543

-- Try the naked SET_TABLE_STATS
TDB> EXEC dbms_stats.set_table_stats(ownname => user,tabname => 'TEST1')

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1                                5000        956       20543

TDB> select sysdate from dual;
SYSDATE
-----------------
21-JUN-2010 16:52


-- And let us see how the stats have changed over the last few minutes.
TDB> select table_name,stats_update_time
  2  from dba_tab_stats_history
  3  where table_name = 'TEST1'
  4  /
TABLE_NAME                     STATS_UPDATE_TIME
------------------------------ --------------------------------
TEST1                          21-JUN-10 16.52.03.028086 +00:00
TEST1                          21-JUN-10 16.52.05.109905 +00:00
TEST1                          21-JUN-10 16.52.06.906204 +00:00
TEST1                          21-JUN-10 16.52.08.329664 +00:00
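
Incidentally, that history is what DBMS_STATS.RESTORE_TABLE_STATS reads, so an accidental “naked” set_table_stats can be undone (a sketch, assuming you are still within the stats history retention window):

exec dbms_stats.restore_table_stats(ownname=>user,tabname=>'TEST1',as_of_timestamp=>systimestamp-interval '10' minute)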

Performance Tipping Points April 13, 2010

Posted by mwidlake in performance, statistics.

This week I came across a nice example of a performance tipping point. This is where everything is OK until you reach a point where it all quickly cascades to Not OK.

The below shows the timings for a regularly run “Alert”. Four times an hour we want to know if something quite important has happened on the system.

ID S START_DATE END_DATE DUR TIME_RAN START_PARAM END_PARAM
---- - ------------- ------------- --------- ------------- -------------------- -----------------
292 S 0410 03:06:00 0410 03:06:31 .000359 0410 03:06:00 20100410 02:51:00 20100410 03:06:00
292 S 0410 03:21:00 0410 03:21:35 .000405 0410 03:21:00 20100410 03:06:00 20100410 03:21:00
292 S 0410 03:36:00 0410 03:36:38 .000440 0410 03:36:00 20100410 03:21:00 20100410 03:36:00
292 S 0410 03:51:00 0410 03:51:33 .000382 0410 03:51:00 20100410 03:36:00 20100410 03:51:00
292 S 0410 04:06:00 0410 04:06:28 .000324 0410 04:06:00 20100410 03:51:00 20100410 04:06:00
292 S 0410 04:21:00 0410 04:21:44 .000509 0410 04:21:00 20100410 04:06:00 20100410 04:21:00
292 S 0410 04:36:00 0410 04:36:27 .000313 0410 04:36:00 20100410 04:21:00 20100410 04:36:00
292 S 0410 04:51:00 0410 04:51:34 .000394 0410 04:51:00 20100410 04:36:00 20100410 04:51:00
292 S 0410 05:06:00 0410 05:06:44 .000509 0410 05:06:00 20100410 04:51:00 20100410 05:06:00
292 S 0410 05:21:00 0410 05:21:43 .000498 0410 05:21:00 20100410 05:06:00 20100410 05:21:00
292 S 0410 05:36:00 0410 05:37:01 .000706 0410 05:36:00 20100410 05:21:00 20100410 05:36:00
292 S 0410 05:51:00 0410 05:52:21 .000938 0410 05:51:00 20100410 05:36:00 20100410 05:51:00
292 S 0410 06:06:00 0410 06:08:09 .001493 0410 06:06:00 20100410 05:51:00 20100410 06:06:00
292 S 0410 06:21:01 0410 06:24:00 .002072 0410 06:21:01 20100410 06:06:00 20100410 06:21:01
292 S 0410 06:36:00 0410 06:40:12 .002917 0410 06:36:00 20100410 06:21:01 20100410 06:36:00
292 S 0410 06:51:00 0410 06:56:54 .004097 0410 06:51:00 20100410 06:36:00 20100410 06:51:00
292 S 0410 07:06:00 0410 07:13:17 .005058 0410 07:06:00 20100410 06:51:00 20100410 07:06:00
292 S 0410 07:21:00 0410 07:29:42 .006042 0410 07:21:00 20100410 07:06:00 20100410 07:21:00
292 S 0410 07:36:00 0410 07:47:48 .008194 0410 07:36:00 20100410 07:21:00 20100410 07:36:00
292 S 0410 07:51:00 0410 08:08:07 .011887 0410 07:51:00 20100410 07:36:00 20100410 07:51:00
292 S 0410 08:08:07 0410 08:29:43 .015000 0410 08:08:07 20100410 07:51:00 20100410 08:08:07
292 S 0410 08:29:43 0410 08:50:10 .014201 0410 08:29:43 20100410 08:08:07 20100410 08:29:43
292 S 0410 08:50:10 0410 09:22:28 .022431 0410 08:50:10 20100410 08:29:43 20100410 08:50:10
292 S 0410 09:22:28 0410 10:27:11 .044942 0410 09:22:28 20100410 08:50:10 20100410 09:22:28
292 S 0410 10:27:11 0410 12:57:16 .104225 0410 10:27:11 20100410 09:22:28 20100410 10:27:11
292 F 0410 12:57:16 0410 14:50:26 .078588 0410 12:57:16 20100410 10:27:11 20100410 12:57:16
292 F 0410 14:50:26 0410 16:49:42 .082824 0410 14:50:26 20100410 10:27:11 20100410 14:50:26
292 F 0410 16:49:42 0410 19:06:10 .094769 0410 16:49:42 20100410 10:27:11 20100410 16:49:42

The first half dozen records show program ID 292 running in half a minute or so. It varies a little, from just under 30 seconds to 45 seconds. Each run kicks off 15 minutes after the previous one and, if you check the START_PARAM and END_PARAM on the right of the listing, you can see that each report covers the previous 15 minutes, i.e. the period since the last run {including the running time of that last iteration}.

Then, at 05:36 something happens. The execution takes a full minute. The next run takes 1 minute and 20 seconds. The next run takes over two minutes and each following execution takes a little longer and longer.

At 07:51, a couple of hours later, something significant happens. The execution takes more than 15 minutes to run, and this report is run every 15 minutes. I’m sure you can appreciate that this is a problem. What happens now depends on your architecture.

If you have a scheduler that simply kicks off the report every 15 minutes, then at the next scheduled time (08:06 in my case) a new execution will start and you will have two versions of the report running for a couple of minutes. As time progresses this overlap increases. When the run time reaches 30 minutes you will start having three reports running at the same time, and soon you will have four, five, six versions running at once. Depending on the number of CPUs and the IO bandwidth of your system, how long will it be until the box is doing nothing but running this code?
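{As a rough rule of thumb - my arithmetic, not something from the listing: with a 15-minute schedule and a run time of R minutes, about R/15 copies will be running at once, rounded up. A 45-minute run time means three concurrent reports; a 75-minute run time means five.}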
If the code is doing something more than simply selecting data, the chance of the concurrent versions competing for locks or resources is high, and that can mean the slow-down escalates rapidly; in a very, very short time your system is on its knees.

In our case we have some protection against this. The scheduler detects that the previous execution has not finished and waits until it has done so before starting the next one. So the next execution is delayed; in my example the 08:06 execution is delayed until 08:08, 2 minutes late.

We still have two possible situations. The report could simply continue to slow down at the previous rate and just be delayed longer and longer. In my example it has been slowing down at around 1 minute 20 seconds per run for the last 5 runs.

However, the next execution is further delayed, until 08:29, which is 6 minutes beyond the 15-minute window from 08:08. This latest run of the code is 5 minutes 30 seconds slower than the previous execution, not 1 minute 20 seconds slower. We have hit a second tipping point. If you remember, this report runs to cover the period since the last execution. As the last execution took more than 15 minutes, the next run has to report on more than 15 minutes of activity; 21 minutes in this case.

The slow-down now rapidly escalates due to not just whatever was slowing the report down initially but also the growing reporting period.

Just 5 runs and 5 hours later, at 12:57:16, the report fails. It now runs for so long that it cannot recreate the data as it looked at the start of the run and we get snapshot-too-old errors. In the listing you just see the second column go from S to F.

It is game over. Thankfully only one copy of the report is running at any time {a new version is kicked off as soon as the previous one fails} and so, at worst, it is using only one process on the box, one CPU at most and the IO that a single thread can demand.

What was the root cause? Time, statistics and data volume.
Statistics, because the job that collects statistics had failed earlier.
Time, because at 05:50 or so the 15-minute run window moved far enough beyond the range of date-times known to the statistics for a partition that the CBO decided only a few rows would be found and swapped to a nested-loop plan.
Data volume, because data was coming in quicker as the day progressed and the nested-loop plan's performance was highly susceptible to increases in data volume.
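
You can see this coming by comparing the date-time high value recorded in the column stats with the clock (a sketch; MY_TAB and MY_DATE_COL are placeholder names, and for a partitioned table you may need the partition-level equivalents):

set serveroutput on
declare
  l_raw  raw(32);
  l_high date;
begin
  select high_value into l_raw
  from   user_tab_col_statistics
  where  table_name  = 'MY_TAB'
  and    column_name = 'MY_DATE_COL';
  -- decode the raw dictionary value into a date
  dbms_stats.convert_raw_value(l_raw, l_high);
  dbms_output.put_line('stats high value: '||to_char(l_high ,'DD-MON-YYYY HH24:MI'));
  dbms_output.put_line('sysdate now     : '||to_char(sysdate,'DD-MON-YYYY HH24:MI'));
end;
/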

For an excellent explanation of why code suddenly changes its execution plan when nothing has happened but time passing, see this post and the comments on the topic on Richard Foote’s blog.

{And isn’t his blog so much skinnier than mine? :-) }

Friday Philosophy – I killed a presentation April 1, 2010

Posted by mwidlake in Friday Philosophy, performance, statistics.

Well, this week saw the latest Management and Infrastructure SIG. I won’t go into the SIG meeting itself just now, maybe later this weekend. If you want an opinion on it, you might like to see what Graham Oaks said.

Being Chair of the SIG I have to find presenters. Being also one of those rare people who enjoys presenting, I know I can always tap one person to present: me. {If only I could sing, play an instrument, act, do impressions, put both my ankles behind my ears(*) or anything else of any general interest, getting an audience would be so much easier}.

I decided to tackle a topic I have tried before and which I previously made a very poor show of: “General principles on gathering database statistics”.
It is not a title to strike joy into the soul, I know, but it is really important to getting good and reliable performance out of any modern Oracle database. Even other Oracle DBA types tend to find the topic worthy but boring, but I seem to have been forced to know a lot about it and I’ve become sadly very passionate about it.

So, I tried again. I mean, how long should it take to describe the issues with database statistics and the general principles to gathering them? I took my old presentation and stripped out all code, all SQL syntax, all listing of tables and got it down to describing the process. Then I started adding the bits and pieces you need to know to get this half-right. 132 slides. Hmmmm

I tried again and took it higher level. 48 slides. I can do 48 slides in something between 45 minutes and an hour. I was ready.

I had the last presentation slot of the day. I figure if I am going to foist myself on the audience, they should have the right to leave before I start, without missing any of the good stuff. I had 3/4 of the day’s attendees still with me when I started.

I think I killed my audience. Actually, I know I killed my audience. I lost a few as the slot progressed {I kidded myself they had to get back to the office/catch a train} and I made the fatal mistake of not shutting up when I had done 45 minutes. You see, there was all this important stuff I had not mentioned yet! Thankfully, I had a friend in the second row and when I saw him lose the will to live, I stopped. I’d been at it for 70 minutes. What we really needed was the Chair to shut me up but I was the chair.

Anyway, I came away from the talk with two realisations.

  • The topic of gathering database statistics is too large to cover in one presentation session.
  • It really is very boring, even when you are passionate about it.

I think I have no choice but to make this a blog thread (as I threatened to do about a month ago). But I have also promised myself to finish off SQL Audit before I start a new thread, and that was 3 months ago.

So I think I am going to have to put some time into this blogging lark. It’s OK though: I seem to go on about database statistics so much that even my cat is starting to avoid me. I’ll just stop watching Star Trek and drinking wine in the evenings and switch to preparing technical blogs. And drinking wine, of course.

(*) Up until 2007 I could do this. I get out more now. Despite the stats fixation
