Getting Your Transaction SCN – USERENV(COMMITSCN) January 19, 2016
Posted by mwidlake in development, performance, SQL. Tags: data dictionary, performance, PL/SQL, SQL
A few days ago I was introduced (or re-introduced) to USERENV(‘COMMITSCN’) by Jonathan Lewis. This is an internal function that allows limited access to the SCN of your transaction.
I was trying to find a way to get the actual commit SCN easily as it struck me that Oracle would have it to hand somewhere and it would be unique to the change and generated very efficiently. I could not find anything to do it so I asked Jonathan and he pointed me straight to this post he did about it a while back. What a nice chap. However, the post is from 1999 (last CENTURY!) so I thought I should just check it out first…
I’ll say up front that this is an undocumented feature of a now-deprecated function. Well, almost undocumented – older SQL reference manuals mention that ‘commitscn’ returns a NUMBER as opposed to the VARCHAR2 returned by most parameters you can use with USERENV, but it does not list it as a valid parameter for that function.
USERENV has been deprecated since Oracle 10g (see the old 10g documentation link here about USERENV) and you have been instructed to use SYS_CONTEXT(‘userenv’,’parameter‘) as described in the 12c database SQL reference manual here. However, there is no way to get the commit SCN from SYS_CONTEXT that I can find, so I thought I’d check out if USERENV(‘COMMITSCN’) still works. It does, on my version of Oracle 12.1.0.2!
There are some strict limits to this function. To begin with, you can’t select it, you can only use it on insert/update:
-- attempt a simple select of the SCN
mdw> select userenv('commitscn') from dual;
select userenv('commitscn') from dual
       *
ERROR at line 1:
ORA-01725: USERENV('COMMITSCN') not allowed here

--But I can use in an insert,
mdw> insert into test_scn (seq_no,vc1,scn_no)
  2  values (100,'abcd',userenv('commitscn'))

1 row created.

mdw> select * from test_scn where seq_no=100

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
       100 abcd         11144739

-- Now commit my new record
mdw> commit;

Commit complete.

mdw> select * from test_scn where seq_no=100
  2  /

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
       100 abcd         11144753

--LOOK at the value for SCN_NO now! Compare to before the commit!
If you look at the above you will see a couple of things. The first is that, as I said, you cannot SELECT the function USERENV(‘COMMITSCN’).
The other is that, although a value is put into the column when I insert a row using the function, and I see that value when I query the information back, it changes when I commit. This is because Oracle is recording something at the point of commit, not at the point the SQL statement runs – and the new SCN is only generated when you commit. A lot could have happened since I did the INSERT; I might have gone for a cup of tea while a batch job kicked off and did 1 million transactions, each with its own SCN. So although a placeholder of the current SCN is put into your view of the table row, the value put in the actual table is generated at the time of the commit.
Another limiting rule is that you can only reference USERENV('COMMITSCN') once in a transaction, for one row. If I try to create 2 rows using the function in the same transaction I get an error, and if I try to update more than 1 row I also get an error:
mdw> insert into test_scn (seq_no,vc1,scn_no)
  2  values (101,'abcd',userenv('commitscn'))

1 row created.

mdw> insert into test_scn (seq_no,vc1,scn_no)
  2  values (102,'abcd',userenv('commitscn'))
insert into test_scn (seq_no,vc1,scn_no)
            *
ERROR at line 1:
ORA-01721: USERENV(COMMITSCN) invoked more than once in a transaction

-- now test updating several records
mdw> commit;

Commit complete.

mdw> select * from test_scn;

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
         1 AAAAAAA      11143743
         2 AAAAAAA      11143746
         3 AAAAAAA      11143749
         4 AAAAAAA      11143774
         5 AAAAAAA      11143777
       100 abcd         11144753
       101 abcd         11145543

mdw> update test_scn set scn_no = userenv('commitscn');
update test_scn set scn_no = userenv('commitscn')
                             *
ERROR at line 1:
ORA-01721: USERENV(COMMITSCN) invoked more than once in a transaction

-- but one row works
mdw> update test_scn set scn_no = userenv('commitscn') where rownum =1;

1 row updated.
USERENV('COMMITSCN') is old, undocumented and limited in use. So why am I looking at it, let alone telling you all about it? {Apart from the fact that a lot of you *love* this sort of obscure, tid-bitty stuff.} Well, because there is no replacement for it that I am aware of. You can get the current SCN in a couple of ways, the easiest probably being to get it from V$DATABASE:
mdw> select current_scn from v$database;
any key>

CURRENT_SCN
-----------
   11146718
However, that is the last SCN used at the time you check it and is not the SCN when you commit, i.e. it is a different thing. I always find it irksome on those odd occasions when something is deprecated in Oracle with nothing really to replace it.
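If you want a more programmatic way of reading the current SCN, DBMS_FLASHBACK gives you one (you may need EXECUTE granted on the package). This is a minimal sketch and, to be clear, it again gives you the current SCN at the moment you ask, not your commit SCN:

-- read the current SCN from PL/SQL (the current SCN, not the commit SCN)
declare
  l_scn number;
begin
  l_scn := dbms_flashback.get_system_change_number;
  dbms_output.put_line('current SCN is '||l_scn);
end;
/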
Below I demonstrate again that USERENV('COMMITSCN') is a little special, and not the same as just selecting the SCN from V$DATABASE. Before I go any further, I think the value USERENV('COMMITSCN') puts into the table is the actual commit SCN minus 1. I mostly think this because Jonathan said so :-). I do see, each time I run this test, that the first select from V$DATABASE followed by my insert and an immediate commit results in a value in the table 2 higher than the select.
Further iterations (2nd and 3rd in this case) show the value selected from V$DATABASE and the value inserted into TEST_SCN immediately after are the same, and are 3 higher than the previous iteration. I anticipated an increase of two, once for the change to the UNDO tablespace for the insert and once for the insert. I am not sure where the third one comes in.
However, in the fourth iteration I have a PAUSE in my SQL*Plus script between checking V$DATABASE and doing my insert and, in a second session, I do some simple inserts and commits {it does not matter what, so I don't show it}. Hence the larger difference between the SCN I collected from V$DATABASE and the value inserted into the table.
Finally, in the fifth iteration, I check the value in V$DATABASE, do the insert, query it back and see the two are the same. And THEN I pause so I can do some changes and commit them in my second session. After I’ve done that I continue my first session which commits my latest insert into TEST_SCN. I check the values actually stored in the table and, just as at the top of this post, you see that the value actually preserved in the table is a later SCN than the placeholder one. It is doing something special.
(the below has the noddy code to create my test table and sequence as well as the test)
-- test_scn1
--create table test_scn (seq_no number,vc1 varchar2(10),scn_no number)
-- create sequence scn_seq;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
pause 'commit some stuff in second session and then press any key'
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
pause 'commit some stuff in second session again and then press any key'
select * from test_scn;
commit;
select * from test_scn;
select current_scn from v$database;

-- the output of the test
mdw> @test_scn1

--check V$DATABASE SCN
CURRENT_SCN
-----------
   11147809

-- Create and commit 1 row
1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811

-- note that the inserted SCN is 2 higher than the current SCN.

-- Same steps, 2nd iteration
CURRENT_SCN
-----------
   11147814

1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814

-- select SCN and inserted SCN are the same, 3 higher than first iteration

-- same steps, 3rd iteration
CURRENT_SCN
-----------
   11147817

1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817

-- select SCN and inserted SCN are the same, 3 higher than second iteration

-- 4th iteration, a pause
CURRENT_SCN
-----------
   11147820

'commit some stuff in second session and then press any key'
-- I did indeed change and commit some stuff in the second session, before I created my record in test_scn
1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
        14 AAAAAAA      11147831

-- larger gap in SCN (11147817 to 11147831)

-- 5th iteration, pause now after insert and before commit
CURRENT_SCN
-----------
   11147834

1 row created.

'commit some stuff in second session again and then press any key'
-- I did indeed change and commit some stuff in the second session

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
        14 AAAAAAA      11147831
        15 AAAAAAA      11147834

-- Notice the current_scn from V$DATABASE and the last row in the table match, 11147834.

Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
        14 AAAAAAA      11147831
        15 AAAAAAA      11147842

-- But after the commit the SCN in row "15" has increased to 11147842.

CURRENT_SCN
-----------
   11147851

-- and the next check of V$DATABASE SCN shows the usual increase of 3 by the commit.
As you can see from the above, USERENV(‘COMMITSCN’) is doing something a little special and, despite all the limitations, I might actually have a use for it…
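As a final aside, if all you want is the approximate commit SCN of a row after the event, rather than capturing it at insert time, the ORA_ROWSCN pseudocolumn may be enough. By default it is tracked at block level, so it is only approximate unless the table was created with ROWDEPENDENCIES. A quick sketch against my TEST_SCN table:

-- ORA_ROWSCN gives the (block-level, approximate) SCN of the last change to a row.
-- For row-level accuracy the table must have been created with ROWDEPENDENCIES.
select seq_no, scn_no, ora_rowscn
from   test_scn
where  seq_no = 100;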
STANDARD date considerations in Oracle SQL and PL/SQL July 29, 2015
Posted by mwidlake in internals, PL/SQL. Tags: data dictionary, PL/SQL, SQL
Most of us know that the Oracle DATE datatype has upper and lower limits. From the Oracle 11g Database Concepts manual:
Oracle Database can store dates in the Julian era, ranging from January 1, 4712 BCE through December 31, 9999 CE (Common Era, or ‘AD’). Unless BCE (‘BC’ in the format mask) is specifically used, CE date entries are the default.
I never believe 100% anything I read, so I’ll try that. I’ll set my session to show dates with the AD/BC indicator and step back in time:
ora122> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYYAD';

Session altered.

-- select today
ora122> select sysdate from dual;

SYSDATE
-------------
29-JUL-2015AD

--now let us go back to "the edge of time"
ora122> select sysdate -2457232 from dual;

SYSDATE-24572
-------------
01-JAN-4712BC

ora122> select sysdate -2457233 from dual;
select sysdate -2457233 from dual
       *
ERROR at line 1:
ORA-01841: (full) year must be between -4713 and +9999, and not be 0

-- Now to do similar in PL/SQL
declare
  out_text varchar2(100);
begin
  select to_char(sysdate) into out_text from dual;
  dbms_output.put_line (out_text);
end;

ora122> @std1
31-DEC-4713BC

PL/SQL procedure successfully completed.
How did I do that? We can see from the SQL that the documentation is correct and SQL refuses to accept a date before the lower limit. How did I get a date before 01-JAN-4712BC in my PL/SQL? Especially as my default SYSDATE?
I’ll let you think about that for 30 seconds, you can look at a picture of my recently gone and much missed cat (NB she is happily snoozing in this shot!).
So how did I do it? I cheated.
But I cheated in a very, very interesting way. I did not show you all of my PL/SQL code, which I now reveal below…:
declare
  sysdate varchar2(20) := '31-DEC-4713BC';
begin
  declare
    out_text varchar2(100);
  begin
    select to_char(sysdate) into out_text from dual;
    dbms_output.put_line (out_text);
  end;
end;
/
So, showing you my whole code (see, don't believe everything you read – sometimes things are simply wrong and sometimes people deceive you) you can see the critical part at the start. My anonymous PL/SQL block is in fact a child block to another. And in that parent block, right at the start, I over-ride the definition of SYSDATE in the declaration section:
sysdate varchar2(20) := ’31-DEC-4713BC’;
I not only set it to a specific value, I set it to be a Varchar2 datatype. The TO_CHAR of it later on in the logic, which I included in the code I originally showed you, was just more subterfuge on my part. The PL/SQL engine does not care if you TO_CHAR an already CHAR-type field, but it hid the fact that I’d played this trick.
You could define a local SYSDATE variable, as a date, set to a specific date and time if you wish. Even one in the future. And anyone seeing odd behaviour and checking to see if the initialisation parameter FIXED_DATE had been set would find that it had not, and might soon be questioning their own sanity.
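For reference, FIXED_DATE is the legitimate way to freeze SYSDATE, and it applies to the whole instance, which is why it is really only a testing tool. A quick sketch of how it is normally used (the date value is just an example):

-- freeze SYSDATE for the whole instance (testing only!)
alter system set fixed_date = '2015-07-29-10:00:00';

-- and to go back to the real clock
alter system set fixed_date = none;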
How many of you knew you could do that? You can over-ride what most of us would consider a Reserved Word in PL/SQL. I suspect it is something that people new to PL/SQL might find out by accident (because no one told them you could not use reserved words for variable names) but experienced people would not as it is simply a daft thing to do. I’d go further, it is a stupid thing to do. Think how much trouble it could cause in introducing bugs and making the code hard to understand. {And thinking further about this, I might see if I can get re-employed at a couple of places and starting doing things like this, just for the evil pleasure, as it could be an utter nightmare to spot}.
The reason this trick works is that SYSDATE, along with many other interesting things, is not "built in" to the PL/SQL language but is defined in two key packages – SYS.STANDARD and SYS.DBMS_STANDARD. These are always there and define many core things. You can DESC both of them in SQL*Plus or pull the package specification out of DBA_SOURCE and, unlike many of the other built-in packages, the code of STANDARD is not wrapped, so you can look at it. You can do this with a "lowly" DBA-type user; you do not need to be logged on as SYS or SYSTEM.
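If you want to go and look for yourself, pulling the specification out of DBA_SOURCE is as simple as the sketch below (the package body of STANDARD is there too, under TYPE = 'PACKAGE BODY'):

-- pull the specification of SYS.STANDARD out of the data dictionary
select text
from   dba_source
where  owner = 'SYS'
and    name  = 'STANDARD'
and    type  = 'PACKAGE'
order by line;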
I am not sure of the exact rules but I think that when you use a locally qualified variable (ie you do not state the code block, package or stored function/procedure it comes from) it looks at the current variables as defined in the current and parent PL/SQL blocks first and then looks at STANDARD and then DBMS_STANDARD. I am not going to mess with STANDARD or DBMS_STANDARD, even on my play box, to find out the exact order of the two. If I spent 10 minutes looking at the specifications I might be able to see that one references the others I suppose…
This is part of the specification from DBMS_STANDARD:
package dbms_standard is

  -- types
  type ora_name_list_t is table of varchar2(64);

  -- DBMS_ID and DBMS_QUOTED_ID define the length of identifiers
  -- in objects for SQL, PL/SQL and users.
  subtype dbms_id is varchar2(30);
  subtype dbms_quoted_id is varchar2(32);

  subtype dbms_id_30 is varchar2(30);
  subtype dbms_quoted_id_30 is varchar2(32);
  subtype dbms_id_128 is varchar2(128);
  subtype dbms_quoted_id_128 is varchar2(130);

  -- Trigger Operations
  procedure raise_application_error(num binary_integer, msg varchar2,
      keeperrorstack boolean default FALSE);
    pragma interface (C, raise_application_error);         -- 1 (see psdicd.c)
    pragma restrict_references (raise_application_error, WNPS, RNPS, WNDS, RNDS);

  function inserting return boolean;
    pragma interface (C, inserting);                        -- 2
    pragma restrict_references (inserting, WNPS, RNPS, WNDS);

  function deleting  return boolean;
    pragma interface (C, deleting);                         -- 3
    pragma restrict_references (deleting, WNPS, RNPS, WNDS);

  function updating  return boolean;
    pragma interface (C, updating);                         -- 4
    pragma restrict_references (updating, WNPS, RNPS, WNDS);
You won't find a package body of DBMS_STANDARD – that is because, I believe, all entries in the package specification are types or functions/procedures that lead to C functions, via the Ada-like {if you did not know, PL/SQL is based on the Ada language} pragma directive "pragma interface (C, {something})", which says this function/procedure is coded in another language (C in this case) and is called {something}. Don't ask me more, I don't know.
e.g.:
procedure commit;
pragma interface (C, commit);
Even the base data types are defined in STANDARD:
package STANDARD AUTHID CURRENT_USER is              -- careful on this line; SED edit occurs!

  /********** Types and subtypes, do not reorder **********/
  type BOOLEAN is (FALSE, TRUE);

  type DATE is DATE_BASE;

  type NUMBER is NUMBER_BASE;
  subtype FLOAT is NUMBER; -- NUMBER(126)
  subtype REAL is FLOAT; -- FLOAT(63)
  subtype "DOUBLE PRECISION" is FLOAT;
  subtype INTEGER is NUMBER(38,0);
  subtype INT is INTEGER;
  subtype SMALLINT is NUMBER(38,0);
  subtype DECIMAL is NUMBER(38,0);
  subtype NUMERIC is DECIMAL;
  subtype DEC is DECIMAL;

  subtype BINARY_INTEGER is INTEGER range '-2147483647'..2147483647;
  subtype NATURAL is BINARY_INTEGER range 0..2147483647;
  subtype NATURALN is NATURAL not null;
  subtype POSITIVE is BINARY_INTEGER range 1..2147483647;
  subtype POSITIVEN is POSITIVE not null;
  subtype SIGNTYPE is BINARY_INTEGER range '-1'..1;  -- for SIGN functions

  type VARCHAR2 is NEW CHAR_BASE;
  subtype VARCHAR is VARCHAR2;
  subtype STRING is VARCHAR2;

  subtype LONG is VARCHAR2(32760);
  ...
Anyway, I leave the reader to go and look at the package specifications and the STANDARD package body {some of which I show at the end} but I leave you with a repeat of the above warnings: Don’t go replacing the core variables and functions in your PL/SQL code just because you can and do not, repeat, do NOT mess with those two packages. I am sure Oracle Corp will throw your support contract out the window if you do.
As promised above, here is the code for SYSDATE in SYS.STANDARD, and it is very interesting – in the manner mentioned above, it calls a function (pessdt) that only calls a C program (presumably to get the datetime from the server clock) and, failing that, reverts to the SQL method of selecting the pseudocolumn from DUAL. SYSTIMESTAMP, below it, is the same:
function pessdt return DATE;
  pragma interface (c,pessdt);

-- Bug 1287775: back to calling ICD.
-- Special: if the ICD raises ICD_UNABLE_TO_COMPUTE, that means we should do
-- the old 'SELECT SYSDATE FROM DUAL;' thing.  This allows us to do the
-- SELECT from PL/SQL rather than having to do it from C (within the ICD.)
function sysdate return date is
  d date;
begin
  d := pessdt;
  return d;
exception
  when ICD_UNABLE_TO_COMPUTE then
    select sysdate into d from sys.dual;
    return d;
end;

--
--
--
function pessts return timestamp_tz_unconstrained;
  pragma interface (c,pessts);

-- Special: if the ICD raises ICD_UNABLE_TO_COMPUTE, that means we should do
-- the old 'SELECT systimestamp FROM dual;' thing.  This allows us to do the
-- SELECT from PL/SQL rather than having to do it from C (within the ICD.)
FUNCTION systimestamp RETURN timestamp_tz_unconstrained
IS
  t timestamp_tz_unconstrained;
BEGIN
  t := pessts;
  RETURN t;
EXCEPTION
  WHEN ICD_UNABLE_TO_COMPUTE THEN
    SELECT systimestamp INTO t FROM sys.dual;
    RETURN t;
END;
Table High Water Mark and How Empty the Table Is April 30, 2012
Posted by mwidlake in performance, statistics. Tags: data dictionary, performance, statistics
On a recent post about reducing the high water mark in a table by shrinking it someone asked if you could tell how much “space” there was in a table that could be recovered by compressing it.
Yes. As pseudo-code:
select number_of_rows*average_row_length. That is how much data there is.
Divide by the block size, to give how many blocks-worth of data you have.
Multiply this by an "overhead" fudge factor of about 1.2 to 1.25, to allow for wasted space (see later).
Compare to the number of blocks used by the table.
Here is some code to do it, showing some extra data and how we calculate the effective percentage of space used (pct_spc_used). It runs against an example table ‘HWM’
select uts.blocks                                             blks_used
      ,uts.avg_space
      ,uts.num_rows
      ,uts.avg_row_len
      ,uts.empty_blocks                                       empty_blks
      ,usse.blocks                                            alloc_blks
      ,greatest(uts.blocks,1)/greatest(usse.blocks,1)         pct_hwm
      ,uts.num_rows*uts.avg_row_len                           data_in_bytes
      ,(uts.num_rows*uts.avg_row_len)/8192                    data_in_blks
      ,((uts.num_rows*uts.avg_row_len)/8192)*1.25             mod_data_in_blks
      ,(((uts.num_rows*uts.avg_row_len)/8192)*1.25)/usse.blocks pct_spc_used
from user_tab_statistics uts
    ,user_segments       usse
where uts.table_name='HWM'
and   uts.table_name=usse.segment_name
/
--
 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       1221      50000         213         34       1664   .97957
     10650000   1300.04883       1625.06104    .97659918
I am collecting the data from USER_TAB_STATISTICS and USER_SEGMENTS. For this code to work you must have reasonably good stats against the table.
I’d like to run through some of the columns I have selected:
First of all, the columns AVG_SPACE and EMPTY_BLKS are not populated by dbms_stats.gather_table_stats. They are populated by the deprecated ANALYZE command, which you should NOT have used to gather table stats since V10 came along. These columns hold values here only because I ran an ANALYZE to get the data in there, as well as a dbms_stats gather.
Next, I collect the allocated blocks (ALLOC_BLKS) from USER_SEGMENTS {and for this demo I just ignore the potential for partitioned tables} and I compare this to BLKS_USED to get the High Water Mark, as a percentage of the table. I do this because EMPTY_BLOCKS is set to zero if you have never used ANALYZE and, even if you did, unless you use this deprecated command all the time the value will not change.
On the second line of output I calculate DATA_IN_BYTES as a simple num_rows*avg_row_len and convert it into blocks {for simplicity I do not collect the block size; I know it is 8k}. I then apply my "overhead" fudge factor. A block has a header using around 100 bytes {I've not checked the exact figure for years}, PCTFREE can be varied but defaults to 10%, and as only whole rows fit, an average of half a row of space is empty in each "full" block. Thus I reduce the space available by 20-25%. In this case 25%, as my rows are large.
Finally, I compare this modified data volume to the used blocks to get the actual percentage of space used.
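If you don't want to hard-code the 8k, you can pick the block size up from the tablespace instead. A sketch of the same calculation with the block size looked up via USER_TABLESPACES (adjust the table name and the join as you see fit):

select uts.blocks                                                blks_used
      ,usse.blocks                                               alloc_blks
      ,(uts.num_rows*uts.avg_row_len)/uta.block_size             data_in_blks
      ,((uts.num_rows*uts.avg_row_len)/uta.block_size)*1.25      mod_data_in_blks
      ,(((uts.num_rows*uts.avg_row_len)/uta.block_size)*1.25)
         /greatest(usse.blocks,1)                                pct_spc_used
from  user_tab_statistics uts
     ,user_segments       usse
     ,user_tablespaces    uta
where uts.table_name          = 'HWM'
and   uts.table_name          = usse.segment_name
and   usse.tablespace_name    = uta.tablespace_name
/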
Below I run through creating some test data, looking at the stats and my calculated High Water Mark and pct_space_used and finally shrink my table to see if my guesstimate is a reasonable guesstimate:
drop table hwm purge;
prompt populate table
set feed on
create table hwm
(id   number(10)
,num1 number(2)
,vc1  varchar2(100)
,vc2  varchar2(100)
)
/
insert into hwm
select rownum
      ,trunc(dbms_random.value(1,100))
      ,lpad('A',100,'A')
      ,lpad('B',100,'B')
from dual connect by level < 50001
/
50000 rows created.

exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'HWM')

--where is the HWM compared to total segment size
 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630          0      50000         210          0       1664   .97957
     10500000   1281.73828       1602.17285   .962844262

NB AVG_SPACE and EMPTY_BLKS are not populated (shown as zero).
The high water mark is 1630 blocks out of 1664 in the segment.
My calculated PCT_SPC_USED is 96%. That is probably close enough.
{remember, the last used block will be only partly used, accounting for a bit of the difference}

-- I will use ANALYZE to fill the missing columns
analyze table hwm compute statistics;

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       1150      50000         213         34       1664   .97957
     10650000   1300.04883       1625.06104    .97659918

Now those two columns are populated. Note the slightly different AVG_ROW_LEN, even though
dbms_stats used 100% (as the table is so small) and ANALYZE was a compute.

-- clear 90% of the data randomly
45461 rows deleted.

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       1150       4539         210         34       1664   .97957
       953190   116.356201       145.445251   .087407002

PCT_HWM is not altered of course but PCT_SPC_USED has dropped dramatically. The table is now
only 8.7% used, according to my calculations (i.e. about 90% empty).
The BLKS_USED does not change. The AVG_SPACE and EMPTY_BLOCKS are the same because I used
dbms_stats to update the statistics and it DOES NOT ALTER the columns it does not populate.
Thus you have no idea how recent those columns are if you use a mixture of commands.

-- clear some blocks completely by deleting a range
2181 rows deleted.

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       1150       2358         210         34       1664   .97957
       495180   60.4467773       75.5584717   .045407735

Now the PCT_SPC_USED is down to 4.5%.

-- has EMPTY_BLOCKS changed if I use ANALYZE?
analyze table hwm compute statistics;

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630       7682       2358         213         34       1664   .97957
       502254   61.3103027       76.6378784   .046056417

As you can see, if I use ANALYZE then AVG_SPACE alters. But EMPTY_BLOCKS does not, even though
I cleared a chunk of the table. So there are blocks that can be reused but they are not listed as empty.
-- I'll just take a quick side-step and show a quick "oddity" about dbms_stats
-- delete the stats (using dbms_stats)
exec dbms_stats.delete_table_stats(ownname=>user,tabname=>'HWM')

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
                                                               1664

dbms_stats.delete_table_stats clears ALL the statistics, even the ones it does not populate.

-- and now collect them via dbms_stats again
 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
      1630          0       2358         210          0       1664   .97957
       495180   60.4467773       75.5584717   .045407735

-- now to shrink the table
alter table hwm enable row movement;
alter table hwm shrink space;

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
---------- ---------- ---------- ----------- ---------- ---------- --------
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
------------- ------------ ---------------- ------------
        72          0       2358         210          0         80   .90000
       495180   60.4467773       75.5584717   .944480896

So I calculated that there was about 75 blocks of data in that table. Having shrunk it, I was a little bit out.
Having run through those examples we can see that the accuracy of the PCT_SPC_USED is down to the fudge factor employed but is probably close enough at 25%. After all, you are only likely to shrink a table that very clearly would benefit from it.
Dropped Tables, Hiding Extents and Slow DBA_FREE_SPACE Queries February 16, 2012
Posted by mwidlake in database design, internals, performance. Tags: data dictionary, design, performance, SQL
My last post was on slow dictionary queries caused by having many, many thousands of extents in your database. This post is about a special case of this problem, which I encountered recently. It was on an Exadata box – Exadata is not particularly relevant to the issue, but I’m curious to see if mentioning Exadata will make this post more popular 🙂
I was querying the used and free space on a very small database on the client’s X2-2 1/4 rack Exadata machine. The query was really slow, taking about 30 seconds. This is a FAST box, what is going on?
I quickly realised that the problem was specific to one tablespace:
> @SPC_SUM
Enter the tablespace (or leave null)> DATA_01

TS_NAME              ORD      SUM_BLKS        SUM_K  MAX_CHNK_K NUM_CHNK
-------------------- ----- ----------- ------------ ----------- --------
DATA_01              alloc     262,144    2,097,152   2,097,152        1
                     free       63,128      505,024     504,384       11

2 rows selected.

Elapsed: 00:00:00.21

> @SPC_SUM
Enter the tablespace (or leave null)> USERS

TS_NAME              ORD      SUM_BLKS        SUM_K  MAX_CHNK_K NUM_CHNK
-------------------- ----- ----------- ------------ ----------- --------
USERS                alloc     748,320    5,986,560   5,372,160        2
                     free      127,904    1,023,232       6,144    3,058

2 rows selected.

Elapsed: 00:00:26.05
We can see it takes 0.21 seconds for tablespace DATA_01, 26.05 seconds for the USERS. Yes, USERS is larger but not 150 times larger. What is going on? Well, as the title of this post suggests, it is down to dropping tables. The below shows my route to that conclusion.
What I could see was that for the USERS tablespace the maximum chunk of free space was relatively small compared to the sum of free space – 6MB out of 1GB, pretty much – and that there were a lot of individual pieces of free space, 3,058. This tablespace was shattered into a lot of bits. So, what sort of extent size management do we have? How big are the tables and indexes in this tablespace? {NB I already knew we had no partitions so I did not have to worry about that}.
select tablespace_name,initial_extent,next_extent
      ,extent_management,min_extlen,allocation_type
from dba_tablespaces
where tablespace_name ='USERS'

TABLESPACE_NAME                INITIAL_EXTENT NEXT_EXTENT EXTENT_MAN MIN_EXTLEN ALLOCATIO
------------------------------ -------------- ----------- ---------- ---------- ---------
USERS                                   65536             LOCAL           65536 SYSTEM

select tablespace_name,blocks,count(*)
from dba_extents
where tablespace_name = 'USERS'
group by tablespace_name,blocks
having count(*) >1
order by blocks desc,tablespace_name

TABLESPACE_NAME                    BLOCKS   COUNT(*)
------------------------------ ---------- ----------
USERS                                2560          2
USERS                                2048          3
USERS                                1536          7
USERS                                1408          5
USERS                                1280          2
USERS                                1248          2
USERS                                1152          2
USERS                                1024        229
USERS                                 896         15
USERS                                 768         21
USERS                                 736          3
USERS                                 720          3
USERS                                 704          2
USERS                                 672          2
USERS                                 640         25
USERS                                 624          2
USERS                                 576          2
USERS                                 512        117
USERS                                 400          2
USERS                                 384         34
USERS                                 360          2
USERS                                 312          2
USERS                                 288          4
USERS                                 256         49
USERS                                 248          2
USERS                                 240          2
USERS                                 192          5
USERS                                 160          4
USERS                                 128       1165
USERS                                   8       1788

30 rows selected.
So we have system controlled extent allocation, this should be fine. I did a quick check of the table contents – 300 or so tables and 200 or so indexes. And, as you can see from the check of extents above, very few larger extents and a lot of small ones. {As a little aside, note the relatively large number of 8-block, 128-block, 256-block and 1024-block extents; These are the sizes that LMTs with system managed extents tend to use unless a similar sized chunk is found to be available, in which case it is used}.
I did some other checks on indexes and segment sizes and it all boiled down to one thing. There were not a lot of things in this tablespace, what was there was small and… The total of all those table and index blocks was way less than the used space in the tablespace.
That is the major indicator of what is going on here. The difference in space used in the tablespace and the total of the visible tables and indexes.
The last link in the chain is the recycle bin.
select owner,ts_name,count(*)
from dba_recyclebin
group by owner,ts_name

OWNER                          TS_NAME                          COUNT(*)
------------------------------ ------------------------------ ----------
USER1                          USERS                              542356
USER1                                                                  2
WEGWEGWEG                      USERS                                  97
KKKUKUYLLX                     USERS                                 149
USOVFPKEKS                     USERS                                   3
....
ERHJTRTTTURT                   USERS                                   4

11 rows selected.
That's 542,356 objects in the recyclebin for one user, in the one tablespace. My problem tablespace. The penny dropped; something I had already noticed, and was a little uncomfortable about, fell into place.
The client is using Oracle Data Integrator (ODI) to pull data together and put it into the database. The process they are using basically creates a table, loads some data into it and then pushes the data into the target tables. Then it drops the table. The drop is not done with a "DROP TABLE … PURGE;". This is done many, many times per load cycle, which is intended to run several times a day.
Something you should always keep in mind with the recyclebin is that the tables, indexes and their extents that go into the recycle bin do not get reported in several of the key data dictionary views. I did mention this in a blog post way back but maybe a whole post on it is called for.
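One view that does show it is DBA_RECYCLEBIN itself, which has a SPACE column (in blocks), so you can at least total up what the bin is holding per tablespace. A quick sketch (the MB conversion assumes an 8k block size, adjust for yours):

-- roughly how much space the recyclebin is holding, per tablespace
select ts_name
      ,count(*)             objects
      ,sum(space)           blocks
      ,sum(space)*8/1024    approx_mb   -- assumes an 8k block size
from   dba_recyclebin
group by ts_name
order by sum(space) desc;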
So, the tablespace was being shattered by the constant creating and dropping of small tables. It was hidden from easy view due to how the recyclebin is (not) exposed in data dictionary views.
It is not good practice to constantly create and drop lots of tables. As well as the admittedly rather odd impact that this posting is all about, there is a lot of internal work involved for the Oracle database in creating and dropping tables. It is an overhead best avoided. The client had good reasons for this approach but now they are going to look at the alternatives.
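For reference, if you do want such drops to bypass the recyclebin, or you need to empty it, the options look something like the sketch below (MY_STAGING_TABLE is just a placeholder name, and obviously agree any purging with whoever owns the data first):

-- drop without going via the recyclebin
drop table my_staging_table purge;

-- empty your own recyclebin, or just one tablespace's portion of it
purge recyclebin;
purge tablespace users user user1;

-- as a DBA, empty the lot
purge dba_recyclebin;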
I’ve not actually proven in this posting that all those dropped tables is the actual cause of the slow performance in querying the free space in that tablespace {though the corroborative evidence is very strong}. Neither have I said how we cleared up the mess. I’ll cover both of those in the next post on this mini-thread.
Slow querying of DBA_EXTENTS, DBA_FREE_SPACE and dropping tables February 14, 2012
Posted by mwidlake in performance. Tags: data dictionary, performance
Are you finding queries to identify free space, the size of segments or details of extents in your database are very slow? You could have a database with a very, very large number of extents. If you are on version 9 or up, I would suggest you check the extent management settings for any tablespace holding large segments (see later).
{Caveat – I'm writing this posting based on what I remember of some issues I had on two V10.2 databases, 2 and 3 years ago respectively, and those systems are no longer available to me. So please treat everything I say here with some caution – I've not checked and verified it to the level I normally would. But I wanted to move this information out of another posting I am writing on a recent experience, which I will verify…}
First, what this article is NOT about. It has been known for a very long time that creating tables with a large number of extents can have a negative impact on SQL performance on that table. Except it never really did and it was a myth. Way back prior to Oracle 7.3 you could only create so many extents per segment, depending on block size, but that was another issue. It used to be argued that SQL select against such tables with many extents was slower. I did some tests and it was not – unless you were in the strange situation where your extent size was less than your multi-block read count, and even then the impact was not huge, it was a slowdown of a few percent to maybe 25%.
However, dropping such tables, truncating such tables and queries against the dictionary objects that deal with extents and free space could and still can become very slow. As we have progressed through the Oracle versions from 8, 9 and 10 this problem has become less common and the impact has become less, mostly due to Locally Managed Tablespaces (LMTs) and Automatic Segment Space Management {though that is more in respect of concurrent DML than select}.
LMTs in particular have generally removed the issue. If you do not use LMTs and have no very, very pressing reason not to {like 3rd party support for applications}, then swap to LMTs. LMTs have been the default for user-defined tablespaces since Oracle 9 and have two options: Uniform (fixed) extent sizes, or Automatic, where Oracle uses 8-block extents for a new segment to start, then 64-block extents, then 1024-block extents as the segment grows {I might be wrong on the exact size details but don't worry about it, I certainly don't}. You can check the settings for tablespaces as demonstrated below. I create two tablespaces, one with uniform extent sizes and then one with auto-managed extent sizes, and check the relevant information (this is on 11.2.0.3):
create tablespace mdw_uni_1m
datafile 'C:\APP\MARTIN\ORADATA\MDW1123\MDW_UNI_1M' size 100m
extent management local uniform size 1M;

create tablespace mdw_auto
datafile 'C:\APP\MARTIN\ORADATA\MDW1123\MDW_AUTO' size 100m
extent management local autoallocate;

select tablespace_name,initial_extent,next_extent
      ,extent_management,min_extlen,allocation_type
from dba_tablespaces
where tablespace_name like 'MDW%';

TABLESPACE_NAME    INITIAL_EXTENT NEXT_EXTENT EXTENT_MAN MIN_EXTLEN ALLOCATIO
------------------ -------------- ----------- ---------- ---------- ---------
MDW_UNI_1M                1048576     1048576 LOCAL         1048576 UNIFORM
MDW_AUTO                    65536             LOCAL           65536 SYSTEM
As you can see, tablespace MDW_UNI_1M uses uniform extents of 1M and MDW_AUTO has system managed allocation and starts with 64K extents – 8 blocks with my 8k block size.
As a quick demo, I’ll just create and populate two simple tables and see what extent sizes are created for them:
create table mdw_big
tablespace mdw_auto
as select rownum id,lpad('A',1000,'A') pad
from dual connect by level <10000

create table mdw_big_uni
tablespace mdw_uni_1m
as select rownum id,lpad('A',1000,'A') pad
from dual connect by level <10000

select owner,segment_name,blocks,count(*)
from dba_extents
where segment_name like 'MDW%'
group by owner,segment_name,blocks

OWNER           SEGMENT_NAME        BLOCKS   COUNT(*)
--------------- --------------- ---------- ----------
ERIC            MDW_BIG_UNI            128         12
ERIC            MDW_BIG                  8         16
ERIC            MDW_BIG                128         11

3 rows selected.
So, how do issues with large numbers of extents still arise with modern Oracle systems? Well, the two situations I've seen on Oracle 10 had the same cause {and, as a teaser for later this week, I've seen a variation of this issue on a nice, shiny Exadata X2-2 Oracle 11.2.0.3 box 🙂 You can read about that here}. What alerted me was slow performance querying the data dictionary, in particular my scripts for checking free space, the size of segments and how many extents they consisted of.
If you create a tablespace with fixed extent sizes of eg 64K and then create a table in there that is 100GB in size, you will get an awful lot of extents. Now make it worse and have a partitioned table that ends up being several hundreds of GB in size with all those segments in that tablespace (or similarly defined tablespaces).
Since I hit the problem twice myself, I’ve chatted to others who have had the same issue. The above usually happens because of a mistake. The tablespace(s) in question are set up to hold small tables and then get used for large tables, either by a mistake in stating the exact tablespace to use or having the tablespace default to your default tablespace – which just happens to be a tablespace with fixed but small extent sizes.
The end result is a massive number of small extents in these tablespaces, usually with extents for different objects mixed in. Some dictionary queries slow down and, in particular, anything to do with looking at extents. For one site, I was trying to use my own code to gather statistics on tables that replaced the standard automated job. It’s fairly “smart” code and chooses a sample size based on the size of the segments. Only, the data dictionary was performing so slowly for the check on segment size that it was taking over 5 seconds to get the information – longer than some of the stats gathers.
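A cheap way of spotting the worst offenders is to look at the extent counts already held in DBA_SEGMENTS, rather than aggregating DBA_EXTENTS yourself (which is the very thing that is slow). Something like this sketch:

-- segments with the largest number of extents (top 20)
select owner, segment_name, segment_type, tablespace_name, extents, blocks
from  (select owner, segment_name, segment_type, tablespace_name, extents, blocks
       from   dba_segments
       order by extents desc)
where rownum <= 20;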
You can logically understand why dropping or truncating the table is slow. Oracle has to sort out all those extents, remove the information from the data dictionary. This is not helped by the fact that part of the data dictionary is being slowed down due to all those pesky records…
You MAY be able to get some relief from this situation by gathering fixed object statistics. I did so at one site, where the queries against free_space and segment size sped up by around 80%. I have no demonstrated proof of this, it is just what I saw in one situation, so feel free to try it but don’t sue me if it does not help. Also, it took over 3 hours to gather the fixed object stats and you only do this sort of thing, untested, on a production system if you are already in a bad place.
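For reference, the gather itself is a single call, though as I said it can take hours, so do not run it casually on a busy production system:

-- gather statistics on the X$ fixed objects (can be very slow - test it somewhere safe first)
exec dbms_stats.gather_fixed_objects_stats;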
{update – I just tested this on a private 11.2 db that was taking 10.3 seconds to count all extents, all 12,742 of them. Gathering fixed object stats made no difference at all.}
However, the real answer is to laboriously rebuild those segments in tablespaces with correctly specified uniform extent sizes. Which we did, over several weeks, and it made a difference.
If I was doing this task today, and I could get the outage to do it, I would create COPIES of those segments that were in the wrong tablespaces, re-name and re-apply any constraints, move the other, smaller tables and indexes to a new tablespace – and then drop the old tablespaces including contents. Why? Because dropping a table with lots and lots of small extents seemed to take a very long time (many minutes per partition, and we had thousands of them). Again, my proof is lost in the mists of time, but that's what I would aim to do.
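If you can take the simpler route of relocating segments rather than copying them, the basic commands are along these lines (MY_BIG_TABLE, P2012_01 and NEW_TS are placeholder names; remember that moving a table marks its indexes UNUSABLE, so they need rebuilding anyway):

-- move a table, or one partition of it, into a tablespace with sensible extent sizing
alter table my_big_table move tablespace new_ts;
alter table my_big_table move partition p2012_01 tablespace new_ts;

-- indexes on a moved table become UNUSABLE and must be rebuilt
alter index my_big_table_pk rebuild tablespace new_ts;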
Lack of Index and Constraint Comments November 24, 2011
Posted by mwidlake in Architecture, database design, development. Tags: Architecture, data dictionary, design, documentation, sql*plus
Something I’ve just reminded myself of is that under Oracle you cannot add a comment on an index or a constraint. You can only add comments on tables, views, materialized views, columns of those object types and a couple of esoteric things like Operators, Editions and Indextypes.
Here is an example of adding comments to tables and columns:
set pause off feed off
drop table mdw purge;
create table mdw(id number,vc1 varchar2(10));

comment on table mdw is 'Martin Widlake''s simple test table';
comment on column mdw.id is 'simple numeric PK sourced from sequence mdw_seq';
comment on column mdw.vc1 is 'allow some random text up to 10 characters';
--
desc user_tab_comments
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 TABLE_NAME                                            NOT NULL VARCHAR2(30)
 TABLE_TYPE                                                     VARCHAR2(11)
 COMMENTS                                                       VARCHAR2(4000)
--
select * from dba_tab_comments where table_name='MDW'
/
OWNER                          TABLE_NAME                     TABLE_TYPE
------------------------------ ------------------------------ -----------
COMMENTS
----------------------------------------------------------------------------------------------------
MDW                            MDW                            TABLE
Martin Widlake's simple test table

select * from dba_col_comments where table_name='MDW'
order by column_name
/
OWNER                          TABLE_NAME                     COLUMN_NAME
------------------------------ ------------------------------ --------------
COMMENTS
----------------------------------------------------------------------------------------------------
MDW                            MDW                            ID
simple numeric PK sourced from sequence mdw_seq

MDW                            MDW                            VC1
allow some random text up to 10 characters

-- now to add a big comment so need to use the '-' line continuation character in sqlplus
--
comment on table mdw is 'this is my standard test table.-
As you can see it is a simple table and has only two columns.-
It will be populated with 42 rows as that is the solution to everything.'
/

select * from dba_tab_comments where table_name='MDW'

OWNER                          TABLE_NAME                     TABLE_TYPE
------------------------------ ------------------------------ -----------
COMMENTS
----------------------------------------------------------------------------------------------------
MDW                            MDW                            TABLE
this is my standard test table. As you can see it is a simple table and has only two columns. It w
ill be populated with 42 rows as that is the solution to everything.
Adding comments on tables, views and columns seems to have dropped out of fashion over the years but I think it is still a very useful feature of Oracle and I still add them (though I have been getting a little slack about it myself over the last 3 or 4 years, which I must stop).
Comments are great, you can put 4000 characters of information into the database about each table, view and column. This can be a brief description of the object, a full explanation of what a column is to hold or even a list of typical entries for a column or table.
But you can’t add a comment on indexes or constraints. Why would I want to? Well, constraints and indexes should only be there for a reason and the reason is not always obvious from either the names of the columns or the name of the constraint or index, especially where you have a naming standard that forces you to name indexes and constraints after the columns they reference.
When you design a database, do a schema diagram or an ERD, you label your relationships between entities/tables. It tells you exactly what the relationship is. You might create an index to support a specific method of access or particular business function. You might alter the index in a way not immediately obvious to the casual observer, such as to allow queries that use the index to avoid having to visit the table. All of those things will, of course, be fully documented in the maintained project documentation in the central repository, available and used by all…
If I was able to add comments to constraints and indexes within the database then they would be there. You move the system from one platform to another, they are there. If for any wildly unlikely reason the central documentation lets you down, the information is always there in the database and easy to check. You may not be able to track down the original design documents but you have the database in front of you, so comments in it will persist and be very easy to find.
Lacking the ability to add comments on indexes and constraints, I have to put them at the table level, which I always feel is a kludge. I might actually raise an enhancement request for this, but as Oracle 12 is already nailed down, it will have to wait until Oracle 14. (A little bird told me Larry said there would be no Oracle 13…).
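Just to show the kludge I mean, here is a sketch of folding index and constraint notes into the table comment (ORDER_LINES, ORLI_ORD_STATUS and ORLI_ORD_FK are made-up names, and the '-' at the end of each line is just the SQL*Plus continuation character, as used above):

comment on table order_lines is 'Order lines for the ordering subsystem.-
IDX ORLI_ORD_STATUS (order_id,status,created_date): supports the despatch-backlog query, no table visit needed.-
FK ORLI_ORD_FK: every order line must belong to an existing order.';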
Why is my SYSAUX Tablespace so Big? Statistics_level=ALL June 2, 2011
Posted by mwidlake in AWR, performance. Tags: AWR, data dictionary, performance
One of my most popular postings is about why your SYSTEM tablespace could be rather large. Recently I’ve had issues with a SYSAUX tablespace being considerably larger than I expected, so I thought I would do a sister posting on the reason.
The client I was working with at the time was about to go live with a new application and database. For various reasons I was a little anxious about how the Java application (the User Interface) would actually call the stored PL/SQL code I had helped develop. Initial workloads would be low and so I asked that the STATISTICS_LEVEL be set to ALL, so that bind variables (amongst other things) would be gathered. This is on version 10.2.0.4, btw, enterprise edition and 4-node RAC.
We went live, issues were encountered and resolved, the usual way these things work. Then, a few weeks in and when everything was still very "exciting" from a problem-resolution perspective, I got an odd email from the DBA team: did we want another datafile added to the SYSAUX tablespace? Huh? I checked. I'd been watching the size of our application's tablespaces but not the others {well, I was not supposed to be a DBA and I was watching an awful lot of other things}. Our SYSAUX tablespace was around 160GB in size, having pretty much filled its 5th datafile. Why? I checked to see what was taking up the space in the tablespace:
select * from
(select owner,segment_name||'~'||partition_name segment_name,bytes/(1024*1024) size_m
 from dba_segments
 where tablespace_name = 'SYSAUX'
 ORDER BY BLOCKS desc)
where rownum < 40

OWNER      SEGMENT_NAME                                                SIZE_M
---------- ------------------------------------------------------ ----------
SYS        WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_3911               27,648
SYS        WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_3911            26,491
SYS        WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_3537               23,798
SYS        WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_3537            22,122
SYS        WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_4296               17,378
SYS        WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_4296            16,818
SYS        WRH$_ACTIVE_SESSION_HISTORY~WRH$_ACTIVE_14459270_3911          136
SYS        WRH$_SQLSTAT~WRH$_SQLSTA_14459270_3911                          96
SYS        WRH$_SQLSTAT~WRH$_SQLSTA_14459270_3537                          72
SYS        WRH$_SQLSTAT~WRH$_SQLSTA_14459270_4296                          47
SYS        WRH$_LATCH_MISSES_SUMMARY_PK~WRH$_LATCH__14459270_3537          45
SYS        I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST~                                 41
SYS        WRH$_SYSMETRIC_SUMMARY~                                         40
SYS        WRH$_LATCH_MISSES_SUMMARY_PK~WRH$_LATCH__14459270_              37
As you can see, almost all the space is being taken up by WRH$_LATCH_CHILDREN and WRH$_LATCH_CHILDREN_PK partitions. They are massive compared to the other objects. A quick google did not come up with much other than many hits just listing what is in SYSAUX, plus the odd person also seeing SYSAUX being filled up with these objects and suggested ways to clear down space, but nothing about the cause.
I had a chat with the DBAs and we quickly decided that this was going to be something to do with AWR given the name of objects – “WRH$_” objects are the things underlying AWR. The DBA suggested my settings of 15 minute intervals and 35 day retention was too aggressive. I knew this was not the case, I’ve had more aggressive snapshot intervals and longer retention periods on far busier systems than this. I did not have access to Metalink at that point so I asked the DBAs to raise a ticket, which they duly did.
Oracle support cogitated for a couple of days and came back with the advice to reduce the retention period. Hmmmm. Via the DBA I asked Oracle support to explain why those objects were so large when I had not seen this issue on several other systems. Was it a bug? I had by now corroborated with a friend from a previous site with 5 minute snapshot intervals and two months retention period and their SYSAUX tablespace was about 10GB all in. I did not want to go changing things if we did not know it would fix the issue as we really wanted to stop the growth of SYSAUX as soon as possible, not just try a hunch.
As you probably realise from the title of this blog, the issue was not the snapshot interval or retention period but the STATISTICS_LEVEL=ALL. The one and only hit you get in metalink if you search on WRH$_LATCH_CHILDREN is note 874518.1. From V10.1.0.2 to V11.1.0.7 setting this parameter to ALL is known to create a lot of data about Latch children and not clear it down when the AWR data is purged (Bug 8289729). The advice was to change STATISTICS_LEVEL and make the snapshot interval larger. I’d suggest you just need to alter the STATISTICS_LEVEL, unless you really, really need that extra information gathered. It seemed to take Oracle Support an extra day or two to find that note for us. {I’ve since checked out Metalink directly to confirm all this}.
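For reference, the change itself is trivial; it was the change control that took the time. Something along these lines (the snapshot settings call is only needed if you do decide to alter the interval and retention too; the units there are minutes):

-- stop gathering the very detailed (ALL) statistics, across all RAC instances
alter system set statistics_level = typical scope=both sid='*';

-- optionally change the AWR snapshot interval and retention (both in minutes)
exec dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 35*24*60);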
So with a known issue we felt confident that altering the initialisation parameter would solve the issue. It took a while for us to change the STATISTICS_LEVEL on the production system – Change Control for that site is rather robust. This allowed us to see some other impacts of this issue.
The mmon process which looks after AWR data was becoming a top session in our OEM performance screens. In particular, a statement with SQL id 2prbzh4qfms7u that inserted into the WRH$_LATCH_CHILDREN table was taking several seconds to run each time and was running quite often {I include the SQL ID as it may be the same on many oracle V10 systems as it is internal code}:
This was doing a lot of IO, by far the majority of the IO on our system at the time – it was a new system and we had been able to tune out a lot of the physical IO.

{Screenshot: the physical IO requirements and 15-20 second elapsed time made this our most demanding statement on the system.}
We also now started to have issues with mmon running out of undo space when it ran at the same time as our daily load. This was particularly unfortunate as it coincided in a period of “intense management interest” in the daily load…
What was happening to the size of the SYSAUX tablespace?
Enter the tablespace (or leave null)> sys

TS_NAME              ORD      SUM_BLKS        SUM_K  MAX_CHNK_K NUM_CHNK
-------------------- ----- ----------- ------------ ----------- --------
SYSAUX               alloc  58,187,904  465,503,232  33,553,408       14
                     free       10,728       85,824      21,504       20
SYSTEM               alloc     128,000    1,024,000   1,024,000        1
                     free       68,360      546,880     546,752        3

4 rows selected.

select * from
(select owner,segment_name||'~'||partition_name segment_name,bytes/(1024*1024) size_m
 from dba_segments
 where tablespace_name = 'SYSAUX'
 ORDER BY BLOCKS desc)
where rownum < 40

OWNER    SEGMENT_NAME                                                 SIZE_M
-------- -------------------------------------------------------- ----------
SYS      WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_6201                  30262
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_5817                  29948
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_5435                  28597
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_4675                  28198
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_3911                  27648
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_5817               27144
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_6585                  26965
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_6201               26832
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_4675               26741
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_3911               26491
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_4296                  26307
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_5435               26248
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_4296               25430
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_6585               25064
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_5058                  24611
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_5058               23161
         WRH$_LATCH_CHILDREN~WRH$_LATCH__14459270_6966                   9209
         WRH$_LATCH_CHILDREN_PK~WRH$_LATCH__14459270_6966                8462
         WRH$_SYSMETRIC_SUMMARY~                                          152
         WRH$_ACTIVE_SESSION_HISTORY~WRH$_ACTIVE_14459270_3911            136
         WRH$_SQLSTAT~WRH$_SQLSTA_14459270_3911                            96

@sysaux_conts

OWNER                          OBJ_PART_NAME                                SIZE_M
------------------------------ ---------------------------------------- ----------
SYS                            WRH$_LATCH_CHILDREN-WRH                  231745.063
SYS                            WRH$_LATCH_CHILDREN_PK-WRH               215573.063
SYS                            WRH$_SQLSTAT-WRH                           711.0625
SYS                            WRH$_LATCH_MISSES_SUMMARY_PK-WRH           439.0625
SYS                            WRH$_ACTIVE_SESSION_HISTORY-WRH            437.0625
SYS                            WRH$_LATCH_PARENT-WRH                      292.0625
SYS                            WRH$_LATCH-WRH                             276.0625
SYS                            WRH$_LATCH_MISSES_SUMMARY-WRH              273.0625
SYS                            WRH$_SEG_STAT-WRH                          268.0625
SYS                            WRH$_LATCH_PARENT_PK-WRH                   239.0625
SYS                            WRH$_SYSSTAT_PK-WRH                        237.0625
Yes, that is close to half a terabyte of SYSAUX and it is all used, more partitions have appeared and the total size of the largest segments in SYSAUX show how WRH$_LATCH_CHILDREN and WRH$_LATCH_CHILDREN_PK make up the vast majority of the space used.
Shortly after, we finally got permission to change the live system. The impact was immediate, mmon dropped from being the most demanding session, that SQL code dropped down the rankings and the issues with running out of undo ceased.
I was anxious to see if the old data got purged, as the Metalink note had suggested the data would not be purged. Thankfully, that was not the case. The space was slowly released as normal purging of data outside the retention period took place and after just over a month, the SYSAUX tablespace contained a lot less information and was mostly free space:
OWNER      OBJ_PART_NAME                                SIZE_M
---------- ---------------------------------------- ----------
SYS        WRH$_LATCH_MISSES_SUMMARY_PK-WRH           512.0625
SYS        WRH$_LATCH_MISSES_SUMMARY-WRH              350.0625
SYS        WRH$_LATCH-WRH                             304.0625
SYS        WRH$_SQLSTAT-WRH                           280.0625
SYS        WRH$_LATCH_PK-WRH                          259.0625
SYS        WRH$_SYSSTAT_PK-WRH                        247.0625
SYS        WRH$_SERVICE_STAT_PK-WRH                   228.0625
SYS        WRH$_PARAMETER_PK-WRH                      201.0625
SYS        WRH$_PARAMETER-WRH                         169.0625
SYS        WRH$_SYSSTAT-WRH                           169.0625
SYS        WRH$_SEG_STAT-WRH                          161.0625
SYS        WRH$_SYSTEM_EVENT_PK-WRH                   156.0625
SYS        WRH$_SYSMETRIC_SUMMARY-                         152
SYS        WRH$_SYSTEM_EVENT-WRH                      133.0625
SYS        WRH$_SERVICE_STAT-WRH                      123.0625
SYS        WRH$_ACTIVE_SESSION_HISTORY-WRH            115.0625

TS_NAME              ORD      SUM_BLKS        SUM_K  MAX_CHNK_K NUM_CHNK
-------------------- ----- ----------- ------------ ----------- --------
SYSAUX               alloc  58,251,904  466,015,232  33,553,408       15
                     free   57,479,400  459,835,200   4,063,232    1,208
SYSTEM               alloc     128,000    1,024,000   1,024,000        1
                     free       68,048      544,384     544,320        2
Now, how do we get that space back? I left that with the DBA team to resolve.
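For completeness, a sketch of the sort of thing they would have to consider (not what they actually did): once the WRH$ segments have been purged, the free space sits inside the SYSAUX datafiles, so it can only be handed back to the OS by shrinking a file, which in turn only works if the free space is at the end of that file:

-- find the SYSAUX datafiles and their current sizes
select file_name, bytes/1024/1024 size_m
from   dba_data_files
where  tablespace_name = 'SYSAUX';

-- shrink one of them (fails if any extent sits above the new size)
-- the file name and target size are purely illustrative
alter database datafile '/u01/oradata/ORCL/sysaux01.dbf' resize 10g;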
Oh, one last thing. I mentioned the above to a couple of the Oaktable lot in the pub a few weeks back. Their instant response was to say “You set STATISTICS_LEVEL to ALL on a live system?!? You are mad!”
🙂
{Update, I’ve just spotted this posting by Colbran which is related. Last time I googled this I just got a stub with no information}
Database Sizing – How much Disk do I need? (The Easy Way) November 11, 2010
Posted by mwidlake in Architecture, development, VLDB.Tags: Architecture, data dictionary, Storage, system development, VLDB
7 comments
How much Disk do I need for my new Oracle database? Answer:-
- 8-10 times the volume of raw data for an OLTP system
- 2-4 times the raw data volume for a Data Warehouse.
- The bigger the database, the nearer you will be to the lower multiplication factors.
{Disclaimer. This is of course just my opinion, based on some experience. If you use the above figures for a real project and get the total disc space you need wrong, don’t blame me. If you do and it is right, then of course you now owe me a beer.}
Many of us have probably had to calculate the expected size of a database before, but the actual database is only one component of all the things you need to run the Oracle part of your system. You need to size the other components too – archived redo logs, backup staging area, dataload staging area, external files, the operating system, swap space, the Oracle binaries {which generally get bigger every year but shrink in comparison to the average size of an Oracle DB} etc…
In a similar way to my thoughts on how much database space you need per person, I also used to check the total disk space taken up by every database I created and by those I came across. {A friend emailed me after my earlier posting to ask if I had an obsession about size. I think the answer must be “yes”}.
First of all, you need to know how much “raw data” you have. By this I mean what will become the table data. Back in the early 90’s this could be the total size of the flat files the old system was using, or even the size of the data as it sat in spreadsheets. An Oracle export file of the system gives a pretty good idea of the raw data volume too. Lacking all of these, you need to roughly size your raw data yourself: do a calculation of “number_of_rows*sum_of_column_sizes” for your biggest 10 tables (I might blog more on this later). Don’t be tempted to overestimate; my multipliers allow for the padding.
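If the data you are replacing already lives in an Oracle database, a quick way to approximate that raw figure is from the optimizer statistics. A minimal sketch, assuming the statistics are reasonably up to date (for a genuinely new system you have to estimate row counts and column sizes by hand):

-- rough raw-data volume of the 10 largest tables, in MB
select *
from  (select table_name
             ,round((num_rows*avg_row_len)/1024/1024) raw_mb
       from   user_tables
       where  num_rows is not null
       order  by num_rows*avg_row_len desc)
where rownum <= 10;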
Let us say you have done this and it is 60GB of raw data for an OLTP system. Let the storage guys know you will probably want about 500GB of space. They will then mentally put it down as “of no consequence” as, if you have dedicated storage guys, you probably have many terabytes of storage. {Oh, I should mention that I am not considering redundancy at all, just the space that is provided. The amount of actual spinning disk is down to the level and type of RAID your storage guys make you use. That is a whole other discussion}.
If you come up with 5TB of raw data for a DW system then you need around 12-15TB of disk storage.
If you come up with more than a Terabyte or so of raw data for an OLTP system, or 10 to 20 Terabytes for a DW, then when you give your figures to the storage guys/procurement people they may well go pale and say something like “you have got to be kidding!”. This is part of why the multiplication factor for Data Warehouses, and larger systems in general, is less: you are forced to be more careful about the space you allocate and how you use it.
The overhead of total disk space over Raw data reduces as the database gets bigger for a number of reasons:
- The size of the Oracle binaries and the OS does not change as the database gets bigger.
- The size of swap space does not increase in line with the database as, generally speaking, if you increase the database size from 100GB to 1TB you do not have the luxury of increasing the system memory of your server tenfold. It probably just doubles.
- Very large databases tend to have something making them big, like images or embedded documents, which are not indexed. Thus the ratio of table segments to index segments increases.
- If you have a very large database you start removing indexes (often those that support constraints) to aid performance of data load and management, again improving the ratio of table segments to index segments.
- Backups become partial or incremental to reduce the size and duration of the backup.
- As mentioned before, the sheer size of the system is such that you just take more care over cleaning up work areas, paring down the archived redo log areas (those files do compress well) and other areas.
- If things get extreme, or you have been doing this for donkey’s years {note to non-UK people, this means many, many years}, you start altering PCTFREE and checking over extent sizes (see the small example below).
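As an illustration of that last point, for a huge table that is effectively insert-only (rows never grow through updates) you might claw back a few percent of space by dropping PCTFREE from its default of 10. The table name here is made up and the change only affects blocks used by future inserts:

alter table document_images pctfree 1;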
My best ever ratio of database size to raw data was around 1.6 and it took an awful lot of effort and planning to get there. And an IT manager who made me very, very aware of how much the storage was costing him (it is not the disks, it’s all the other stuff).
I should also just mention that the amount of disk you need is only one consideration. If you want your database to perform well you need to consider the number of spindles. After all, you can create a very large database indeed using a single 2TB disc – but any actual IO will perform terribly.
How often is v$sys_time_model updated? July 14, 2010
Posted by mwidlake in internals, performance.Tags: data dictionary, performance
6 comments
I think this posting might go down as one of my more pointless contributions to the Oracle knowledge sphere 🙂
I was looking at V$SYS_TIME_MODEL and V$SESS_TIME_MODEL and I just happened to run “select * from V$SYS_TIME_MODEL” several times in very quick succession. And I noticed the values for the various counters stayed the same between a couple of the runs.
“Hmmm, interesting” I thought “The values are only flushed down to the view ‘periodically’. I wonder how periodically?”… and thus I wasted a lunch time.
I used the below SQL*Plus/PL/SQL script to investigate the refreshing of v$sess_time_model. Yes, I know the title says v$sys_time_model, but the numbers are smaller and easier to check for the session version of the view, and as the two are virtually the same I can bet on the results being very similar. This is my script (run on 10.2.0.3 on Linux):
--test_vstm_upd
-- how often is v$sessstat updated
set trims on
set term off
set serveroutput on size unli
spool test_vstm_upd
begin
  for a in 1..1000 loop
    for vrec in
      (select rpad(rpad(to_char(a),4)||' '||to_char(systimestamp,'ss.ff')||' '||
                   stat_name||' ',45)||value rec
       from v$sess_time_model
       -- replace with your current session ID
       where sid=1989
       and stat_id in (3649082374,2748282437,2821698184,2643905994)
      ) loop
      dbms_output.put_line(vrec.rec);
    end loop;
    dbms_output.put_line(chr(9));
  end loop;
end;
/
set term on
spool off
As you can see, it simply loops around selecting four of the values from v$sess_time_model, along with the loop counter and the current timestamp. The timestamp is evaluated afresh for each executed SQL statement.
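If you want to run this yourself, the hard-coded SID and STAT_ID values are specific to my session and instance. A couple of quick lookups (not part of the original script) will give you your own values:

-- the SID of your current session (10g onwards)
select sys_context('userenv','sid') my_sid from dual;

-- the STAT_ID values for the four statistics of interest
select stat_id, stat_name
from   v$sys_time_model
where  stat_name in ('DB time','DB CPU'
                    ,'sql execute elapsed time'
                    ,'PL/SQL execution elapsed time');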
Here is the output for the first three iterations;
1    53.389576 DB time                        475860419
1    53.389576 DB CPU                         402642660
1    53.389576 sql execute elapsed time       209780319
1    53.389576 PL/SQL execution elapsed time  52290858
2    53.408944 DB time                        475860419
2    53.408944 DB CPU                         402642660
2    53.408944 sql execute elapsed time       209780319
2    53.408944 PL/SQL execution elapsed time  52290858
3    53.429159 DB time                        475860419
3    53.429159 DB CPU                         402642660
3    53.429159 sql execute elapsed time       209780319
3    53.429159 PL/SQL execution elapsed time  52290858
As you can see, the timestamp is increasing by 2/100s of a second or so per loop, which is not as quick as I had hoped, but it is a test box. Note that the counters for DB time, DB CPU, sql execute elapsed time and PL/SQL execution elapsed time are constant.
A few iterations later we see the v$sess_time_model counters increment:
7    53.509351 DB time                        475860419
7    53.509351 DB CPU                         402642660
7    53.509351 sql execute elapsed time       209780319
7    53.509351 PL/SQL execution elapsed time  52291610
--all change!
8    53.531378 DB time                        475871716
8    53.531378 DB CPU                         402653957
8    53.531378 sql execute elapsed time       209786745
8    53.531378 PL/SQL execution elapsed time  52292793
-- and stable
9    53.555889 DB time                        475871716
9    53.555889 DB CPU                         402653957
9    53.555889 sql execute elapsed time       209786745
9    53.555889 PL/SQL execution elapsed time  52292793
The counters all increment between iterations 7 and 8 and then stay the same. I can’t tell how long the counters had been the same before that; I need to wait and see when they change again. How long until they increment again? Well, not very long – in fact just around 0.12 seconds:
14   53.650154 DB time                        475871716
14   53.650154 DB CPU                         402653957
14   53.650154 sql execute elapsed time       209786745
14   53.650154 PL/SQL execution elapsed time  52293064
-- change
15   53.670358 DB time                        475881268
15   53.670358 DB CPU                         402663509
15   53.670358 sql execute elapsed time       209792803
15   53.670358 PL/SQL execution elapsed time  52294180
-- still changing
16   53.689011 DB time                        475887530
16   53.689011 DB CPU                         402669771
16   53.689011 sql execute elapsed time       209794387
16   53.689011 PL/SQL execution elapsed time  52294180
-- and still changing
17   53.710875 DB time                        475889549
17   53.710875 DB CPU                         402671790
17   53.710875 sql execute elapsed time       209796393
17   53.710875 PL/SQL execution elapsed time  52295342
-- ...still changing...
18   53.728168 DB time                        475893032
18   53.728168 DB CPU                         402675273
18   53.728168 sql execute elapsed time       209797665
18   53.728168 PL/SQL execution elapsed time  52295342
-- and stable
19   53.744725 DB time                        475893032
19   53.744725 DB CPU                         402675273
19   53.744725 sql execute elapsed time       209797665
19   53.744725 PL/SQL execution elapsed time  52295342
This time, the increment occurs over several iterations of the loop before becoming stable again. All four values I am pulling out increment over these iterations.
The next increment comes four iterations or 0.1 seconds later and happens swiftly, between two iterations:
22   53.802486 DB time                        475893032
22   53.802486 DB CPU                         402675273
22   53.802486 sql execute elapsed time       209797665
22   53.802486 PL/SQL execution elapsed time  52295342
-- change
23   53.822231 DB time                        475897963
23   53.822231 DB CPU                         402680204
23   53.822231 sql execute elapsed time       209800369
23   53.822231 PL/SQL execution elapsed time  52296904
-- stable
24   53.840085 DB time                        475905724
24   53.840085 DB CPU                         402687965
24   53.840085 sql execute elapsed time       209803330
24   53.840085 PL/SQL execution elapsed time  52296904
So it seems that v$sess_time_model is incremented in steps, not constantly, and does so every 0.10 to 0.13 seconds or so. My work here is done.
Or is it?
No, it is not, as there is now a “massive” gap where the counters do not increment for almost 3/4 of a second, until iteration 127:
126  55.530398 DB time                        475905724
126  55.530398 DB CPU                         402687965
126  55.530398 sql execute elapsed time       209803775
126  55.530398 PL/SQL execution elapsed time  52297583
-- change
127  55.545085 DB time                        475914013
127  55.545085 DB CPU                         402696254
127  55.545085 sql execute elapsed time       209809518
127  55.545085 PL/SQL execution elapsed time  52298886
-- still changing
128  55.560141 DB time                        475921342
128  55.560141 DB CPU                         402703583
128  55.560141 sql execute elapsed time       209812345
128  55.560141 PL/SQL execution elapsed time  52299359
-- still changing
129  55.574806 DB time                        475922705
129  55.574806 DB CPU                         402704946
129  55.574806 sql execute elapsed time       209812345
129  55.574806 PL/SQL execution elapsed time  52299359
-- stable
130  55.589541 DB time                        475922705
130  55.589541 DB CPU                         402704946
130  55.589541 sql execute elapsed time       209812345
130  55.589541 PL/SQL execution elapsed time  52299359
Again, the incrementing ran over a small number of iterations of my loop.
I think I have shown that all the values increment together, so I will reduce my output to just the one counter, see when it increments and over how many iterations, and see if a pattern appears:
25 53.860550 DB time 475905724 -- 53.84 to 55.54 0 1.7 seconds of stability 126 55.530398 DB time 475905724 127 55.545085 DB time 475914013 128 55.560141 DB time 475921342 129 55.574806 DB time 475922705 136 55.682402 DB time 475922705 137 55.697191 DB time 475956738 138 55.712266 DB time 475969859 139 55.727820 DB time 475974350 140 55.743315 DB time 475982356 141 55.758749 DB time 475994069 142 55.773602 DB time 476004596 143 55.788472 DB time 476004596 144 55.803295 DB time 476007541 145 55.818136 DB time 476011172 146 55.832886 DB time 476020336 147 55.847772 DB time 476025376 148 55.865303 DB time 476036347 -- incrementd with a couple of brief pauses over 0.34 seconds 149 55.881480 DB time 476041481 150 55.896735 DB time 476041481 ... 200 56.664783 DB time 476041481 -- 55.88 to 56.67 0.8 seconds of stability 201 56.679455 DB time 476049162 -- increments over two iterations, 0.03 seconds 202 56.694092 DB time 476052385 203 56.708733 DB time 476052385 ... 261 57.566902 DB time 476052385 -- 56.69 to 57.59 0.9 seconds of stability 262 57.581582 DB time 476052842 263 57.596218 DB time 476058537 ... 270 57.700212 DB time 476058537 271 57.715371 DB time 476060552 272 57.730797 DB time 476063551 273 57.745700 DB time 476074383 274 57.760351 DB time 476079741 ... 279 57.835162 DB time 476079741 280 57.849966 DB time 476080090 281 57.864782 DB time 476090799 282 57.879446 DB time 476100404 283 57.894553 DB time 476103222 -- stable again after 0.3 seconds and a couple of mini-pauses 284 57.910592 DB time 476103222 ... 335 58.677438 DB time 476103222 -- 57.91 to 58.69 0.8 seconds of stability 336 58.694704 DB time 476113168 337 58.709995 DB time 476113909 338 58.724782 DB time 476119452 339 58.740756 DB time 476119795 340 58.758659 DB time 476129752 341 58.776040 DB time 476132036 ... 345 58.854895 DB time 476132036 346 58.869516 DB time 476138982 347 58.884100 DB time 476145880 348 58.898772 DB time 476160301 349 58.913401 DB time 476178139 350 58.935391 DB time 476183281 -- stable again after 0.27 seconds 351 58.955195 DB time 476183281 ... 395 59.608368 DB time 476183281 -- 57.93 to 59.60 0.68 seconds of stability 396 59.623062 DB time 476187188 ... 402 59.713566 DB time 476187188 403 59.728220 DB time 476194591 404 59.742900 DB time 476204006 405 59.757544 DB time 476210666 406 59.774934 DB time 476216338 407 59.796595 DB time 476228874 ... 413 59.890172 DB time 476228874 414 59.908436 DB time 476238680 415 59.923166 DB time 476251316 416 59.937805 DB time 476259466 417 59.952540 DB time 476261228 418 59.967215 DB time 476277094 419 59.981914 DB time 476282108 -- stable again after 0.29 seconds 420 00.000358 DB time 476298216 ... 529 01.684500 DB time 476298216 -- 00.00 to 01.69 1.69 seconds of stability 530 01.699165 DB time 476301888 531 01.714307 DB time 476312510
I would say that we can draw a few conclusions from the above
- It is dangerous to look at a little bit of data and draw a firm conclusion, as I nearly did
- The data in v$sess_time_model is only maintained in near-time not real-time
- The counters in v$sess_time_model increment together
- The counters seem to increment in a slightly messy way over part of a second and then are stable for 3/4 of a second to a second or two
I wonder how many of you went “Oh dear” when I said I could derive what is true for v$sys_time_model from v$sess_time_model? Could I? Well, here is the modified script for v$sys_time_model:
--test_vstm_upd2
-- how often is v$sysstat updated
set trims on
set term off
set serveroutput on size unli
spool test_vstm_upd2
begin
  for a in 1..1000 loop
    for vrec in
      (select rpad(rpad(to_char(a),4)||' '||to_char(systimestamp,'ss.ff')||' '||
                   stat_name||' ',45)||value rec
       from v$sys_time_model
       --where sid=1989
       where stat_id in (3649082374)
      ) loop
      dbms_output.put_line(vrec.rec);
    end loop;
--    dbms_output.put_line(chr(9));
  end loop;
end;
/
set term on
spool off
And a sample of my output:
1 43.187666 DB time 14429733395433 2 43.188523 DB time 14429733395755 3 43.188642 DB time 14429733395905 4 43.188733 DB time 14429733395905 5 43.188822 DB time 14429733395905 6 43.188909 DB time 14429733395905 -- 7 43.188995 DB time 14429733396491 8 43.189080 DB time 14429733396491 9 43.189164 DB time 14429733396491 10 43.189258 DB time 14429733396491 11 43.189345 DB time 14429733396491 12 43.189430 DB time 14429733396491 13 43.189515 DB time 14429733396491 14 43.189600 DB time 14429733396491 15 43.189687 DB time 14429733396491 16 43.189774 DB time 14429733396491 17 43.189858 DB time 14429733396491 18 43.189942 DB time 14429733396491 19 43.190026 DB time 14429733396491 20 43.190111 DB time 14429733396491 -- 21 43.190200 DB time 14429733397436 22 43.190287 DB time 14429733397436 23 43.190371 DB time 14429733397436 24 43.190454 DB time 14429733397436 25 43.190540 DB time 14429733397436 26 43.190624 DB time 14429733397436 27 43.190708 DB time 14429733397436 -- 28 43.190793 DB time 14429733397764 29 43.190877 DB time 14429733397764 30 43.190961 DB time 14429733397764 31 43.191045 DB time 14429733397764 32 43.191132 DB time 14429733397764 33 43.191221 DB time 14429733397764 34 43.191309 DB time 14429733397764 35 43.191392 DB time 14429733397764 -- 36 43.191475 DB time 14429733402416 37 43.191558 DB time 14429733402416 -- 38 43.191641 DB time 14429733403070 39 43.191725 DB time 14429733403070 40 43.191809 DB time 14429733403070 41 43.191893 DB time 14429733403070 42 43.191976 DB time 14429733403070 43 43.192060 DB time 14429733403070 44 43.192144 DB time 14429733403070 45 43.192230 DB time 14429733403070 46 43.192315 DB time 14429733403070 47 43.192400 DB time 14429733403070 48 43.192484 DB time 14429733403070 49 43.192569 DB time 14429733403070 50 43.192654 DB time 14429733403070 -- 51 43.192737 DB time 14429733407045 52 43.192821 DB time 14429733407045 53 43.192904 DB time 14429733407045 54 43.192985 DB time 14429733407045 55 43.193069 DB time 14429733407045 56 43.193152 DB time 14429733407045 57 43.193237 DB time 14429733407045 58 43.193321 DB time 14429733407045 59 43.193404 DB time 14429733407045 60 43.193488 DB time 14429733407045 61 43.193574 DB time 14429733407045 -- 62 43.193660 DB time 14429733408897 63 43.193743 DB time 14429733408897 64 43.193828 DB time 14429733408897 65 43.193912 DB time 14429733408897 66 43.193994 DB time 14429733408897 67 43.194076 DB time 14429733408897 -- 68 43.194160 DB time 14429733409208 69 43.194283 DB time 14429733409208 70 43.194378 DB time 14429733409208 -- 71 43.194465 DB time 14429733409267 72 43.194551 DB time 14429733409267 73 43.194635 DB time 14429733409267 74 43.194719 DB time 14429733409267 75 43.194801 DB time 14429733409267 76 43.194884 DB time 14429733409267 -- 77 43.194967 DB time 14429733409863 78 43.195052 DB time 14429733409863 -- 79 43.195136 DB time 14429733410499 80 43.195245 DB time 14429733410499 81 43.195329 DB time 14429733410499 82 43.195412 DB time 14429733410499 83 43.195495 DB time 14429733410499 84 43.195577 DB time 14429733410499 85 43.195660 DB time 14429733410499 86 43.195743 DB time 14429733410499 87 43.195825 DB time 14429733410499 88 43.195909 DB time 14429733410499 89 43.195991 DB time 14429733410499 90 43.196074 DB time 14429733410499 91 43.196156 DB time 14429733410499 92 43.196244 DB time 14429733410499 93 43.196326 DB time 14429733410499 94 43.196409 DB time 14429733410499 -- 95 43.196493 DB time 14429733411732 96 43.196577 DB time 14429733411732 97 43.196661 DB time 14429733411732 98 43.196745 DB time 14429733411732 
99 43.196826 DB time 14429733411732 -- 100 43.196910 DB time 14429733412107 101 43.196992 DB time 14429733412410 102 43.197076 DB time 14429733412410 103 43.197158 DB time 14429733412410 104 43.197245 DB time 14429733412410 105 43.197327 DB time 14429733412410 106 43.197410 DB time 14429733412410 107 43.197493 DB time 14429733412410 108 43.197575 DB time 14429733412410 109 43.197658 DB time 14429733412410 -- 110 43.197741 DB time 14429733412981 111 43.197824 DB time 14429733412981 112 43.197907 DB time 14429733412981 113 43.197990 DB time 14429733412981 114 43.198072 DB time 14429733413001 115 43.198156 DB time 14429733413001 116 43.198247 DB time 14429733413001 117 43.198330 DB time 14429733413001 -- 118 43.198414 DB time 14429733413300 119 43.198499 DB time 14429733413300 120 43.198581 DB time 14429733413300 121 43.198665 DB time 14429733413300 122 43.198748 DB time 14429733413300 123 43.198830 DB time 14429733413300 124 43.198913 DB time 14429733413300 -- 125 43.198997 DB time 14429733414262 126 43.199081 DB time 14429733414262 127 43.199165 DB time 14429733414262 128 43.199252 DB time 14429733414262 129 43.199336 DB time 14429733414262 130 43.199419 DB time 14429733414262 131 43.199503 DB time 14429733414262 -- 132 43.199586 DB time 14429733414569 133 43.199669 DB time 14429733414569 134 43.199752 DB time 14429733414569 135 43.199834 DB time 14429733414569 136 43.199918 DB time 14429733414569 137 43.200000 DB time 14429733414569 138 43.200083 DB time 14429733414569 139 43.200166 DB time 14429733414569 140 43.200252 DB time 14429733414569 -- 141 43.200334 DB time 14429733415145 142 43.200418 DB time 14429733415145 -- 143 43.200504 DB time 14429733415335 144 43.200588 DB time 14429733415335 145 43.200672 DB time 14429733415335 146 43.200756 DB time 14429733415335 147 43.200838 DB time 14429733415335 148 43.200921 DB time 14429733415335 149 43.201003 DB time 14429733415335 150 43.201086 DB time 14429733415335 151 43.201169 DB time 14429733415335 152 43.201259 DB time 14429733415335
I would say that we can draw a few conclusions from this latest test above!
- It is dangerous to look at one thing and assume something closely related will be the same!
- The data in v$sys_time_model is also being updated in bursts
- The data in v$sys_time_model is actually updated very, very frequently, at around 1/1000 of a second intervals
- It might be that v$sess_time_model is being updated for sessions in some sort of round-robin fashion, with v$sys_time_model updated each time the v$sess version is
- You can spend a lot of time looking at really quite obscure and possibly pointless stuff
- The reason I am losing weight is I keep skipping lunch.
DBA_TAB_MODIFICATIONS can miss multi-table inserts July 5, 2010
Posted by mwidlake in internals.Tags: data dictionary, SQL
4 comments
Last week I blogged a piece all about DBA_TAB_MODIFICATIONS and how it captures all normal DML against tables. However, a word of warning: this may not be true for multi-table inserts. It appears that rows created via multi-table inserts are not recorded in DBA_TAB_MODIFICATIONS on 10.2.0.3 and a couple of 9.2 versions of Oracle.
{My thanks go to my comrade-in-arms Basavaraj Patil for this. He not only noticed the issue but tracked it down to multi-table inserts, came and told me about it and then said I could put it in my blog. This is his work really}.
I will not go into multi-table inserts right now, but it is a very nice feature introduced with Oracle 9 which allows you to insert a row into several tables at a time, or into one of several different tables depending on conditions. In fact, I think it is a relatively unknown feature. Check out this oracle-base article on new 9i features that mentions it or go pull up the Oracle 10 SQL reference manual.
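If you have not met the syntax before, a minimal sketch of an unconditional multi-table insert looks something like this (the table and column names are made up for illustration):

-- each source row feeds one row into each target table
insert all
  into orders_archive (order_id, order_date) values (order_id, order_date)
  into orders_audit   (order_id, audit_date) values (order_id, sysdate)
select order_id, order_date
from   orders
where  order_date < add_months(sysdate, -12);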
There is a bug in Oracle up to 10.2.0.3, and in 9.2.0.6 and 9.2.0.7.
It is fixed in 9.2.0.8, 10.2.0.4 and 11.1.0.6.
The bug number is 4572043, if you have Metalink.
It impacts direct-load merge statements as well as multi-table inserts, but I have not tested and verified the direct-load merge case myself.
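If anyone fancies verifying that case, the test would be along these lines, reusing the tables from the demonstration below (a sketch only – I have not run it):

-- direct-path (APPEND) merge into one of the demo tables,
-- then flush monitoring info and check sys.dba_tab_modifications as before
merge /*+ APPEND */ into obj_sys os
using (select * from dba_objects where owner = 'SYS') src
on    (os.object_id = src.object_id)
when not matched then
  insert (owner, object_name, object_id, object_type, created)
  values (src.owner, src.object_name, src.object_id, src.object_type, src.created);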
Below is a demonstration of the issue:
TDB>drop table obj_nonsys purge; Table dropped. TDB>drop table obj_sys purge; Table dropped. TDB>create table obj_nonsys 2 as select * from all_objects where rownum<1 3 / Table created. TDB>create table obj_sys 2 as select * from all_objects where rownum<1 3 / Table created. TDB>insert into obj_sys 2 select * from dba_objects 3 where owner in ('SYS','SYSTEM') 4 and rownum <= 200 5 / 200 rows created. TDB>insert into obj_nonsys 2 select * from dba_objects 3 where owner not in ('SYS','SYSTEM') 4 and rownum <= 150 5 / 150 rows created. TDB>commit; Commit complete. TDB>-- flush the changes down to the DBA_TAB_MODIFICATIONS table. TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO PL/SQL procedure successfully completed. TDB>-- TDB>select table_name,inserts,updates,deletes,truncated,timestamp 2 from sys.dba_tab_modifications 3 where table_owner=user and table_name in ('OBJ_SYS','OBJ_NONSYS'); TABLE_NAME INSERTS UPDATES DELETES trnc TIMESTAMP ------------ ---------- ---------- ---------- ---- ----------------- OBJ_NONSYS 150 0 0 NO 05-JUL-2010 20:59 OBJ_SYS 200 0 0 NO 05-JUL-2010 20:59 2 rows selected. TDB>-- As can be seen above, the inserts are correctly captured TDB>-- And the below counts confirm this TDB>select count(*) from obj_sys; COUNT(*) ---------- 200 1 row selected. TDB>select count(*) from obj_nonsys; COUNT(*) ---------- 150 1 row selected. TDB>-- Now the core of it. Multi-table inserts TDB>-- NB this is not the best example of a multi-table insert but it will do. TDB>insert when (owner='SYS' or owner ='SYSTEM') 2 then into obj_sys values ( OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID 3 ,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS 4 ,TEMPORARY,GENERATED,SECONDARY) 5 when (owner !='SYS' and owner !='SYSTEM') 6 then into obj_nonsys values ( OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID 7 ,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS 8 ,TEMPORARY,GENERATED,SECONDARY) 9 select OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID 10 ,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS 11 ,TEMPORARY,GENERATED,SECONDARY 12 from dba_objects 13 where object_type='TABLE' 14 and rownum <= 1000 15 / 1000 rows created. TDB>commit; Commit complete. TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO PL/SQL procedure successfully completed. TDB>-- And what do we see in DTM? TDB>select table_name,inserts,updates,deletes,truncated,timestamp 2 from sys.dba_tab_modifications 3 where table_owner=user and table_name in ('OBJ_SYS','OBJ_NONSYS'); TABLE_NAME INSERTS UPDATES DELETES trnc TIMESTAMP ------------ ---------- ---------- ---------- ---- ----------------- OBJ_NONSYS 150 0 0 NO 05-JUL-2010 20:59 OBJ_SYS 200 0 0 NO 05-JUL-2010 20:59 2 rows selected. TDB>-- Argh! No change to the number of inserts! They have been missed TDB>-- TDB>-- Let me veryify what is in the tables... TDB>select count(*) from obj_sys; COUNT(*) ---------- 1025 1 row selected. TDB>select count(*) from obj_nonsys; COUNT(*) ---------- 325 1 row selected. TDB>-- And I will do similar inserts to what the multi-table insert would do just to make sure TDB>-- there is nothing odd going on. TDB>insert into obj_sys 2 select * from dba_objects 3 where object_type='TABLE' 4 and owner in ('SYS','SYSTEM') 5 and rownum <= 600 6 / 600 rows created. TDB>insert into obj_nonsys 2 select * from dba_objects 3 where object_type='TABLE' 4 and owner not in ('SYS','SYSTEM') 5 and rownum <= 400 6 / 400 rows created. TDB>commit; Commit complete. 
TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO PL/SQL procedure successfully completed. TDB>-- TDB>select table_name,inserts,updates,deletes,truncated,timestamp 2 from sys.dba_tab_modifications 3 where table_owner=user and table_name in ('OBJ_SYS','OBJ_NONSYS'); TABLE_NAME INSERTS UPDATES DELETES trnc TIMESTAMP ------------ ---------- ---------- ---------- ---- ----------------- OBJ_NONSYS 550 0 0 NO 05-JUL-2010 20:59 OBJ_SYS 800 0 0 NO 05-JUL-2010 20:59 2 rows selected. TDB>select count(*) from obj_sys; COUNT(*) ---------- 1625 1 row selected. TDB>select count(*) from obj_nonsys; COUNT(*) ---------- 725 1 row selected. TDB> TDB>-- Note, the counts have gone as well of course and now are adrift from DTM