Free Webinar – How Oracle Works! September 15, 2017
Posted by mwidlake in Architecture, internals, Knowledge, Presenting. Tags: Architecture, knowledge, performance, Presenting
3 comments
Next Tuesday (19th September) I am doing a free webinar for ProHuddle. It lasts under an hour and is an introduction to how some of the core parts of the Oracle RDBMS work. I call it “The Heart of Oracle: How the Core RDBMS Works”. Yes, I try and explain all of the core Oracle RDBMS in under an hour! I’m told I just about manage it. You can see details of the event and register for it here. I’ve done this talk a few times at conferences now and I really like doing it, partly as it seems to go down so well and people give me good feedback about it (and occasionally bad feedback, but I’ll get on to that).
The idea behind the presentation is not to do the usual “Intro” talk and list what the main Oracle operating system processes – SMON, PMON, RECO etc – are, or what the various components of the shared memory do. I always found those talks a little boring and they do not really help you understand why Oracle works the way it does when you use it. I aim to explain what redo is, why it is so important, what actually happens when you commit, how data is written to and read from storage to the cache – and what is actually put in the buffer cache. I explain the concept of point-in-time view, how Oracle does it and why it is so fantastic. And a few other bits and pieces.
I’m not trying to explain to people the absolute correct details of what goes on with all these activities that the database does for you. I’m attempting to give people an understanding of the principles so that more advanced topics make more sense and fit together. The talk is, of course, aimed at people who are relatively new to Oracle – students, new DBAs or developers who have never had it explained to them why Oracle works the way it does. But I have found that even some very experienced DBA-types have learnt the odd little nugget of information from the talk.
Of course, in an hour there is only so much detail I can go into when covering what is a pretty broad set of topics. And I lie about things. I say things that are not strictly true, that do not apply if more advanced features of Oracle are used, or that ignore a whole bucket full of exceptions. But it’s like teaching astrophysics at school. You first learn about how the Sun is at the centre of the solar system, all the planets & moons revolve around each other due to gravity and the sun is hot due to nuclear fusion. No one mentions how the earth’s orbit varies over thousands and millions of years until you have the basics. Or that GPS satellites have to take into account the theory of relativity to be as accurate as they are. Those finer details are great to learn but they do not change the fundamental principles of planets going around suns and rocks falling out of the sky – and you need to know the simpler overall “story” to slot in the more complex information.
I start off the talk explaining this simplification and I do try to indicate where people will need to dig deeper if they, for example, have Exadata – but with a webinar I am sure people will join late, drop in and out and might miss that. I must remember to keep reminding people I’m ignoring details. And amongst the audience will be people who know enough to spot some of these “simplifications”, and I think the occasional person might get upset. Remember I mentioned the bad feedback? I got accosted at a conference once, after I had done this talk, by a couple of experts who were really angry with me that I had said something that was not accurate. But they had missed the start of the talk and my warnings of simplification, and did not seem able to understand that I would have needed half an hour to explain the details of that one thing that they knew – but I had only 50 minutes in total for everything!
As I said, this is the first Webinar I will have done. I am sure it will be strange for me to present with “no audience” and I’m sure I’ll trip up with the pointer and the slides at some point. I usually have some humour in my presentations but that might not work with no crowd feedback and a worldwide audience. We will see. But I am excited about doing it and, if it works, I may well offer to do more.
As a taster, I explain the above diagram. A lot. I mostly just talk about pictures; there will be very few “wordy” slides.
I invite you all to register for the talk – as I said, it is free – and please do spread the word.
STANDARD date considerations in Oracle SQL and PL/SQL July 29, 2015
Posted by mwidlake in internals, PL/SQL. Tags: data dictionary, PL/SQL, SQL
7 comments
Most of us know that the Oracle DATE datatype has upper and lower limits. From the Oracle 11g Database Concepts manual:
Oracle Database can store dates in the Julian era, ranging from January 1, 4712 BCE through December 31, 9999 CE (Common Era, or ‘AD’). Unless BCE (‘BC’ in the format mask) is specifically used, CE date entries are the default.
I never believe 100% anything I read, so I’ll try that. I’ll set my session to show dates with the AD/BC indicator and step back in time:
ora122> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYYAD';

Session altered.

-- select today
ora122> select sysdate from dual;

SYSDATE
-------------
29-JUL-2015AD

-- now let us go back to "the edge of time"
ora122> select sysdate -2457232 from dual;

SYSDATE-24572
-------------
01-JAN-4712BC

ora122> select sysdate -2457233 from dual;
select sysdate -2457233 from dual
       *
ERROR at line 1:
ORA-01841: (full) year must be between -4713 and +9999, and not be 0

-- Now to do similar in PL/SQL
declare
  out_text varchar2(100);
begin
  select to_char(sysdate) into out_text from dual;
  dbms_output.put_line (out_text);
end;

ora122> @std1
31-DEC-4713BC

PL/SQL procedure successfully completed.
How did I do that? We can see from the SQL that the documentation is correct and SQL refuses to accept a date before the lower limit. How did I get a date before 01-JAN-4712BC in my PL/SQL? Especially as my default SYSDATE?
I’ll let you think about that for 30 seconds, you can look at a picture of my recently gone and much missed cat (NB she is happily snoozing in this shot!).
So how did I do it? I cheated.
But I cheated in a very, very interesting way. I did not show you all of my PL/SQL code, which I now reveal below…:
declare
  sysdate varchar2(20) := '31-DEC-4713BC';
begin
  declare
    out_text varchar2(100);
  begin
    select to_char(sysdate) into out_text from dual;
    dbms_output.put_line (out_text);
  end;
end;
/
So, showing you my whole code (see, don’t believe everything you read – sometimes things are simply wrong and sometimes people deceive you) you can see the critical part at the start. My anonymous PL/SQL block is in fact a child block to another. And in that parent block, right at the start, I over-ride the definition of SYSDATE in the declaration section:
sysdate varchar2(20) := '31-DEC-4713BC';
I not only set it to a specific value, I set it to be a Varchar2 datatype. The TO_CHAR of it later on in the logic, which I included in the code I originally showed you, was just more subterfuge on my part. The PL/SQL engine does not care if you TO_CHAR an already CHAR-type field, but it hid the fact that I’d played this trick.
You could define a local SYSDATE variable, as a date, set to a specific date and time if you wish. Even one in the future. And anyone seeing odd behaviour and checking to see if the initialization parameter FIXED_DATE had been set would find that it had not and might soon be questioning their own sanity.
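For illustration only, here is a minimal sketch of such a booby-trap {the variable name clash is the whole point – please never do this in real code}:

declare
  -- a local "SYSDATE" over-riding the real one, set to a fixed point in the future
  sysdate date := to_date('25-DEC-2049 09:00','DD-MON-YYYY HH24:MI');
begin
  -- within this block an unqualified SYSDATE is now my local variable
  dbms_output.put_line(to_char(sysdate,'DD-MON-YYYYAD HH24:MI'));
end;
/

Anyone needing the real thing inside such a block would have to qualify it, eg STANDARD.SYSDATE.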
How many of you knew you could do that? You can over-ride what most of us would consider a Reserved Word in PL/SQL. I suspect it is something that people new to PL/SQL might find out by accident (because no one told them you could not use reserved words for variable names) but experienced people would not, as it is simply a daft thing to do. I’d go further: it is a stupid thing to do. Think how much trouble it could cause in introducing bugs and making the code hard to understand. {And thinking further about this, I might see if I can get re-employed at a couple of places and start doing things like this, just for the evil pleasure, as it could be an utter nightmare to spot}.
The reason this trick works is that SYSDATE, along with many other interesting things, is not “Built In” to the PL/SQL language but is defined in two key packages – SYS.STANDARD and SYS.DBMS_STANDARD. These are always there and define many core things. You can DESC both of them in SQL*Plus or pull the package specification out of DBA_SOURCE and, unlike many of the other Built In packages, the code is not wrapped for STANDARD, so you can look at it. You can do this with a “lowly” DBA-type user; you do not need to be logged on as SYS or SYSTEM.
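If you want a look yourself, here is a quick sketch of one way to do it (assuming a user with SELECT access to DBA_SOURCE):

-- pull the unwrapped specification of SYS.STANDARD out of the data dictionary
select text
from   dba_source
where  owner = 'SYS'
and    name  = 'STANDARD'
and    type  = 'PACKAGE'
order  by line;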
I am not sure of the exact rules but I think that when you use an unqualified variable (ie you do not state the code block, package or stored function/procedure it comes from) it looks at the variables defined in the current and parent PL/SQL blocks first, and then looks at STANDARD and then DBMS_STANDARD. I am not going to mess with STANDARD or DBMS_STANDARD, even on my play box, to find out the exact order of the two. If I spent 10 minutes looking at the specifications I might be able to see that one references the other, I suppose…
This is part of the specification from DBMS_STANDARD:
package dbms_standard is

  -- types
  type ora_name_list_t is table of varchar2(64);

  -- DBMS_ID and DBMS_QUOTED_ID define the length of identifiers
  -- in objects for SQL, PL/SQL and users.
  subtype dbms_id is varchar2(30);
  subtype dbms_quoted_id is varchar2(32);

  subtype dbms_id_30 is varchar2(30);
  subtype dbms_quoted_id_30 is varchar2(32);
  subtype dbms_id_128 is varchar2(128);
  subtype dbms_quoted_id_128 is varchar2(130);

  -- Trigger Operations
  procedure raise_application_error(num binary_integer, msg varchar2,
      keeperrorstack boolean default FALSE);
    pragma interface (C, raise_application_error);      -- 1 (see psdicd.c)
    pragma restrict_references (raise_application_error, WNPS, RNPS, WNDS, RNDS);

  function inserting return boolean;
    pragma interface (C, inserting);                    -- 2
    pragma restrict_references (inserting, WNPS, RNPS, WNDS);

  function deleting return boolean;
    pragma interface (C, deleting);                     -- 3
    pragma restrict_references (deleting, WNPS, RNPS, WNDS);

  function updating return boolean;
    pragma interface (C, updating);                     -- 4
    pragma restrict_references (updating, WNPS, RNPS, WNDS);
You won’t find a package body for DBMS_STANDARD – that is because, I believe, all entries in the package specification are types or functions/procedures that lead to C functions, via the Ada-like {if you did not know, PL/SQL is based on the Ada language} pragma directive “pragma interface (C, {something})”, which says this function/procedure is coded in another language (C in this case) and is called {something}. Don’t ask me more, I don’t know.
eg:
procedure commit;
pragma interface (C, commit);
Even the base data types are defined in STANDARD:
package STANDARD AUTHID CURRENT_USER is   -- careful on this line; SED edit occurs!

  /********** Types and subtypes, do not reorder **********/
  type BOOLEAN is (FALSE, TRUE);

  type DATE is DATE_BASE;

  type NUMBER is NUMBER_BASE;
  subtype FLOAT is NUMBER;  -- NUMBER(126)
  subtype REAL is FLOAT;    -- FLOAT(63)
  subtype "DOUBLE PRECISION" is FLOAT;
  subtype INTEGER is NUMBER(38,0);
  subtype INT is INTEGER;
  subtype SMALLINT is NUMBER(38,0);
  subtype DECIMAL is NUMBER(38,0);
  subtype NUMERIC is DECIMAL;
  subtype DEC is DECIMAL;

  subtype BINARY_INTEGER is INTEGER range '-2147483647'..2147483647;
  subtype NATURAL is BINARY_INTEGER range 0..2147483647;
  subtype NATURALN is NATURAL not null;
  subtype POSITIVE is BINARY_INTEGER range 1..2147483647;
  subtype POSITIVEN is POSITIVE not null;
  subtype SIGNTYPE is BINARY_INTEGER range '-1'..1;  -- for SIGN functions

  type VARCHAR2 is NEW CHAR_BASE;
  subtype VARCHAR is VARCHAR2;
  subtype STRING is VARCHAR2;

  subtype LONG is VARCHAR2(32760);
...
Anyway, I leave the reader to go and look at the package specifications and the STANDARD package body {some of which I show at the end} but I leave you with a repeat of the above warnings: Don’t go replacing the core variables and functions in your PL/SQL code just because you can and do not, repeat, do NOT mess with those two packages. I am sure Oracle Corp will throw your support contract out the window if you do.
As promised above, here is the code for SYSDATE, in SYS.STANDARD, and it is very interesting – in the manner mentioned above, it calls a function (pessdt) that just calls a C program (presumably to get the datetime from the server clock) and, failing that, reverts to the SQL method of selecting the pseudocolumn from dual. SYSTIMESTAMP, below it, is the same:
  function pessdt return DATE;
    pragma interface (c,pessdt);

  -- Bug 1287775: back to calling ICD.
  -- Special: if the ICD raises ICD_UNABLE_TO_COMPUTE, that means we should do
  -- the old 'SELECT SYSDATE FROM DUAL;' thing.  This allows us to do the
  -- SELECT from PL/SQL rather than having to do it from C (within the ICD.)
  function sysdate return date is
    d date;
  begin
    d := pessdt;
    return d;
  exception
    when ICD_UNABLE_TO_COMPUTE then
      select sysdate into d from sys.dual;
      return d;
  end;

--
--
  function pessts return timestamp_tz_unconstrained;
    pragma interface (c,pessts);

  -- Special: if the ICD raises ICD_UNABLE_TO_COMPUTE, that means we should do
  -- the old 'SELECT systimestamp FROM dual;' thing.  This allows us to do the
  -- SELECT from PL/SQL rather than having to do it from C (within the ICD.)
  FUNCTION systimestamp RETURN timestamp_tz_unconstrained
  IS
    t timestamp_tz_unconstrained;
  BEGIN
    t := pessts;
    RETURN t;
  EXCEPTION
    WHEN ICD_UNABLE_TO_COMPUTE THEN
      SELECT systimestamp INTO t FROM sys.dual;
      RETURN t;
  END;
Row Level Security Part 2 – permissions November 20, 2012
Posted by mwidlake in internals, security, SQL. Tags: security, SQL, system development
3 comments
<..Part 1, introduction..
..Part 3 summary in pictures..>
In this second post on the topic of “an introduction to Row Level Security” I want to cover a few things about what permissions you need to implement RLS and some of the consequences. In my introduction in part one I just said my main user has “DBA type Privileges”.
{NB This is all on Oracle V11.2 and I believe everything below is applicable to V10 as well. Also, I should point out that I am not an Oracle security expert – but despite repeatedly saying this, it seems like at least once a year I am asked to improve a system’s security on the grounds of “more than we have now is an improvement”}.
Firstly, you need the permission to create and drop RLS policies. This is controlled by having the EXECUTE privilege on SYS.DBMS_RLS. That is it. If you can execute this package you can create and drop RLS Policies, enable and disable them and also create, manage and use Policy Groups.
Obviously EXECUTE on SYS.DBMS_RLS is a privilege you need to restrict to only “empowered” accounts – but you also want to be very careful about your definition of empowered. If an account has this privilege, it can alter (disable!) any RLS policies that ANYONE created on the instance. You may need to think carefully about having several accounts that can alter all the RLS policies across all schemas in your instance.
You might want to create a single user that has the EXECUTE on SYS.DBMS_RLS and use that to implement and control RLS across the instance {and even have the account locked when it is not needed}. Of course, under normal Oracle anyone with SYS access can run the procedure – but you have SYS access under control already, don’t you?… :-).
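A minimal sketch of that suggested set-up {the account name and password are made up; adjust to your standards}:

-- a dedicated, normally locked account to own RLS administration
create user rls_admin identified by "A_Long_Random_Password_1" account lock;
grant create session to rls_admin;
grant execute on sys.dbms_rls to rls_admin;

-- unlock it only while policy changes are actually being made
-- alter user rls_admin account unlock;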
By default, who has this privilege?
MDW> @obj_privs
Enter value for obj_name: dbms_rls
Enter value for obj_own: sys

GRANTEE              OWNER        TABLE_NAME             GRANTOR  PRIVILEGE    GRA
-------------------- ------------ ---------------------- -------- ------------ ---
EXECUTE_CATALOG_ROLE SYS          DBMS_RLS               SYS      EXECUTE      NO
XDB                  SYS          DBMS_RLS               SYS      EXECUTE      NO
WMSYS                SYS          DBMS_RLS               SYS      EXECUTE      NO

--
-- and who has that EXECUTE_CATALOG_ROLE?
--
select * from dba_role_privs where granted_role='EXECUTE_CATALOG_ROLE'

GRANTEE                        GRANTED_ROLE                   ADM DEF
------------------------------ ------------------------------ --- ---
DBA                            EXECUTE_CATALOG_ROLE           YES YES
IMP_FULL_DATABASE              EXECUTE_CATALOG_ROLE           NO  YES
EXP_FULL_DATABASE              EXECUTE_CATALOG_ROLE           NO  YES
SYS                            EXECUTE_CATALOG_ROLE           YES YES
So, our old friend the DBA role has that privilege, via the EXECUTE_CATALOG_ROLE and IMP_/EXP_FULL_DATABASE.
Let’s go back to creating policies. I’m sticking to row-level RLS for now, not column-level. When you create a policy you are basically saying “when object X is accessed, run function Y and apply the resulting string to all access, as a predicate”.
Here is an example lifted from the manual, but with a couple of bits added in that the authors were too lazy to put in { 🙂 } :-
DBMS_RLS.ADD_POLICY (
   object_schema    => 'hr'
  ,object_name      => 'employees'
  ,policy_name      => 'emp_policy'
  ,function_schema  => 'hr_rls'
  ,policy_function  => 'emp_sec'
  ,statement_types  => 'select');
The function HR_RLS.EMP_SEC returns a string, which I will call P1.
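The manual does not show the body of HR_RLS.EMP_SEC, but a minimal sketch of what such a policy function could look like is below {the predicate it returns is made up for illustration}:

CREATE OR REPLACE FUNCTION emp_sec (
  v_schema  IN VARCHAR2
 ,v_objname IN VARCHAR2)
RETURN VARCHAR2
AS
BEGIN
  -- return a WHERE predicate, as a string, limiting rows to those
  -- where the EMAIL column matches the logged-on user
  RETURN 'email = SYS_CONTEXT(''USERENV'',''SESSION_USER'')';
END emp_sec;
/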
When someone issues a statement that accesses the HR.EMPLOYEES table, the function HR_RLS.EMP_SEC is executed as part of the parse and, internally, the following view is created and placed into the original statement:
SELECT * FROM hr.employees WHERE P1;
As this takes place as part of the parse phase of the statement processing, errors with the function are found before any effort to access the actual data occurs. It is also why it is not simple to avoid RLS – it is done as part of the overall SQL processing carried out by Oracle.
If the function is invalid, generates an error or the supplied “WHERE predicate” P1 causes the final statement to be syntactically incorrect, you get an error.
At this point Oracle fails “secure” and simply won’t fire the SQL statement – even if you own the tables and have full privilege access to them. I think it is worth a quick demonstration.
I am logged on as the owner of the table and RLS function, MDW:-
MDW> select * from test1

        ID CP_NAME                        OTHER_NAME                        MASK_FL
---------- ------------------------------ ------------------------------ ----------
         1 ERIC                           ERIC THE RED                            1
         2 BROWN                          BOB THE BROWN                           2
         3 GREEN                          GARY THE GREEN                          1
         4 BLUE                           BILL THE BLUE                           3
         5 BLACK                          DAVE THE BLACK                          4
         6 PURPLE                         PETE THE PURPLE                         5
         7 RED                            ROGER THE RED                           6
--
-- All works OK
MDW> @rls_func_damage
--
CREATE OR REPLACE FUNCTION hide_rows (
  v_schema  IN VARCHAR2
 ,v_objname IN VARCHAR2)
RETURN VARCHAR2
AS
  con VARCHAR2 (200);
BEGIN
  -- The line below states loser not user
  con := '(loser='''||v_schema||''' or MASK_FL > 3)';
  RETURN (con);
END hide_rows;
/

Function created.

--
-- Let us check the predicate coming from the function
--
select hide_rows(v_schema=>user,v_objname=>'TEST1') from dual;

HIDE_ROWS(V_SCHEMA=>USER,V_OBJNAME=>'TEST1')
---------------------------------------------------------------------------------------
(loser='MDW' or MASK_FL > 3)

-- "loser" is not a valid piece of syntax or a known variable. So...
select * from TEST1
              *
ERROR at line 1:
ORA-28113: policy predicate has error

-- I now damage the statement creating the function so that it is syntactically incorrect
CREATE OR REPLACE FUNCTION hide_rows (
...
  -- missing quote
  con := (loser='''||v_schema||''' or MASK_FL > 3)';
...
Warning: Function created with compilation errors.

select * from TEST1
              *
ERROR at line 1:
ORA-28110: policy function or package MDW.HIDE_ROWS has error

-- re-create the function as I started with
Function created.

select * from TEST1

        ID CP_NAME                        OTHER_NAME                        MASK_FL
---------- ------------------------------ ------------------------------ ----------
         1 ERIC                           ERIC THE RED                            1
         2 BROWN                          BOB THE BROWN                           2
...
         7 RED                            ROGER THE RED                           6
--
-- Back to a working system
As you can see, you get ORA-28110 if the function used by the RLS policy is damaged and ORA-28113 if the function returns a non-valid WHERE Predicate. In either case, the data in those tables is now inaccessible. That could be a problem…
It is of course very interesting that RLS is kind of “blind” to the functions it uses until it uses them. You can alter RLS activity if you can alter the functions.
Last post I mentioned that, as your RLS security is based on these functions, you should give thought to who can see them and change them. Hopefully the above highlights that. There is another access aspect to these functions that is key.
In the first example where I use DBMS_RLS.ADD_POLICY, I state function_schema – this is who owns the PL/SQL function. It defaults to the person who creates the Policy. The function is fired in the standard way with DEFINER RIGHTS – i.e. the rights of the function owner. {You could override this when creating the function such that it runs with executioner rights – but that strikes me as a potentially very confusing and a source of much pain and anguish}.
So the function can see what the owner can see, not the executioner. This has two key effects:
- You can restrict the access to any DB objects that the function requires to the owner of the function. The end user does not need to have access and it may be more secure if they do not.
- You have to grant access to objects directly to the function owner, via either object grants or system grants. Stored PL/SQL does not “see” roles if executed with definer rights.
So, for example, if your function is owned by MDW and references a master table called SEC_MASTER.MASKED_USERS, then you need to grant select on SEC_MASTER.MASKED_USERS to MDW. No using roles.
{of course, calling a table MASKED_USERS might give anyone acquiring access to the DB a clue that it is important. I tend to call such “security important” tables things that are “obviously” boring, like ROTA_DATA.}
Finally, that WHERE Predicate P1 is in effect added to your SQL statement. It can be pretty much anything. It can reference other database objects. If it references other database objects THE EXECUTING USER MUST BE ABLE TO SEE THEM. Not the owner of the function; that function has been fired and the WHERE Predicate generated already. On actually running the SQL, the executing user must have access to all objects referenced, including those introduced by the P1 WHERE Predicate. Oh, and remember synonyms!
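As a sketch of what I mean {the table and role names are made up}: if the policy function returns the predicate

dept_id in (select dept_id from sec_master.visible_depts)

then it is the executing users who must be able to see SEC_MASTER.VISIBLE_DEPTS, eg via:

grant select on sec_master.visible_depts to hr_app_role;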
There is a way to sidestep RLS, and you sometimes need to.
If your logon has the EXEMPT ACCESS POLICY system privilege then RLS functions are utterly ignored by you. No check is made to see if policies apply to your SQL statements, let alone run them. You may need to use this privilege if you need to access functionality that RLS has issues with (see comments by Dom Brooks and Tony Sleight on my first post). You might also need it if you have errors with the policies but you need access to the data to keep the business moving. You probably need a locked-down user somewhere with the EXEMPT ACCESS POLICY privilege if you use RLS.
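A sketch of such a locked-down “break glass” account {the name and password are made up}:

-- a normally locked account that bypasses all RLS policies when unlocked
create user rls_bypass identified by "Another_Long_Password_1" account lock;
grant create session to rls_bypass;
grant exempt access policy to rls_bypass;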
Of course, great care has to be taken in making use of this privilege as it side-steps all RLS-based security.
If you have master users that have full access to the data, this privilege also removes the overhead of the SQL engine identifying any policies, firing the function and including the extra predicates.
One very final point. I have found myself writing RLS functions that all check if the user was the owner of the table and, if so, allowing them to see all data. By granting EXEMPT ACCESS POLICY to that user I was able to do away with those checks and make the functions much simpler. In this case I did not so much think of EXEMPT ACCESS POLICY turning off security but the lack of it turning it on for restricted users.
That will do for part 2.
Dropped Tables, Hiding Extents and Slow DBA_FREE_SPACE Queries February 16, 2012
Posted by mwidlake in database design, internals, performance. Tags: data dictionary, design, performance, SQL
10 comments
My last post was on slow dictionary queries caused by having many, many thousands of extents in your database. This post is about a special case of this problem, which I encountered recently. It was on an Exadata box – Exadata is not particularly relevant to the issue, but I’m curious to see if mentioning Exadata will make this post more popular 🙂
I was querying the used and free space on a very small database on the client’s X2-2 1/4 rack Exadata machine. The query was really slow, taking about 30 seconds. This is a FAST box, what is going on?
I quickly realised that the problem was specific to one tablespace:
> @SPC_SUM
Enter the tablespace (or leave null)> DATA_01

TS_NAME              ORD      SUM_BLKS        SUM_K  MAX_CHNK_K NUM_CHNK
-------------------- ----- ----------- ------------ ----------- --------
DATA_01              alloc     262,144    2,097,152   2,097,152        1
                     free       63,128      505,024     504,384       11

2 rows selected.

Elapsed: 00:00:00.21

> @SPC_SUM
Enter the tablespace (or leave null)> USERS

TS_NAME              ORD      SUM_BLKS        SUM_K  MAX_CHNK_K NUM_CHNK
-------------------- ----- ----------- ------------ ----------- --------
USERS                alloc     748,320    5,986,560   5,372,160        2
                     free      127,904    1,023,232       6,144    3,058

2 rows selected.

Elapsed: 00:00:26.05
We can see it takes 0.21 seconds for tablespace DATA_01, 26.05 seconds for the USERS. Yes, USERS is larger but not 150 times larger. What is going on? Well, as the title of this post suggests, it is down to dropping tables. The below shows my route to that conclusion.
What I could see was that for the USERS tablespace the maximum chunk of free space was relatively small compared to the sum of free space – 6MB out of 1GB, pretty much – and that there were a lot of individual pieces of free space, 3,058. This tablespace was shattered into a lot of bits. So, what sort of extent size management do we have? How big are the tables and indexes in this tablespace? {NB I already knew we had no partitions so I did not have to worry about that}.
select tablespace_name,initial_extent,next_extent
      ,extent_management,min_extlen,allocation_type
from dba_tablespaces where tablespace_name ='USERS'

TABLESPACE_NAME                INITIAL_EXTENT NEXT_EXTENT EXTENT_MAN MIN_EXTLEN ALLOCATIO
------------------------------ -------------- ----------- ---------- ---------- ---------
USERS                                   65536             LOCAL           65536 SYSTEM

select tablespace_name,blocks,count(*)
from dba_extents
where tablespace_name = 'USERS'
group by tablespace_name,blocks
having count(*) >1
order by blocks desc,tablespace_name

TABLESPACE_NAME                    BLOCKS   COUNT(*)
------------------------------ ---------- ----------
USERS                                2560          2
USERS                                2048          3
USERS                                1536          7
USERS                                1408          5
USERS                                1280          2
USERS                                1248          2
USERS                                1152          2
USERS                                1024        229
USERS                                 896         15
USERS                                 768         21
USERS                                 736          3
USERS                                 720          3
USERS                                 704          2
USERS                                 672          2
USERS                                 640         25
USERS                                 624          2
USERS                                 576          2
USERS                                 512        117
USERS                                 400          2
USERS                                 384         34
USERS                                 360          2
USERS                                 312          2
USERS                                 288          4
USERS                                 256         49
USERS                                 248          2
USERS                                 240          2
USERS                                 192          5
USERS                                 160          4
USERS                                 128       1165
USERS                                   8       1788

30 rows selected.
So we have system-controlled extent allocation; this should be fine. I did a quick check of the tablespace contents – 300 or so tables and 200 or so indexes. And, as you can see from the check of extents above, very few larger extents and a lot of small ones. {As a little aside, note the relatively large number of 8-block, 128-block, 256-block and 1024-block extents; these are the sizes that LMTs with system managed extents tend to use unless a similar sized chunk is found to be available, in which case it is used}.
I did some other checks on indexes and segment sizes and it all boiled down to one thing. There were not a lot of things in this tablespace, what was there was small and… The total of all those table and index blocks was way less than the used space in the tablespace.
That is the major indicator of what is going on here. The difference in space used in the tablespace and the total of the visible tables and indexes.
The last link in the chain is the recycle bin.
select owner,ts_name,count(*)
from dba_recyclebin
group by owner,ts_name

OWNER                          TS_NAME                          COUNT(*)
------------------------------ ------------------------------ ----------
USER1                          USERS                              542356
USER1                                                                   2
WEGWEGWEG                      USERS                                  97
KKKUKUYLLX                     USERS                                 149
USOVFPKEKS                     USERS                                   3
....
ERHJTRTTTURT                   USERS                                   4

11 rows selected.
That’s 542,356 objects in the recyclebin for one user, in the one tablespace. My problem tablespace. The penny dropped: something I had already noticed and been a little uncomfortable about fell into place.
The client is using Oracle Data Integrator (ODI) to pull data together and put it into the database. The process they are using basically creates a table, loads some data into it and then pushes the data into the target tables. Then it drops the table. The drop does not do a “DROP TABLE … PURGE;”. This is done many, many times per load cycle, which is intended to run several times a day.
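For reference, the difference is simply this {the table name is made up}:

drop table odi_stg_orders;        -- goes into the recyclebin as a BIN$... object
drop table odi_stg_orders purge;  -- gone for good; space released, no recyclebin entry

-- and to clear out what has already built up for the current user:
purge recyclebin;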
Something you should always keep in mind with the recyclebin is that the tables and indexes that go into it, and their extents, do not get reported in several of the key data dictionary views. I did mention this in a blog post way back but maybe a whole post on it is called for.
So, the tablespace was being shattered by the constant creating and dropping of small tables. It was hidden from easy view due to how the recyclebin is (not) exposed in data dictionary views.
It is not good practice to constantly create and drop lots of tables. As well as the admittedly rather odd impact that this posting is all about, there is a lot of internal work involved for the Oracle database in creating and dropping tables. It is an overhead best avoided. The client had good reasons for this approach but now they are going to look at the alternatives.
I’ve not actually proven in this posting that all those dropped tables are the actual cause of the slow performance in querying the free space in that tablespace {though the corroborative evidence is very strong}. Neither have I said how we cleared up the mess. I’ll cover both of those in the next post on this mini-thread.
Index Organized Tables – the Basics. July 18, 2011
Posted by mwidlake in development, internals, performance. Tags: Database Design, index organized tables, IOT, performance, system development
37 comments
..>IOT2 – Examples and proofs
….>IOT3 – Greatly reducing IO with IOTs
……>IOT4 – Boosting Buffer Cache Efficiency
……..>IOT5 – Primary Key issues
……….>IOT6a – Slowing Down Insert
…………>IOT6(B) – OLTP Inserts
I think Index Organized Tables (IOTs) are a much under-used and yet very useful feature of Oracle. Over the next few postings I’m going to cover some aspects of Index Organised Tables, both good and not-so-good. I am going to cover some benefits of IOTs that I think many people are unaware of. In this first post I am just going to run through the basics of IOTs.
The idea behind an IOT is simple. You hold all the data for the table in the ordered structure of an index. Why would you want to do that? Let us consider a very common requirement, accessing a row in a “large” table via a known, unique key.
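To make that concrete, creating an IOT is just a normal CREATE TABLE with a primary key and an ORGANIZATION INDEX clause {this example table is made up}:

create table order_lookup
(order_id    number        not null
,status      varchar2(10)
,order_date  date
,constraint orlu_pk primary key (order_id)
)
organization index;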
Traditionally you have a heap table holding the data you want to access and a standard index to support access to that table. See the first diagram below. The 4-layer triangle represents the index, with a root block, two levels of branch blocks and then the leaf blocks at the “bottom”. The blue rectangle represents the table with the squares being individual rows. Of course, in a large table there would be thousands or millions of “squares”, this is just a simple diagram to show the idea.
When you issue a SQL statement to select the row via the indexed column(s), Oracle will read the root block (1), find the relevant block in the first level of branch blocks (2), then the relevant block in the second level of branch blocks (3) and finally (as far as the index is concerned) the relevant leaf block for the unique key (4). The leaf block holds the indexed column(s) and also the rowid. The rowid is the fastest way to look up a record: it states the file, block and row offset for the row. This allows Oracle to go straight to the block and get the row. That is read number (5).
The number of branch blocks {and thus the number of blocks that need to be read to find a row} will vary depending on how much data is indexed, the number and size of the columns in the index, how efficiently the space has been used in the blocks and one or two other factors. In my experience most indexes for tables with thousands or millions of rows have one, two or three levels of branch blocks.
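If you want to check your own indexes, the BLEVEL column gives a quick answer {the table name is made up}:

-- BLEVEL is the depth from the root block to the leaf blocks;
-- 0 means the root block and leaf block are one and the same
select index_name, blevel, leaf_blocks
from   user_indexes
where  table_name = 'MY_BIG_TABLE';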
The second diagram shows a representation of the Index Organized Table. The table has in effect disappeared as a distinct object and the information has been moved into the leaf blocks of the index {part of me feels Index Organized Tables should really be called Table Organized Indexes or Table Containing Indexes as that would better indicate what is physically done}:
So with the IOT Oracle reads the root block (1), the two branch level blocks (2 and 3) and finally the leaf block (4). The leaf block does not hold the rowid but rather the rest of the columns for the table {this can be changed: a more advanced feature allows you to store some or all of the extra columns in an overflow segment}. Thus to access the same data, Oracle has to read only 4 blocks, not 5. Using an IOT saves one block read per unique lookup.
This saving of block reads is probably the main feature that IOTs are known for, but there are others which I will cover in later posts. Two things I will mention now are that, firstly, the use of IOTs potentially saves disc space. An index is in effect duplication of data held in the table. When you create an index no new information is created but space is used up holding some of the table information in a structure suitable for fast lookup. Secondly, the index and table have to be maintained whenever a change is made to the columns that are indexed. IOTs reduce this maintenance overhead as there is only one thing to maintain.
Now for some drawbacks.
- The IOT has to be indexed on the primary key. There is no option to create an IOT based on other indexes. As such you have to either be accessing the table via the primary key to get the benefit – or you have to be a little cunning.
- The index is going to be larger than it was and very often larger than the original table. This can slow down range scans or full scans of the index and a “full table scan” will now be a full index scan on this large object, so that can also negatively impact performance. However, if a range scan would then have resulted in access to the table to get extra columns, the IOT gives a similar benefit in reducing IO to that for single row lookups.
- I just want to highlight that you now have no physical rowid for the rows.
- Secondary indexes are supported but will potentially be less efficient due to this lack of rowid.
So, a brief summary is that Index Organised Tables effectively move the table data into the primary key index, reduce the number of block lookups needed to select one row and can save some disc space. But you can only organize the table via the primary key, and it can make full or partial table scans and lookups via other indexes slower.
There are several more important benefits to IOTs {in my opinion} which I will come to over the next week or two.
You can explain an invalid SQL statement November 27, 2010
Posted by mwidlake in internals. Tags: explain plan, Humour, SQL
6 comments
I’m in “nightmare weekend before presenting” mode. I’m up to my eyes at work (and have been for ages, thus the quiet blog) and my recent weekends have been full of normal {and abnormal} life.
As is the way, when up against it and putting together my proofs for wild claims, everything breaks subtly and makes my wild claims look a little, well, wild – even though they are real issues I’ve seen, worked through and fixed in the day job. *sigh*. It does not help when you come across little oddities you have never seen before and end up spending valuable time looking into them.
So here is one. I’m just putting together a very, very simple demo of how the number of rows the CBO expects to see drops off as you move outside the known range. In the below you can see the statement I am using (I keep passing in different days of the month and watching the expected number of rows drop until I hit 1 expected row), but look at how it progresses to the last entry…
mdw11> select count(*) from date_test_flat where date_1=to_date('&day-02-2011','DD-MM-YYYY')
  2  /
Enter value for day: 01

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    16 |   128 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_1"=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

mdw11> /
Enter value for day: 15

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     2 |    16 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_1"=TO_DATE(' 2011-02-15 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

mdw11> /
Enter value for day: 21

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_1"=TO_DATE(' 2011-02-21 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

mdw11> /
Enter value for day: 30

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_1"=TO_DATE('30-02-2011','DD-MM-YYYY'))

mdw11>
The expected number of rows drops, reaches 1 – and then shoots up to 99 again (which is the expected number in the known range, as I have 10,000 rows spread over 100 days). My immediate thought is “Wow! Maybe Oracle have put some odd fix in where, when you go well out of range, it reverts to expecting an average number of rows”. Nope. It is because I asked for the data for 30th February. And I did not get an error.
I think it is because I have set autotrace traceonly explain. This causes the SQL statement not to be executed {if it is just a select, not an insert, update or delete}. It seems the costing section of the CBO is not so good at spotting duff dates, and it then gets the costing wrong.
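I have not verified it, but I would expect plain EXPLAIN PLAN to behave the same way, as it also parses the statement without executing it. A quick sketch of how you could check:

explain plan for
select count(*) from date_test_flat
where  date_1 = to_date('30-02-2011','DD-MM-YYYY');

select * from table(dbms_xplan.display);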
I’ve spotted that the format of the filter also changes when the date is invalid. I really want to check that out – but I had better continue failing to write the presentation!
I know, it is pretty pointless knowing this, but it just amused me. Below is just a quick continuation to show that if the statement is to be executed you get an error and no plan, and that utterly duff dates can be passed in.
mdw11> /
Enter value for day: 28

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_1"=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

mdw11> SET AUTOTRACE ON
mdw11> /
Enter value for day: 20
any key>

  COUNT(*)
----------
         0

1 row selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_1"=TO_DATE(' 2011-02-20 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        821  consistent gets
          0  physical reads
          0  redo size
        421  bytes sent via SQL*Net to client
        415  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

mdw11> /
Enter value for day: 30
select count(*) from date_test_flat where date_1=to_date('30-02-2011','DD-MM-YYYY')
                                                 *
ERROR at line 1:
ORA-01839: date not valid for month specified

mdw11> set autotrace traceonly explain
mdw11> /
Enter value for day: 30

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_1"=TO_DATE('30-02-2011','DD-MM-YYYY'))

mdw11> /
Enter value for day: 45

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_1"=TO_DATE('45-02-2011','DD-MM-YYYY'))
How often is v$sys_time_model updated? July 14, 2010
Posted by mwidlake in internals, performance. Tags: data dictionary, performance
6 comments
I think this posting might go down as one of my more pointless contributions to the Oracle knowledge sphere 🙂
I was looking at V$SYS_TIME_MODEL and V$SESS_TIME_MODEL and I just happened to run “select * from V$SYS_TIME_MODEL” several times in very quick succession. And I noticed the values for the various counters stayed the same between a couple of the runs.
“Hmmm, interesting” I thought “The values are only flushed down to the view ‘periodically’. I wonder how periodically?”… and thus I wasted a lunch time.
I used the below SQL*Plus/PL/SQL script to investigate the refreshing of v$sess_time_model. Yes, I know the title says v$sys_time_model, but the numbers are smaller and easier to check for the session version of the view and the two are virtually the same, so I bet on the results being very similar. This is my script (and it is on 10.2.0.3 on Linux):
--test_vstm_upd
-- how often is v$sessstat updated
set trims on
set term off
set serveroutput on size unli
spool test_vstm_upd
begin
  for a in 1..1000 loop
    for vrec in
      (select rpad(rpad(to_char(a),4)||' '||to_char(systimestamp,'ss.ff')||' '||
              stat_name||' ',45)||value rec
       from v$sess_time_model
       -- replace with your current session ID
       where sid=1989
       and stat_id in (3649082374,2748282437,2821698184,2643905994)
      ) loop
      dbms_output.put_line(vrec.rec);
    end loop;
    dbms_output.put_line(chr(9));
  end loop;
end;
/
set term on
spool off
As you can see, it simply loops around selecting four of the values from v$sess_time_model, including the loop counter and current timestamp. The timestamp is evaluated afresh for each executed SQL statement.
Here is the output for the first three iterations:
1    53.389576 DB time                       475860419
1    53.389576 DB CPU                        402642660
1    53.389576 sql execute elapsed time      209780319
1    53.389576 PL/SQL execution elapsed time 52290858

2    53.408944 DB time                       475860419
2    53.408944 DB CPU                        402642660
2    53.408944 sql execute elapsed time      209780319
2    53.408944 PL/SQL execution elapsed time 52290858

3    53.429159 DB time                       475860419
3    53.429159 DB CPU                        402642660
3    53.429159 sql execute elapsed time      209780319
3    53.429159 PL/SQL execution elapsed time 52290858
As you can see, the timestamp is increasing by 2/100s of a second or so per loop. Which is not as quick as I hoped, but it is a test box. Note that the counters for DB time, DB CPU, sql execute elapsed time and PL/SQL execution elapsed time are constant.
A few iterations later we see the v$sess_time_model counters increment:
7    53.509351 DB time                       475860419
7    53.509351 DB CPU                        402642660
7    53.509351 sql execute elapsed time      209780319
7    53.509351 PL/SQL execution elapsed time 52291610
-- all change!
8    53.531378 DB time                       475871716
8    53.531378 DB CPU                        402653957
8    53.531378 sql execute elapsed time      209786745
8    53.531378 PL/SQL execution elapsed time 52292793
-- and stable
9    53.555889 DB time                       475871716
9    53.555889 DB CPU                        402653957
9    53.555889 sql execute elapsed time      209786745
9    53.555889 PL/SQL execution elapsed time 52292793
The counters all increment between iteration 7 and 8 and then stay the same. I can’t tell how long the counters had been the same, I need to wait and see when they change again. How long until they increment again? Well, not very long, in fact just around 0.12 seconds:
14   53.650154 DB time                       475871716
14   53.650154 DB CPU                        402653957
14   53.650154 sql execute elapsed time      209786745
14   53.650154 PL/SQL execution elapsed time 52293064
-- change
15   53.670358 DB time                       475881268
15   53.670358 DB CPU                        402663509
15   53.670358 sql execute elapsed time      209792803
15   53.670358 PL/SQL execution elapsed time 52294180
-- still changing
16   53.689011 DB time                       475887530
16   53.689011 DB CPU                        402669771
16   53.689011 sql execute elapsed time      209794387
16   53.689011 PL/SQL execution elapsed time 52294180
-- and still changing
17   53.710875 DB time                       475889549
17   53.710875 DB CPU                        402671790
17   53.710875 sql execute elapsed time      209796393
17   53.710875 PL/SQL execution elapsed time 52295342
-- ...still changing...
18   53.728168 DB time                       475893032
18   53.728168 DB CPU                        402675273
18   53.728168 sql execute elapsed time      209797665
18   53.728168 PL/SQL execution elapsed time 52295342
-- and stable
19   53.744725 DB time                       475893032
19   53.744725 DB CPU                        402675273
19   53.744725 sql execute elapsed time      209797665
19   53.744725 PL/SQL execution elapsed time 52295342
This time, the increment occurs over several iterations of the loop before becoming stable again. All four values I am pulling out increment over these iterations.
The next increment comes four iterations or 0.1 seconds later and happens swiftly, between two iterations:
22   53.802486 DB time                       475893032
22   53.802486 DB CPU                        402675273
22   53.802486 sql execute elapsed time      209797665
22   53.802486 PL/SQL execution elapsed time 52295342
-- change
23   53.822231 DB time                       475897963
23   53.822231 DB CPU                        402680204
23   53.822231 sql execute elapsed time      209800369
23   53.822231 PL/SQL execution elapsed time 52296904
-- stable
24   53.840085 DB time                       475905724
24   53.840085 DB CPU                        402687965
24   53.840085 sql execute elapsed time      209803330
24   53.840085 PL/SQL execution elapsed time 52296904
So it seems that v$sess_time_model is incremented in steps, not constantly, and does so every 0.10 to 0.13 seconds or so. My work here is done.
Or is it?
No, it is not, as there is now a “massive” gap where the counters do not increment for almost 3/4 of a second, until iteration 127:
126  55.530398 DB time                       475905724
126  55.530398 DB CPU                        402687965
126  55.530398 sql execute elapsed time      209803775
126  55.530398 PL/SQL execution elapsed time 52297583
-- change
127  55.545085 DB time                       475914013
127  55.545085 DB CPU                        402696254
127  55.545085 sql execute elapsed time      209809518
127  55.545085 PL/SQL execution elapsed time 52298886
-- still changing
128  55.560141 DB time                       475921342
128  55.560141 DB CPU                        402703583
128  55.560141 sql execute elapsed time      209812345
128  55.560141 PL/SQL execution elapsed time 52299359
-- still changing
129  55.574806 DB time                       475922705
129  55.574806 DB CPU                        402704946
129  55.574806 sql execute elapsed time      209812345
129  55.574806 PL/SQL execution elapsed time 52299359
-- stable
130  55.589541 DB time                       475922705
130  55.589541 DB CPU                        402704946
130  55.589541 sql execute elapsed time      209812345
130  55.589541 PL/SQL execution elapsed time 52299359
Again, the incrementing ran over a small number of iterations of my loop.
I think I have shown that all the values increment together so I will reduce my output to just the one counter and see when it increments and over how many iterations and see if a pattern appears:
25   53.860550 DB time 475905724
-- 53.84 to 55.54 : 1.7 seconds of stability
126  55.530398 DB time 475905724
127  55.545085 DB time 475914013
128  55.560141 DB time 475921342
129  55.574806 DB time 475922705
136  55.682402 DB time 475922705
137  55.697191 DB time 475956738
138  55.712266 DB time 475969859
139  55.727820 DB time 475974350
140  55.743315 DB time 475982356
141  55.758749 DB time 475994069
142  55.773602 DB time 476004596
143  55.788472 DB time 476004596
144  55.803295 DB time 476007541
145  55.818136 DB time 476011172
146  55.832886 DB time 476020336
147  55.847772 DB time 476025376
148  55.865303 DB time 476036347
-- incremented, with a couple of brief pauses, over 0.34 seconds
149  55.881480 DB time 476041481
150  55.896735 DB time 476041481
...
200  56.664783 DB time 476041481
-- 55.88 to 56.67 : 0.8 seconds of stability
201  56.679455 DB time 476049162
-- increments over two iterations, 0.03 seconds
202  56.694092 DB time 476052385
203  56.708733 DB time 476052385
...
261  57.566902 DB time 476052385
-- 56.69 to 57.57 : 0.9 seconds of stability
262  57.581582 DB time 476052842
263  57.596218 DB time 476058537
...
270  57.700212 DB time 476058537
271  57.715371 DB time 476060552
272  57.730797 DB time 476063551
273  57.745700 DB time 476074383
274  57.760351 DB time 476079741
...
279  57.835162 DB time 476079741
280  57.849966 DB time 476080090
281  57.864782 DB time 476090799
282  57.879446 DB time 476100404
283  57.894553 DB time 476103222
-- stable again after 0.3 seconds and a couple of mini-pauses
284  57.910592 DB time 476103222
...
335  58.677438 DB time 476103222
-- 57.91 to 58.69 : 0.8 seconds of stability
336  58.694704 DB time 476113168
337  58.709995 DB time 476113909
338  58.724782 DB time 476119452
339  58.740756 DB time 476119795
340  58.758659 DB time 476129752
341  58.776040 DB time 476132036
...
345  58.854895 DB time 476132036
346  58.869516 DB time 476138982
347  58.884100 DB time 476145880
348  58.898772 DB time 476160301
349  58.913401 DB time 476178139
350  58.935391 DB time 476183281
-- stable again after 0.27 seconds
351  58.955195 DB time 476183281
...
395  59.608368 DB time 476183281
-- 58.94 to 59.61 : 0.68 seconds of stability
396  59.623062 DB time 476187188
...
402  59.713566 DB time 476187188
403  59.728220 DB time 476194591
404  59.742900 DB time 476204006
405  59.757544 DB time 476210666
406  59.774934 DB time 476216338
407  59.796595 DB time 476228874
...
413  59.890172 DB time 476228874
414  59.908436 DB time 476238680
415  59.923166 DB time 476251316
416  59.937805 DB time 476259466
417  59.952540 DB time 476261228
418  59.967215 DB time 476277094
419  59.981914 DB time 476282108
-- stable again after 0.29 seconds
420  00.000358 DB time 476298216
...
529  01.684500 DB time 476298216
-- 00.00 to 01.69 : 1.69 seconds of stability
530  01.699165 DB time 476301888
531  01.714307 DB time 476312510
I would say that we can draw a few conclusions from the above
- It is dangerous to look at a little bit of data and draw a firm conclusion, as I nearly did
- The data in v$sess_time_model is only maintained in near-time not real-time
- The counters in v$sess_time_model increment together
- The counters seem to increment in a slightly messy way over part of a second and then are stable for 3/4 of a second to a second or two
I wonder how many of you went “Oh dear” when I said I could derive what is true for v$sys_time_model from v$sess_time_model? Could I? Well, here is the modified script for v$sys_time_model:
--test_vstm_upd2
-- how often is v$sysstat updated
set trims on
set term off
set serveroutput on size unli
spool test_vstm_upd2
begin
  for a in 1..1000 loop
    for vrec in
      (select rpad(rpad(to_char(a),4)||' '||to_char(systimestamp,'ss.ff')||' '||
              stat_name||' ',45)||value rec
       from v$sys_time_model
       --where sid=1989
       where stat_id in (3649082374)
      ) loop
      dbms_output.put_line(vrec.rec);
    end loop;
--    dbms_output.put_line(chr(9));
  end loop;
end;
/
set term on
spool off
And a sample of my output:
1    43.187666 DB time 14429733395433
2    43.188523 DB time 14429733395755
3    43.188642 DB time 14429733395905
4    43.188733 DB time 14429733395905
5    43.188822 DB time 14429733395905
6    43.188909 DB time 14429733395905
--
7    43.188995 DB time 14429733396491
8    43.189080 DB time 14429733396491
9    43.189164 DB time 14429733396491
10   43.189258 DB time 14429733396491
11   43.189345 DB time 14429733396491
12   43.189430 DB time 14429733396491
13   43.189515 DB time 14429733396491
14   43.189600 DB time 14429733396491
15   43.189687 DB time 14429733396491
16   43.189774 DB time 14429733396491
17   43.189858 DB time 14429733396491
18   43.189942 DB time 14429733396491
19   43.190026 DB time 14429733396491
20   43.190111 DB time 14429733396491
--
21   43.190200 DB time 14429733397436
22   43.190287 DB time 14429733397436
23   43.190371 DB time 14429733397436
24   43.190454 DB time 14429733397436
25   43.190540 DB time 14429733397436
26   43.190624 DB time 14429733397436
27   43.190708 DB time 14429733397436
--
28   43.190793 DB time 14429733397764
29   43.190877 DB time 14429733397764
30   43.190961 DB time 14429733397764
31   43.191045 DB time 14429733397764
32   43.191132 DB time 14429733397764
33   43.191221 DB time 14429733397764
34   43.191309 DB time 14429733397764
35   43.191392 DB time 14429733397764
--
36   43.191475 DB time 14429733402416
37   43.191558 DB time 14429733402416
--
38   43.191641 DB time 14429733403070
39   43.191725 DB time 14429733403070
40   43.191809 DB time 14429733403070
41   43.191893 DB time 14429733403070
42   43.191976 DB time 14429733403070
43   43.192060 DB time 14429733403070
44   43.192144 DB time 14429733403070
45   43.192230 DB time 14429733403070
46   43.192315 DB time 14429733403070
47   43.192400 DB time 14429733403070
48   43.192484 DB time 14429733403070
49   43.192569 DB time 14429733403070
50   43.192654 DB time 14429733403070
--
51   43.192737 DB time 14429733407045
52   43.192821 DB time 14429733407045
53   43.192904 DB time 14429733407045
54   43.192985 DB time 14429733407045
55   43.193069 DB time 14429733407045
56   43.193152 DB time 14429733407045
57   43.193237 DB time 14429733407045
58   43.193321 DB time 14429733407045
59   43.193404 DB time 14429733407045
60   43.193488 DB time 14429733407045
61   43.193574 DB time 14429733407045
--
62   43.193660 DB time 14429733408897
63   43.193743 DB time 14429733408897
64   43.193828 DB time 14429733408897
65   43.193912 DB time 14429733408897
66   43.193994 DB time 14429733408897
67   43.194076 DB time 14429733408897
--
68   43.194160 DB time 14429733409208
69   43.194283 DB time 14429733409208
70   43.194378 DB time 14429733409208
--
71   43.194465 DB time 14429733409267
72   43.194551 DB time 14429733409267
73   43.194635 DB time 14429733409267
74   43.194719 DB time 14429733409267
75   43.194801 DB time 14429733409267
76   43.194884 DB time 14429733409267
--
77   43.194967 DB time 14429733409863
78   43.195052 DB time 14429733409863
--
79   43.195136 DB time 14429733410499
80   43.195245 DB time 14429733410499
81   43.195329 DB time 14429733410499
82   43.195412 DB time 14429733410499
83   43.195495 DB time 14429733410499
84   43.195577 DB time 14429733410499
85   43.195660 DB time 14429733410499
86   43.195743 DB time 14429733410499
87   43.195825 DB time 14429733410499
88   43.195909 DB time 14429733410499
89   43.195991 DB time 14429733410499
90   43.196074 DB time 14429733410499
91   43.196156 DB time 14429733410499
92   43.196244 DB time 14429733410499
93   43.196326 DB time 14429733410499
94   43.196409 DB time 14429733410499
--
95   43.196493 DB time 14429733411732
96   43.196577 DB time 14429733411732
97   43.196661 DB time 14429733411732
98   43.196745 DB time 14429733411732
99   43.196826 DB time 14429733411732
--
100  43.196910 DB time 14429733412107
101  43.196992 DB time 14429733412410
102  43.197076 DB time 14429733412410
103  43.197158 DB time 14429733412410
104  43.197245 DB time 14429733412410
105  43.197327 DB time 14429733412410
106  43.197410 DB time 14429733412410
107  43.197493 DB time 14429733412410
108  43.197575 DB time 14429733412410
109  43.197658 DB time 14429733412410
--
110  43.197741 DB time 14429733412981
111  43.197824 DB time 14429733412981
112  43.197907 DB time 14429733412981
113  43.197990 DB time 14429733412981
114  43.198072 DB time 14429733413001
115  43.198156 DB time 14429733413001
116  43.198247 DB time 14429733413001
117  43.198330 DB time 14429733413001
--
118  43.198414 DB time 14429733413300
119  43.198499 DB time 14429733413300
120  43.198581 DB time 14429733413300
121  43.198665 DB time 14429733413300
122  43.198748 DB time 14429733413300
123  43.198830 DB time 14429733413300
124  43.198913 DB time 14429733413300
--
125  43.198997 DB time 14429733414262
126  43.199081 DB time 14429733414262
127  43.199165 DB time 14429733414262
128  43.199252 DB time 14429733414262
129  43.199336 DB time 14429733414262
130  43.199419 DB time 14429733414262
131  43.199503 DB time 14429733414262
--
132  43.199586 DB time 14429733414569
133  43.199669 DB time 14429733414569
134  43.199752 DB time 14429733414569
135  43.199834 DB time 14429733414569
136  43.199918 DB time 14429733414569
137  43.200000 DB time 14429733414569
138  43.200083 DB time 14429733414569
139  43.200166 DB time 14429733414569
140  43.200252 DB time 14429733414569
--
141  43.200334 DB time 14429733415145
142  43.200418 DB time 14429733415145
--
143  43.200504 DB time 14429733415335
144  43.200588 DB time 14429733415335
145  43.200672 DB time 14429733415335
146  43.200756 DB time 14429733415335
147  43.200838 DB time 14429733415335
148  43.200921 DB time 14429733415335
149  43.201003 DB time 14429733415335
150  43.201086 DB time 14429733415335
151  43.201169 DB time 14429733415335
152  43.201259 DB time 14429733415335
I would say that we can draw a few conclusions from this latest test above!
- It is dangerous to look at one thing and assume something closely related will be the same!
- The data in v$sys_time_model is also being updated in bursts
- The data in v$sys_time_model is actually updated very, very frequently, at intervals of around 1/1000 of a second (see the sampling sketch after this list)
- It might be that v$sess_time_model is being updated for sessions in some sort of round-robin fashion, with v$sys_time_model being updated each time the v$sess version is
- You can spend a lot of time looking at really quite obscure and possibly pointless stuff
- The reason I am losing weight is I keep skipping lunch.
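If you fancy reproducing the burst pattern above, the below is a minimal sketch of the sort of tight sampling loop involved. It is my illustrative code, not the exact harness I used: the loop count, the use of systimestamp and the timestamp format are my choices, and you need select access on v$sys_time_model.

set serveroutput on size unlimited

declare
  type t_num_tab is table of number    index by pls_integer;
  type t_tim_tab is table of timestamp index by pls_integer;
  v_val t_num_tab;
  v_tim t_tim_tab;
begin
  -- sample as fast as possible, buffering the results in memory
  for i in 1 .. 152 loop
    select value into v_val(i)
    from   v$sys_time_model
    where  stat_name = 'DB time';
    v_tim(i) := systimestamp;
  end loop;
  -- print afterwards, so dbms_output does not slow the sampling loop
  for i in 1 .. 152 loop
    dbms_output.put_line(i||' '||to_char(v_tim(i),'SS.FF6')||' DB time '||v_val(i));
  end loop;
end;
/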
DBA_TAB_MODIFICATIONS can miss multi-table inserts July 5, 2010
Posted by mwidlake in internals.Tags: data dictionary, SQL
4 comments
Last week I blogged a piece all about DBA_TAB_MODIFICATIONS and how it captures all normal DML against tables. However, a word of warning: this may not be true for multi-table inserts. It appears that inserts done via multi-table inserts are not recorded in DBA_TAB_MODIFICATIONS for 10.2.0.3 and a couple of 9.2 versions of Oracle.
{My thanks go to my comrade-in-arms Basavaraj Patil for this. He not only noticed the issue but tracked it down to multi-table inserts, came and told me about it and then said I could put it in my blog. This is really his work}.
I will not go into multi-table inserts in depth right now, but it is a very nice feature, introduced with Oracle 9i, that allows you to insert a row into several tables at a time, or into one of several tables depending on conditions. In fact, I think it is a relatively unknown feature. Check out this oracle-base article on new 9i features that mentions it, or go pull up the Oracle 10 SQL Reference manual.
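To give a flavour of the two forms, here is a minimal sketch (hypothetical tables t_source, t_a and t_b, nothing to do with the demo further down):

-- unconditional: every source row is inserted into both tables
insert all
  into t_a (id, name) values (id, name)
  into t_b (id, name) values (id, name)
select id, name from t_source;

-- conditional: each source row goes into one table or the other
insert first
  when id < 1000 then into t_a (id, name) values (id, name)
  else                into t_b (id, name) values (id, name)
select id, name from t_source;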
There is a bug in Oracle up to 10.2.0.3, and in 9.2.0.6 and 9.2.0.7.
It is fixed in 9.2.0.8, 10.2.0.4 and 11.1.0.6.
It is bug number 4572043, if you have Metalink.
It impacts direct-load merge statements as well as multi-table inserts, but I have not tested and verified the direct-load merge case (there is a sketch of what I mean below).
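For reference, by a direct-load merge I mean something along the lines of the below. This is a sketch only, reusing my demo tables, and as I say I have not verified whether it is affected:

merge /*+ append */ into obj_sys os
using (select * from dba_objects
       where  owner = 'SYS') src
on (os.object_id = src.object_id)
when matched then
  update set os.last_ddl_time = src.last_ddl_time
when not matched then
  insert (owner, object_name, object_id)
  values (src.owner, src.object_name, src.object_id);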
Below is a demonstration of the issue:
TDB>drop table obj_nonsys purge;
Table dropped.

TDB>drop table obj_sys purge;
Table dropped.

TDB>create table obj_nonsys
  2  as select * from all_objects where rownum<1
  3  /
Table created.

TDB>create table obj_sys
  2  as select * from all_objects where rownum<1
  3  /
Table created.

TDB>insert into obj_sys
  2  select * from dba_objects
  3  where owner in ('SYS','SYSTEM')
  4  and rownum <= 200
  5  /
200 rows created.

TDB>insert into obj_nonsys
  2  select * from dba_objects
  3  where owner not in ('SYS','SYSTEM')
  4  and rownum <= 150
  5  /
150 rows created.

TDB>commit;
Commit complete.

TDB>-- flush the changes down to the DBA_TAB_MODIFICATIONS table.
TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO
PL/SQL procedure successfully completed.

TDB>--
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name in ('OBJ_SYS','OBJ_NONSYS');

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP
------------ ---------- ---------- ---------- ---- -----------------
OBJ_NONSYS          150          0          0 NO   05-JUL-2010 20:59
OBJ_SYS             200          0          0 NO   05-JUL-2010 20:59
2 rows selected.

TDB>-- As can be seen above, the inserts are correctly captured
TDB>-- And the below counts confirm this
TDB>select count(*) from obj_sys;

  COUNT(*)
----------
       200
1 row selected.

TDB>select count(*) from obj_nonsys;

  COUNT(*)
----------
       150
1 row selected.

TDB>-- Now the core of it. Multi-table inserts
TDB>-- NB this is not the best example of a multi-table insert but it will do.
TDB>insert when (owner='SYS' or owner ='SYSTEM')
  2  then into obj_sys values ( OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID
  3  ,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS
  4  ,TEMPORARY,GENERATED,SECONDARY)
  5  when (owner !='SYS' and owner !='SYSTEM')
  6  then into obj_nonsys values ( OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID
  7  ,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS
  8  ,TEMPORARY,GENERATED,SECONDARY)
  9  select OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID
 10  ,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS
 11  ,TEMPORARY,GENERATED,SECONDARY
 12  from dba_objects
 13  where object_type='TABLE'
 14  and rownum <= 1000
 15  /
1000 rows created.

TDB>commit;
Commit complete.

TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO
PL/SQL procedure successfully completed.

TDB>-- And what do we see in DTM?
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name in ('OBJ_SYS','OBJ_NONSYS');

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP
------------ ---------- ---------- ---------- ---- -----------------
OBJ_NONSYS          150          0          0 NO   05-JUL-2010 20:59
OBJ_SYS             200          0          0 NO   05-JUL-2010 20:59
2 rows selected.

TDB>-- Argh! No change to the number of inserts! They have been missed
TDB>--
TDB>-- Let me verify what is in the tables...
TDB>select count(*) from obj_sys;

  COUNT(*)
----------
      1025
1 row selected.

TDB>select count(*) from obj_nonsys;

  COUNT(*)
----------
       325
1 row selected.

TDB>-- And I will do similar inserts to what the multi-table insert would do just to make sure
TDB>-- there is nothing odd going on.
TDB>insert into obj_sys
  2  select * from dba_objects
  3  where object_type='TABLE'
  4  and owner in ('SYS','SYSTEM')
  5  and rownum <= 600
  6  /
600 rows created.

TDB>insert into obj_nonsys
  2  select * from dba_objects
  3  where object_type='TABLE'
  4  and owner not in ('SYS','SYSTEM')
  5  and rownum <= 400
  6  /
400 rows created.

TDB>commit;
Commit complete.

TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO
PL/SQL procedure successfully completed.

TDB>--
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name in ('OBJ_SYS','OBJ_NONSYS');

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP
------------ ---------- ---------- ---------- ---- -----------------
OBJ_NONSYS          550          0          0 NO   05-JUL-2010 20:59
OBJ_SYS             800          0          0 NO   05-JUL-2010 20:59
2 rows selected.

TDB>select count(*) from obj_sys;

  COUNT(*)
----------
      1625
1 row selected.

TDB>select count(*) from obj_nonsys;

  COUNT(*)
----------
       725
1 row selected.

TDB>
TDB>-- Note, the counts have gone up as well of course and are now adrift from DTM
DBA_TAB_MODIFICATIONS July 2, 2010
Posted by mwidlake in internals, performance, statistics.Tags: data dictionary, SQL, statistics
21 comments
I’ve mentioned DBA_TAB_MODIFICATIONS (and there is also of course ALL_TAB_MODIFICATIONS and USER_TAB_MODIFICATIONS) in a couple of posts, but mostly buried in long, rambling posts about other things. So here is an overview and demonstration of DBA_TAB_MODIFICATIONS. Skip to the end if you just want to see an example of how it works.
The example below is from 10.2.0.3, but V11 is pretty much the same, as is V9, except where I say otherwise.
SYS.DBA_TAB_MODIFICATIONS is used by Oracle internally to track how many inserts, updates and deletes have been done to a table, table partition or table subpartition since the segment last had stats gathered on it with dbms_stats. It also records if the segment has been truncated since the last stats gather. Why since the last stats gather? Because Oracle uses it internally to detect how much a segment has changed and, if it is 10% or more, Oracle will deem it stale for stats-gathering purposes (under 11 you can alter the stale percentage). Note I said SYS.DBA_TAB_MODIFICATIONS: on 10 at least the public synonym for it is missing, so unlike most other DBA_ views you need to state the owner (or create a synonym).
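As a sketch, the below is my approximation of that staleness check. It is not Oracle's internal code, just a query over the same information (the /10 is the 10% threshold):

select m.table_owner, m.table_name
      ,t.num_rows
      ,m.inserts+m.updates+m.deletes changes
from   sys.dba_tab_modifications m
      ,dba_tables                t
where  t.owner      = m.table_owner
and    t.table_name = m.table_name
and    m.partition_name is null           -- table-level rows only
and    m.inserts+m.updates+m.deletes >= t.num_rows/10
order by m.table_owner,m.table_name;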
OWNER.TABLE_NAME       INSERTS    UPDATES    DELETES TIMESTAMP         TRU
------------------- ---------- ---------- ---------- ----------------- ---
XXXXXXX.YYYYYYYYYYY   22598264          0          1 01-MAR-2010 02:08 NO
XXXXXXX.YYYYYYYYYYY          5          0          0 19-SEP-2007 01:47 NO
XXXXXXX.YYYYYYYYYYY     888766          0          0 01-MAR-2010 02:08 NO
XXXXXXX.YYYYYYYYYYY       3191       1486          0 27-NOV-2009 05:11 NO
XXXXXXX.YYYYYYYYYYY      34742          0          0 08-MAR-2010 15:16 NO
XXXXXXX.YYYYYYYYYYY          0       7192          0 02-JUL-2010 05:00 NO
XXXXXXX.YYYYYYYYYYY          0          1          0 10-MAR-2010 15:16 NO
XXXXXXX.YYYYYYYYYYY          8          8          8 26-JAN-2010 08:05 NO
XXXXXXX.YYYYYYYYYYY    1533536          0          2 01-MAR-2010 02:08 NO
XXXXXXX.YYYYYYYYYYY        281          0          0 11-SEP-2009 03:00 NO
Under Oracle 9 you have to register a table as monitored before this information is gathered. Under 10 and 11 all tables/partitions/subpartitions are monitored and you can't turn that monitoring off {you can try, but Oracle ignores you 🙂 }.
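Under 9i that registration is a simple DDL command, as below (from 10g onwards the command still parses but is ignored):

alter table test_1 monitoring;
-- and to de-register it again under 9i
alter table test_1 nomonitoring;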
The information can be very useful for checking how volatile a segment is and whether it has been changed a lot since the last time stats were gathered on it. You can also add the values held in DBA_TAB_MODIFICATIONS to the NUM_ROWS value held for the segment and get a very accurate estimate of the current number of rows, as sketched below. It is a lot, lot faster than actually counting them!
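Here is a minimal sketch of that fast row-count estimate, using the TEST_1 table from my demo below (flush the monitoring info first if you want it bang up to date; the outer join is there for when no modifications have been recorded yet):

select t.num_rows + nvl(m.inserts,0) - nvl(m.deletes,0) est_rows
from   dba_tables                t
      ,sys.dba_tab_modifications m
where  t.owner          = user
and    t.table_name     = 'TEST_1'
and    m.table_owner(+) = t.owner
and    m.table_name(+)  = t.table_name;
-- NB for a partitioned table you would want just the table-level row,
-- and this takes no account of the table having been truncated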
The information on inserts, updates and deletes is gathered for pretty much all DML against tables (see an up-coming post for an example of this not being true). Direct-load SQL*Loader and other direct-load activity can skip being recorded, but insert-append, using the /*+ append */ hint, is recorded correctly {I suspect this was not true for V9 and 10.1 but am no longer sure}. This information is initially held in memory and only later pushed into DBA_TAB_MODIFICATIONS, so you may not see the latest information. Under Oracle 9 this information is flushed down every 15 minutes I believe, under 10.1 it is every 3 hours, and from 10.2 onwards the information is only flushed down when stats are gathered against the segment OR you manually flush it down to the database.
Flushing the latest information down is achieved with DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO. It is relatively quick to run, normally taking no more than a few seconds.
When statistics are gathered on a segment, any corresponding rows in DBA_TAB_MODIFICATIONS are deleted, not updated to zeros, and are recreated only when relevant inserts, updates, deletes or truncates occur on the segment.
The DBA_TAB_MODIFICATIONS view sits on top of sys.mon_mods_all$ as well as obj$, user$ and the usual suspects. sys.mon_mods_all$ does not contain any more information than the view exposes.
desc sys.dba_tab_modifications
 Name                                  Null?    Type
 ------------------------------------- -------- ------------
 TABLE_OWNER                                    VARCHAR2(30)
 TABLE_NAME                                     VARCHAR2(30)
 PARTITION_NAME                                 VARCHAR2(30)
 SUBPARTITION_NAME                              VARCHAR2(30)
 INSERTS                                        NUMBER
 UPDATES                                        NUMBER
 DELETES                                        NUMBER
 TIMESTAMP                                      DATE
 TRUNCATED                                      VARCHAR2(3)
 DROP_SEGMENTS                                  NUMBER

--View description
TEXT
-----------------------------------------------------------------------
SYS DBA_TAB_MODIFICATIONS 9
select u.name, o.name, null, null,
       m.inserts, m.updates, m.deletes, m.timestamp,
       decode(bitand(m.flags,1),1,'YES','NO'),
       m.drop_segments
from sys.mon_mods_all$ m, sys.obj$ o, sys.tab$ t, sys.user$ u
where o.obj# = m.obj#
and   o.obj# = t.obj#
and   o.owner# = u.user#
union all
select u.name, o.name, o.subname, null,
       m.inserts, m.updates, m.deletes, m.timestamp,
       decode(bitand(m.flags,1),1,'YES','NO'),
       m.drop_segments
from sys.mon_mods_all$ m, sys.obj$ o, sys.user$ u
where o.owner# = u.user#
and   o.obj# = m.obj#
and   o.type#=19
union all
select u.name, o.name, o2.subname, o.subname,
       m.inserts, m.updates, m.deletes, m.timestamp,
       decode(bitand(m.flags,1),1,'YES','NO'),
       m.drop_segments
from sys.mon_mods_all$ m, sys.obj$ o, sys.tabsubpart$ tsp, sys.obj$ o2, sys.user$ u
where o.obj# = m.obj#
and   o.owner# = u.user#
and   o.obj# = tsp.obj#
and   o2.obj# = tsp.pobj#

-- underlying sys.$ table
desc sys.mon_mods_all$
 Name                                  Null?    Type
 ------------------------------------- -------- ------
 OBJ#                                           NUMBER
 INSERTS                                        NUMBER
 UPDATES                                        NUMBER
 DELETES                                        NUMBER
 TIMESTAMP                                      DATE
 FLAGS                                          NUMBER
 DROP_SEGMENTS                                  NUMBER
Here is a demo of it in action:
TDB>-- clear down the test table.
TDB>-- if you do not have access to DBA_TAB_MODIFICATIONS change to ALL_TAB_MODIFICATIONS
TDB>drop table test_1 purge;
Table dropped.

TDB>create table test_1
  2  as select * from all_objects where rownum<1
  3  /
Table created.

TDB>select count(*) from test_1;

  COUNT(*)
----------
         0
1 row selected.

TDB>-- WHAT IS IN DBA_TAB_MODIFICATIONS
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /
no rows selected

TDB>-- OK, let's flush down the information
TDB>--
TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO
PL/SQL procedure successfully completed.

TDB>-- Still nothing as no activity has occurred on the table.
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /
no rows selected

TDB>-- Now create some data
TDB>insert into test_1
  2  select * from dba_objects where rownum <= 100
  3  /
100 rows created.

TDB>commit;
Commit complete.

TDB>select count(*) from test_1;

  COUNT(*)
----------
       100
1 row selected.

TDB>-- WHAT IS IN DBA_TAB_MODIFICATIONS
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /
no rows selected

TDB>-- OK, let's flush down the information
TDB>--
TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO
PL/SQL procedure successfully completed.

TDB>--
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP
------------ ---------- ---------- ---------- ---- -----------------
TEST_1              100          0          0 NO   02-JUL-2010 10:21
1 row selected.

TDB>-- The information in DBA_TAB_MODIFICATIONS is used by Oracle to detect if a table
TDB>-- (or partition in a table) is stale - changed by 10%
TDB>-- Gathering statistics on an object DELETES the record from DBA_TAB_MODIFICATIONS
TDB>-- rather than setting all the values to zero.
TDB>--
TDB>exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'TEST_1')
PL/SQL procedure successfully completed.

TDB>--
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /
no rows selected

TDB>-- now do some activity again and flush it down to the dictionary
TDB>insert into test_1
  2  select * from dba_objects where rownum <= 150
  3  /
150 rows created.

TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO
PL/SQL procedure successfully completed.

TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP
------------ ---------- ---------- ---------- ---- -----------------
TEST_1              150          0          0 NO   02-JUL-2010 10:21
1 row selected.

TDB>-- Direct inserts via insert-append are also captured (I think this might have changed)
TDB>insert /*+ append */ into test_1
  2  select * from dba_objects where rownum <= 170
  3  /
170 rows created.

TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO
PL/SQL procedure successfully completed.

TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP
------------ ---------- ---------- ---------- ---- -----------------
TEST_1              320          0          0 NO   02-JUL-2010 10:21
1 row selected.

TDB>-- now a mixed bag of inserts, updates and deletes
TDB>insert into test_1
  2  select * from dba_objects where rownum <= 75
  3  and owner not in ('SYS','SYSTEM')
  4  /
75 rows created.

TDB>--
TDB>update test_1 set created=sysdate
  2  where object_type !='TABLE'
  3  /
289 rows updated.

TDB>delete from test_1
  2  where object_type='SEQUENCE'
  3  /
10 rows deleted.

TDB>commit;
Commit complete.

TDB>--
TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO
PL/SQL procedure successfully completed.

TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP
------------ ---------- ---------- ---------- ---- -----------------
TEST_1              395        289         10 NO   02-JUL-2010 10:21
1 row selected.
If you want to see what else I said about DBA_TAB_MODIFICATIONS, then the end of this {way too long} post shows how to use it to get a very fast count of rows in a table. And if you have issues with permissions when flushing database monitoring information, see this link about the permissions needed.
Question. As the data gets flushed from memory to a data dictionary table, it persists across the database being bounced. But what happens to the in-memory data about table changes when there is a controlled shutdown, and when the database crashes?
SID stays same, AUDSID alters June 24, 2010
Posted by mwidlake in internals.Tags: data dictionary, sql*plus
2 comments
In a couple of recent posts I described what AUDSID is and how you check what your SID and AUDSID are.
Well, your SID may be a little more stable than you think. In the output below you can see that, as I use the SQL*Plus CONNECT command to change users, my AUDSID increases, new values being selected from the dictionary sequence, but my SID stays the same.
{The below is on 10.2.0.3, no RAC involved, using the Windows SQL*Plus client. All usernames, passwords and SIDs are not real, to protect the innocent}.
  1* SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL
TDB>
TDB> /

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
DWPERF      253779                             524

-- change user, same instance
TDB> CONNECT MDW/MDW@TDB
Connected.
-- {By the way, the information below, plus some I cut from the output, is from my logon.sql script}
you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off
TDB> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL
  2  /

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
MDW         253782                             524

-- The SID is exactly the same. The AUDSID has increased by 3. This is a quiet dev/test system but
-- I am not the only user and my session had been alive for about 30 minutes,
-- so I would guess two other people logged on in that time
-- However, would you have predicted that the SID would remain the same?
-- I will connect back to the original user
TDB> CONNECT DWPERF/DWPERF@TDB
Connected.
you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off
TDB> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL
  2  /

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
DWPERF      253783                             524

-- The SID stays the same still and the AUDSID goes up by one as it is a new session. No other new
-- sessions in the minute or so since I last changed accounts.
--
-- SYS is a different beast to all other accounts, so if I connect as SYS, will it force a new SID to be used?
TDB> CONNECT SYS/GUESS@TDB AS SYSDBA
Connected.
you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off
TDB> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL
  2  /

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
SYS         4294967295                         524

-- That did surprise me. I thought that not only would I get an AUDSID set to that special value but
-- that Oracle would create a new session to ensure it was clean.
--
-- Maybe the persistence of the SID is some trick of SQL*Plus? It knows I have not really logged out
-- of the database and just clears pointers and memory constructs, so I will log into a second instance
-- and back...
TDB> connect dwperf/dwperf@TDB2
Connected.
you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off
TDB2> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL;

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
DWPERF      233882                             489
1 row selected.

TDB2> connect dwperf/dwperf@TDB
Connected.
you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off
TDB> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL;
any key>

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
DWPERF      253785                             524
1 row selected.

-- Again I was surprised to see the same SID. The AUDSID has increased of course.
-- I do wonder if someone created a session whilst I did this test or if a new AUDSID value is selected
-- from the sequence and discarded by the SYS logon. I should test that...
--
-- What will happen if I log off SQL*Plus completely and log back in?
TDB>exit

-- and now I log back in...

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jun 24 14:23:46 2010
Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off
TDB> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL;
any key>

USER
------------------------------
SYS_CONTEXT('USERENV','SESSIONID')
--------------------------------------------------------------------------------
SYS_CONTEXT('USERENV','SID')
--------------------------------------------------------------------------------
DWPERF
253786
524

1 row selected.

-- SAME SID!!!!
I must confess, I have never really looked into the persistence of SIDs before; I did think that Oracle would increment the SID with each and every logon and cycle around. However, it seems to me that the Oracle kernel keeps the constructs of a session around when that session logs out and re-uses them if another session is created quite soon afterwards. It makes sense: why destroy everything if you can re-use it soon after? It will speed up connections. As anyone who has had to deal with an Oracle system with high session turnover knows, creating sessions is a very "heavy" task in Oracle. It seems lighter on 10 than 9 {and I am just going on "DBA feel" when I say this, no concrete evidence} and I wonder if this re-use of SIDs is part of that. I should google it.
I am constantly guilty of starting a blog on one topic and drifting into another. So to keep up the pattern…
I had to edit my output to make the results of my check on USER, SID and AUDSID look OK in this posting. This is because, though your SID may be the same between SQL*Plus sessions, your column definitions ain't. {If anyone thinks I should split this off into a dedicated blog post, just say so.}
TDB> col user form a11
TDB> col SYS_CONTEXT('USERENV','SID') form a28
TDB> col SYS_CONTEXT('USERENV','SESSIONID') form a34
TDB> /
any key>

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
DWPERF      253786                             524

-- col commands seem not to be persisted in SQL*Plus when you log onto a new session
TDB> connect mdw/mdw@TDB
Connected.
you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off
TDB> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL;
any key>

USER
------------------------------
SYS_CONTEXT('USERENV','SESSIONID')
--------------------------------------------------------------------------------
SYS_CONTEXT('USERENV','SID')
--------------------------------------------------------------------------------
ODS
253795
524

1 row selected.

TDB>
-- You see?
-- An easier way to demo this is to check the defined variables and col settings with the
-- very useful, and often overlooked, naked DEFINE and COL commands
TDB> define
DEFINE _DATE                = "24-JUN-2010 14:31" (CHAR)
DEFINE _CONNECT_IDENTIFIER  = "TDB" (CHAR)
DEFINE _USER                = "MDW" (CHAR)
DEFINE _PRIVILEGE           = "" (CHAR)
DEFINE _SQLPLUS_RELEASE     = "1002000100" (CHAR)
DEFINE _EDITOR              = "Notepad" (CHAR)
DEFINE _O_VERSION           = "Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options" (CHAR)
DEFINE _O_RELEASE           = "1002000300" (CHAR)
DEFINE V_NEWPRO             = "TDB" (CHAR)
TDB> col
SP2-0045: * no COLUMN defined
-- and just to show COL shows defined column definitions...
TDB> col eric form a30
TDB> col
COLUMN   eric ON
FORMAT   a30
TDB>