Friday Philosophy – The power of cooperation June 27, 2010
Posted by mwidlake in Friday Philosophy, Perceptions, performance. Tags: behaviour, perception, performance
3 comments
Being the person responsible for the performance of an Oracle-based system can be an oddly lonely working experience. It can also be a very gregarious position as you get to meet a lot of people and discuss lots of different aspects of many systems – all systems seem to have performance issues and so people come along to you, hoping you can identify a “work faster” database setting for them.
But you are often the only person who specialises in Oracle performance. You generally need to be in an organisation that is very large, or where performance is key to success, for there to be justification for dedicating a person to the topic. To have more than one person dedicated to performance, your organisation has to have a very strong focus on getting the best performance out of the Oracle and related systems {or really, really atrocious performance issues 🙂 }. So usually there is no one else around who is as experienced as (or more experienced than) yourself to discuss such things with, or to ask for a second opinion.
Which is why I am very lucky at the moment. I’m working in a team of Oracle performance people. There are 2.5 of us (one is a manager with other responsibilities, so he only counts as half). Being able to say “Hey, Dave, what do you think of the wait times on scattered reads?” or “How in heck do I force this nested subquery on a view to use a hash join?” and get some other ideas is very valuable.
What is also interesting is how opinions and preferred techniques on tuning can be different and just as valid. As an example, last week I was working on a poorly performing statement. I was at home and it was the evening, so I was not communicating with the rest of the team. I managed to get the code down from 40 minutes to just under 20 by using a combination of a LEADING and a USE_HASH hint. I sent the code back to the user, only to see that, within thirty seconds of mine, my colleague Graeme had also sent the user a response, again getting the code down to around 20 minutes. Graeme had pulled a chunk of the code into a subquery factoring “WITH” clause and added cardinality hints. Totally different changes.
So Graeme and I then had a “philosophical” discussion about the different changes {“Mine is best” – “No! Mine is, yours is all bloated and complex” – “Your hint is less future-flexible!!!”}. Only joking; we actually discussed the changes and why we each chose what we did. Graeme commented that it showed that tuning was an art and not a science, and I countered that it was a science, as we had both identified where we felt the execution plan could be improved but had used different techniques to get there. The thing is, Oracle is so complex and has so many options for influencing the optimiser that you have the flexibility to choose different tools and techniques.
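To illustrate the two styles {this is a made-up sketch on hypothetical ORDERS and CUSTOMERS tables, not the actual statement we tuned, and the hint values are invented}:

```sql
-- Style 1: leave the SQL alone and steer the optimiser with join hints
select /*+ LEADING(o) USE_HASH(c) */
       o.order_date, c.cust_name
from   orders o
       join customers c on c.cust_id = o.cust_id
where  o.order_date > sysdate - 7;

-- Style 2: pull the driving rows into a WITH clause and hint the cardinality
with recent_orders as (
  select /*+ CARDINALITY(o 500) */
         o.cust_id, o.order_date
  from   orders o
  where  o.order_date > sysdate - 7
)
select ro.order_date, c.cust_name
from   recent_orders ro
       join customers c on c.cust_id = ro.cust_id;
```

Both push the optimiser towards the same sort of plan; the first is terser, the second arguably documents the intent better.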
We had both identified the same main improvement but had each come up with different tweaks for later in the plan.
The end result was that we went with Graeme’s main change {he is bigger than me} but we pulled in my tweak. That brought the execution time down to around 10 minutes, so about four times faster overall and twice as fast as either of us had achieved alone. That is one of the advantages of not working alone.
We also then discussed how we could get this code down to seconds with the use of either materialized views or changing the process that generated the report to do so incrementally and store the daily results. Until one of us realised we had reached the boundary of compulsive tuning disorder. The report running in 10 minutes was enough improvement to satisfy the business, and the report was only going to be run over the next couple of months, so spending a day or two re-working it further was going to be fun – but of no advantage to the business. We told each other to finish for the day. So another advantage of not working alone is that not only do you get more technical input, but you help prevent each other losing sight of the overall aim.
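For what it’s worth, the materialized view idea we talked ourselves out of would have looked roughly like this {a hypothetical sketch – REPORT_SOURCE and its columns are invented, and you would pick the refresh schedule to suit the report}:

```sql
-- Pre-aggregate the daily figures once a night rather than
-- re-summarising the whole history on every run of the report
create materialized view daily_report_mv
  build immediate
  refresh complete
  start with trunc(sysdate) + 1
  next trunc(sysdate) + 2
as
select trunc(report_date) as report_day
      ,count(*)           as row_cnt
      ,sum(amount)        as total_amount
from   report_source
group by trunc(report_date);
```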
It really does help to have two people working on the same area.
{There is a sneaky way of getting beyond being a lone performance specialist. If you are in an organisation long enough you can usually find some other idiot who is silly enough to want to learn more about performance and you can train them up. It does not take long before they know enough to start coming up with things you never thought of. Oracle is, after all, full of many ways to do the same thing and you can’t know it all}.
SID stays same, AUDSID alters June 24, 2010
Posted by mwidlake in internals. Tags: data dictionary, sql*plus
2 comments
In a couple of recent posts I described what AUDSID is and how you check what your SID and AUDSID are.
Well, your SID may be a little more stable than you think. In the output below you can see that, as I use the SQL*Plus connect command to change users, my AUDSID increases, new values being selected from the dictionary sequence, but my SID stays the same.
{The below is on 10.2.0.3, no RAC involved, using the Windows SQL*Plus client. All usernames, passwords and SIDs are not real, to protect the innocent.}
  1* SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL
TDB> /

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
DWPERF      253779                             524

-- change user, same instance
TDB> CONNECT MDW/MDW@TDB
Connected.
-- {By the way, the information below, plus some I cut from the output, is from my logon.sql script}
you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off

TDB> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL
  2  /

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
MDW         253782                             524

-- The SID is exactly the same. The AUDSID has increased by 3. This is a quiet dev/test system but
-- I am not the only user and my session had been alive for about 30 minutes,
-- so I would guess two other people logged on in that time.
-- However, would you have predicted that the SID would remain the same?
-- I will connect back to the original user
TDB> CONNECT DWPERF/DWPERF@TDB
Connected.
you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off

TDB> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL
  2  /

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
DWPERF      253783                             524

-- The SID stays the same still and the AUDSID goes up by one as it is a new session. No other new
-- sessions in the minute or so since I last changed accounts.
--
-- SYS is a different beast to all other accounts, so if I connect as SYS, will it force a new SID to be used?
TDB> CONNECT SYS/GUESS@TDB AS SYSDBA
Connected.
you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off

TDB> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL
  2  /

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
SYS         4294967295                         524

-- That did surprise me. I thought that not only would I get an AUDSID set to that special value but
-- that Oracle would create a new session to ensure it was clean.
--
-- Maybe the persistence of the SID is some trick of SQL*Plus? It knows I have not really logged out
-- of the database and just clears pointers and memory constructs, so I will log into a second instance
-- and back...
TDB> connect dwperf/dwperf@TDB2
Connected.
you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off

TDB2> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL;

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
DWPERF      233882                             489

1 row selected.

TDB2> connect dwperf/dwperf@TDB
Connected.
you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off

TDB> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL;
any key>

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
DWPERF      253785                             524

1 row selected.

-- Again I was surprised to see the same SID. The AUDSID has increased of course.
-- I do wonder if someone created a session whilst I did this test or if a new AUDSID value is selected
-- from the sequence and discarded by the SYS logon. I should test that...
--
-- What will happen if I log off SQL*Plus completely and log back in?
TDB> exit

-- and now I log back in...

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jun 24 14:23:46 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off

TDB> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL;
any key>

USER
------------------------------
SYS_CONTEXT('USERENV','SESSIONID')
--------------------------------------------------------------------------------
SYS_CONTEXT('USERENV','SID')
--------------------------------------------------------------------------------
DWPERF
253786
524

1 row selected.

-- SAME SID!!!!
I must confess, I have never really looked into the persistence of SIDs before; I did think that Oracle would increment the SID with each and every logon and cycle around. However, it seems to me that the Oracle kernel keeps the constructs of a session around when that session logs out and re-uses them if another session is made quite soon afterwards. It makes sense – why destroy everything if you can re-use it soon after? It will speed up connections. As anyone who has had to deal with an Oracle system with high session turnover knows, creating sessions is a very “heavy” task in Oracle. It seems lighter on 10g than 9i {and I am just going on “DBA feel” when I say this, no concrete evidence} and I wonder if this re-use of SIDs is part of that. I should google it.
I am constantly guilty of starting a blog on one topic and drifting into another. So to keep up the pattern…
I had to edit my output to make the results of my check on USER, SID and AUDSID look OK in this posting. This is because, though your SID may be the same between SQL*Plus sessions, your column definitions ain’t. {If anyone thinks I should split this off to a dedicated BLOG post, just say so}
TDB> col user form a11
TDB> col SYS_CONTEXT('USERENV','SID') form a28
TDB> col SYS_CONTEXT('USERENV','SESSIONID') form a34
TDB> /
any key>

USER        SYS_CONTEXT('USERENV','SESSIONID') SYS_CONTEXT('USERENV','SID')
----------- ---------------------------------- ----------------------------
DWPERF      253786                             524

-- col commands seem not to be persisted in SQL*Plus when you log onto a new session
TDB> connect mdw/mdw@TDB
Connected.
you are utilising central dba scripts
altering the default date format to be dd-mon-yyyy hh24:mi
pause on, pages 24 head on recsep off

TDB> SELECT USER,sys_context('USERENV','SESSIONID'),SYS_CONTEXT('USERENV','SID') FROM DUAL;
any key>

USER
------------------------------
SYS_CONTEXT('USERENV','SESSIONID')
--------------------------------------------------------------------------------
SYS_CONTEXT('USERENV','SID')
--------------------------------------------------------------------------------
ODS
253795
524

1 row selected.

TDB>
-- You see?
-- An easier way to demo this is to check the defined variables and col settings with the
-- very useful, and often overlooked, naked DEFINE and COL commands
TDB> define
DEFINE _DATE               = "24-JUN-2010 14:31" (CHAR)
DEFINE _CONNECT_IDENTIFIER = "TDB" (CHAR)
DEFINE _USER               = "MDW" (CHAR)
DEFINE _PRIVILEGE          = "" (CHAR)
DEFINE _SQLPLUS_RELEASE    = "1002000100" (CHAR)
DEFINE _EDITOR             = "Notepad" (CHAR)
DEFINE _O_VERSION          = "Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options" (CHAR)
DEFINE _O_RELEASE          = "1002000300" (CHAR)
DEFINE V_NEWPRO            = "TDB" (CHAR)
TDB> col
SP2-0045: * no COLUMN defined
-- and just to show COL shows defined column definitions...
TDB> col eric form a30
TDB> col
COLUMN   eric ON
FORMAT   a30
TDB>
SQL*Plus Line Insertion June 22, 2010
Posted by mwidlake in development. Tags: SQL
4 comments
I’m quite new to SQL*Plus, having only been using it for just shy of 19 years… And today I learnt something new about it. You can over-write lines anywhere in the buffer by stating the line number:
  1  select p.surname
  2  ,p.first_forename
  3  ,a.house_number
  4  ,a.post_code
  5  from person p
  6  ,adres a
  7* where p.addr_id=a.addr_id

-- Damn, miss-spelt address in line 6
TDB> 6 ,address a
TDB> l
  1  select p.surname
  2  ,p.first_forename
  3  ,a.house_number
  4  ,a.post_code
  5  from person p
  6  ,address a
  7* where p.addr_id=a.addr_id
I use SQL*Plus a lot as (a) I am getting stuck in my ways (b) it is always available on all Oracle sites (c) I can use all those useful little scripts I’ve developed over the years. Using “edit” to fire up vi or notepad to edit the file is fine but for correcting a typo I usually type in the line number and c/mistake/correct. This other method I’ve just come across of replacing the line could be useful.
But I played around and found I could do a couple of other things. I can add lines at the end of the script {maybe not so useful as, when you do a list, you are at the end of the buffer so you just insert and type} – but I can also insert lines at the START of the script:
  1  select p.surname
  2  ,p.first_forename
  3  ,a.house_number
  4  ,a.post_code
  5  from person p
  6  ,address a
  7* where p.addr_id=a.addr_id
TDB> 8 and a.dob <sysdate-(18*365)
TDB> l
  1  select p.surname
  2  ,p.first_forename
  3  ,a.house_number
  4  ,a.post_code
  5  from person p
  6  ,address a
  7  where p.addr_id=a.addr_id
  8* and a.dob <sysdate-(18*365)

-- if you enter a line number a way beyond the end of the buffer, SQL*Plus
-- intelligently corrects it to the next valid line number
TDB> 12 order by 1,2
TDB> l
  1  select p.surname
  2  ,p.first_forename
  3  ,a.house_number
  4  ,a.post_code
  5  from person p
  6  ,address a
  7  where p.addr_id=a.addr_id
  8  and a.dob <sysdate-(18*365)
  9* order by 1,2

-- And it works from the other end of the file. Only it does not replace the
-- first valid line, it inserts the new line and moves all the others "down".
TDB> 0 select count(*) from (
TDB> l
  1  select count(*) from (
  2  select p.surname
  3  ,p.first_forename
  4  ,a.house_number
  5  ,a.post_code
  6  from person p
  7  ,address a
  8  where p.addr_id=a.addr_id
  9  and a.dob <sysdate-(18*365)
 10* order by 1,2
TDB> a )
 10* order by 1,2 )

-- script finished...
TDB> /
and a.dob <sysdate-(18*365)
    *
ERROR at line 9:
ORA-00904: "A"."DOB": invalid identifier

-- Damn! another typo.
-- I think in this case I will just go to the line and Change the character - it
-- is less effort than typing the whole line again.
TDB> 9
  9* and a.dob <sysdate-(18*365)
TDB> c/a./p./
  9* and p.dob <sysdate-(18*365)
TDB> l
  1  select count(*) from (
  2  select p.surname
  3  ,p.first_forename
  4  ,a.house_number
  5  ,a.post_code
  6  from person p
  7  ,address a
  8  where p.addr_id=a.addr_id
  9  and p.dob <sysdate-(18*365)
 10* order by 1,2 )
TDB> /
Any Key>

  COUNT(*)
----------
     31963

1 row selected.
Up until now, to insert a new line at the start of the file I either had to save and edit it, or I altered the first line and re-inserted the original. I do quite often want to insert a first line, as I like to use “select count(*) from (original_query)” to test the performance of a SQL statement without having the actual data sent over SQL*Net to my session. It takes that network activity out of the equation.
The other thing I do occasionally is add an explain plan statement:
TDB> l
  1  select count(*) from (
  2  select p.surname
  3  ,p.first_forename
  4  ,a.house_number
  5  ,a.post_code
  6  from person p
  7  ,address a
  8  where p.addr_id=a.addr_id
  9  and p.dob <sysdate-(18*365)
 10* order by 1,2 )
TDB> -13 explain plan set statement_id='MDW' for
TDB> l
  1  explain plan set statement_id='MDW' for
  2  select count(*) from (
  3  select p.surname
  4  ,p.first_forename
  5  ,a.house_number
  6  ,a.post_code
  7  from person p
  8  ,address a
  9  where p.addr_id=a.addr_id
 10  and p.dob <sysdate-(18*365)
 11* order by 1,2 )
TDB> /

Explained.

TDB>
dbms_stats.set_table_stats “defaults” June 21, 2010
Posted by mwidlake in internals, statistics. Tags: performance, SQL, statistics
2 comments
What happens if you call dbms_stats.set_table_stats without passing in any of the values to set?
I know, why would you do it anyway? Well, I did so by accident. If I want to gather quick stats on a test table I execute something like:
exec dbms_stats.gather_table_stats(OWNNAME =>user,TABNAME =>'TEST_TAB1')
If I am feeling generous I might state ESTIMATE_PERCENT too.
I was doing some testing work, gathering stats and also setting stats manually. Then I started to see that several of my test tables all had 2000 rows and were 100 blocks in size – at least according to the stats. I knew this was not possible. It turned out to be cut ’n’ paste finger trouble: I was issuing
exec dbms_stats.set_table_stats(OWNNAME =>user,TABNAME =>'TEST_TAB1')
If the table did not already have stats this set the stats on the table to default values of 2000 rows, 100 blocks. If the table already had stats then they were left as they were.
If those figures ring a bell, then that is because they are the default values used if a table has no stats and you have no dynamic sampling. See this table of defaults
Anyway, below is a little worked example of these default values being set. Oh, version is 10.2.0.3.
TDB> drop table TEST1 purge
  2  /
TDB> select sysdate from dual;

SYSDATE
-----------------
21-JUN-2010 16:52

TDB> CREATE TABLE TEST1
  2  AS SELECT ROWNUM ID
  3    ,OBJECT_NAME OBJ_NAME
  4  FROM DBA_OBJECTS

TDB> SELECT TABLE_NAME,NUM_ROWS,BLOCKS,SAMPLE_SIZE
  2  FROM DBA_TABLES
  3  WHERE OWNER=USER AND TABLE_NAME = 'TEST1'
  4  /

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1

-- New table, no stats yet gathered, the columns hold null
-- Call dbms_stats.SET_TABLE_STATS, setting nothing
TDB> EXEC dbms_stats.set_table_stats(ownname => user,tabname => 'TEST1')

TDB> SELECT TABLE_NAME,NUM_ROWS,BLOCKS,SAMPLE_SIZE
  2  FROM DBA_TABLES
  3  WHERE OWNER=USER AND TABLE_NAME = 'TEST1'

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1                                2000        100        2000

-- The columns are set to defaults
-- Gather proper stats
TDB> exec dbms_stats.gather_table_stats(ownname =>user,tabname =>'TEST1',estimate_percent=>10)

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1                              205430        956       20543

-- Now use SET_TABLE_STATS as intended, setting numrows to a value
TDB> EXEC dbms_stats.set_table_stats(ownname => user,tabname => 'TEST1',numrows=>5000)

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1                                5000        956       20543

-- Try the naked SET_TABLE_STATS
TDB> EXEC dbms_stats.set_table_stats(ownname => user,tabname => 'TEST1')

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1                                5000        956       20543

TDB> select sysdate from dual;
Any Key>

SYSDATE
-----------------
21-JUN-2010 16:52

-- And let us see how the stats have changed over the last few minutes.
TDB> select table_name,stats_update_time
  2  from dba_tab_stats_history
  3  where table_name = 'TEST1'
  4  /
Any Key>

TABLE_NAME                     STATS_UPDATE_TIME
------------------------------ --------------------------------
TEST1                          21-JUN-10 16.52.03.028086 +00:00
TEST1                          21-JUN-10 16.52.05.109905 +00:00
TEST1                          21-JUN-10 16.52.06.906204 +00:00
TEST1                          21-JUN-10 16.52.08.329664 +00:00
Who am I? {What is my SID} June 18, 2010
Posted by mwidlake in internals. Tags: data dictionary, SQL
2 comments
Yesterday, I posted about what AUDSID is. One of the comments, from Gary Myers, included the info that you can get AUDSID via sys_context – “select sys_context(‘USERENV’,’SESSIONID’) from dual;”.
In one of those wondrous little quirks of fate, it was the need to get my session’s current SID, and the use of “select sys_context(‘USERENV’,’SESSIONID’) from dual;”, that reminded me of AUDSID and my intention to say something about it. SESSIONID from that little select is the AUDSID, and you then check V$SESSION for the SID of the record with that AUDSID. I guess that works so long as you are NOT logged on as SYS (see the previous post for why). See the script below for an example, in the commented-out section.
Anyway, back to the nominal topic of getting your SID. Lots of people have posted about it before, but let not plagiarism stop me. I actually tend to use v$mystat myself. From my big book of little scripts I drag from site to site:
-- my_sid.sql
-- Martin Widlake 09/06/08
-- Get my current SID
-- I can never remember the syntax
set pages 32
set pause on
spool my_sid.lst
select sid from v$mystat where rownum < 2
/
--or
--select sys_context('USERENV','SID')
--from dual
--/
-- or
-- SELECT sid
-- FROM V$SESSION
-- WHERE audsid = SYS_CONTEXT('userenv','sessionid');
--/
spool off
clear col
--
-- EOF
--

>@my_sid

      SID
---------
      530
Boy does it take me a lot of words and even a 31-line output file to say:
select sid from v$mystat where rownum < 2;
I do wish, like many, that you could simply say “select sid from dual;” in the same way as “select user from dual”, but in the great scheme of things it does not keep me awake at night.
What is AUDSID June 17, 2010
Posted by mwidlake in internals. Tags: audit, data dictionary, performance
15 comments
If you look at v$session you see SID and SERIAL#, which most of us DBA-types know uniquely identify a session and allow you to kill it {with the relevant permissions, and the knowledge that you are not about to compromise a business process}.
But what is AUDSID?
desc v$session
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 SADDR                                              RAW(8)
 SID                                                NUMBER
 SERIAL#                                            NUMBER
 AUDSID                                             NUMBER
 PADDR                                              RAW(8)
 USER#                                              NUMBER
 USERNAME                                           VARCHAR2(30)
 COMMAND                                            NUMBER
 OWNERID                                            NUMBER
 TADDR                                              VARCHAR2(16)
 LOCKWAIT                                           VARCHAR2(16)
 STATUS                                             VARCHAR2(8)
 SERVER                                             VARCHAR2(9)
 SCHEMA#                                            NUMBER
 SCHEMANAME                                         VARCHAR2(30)
 OSUSER                                             VARCHAR2(30)
...
AUDSID is a unique identifier for the session and is used in sys.aud$ , as the SESSIONID column. It is the leading column of the only index on sys.aud$
IND_OWNER   IND_NAME           TAB_NAME           PSN       COL_NAME
----------- ------------------ ------------------ --------- ------------
SYS         I_AUD1             AUD$               1         SESSIONID
                                                  2         SES$TID
All audit records for a session are recorded with the same SESSIONID/AUDSID. Activity can be audited at session level or statement level. If audited at session level, only a single record for that audited thing is recorded in the log; it is updated to indicate any new audited actions that occur on that thing for the duration of the session. So it is important that Oracle can efficiently identify and update that record, as the impact of SQL AUDIT on database performance needs to be kept to a minimum.
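You can see this linkage for yourself {a sketch, assuming some session-level auditing is in place; the SES$ACTIONS column is the one Oracle updates in place with the success/failure flags for the session-level record}:

```sql
-- Find the audit rows for my own session by joining on the AUDSID
select sessionid, userid, obj$name, ses$actions
from   sys.aud$
where  sessionid = sys_context('USERENV','SESSIONID');
```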
How does AUDSID relate to SID, SERIAL#, USERNAME etc? Let’s have a quick look.
select sid, serial# ,audsid ,username,OSUSER,schemaname
from v$session

  SID    SERIAL#     AUDSID USERNAME             OSUSER                    SCHEMANAME
----- ---------- ---------- -------------------- ------------------------- --------------
  485       4745     165550 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
  487      23712     165546 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
  497      40388          0                      oracle                    SYS
  498      19269          0                      oracle                    SYS
  502      13362     165432 EAAAAA               govanii                   EAAAAA
  505        407     163821 LEXAAAAA             oracle                    LEXAAAAA
  506       6302 4294967295 SYS                  widlake                   SYS
  511      11702     165518 OAAAAA               backerella                OAAAAA
  512      17076     165490 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
  518       9066     165507 OAAAAA               Manoled                   OAAAAA
  519       6956     163976 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
  521      15272 4294967295 SYS                  widlake                   SYS
  523       4825     163975 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
  536      63941 4294967295 SYS                  backerella                SYS
  524      19740     165548 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
  525        850     165549 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
  526      45112     165516 XXX                  backerella                XXX
  527      11086     163963 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
  529       1662     163818 LEX_AAAAA            oracle                    LEX_AAAAA
  530      62788 4294967295 SYS                  widlake                   SYS
  537          1          0                      oracle                    SYS
  540          1          0                      oracle                    SYS
It does not. But it is interesting to note that this database does not have auditing enabled but the AUDSID is still populated.
It is also interesting that all SYS accounts have either an AUDSID of 0 or the same AUDSID, 4294967295 {which is 4294967296 – 1, i.e. 2^32 – 1}. I might come back to that shared AUDSID at a later date…
All the “normal” AUDSIDs are unique and sit in roughly the same range, 163900 to 165560.
If you look at sys.aud$ the SESSIONID increments as time goes on. By this I do not mean it is in order of SESSIONID, it is not, but as time goes by the SESSIONID increments. That sounds a lot like what you would see with a sequence to me. And in fact that is because the value IS from a sequence:
select * from dba_sequences where sequence_name = 'AUDSES$'

SEQUENCE_OWNER  SEQUENCE_NAME    MIN_VALUE  MAX_VALUE INCREMENT_BY cycle O CACHE_SIZE LAST_NUMBER
--------------- --------------- ---------- ---------- ------------ ----- - ---------- -----------
SYS             AUDSES$                  1 2000000000            1 Y     N      10000      173695
So the AUDSID is a unique value that comes from a sequence and is used in the audit trail. All SYS accounts have the same AUDSID or have it set to zero.
Why do many sessions have the AUDSID set to zero? Because they are internal Oracle processes:
select sid, serial# ,audsid ,username,OSUSER,schemaname,program
from v$session
where AUDSID = 0

  SID    SERIAL#     AUDSID USERNAME     OSUSER       SCHEMANAME   PROGRAM
----- ---------- ---------- ------------ ------------ ------------ ----------------------------
  506       6887          0              oracle       SYS          oracle@db25.eng.ham.uk. (q001)
  526      45682          0              oracle       SYS          oracle@db25.eng.ham.uk. (q002)
  531         91          0 YYYYY        oracle       DLPP1        oracle@db25.eng.ham.uk. (J000)
  533          3          0              oracle       SYS          oracle@db25.eng.ham.uk. (CJQ0)
  537          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (q000)
  540          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (QMNC)
  546          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (MMNL)
  547          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (MMON)
  548          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (RECO)
  549          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (SMON)
  550          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (CKPT)
  551          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (LGWR)
  552          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (DBW0)
  553          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (MMAN)
  554          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (PSP0)
  555          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (PMON)
The activity of PMON, SMON, CKPT is not audited of course. And job schedulers doing nothing do not have their activity audited either.
Oh, you remember I said AUDSID was unique? Well, check back at the sequence. It cycles, with a maximum value of 2,000,000,000, so it is going to start repeating values after 2 billion sessions. Make a note to ensure you have cleaned up your audit trail before you hit your two-billion-and-first session…
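If you are curious how far through the cycle your database is, something like this gives a rough idea {assuming, of course, you have not wrapped already, in which case LAST_NUMBER alone tells you little}:

```sql
-- How much of the AUDSES$ cycle has been consumed so far?
select last_number
      ,round(100 * last_number / max_value, 4) as pct_of_cycle_used
from   dba_sequences
where  sequence_owner = 'SYS'
and    sequence_name  = 'AUDSES$';
```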
—
Saturday Philosophy – The unbelievably small world of VLDBs June 12, 2010
Posted by mwidlake in VLDB. Tags: Meeting, rant, user group, VLDB
9 comments
Yesterday I posted about the potential for an Oracle in Science community within the UK Oracle user group {and wider for that matter; there is, after all, a world Oracle Life Science community, but it is currently less vibrant than it was, sadly}.
My friend and occasional drinking partner Peter Scott replied to say he felt there was “a place for a SIG for stonking great databases” {now wouldn’t SGDB be a better TLA than VLDB? 🙂 }.
Well, I would agree but for one small detail. An apparent lack of anyone willing to be part of the community.
When I was building a very considerable VLDB {and I’m sorry I keep going on about it, I’ll try and stop soon} back in the early-to-mid 2000s, I seemed to be working in a vacuum of information, let alone prior experience. Yes, there was stuff in the Oracle manuals about how big things could theoretically be made, and some vague advice on some aspects of it, but an absolute lack of any visible Oracle customers with anything even approaching the sizes I was contemplating. 2TB was about the limit of what was talked about and I was already way beyond that. Was this because I really was pushing the boundaries of database size? Well, I have since found out that whilst I was up there just behind the leading edge, there were several databases much, much bigger than mine, and others already envisioned that might hit the Petabyte level, let alone hundreds of Terabytes.
The thing is, no one would speak about them. At all.
We were left to do it all pretty much from scratch, and it would not have been possible had I not spent years building up experience with VLDBs as the definition of VLDB size increased – plus, of course, cracking support from the other DBAs and Systems Admins around me. And to be fair, Oracle Corp helped us a lot with our efforts to build these massive databases. Interestingly, one Oracle consultant would regularly tell me that our systems really were not so unusually big and there were plenty larger. He usually said this when I asked, exasperatedly, as something else failed to scale, whether Oracle had ever tested things at this level :-). But despite constantly asking to meet these people with massive systems, so we could exchange war stories and share advice, and being promised such contacts by Oracle, they never materialised – except for CERN, who we already talked to as a fellow scientific organisation, and Amazon, who it turns out did things in a very different way to us {but it was really good to talk to them and find out how they did do their big databases, thanks guys}. Both were at the same scale or just behind where we were.
This is because most of the people with massive Oracle databases will not talk about them, as they are either run by the largest financial organisations, are to do with defence, or are in some other way just not talked about. In his comment Peter refers to a prior client with an OLTP-type system that is now around the PB scale. I would be pretty sure Peter can’t say who the client is or give any details about how the system was designed.
So although I think there is a real need for a “stonking great databases” forum, I think there is a real problem in getting a user community of such people/organisations together. And if you did, none of the members would be allowed to say much about how they achieved it, so all you could do would be to sit around and brag about who has the biggest. There is an Oracle community about such things, called the Terabyte Club, but last I knew it was invite-only, and when I managed to get invited it turned out that mine was the biggest by a considerable margin, so I was still not meeting these elusive groups with 500TB databases. Maybe there is an Oracle-supported über database society, but as I never signed the Official Secrets Act I might not have been eligible to play.
If I am wrong and anyone does form such a user group (or is in one!) I would love to be a member and I would strive to present and help.
I’ll finish with what appears to be a contradiction to what I have just written. There already is a UKOUG user group that deals with large systems, and I chair it – the Management and Infrastructure SIG {sorry, the info on the web page could do with some updating}. Part of what we cover is VLDBs. But we also cover Very Many DataBases (companies with thousands of instances) and Very Complex DataBases, plus how you go about the technical and management aspects of working in a massive IT infrastructure. It might be that we could dedicate a meeting to VLDBs and see how it goes, but I know that whilst many who come along are dealing with databases of a few TB, no one is dealing with hundreds-of-TB or PB databases. Either that or they are keeping quiet about it, which takes us back to my main point. The MI SIG is probably the closest to a VLDB SIG we have in Europe though, and is a great bunch of people, so if you have a VLDB and want to meet some fellow sufferers, we have our next meeting on 23rd September in the Oracle City office.
Friday Philosophy – The Science of Oracle June 11, 2010
Posted by mwidlake in Friday Philosophy, Meeting notes.Tags: Meeting, science, user group
3 comments
The title to this blog is very misleading. It is not about scientifically understanding how Oracle technologies work or even about the technology itself.
It is actually about the fact that a lot of scientific organisations, both academic and commercial, work with Oracle technology in ways directly to do with the science {as opposed to using it for CRM, HR or tracking students and results, which they also do, but I’m not interested in that}.
If you have worked in academia or in charitable scientific organisations, it can be a little surprising that Oracle is used so much, as it is expensive and corporate – such environments tend to be poor and anti-corporate. But the thing is, Oracle can handle large amounts of complex data, in many formats, in many ways, and most programming languages can easily access the data in the database. You can achieve a lot with just PL/SQL and Java, of course.
Commercial scientific organisations, like large pharmaceuticals, use it for the same reasons of course, but for them the cost is not such an issue {I can imagine IT managers in such organisations saying “It damn well IS an issue!”, but trust me, not in the same critical way}.
What is the point of this blog? Well, it’s about user communities. The scientific community have a tendency to push the Oracle database further than most Oracle users do. Take data volumes. I worked for many years for the UK-side of the Human Genome Project and part of what I did was create an Oracle database that scaled to 100TB. Even now that is pretty large but I was designing and implementing it back in 2004-2005. The data volumes CERN are going to have to handle for the Large Hadron Collider just dwarf that, and they only hold summarised data of summaries of the actual raw scientific data generated.
Another aspect is coping with very rapid change, for example in systems that support lab processes. This is similar to a standard factory management system, except that the level of change can be daunting. The process can change weekly as the science and techniques improve in the lab. Those scientists might even completely change what they are doing when some unexpected avenue opens up. I say “might”; it seemed to happen every month.
In scientific organisations there tends to be more openness about what they do and how they do it. Academic and charitable scientific organisations tend to put fewer barriers in the way of exchanging knowledge than corporations do, which encourages more exchange of information. When I was working in the area I was positively encouraged to go to conferences and present. Obviously this is not always true, and scientific corporations, like pharmaceuticals, have gained {rightly or wrongly} a reputation for being very reticent about sharing any knowledge at all. But often the individuals involved will share.
So, the scientific community pushes areas of the technology very hard, they tend to be an open bunch of people, cost is often critical and, the final thing I have not mentioned, they often speak a language only vaguely recognisable as English, thanks to the jargon. Sounds like a community to me.
The real reason I mention all this is that it looks like, after about 4 years of considering and discussing having a science SIG {Special Interest Group} in the UKOUG, I will finally be putting together an agenda for an initial meeting for such a thing. I wonder if it will be a success?
I can’t Explain Why June 8, 2010
Posted by mwidlake in internals, performance.Tags: data dictionary, explain plan, SQL
3 comments
Have you ever tried to use Explain Plan and had it give you an error like the one below (this is on 10.2)?
DWDBT1> set autotrace on
  1  select count(*)
  2  from person pers
  3      ,person_name pena
  4  where pena.pers_id=pers.pers_id
  5* and pena.surname='SMITH'
DWDBT1> /

  COUNT(*)
----------
     23586
1 row selected.

Execution Plan
----------------------------------------------------------
ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-00959: tablespace 'USER_TEMP' does not exist

SP2-0612: Error generating AUTOTRACE EXPLAIN report
I seem to run into this once a year or so and, as it has happened three times now, this time around my brain had at long last hung onto the details and I remembered the cause. As a clue, I run into this issue when I am using a new development database that was derived from a live one…
As you can see from the above error message, tablespace USER_TEMP does not exist. How can you be referencing a tablespace that does not exist? First thing, does the tablespace really not exist and thus the error is real?
DWDBT1> @ts_lst
Enter value for ts_name: u
old  12: where tablespace_name like upper('&ts_name'||'%')
new  12: where tablespace_name like upper('u'||'%')
Any Key>;

TS_NAME                    INI_EXT_K NEXT_EXT_K MIN_EX     MAX_EX PCT MIN_EXTLN ST
-------------------------- --------- ---------- ------ ---------- --- --------- --
UNDOTBS                           64                 1 2147483645            64 ON
USERS                             40         40      1 2147483645   0        40 ON

2 rows selected.
As you can see, there is no tablespace USER_TEMP. So it must be something to do with PLAN_TABLE, the table that underlies EXPLAIN PLAN. So let’s check that the table exists.
DWDBT1> desc plan_table
 Name                                                        Null?    Type
 ----------------------------------------------------------- -------- ---------------------
 STATEMENT_ID                                                         VARCHAR2(30)
 PLAN_ID                                                              NUMBER
 TIMESTAMP                                                            DATE
 REMARKS                                                              VARCHAR2(4000)
 OPERATION                                                            VARCHAR2(30)
 OPTIONS                                                              VARCHAR2(255)
 OBJECT_NODE                                                          VARCHAR2(128)
 OBJECT_OWNER                                                         VARCHAR2(30)
 OBJECT_NAME                                                          VARCHAR2(30)
 OBJECT_ALIAS                                                         VARCHAR2(65)
 OBJECT_INSTANCE                                                      NUMBER(38)
 OBJECT_TYPE                                                          VARCHAR2(30)
 OPTIMIZER                                                            VARCHAR2(255)
 SEARCH_COLUMNS                                                       NUMBER
 ...
Yes, the table exists – or a view or something that looks like a table, anyway. Let’s check further.
DWDBT1> @obj_lst
Enter value for obj_name: plan_table
old   8: where object_name like upper(nvl('&obj_name','WHOOPS')||'%')
new   8: where object_name like upper(nvl('plan_table','WHOOPS')||'%')
Enter value for sub_name:
old   9: and nvl(subobject_name,'WHOOPS') like upper(nvl('&sub_name','WHOOPS')||'%')
new   9: and nvl(subobject_name,'WHOOPS') like upper(nvl('','WHOOPS')||'%')
Enter value for obj_type: %
old  10: and object_type like upper(nvl('&obj_type','TABLE')||'%')
new  10: and object_type like upper(nvl('%','TABLE')||'%')
Any Key>

OWNER      OBJECT_NAME          SUBOBJECT_NA OBJ_TYPE     CRE_DATE        LAST_DDL
---------- -------------------- ------------ ------------ --------------- ------------------------
PUBLIC     PLAN_TABLE                        SYNONYM      10-JUL-07       10-JUL-07 12:19:14
SYS        PLAN_TABLE$                       TABLE        10-JUL-07       10-JUL-07 12:19:14
If you are not aware, in V10 PLAN_TABLE was replaced with a global temporary table, PLAN_TABLE$, and a public synonym PLAN_TABLE referencing it, which is what you see above. If I quickly pull out a few details of the table, you can see that it is temporary and that it has no tablespace allocated.
  1  select table_name,status,temporary TEMP,tablespace_name
  2  from dba_tables
  3* where owner=user and table_name ='PLAN_TABLE$'
DWDBT1> /

TABLE_NAME                     STATUS   TEMP TABLESPACE_NAME
------------------------------ -------- ---- ------------------------------
PLAN_TABLE$                    VALID    Y
A temporary table’s segments go into the user’s temporary tablespace (and we are getting close to the cause of the error now, honest). Here is a subset of the user details:
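The original script behind the output below isn’t shown; something along these lines against DBA_USERS would produce it (the column aliases are my assumption, chosen to match the headings in the output):

-- Assumed reconstruction: pull each user's default and temporary tablespace.
select username, user_id,
       default_tablespace   DFLT_TABSPACE,
       temporary_tablespace TEMP_TABSPACE,
       created
from   dba_users
order  by username;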
USERNAME        USER_ID DFLT_TABSPACE
------------ ---------- ---------------------------------------------
TEMP_TABSPACE                                 CREATED
--------------------------------------------- -----------------
XXADMIN             413 WORKING_128K
USER_TEMP                                     07-DEC-2009 17:34

XXRED               134 DW_COMMON_MG
BATCH_TEMP                                    07-DEC-2009 17:29

DWABCD              414 USERS
USER_TEMP                                     07-DEC-2009 17:3
DWABCD is the user I log in as and it has the temp tablespace set to USER_TEMP, from the original error message. But if I now check for what temporary files are on the system then I see the below:
DWDBT1> select * from dba_temp_files
  2  /
Any Key...>

FILE_NAME
----------------------------------------------------------------------------------------------------
   FILE_ID TABLESPACE_NAME                     BYTES     BLOCKS STATUS    RELATIVE_FNO AUT
---------- ------------------------------ ---------- ---------- --------- ------------ ---
  MAXBYTES  MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS
---------- ---------- ------------ ---------- -----------
/u01/oradata/dwDBT1/temp01.tdbf
         1 TEMP                           5368709120     655360 AVAILABLE            1 YES
3.4360E+10    4194302         1280 5367660544      655232

/u01/oradata/dwDBT1/batch_temp01.tdbf
         2 BATCH_TEMP                      104857600      12800 AVAILABLE            1 YES
3.4360E+10    4194302         1280  103809024       12672

2 rows selected.
Only two, and neither is called USER_TEMP.
So the error occurs when Explain Plan tries to create a temporary segment in the non-existent temporary tablespace. The fix is simply to set the user’s TEMPORARY tablespace to one that exists {or, I guess, you could create a new temporary tablespace with the expected name}:
DWDBT1> alter user dwabcd temporary tablespace temp;

User altered.

DWDBT1> select count(*) from person;
Any Key...>

  COUNT(*)
----------
    322798
1 row selected.

Elapsed: 00:00:03.96

Execution Plan
----------------------------------------------------------
Plan hash value: 1154882994
---------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     1 |  2417   (1)| 00:01:06 |
|   1 |  SORT AGGREGATE    |        |     1 |            |          |
|   2 |   TABLE ACCESS FULL| PERSON |   328K|  2417   (1)| 00:01:06 |
---------------------------------------------------------------------
What is the cause of the problem? Well, Oracle will not let you set a user’s temporary tablespace to a non-existent tablespace, as you can see from my attempt to set my user back to the invalid value:
DWDBT1> alter user dwabcd temporary tablespace USER_TEMP
  2  /
alter user dwabcd temporary tablespace USER_TEMP
*
ERROR at line 1:
ORA-00959: tablespace 'USER_TEMP' does not exist
But import will happily bring in users from another system where that temporary tablespace exists and users have it set as their default temporary tablespace. This is why I run into the problem every year or so: I find myself helping create development databases from a live database, and this mismatch in temporary tablespaces regularly occurs.
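After cloning or importing, a quick sanity check along these lines would catch the mismatch before anyone hits the ORA-00959 (a sketch, not a script from this system; it simply cross-checks DBA_USERS against the temporary tablespaces that actually exist):

-- Sketch: list users whose temporary tablespace is not a real
-- temporary tablespace on this database (e.g. left over from an import).
select username, temporary_tablespace
from   dba_users
where  temporary_tablespace not in
       (select tablespace_name
        from   dba_tablespaces
        where  contents = 'TEMPORARY');

Any user this returns can then be fixed with the ALTER USER shown earlier.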