
What is AUDSID June 17, 2010

Posted by mwidlake in internals.
12 comments

If you look at v$session you see SID and SERIAL#, which most of us DBA-types know uniquely identify a session and allow you to kill it (with the relevant permissions and the knowledge that you are not about to compromise a business process).

But what is AUDSID?

desc v$session
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 SADDR                                              RAW(8)
 SID                                                NUMBER
 SERIAL#                                            NUMBER
 AUDSID                                             NUMBER
 PADDR                                              RAW(8)
 USER#                                              NUMBER
 USERNAME                                           VARCHAR2(30)
 COMMAND                                            NUMBER
 OWNERID                                            NUMBER
 TADDR                                              VARCHAR2(16)
 LOCKWAIT                                           VARCHAR2(16)
 STATUS                                             VARCHAR2(8)
 SERVER                                             VARCHAR2(9)
 SCHEMA#                                            NUMBER
 SCHEMANAME                                         VARCHAR2(30)
 OSUSER                                             VARCHAR2(30)
...

AUDSID is a unique identifier for the session and is used in sys.aud$ as the SESSIONID column. It is the leading column of the only index on sys.aud$:

IND_OWNER   IND_NAME           TAB_NAME           PSN       COL_NAME
----------- ------------------ ------------------ --------- ------------
SYS         I_AUD1             AUD$               1         SESSIONID
                                                  2         SES$TID

All audit records for a session are recorded with the same SESSIONID/AUDSID. Activity can be audited at session level or statement level. If audited at session level, only a single record is written to the log for each audited object; that record is then updated to reflect any new audited actions on that object for the duration of the session, so it is important that Oracle can identify and update that record efficiently, as the impact of SQL AUDIT on database performance needs to be kept to a minimum.
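
As a quick, hedged illustration {assuming a table HR.EMPLOYEES and AUDIT_TRAIL set to DB; adjust to your own objects}, the two levels are requested like this:

-- session level: one aud$ record per session for this object,
-- updated as further audited actions occur in the session
audit select on hr.employees by session;

-- statement (access) level: one aud$ record per audited execution
audit select on hr.employees by access;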

How does AUDSID relate to SID, SERIAL#, USERNAME etc? Let’s have a quick look.

select sid, serial# ,audsid ,username,OSUSER,schemaname
from v$session
       SID    SERIAL#     AUDSID USERNAME             OSUSER                    SCHEMANAME
---------- ---------- ---------- -------------------- ------------------------- --------------
       485       4745     165550 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
       487      23712     165546 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
       497      40388          0                      oracle                    SYS
       498      19269          0                      oracle                    SYS
       502      13362     165432 EAAAAA               govanii                   EAAAAA
       505        407     163821 LEXAAAAA             oracle                    LEXAAAAA
       506       6302 4294967295 SYS                  widlake                   SYS
       511      11702     165518 OAAAAA               backerella                OAAAAA
       512      17076     165490 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
       518       9066     165507 OAAAAA               Manoled                   OAAAAA
       519       6956     163976 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
       521      15272 4294967295 SYS                  widlake                   SYS
       523       4825     163975 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
       536      63941 4294967295 SYS                  backerella                SYS
       524      19740     165548 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
       525        850     165549 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
       526      45112     165516 XXX                  backerella                XXX
       527      11086     163963 LEX_AAAAAEX          oracle                    LEX_AAAAAEX
       529       1662     163818 LEX_AAAAA            oracle                    LEX_AAAAA
       530      62788 4294967295 SYS                  widlake                   SYS
       537          1          0                      oracle                    SYS
       540          1          0                      oracle                    SYS

It does not. But it is interesting to note that, although this database does not have auditing enabled, the AUDSID is still populated.

It is also interesting that all SYS accounts have either an AUDSID of 0 or the same AUDSID, 4294967295 {which is 1 less than 4294967296, ie 1024*1024*1024*4, or 2 to the power 32}. I might come back to that shared AUDSID at a later date…

All the “normal” AUDSIDs are unique and of about the same magnitude, 163900 to 165560.

If you look at sys.aud$ you can see that SESSIONID values increase as time goes on. I do not mean the table is stored in SESSIONID order, it is not, but as time goes by the SESSIONIDs allocated get higher. That sounds a lot like a sequence to me. And in fact that is because the value IS from a sequence:

select * from dba_sequences where sequence_name = 'AUDSES$'

SEQUENCE_OWNER  SEQUENCE_NAME    MIN_VALUE  MAX_VALUE INCREMENT_BY cycle O CACHE_SIZE LAST_NUMBER
--------------- --------------- ---------- ---------- ------------ ----- - ---------- -----------
SYS             AUDSES$                  1 2000000000            1 Y     N      10000      173695

So the AUDSID is a unique value that comes from a sequence and is used in the audit trail. All SYS accounts have the same AUDSID or have it set to zero.
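
You can see your own session’s AUDSID without going to v$session at all; a quick sketch:

-- the audit session identifier of the current session
select sys_context('userenv','sessionid') audsid from dual;

Run as a normal user this matches your row in v$session; connected as SYS {on the systems I have tried, at least} it comes back as that shared 4294967295.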

Why do many sessions have the AUDSID set to zero? Because they are internal Oracle processes:

select sid, serial# ,audsid ,username,OSUSER,schemaname,program
from v$session
where AUDSID = 0
  SID    SERIAL#     AUDSID USERNAME     OSUSER       SCHEMANAME   PROGRAM
----- ---------- ---------- ------------ ------------ ------------ ----------------------------
  506       6887          0              oracle       SYS          oracle@db25.eng.ham.uk. (q001)
  526      45682          0              oracle       SYS          oracle@db25.eng.ham.uk. (q002)
  531         91          0 YYYYY        oracle       DLPP1        oracle@db25.eng.ham.uk. (J000)
  533          3          0              oracle       SYS          oracle@db25.eng.ham.uk. (CJQ0)
  537          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (q000)
  540          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (QMNC)
  546          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (MMNL)
  547          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (MMON)
  548          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (RECO)
  549          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (SMON)
  550          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (CKPT)
  551          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (LGWR)
  552          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (DBW0)
  553          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (MMAN)
  554          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (PSP0)
  555          1          0              oracle       SYS          oracle@db25.eng.ham.uk. (PMON)

The activity of PMON, SMON, CKPT is not audited, of course. And job scheduler processes that are doing nothing do not have their activity audited either.

Oh, you remember I said AUDSID was unique? Well, check back at the sequence. It wraps and has a maximum value of 2,000,000,000. It is going to start repeating values after 2 billion sessions. Make a note to ensure you have cleaned up your audit trail before you hit your two-billion-and-first session…
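
For completeness, if auditing is enabled then AUDSID is the join column between the audit trail and the connected session. A sketch {using the standard DBA_AUDIT_TRAIL view over sys.aud$}:

-- what have my currently connected sessions been up to?
select s.sid, s.serial#, a.username, a.action_name, a.obj_name, a.timestamp
from   v$session s, dba_audit_trail a
where  a.sessionid = s.audsid
and    s.audsid not in (0, 4294967295)  -- skip background and SYS sessions
order  by a.timestamp;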

I can’t Explain Why June 8, 2010

Posted by mwidlake in internals, performance.
3 comments

Have you ever tried to use Explain Plan and it gives you an error like the below (this is on 10.2)?

DWDBT1> set autotrace on

  1  select count(*)
  2  from person pers
  3  ,person_name pena
  4  where pena.pers_id=pers.pers_id
  5* and pena.surname='SMITH'
DWDBT1> /

  COUNT(*)
----------
     23586
1 row selected.

Execution Plan
----------------------------------------------------------
ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-00959: tablespace 'USER_TEMP' does not exist

SP2-0612: Error generating AUTOTRACE EXPLAIN report

I seem to run into this once a year or so and, as it has happened three times now, this time around my brain had at long last hung onto the details and I remembered the cause. As a clue, I run into this issue when I am using a new development database that was derived from a live one…

As you can see from the above error message, tablespace USER_TEMP does not exist. How can you be referencing a tablespace that does not exist? First thing, does the tablespace really not exist and thus the error is real?

DWDBT1> @ts_lst
Enter value for ts_name: u
old 12: where tablespace_name like upper('&ts_name'||'%')
new 12: where tablespace_name like upper('u'||'%')
Any Key>

TS_NAME                                              INI_EXT_K NEXT_EXT_K MIN_EX
--------------------------------------------------- ---------- ---------- ------
    MAX_EX PCT       MIN_EXTLN ST
---------- --------- --------- ---------
UNDOTBS                                                     64                 1
2147483645                  64 ON
USERS                                                       40         40      1
2147483645 0                40 ON

2 rows selected.

As you can see, there is no tablespace USER_TEMP. So it must be something to do with PLAN_TABLE, the table that underlies EXPLAIN PLAN. So let’s check out that the table exists.

DWDBT1> desc plan_table
 Name                                                        Null?    Type
 ----------------------------------------------------------- -------- ---------------------
 STATEMENT_ID                                                         VARCHAR2(30)
 PLAN_ID                                                              NUMBER
 TIMESTAMP                                                            DATE
 REMARKS                                                              VARCHAR2(4000)
 OPERATION                                                            VARCHAR2(30)
 OPTIONS                                                              VARCHAR2(255)
 OBJECT_NODE                                                          VARCHAR2(128)
 OBJECT_OWNER                                                         VARCHAR2(30)
 OBJECT_NAME                                                          VARCHAR2(30)
 OBJECT_ALIAS                                                         VARCHAR2(65)
 OBJECT_INSTANCE                                                      NUMBER(38)
 OBJECT_TYPE                                                          VARCHAR2(30)
 OPTIMIZER                                                            VARCHAR2(255)
 SEARCH_COLUMNS                                                       NUMBER
...

Yes, the table exists. Or a view or something that looks like a table anyway. Let’s check further.

DWDBT1> @obj_lst
Enter value for obj_name: plan_table
old   8: where object_name like upper(nvl('&obj_name','WHOOPS')||'%')
new   8: where object_name like upper(nvl('plan_table','WHOOPS')||'%')
Enter value for sub_name: 
old   9: and   nvl(subobject_name,'WHOOPS') like upper(nvl('&sub_name','WHOOPS')||'%')
new   9: and   nvl(subobject_name,'WHOOPS') like upper(nvl('','WHOOPS')||'%')
Enter value for obj_type: %
old  10: and object_type like upper(nvl('&obj_type','TABLE')||'%')
new  10: and object_type like upper(nvl('%','TABLE')||'%')
Any Key>

OWNER      OBJECT_NAME          SUBOBJECT_NA OBJ_TYPE     CRE_DATE        LAST_DDL
---------- -------------------- ------------ ------------ --------------- ------------------------
PUBLIC     PLAN_TABLE                        SYNONYM      10-JUL-07       10-JUL-07 12:19:14
SYS        PLAN_TABLE$                       TABLE        10-JUL-07       10-JUL-07 12:19:14

If you are not aware, in V10 PLAN_TABLE was replaced with a global temporary table PLAN_TABLE$ and a public synonym of PLAN_TABLE referencing it, which is what you see above. If I quickly pull out a few details of the table, you can see that it is temporary and that it has no tablespace allocated to it.

 1 select table_name,status,temporary TEMP,tablespace_name
2 from dba_tables
3* where owner=user and table_name ='PLAN_TABLE$'
DWDBT1> /
TABLE_NAME                     STATUS   TEMP TABLESPACE_NAME
------------------------------ -------- ---- ------------------------------
PLAN_TABLE$                    VALID    Y

The temporary table segment goes into the user’s temporary tablespace (and we are getting close to the cause of the error now, honest). Here is a subset of user details:

USERNAME        USER_ID DFLT_TABSPACE
------------ ---------- ---------------------------------------------
TEMP_TABSPACE                                 CREATED
--------------------------------------------- -----------------
XXADMIN             413 WORKING_128K
USER_TEMP                                     07-DEC-2009 17:34
XXRED               134 DW_COMMON_MG
BATCH_TEMP                                    07-DEC-2009 17:29
DWABCD              414 USERS
USER_TEMP                                     07-DEC-2009 17:3

DWABCD is the user I log in as and it has the temp tablespace set to USER_TEMP, from the original error message. But if I now check for what temporary files are on the system then I see the below:

DWDBT1> select * from dba_temp_files
2 /
Any Key...>

FILE_NAME
----------------------------------------------------------------------------------------------------
FILE_ID    TABLESPACE_NAME                BYTES      BLOCKS     STATUS     RELATIVE_FNO AUT
---------- ------------------------------ ---------- ---------- --------- ------------ ---
MAXBYTES   MAXBLOCKS  INCREMENT_BY USER_BYTES USER_BLOCKS
---------- ---------- ------------ ---------- -----------
/u01/oradata/dwDBT1/temp01.tdbf
1          TEMP                            5368709120 655360    AVAILABLE 1             YES
3.4360E+10  4194302   1280          5367660544  655232
/u01/oradata/dwDBT1/batch_temp01.tdbf
2          BATCH_TEMP                      104857600  12800     AVAILABLE 1             YES
3.4360E+10  4194302   1280          103809024  12672

2 rows selected.

Only two and neither are called USER_TEMP.

So, the error is occurring when the call to Explain Plan is trying to generate a temporary segment in the non-existent temp tablespace. The fix is to simply set the TEMPORARY tablespace for the user to one that exists {or I guess you could create a new temporary tablespace of the correct name}:

DWDBT1> alter user dwabcd temporary tablespace temp;

User altered.

DWDBT1> select count(*) from person;
Any Key...>

COUNT(*)
----------
322798

1 row selected.

Elapsed: 00:00:03.96

Execution Plan
----------------------------------------------------------
Plan hash value: 1154882994

---------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     1 |  2417   (1)| 00:01:06 |
|   1 |  SORT AGGREGATE    |        |     1 |            |          |
|   2 |   TABLE ACCESS FULL| PERSON |   328K|  2417   (1)| 00:01:06 |
---------------------------------------------------------------------

What is the cause of the problem? Well, Oracle will not let you set the temporary tablespace of a user to a non-existent temporary tablespace, as you can see by my attempt to set my user back to the invalid value:

DWDBT1> alter user dwabcd temporary tablespace USER_TEMP
2 /
alter user dwabcd temporary tablespace USER_TEMP
*
ERROR at line 1:
ORA-00959: tablespace 'USER_TEMP' does not exist

But it will import users from another system where the temporary tablespace exists and where users have it set as their default temporary tablespace. This is why I run into the problem every year or so: I find myself helping out with the creation of development databases from a live database, and this mismatch in temporary tablespaces seems to occur regularly.
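
A query along these lines {a sketch, but it simply applies the idea above} will flag up imported users pointing at a temporary tablespace that does not exist:

-- users whose temporary tablespace is not an existing TEMPORARY tablespace
select u.username, u.temporary_tablespace
from   dba_users u
where  u.temporary_tablespace not in
       (select tablespace_name
        from   dba_tablespaces
        where  contents = 'TEMPORARY');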

Dynamic Sampling Hint Ignored May 20, 2010

Posted by mwidlake in internals, performance.
27 comments

This is a bit of an odd one. On V10.2.0.3 (I have not tried other versions yet) the dynamic sampling hint appears to be ignored if the stats for the table say there are zero rows in it, even if there are in fact lots of rows.

I’ve not had a chance to dig into Metalink etc yet {I’m putting together this post before I yet again get overwhelmed and fail to put up something I think is of interest} but if this turns out to be a general bug impacting other versions, it could catch people out. This is because one tactic for handling queries where the data volume shifts all the time is to not gather stats and use the dynamic sampling hint. “Not gathering stats” could also be “gather stats when the table is empty and ignore it – the dynamic sampling hint will take care of everything”.

Here is my worked example. I use two tables, PERSON and PERSON_NAME, related on a column PERS_ID. {If you want the exact table definitions etc, mail me}.

--preserve my data
create table pena_backup as select * from person_name;

Table created.

--ensure I have stats
exec dbms_stats.gather_table_stats(ownname=>user,tabname =>'PERSON')
PL/SQL procedure successfully completed.

exec dbms_stats.gather_table_stats(ownname=>user,tabname =>'PERSON_NAME')
PL/SQL procedure successfully completed.

--Show I have data
select table_name,num_rows,last_analyzed from dba_tables
where owner=USER  AND TABLE_NAME IN ('PERSON','PERSON_NAME');

TABLE_NAME                       NUM_ROWS LAST_ANALYZED
------------------------------ ---------- -----------------
PERSON                              71913 20-MAY-2010 11:06
PERSON_NAME                        107567 20-MAY-2010 11:06

--Explain a simple select
explain plan for
select /* */ pers.surname, pers.first_forename
from    person pers, person_name pena
where pena.pers_id =pers.pers_id
and    pena.surname = 'BROWN'
/ 
Explained.

select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
Plan hash value: 2653984949

----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |  1427 | 45664 |   637   (5)| 00:00:03 |
|*  1 |  HASH JOIN         |             |  1427 | 45664 |   637   (5)| 00:00:03 |
|*  2 |   TABLE ACCESS FULL| PERSON_NAME |  1427 | 17124 |   320   (5)| 00:00:02 |
|   3 |   TABLE ACCESS FULL| PERSON      | 71913 |  1404K|   314   (4)| 00:00:02 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("PENA"."PERS_ID"="PERS"."PERS_ID")
   2 - filter("PENA"."SURNAME"='BROWN')

16 rows selected.

--Now explain with dynamic sampling
explain plan for
select /*+ dynamic_sampling (pena 6) */ pers.surname, pers.first_forename
from    person pers, person_name pena
where pena.pers_id =pers.pers_id
and    pena.surname = 'BROWN'
/
Explained.

select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
Plan hash value: 2653984949

----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |  1427 | 45664 |   637   (5)| 00:00:03 |
|*  1 |  HASH JOIN         |             |  1427 | 45664 |   637   (5)| 00:00:03 |
|*  2 |   TABLE ACCESS FULL| PERSON_NAME |  1427 | 17124 |   320   (5)| 00:00:02 |
|   3 |   TABLE ACCESS FULL| PERSON      | 71913 |  1404K|   314   (4)| 00:00:02 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("PENA"."PERS_ID"="PERS"."PERS_ID")
   2 - filter("PENA"."SURNAME"='BROWN')

Note
-----
   - dynamic sampling used for this statement

20 rows selected.

-- Note the appearance of the NOTE about dynamic sampling
-- If I did this again, I would add extra data to the table to get different stats
-- from the dynamic sampling


--truncate my person_name table and re-gather stats

truncate table person_name;
Table truncated.

exec dbms_stats.gather_table_stats(ownname=>user,tabname =>'PERSON_NAME')
PL/SQL procedure successfully completed.

TABLE_NAME                       NUM_ROWS LAST_ANALYZED
------------------------------ ---------- -----------------
PERSON                              71913 20-MAY-2010 11:06
PERSON_NAME                             0 20-MAY-2010 11:06


--now try and dynamic sample a known empty table
explain plan for
select /*+ dynamic_sampling (pena 6) */ pers.surname, pers.first_forename
from    person pers, person_name pena
where pena.pers_id =pers.pers_id
and    pena.surname = 'BROWN'; 
Explained.

select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
Plan hash value: 1295262714

---------------------------------------------------------------------------------------------
| Id  | Operation                     | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |     1 |    50 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID  | PERSON      |     1 |    20 |     2   (0)| 00:00:01 |
|   2 |   NESTED LOOPS                |             |     1 |    50 |     3   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| PERSON_NAME |     1 |    30 |     1   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | PENA_SN     |     1 |       |     1   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN           | PERS_PEID   |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("PENA"."SURNAME"='BROWN')
   5 - access("PENA"."PERS_ID"="PERS"."PERS_ID")

18 rows selected.

-- I initially thought Oracle might be detecting that the 
-- table was empty - never had data inserted or had been truncated
-- and thus knew there was no point sampling...But...
 
-- put the data back
insert into person_name select * from pena_backup;

107567 rows created.

commit;
Commit complete.

--and explain again

explain plan for
select /*+ dynamic_sampling (pena 6) */ pers.surname, pers.first_forename
from    person pers, person_name pena
where pena.pers_id =pers.pers_id
and    pena.surname = 'BROWN';
Explained.

select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
Plan hash value: 1295262714

---------------------------------------------------------------------------------------------
| Id  | Operation                     | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |     1 |    50 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID  | PERSON      |     1 |    20 |     2   (0)| 00:00:01 |
|   2 |   NESTED LOOPS                |             |     1 |    50 |     3   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| PERSON_NAME |     1 |    30 |     1   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | PENA_SN     |     1 |       |     1   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN           | PERS_PEID   |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("PENA"."SURNAME"='BROWN')
   5 - access("PENA"."PERS_ID"="PERS"."PERS_ID")

18 rows selected.

-- Note, nothing about dynamic sampling, the rows and bytes expected are the same as
-- the query with no dynamic sampling hint

-- now gather stats again so oracle knows there is some data

exec dbms_stats.gather_table_stats(ownname=>user,tabname =>'PERSON_NAME') 
PL/SQL procedure successfully completed.

select table_name,num_rows,last_analyzed from dba_tables
where owner=USER  AND TABLE_NAME IN ('PERSON','PERSON_NAME');

TABLE_NAME                       NUM_ROWS LAST_ANALYZED
------------------------------ ---------- -----------------
PERSON                              71913 20-MAY-2010 11:06
PERSON_NAME                        107567 20-MAY-2010 11:09

--and now try dynamic sampling again

explain plan for
select /*+ dynamic_sampling (pena 6) */ pers.surname, pers.first_forename
from    person pers, person_name pena
where pena.pers_id =pers.pers_id
and    pena.surname = 'BROWN';
Explained.

select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
Plan hash value: 2653984949

----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |  1427 | 45664 |   637   (5)| 00:00:03 |
|*  1 |  HASH JOIN         |             |  1427 | 45664 |   637   (5)| 00:00:03 |
|*  2 |   TABLE ACCESS FULL| PERSON_NAME |  1427 | 17124 |   320   (5)| 00:00:02 |
|   3 |   TABLE ACCESS FULL| PERSON      | 71913 |  1404K|   314   (4)| 00:00:02 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("PENA"."PERS_ID"="PERS"."PERS_ID")
   2 - filter("PENA"."SURNAME"='BROWN')

Note
-----
   - dynamic sampling used for this statement

20 rows selected.

What prompted me to find this is that I had to provide a DYNAMIC SAMPLING hint to a development team and I wanted to make sure it was working correctly first (it was a MERGE statement and I was not sure where best to put it). As you get no warnings about syntax errors with hints, it is best to prove they are correct before handing them over :-)
I was doing this in a dev system and, try as I might, I could not get the hint to work. Because, as I now know, the dev system had no data in the driving table and stats had been gathered on it. I wasted 30 minutes of a critical fix-on-fail due to this oddity.

I have not tried to see if DYNAMIC SAMPLING stated at the instance level is also ignored if there are zero-row stats against the table.
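
If you get caught by this, the simplest workaround I can suggest {a sketch, only tested in the scenario above} is to get rid of the misleading zero-row stats so the hint fires again:

-- remove the zero-row stats from the truncated table...
exec dbms_stats.delete_table_stats(ownname=>user,tabname=>'PERSON_NAME')

-- ...and optionally lock them so a scheduled gather does not put them back
exec dbms_stats.lock_table_stats(ownname=>user,tabname=>'PERSON_NAME')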

INTERNAL_FUNCTION() Impact April 21, 2010

Posted by mwidlake in internals, performance.
13 comments

You may occasionally see something like the following in the filter predicates of an Explain Plan:

INTERNAL_FUNCTION("COL_1")>:P0

What is INTERNAL_FUNCTION, what is the use of INTERNAL_FUNCTION() in Oracle explain plans and why am I putting things like ORACLE INTERNAL_FUNCTION partition exclusion and meaning of internal_function performance almost as random strings in this blog?

Well, I want to get the hit rate up for this post as, when I did a quick search on INTERNAL_FUNCTION, the current top hit is this unfortunately misleading posting that tells you “It is not a significant overhead in SQL execution performance”. This internal function IS potentially a significant overhead in SQL execution performance. You can’t add comments to that site {just book a course or order a book}, so I’m having to create my own page…

INTERNAL_FUNCTION is, as that posting says, where Oracle does an implicit data conversion. In the case shown there, as well as in my example in this blog and in most cases I have seen, the conversion is from TIMESTAMP to DATE.

Why is it an issue? Because it is a function on a column and, as such, it can have a disastrous impact on SQL execution performance.

The code below (which has most select columns removed for simplicity and to protect the innocent) was performing very poorly, taking over 30 minutes to complete. This is all on 10.2.0.3 Enterprise Edition of Oracle.

SELECT trans0_.ID as ID3_1_
...
, round(transfers0_.AMOUNT,2) as formula10_0_
FROM VW_PMTT trans0_
WHERE  (trans0_.MODIFIED_DT between :p0 AND :p1
        AND trans0_.PERS_ID = :p2)
and transfers0_.ACCOUNT_ID=:p3
ORDER BY transfers0_.MODIFIED_DT

This is the plan and the significant filter/access predicates:

Plan hash value: 3261981189
------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                 | Rows  |Bytes| Cost   |Pstart|Pstop|
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                      |       |     | 67841  |      |     |
|   1 |  SORT ORDER BY                    |                      |     8 | 920 | 67841  |      |     |
|   2 |   VIEW                            | VW_PMTP              |     8 | 920 | 67840  |      |     |
|   3 |    UNION-ALL                      |                      |       |     |        |      |     |
|*  4 |     FILTER                        |                      |       |     |        |      |     |
|   5 |      NESTED LOOPS                 |                      |     4 | 280 | 28490  |      |     |
|*  6 |       TABLE ACCESS BY INDEX ROWID | W_TBLP               |     1 |  14 |     3  |      |     |
|*  7 |        INDEX UNIQUE SCAN          | PK_W_TBLP            |     1 |     |     2  |      |     |
|   8 |       PARTITION RANGE ALL         |                      |     4 | 224 | 28487  |    1 | 1580|
|*  9 |        TABLE ACCESS BY LOC IND RID| W_PAYMENT            |     4 | 224 | 28487  |    1 | 1580|
|* 10 |         INDEX RANGE SCAN          | IDX_W_P_TRANS        | 66936 |     |  3345  |    1 | 1580|
...

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - filter("PLY"."ACCOUNTID"=:P3)
   7 - access("PLY"."PERS_ID"=:P2)
   9 - filter(("PT"."TYPE"=1184769 AND INTERNAL_FUNCTION("PT"."MODIFIED")>=:P0 AND INTERNAL_FUNCTION("PT"."MODIFIED")<:P1))

As you can see, the view is translated into a simple two-table join {and unioned with a second two-table join, which I removed again for clarity, but if anyone wants the full plan and statement, email me} where:

  • one table is accessed via a unique index (using bind variable P2)
  • rows from that table are filtered on P3
  • that table is then joined to a partitioned table in a nested loop
  • the partitioned table is accessed via a local index range scan.

At first glance, the plan may look fine, but look at the Pstart and Pstop columns: 1 to 1580. That is every partition in the table.
The predicate information shows that INTERNAL_FUNCTION("PT"."MODIFIED") is being compared to P0 and P1.

The partition key on the table is:-

>@chk_patr_key
Name of Table : W_PAYMENT

TAB_NAME                       OBJECT_TYP PSN COLUMN_NAME
------------------------------ ---------- --- -----------
USER.W_PAYMENT                 TABLE       1 MODIFIED

ie the column that is being filtered on. So why is there no partition pruning?

Partition pruning is not occurring because of the INTERNAL_FUNCTION being applied to that column. The CBO is not able to understand how the result of a function will map to the partition ranges of the column. Not even its own, internally used function :-P

For the above, the “date” bind variables were defined as

p0 timestamp := to_timestamp('01-FEB-2010 00:00','DD-MON-YYYY HH24:MI');
p1 timestamp := to_timestamp('03-FEB-2010 00:00','DD-MON-YYYY HH24:MI');

I then altered the code such that the bind variables were defined as dates.

p0 date := to_date('01-FEB-2010','DD-MON-YYYY');
p1 date := to_date('03-FEB-2010','DD-MON-YYYY');

The plan and significant predicates were now:

Plan hash value: 346162108
------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                 | Rows  |Bytes| Cost   |Pstart|Pstop|
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                      |       |     | 67841  |      |     |
|   1 |  SORT ORDER BY                    |                      |     4 | 460 | 67841  |      |     |
|   2 |   VIEW                            | VW_PMTP              |     4 | 460 | 67840  |      |     |
|   3 |    UNION-ALL                      |                      |       |     |        |      |     |
|*  4 |     FILTER                        |                      |       |     |        |      |     |
|   5 |      NESTED LOOPS                 |                      |     2 | 140 | 28490  |      |     |
|*  6 |       TABLE ACCESS BY INDEX ROWID | W_TBLP               |     1 |  14 |     3  |      |     |
|*  7 |        INDEX UNIQUE SCAN          | PK_W_TBLP            |     1 |     |     2  |      |     |
|   8 |       PARTITION RANGE ALL         |                      |     2 | 112 | 28487  |  KEY | KEY |
|*  9 |        TABLE ACCESS BY LOC IND RID| W_PAYMENT            |     2 | 112 | 28487  |  KEY | KEY |
|* 10 |         INDEX RANGE SCAN          | IDX_W_P_TRANS        | 66936 |     |  3345  |  KEY | KEY |
...

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - filter("PLY"."ACCOUNTID"=:P3)
   7 - access("PLY"."PERS_ID"=:P2)
   9 - filter(("PT"."TYPE"=1184769 AND "PT"."MODIFIED">=:P0 AND "PT"."MODIFIED"<:P1))

The expected rows and bytes drop, but the key difference in the plan is that Pstart and Pstop are now KEY KEY, ie the CBO will evaluate at run time the values coming back from the table access of W_TBLP and prune the partitions accessed accordingly.

In this situation, the original query took over 30 minutes to return. The new code took 18 seconds.

I tried one final test. I set the bind variables to be varchar2 so that implicit data conversion would occur:
p0 varchar2(20) := '01-FEB-2010 00:00';
p1 varchar2(20) := '03-FEB-2010 00:00';

With VARCHAR2 the CBO was able to do the implicit conversion without the INTERNAL_FUNCTION and the plan/filter predicates were exactly as for native dates.

It would seem this issue with INTERNAL_FUNCTION blights timestamp to date implicit data conversion but not some other implicit data conversions. Without testing further, I can’t be sure which.
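
If you cannot change the bind datatypes in the calling code, a possible alternative {a sketch only, I have not tested it against this exact system} is to move the conversion onto the binds, leaving the partition key column bare:

-- hypothetical rewrite: explicitly convert the TIMESTAMP binds to DATEs
-- so the MODIFIED column is compared natively and pruning can occur
where pt.modified >= cast(:p0 as date)
and   pt.modified <  cast(:p1 as date)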

This is a documented feature by Oracle. Unusually (and much to their credit) Oracle even document the potential negative impact of this particular construct with timestamps and dates. See this section in chapter 5 of the Oracle 10g data warehousing guide.

I better balance the fact I linked to an inaccurate posting on the subject with a few good ones.
This is a nice one from McLaud on the OraFAQ site where he has come across it as a problem and with help shows how it is due to implicit conversion of a timestamp from Hibernate.
This blog on the issue arising with Java is a nice example of the fix (use a date to hold a date, not a timestamp).

Finally, this link is to an older but very, very valid rant by Tom Kyte about implicit data conversion and using the correct data type for your data. {Don’t go using numbers or varchars to hold dates, just don’t, and similarly you should avoid implicit data conversion}.

DBMS SIG March 10, 2010

Posted by mwidlake in internals, Meeting notes, performance.
5 comments

I went to the DBMS SIG today {DBMS Special Interest Group meeting of the UK Oracle User Group}. Don’t worry, I am not going to run through the set of presentations and make comments on them – although I like reading such entries by others on their blogs, I generally like them due to the style of writing as opposed to getting information out of them. But there is the odd tidbit that can be worth disseminating to as wide an audience as possible and I like to do my bit.

Having said I won’t go through all the talks… :-). No, I just want to comment that all the talks had merit at this meeting and, quite rightly, the meeting was full. This is nice to see, as last year SIG attendance fell across the board due to the “economic climate”. Very daft, in my opinion, as I see SIGs as free training plus networking opportunities (sorry to use the “networking” word) plus different viewpoints, all of which are invaluable at any time and especially valuable when things are hard.

Go to SIGs (or whatever is available in your country) as it is always worth the day invested. Where else can you see half a dozen experts give opinions and also corner them and ask them more stuff as well?

Anyway, the tidbits. First, Jonathan Lewis demonstrated how he goes about solving issues with complex SQL statements. It was terribly fiendish and extremely clever and needs incredible in-depth knowledge of the Oracle RDBMS… He draws a pseudo Entity Relationship Diagram of the tables involved, adds in some figures on what filtering will achieve and what ratio of records will be involved in table joins, and asks the question “what will probably happen” a couple of dozen times. Yes, I lied, it does not need vast knowledge. It needs a clear, simple approach to solving the problem. And an ERD. I love ERDs and rue their general demise. I use exactly the same method myself to investigate complex SQL performance issues {I think I could be accused of trying to ride on shirt-tails here but, honestly, the step I take if an Explain Plan does not help me is to ERD the statement and look at the indexes and ratios between tables to see how I, as a human, would solve the query. Chatting to a few other old lags, it is a relatively universal approach by those of us who have used ERDs}. If you are a member of the UKOUG I strongly recommend downloading the slides to Jonathan’s talk. If you are not a member, maybe Jonathan will present it again at another venue, or you could get him to come along and do a course on tuning. {Jonathan, if you get a gig as a result of this, I want a pint of Marston’s Pedigree, OK?}

{And thanks to Sean Malloy for commenting to provide a link to a published version of Jonathan’s method – Jonathan did mention this, highlighting the fact that it is his first real foray into SQL*Server. However, the method is database agnostic. This is the article}

Second tidbit. Adrian Dodman and Owen Ireland (who both look remarkably like Hollywood heartthrobs in their pictures, if rather different in the flesh, though very decent chaps they are too) did an excellent talk on VLDB physical standbys, a topic that has particular resonance for myself. They mentioned parallel_execution_message_size. This defaults to 2k, on 10g at least. It is a rubbish setting. No, let me not beat about the bush, it is an utterly rubbish setting. If you use parallel query, parallel recovery or parallel anything-at-all, check out this parameter and, due diligence allowing, increase it. Try 8k as opposed to 2k, and even 16k. The manual says the default of 2k/4k is fine. It ain’t. Increasing the value just takes some memory out of the shared pool and, these days, if you can’t afford a few extra KB out of your shared pool, you need to replace your server with something costing about twice as much as a top-end desktop PC. {Why am I so vigorous in my opinion on this parameter? Well, I had a situation a few months back of trying to migrate a database to a new geographic location for a client in Germany. We did a backup/recovery type operation to do this. Applying the redo logs was proving to be a performance issue, so Oracle Corp suggested parallel redo log application. It ran a LOT slower than single thread, about 300% slower. However, increasing the parallel_execution_message_size from 2k to 8k made the parallel log application about 400% faster than single thread, ie a dozen times faster than the parallel run at 2k. I know from presentations by Christian Antognini and discussions with others that it is a key parameter to getting parallel query to perform well too.}
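
For reference, checking and then changing the parameter is simple {a sketch; parallel_execution_message_size is a static parameter on 10g, so the change needs an spfile and an instance restart to take effect}:

-- what is it now? (the value is in bytes)
show parameter parallel_execution_message_size

-- try 16k, picked up at the next instance restart
alter system set parallel_execution_message_size=16384 scope=spfile;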

Last tidbit. Don’t drop the OUTLN user. Yes, I know, why would you? Just don’t, OK? Especially on Oracle 11. If you do, for whatever reason, DO NOT SHUT DOWN THE DATABASE. Call Oracle Support and pray. Thanks go to Peter Mahaffey for that one. Yes he did. It all went terribly wrong for him.

Update to Decoding High and Low values February 24, 2010

Posted by mwidlake in internals, performance.
10 comments

After what seems an age, I finally got a little time to enhance my col_stats code with the extra info people provided as comments to this original posting on it. I also now handle the translation of dates correctly, and timestamps too.

I had a real struggle with timestamps. I knew I could find the answer in Tom Kyte’s “Expert Oracle Database Architecture”, as one of the comments said so, but I tried to work it out myself as my copy of the book was at my Mother’s {no, she is not learning Oracle Databases, I left it there by accident}. As the other elements of the timestamp are held as numeric elements stored as hex (century, year-in-century, month and so on), I tried to interpret the fractional seconds like that. I was being too complex: it is just an 8-digit hex number – to_number(value,'XXXXXXXX').
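
You can check the fractional-seconds decode easily enough, as the last 8 hex digits are simply the nanoseconds as one hex number. Taking a value from the test output further down:

-- 075BCD15 hex is 123456789 decimal, ie .123456789 seconds
select to_number('075BCD15','XXXXXXXX') frac_nanos from dual;

FRAC_NANOS
----------
 123456789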

Anyway, this is the new, improved, cleans-whiter-than-white script {I’ve increased the width of my blog to better show code like the below, but try clicking on “show source” if you want a plain text version}:

-- col_stats
-- Martin Widlake mdw 21/03/2003
-- MDW 11/12/09 enhanced to include more translations of low_value/high_value
-- pilfered from Gary Myers blog
-- MDW 20/02/10 added in the handling of timestamps.
col owner        form a6 word wrap
col table_name   form a15 word wrap
col column_name  form a22 word wrap
col data_type    form a12
col M            form a1
col num_vals     form 99999,999
col dnsty        form 0.9999
col num_nulls    form 99999,999
col low_v        form a30
col low_v2       form a18
col hi_v         form a30
col data_type    form a10
col low_value    form a25
col high_value   form a25
set lines 110
break on owner nodup on table_name nodup
spool col_stats.lst
select --owner
--      ,table_name
      column_name
      ,data_type
      ,decode (nullable,'N','Y','N')  M
      ,num_distinct num_vals
      ,num_nulls
      ,density dnsty
,decode(substr(data_type,1,9) -- as there are several timestamp types
  ,'NUMBER'       ,to_char(utl_raw.cast_to_number(low_value))
  ,'VARCHAR2'     ,to_char(utl_raw.cast_to_varchar2(low_value))
  ,'NVARCHAR2'    ,to_char(utl_raw.cast_to_nvarchar2(low_value))
  ,'BINARY_DO',to_char(utl_raw.cast_to_binary_double(low_value))
  ,'BINARY_FL' ,to_char(utl_raw.cast_to_binary_float(low_value))
  ,'DATE',rtrim(
               to_char(100*(to_number(substr(low_value,1,2),'XX')-100)
                      + (to_number(substr(low_value,3,2),'XX')-100),'fm0000')||'-'||
               to_char(to_number(substr(low_value,5,2),'XX'),'fm00')||'-'||
               to_char(to_number(substr(low_value,7,2),'XX'),'fm00')||' '||
               to_char(to_number(substr(low_value,9,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(low_value,11,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(low_value,13,2),'XX')-1,'fm00'))
  ,'TIMESTAMP',rtrim(
               to_char(100*(to_number(substr(low_value,1,2),'XX')-100)
                      + (to_number(substr(low_value,3,2),'XX')-100),'fm0000')||'-'||
               to_char(to_number(substr(low_value,5,2),'XX'),'fm00')||'-'||
               to_char(to_number(substr(low_value,7,2),'XX'),'fm00')||' '||
               to_char(to_number(substr(low_value,9,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(low_value,11,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(low_value,13,2),'XX')-1,'fm00')
              ||'.'||to_number(substr(low_value,15,8),'XXXXXXXX')  )
  ,  low_value
       ) low_v
,decode(substr(data_type,1,9) -- as there are several timestamp types
  ,'NUMBER'       ,to_char(utl_raw.cast_to_number(high_value))
  ,'VARCHAR2'     ,to_char(utl_raw.cast_to_varchar2(high_value))
  ,'NVARCHAR2'    ,to_char(utl_raw.cast_to_nvarchar2(high_value))
  ,'BINARY_DO',to_char(utl_raw.cast_to_binary_double(high_value))
  ,'BINARY_FL' ,to_char(utl_raw.cast_to_binary_float(high_value))
  ,'DATE',rtrim(
               to_char(100*(to_number(substr(high_value,1,2),'XX')-100)
                      + (to_number(substr(high_value,3,2),'XX')-100),'fm0000')||'-'||
               to_char(to_number(substr(high_value,5,2),'XX'),'fm00')||'-'||
               to_char(to_number(substr(high_value,7,2),'XX'),'fm00')||' '||
               to_char(to_number(substr(high_value,9,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(high_value,11,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(high_value,13,2),'XX')-1,'fm00'))
  ,'TIMESTAMP',rtrim(
               to_char(100*(to_number(substr(high_value,1,2),'XX')-100)
                      + (to_number(substr(high_value,3,2),'XX')-100),'fm0000')||'-'||
               to_char(to_number(substr(high_value,5,2),'XX'),'fm00')||'-'||
               to_char(to_number(substr(high_value,7,2),'XX'),'fm00')||' '||
               to_char(to_number(substr(high_value,9,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(high_value,11,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(high_value,13,2),'XX')-1,'fm00')
              ||'.'||to_char(to_number(substr(high_value,15,8),'XXXXXXXX')))
  ,  high_value
       ) hi_v
,low_value,high_value
from dba_tab_columns
where owner      like upper('&tab_own')
and   table_name like upper(nvl('&tab_name','WHOOPS')||'%')
ORDER BY owner,table_name,COLUMN_ID
/
clear colu
spool off
clear breaks

Yes, I know, it is a considerable chunk of code just to get the high and low values for columns, as identified by the last stats gather to be run on the table. {It is important to remember that these high and low values will only be as accurate as the stats gather that was run, and when. An “estimate_percent=>100” gather run a minute ago will probably give accurate values; an “estimate_percent=>0.1” gather run last month will not – which is usually what I am checking for}. I could make the script a lot neater by writing a set of functions to call to get the translation, as was suggested in a comment on the original posting and is in fact something I have done in the past, but one is often not allowed to put new stored PL/SQL code onto production systems, so a script which does it is, for me, best.
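
On that point, a quick sanity check of how recent and how thorough the last gather was can save you from trusting stale values {a sketch reusing the same substitution variables as the main script}:

select table_name, column_name, last_analyzed, sample_size, num_distinct
from   dba_tab_columns
where  owner      like upper('&tab_own')
and    table_name like upper(nvl('&tab_name','WHOOPS')||'%')
order  by table_name, column_id;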

I should show some proof of it working, I guess. The below will create a table with a nice spread of column type and three records with a reasonable spread of data:

-- test_col_hilo
-- M Widlake 11/12/09
--
--testing script to show high and low values for various column types
--
set lines 90 pages 1000 trims on pause off
--set sqlpro 'test102>'
set sqlpro '>'
set echo on
set autotrace off
spool test_col_hilo.lst
-- basic setup info
col name form a30
col value form a20
select substr(name,1,30) name,substr(value,1,20) value
from v$parameter
where name in ('db_block_size','compatible','cpu_count','db_file_multiblock_read_count'
              ,'optimizer_mode','sga_target','sort_area_size')
order by name
/
drop table test_hilo purge;
--
--
COL TABLE_NAME FORM A15
COL TABLE_OWNER FORM A10
create table test_hilo
(id          number(8) not null
,numsmall    number(4)
,numlarge    number
,datesmall   date
,datelarge   date
,ts          timestamp
,ts0        timestamp(0)
,ts3	     timestamp(3)
,ts6	     timestamp(6)
,ts9	     timestamp(9)
,vcsmall     varchar2(10)
,vclarge     varchar2(100)
,vcodd       varchar2(20)
,nvcsmall    nvarchar2(10)
,nvclarge    nvarchar2(100)
,bfsmall     binary_float -- NB precision is only 6 or 7 digits
,bflarge     binary_float
,bdsmall     binary_double
,bdlarge     binary_double
)
tablespace users
/
-- insert low values
 insert into test_hilo
(id,numsmall,numlarge
,datesmall,datelarge
,ts,ts0,ts3
,ts6,ts9
,vcsmall,vclarge
,vcodd
,nvcsmall,nvclarge
,bfsmall,bflarge
,bdsmall,bdlarge
)
values
(1
,-10,-123456789.12345
,trunc(sysdate-1000),sysdate-500000
,systimestamp,systimestamp
,to_timestamp('01-FEB-2010 12:34:56.123','DD-MON-YYYY HH24:MI:SS.FF')
,to_timestamp('01-FEB-2010 12:34:56.1234567','DD-MON-YYYY HH24:MI:SS.FF')
,to_timestamp('01-FEB-2010 12:34:56.987654321','DD-MON-YYYY HH24:MI:SS.FF')
,'ABCDE','AABCABCDEABCDEFGABCDEFGHI'
,'Abc!"£$%^&*()deF'
,'ABCDE','AABCABCDEABCDEFGABCDEFGHI'
,-1.23,-12345.6
,12345,1234567890
)
/
-- insert mid values
 insert into test_hilo
(id,numsmall,numlarge
,datesmall,datelarge
,ts,ts0,ts3
,ts6,ts9
,vcsmall,vclarge
,vcodd
,nvcsmall,nvclarge
,bfsmall,bflarge
,bdsmall,bdlarge
)
values
(2
,15,1515151515151
,trunc(sysdate-10),sysdate-5000
,systimestamp,systimestamp
,to_timestamp('05-FEB-2010 12:34:56.123','DD-MON-YYYY HH24:MI:SS.FF')
,to_timestamp('05-FEB-2010 12:34:56.1234567','DD-MON-YYYY HH24:MI:SS.FF')
,to_timestamp('05-FEB-2010 12:34:56.987654321','DD-MON-YYYY HH24:MI:SS.FF')
,'CCCCC','CCBCABCDEABCDEFGABCDEFGHI'
,'CbaaBC'
,'EFGHI','ABCDEFGHIJKLMNOPQRSTUV'
,1.23,12345.6
,54321,5432112345
)
/
--insert high values
 insert into test_hilo
(id,numsmall,numlarge
,datesmall,datelarge
,ts,ts0,ts3
,ts6,ts9
,vcsmall,vclarge
,vcodd
,nvcsmall,nvclarge
,bfsmall,bflarge
,bdsmall,bdlarge
)
values
(3
,99,9898989898989
,trunc(sysdate-1),sysdate+1000
,systimestamp+10,systimestamp+10
,to_timestamp('20-FEB-2010 18:17:16.876','DD-MON-YYYY HH24:MI:SS.FF')
,to_timestamp('21-FEB-2010 17:16:15.555555','DD-MON-YYYY HH24:MI:SS.FF')
,to_timestamp('22-FEB-2010 16:15:14.123456789','DD-MON-YYYY HH24:MI:SS.FF')
,'ZYXWV','ZZZZZYZZXZZWZZVZZUZZTZZSZZRZZQZZP'
,'Z;#[]{}~@:'
,'VWXYZ','ZYXWVUTSRQPONMLKJIHGFEDCBA'
,9.87,98765.4
,987654321,987654321.1234567
)
/
commit;
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=> USER, TABNAME=>'TEST_HILO')
set autotrace off
set echo off
--
spool off
clear colu
--

And the below is the translation of the values – I used a cut-down version of my script, so it would fit on the blog better:

COLUMN_NAME  DATA_TYPE     LOW_V                         HI_V
------------ ------------- ----------------------------- -----------------------------
LOW_VALUE                 HIGH_VALUE
------------------------- -------------------------
ID           NUMBER        1                             3
C102                      C104
NUMSMALL     NUMBER        -10                           99
3E5B66                    C164
NUMLARGE     NUMBER        -123456789.12345              9898989898989
3A644E38220C59433366      C70A5A5A5A5A5A5A
DATESMALL    DATE          2007-05-31 00:00:00           2010-02-23 00:00:00
786B051F010101            786E0217010101
DATELARGE    DATE          0641-03-10 17:36:47           2012-11-20 17:36:47
6A8D030A122530            78700B14122530
TS           TIMESTAMP(6)  2010-02-24 17:36:47.255015000 2010-03-06 17:36:47.
786E02181225300F333858    786E0306122530
TS0          TIMESTAMP(0)  2010-02-24 17:36:47.          2010-03-06 17:36:47.
786E0218122530            786E0306122530
TS3          TIMESTAMP(3)  2010-02-01 12:34:56.123000000 2010-02-20 18:17:16.876000000
786E02010D23390754D4C0    786E02141312113436B300
TS6          TIMESTAMP(6)  2010-02-01 12:34:56.123457000 2010-02-21 17:16:15.555555000
786E02010D2339075BCDE8    786E0215121110211D18B8
TS9          TIMESTAMP(9)  2010-02-01 12:34:56.987654321 2010-02-22 16:15:14.123456789
786E02010D23393ADE68B1    786E021611100F075BCD15
VCSMALL      VARCHAR2      ABCDE                         ZYXWV
4142434445                5A59585756
VCLARGE      VARCHAR2      AABCABCDEABCDEFGABCDEFGHI     ZZZZZYZZXZZWZZVZZUZZTZZSZZRZZ
                                                         QZZ
4141424341424344454142434 5A5A5A5A5A595A5A585A5A575
4454647414243444546474849 A5A565A5A555A5A545A5A535A
                          5A525A5A515A5A
VCODD        VARCHAR2      Abc!"£$%^&*()deF              Z;#[]{}~@:
4162632122C2A324255E262A2 5A3B235B5D7B7D7E403A
829646546
NVCSMALL     NVARCHAR2     ABCDE                         VWXYZ
00410042004300440045      0056005700580059005A
NVCLARGE     NVARCHAR2     AABCABCDEABCDEFG              ZYXWVUTSRQPONMLK
0041004100420043004100420 005A005900580057005600550
0430044004500410042004300 0540053005200510050004F00
44004500460047            4E004D004C004B
BFSMALL      BINARY_FLOAT  3.53999972E+000               -9.86999989E+000
40628F5B                  C11DEB85
BFLARGE      BINARY_FLOAT  3.64494306E-004               -9.87653984E+004
39BF1999                  C7C0E6B3
BDSMALL      BINARY_DOUBLE -1.2345E+004                  -9.87654321E+008
C0C81C8000000000          C1CD6F3458800000
BDLARGE      BINARY_DOUBLE -9.8765432112345672E+008      -5.432112345E+009
C1CD6F34588FCD6E          C1F43C774D900000

I could go on and look at handling intervals and time zones but I leave this to you as an exercise. In other words, I have no real need for those data types right now and can’t justify the time! I hope the above is enough to answer whatever questions you may have about the high and low values of your columns…

Format of Date Histograms February 6, 2010

Posted by mwidlake in internals.
9 comments

Oracle annoyingly handles dates internally in several formats. Dates are stored in tables as seven bytes, each byte representing century, year-in-century, month, day, hour, minute and second, shifted by different amounts. For the high/low values seen in DBA_TAB_COLUMNS or DBA_TAB_COL_STATISTICS, it is stored as a RAW value, where a two-digit hex string represents the century, year-in-century,month etc as before – see this post on decoding high and low column values and check out the comments for corrections.

So what about histograms? You might want to know what is in the histogram for a date column in order to better understand the decisions made by the CBO. Below, I pull out the histogram for an example date column {I’ve trimmed the output a little to save space}:

select table_name
,column_name
,endpoint_value end_val
,endpoint_number rowcount
from all_tab_histograms
where table_name ='TEST_TABLE_X'
and owner ='TEST_1'
and column_name = 'PLACED_DT'
order by endpoint_number

TABLE_NAME      COLUMN_NAME   END_VAL       ROWCOUNT
--------------- ------------- ------------- ---------------
TEST_TABLE_X     PLACED_DT   2,452,258         0
TEST_TABLE_X     PLACED_DT   2,454,334         1
TEST_TABLE_X     PLACED_DT   2,454,647         2
TEST_TABLE_X     PLACED_DT   2,454,737         3
TEST_TABLE_X     PLACED_DT   2,454,820         4
TEST_TABLE_X     PLACED_DT   2,454,867         5
TEST_TABLE_X     PLACED_DT   2,454,929         6
TEST_TABLE_X     PLACED_DT   2,454,981         7
TEST_TABLE_X     PLACED_DT   2,455,006         8
TEST_TABLE_X     PLACED_DT   2,455,024         9
TEST_TABLE_X     PLACED_DT   2,455,039         10
TEST_TABLE_X     PLACED_DT   2,455,050         11
...
TEST_TABLE_X     PLACED_DT   2,455,205         42
TEST_TABLE_X     PLACED_DT   2,455,207         43
TEST_TABLE_X     PLACED_DT   2,455,209         44
TEST_TABLE_X     PLACED_DT   2,455,211         45
TEST_TABLE_X     PLACED_DT   2,455,213         46
TEST_TABLE_X     PLACED_DT   2,455,215         47
TEST_TABLE_X     PLACED_DT   2,455,216         48
TEST_TABLE_X     PLACED_DT   2,455,218         49
TEST_TABLE_X     PLACED_DT   2,455,220         50
TEST_TABLE_X     PLACED_DT   2,455,221         51
TEST_TABLE_X     PLACED_DT   2,455,222         52
TEST_TABLE_X     PLACED_DT   2,455,222         53
TEST_TABLE_X     PLACED_DT   2,455,222         54
TEST_TABLE_X     PLACED_DT   2,455,222         55
TEST_TABLE_X     PLACED_DT   2,455,223         56
TEST_TABLE_X     PLACED_DT   2,455,223         57
TEST_TABLE_X     PLACED_DT   2,455,223         58
TEST_TABLE_X     PLACED_DT   2,455,223         59
...
TEST_TABLE_X     PLACED_DT   2,455,223         69
TEST_TABLE_X     PLACED_DT   2,455,223         70
TEST_TABLE_X     PLACED_DT   2,455,223         71
TEST_TABLE_X     PLACED_DT   2,455,223         72
TEST_TABLE_X     PLACED_DT   2,455,223         73
TEST_TABLE_X     PLACED_DT   2,455,224         74
TEST_TABLE_X     PLACED_DT   2,455,226         75

Well, it looks like a seven-digit number, so it must be representing the date in a similar way to that described above, yes? No. The format is obviously something different {Oh come ON Oracle, some internal standards would be nice once in a while! :-) }

I pulled out the minimum and maximum values from the DBA_TAB_COLUMNS table and translated them:-

COLUMN_NAME  NUM_DIST   LOW_V               HI_V 
------------ ---------- ------------------- ------------------- 
PLACED_DT    375,428    2001-12-13 17:31:38 2010-01-28 23:51:38 

So the above low and high values should pretty much match the first and last values in the histogram, which are 2452258 and 2455226. Any guesses? Those two histogram values are very close to each other, only 2968 apart to cover around 9 years of values. 9 times 365…

The histogram is representing the dates in Julian format, i.e. the number of days since 1st January 4712 BC. As a quick proof of this:

select to_date('2452258','J'),to_date('2455226','J')
from dual;

TO_DATE('2452258' TO_DATE('2455226'
----------------- -----------------
14-DEC-2001 00:00 29-JAN-2010 00:00

Well, what do you know. Very close to the 13th Dec 2001 and 28th Jan 2010.

This of course makes sense: storing the date as an ascending numeric makes it simple to calculate the width of the range and how many values per day there are.
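For instance, the width of the range in days is a simple subtraction. A quick sketch using the first and last endpoint values from above:

select to_date('2455226','J') - to_date('2452258','J') days_in_range
from dual;

DAYS_IN_RANGE
-------------
         2968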

Imagine how complex it would be to do this if the date was stored in the bizarre way we humans deal with it – an ascending numeric for the year, a number cycling through 12 values for the month and a varying cyclic number for the day of the month. And that is ignoring other calendars in common use.

However, I’m looking at this method of representing the date and something bothers me. There is no allowance for the time portion. Maybe column histograms can’t cope with time? I feel a couple of tests coming on…

ADDITIONAL.
If you have seen the comments, you will know that Jonathan commented to ask if I might have truncated/been selective with the data in creating my test table as he is sure there is a fractional section representing the time portion.

Well, I did say I needed to check further, so maybe I would have spotted my error on my own… :-)
This is the script I use to pull out histogram data (and yes, I trimmed the output from the script before copying it into this post, so it does not quite match the script):

-- chk_hist.sql
-- Martin Widlake 7/4/4
-- pull out the histograms on a table
set pause on pages 32
spool chk_hist.lst
col owner form A8
col table_name form A15
col column_name form a20
col rowcount form 99,999,999,999
col end_val form 99,999,999,999
select owner
,table_name
,column_name
,endpoint_value  end_val
,endpoint_number rowcount
from all_tab_histograms
where table_name like upper(nvl('&tab_name','WHOOPS')||'%')
and  owner like upper('&tab_own')||'%'
and  column_name like upper('&col_name')||'%'
order by table_name,column_name,endpoint_number
/
spool off
clear colu

Hmm, I wonder what happens if I change the “col end_val form 99,999,999,999” so that it would show fractions…

col end_val form 999,999,999.99999999

TABLE_NAME      COLUMN_NAM               END_VAL     ROWCOUNT
--------------- ---------- --------------------- ------------
TEST_TABLE_X    PLACED_DT     2,452,257.73030093            0
TEST_TABLE_X    PLACED_DT     2,454,333.76546296            1
TEST_TABLE_X    PLACED_DT     2,454,647.32561343            2
TEST_TABLE_X    PLACED_DT     2,454,737.25017361            3
TEST_TABLE_X    PLACED_DT     2,454,820.02204861            4
TEST_TABLE_X    PLACED_DT     2,454,866.98009259            5
TEST_TABLE_X    PLACED_DT     2,454,928.66848380            6
TEST_TABLE_X    PLACED_DT     2,454,980.94815972            7
TEST_TABLE_X    PLACED_DT     2,455,005.68413194            8
TEST_TABLE_X    PLACED_DT     2,455,023.67142361            9
TEST_TABLE_X    PLACED_DT     2,455,039.03236111           10
TEST_TABLE_X    PLACED_DT     2,455,050.39246528           11

Ahhhhh……

Thanks Jonathan :-)
It is still a Julian date, but with the time held as a fraction of a day, as he said.
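So to decode an endpoint fully, treat the whole number as the Julian day and add the fraction back on as a fraction of a day. A quick sketch using the first endpoint above (the displayed format depends on your NLS_DATE_FORMAT):

select to_date('2452257','J') + 0.73030093 decoded
from dual;

DECODED
-------------------
2001-12-13 17:31:38

Which matches the LOW_V for the column.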

More on COMMAND_TYPE values. January 8, 2010

Posted by mwidlake in internals.
Tags: ,
3 comments

This is just a quick update on finding the values for COMMAND_TYPE as held in V$SQL and V$SQLAREA. The original post is here. Quick summary – the table SYS.AUDIT_ACTIONS holds an almost-complete list of COMMAND_TYPES.

In one of those little coincidences of fate, I have been looking at the use of SQL AUDIT for the last few days. I am not very knowledgeable about AUDIT or Oracle security in general, so I do what I normally do when I don’t know much about an area: I look on the web, scan the manual and then go poking around in the data dictionary. I came across two potentially interesting tables, SYS.AUDIT_ACTIONS and SYS.STMT_AUDIT_OPTION_MAP. At first glance they look to hold similar information – things that can be audited. This, it seems, is what SYS.STMT_AUDIT_OPTION_MAP holds:-

select * from SYS.STMT_AUDIT_OPTION_MAP
order by option#
   OPTION# NAME                                       PROPERTY
---------- ---------------------------------------- ----------
         3 ALTER SYSTEM                                      0
         4 SYSTEM AUDIT                                      0
         5 CREATE SESSION                                    0
         6 ALTER SESSION                                     0
         7 RESTRICTED SESSION                                0
         8 TABLE                                             0
         9 CLUSTER                                           0
        10 CREATE TABLESPACE                                 0
        11 ALTER TABLESPACE                                  0
        12 MANAGE TABLESPACE                                 0
        13 DROP TABLESPACE                                   0
        14 TABLESPACE                                        0
        15 UNLIMITED TABLESPACE                              0
-- let's look at dba_audit_trail, which underlies many of the audit views.
@vw_txt
Enter value for vw_name: dba_audit_trail
Any Key...>

OWNER      VIEW_NAME                      TEXT_LENGTH
---------- ------------------------------ -----------
TEXT
------------------------------------------------------------------
SYS        DBA_AUDIT_TRAIL                       3424
select spare1           /* OS_USERNAME */,
       userid           /* USERNAME */,
       userhost         /* USERHOST */,
       terminal         /* TERMINAL */,
       cast (           /* TIMESTAMP */
           (from_tz(ntimestamp#,'00:00') at local) as date),
       obj$creator      /* OWNER */,
       obj$name         /* OBJECT_NAME */,
       aud.action#      /* ACTION */,
       act.name         /* ACTION_NAME */,
....
from sys.aud$ aud, system_privilege_map spm, system_privilege_map spx,
     STMT_AUDIT_OPTION_MAP aom, audit_actions act
where   aud.action#     = act.action    (+)
  and - aud.logoff$dead = spm.privilege (+)
  and   aud.logoff$dead = aom.option#   (+)
  and - aud.priv$used   = spx.privilege (+)

What about AUDIT_ACTIONS? Well, I looked at the table and thought “this looks very similar to the list of values for COMMAND_TYPE – I should check that out”.

I don’t have to. The very next day Christian Antognini posted a quick comment on the posting about COMMAND_TYPE telling me where one could find a list of these commands in the DB – Yep, AUDIT_ACTIONS.

Christian’s posting warned me that a couple of commands are missing (our old favorite from the original post MERGE being one and CREATE_DISKGROUP being the other he cites) but it includes several more that are not listed in the manual. If anyone is aware of COMMAND_TYPES not included in AUDIT_ACTIONS then let me know and I can maintain a list here.
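In the meantime, to decode COMMAND_TYPE on the fly you can simply outer-join to the table. A minimal sketch (the outer join preserves any values AUDIT_ACTIONS is missing, such as MERGE):

select sa.command_type
      ,aa.name
      ,count(*) stmt_count
from   gv$sqlarea sa
      ,sys.audit_actions aa
where  sa.command_type = aa.action (+)
group by sa.command_type,aa.name
order by sa.command_type
/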

{Thanks to Chris for providing many more missing actions and two new to 11.2, included below}

New ones to those listed in the manual are:-
128 – FLASHBACK
129 – CREATE_SESSION
197 – PURGE USER_RECYCLEBIN
198 – PURGE DBA_RECYCLEBIN
199 – PURGE TABLESAPCE
200 – PURGE TABLE
201 – PURGE INDEX
202 – UNDROP OBJECT
204 – FLASHBACK DATABASE
205 – FLASHBACK TABLE
206 – CREATE RESTORE POINT
207 – DROP RESTORE POINT
208 – PROXY AUTHENTICATION ONLY
209 – DECLARE REWRITE EQUIVALENCE
210 – ALTER REWRITE EQUIVALENCE
211 – DROP REWRITE EQUIVALENCE
and new in the table for 11.2
225 – ALTER DATABASE LINK
305 – ALTER PUBLIC DATABASE LINK

Missing codes are:-
88 – ALTER VIEW
90 – SET CONSTRAINTS
187 – CREATE SPFILE
188 – CREATE PFILE
189 – MERGE
192 – ALTER SYNONYM
193 – ALTER DISKGROUP
194 – CREATE DISKGROUP
195 – DROP DISKGROUP
XXX – FLASHBACK

Here is the list from AUDIT_ACTIONS in 10.2.0.3 (an eyeball check against 11.1.0.1 indicates the list has not increased in that version at least, which surprised me):

select * from audit_actions order by action

    ACTION NAME
---------- ----------------------------
         0 UNKNOWN
         1 CREATE TABLE
         2 INSERT
         3 SELECT
         4 CREATE CLUSTER
         5 ALTER CLUSTER
         6 UPDATE
         7 DELETE
         8 DROP CLUSTER
         9 CREATE INDEX
        10 DROP INDEX
        11 ALTER INDEX
        12 DROP TABLE
        13 CREATE SEQUENCE
        14 ALTER SEQUENCE
        15 ALTER TABLE
        16 DROP SEQUENCE
        17 GRANT OBJECT
        18 REVOKE OBJECT
        19 CREATE SYNONYM
        20 DROP SYNONYM
        21 CREATE VIEW
        22 DROP VIEW
        23 VALIDATE INDEX
        24 CREATE PROCEDURE
        25 ALTER PROCEDURE
        26 LOCK
        27 NO-OP
        28 RENAME
        29 COMMENT
        30 AUDIT OBJECT
        31 NOAUDIT OBJECT
        32 CREATE DATABASE LINK
        33 DROP DATABASE LINK
        34 CREATE DATABASE
        35 ALTER DATABASE
        36 CREATE ROLLBACK SEG
        37 ALTER ROLLBACK SEG
        38 DROP ROLLBACK SEG
        39 CREATE TABLESPACE
        40 ALTER TABLESPACE
        41 DROP TABLESPACE
        42 ALTER SESSION
        43 ALTER USER
        44 COMMIT
        45 ROLLBACK
        46 SAVEPOINT
        47 PL/SQL EXECUTE
        48 SET TRANSACTION
        49 ALTER SYSTEM
        50 EXPLAIN
        51 CREATE USER
        52 CREATE ROLE
        53 DROP USER
        54 DROP ROLE
        55 SET ROLE
        56 CREATE SCHEMA
        57 CREATE CONTROL FILE
        59 CREATE TRIGGER
        60 ALTER TRIGGER
        61 DROP TRIGGER
        62 ANALYZE TABLE
        63 ANALYZE INDEX
        64 ANALYZE CLUSTER
        65 CREATE PROFILE
        66 DROP PROFILE
        67 ALTER PROFILE
        68 DROP PROCEDURE
        70 ALTER RESOURCE COST
        71 CREATE MATERIALIZED VIEW LOG
        72 ALTER MATERIALIZED VIEW LOG
        73 DROP MATERIALIZED VIEW LOG
        74 CREATE MATERIALIZED VIEW
        75 ALTER MATERIALIZED VIEW
        76 DROP MATERIALIZED VIEW
        77 CREATE TYPE
        78 DROP TYPE
        79 ALTER ROLE
        80 ALTER TYPE
        81 CREATE TYPE BODY
        82 ALTER TYPE BODY
        83 DROP TYPE BODY
        84 DROP LIBRARY
        85 TRUNCATE TABLE
        86 TRUNCATE CLUSTER
        91 CREATE FUNCTION
        92 ALTER FUNCTION
        93 DROP FUNCTION
        94 CREATE PACKAGE
        95 ALTER PACKAGE
        96 DROP PACKAGE
        97 CREATE PACKAGE BODY
        98 ALTER PACKAGE BODY
        99 DROP PACKAGE BODY
       100 LOGON
       101 LOGOFF
       102 LOGOFF BY CLEANUP
       103 SESSION REC
       104 SYSTEM AUDIT
       105 SYSTEM NOAUDIT
       106 AUDIT DEFAULT
       107 NOAUDIT DEFAULT
       108 SYSTEM GRANT
       109 SYSTEM REVOKE
       110 CREATE PUBLIC SYNONYM
       111 DROP PUBLIC SYNONYM
       112 CREATE PUBLIC DATABASE LINK
       113 DROP PUBLIC DATABASE LINK
       114 GRANT ROLE
       115 REVOKE ROLE
       116 EXECUTE PROCEDURE
       117 USER COMMENT
       118 ENABLE TRIGGER
       119 DISABLE TRIGGER
       120 ENABLE ALL TRIGGERS
       121 DISABLE ALL TRIGGERS
       122 NETWORK ERROR
       123 EXECUTE TYPE
       128 FLASHBACK
       129 CREATE SESSION
       157 CREATE DIRECTORY
       158 DROP DIRECTORY
       159 CREATE LIBRARY
       160 CREATE JAVA
       161 ALTER JAVA
       162 DROP JAVA
       163 CREATE OPERATOR
       164 CREATE INDEXTYPE
       165 DROP INDEXTYPE
       167 DROP OPERATOR
       168 ASSOCIATE STATISTICS
       169 DISASSOCIATE STATISTICS
       170 CALL METHOD
       171 CREATE SUMMARY
       172 ALTER SUMMARY
       173 DROP SUMMARY
       174 CREATE DIMENSION
       175 ALTER DIMENSION
       176 DROP DIMENSION
       177 CREATE CONTEXT
       178 DROP CONTEXT
       179 ALTER OUTLINE
       180 CREATE OUTLINE
       181 DROP OUTLINE
       182 UPDATE INDEXES
       183 ALTER OPERATOR
       197 PURGE USER_RECYCLEBIN
       198 PURGE DBA_RECYCLEBIN
       199 PURGE TABLESAPCE
       200 PURGE TABLE
       201 PURGE INDEX
       202 UNDROP OBJECT
       204 FLASHBACK DATABASE
       205 FLASHBACK TABLE
       206 CREATE RESTORE POINT
       207 DROP RESTORE POINT
       208 PROXY AUTHENTICATION ONLY
       209 DECLARE REWRITE EQUIVALENCE
       210 ALTER REWRITE EQUIVALENCE
       211 DROP REWRITE EQUIVALENCE

Decoding high_value and low_value January 3, 2010

Posted by mwidlake in internals, performance.
Tags: , ,
13 comments

There is an update to this post here.

The table DBA_TAB_COLUMNS holds the LOW_VALUE and HIGH_VALUE for columns. This information is potentially very useful to us, especially as, when the CBO looks at a WHERE clause on a column for a given value, the high and low values for that column are considered. If the value in the WHERE clause is outside the known range, the expected number of rows to be found is decreased in proportion to the distance outside the range.
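As a rough worked example of that proration (the numbers are hypothetical and the exact arithmetic varies with version – this is just the linear decay as I understand it):

-- column with LOW_VALUE=1, HIGH_VALUE=101, 1,000 rows, 100 distinct values
-- in range:     WHERE COL = 50  -> 1000/100 = 10 rows estimated
-- out of range: WHERE COL = 151 -> 50 beyond a range of 100, so the
--               estimate is scaled down: 10 * (1 - 50/100) = 5 rows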

What is not so helpful is that Oracle stores, and displays, the information in an internal raw format, which is utterly unhelpful to us of course. Why can’t they at least expose the information in a view in a human-readable format? Tsch.

So here is some code on how to decode low_value and high_value. I pulled most of this together a few years back but always struggled to decode the low and high values for dates, until I found this post by Gary Myers.

-- col_stats
-- Martin Widlake mdw 21/03/2003
-- MDW 11/12/09 enhanced to include more translations of low_value/high_value
-- pilfered from Gary Myers blog
col owner        form a6 word wrap
col table_name   form a15 word wrap
col column_name  form a22 word wrap
col M            form a1
col num_vals     form 99999,999
col dnsty        form 0.9999
col num_nulls    form 99999,999
col low_v        form a18
col hi_v         form a18
col data_type    form a10
set lines 110
break on owner nodup on table_name nodup
spool col_stats.lst
select --owner
--      ,table_name
      column_name
      ,data_type
      ,decode (nullable,'N','Y','N')  M
      ,num_distinct num_vals
      ,num_nulls
      ,density dnsty
,decode(data_type
  ,'NUMBER'       ,to_char(utl_raw.cast_to_number(low_value))
  ,'VARCHAR2'     ,to_char(utl_raw.cast_to_varchar2(low_value))
  ,'NVARCHAR2'    ,to_char(utl_raw.cast_to_nvarchar2(low_value))
  ,'BINARY_DOUBLE',to_char(utl_raw.cast_to_binary_double(low_value))
  ,'BINARY_FLOAT' ,to_char(utl_raw.cast_to_binary_float(low_value))
  -- century & year bytes are stored excess-100, hour/min/sec plus one
  ,'DATE',to_char((to_number(substr(low_value,1,2),'XX')-100)*100
         +to_number(substr(low_value,3,2),'XX')-100)||'-'
       ||to_number(substr(low_value,5,2),'XX')||'-'
       ||to_number(substr(low_value,7,2),'XX')||' '
       ||(to_number(substr(low_value,9,2),'XX')-1)||':'
       ||(to_number(substr(low_value,11,2),'XX')-1)||':'
       ||(to_number(substr(low_value,13,2),'XX')-1)
,  low_value
       ) low_v
,decode(data_type
  ,'NUMBER'       ,to_char(utl_raw.cast_to_number(high_value))
  ,'VARCHAR2'     ,to_char(utl_raw.cast_to_varchar2(high_value))
  ,'NVARCHAR2'    ,to_char(utl_raw.cast_to_nvarchar2(high_value))
  ,'BINARY_DOUBLE',to_char(utl_raw.cast_to_binary_double(high_value))
  ,'BINARY_FLOAT' ,to_char(utl_raw.cast_to_binary_float(high_value))
  ,'DATE',to_char((to_number(substr(high_value,1,2),'XX')-100)*100
         +to_number(substr(high_value,3,2),'XX')-100)||'-'
       ||to_number(substr(high_value,5,2),'XX')||'-'
       ||to_number(substr(high_value,7,2),'XX')||' '
       ||(to_number(substr(high_value,9,2),'XX')-1)||':'
       ||(to_number(substr(high_value,11,2),'XX')-1)||':'
       ||(to_number(substr(high_value,13,2),'XX')-1)
,  high_value
       ) hi_v
from dba_tab_columns
where owner      like upper('&tab_own')
and   table_name like upper(nvl('&tab_name','WHOOPS')||'%')
ORDER BY owner,table_name,COLUMN_ID
/
clear colu
spool off
clear breaks

Most of the translation is done via the UTL_RAW.CAST_TO_<datatype> functions, but no such function is provided for dates, which is where most of us come unstuck. Gary recognised that the data was being stored in hex format, with an offset for the year.
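As a hand-worked sketch of that decode (the raw value here is one I have constructed for illustration):

-- raw DATE value 786D0A1709140B, taken a byte (two hex digits) at a time:
-- 78 hex = 120, minus 100 = 20    (century)
-- 6D hex = 109, minus 100 = 9     (year in century)  => year 2009
-- 0A hex = 10                     (month)
-- 17 hex = 23                     (day)
-- 09 hex = 9,  minus 1 = 8        (hour)
-- 14 hex = 20, minus 1 = 19       (minute)
-- 0B hex = 11, minus 1 = 10       (second)           => 2009-10-23 08:19:10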

I hope the script helps some of you.
{There are also procedures under DBMS_STATS – DBMS_STATS.CONVERT_RAW_VALUE and its siblings – that can be called to translate many datatypes.}
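For example, a minimal sketch for a NUMBER column – the owner, table and column names are made up, swap in your own:

set serveroutput on
declare
  l_raw dba_tab_columns.high_value%type;
  l_num number;
begin
  select high_value into l_raw
  from   dba_tab_columns
  where  owner       = 'SCOTT'
  and    table_name  = 'EMP'
  and    column_name = 'SAL';
  dbms_stats.convert_raw_value(l_raw, l_num);
  dbms_output.put_line('high_value = '||l_num);
end;
/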

If anyone knows how to decode timestamps, I’d love to know, as it would save me trying to work it out/find it on the Net somewhere. I’ll add it to the post so that there is one place to find all the translations.

Here is a quick output for a test table:

@col_stats
Enter value for tab_own: dwperf
old  40: where owner      like upper('&tab_own')
new  40: where owner      like upper('dwperf')
Enter value for tab_name: ad_sql_exec_p
old  41: and   table_name like upper(nvl('&tab_name','WHOOPS')||'%')
new  41: and   table_name like upper(nvl('ad_sql_exec_p','WHOOPS')||'%')
any key> 

COLUMN_NAME            DATA_TYPE  M   NUM_VALS  NUM_NULLS   DNSTY LOW_V              HI_V
---------------------- ---------- - ---------- ---------- ------- ------------------ ---------------
INST_ID                NUMBER     Y          4          0  0.2500 1                  4
SQL_ID                 VARCHAR2   Y     87,104          0  0.0000 008d71chrks14      gzw309ttkx862
PLAN_HASH_VALUE        NUMBER     Y      1,884          0  0.0005 2646229            4294043874
SPC_ID                 NUMBER     N         83          0  0.0120 1076               2269
ADDRESS                RAW        N    118,198          0  0.0000 00000003E33123A0   0000001342FF3FF8
HASH_VALUE             NUMBER     N     87,104          0  0.0000 2758710            4294676643
CREATED_DATE           DATE       N        120          0  0.0083 2009-10-23 8:19:10 2009-12-10 9:19:13
LATEST_FIRST_LOAD_TIME DATE       N     11,791          0  0.0001 2009-9-14 11:55:59 2009-12-13 9:33:24
TOTAL_LOADS            NUMBER     N         98          0  0.0102 1                  55047
TOTAL_PARSE_CALLS      NUMBER     N         92          0  0.0109 0                  2972
TOTAL_EXECUTIONS       NUMBER     N        235          0  0.0043 0                  834624
TOTAL_ROWS_PROCESSED   NUMBER     N        809          0  0.0012 0                  26946123
TOTAL_FETCHES          NUMBER     N        313          0  0.0032 0                  834624
TOTAL_BUFFER_GETS      NUMBER     N      3,016          0  0.0003 0                  3355576809
TOTAL_DISK_READS       NUMBER     N        985          0  0.0010 0                  28189240
TOTAL_DIRECT_WRITES    NUMBER     N         98          0  0.0102 0                  751289
TOTAL_SORTS            NUMBER     N        106          0  0.0094 0                  5283
TOTAL_CPU_TIME         NUMBER     N     94,401          0  0.0000 1337               12183936207
TOTAL_ELAPSED_TIME     NUMBER     N    115,203          0  0.0000 1337               139692482086
TOTAL_OPTIMIZER_COST   NUMBER     N      1,467          0  0.0007 0                  369740902209315000
                                                                                     0000000

COMMAND_TYPE Values December 10, 2009

Posted by mwidlake in internals.
Tags: , ,
18 comments

Follow-up post on finding most of the COMMAND_TYPES in the data dictionary is here, and thanks to Christian Antognini, whose comment led me in that direction.

Spoiler – See end for getting a full list of COMMAND_TYPE values.

If you are in the habit of looking at V$SQLAREA and V$SQL to see what is going on in the database {for those of us who occasionally step out of the “GUI DBA Tool” environment}, you will see a lot of stuff in there. I tend to look for the most demanding SQL, so I am not interested in the PL/SQL blocks themselves, just the SQL executed via them.

To do this, you can ignore those entries in V$SQL/V$SQLAREA with a “COMMAND_TYPE=47” filter.

This COMMAND_TYPE is useful: 3=SELECT, the most common thing you probably look at, 6=UPDATE, 7=DELETE, 50=EXPLAIN.

If you google/bing/whatever COMMAND_TYPE you get many hits that list some of the commands, all looking very similar. But they are very partial lists.

So I decided to amalgamate these partial lists, verify which COMMAND_TYPE links to which command in the databases I have access to, and publish this fuller, verified list. I quickly found some commands do not stay in the SGA after issuing, so I could not confirm them {“drop index” being one}.

I got this far:

-- mdw 10/12/09
-- check for cmd types not seen before, as lists on web are not complete
-- * = I have verified
col sql_txt form a60
select command_type,sql_id,substr(sql_text,1,60) sql_txt
from gv$sqlarea
where command_type not in (
 1 --  create table  *
,2 --  INSERT        *
,3 --  SELECT        *
,6 --  UPDATE        *
,7 --  DELETE        *
,9 --  create index  *
,11 -- ALTER INDEX   *
,26 -- LOCK table    *
,42 -- ALTER_SESSION (NOT ddl)
--two postings suggest 42 is alter session
,44 -- COMMIT
,45 -- rollback
,46 -- savepoint
,47 -- PL/SQL BLOCK' or begin/declare *
,48 -- set transaction   *
,50 -- explain           *
,62 -- analyze table     *
,90 -- set constraints   *
,170 -- call             *
,189 -- merge            *
)
and rownum < 20
/

Please feel free to add to it.

Or save yourself the bother and check out the list provided in the Oracle documentation. Not under the descriptions for V$SQL or V$SQLAREA, which would have been the obvious place, Mr Larry Ellison, thank you, but under V$SESSION. Further, the fact that in the V$SESSION view the column is called just COMMAND and not COMMAND_TYPE does not assist in locating this information (don’t google COMMAND and ORACLE, you get many millions of hits…). Just click the link below.

This is the full table of values and meanings.

(This is the listing for 10.2; the one for 11.1 is very similar.)

But it does not include 189 – MERGE, so that is one up for practical testing over just reading the manual :-)

Back to the day job…
