
Friday Philosophy – The Answer To Everything January 27, 2012

Posted by mwidlake in Friday Philosophy.
3 comments

For those of us acquainted with the philosophical works of Douglas Adams, we know that the answer to everything is 42.

mdw1123> select all knowledge from everything
  2  /

 KNOWLEDGE
----------
        42

The above is a real SQL statement (on version 11.2.0.3, just in case you wanted to know :-) ).

This was prompted by a silly discussion at lunch time about the answer to everything and databases and I wondered aloud how you could go about getting Oracle to respond with 42 when you “selected all from everything”. My colleagues looked at me like I was an idiot and said “create a table called everything with a column called all and select it”. Yeah, of course, and I laughed. So much for being an expert at Oracle huh?

Well, I tried. It did not work:

mdw1123> create table EVERYTHING (ALL number not null)
  2  /
create table EVERYTHING (ALL number not null)
                         *
ERROR at line 1:
ORA-00904: : invalid identifier

Damn. It’s a reserved word. But for what? Off the top of my head I could not remember what ALL is used for in Oracle select syntax. Never mind, I could get around the issue to some degree by the use of quotes around the column name (and just for fun, I made the column name lowercase too – this is how you can create lowercase columns but you have to be careful with this, as you will see below):

mdw1123> create table everything ("all" number not null)
  2  /
mdw1123> desc everything
 Name                                                              Null?    Type
 ----------------------------------------------------------------- -------- --------

 all                                                               NOT NULL NUMBER

mdw1123> insert into everything values (42)
  2  /
mdw1123> select "all" from everything
  2  /

       all
----------
        42

-- but be careful of case
mdw1123> select "ALL" from everything
  2  /
select "ALL" from everything
       *
ERROR at line 1:
ORA-00904: "ALL": invalid identifier

I was not happy with this though: I was having to put quotes around the column name and be careful about the case of the name.

So, what is the word ALL used for? A quick check of the SQL reference manual:

It is the opposite of DISTINCT and the default, so we never have to put it in the statement.

With the above in mind I was quickly able to come up with something close to, but not quite, what I originally asked for. See below for how.

mdw1123> create table everything (KNOWLEDGE NUMBER NOT NULL)
  2  /

mdw1123> insert into everything values (42)
  2  /

mdw1123> select all knowledge from everything
  2  /

 KNOWLEDGE
----------
        42
mdw1123>

Having said it was not quite what I had originally set out to do, I actually prefer this version.

Of course, I cleaned up after myself. It feels odd typing in commands whose English meaning is not what you actually want to do – I could not get over the nagging feeling that the below was going to make a lot of data disappear :-) :

mdw1123> drop table everything purge;

Dropped Partitions do not go in the Recycle Bin January 24, 2012

Posted by mwidlake in SQL.
3 comments

If you alter table TEST drop partition Q1, does it go in the recycle bin?

That is the question I was asked today. “Of course it….” Pause. More pause.

No, I did not know. I knew I’d seen partitions in the recyclebin on Oracle but I thought there was no option to state PURGE when you dropped a partition.

So, a quick test was needed.

First I tried a drop of a partition I knew I could live without {NB this is on version 11.2.0.3, I am 99.8% sure this is the same on 10}

mdw1123> alter table person_call drop partition d_20111205 purge
  2  /
alter table person_call drop partition d_20111205 purge
                                                  *
ERROR at line 1:
ORA-14048: a partition maintenance operation may not be combined with other operations


mdw1123> alter table person_call drop partition d_20111205
  2  /

Table altered.

mdw1123> select count(*) from dba_recyclebin
  2  /
Any Key>

  COUNT(*)
----------
         0

1 row selected.

That’s pretty conclusive then: Oracle does not like the PURGE option when dropping a partition, which suggests the partition does not go into the recyclebin – and indeed my recyclebin remained empty when I dropped the partition without the option. {NB I had purged the recyclebin before this test}.

However, I wanted to make sure that it is indeed just the partition operation that does not go to the recyclebin (you can of course turn the recyclebin off altogether, and I had not proven that I had not done so – a quick check of that is below).
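
{A quick sanity check that the recyclebin really is enabled – this assumes 10gR2 or later, where RECYCLEBIN is an initialisation parameter:

show parameter recyclebin
-- or, without SHOW:
select value from v$parameter where name = 'recyclebin';

It should come back as "on".} So, on with the test: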


mdw1123> create table mdw (id number,vc1 varchar2(10))
  2  partition by range (id)
  3  (partition p1 values less than (10)
  4  ,partition p2 values less than (20)
  5  ,partition p3 values less than (30)
  6  ,partition pm values less than (maxvalue)
  7  )
  8
mdw1123> /
Table created.

mdw1123> insert into mdw
  2  select rownum,'AAAAAAAA'
  3  from dual
  4  connect by level <40
  5  /
39 rows created.

mdw1123> select * from dba_recyclebin
  2  /
no rows selected

mdw1123> alter table mdw drop partition p3
  2  /
Table altered.

mdw1123> select * from dba_recyclebin
  2  /
no rows selected

mdw1123> drop table mdw
  2  /
Table dropped.

mdw1123> select * from dba_recyclebin;
Any Key>
OWNER                          OBJECT_NAME                    ORIGINAL_NAME
------------------------------ ------------------------------ --------------------------------
OPERATION TYPE                      TS_NAME                        CREATETIME
--------- ------------------------- ------------------------------ -------------------
DROPTIME               DROPSCN PARTITION_NAME                   CAN CAN    RELATED BASE_OBJECT
------------------- ---------- -------------------------------- --- --- ---------- -----------
PURGE_OBJECT      SPACE
------------ ----------
MDW                            BIN$DI/3ZwiMRneOxLrJ4jidyg==$0 MDW
DROP      Table Partition           INDEX_01                       2012-01-24:16:13:55
2012-01-24:16:15:33    2787392                                  NO  NO       77672       77672
       77672       1024
MDW                            BIN$DI/3ZwiMRneOxLrJ4jidyg==$0 MDW
DROP      Table Partition           INDEX_01                       2012-01-24:16:13:55
2012-01-24:16:15:33    2787392                                  NO  NO       77672       77672
       77672       1024
MDW                            BIN$DI/3ZwiMRneOxLrJ4jidyg==$0 MDW
DROP      Table Partition           INDEX_01                       2012-01-24:16:13:55
2012-01-24:16:15:33    2787392                                  NO  NO       77672       77672
       77672       1024
MDW                            BIN$DI/3ZwiMRneOxLrJ4jidyg==$0 MDW
DROP      TABLE                                                    2012-01-24:16:13:55
2012-01-24:16:15:33    2787393                                  YES YES      77672       77672
       77672

4 rows selected.

So, if you go and drop the wrong partition, you can’t go getting it back without some sort of restore or recovery (as there is nothing to get it back from) – but you can if you drop the whole table. And I was not going mad, I had seen partitions in the recyclebin before: they appear when a whole partitioned table is dropped (there is, after all, a PARTITION_NAME column in the view).

This seems a little odd to me. I would imagine that dropping the wrong partition is exactly the sort of thing you would want flashback to fix, as you can with an accidentally dropped table, but there you go. You can’t.
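
For contrast, getting back a whole table that was dropped without PURGE is trivial – something along these lines (just a sketch, reusing the MDW table from above):

drop table mdw;
flashback table mdw to before drop;
-- or bring it back under a different name:
-- flashback table mdw to before drop rename to mdw_restored;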

Back to the day job….

You can explain an invalid SQL statement November 27, 2010

Posted by mwidlake in internals.
6 comments

I’m in “nightmare weekend before presenting” mode. I’m up to my eyes at work (and have been for ages, thus the quiet blog) and my recent weekends have been full of normal {and abnormal} life.

As is the way, when up against it and putting together my proofs for wild claims, everything breaks subtly and makes my wild claims look a little, well, wild – even though they are real issues I’ve seen, worked through and fixed in the day job. *sigh*. It does not help when you come across little oddities you have never seen before and end up spending valuable time looking into them.

So here is one. I’m just putting together a very, very simple demo of how the number of rows the CBO expects to see drops off as you move outside the known range. In the below you can see the statement I am using (I keep passing in different days of the month and watching the expected number of rows drop until I hit 1 expected row), but look at how it progresses to the last entry…

mdw11> select count(*) from date_test_flat where date_1=to_date('&day-02-2011','DD-MM-YYYY')
  2  /
Enter value for day: 01

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    16 |   128 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> /
Enter value for day: 15

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     2 |    16 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-15 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> /
Enter value for day: 21

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> /
Enter value for day: 30

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE('30-02-2011','DD-MM-YYYY'))

mdw11>

The expected number of rows drops, reaches 1 – and then shoots back up to 99 (which is the expected number within the known range, as I have 10,000 rows spread over 100 days). My immediate thought is “Wow! Maybe Oracle have put some odd fix in where, when you go well out of range, it reverts to expecting an average number of rows”. Nope. It is because I asked for the data for 30th February. And I did not get an error.

I think it is because I have set autotrace traceonly explain. This causes the SQL statement not to be executed {if it is just a select, not an insert, update or delete}. It seems the parsing and costing stages of the CBO do not spot the duff date – but the cardinality estimate then comes out wrong.
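
{In fact autotrace’s explain section is just EXPLAIN PLAN under the covers, so something like the below should show the same behaviour – a sketch, output not included here:

explain plan for
select count(*) from date_test_flat
where date_1 = to_date('30-02-2011','DD-MM-YYYY');

select * from table(dbms_xplan.display);
}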

I’ve spotted that the format of the filter also changes when the date is invalid. I really want to check that out – but I had better continue failing to write the presentation!

I know, it is pretty pointless knowing this, but it amused me. Below is just a quick continuation to show that if the statement is actually executed you get an error and no plan, and that utterly duff dates can be passed in.

mdw11> /
Enter value for day: 28

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-28 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))

mdw11> SET AUTOTRACE ON
mdw11> /
Enter value for day: 20
any key>

  COUNT(*)
----------
         0

1 row selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |     1 |     8 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE(' 2011-02-20 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        821  consistent gets
          0  physical reads
          0  redo size
        421  bytes sent via SQL*Net to client
        415  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

mdw11> /
Enter value for day: 30
select count(*) from date_test_flat where date_1=to_date('30-02-2011','DD-MM-YYYY')
                                                         *
ERROR at line 1:
ORA-01839: date not valid for month specified


mdw11> set autotrace traceonly explain
mdw11> /
Enter value for day: 30

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE('30-02-2011','DD-MM-YYYY'))

mdw11> /
Enter value for day: 45

Execution Plan
----------------------------------------------------------
Plan hash value: 247163334

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 |     8 |   215   (0)| 00:00:04 |
|   1 |  SORT AGGREGATE    |                |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| DATE_TEST_FLAT |    99 |   792 |   215   (0)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DATE_1"=TO_DATE('45-02-2011','DD-MM-YYYY'))

DBA_TAB_MODIFICATIONS can miss multi-table inserts July 5, 2010

Posted by mwidlake in internals.
4 comments

Last week I blogged a piece all about DBA_TAB_MODIFICATIONS and how it captures all normal DML against tables. However, a word of warning: this may not be true for multi-table inserts. It appears that inserts done via multi-table inserts are not recorded in DBA_TAB_MODIFICATIONS on 10.2.0.3 and a couple of 9.2 versions of Oracle.

{My thanks go to my comrade-in-arms Basavaraj Patil for this. He not only noticed the issue but tracked it down to multi-table inserts, came and told me about it and then said I could put it in my blog. This is his work really}.

I will not go into multi-table inserts in detail right now, but they are a very nice feature introduced with Oracle 9 that lets you insert a row into several tables at once, or into one of several different tables depending on conditions. In fact, I think it is a relatively unknown feature. Check out this oracle-base article on new 9i features that mentions it, or go and pull up the Oracle 10 SQL Reference manual.
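
If you have not met them, the shape of a multi-table insert is something like this minimal sketch (the table and column names are made up purely for illustration):

-- route each source row to one of two target tables, depending on a condition
insert first
  when val < 100 then into tab_small (id, val) values (id, val)
  else                 into tab_large (id, val) values (id, val)
select id, val
from   source_tab;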

There is a bug in Oracle up to 10.2.0.3, and in 9.2.0.6 and 9.2.0.7.
It is fixed in 9.2.0.8, 10.2.0.4 and 11.1.0.6.
It is bug number 4572043, if you have Metalink.
It impacts direct-load merge statements as well as multi-table inserts, but I have not tested and verified the direct-load merge case.

Below is a demonstration of the issue:

TDB>drop table obj_nonsys purge;

Table dropped.

TDB>drop table obj_sys purge;

Table dropped.

TDB>create table obj_nonsys
  2  as select * from all_objects where rownum<1
  3  /

Table created.

TDB>create table obj_sys
  2  as select * from all_objects where rownum<1
  3  /

Table created.

TDB>insert into obj_sys
  2  select * from dba_objects
  3  where owner in ('SYS','SYSTEM')
  4  and rownum <= 200
  5  /

200 rows created.

TDB>insert into obj_nonsys
  2  select * from dba_objects
  3  where owner not in ('SYS','SYSTEM')
  4  and rownum <= 150
  5  /

150 rows created.

TDB>commit;

Commit complete.
TDB>-- flush the changes down to the DBA_TAB_MODIFICATIONS table.
TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>--
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name in ('OBJ_SYS','OBJ_NONSYS');

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP                    
------------ ---------- ---------- ---------- ---- -----------------            
OBJ_NONSYS          150          0          0 NO   05-JUL-2010 20:59            
OBJ_SYS             200          0          0 NO   05-JUL-2010 20:59            

2 rows selected.

TDB>-- As can be seen above, the inserts are correctly captured
TDB>-- And the below counts confirm this
TDB>select count(*) from obj_sys;

  COUNT(*)                                                                      
----------                                                                      
       200                                                                      

1 row selected.

TDB>select count(*) from obj_nonsys;

  COUNT(*)                                                                      
----------                                                                      
       150                                                                      

1 row selected.

TDB>-- Now the core of it. Multi-table inserts
TDB>-- NB this is not the best example of a multi-table insert but it will do.
TDB>insert when (owner='SYS' or owner ='SYSTEM')
  2  	      then into obj_sys values ( OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID
  3  				       ,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS
  4  				       ,TEMPORARY,GENERATED,SECONDARY)
  5  	    when (owner !='SYS' and owner !='SYSTEM')
  6  	      then into obj_nonsys values ( OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID
  7  				 ,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS
  8  				 ,TEMPORARY,GENERATED,SECONDARY)
  9  	    select  OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID
 10  				 ,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS
 11  				 ,TEMPORARY,GENERATED,SECONDARY
 12  	    from dba_objects
 13  	    where object_type='TABLE'
 14  	    and rownum <= 1000
 15  /

1000 rows created.

TDB>commit;

Commit complete.

TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>-- And what do we see in DTM?
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name in ('OBJ_SYS','OBJ_NONSYS');

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP                    
------------ ---------- ---------- ---------- ---- -----------------            
OBJ_NONSYS          150          0          0 NO   05-JUL-2010 20:59            
OBJ_SYS             200          0          0 NO   05-JUL-2010 20:59            

2 rows selected.

TDB>-- Argh! No change to the number of inserts! They have been missed
TDB>--
TDB>-- Let me verify what is in the tables...
TDB>select count(*) from obj_sys;

  COUNT(*)                                                                      
----------                                                                      
      1025                                                                      

1 row selected.

TDB>select count(*) from obj_nonsys;

  COUNT(*)                                                                      
----------                                                                      
       325                                                                      

1 row selected.

TDB>-- And I will do similar inserts to what the multi-table insert would do just to make sure
TDB>-- there is nothing odd going on.
TDB>insert into obj_sys
  2  select * from dba_objects
  3  where object_type='TABLE'
  4  and owner in ('SYS','SYSTEM')
  5  and rownum <= 600
  6  /

600 rows created.

TDB>insert into obj_nonsys
  2  select * from dba_objects
  3  where object_type='TABLE'
  4  and owner not in ('SYS','SYSTEM')
  5  and rownum <= 400
  6  /

400 rows created.

TDB>commit;

Commit complete.

TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>--
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name in ('OBJ_SYS','OBJ_NONSYS');

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP                    
------------ ---------- ---------- ---------- ---- -----------------            
OBJ_NONSYS          550          0          0 NO   05-JUL-2010 20:59            
OBJ_SYS             800          0          0 NO   05-JUL-2010 20:59            

2 rows selected.

TDB>select count(*) from obj_sys;

  COUNT(*)                                                                      
----------                                                                      
      1625                                                                      

1 row selected.

TDB>select count(*) from obj_nonsys;

  COUNT(*)                                                                      
----------                                                                      
       725                                                                      

1 row selected.

TDB>
TDB>-- Note, the counts have gone up as well of course and are now adrift from DTM

DBA_TAB_MODIFICATIONS July 2, 2010

Posted by mwidlake in internals, performance, statistics.
16 comments

I’ve mentioned DBA_TAB_MODIFICATIONS (and there is also of course ALL_TAB_MODIFICATIONS and USER_TAB_MODIFICATIONS) in a couple of posts, but mostly buried in long, rambling posts about other things. So here is an overview and demonstration of DBA_TAB_MODIFICATIONS. Skip to the end if you just want to see an example of how it works.

The below example is from 10.2.0.3, but it is pretty much the same for V11, and for V9 except where I say otherwise.

SYS.DBA_TAB_MODIFICATIONS is used by Oracle internally to track how many inserts, updates and deletes have been done to a table, table partition or table subpartition since the segment last had stats gathered on it with dbms_stats. It also records whether the segment has been truncated since the last stats gather. Why since the last stats gather? Because Oracle uses it internally to detect how much a segment has changed: if it is 10% or more, Oracle deems the segment stale for stats-gathering purposes (under 11 you can alter the stale percentage – see the sketch after the example listing below). Note I said SYS.DBA_TAB_MODIFICATIONS – on 10 at least the public synonym for it is missing, so unlike most other DBA_ views you need to state the owner (or create a synonym).

OWNER.TABLE_NAME     INSERTS    UPDATES    DELETES TIMESTAMP         TRU
------------------- ---------- ---------- ---------- ----------------- ---
XXXXXXX.YYYYYYYYYYY   22598264          0          1 01-MAR-2010 02:08 NO
XXXXXXX.YYYYYYYYYYY          5          0          0 19-SEP-2007 01:47 NO
XXXXXXX.YYYYYYYYYYY     888766          0          0 01-MAR-2010 02:08 NO
XXXXXXX.YYYYYYYYYYY       3191       1486          0 27-NOV-2009 05:11 NO
XXXXXXX.YYYYYYYYYYY      34742          0          0 08-MAR-2010 15:16 NO
XXXXXXX.YYYYYYYYYYY          0       7192          0 02-JUL-2010 05:00 NO
XXXXXXX.YYYYYYYYYYY          0          1          0 10-MAR-2010 15:16 NO
XXXXXXX.YYYYYYYYYYY          8          8          8 26-JAN-2010 08:05 NO
XXXXXXX.YYYYYYYYYYY    1533536          0          2 01-MAR-2010 02:08 NO
XXXXXXX.YYYYYYYYYYY        281          0          0 11-SEP-2009 03:00 NO 
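
{As an aside on that 10% threshold: from 11g you can change it per table via the STALE_PERCENT statistics preference. Something like the below, from memory and with a made-up table name:

-- 11g onwards: deem MY_TABLE stale after 5% change rather than the default 10%
exec dbms_stats.set_table_prefs(ownname=>user, tabname=>'MY_TABLE', pname=>'STALE_PERCENT', pvalue=>'5')

-- check what is currently in force for the table
select dbms_stats.get_prefs('STALE_PERCENT', user, 'MY_TABLE') from dual;
}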

Under Oracle 9 you have to register a table as MONITORED before this information is gathered. Under 10 and 11 all tables/partitions/subpartitions are monitored and you can’t turn that monitoring off {you can try, but Oracle ignores you :-) }.
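
{For completeness, under 9i that per-table registration is done with something like the below – the table name is made up:

-- Oracle 9i: switch modification monitoring on (or off) for one table
alter table my_table monitoring;
alter table my_table nomonitoring;
}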

The information can be very useful for checking how volatile a segment is and whether it has changed a lot since the last time stats were gathered on it. You can also add the values held in DBA_TAB_MODIFICATIONS to the NUM_ROWS value held for the segment and get a very accurate estimate of the current number of rows. It is a lot, lot faster than actually counting them!
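
A quick-and-dirty version of that row-count estimate is something like the below sketch (flush the monitoring information first if you want it bang up to date, and extend it yourself for partitions):

select t.table_name
      ,nvl(t.num_rows,0) + nvl(m.inserts,0) - nvl(m.deletes,0)  est_current_rows
from   dba_tables t
       left outer join sys.dba_tab_modifications m
         on  m.table_owner    = t.owner
         and m.table_name     = t.table_name
         and m.partition_name is null
where  t.owner      = user
and    t.table_name = 'TEST_1';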

The information on inserts, updates and deletes is gathered for pretty much all DML against tables (see an upcoming post for an example of this not being true). Direct-load SQL*Loader and other direct-path activity can skip being recorded, but insert-append using the /*+ append */ hint is recorded correctly {I suspect this was not true for V9 and 10.1, but I am no longer sure}. The information is initially held in memory and only later pushed into DBA_TAB_MODIFICATIONS, so you may not see the latest information. Under Oracle 9 this information is flushed down every 15 minutes I believe, under 10.1 it is every 3 hours, and from 10.2 onwards the information is only flushed down when stats are gathered against the segment OR you manually flush it down to the database.

Flushing the latest information is achieved with DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO. It is relatively quick to run, normally taking no more than a few seconds.

When statistics are gathered on a segment, any corresponding row in DBA_TAB_MODIFICATIONS is deleted, not updated to zeros, and is recreated only when relevant inserts, updates, deletes or truncates next occur on the segment.

The DBA_TAB_MODIFICATIONS view sits on top of sys.mon_mods_all$ as well as obj$, user$ and the usual suspects. sys.mon_mods_all$ does not contain any more information than the view exposes.

desc sys.dba_tab_modifications
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------
 TABLE_OWNER                                                    VARCHAR2(30)
 TABLE_NAME                                                     VARCHAR2(30)
 PARTITION_NAME                                                 VARCHAR2(30)
 SUBPARTITION_NAME                                              VARCHAR2(30)
 INSERTS                                                        NUMBER
 UPDATES                                                        NUMBER
 DELETES                                                        NUMBER
 TIMESTAMP                                                      DATE
 TRUNCATED                                                      VARCHAR2(3)
 DROP_SEGMENTS                                                  NUMBER

--View description
TEXT
-----------------------------------------------------------------------
SYS                            DBA_TAB_MODIFICATIONS                  9
select u.name, o.name, null, null,
       m.inserts, m.updates, m.deletes, m.timestamp,
       decode(bitand(m.flags,1),1,'YES','NO'),
       m.drop_segments
from sys.mon_mods_all$ m, sys.obj$ o, sys.tab$ t, sys.user$ u
where o.obj# = m.obj# and o.obj# = t.obj# and o.owner# = u.user#
union all
select u.name, o.name, o.subname, null,
       m.inserts, m.updates, m.deletes, m.timestamp,
       decode(bitand(m.flags,1),1,'YES','NO'),
       m.drop_segments
from sys.mon_mods_all$ m, sys.obj$ o, sys.user$ u
where o.owner# = u.user# and o.obj# = m.obj# and o.type#=19
union all
select u.name, o.name, o2.subname, o.subname,
       m.inserts, m.updates, m.deletes, m.timestamp,
       decode(bitand(m.flags,1),1,'YES','NO'),
       m.drop_segments
from sys.mon_mods_all$ m, sys.obj$ o, sys.tabsubpart$ tsp, sys.obj$ o2,
     sys.user$ u
where o.obj# = m.obj# and o.owner# = u.user# and
      o.obj# = tsp.obj# and o2.obj# = tsp.pobj#

-- underlying sys.$ table
desc sys.mon_mods_all$
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------
 OBJ#                                                           NUMBER
 INSERTS                                                        NUMBER
 UPDATES                                                        NUMBER
 DELETES                                                        NUMBER
 TIMESTAMP                                                      DATE
 FLAGS                                                          NUMBER
 DROP_SEGMENTS                                                  NUMBER

Here is a demo of it in action:

TDB>-- clear down the test table.
TDB>-- if you do not have access to DBA_TAB_MODIFICATIONS change to ALL_TAB_MODIFICATIONS
TDB>drop table test_1 purge;

Table dropped.

TDB>create table test_1
  2  as select * from all_objects where rownum<1
  3  /

Table created.

TDB>select count(*) from test_1;

  COUNT(*)                                                                      
----------                                                                      
         0                                                                      

1 row selected.

TDB>-- WHAT IS IN DBA_TAB_MODIFICATIONS
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

no rows selected

TDB>-- OK, let's flush down the information
TDB>--
TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>-- Still nothing as no activity has occurred on the table.
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

no rows selected

TDB>-- Now create some data
TDB>insert into test_1
  2  select * from dba_objects where rownum <= 100
  3  /

100 rows created.

TDB>commit;

Commit complete.

TDB>select count(*) from test_1;

  COUNT(*)                                                                      
----------                                                                      
       100                                                                      

1 row selected.

TDB>-- WHAT IS IN DBA_TAB_MODIFICATIONS
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

no rows selected

TDB>-- OK, let's flush down the information
TDB>--
TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>--
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP                    
------------ ---------- ---------- ---------- ---- -----------------            
TEST_1              100          0          0 NO   02-JUL-2010 10:21            

1 row selected.

TDB>-- The information in DBA_TAB_MODIFICATIONS is used by Oracle to detect if a table
TDB>-- (or a partition in a table) is stale - changed by 10%
TDB>-- Gathering statistics on an object DELETES the record from DBA_TAB_MODIFICATIONS
TDB>-- rather than setting all the values to zero.
TDB>--
TDB>exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'TEST_1')

PL/SQL procedure successfully completed.

TDB>--
TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

no rows selected

TDB>-- now do some activity again and flush it down to the dictionary
TDB>insert  into test_1
  2  select * from dba_objects where rownum <= 150
  3  /

150 rows created.

TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP                    
------------ ---------- ---------- ---------- ---- -----------------            
TEST_1              150          0          0 NO   02-JUL-2010 10:21            

1 row selected.

TDB>-- Direct inserts via insert-append are also captured (I think this might have changed)
TDB>insert /*+ append */ into test_1
  2  select * from dba_objects where rownum <= 170
  3  /

170 rows created.

TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP                    
------------ ---------- ---------- ---------- ---- -----------------            
TEST_1              320          0          0 NO   02-JUL-2010 10:21            

1 row selected.

TDB>-- now a mixed bag of inserts, updates and deletes
TDB>insert into test_1
  2  select * from dba_objects where rownum <= 75
  3  and owner not in ('SYS','SYSTEM')
  4  /

75 rows created.

TDB>--
TDB>update test_1 set created=sysdate
  2  where object_type !='TABLE'
  3  /

289 rows updated.

TDB>delete from test_1
  2  where object_type='SEQUENCE'
  3  /

10 rows deleted.

TDB>commit;

Commit complete.

TDB>--
TDB>exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

PL/SQL procedure successfully completed.

TDB>select table_name,inserts,updates,deletes,truncated,timestamp
  2  from sys.dba_tab_modifications
  3  where table_owner=user and table_name= 'TEST_1'
  4  /

TABLE_NAME      INSERTS    UPDATES    DELETES trnc TIMESTAMP                    
------------ ---------- ---------- ---------- ---- -----------------            
TEST_1              395        289         10 NO   02-JUL-2010 10:21            

1 row selected.

If you want to see what else I said about DBA_TAB_MODIFICATIONS, the end of this {way too long} post shows how to use it to get a very fast count of rows in a table, and if you have issues with permissions when flushing database monitoring information, see this link about the permissions needed.

Question: as the data gets flushed from memory to a data dictionary table, it persists across the database being bounced. What happens to the in-memory data about table changes when there is a controlled shutdown, and when the database crashes?

SQL*Plus Line Insertion June 22, 2010

Posted by mwidlake in development.
4 comments

I’m quite new to SQL*Plus, having only been using it for just shy of 19 years… And today I learnt something new about it. You can over-write lines anywhere in the buffer by stating the line number:

  1  select p.surname
  2        ,p.first_forename
  3        ,a.house_number
  4        ,a.post_code
  5  from person p
  6      ,adres a
  7* where p.addr_id=a.addr_id

-- Damn, miss-spelt address in line 6
TDB> 6   ,address a
TDB> l
  1  select p.surname
  2        ,p.first_forename
  3        ,a.house_number
  4        ,a.post_code
  5  from person p
  6    ,address a
  7* where p.addr_id=a.addr_id 

I use SQL*Plus a lot as (a) I am getting stuck in my ways, (b) it is always available on all Oracle sites and (c) I can use all those useful little scripts I’ve developed over the years. Using “edit” to fire up vi or notepad to edit the buffer is fine, but for correcting a typo I usually type in the line number and then c/mistake/correct. This other method I’ve just come across, of replacing the whole line, could be useful.

But I played around and found I could do a couple of other things. I can add lines at the end of the script {maybe not so useful as, when you do a list, you are at the end of the buffer so you can just type the new line anyway} – but I can also insert lines at the START of the script:

  1  select p.surname
  2        ,p.first_forename
  3        ,a.house_number
  4        ,a.post_code
  5  from person p
  6    ,address a
  7* where p.addr_id=a.addr_id

TDB> 8 and a.dob <sysdate-(18*365)
TDB> l
  1  select p.surname
  2        ,p.first_forename
  3        ,a.house_number
  4        ,a.post_code
  5  from person p
  6    ,address a
  7  where p.addr_id=a.addr_id
  8* and a.dob <sysdate-(18*365)

-- if you enter a line number a way beyond the end of the buffer, SQL*Plus
-- intelligently corrects it to the next valid line number
TDB> 12 order by 1,2
TDB> l
  1  select p.surname
  2        ,p.first_forename
  3        ,a.house_number
  4        ,a.post_code
  5  from person p
  6    ,address a
  7  where p.addr_id=a.addr_id
  8  and a.dob <sysdate-(18*365)
  9* order by 1,2

-- And it works from the other end of the file. Only it does not replace the
-- first valid line, it inserts the new line and moves all the others "down".
TDB> 0 select count(*) from (
TDB> l
  1  select count(*) from (
  2  select p.surname
  3        ,p.first_forename
  4        ,a.house_number
  5        ,a.post_code
  6  from person p
  7    ,address a
  8  where p.addr_id=a.addr_id
  9  and a.dob <sysdate-(18*365)
 10* order by 1,2
TDB> a  )
 10* order by 1,2 )
-- script finished...
TDB> /
and a.dob <sysdate-(18*365)
    *
ERROR at line 9:
ORA-00904: "A"."DOB": invalid identifier

-- Damn! another typo.
-- I think in this case I will just go to the line and Change the character - it
-- is less effort than typing the whole line again.
TDB> 9
  9* and a.dob <sysdate-(18*365)
TDB> c/a./p./
  9* and p.dob <sysdate-(18*365)
DWPDV1> l
  1  select count(*) from (
  2  select p.surname
  3        ,p.first_forename
  4        ,a.house_number
  5        ,a.postcode
  6  from person p
  7    ,address a
  8  where p.addr_id=a.addr_id
  9  and p.dob <sysdate-(18*365)
 10* order by 1,2 )
TDB> /
Any Key>

  COUNT(*)
----------
     31963

1 row selected.

Up until now, to insert a new line at the start of the buffer I either had to save and edit the file or I altered the first line and re-inserted the original. I quite often want to insert a first line, as I like to use the “select count(*) from (original_query)” trick to test the performance of a SQL statement without having the actual data sent over SQL*Net to my session. It takes that network activity out of the equation.

The other thing I do occasionally is add an explain plan statement:

TDB> l
  1  select count(*) from (
  2  select p.surname
  3        ,p.first_forename
  4        ,a.house_number
  5        ,a.post_code
  6  from person p
  7    ,address a
  8  where p.addr_id=a.addr_id
  9  and p.dob <sysdate-(18*365)
 10* order by 1,2 )
TDB> -13 explain plan set statement_id='MDW' for
TDB> l
  1  explain plan set statement_id='MDW' for
  2  select count(*) from (
  3  select p.surname
  4        ,p.first_forename
  5        ,a.house_number
  6        ,a.post_code
  7  from person p
  8    ,address a
  9  where p.addr_id=a.addr_id
 10  and p.dob <sysdate-(18*365)
 11* order by 1,2 )
TDB> /

Explained.

TDB> 

dbms_stats.set_table_stats “defaults” June 21, 2010

Posted by mwidlake in internals, statistics.
add a comment

What happens if you call dbms_stats.set_table_stats without passing in any of the values to set?

I know, why would you do it anyway? Well, I did so by accident. If I want to gather quick stats on a test table I execute something like:

exec dbms_stats.gather_table_stats(OWNNAME =>user,TABNAME =>'TEST_TAB1')
If I am feeling generous I might state ESTIMATE_PERCENT too.

I was doing some testing work and was gathering stats and also setting stats manually. Then I started to see that several of my test tables all had 2000 rows and were 100 blocks in size – at least according to the stats. I knew this was not possible. It turned out to be cut ’n’ paste finger trouble – I was issuing:

exec dbms_stats.set_table_stats(OWNNAME =>user,TABNAME =>'TEST_TAB1')

If the table did not already have stats this set the stats on the table to default values of 2000 rows, 100 blocks. If the table already had stats then they were left as they were.

If those figures ring a bell, that is because they are the default values used if a table has no stats and you have no dynamic sampling. See this table of defaults.

Anyway, below is a little worked example of these default values being set. Oh, version is 10.2.0.3.

TDB> drop table TEST1 purge
  2  /

TDB> select sysdate from dual;
SYSDATE
-----------------
21-JUN-2010 16:52


TDB> CREATE TABLE TEST1
  2  AS SELECT	    ROWNUM ID
  3    ,OBJECT_NAME OBJ_NAME
  4  FROM DBA_OBJECTS


TDB> SELECT TABLE_NAME,NUM_ROWS,BLOCKS,SAMPLE_SIZE
  2  FROM DBA_TABLES
  3  WHERE OWNER=USER AND TABLE_NAME = 'TEST1'
  4  /

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1

-- New table, no stats yet gathered, the columns hold null

-- Call dbms_stats.SET_TABLE_STATS, setting nothing
TDB> EXEC dbms_stats.set_table_stats(ownname => user,tabname => 'TEST1')

TDB> SELECT TABLE_NAME,NUM_ROWS,BLOCKS,SAMPLE_SIZE
  2  FROM DBA_TABLES
  3  WHERE OWNER=USER AND TABLE_NAME = 'TEST1'

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1                                2000        100        2000

-- The columns are set to defaults

-- Gather proper stats
TDB> exec dbms_stats.gather_table_stats(ownname =>user,tabname =>'TEST1',estimate_percent=>10)

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1                              205430        956       20543


-- Now use SET_TABLE_STATS as intended, setting numrows to a value
TDB> EXEC dbms_stats.set_table_stats(ownname => user,tabname => 'TEST1',numrows=>5000)

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1                                5000        956       20543

-- Try the naked SET_TABLE_STATS
TDB> EXEC dbms_stats.set_table_stats(ownname => user,tabname => 'TEST1')

TABLE_NAME                       NUM_ROWS     BLOCKS SAMPLE_SIZE
------------------------------ ---------- ---------- -----------
TEST1                                5000        956       20543

TDB> select sysdate from dual;
Any Key>
SYSDATE
-----------------
21-JUN-2010 16:52


-- And let us see how the stats have changed over the last few minutes.
TDB> select table_name,stats_update_time
  2  from dba_tab_stats_history
  3  where table_name = 'TEST1'
  4  /
Any Key>
TABLE_NAME                     STATS_UPDATE_TIME
------------------------------ --------------------------------
TEST1                          21-JUN-10 16.52.03.028086 +00:00
TEST1                          21-JUN-10 16.52.05.109905 +00:00
TEST1                          21-JUN-10 16.52.06.906204 +00:00
TEST1                          21-JUN-10 16.52.08.329664 +00:00

Who am I? {What is my SID} June 18, 2010

Posted by mwidlake in internals.
2 comments

Yesterday, I posted about what AUDSID is. One of the comments, from Gary Myers, included the info that you can get AUDSID via sys_context – "select sys_context('USERENV','SESSIONID') from dual;".

In one of those wondrous little quirks of fate, it was the need to get my session’s current SID and the use of "select sys_context('USERENV','SESSIONID') from dual;" that reminded me of AUDSID and my intention to say something about it. The SESSIONID from that little select is the AUDSID, and you then check V$SESSION for the SID of the record with that AUDSID. I guess that works so long as you are NOT logged on as SYS (see the previous post for why). See the script below for an example, in the commented-out section.

Anyway, back to the nominal topic of getting your SID. Lots of people have posted about it before, but let not plagiarism stop me. I actually tend to use v$mystat myself. From my big book of little scripts I drag from site to site:

-- my_sid.sql
-- Martin Widlake 09/06/08
-- Get my current SID
-- I can never remember the syntax
set pages 32
set pause on
spool my_sid.lst
select sid
from v$mystat
where rownum < 2
/
--or
--select sys_context('USERENV','SID') 
--from dual
--/
-- or
-- SELECT sid 
-- FROM V$SESSION
-- WHERE audsid = SYS_CONTEXT('userenv','sessionid');
--/
spool off
clear col
--
-- EOF
--

>@my_sid

      SID
---------
      530

Boy does it take me a lot of words and even a 31-line output file to say:

select sid from v$mystat where rownum < 2;

I do wish, like many, that you could simply say “select sid from dual;” in the same way as “select user from dual”, but in the great scheme of things it does not keep me awake at night.

I can’t Explain Why June 8, 2010

Posted by mwidlake in internals, performance.
3 comments

Have you ever tried to use Explain Plan and it gives you an error like the below (this is on 10.2)?

DWDBT1> set autotrace on

  1  select count(*)
  2  from person pers
  3  ,person_name pena
  4  where pena.pers_id=pers.pers_id
  5* and pena.surname='SMITH'
DWDBT1> /

  COUNT(*)
----------
     23586
1 row selected.

Execution Plan
----------------------------------------------------------
ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-00959: tablespace 'USER_TEMP' does not exist

SP2-0612: Error generating AUTOTRACE EXPLAIN report

I seem to run into this once a year or so and, as it has happened three times now, this time around my brain had at long last hung onto the details and I remembered the cause. As a clue, I run into this issue when I am using a new development database that was derived from a live one…

As you can see from the above error message, tablespace USER_TEMP does not exist. How can you be referencing a tablespace that does not exist? First thing, does the tablespace really not exist and thus the error is real?

DWDBT1> @ts_lst
Enter value for ts_name: u
old 12: where tablespace_name like upper('&ts_name'||'%')
new 12: where tablespace_name like upper('u'||'%')
Any Key>;

TS_NAME                                              INI_EXT_K NEXT_EXT_K MIN_EX
--------------------------------------------------- ---------- ---------- ------
    MAX_EX PCT       MIN_EXTLN ST
---------- --------- --------- ---------
UNDOTBS                                                     64                 1
2147483645                  64 ON
USERS                                                       40         40      1
2147483645 0                40 ON

2 rows selected.

As you can see, there is no tablespace USER_TEMP. So it must be something to do with PLAN_TABLE, the table that underlies EXPLAIN PLAN. So let’s check that the table exists.

DWDBT1> desc plan_table
 Name                                                        Null?    Type
 ----------------------------------------------------------- -------- ---------------------
 STATEMENT_ID                                                         VARCHAR2(30)
 PLAN_ID                                                              NUMBER
 TIMESTAMP                                                            DATE
 REMARKS                                                              VARCHAR2(4000)
 OPERATION                                                            VARCHAR2(30)
 OPTIONS                                                              VARCHAR2(255)
 OBJECT_NODE                                                          VARCHAR2(128)
 OBJECT_OWNER                                                         VARCHAR2(30)
 OBJECT_NAME                                                          VARCHAR2(30)
 OBJECT_ALIAS                                                         VARCHAR2(65)
 OBJECT_INSTANCE                                                      NUMBER(38)
 OBJECT_TYPE                                                          VARCHAR2(30)
 OPTIMIZER                                                            VARCHAR2(255)
 SEARCH_COLUMNS                                                       NUMBER
...

Yes, the table exists. Or a view or something that looks like a table anyway. Let’s check further.

DWDBT1> @obj_lst
Enter value for obj_name: plan_table
old   8: where object_name like upper(nvl('&obj_name','WHOOPS')||'%')
new   8: where object_name like upper(nvl('plan_table','WHOOPS')||'%')
Enter value for sub_name: 
old   9: and   nvl(subobject_name,'WHOOPS') like upper(nvl('&sub_name','WHOOPS')||'%')
new   9: and   nvl(subobject_name,'WHOOPS') like upper(nvl('','WHOOPS')||'%')
Enter value for obj_type: %
old  10: and object_type like upper(nvl('&obj_type','TABLE')||'%')
new  10: and object_type like upper(nvl('%','TABLE')||'%')
Any Key>

OWNER      OBJECT_NAME          SUBOBJECT_NA OBJ_TYPE     CRE_DATE        LAST_DDL
---------- -------------------- ------------ ------------ --------------- ------------------------
PUBLIC     PLAN_TABLE                        SYNONYM      10-JUL-07       10-JUL-07 12:19:14
SYS        PLAN_TABLE$                       TABLE        10-JUL-07       10-JUL-07 12:19:14

If you are not aware, in V10 PLAN_TABLE was replaced with a global temporary table PLAN_TABLE$ and a public synonym PLAN_TABLE referencing it, which is what you see above. If I quickly pull out a few details of the table, you can see that it is temporary and that it has no tablespace allocated to it.

  1  select table_name,status,temporary TEMP,tablespace_name
  2  from dba_tables
  3* where owner=user and table_name ='PLAN_TABLE$'
DWDBT1> /

TABLE_NAME                     STATUS   TEMP TABLESPACE_NAME
------------------------------ -------- ---- ------------------------------
PLAN_TABLE$                    VALID    Y

The temporary table segment goes into the user’s temporary tablespace (and we are getting close to the cause of the error now, honest). Here is a subset of user details:

USERNAME        USER_ID DFLT_TABSPACE
------------ ---------- ---------------------------------------------
TEMP_TABSPACE                                 CREATED
--------------------------------------------- -----------------
XXADMIN             413 WORKING_128K
USER_TEMP                                     07-DEC-2009 17:34
XXRED               134 DW_COMMON_MG
BATCH_TEMP                                    07-DEC-2009 17:29
DWABCD              414 USERS
USER_TEMP                                     07-DEC-2009 17:3

DWABCD is the user I log in as, and it has its temporary tablespace set to USER_TEMP – the tablespace named in the original error message. But if I now check what temporary files are on the system, I see the below:

DWDBT1> select * from dba_temp_files
2 /
Any Key...>

FILE_NAME
----------------------------------------------------------------------------------------------------
FILE_ID    TABLESPACE_NAME                BYTES      BLOCKS     STATUS     RELATIVE_FNO AUT
---------- ------------------------------ ---------- ---------- --------- ------------ ---
MAXBYTES   MAXBLOCKS  INCREMENT_BY USER_BYTES USER_BLOCKS
---------- ---------- ------------ ---------- -----------
/u01/oradata/dwDBT1/temp01.tdbf
1          TEMP                            5368709120 655360    AVAILABLE 1             YES
3.4360E+10  4194302   1280          5367660544  655232
/u01/oradata/dwDBT1/batch_temp01.tdbf
2          BATCH_TEMP                      104857600  12800     AVAILABLE 1             YES
3.4360E+10  4194302   1280          103809024  12672

2 rows selected.

Only two, and neither is called USER_TEMP.

So, the error is occurring when the call to Explain Plan is trying to generate a temporary segment in the non-existent temp tablespace. The fix is to simply set the TEMPORARY tablespace for the user to one that exists {or I guess you could create a new temporary tablespace of the correct name}:

DWDBT1> alter user dwabcd temporary tablespace temp;

User altered.

DWDBT1> select count(*) from person;
Any Key...>

COUNT(*)
----------
322798

1 row selected.

Elapsed: 00:00:03.96

Execution Plan
----------------------------------------------------------
Plan hash value: 1154882994

---------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     1 |  2417   (1)| 00:01:06 |
|   1 |  SORT AGGREGATE    |        |     1 |            |          |
|   2 |   TABLE ACCESS FULL| PERSON |   328K|  2417   (1)| 00:01:06 |
---------------------------------------------------------------------

What is the cause of the problem? Well, Oracle will not let you set the temporary tablespace of a user to a non-existent temporary tablespace, as you can see by my attempt to set my user back to the invalid value:

DWDBT1> alter user dwabcd temporary tablespace USER_TEMP
2 /
alter user dwabcd temporary tablespace USER_TEMP
*
ERROR at line 1:
ORA-00959: tablespace 'USER_TEMP' does not exist

But an import will happily bring in users from another system where that temporary tablespace does exist and where those users have it set as their temporary tablespace. This is why I run into the problem every year or so: I find myself helping out with the creation of development databases from a live database, and this mismatch in temporary tablespaces regularly occurs.
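
If you hit the same thing after a database refresh, a quick check along these lines (just a sketch, not a polished script) lists any users whose temporary tablespace setting no longer points at a real temporary tablespace:

-- sketch: find users whose temporary tablespace is not an existing temporary tablespace
select username, temporary_tablespace
from   dba_users
where  temporary_tablespace not in
       (select tablespace_name
        from   dba_tablespaces
        where  contents = 'TEMPORARY');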

INTERNAL_FUNCTION() Impact April 21, 2010

Posted by mwidlake in internals, performance.
Tags: , ,
13 comments

You may occasionally see something like the following in the filter predicates of an Explain Plan:

INTERNAL_FUNCTION("COL_1")>:P0

What is INTERNAL_FUNCTION, what is the use of INTERNAL_FUNCTION() in Oracle explain plans and why am I putting things like ORACLE INTERNAL_FUNCTION partition exclusion and meaning of internal_function performance almost as random strings in this blog?

Well, I want to get the hit rate up for this post as, when I did a quick search on INTERNAL_FUNCTION, the current top hit is this unfortunately misleading posting that tells you “It is not a significant overhead in SQL execution performance”. This internal function IS potentially a significant overhead in SQL execution performance. You can’t add comments to the above site {just book a course or order a book}, so I’m having to create my own page…

INTERNAL_FUNCTION is, as the above posting says, where Oracle does an implicit data conversion. In the case shown there, as in my example below and in most cases I have seen, the conversion is between timestamp and date. Because TIMESTAMP has a higher data type precedence than DATE, Oracle converts the DATE column up to a TIMESTAMP rather than converting the bind variable down, which is why the function ends up wrapped around the column rather than the value it is compared to.

Why is it an issue? Because it is a function applied to a column and, as such, it can have a disastrous impact on SQL execution performance.

The code below (which has most of the select columns removed for simplicity and to protect the innocent) was performing very poorly, taking over 30 minutes to complete. This is all on Oracle 10.2.0.3 Enterprise Edition.

SELECT trans0_.ID as ID3_1_
...
, round(transfers0_.AMOUNT,2) as formula10_0_
FROM VW_PMTT trans0_
WHERE  (trans0_.MODIFIED_DT between :p0 AND :p1
        AND trans0_.PERS_ID = :p2)
and transfers0_.ACCOUNT_ID=:p3
ORDER BY transfers0_.MODIFIED_DT

This is the plan and the significant filter/access predicates:

Plan hash value: 3261981189
------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                 | Rows  |Bytes| Cost   |Pstart|Pstop|
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                      |       |     | 67841  |      |     |
|   1 |  SORT ORDER BY                    |                      |     8 | 920 | 67841  |      |     |
|   2 |   VIEW                            | VW_PMTP              |     8 | 920 | 67840  |      |     |
|   3 |    UNION-ALL                      |                      |       |     |        |      |     |
|*  4 |     FILTER                        |                      |       |     |        |      |     |
|   5 |      NESTED LOOPS                 |                      |     4 | 280 | 28490  |      |     |
|*  6 |       TABLE ACCESS BY INDEX ROWID | W_TBLP               |     1 |  14 |     3  |      |     |
|*  7 |        INDEX UNIQUE SCAN          | PK_W_TBLP            |     1 |     |     2  |      |     |
|   8 |       PARTITION RANGE ALL         |                      |     4 | 224 | 28487  |    1 | 1580|
|*  9 |        TABLE ACCESS BY LOC IND RID| W_PAYMENT            |     4 | 224 | 28487  |    1 | 1580|
|* 10 |         INDEX RANGE SCAN          | IDX_W_P_TRANS        | 66936 |     |  3345  |    1 | 1580|
...

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - filter("PLY"."ACCOUNTID"=:P3)
   7 - access("PLY"."PERS_ID"=:P2)
   9 - filter(("PT"."TYPE"=1184769 AND INTERNAL_FUNCTION("PT"."MODIFIED")>=:P0 AND INTERNAL_FUNCTION("PT"."MODIFIED")<:P1

As you can see, the view is translated into a simple two-table join {and unioned with a second two-table join, which I removed again for clarity, but if anyone wants the full plan and statement, email me} where:

  • one table is accessed on a unique index (via bind variable P2)
  • its rows are then filtered on P3
  • this table is then joined to a partitioned table in a nested loop
  • with the partitioned table being accessed via a local index range scan.

At first glance the plan may look fine, but look at the Pstart and Pstop columns: 1 to 1580. That is every partition in the table.
The predicate information shows that INTERNAL_FUNCTION("PT"."MODIFIED") is being compared to :P0 and :P1.

The partition key on the table is:-

>@chk_patr_key
Name of Table : W_PAYMENT

TAB_NAME                       OBJECT_TYP PSN COLUMN_NAME
------------------------------ ---------- --- -----------
USER.W_PAYMENT                 TABLE       1 MODIFIED

i.e. the very column that is being filtered on. So why no partition pruning?

Partition pruning is not occurring because of the INTERNAL_FUNCTION being applied to that column. The CBO is not able to understand how the result of a function will match to the partition values of the column. Not even its own, internally used function :-P

For the above, the “date” bind variables were defined as

p0 timestamp := to_timestamp('01-FEB-2010 00:00','DD-MON-YYYY HH24:MI');
p1 timestamp := to_timestamp('03-FEB-2010 00:00','DD-MON-YYYY HH24:MI');

I then altered the code such that the bind variables were defined as dates.

p0 date := to_date('01-FEB-2010','DD-MON-YYYY');
p1 date := to_date('03-FEB-2010','DD-MON-YYYY');

The plan and significant predicates were now:

Plan hash value: 346162108
------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                 | Rows  |Bytes| Cost   |Pstart|Pstop|
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                      |       |     | 67841  |      |     |
|   1 |  SORT ORDER BY                    |                      |     4 | 460 | 67841  |      |     |
|   2 |   VIEW                            | VW_PMTP              |     4 | 460 | 67840  |      |     |
|   3 |    UNION-ALL                      |                      |       |     |        |      |     |
|*  4 |     FILTER                        |                      |       |     |        |      |     |
|   5 |      NESTED LOOPS                 |                      |     2 | 140 | 28490  |      |     |
|*  6 |       TABLE ACCESS BY INDEX ROWID | W_TBLP               |     1 |  14 |     3  |      |     |
|*  7 |        INDEX UNIQUE SCAN          | PK_W_TBLP            |     1 |     |     2  |      |     |
|   8 |       PARTITION RANGE ALL         |                      |     2 | 112 | 28487  |  KEY | KEY |
|*  9 |        TABLE ACCESS BY LOC IND RID| W_PAYMENT            |     2 | 112 | 28487  |  KEY | KEY |
|* 10 |         INDEX RANGE SCAN          | IDX_W_P_TRANS        | 66936 |     |  3345  |  KEY | KEY |
...

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - filter("PLY"."ACCOUNTID"=:P3)
   7 - access("PLY"."PERS_ID"=:P2)
   9 - filter(("PT"."TYPE"=1184769 AND "PT"."MODIFIED">=:P0 AND "PT"."MODIFIED"<:P1

The expected rows and bytes drop, but the key difference in the plan is that Pstart and Pstop are now KEY KEY, i.e. the CBO will evaluate at run time the values coming back from the table access of W_TBLP and partition prune the access of the table.

In this situation, the original query took over 30 minutes to return. The new code took 18 seconds.

I tried one final test. I set the bind variables to be varchar2 so that implicit data conversion would occur:
p0 varchar2(20) := '01-FEB-2010 00:00';
p1 varchar2(20) := '03-FEB-2010 00:00';

With VARCHAR2 the CBO was able to do the implicit conversion without the INTERNAL_FUNCTION and the plan/filter predicates were exactly as for native dates.

It would seem this issue with INTERNAL_FUNCTION blights timestamp to date implicit data conversion but not some other implicit data conversions. Without testing further, I can’t be sure which.
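
If you want to see the effect for yourself, a throwaway test along these lines should do it. Treat it as a sketch: the table name, partition ranges and dates are all made up by me. The filter predicates of the plan should show INTERNAL_FUNCTION around MODIFIED when the comparison values are timestamps; swap the TO_TIMESTAMP calls for TO_DATE and it should disappear.

-- throwaway sketch: a range-partitioned table on a DATE column
create table t_pay
( pay_id   number       not null
, modified date         not null
, amount   number(10,2) )
partition by range (modified)
( partition p_jan10 values less than (to_date('01-FEB-2010','DD-MON-YYYY'))
, partition p_feb10 values less than (to_date('01-MAR-2010','DD-MON-YYYY'))
, partition p_max   values less than (maxvalue) );

-- compare the DATE column to TIMESTAMP values and look at the predicates
explain plan for
select count(*) from t_pay
where  modified >= to_timestamp('01-FEB-2010 00:00','DD-MON-YYYY HH24:MI')
and    modified <  to_timestamp('03-FEB-2010 00:00','DD-MON-YYYY HH24:MI');

select * from table(dbms_xplan.display);

-- now repeat with to_date() in place of to_timestamp(): INTERNAL_FUNCTION
-- should no longer appear around MODIFIED in the filter predicates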

This is a documented feature by Oracle. Unusually (and much to their credit) Oracle even document the potential negative impact of this particular construct with timestamps and dates. See this section in chapter 5 of the Oracle 10g data warehousing guide.

I had better balance the fact that I linked to an inaccurate posting on the subject with a few good ones.
This is a nice one from McLaud on the OraFAQ site where he has come across it as a problem and with help shows how it is due to implicit conversion of a timestamp from Hibernate.
This blog on the issue arising with Java is a nice example of the fix (use a date to hold a date, not a timestamp).

Finally, this link is to an older but very, very valid rant by Tom Kyte about implicit data conversion and using the correct data type for your data. {Don’t go using numbers or varchars to hold dates, just don’t, and similarly you should avoid implicit data conversion}.
