
Pragma UDF – Some Current Limitations November 11, 2015

Posted by mwidlake in performance, PL/SQL, SQL, Testing.

There are currently some limitations to when pragma UDF will speed up your calls to PL/SQL functions from SQL.

In my post introducing the new pragma UDF feature of Oracle 12c I explained how it can be used to reduce the impact of context switching when you call a PL/SQL function from SQL.

In my example I showed how a SQL-only SELECT statement that formatted a name for display over 100,000 records ran in 0.03 seconds, but took 0.33 seconds when the formatting SQL was moved into a user defined PL/SQL function. This impact on performance is a shame, as it is so beneficial to encapsulate business logic in one single place with PL/SQL. Declaring the PL/SQL function as a user defined one with the pragma UDF option reduced the run time to 0.08 seconds – which removes most of the context switching overhead. Check out the prior post for full details.

This improvement in performance is great news and is as good as, and sometimes better than, using the other new capability of 12c – declaring a function as part of a SQL statement using the WITH clause (I plan to do a further post on that). As a quick example, here is my display name function expressed within a WITH clause:

with 
  function l_disp_name(p_sn      in varchar2
                      ,p_fn1     in varchar2
                      ,p_fn2     in varchar2 :=null  
                      ,p_title   in varchar2 :=null )
return varchar2
is
v_return     varchar2(1000);
begin
  v_return := case when p_title is null then ''
                   else initcap(p_title)||' '
              end
            ||initcap(p_fn1)||' '
            ||case when p_fn2 is null then ''
                   else substr(p_fn2,1,1)||' '
              end
            ||initcap(p_sn);
return v_return;
end l_disp_name;
select  max(l_disp_name(p_sn =>surname           ,p_fn1  =>first_forename
                       ,p_fn2=>second_forename   ,p_title=>pers_title)        ) text_output
       ,count(*)
from pers
/

The above runs in 0.10 seconds, just slightly slower than the 0.08 seconds for my pragma UDF function.

However, I need to warn you of some current limitations to pragma UDF. Nearly all the examples on the web so far:

  • use very, very simple functions that take in a number and return a number
  • use a stand-alone stored function

And they work fine. However, I had real trouble getting a performance gain when I was working with my function that took in four varchar2 inputs and returned a varchar2 value. No error was given when I marked the function with pragma UDF but there was no performance gain (or loss).

I eventually worked out some limitations to pragma UDF on my version of Oracle – 12.1.0.2.0:

  1. It gives a performance boost when the IN parameters and RETURN value are NUMBER or VARCHAR2, including when there are multiple VARCHAR2 IN parameters.
  2. There is no performance boost when either the IN parameter or the RETURN value (or both) is a DATE.
  3. There is no performance boost if there are any default values for VARCHAR2 IN parameters.
  4. If the function gains a performance benefit from pragma UDF as a standalone stored function, it appears to also gain an advantage if it is a function defined as pragma UDF within a package – so you can still keep all your functions in packages.

You might notice that my example of using the WITH clause declares a function that has default values. The WITH clause function gains its performance advantage just fine with IN parameter defaults.

The take-home message is that, at present, pragma UDF only seems to help functions with certain types of IN or RETURN values and is nullified by default values – so if you see no performance gain for your functions, this might be why. I need to stress that my tests were not exhaustive; I have not investigated many other combinations.

I’ve discussed the issue with a couple of people within Oracle and the relevant Product Manager is looking to investigate further for me, which is jolly decent of the fellow.

My investigation is of course only empirical testing; it does not reveal how pragma UDF works. But, as I said in my first post, it seems to aid how information is passed between the PL/SQL and SQL engines, as it is variation in the IN and RETURN data types that seems to nullify the benefit of pragma UDF. If you want to duplicate my tests, you can do so with the below scripts. I show my test output first, with comments produced with PROMPT commands. I then give you the SQL to create the test table, the functions and package I used, and the test script. Feel free to take it, expand it and let me know of anything different or further you may find. I say nothing of interest after the scripts, so this is in effect the end of the post :-)

The output of my test, with prompts:

running udf_tests
investigating why pragma udf helps some simple functions and not others
-----------------------------------------------------------------------

simple number in-number out function
NUMBER_OUTPUT   COUNT(*)                                                                            
------------- ----------                                                                            
  10000000000     100000                                                                            
Elapsed: 00:00:00.12

NUMBER_OUTPUT   COUNT(*)                                                                            
------------- ----------                                                                            
  10000000000     100000                                                                            
Elapsed: 00:00:00.03
** udf helps

simple varchar in-varchar out function
TEXT_OUTPUT                      COUNT(*)                                                           
------------------------------ ----------                                                           
XYYYY                              100000                                                           
Elapsed: 00:00:00.12

TEXT_OUTPUT                      COUNT(*)                                                           
------------------------------ ----------                                                           
XYYYY                              100000                                                           
Elapsed: 00:00:00.04
** udf helps

two varchar in-varchar out function, is the issue with more than one in parameter?
TEXT_OUTPUT                      COUNT(*)                                                           
------------------------------ ----------                                                           
XYYYYYYYYY                         100000                                                           
Elapsed: 00:00:00.14

TEXT_OUTPUT                      COUNT(*)                                                           
------------------------------ ----------                                                           
XYYYYYYYYY                         100000                                                           
Elapsed: 00:00:00.04
** udf helps

simple date in-date out function
DATE_OUTPUT            COUNT(*)                                                                     
-------------------- ----------                                                                     
14-MAY-2010 13:11        100000                                                                     
Elapsed: 00:00:00.15

DATE_OUTPUT            COUNT(*)                                                                     
-------------------- ----------                                                                     
21-NOV-2004 13:11        100000                                                                     
Elapsed: 00:00:00.15
***************************************************************SIMILAR TIME!!!
udf does not help

is date out the issue,  date in-num out function
NUMBER_OUTPUT   COUNT(*)                                                                            
------------- ----------                                                                            
      2454431     100000                                                                            
Elapsed: 00:00:00.17

NUMBER_OUTPUT   COUNT(*)                                                                            
------------- ----------                                                                            
      2454231     100000                                                                            
Elapsed: 00:00:00.18
***************************************************************SIMILAR TIME!!!
udf does not help

is date in the issue,  num in-date out function
DATE_OUTPUT            COUNT(*)                                                                     
-------------------- ----------                                                                     
07-AUG-2018 18:11        100000                                                                     
Elapsed: 00:00:00.21

DATE_OUTPUT            COUNT(*)                                                                     
-------------------- ----------                                                                     
11-NOV-2015 17:57        100000                                                                     
Elapsed: 00:00:00.21
***************************************************************SIMILAR TIME!!!
udf does not help

so back to my original function I had issues with
a difference with the multiple vcs in func and my orig func is my orig had defaults
thus I will try a version with no defaults
TEXT_OUTPUT                      COUNT(*)                                                           
------------------------------ ----------                                                           
Ms Wyyyyyyyyy W Wyyyyyyyyy         100000                                                           
Elapsed: 00:00:00.19

TEXT_OUTPUT                      COUNT(*)                                                           
------------------------------ ----------                                                           
Ms Wyyyyyyyyy W Wyyyyyyyyy         100000                                                           
Elapsed: 00:00:00.08
****************************************************************UDF has an IMPACT

now with one of the parameters set to a default
TEXT_OUTPUT                      COUNT(*)                                                           
------------------------------ ----------                                                           
Ms Wyyyyyyyyy W Wyyyyyyyyy         100000                                                           
Elapsed: 00:00:00.32

TEXT_OUTPUT                      COUNT(*)                                                           
------------------------------ ----------                                                           
Ms Wyyyyyyyyy W Wyyyyyyyyy         100000                                                           
Elapsed: 00:00:00.32
****************************************************************UDF has NO IMPACT
****************************************************************ALSO defaults cause both versions to be slower

now call the simple disp_name_udf function that benefits standalone from within a package

standalone
TEXT_OUTPUT                      COUNT(*)                                                           
------------------------------ ----------                                                           
Ms Wyyyyyyyyy W Wyyyyyyyyy         100000                                                           
Elapsed: 00:00:00.08

within package
TEXT_OUTPUT                      COUNT(*)                                                           
------------------------------ ----------                                                           
Ms Wyyyyyyyyy W Wyyyyyyyyy         100000                                                           
Elapsed: 00:00:00.08
*********************************************** WORKS - so long as neither spec nor body has parameter defaults

and just to round off, using a subquery factored function which my prior tests showed reduced overhead
TEXT_OUTPUT                      COUNT(*)                                                           
------------------------------ ----------                                                           
Ms Wyyyyyyyyy W Wyyyyyyyyy         100000                                                           
Elapsed: 00:00:00.10

the WITH function benefits even with defaults
TEXT_OUTPUT                      COUNT(*)                                                           
------------------------------ ----------                                                           
Ms Wyyyyyyyyy W Wyyyyyyyyy         100000                                                           
Elapsed: 00:00:00.10

Creating the test table

drop table pers;
create table pers
(pers_id             number(8)    not null
,surname             varchar2(30) not null
,first_forename      varchar2(30) not null
,second_forename     varchar2(30)
,pers_title          varchar2(10)
,sex_ind             char(1)      not null
,dob                 date
,addr_id             number(8)
,pers_comment        varchar2(2000)
)
/
insert into pers
select rownum 
      ,rpad(chr(65+mod(rownum,24)),10,chr(65+mod(rownum,25))) 
      ,rpad(chr(65+mod(rownum,24)),10,chr(65+mod(rownum,25)))
      ,rpad(chr(65+mod(rownum,24)),10,chr(65+mod(rownum,25))) 
      ,decode(mod(rownum,4),0,'MR',1,'MRS',2,'Ms',3,'MR','DR')
      ,decode(mod(rownum,2),0,'M',1,'F')
      ,sysdate - (3000+mod(rownum,30000))
      ,rownum +1001
      ,rpad(chr(65+mod(rownum,24)),200,chr(65+mod(rownum,25))) 
from dual
connect by level < 100001
/

Creating the functions and a small package

--num_num
CREATE OR REPLACE FUNCTION normal_num_num(p_id IN NUMBER) RETURN NUMBER IS
v_num number;
BEGIN
  v_num:=p_id*p_id;
  RETURN v_num;
END;
/
CREATE OR REPLACE FUNCTION udf_num_num(p_id IN NUMBER) RETURN NUMBER IS
PRAGMA UDF;
v_num number;
BEGIN
  v_num:=p_id*p_id;
  RETURN v_num;
END;
/
--
-- vc_vc
CREATE OR REPLACE FUNCTION normal_vc_vc(p_id IN varchar2) RETURN varchar2 IS
v_vc varchar2(100);
BEGIN
  v_vc:=substr(p_id,1,5);
  RETURN v_vc;
END;
/
CREATE OR REPLACE FUNCTION udf_vc_vc(p_id IN varchar2) RETURN varchar2 IS
PRAGMA UDF;
v_vc varchar2(100);
BEGIN
  v_vc:=substr(p_id,1,5);
  RETURN v_vc;
END;
/
-- vc_vc_2
CREATE OR REPLACE FUNCTION normal_vc_vc_2(p_id1 IN varchar2,p_id2 IN varchar2) RETURN varchar2 IS
v_vc varchar2(100);
BEGIN
  v_vc:=substr(p_id1,1,5)||substr(p_id2,2,5);
  RETURN v_vc;
END;
/
CREATE OR REPLACE FUNCTION udf_vc_vc_2(p_id1 IN varchar2,p_id2 IN varchar2) RETURN varchar2 IS
PRAGMA UDF;
v_vc varchar2(100);
BEGIN
  v_vc:=substr(p_id1,1,5)||substr(p_id2,2,5);
  RETURN v_vc;
END;
/
--
-- dt_dt
CREATE OR REPLACE FUNCTION normal_dt_dt(p_id IN date) RETURN date IS
v_dt date;
BEGIN
  v_dt:=p_id+1000;
  RETURN v_dt;
END;
/
CREATE OR REPLACE FUNCTION udf_dt_dt(p_id IN date) RETURN date IS
PRAGMA UDF;
v_dt date;
BEGIN
  v_dt:=p_id-1000;
  RETURN v_dt;
END;
/
-- dt_num
CREATE OR REPLACE FUNCTION normal_dt_num(p_id IN date) RETURN number IS
v_num number;
BEGIN
  v_num:=to_char(p_id,'J')+100;
  RETURN v_num;
END;
/
CREATE OR REPLACE FUNCTION udf_dt_num(p_id IN date) RETURN number IS
PRAGMA UDF;
v_num number;
BEGIN
  v_num:=to_char(p_id,'J')-100;
  RETURN v_num;
END;
/
-- num_dt
CREATE OR REPLACE FUNCTION normal_num_dt(p_id IN number) RETURN DATE IS
v_dt date;
BEGIN
  v_dt:=sysdate+(p_id/100);
  RETURN v_dt;
END;
/
CREATE OR REPLACE FUNCTION udf_num_dt(p_id IN number) RETURN DATE IS
PRAGMA UDF;
v_dt date;
BEGIN
  v_dt:=sysdate-(p_id/100);
  RETURN v_dt;
END;
/
create or replace function normal_disp_name (p_sn      in varchar2
                       ,p_fn1     in varchar2
                       ,p_fn2     in varchar2  
                       ,p_title   in varchar2  ) return varchar2 is
v_return     varchar2(1000);
begin
  v_return := case when p_title is null then ''
                   else initcap(p_title)||' '
              end
            ||initcap(p_fn1)||' '
            ||case when p_fn2 is null then ''
                   else substr(p_fn2,1,1)||' '
              end
            ||initcap(p_sn);
return v_return;
end;
/
create or replace function udf_disp_name (p_sn      in varchar2
                       ,p_fn1     in varchar2
                       ,p_fn2     in varchar2  
                       ,p_title   in varchar2  ) return varchar2 is
PRAGMA UDF;
v_return     varchar2(1000);
begin
  v_return := case when p_title is null then ''
                   else initcap(p_title)||' '
              end
            ||initcap(p_fn1)||' '
            ||case when p_fn2 is null then ''
                   else substr(p_fn2,1,1)||' '
              end
            ||initcap(p_sn);
return v_return;
end;
/
create or replace function normal_disp_name_defaults (p_sn      in varchar2
                       ,p_fn1     in varchar2
                       ,p_fn2     in varchar2  
                       ,p_title   in varchar2 :=null  ) return varchar2 is
v_return     varchar2(1000);
begin
  v_return := case when p_title is null then ''
                   else initcap(p_title)||' '
              end
            ||initcap(p_fn1)||' '
            ||case when p_fn2 is null then ''
                   else substr(p_fn2,1,1)||' '
              end
            ||initcap(p_sn);
return v_return;
end;
/
create or replace function udf_disp_name_defaults (p_sn      in varchar2
                       ,p_fn1     in varchar2
                       ,p_fn2     in varchar2  
                       ,p_title   in varchar2 :=null ) return varchar2 is
PRAGMA UDF;
v_return     varchar2(1000);
begin
  v_return := case when p_title is null then ''
                   else initcap(p_title)||' '
              end
            ||initcap(p_fn1)||' '
            ||case when p_fn2 is null then ''
                   else substr(p_fn2,1,1)||' '
              end
            ||initcap(p_sn);
return v_return;
end;
/
create or replace package t_pkg as
function udf_disp_name (p_sn      in varchar2
                       ,p_fn1     in varchar2
                       ,p_fn2     in varchar2  
                       ,p_title   in varchar2 ) return varchar2;
end t_pkg;
/
create or replace package body t_pkg as
function udf_disp_name (p_sn      in varchar2
                       ,p_fn1     in varchar2
                       ,p_fn2     in varchar2  
                       ,p_title   in varchar2 ) return varchar2 is
PRAGMA UDF;
v_return     varchar2(1000);
begin
  v_return := case when p_title is null then ''
                   else initcap(p_title)||' '
              end
            ||initcap(p_fn1)||' '
            ||case when p_fn2 is null then ''
                   else substr(p_fn2,1,1)||' '
              end
            ||initcap(p_sn);
return v_return;
end;
end t_pkg;
/

The test script

-- udf_tests
set lines 100 pages 50
set feed off
col text_output     form a30
col number_output   form 99999999999
col date_output     form a20
spool udf_tests.lst
prompt  running udf_tests
prompt
set pause off
set autotrace off
set timi on
prompt investigating why pragma udf helps some simple functions and not others
prompt ----------------------------------------------------------------------- 
--
prompt
--
prompt simple number in-number out function
select /* mdw_16a */       max(normal_num_num(pers_id)) number_output
       ,count(*) from pers
/
select /* mdw_16b */        max(udf_num_num(pers_id)) number_output
       ,count(*)  from pers
/
prompt
prompt ** udf helps
prompt
--
prompt simple varchar in-varchar out function
select /* mdw_16c */       max(normal_vc_vc(surname)) text_output
       ,count(*) from pers
/
select /* mdw_16d */       max(udf_vc_vc(surname)) text_output
       ,count(*)  from pers
/
prompt
prompt ** udf helps
prompt
--
--
prompt  two varchar in-varchar out function, is the issue with more than one in parameter?
select /* mdw_16e */       max(normal_vc_vc_2(surname,first_forename)) text_output
       ,count(*) from pers
/
select /* mdw_16f */       max(udf_vc_vc_2(surname,first_forename)) text_output
       ,count(*)  from pers
/
prompt
prompt ** udf helps
prompt
--
prompt simple date in-date out function
select /* mdw_16g */       max(normal_dt_dt(DOB)) date_output
       ,count(*) from pers
/
select /* mdw_16h */       max(udf_dt_dt(DOB)) date_output
       ,count(*)  from pers
/
prompt ***************************************************************SIMILAR TIME!!!
prompt udf does not help
prompt

--
prompt is date out the issue,  date in-num out function
select /* mdw_16i */       max(normal_dt_num(DOB)) number_output
       ,count(*) from pers
/
select /* mdw_16j */       max(udf_dt_num(DOB)) number_output
       ,count(*)  from pers
/
prompt ***************************************************************SIMILAR TIME!!!
prompt udf does not help

--
prompt is date in the issue,  num in-date out function
select /* mdw_16k */       max(normal_num_dt(pers_id)) date_output
       ,count(*) from pers
/
select /* mdw_16l */       max(udf_num_dt(pers_id)) date_output
       ,count(*)  from pers
/
prompt ***************************************************************SIMILAR TIME!!!
prompt udf does not help
--
--
prompt
prompt so back to my original function I had issues with
prompt a difference with the multiple vcs in func and my orig func is my orig had defaults
prompt thus I will try a version with no defaults
prompt 
select /* mdw_16m */
        max(normal_disp_name(surname,first_forename,second_forename,pers_title)) text_output
       ,count(*) from pers
/
select /* mdw_16n */
        max(udf_disp_name(surname,first_forename,second_forename,pers_title)) text_output
       ,count(*)  from pers
/
prompt ****************************************************************UDF has an IMPACT
prompt
prompt
prompt now with one of the parameters set to a default 
select /* mdw_16o */
        max(normal_disp_name_defaults(surname,first_forename,second_forename,pers_title)) text_output
       ,count(*) from pers
/
select /* mdw_16p */
        max(udf_disp_name_defaults(surname,first_forename,second_forename,pers_title)) text_output
       ,count(*)  from pers
/
prompt ****************************************************************UDF has NO IMPACT
prompt ****************************************************************ALSO defaults cause both versions to be slower
prompt 
prompt now call the simple disp_name_udf function that benefits standalone from within a package
prompt
prompt standalone
select /* mdw_16q */
        max(udf_disp_name(surname,first_forename,second_forename,pers_title)) text_output
       ,count(*)  from pers
/
prompt
prompt within package
select /* mdw_16r */
        max(t_pkg.udf_disp_name(surname,first_forename,second_forename,pers_title)) text_output
       ,count(*)  from pers
/
prompt *********************************************** WORKS - so long as neither spec nor body has parameter defaults
prompt
prompt and just to round off, using a subquery factored function which my prior tests showed reduced overhead
with 
  function l_disp_name(p_sn      in varchar2
                      ,p_fn1     in varchar2
                      ,p_fn2     in varchar2  
                      ,p_title   in varchar2 )
return varchar2
is
v_return     varchar2(1000);
begin
  v_return := case when p_title is null then ''
                   else initcap(p_title)||' '
              end
            ||initcap(p_fn1)||' '
            ||case when p_fn2 is null then ''
                   else substr(p_fn2,1,1)||' '
              end
            ||initcap(p_sn);
return v_return;
end l_disp_name;
select /*mdw_16s */
        max(l_disp_name(p_sn =>surname           ,p_fn1  =>first_forename
                       ,p_fn2=>second_forename   ,p_title=>pers_title)        ) text_output
       ,count(*)
from pers
/

prompt the WITH function benefits even with defaults
with 
  function l_disp_name(p_sn      in varchar2
                      ,p_fn1     in varchar2
                      ,p_fn2     in varchar2 :=null  
                      ,p_title   in varchar2 :=null )
return varchar2
is
v_return     varchar2(1000);
begin
  v_return := case when p_title is null then ''
                   else initcap(p_title)||' '
              end
            ||initcap(p_fn1)||' '
            ||case when p_fn2 is null then ''
                   else substr(p_fn2,1,1)||' '
              end
            ||initcap(p_sn);
return v_return;
end l_disp_name;
select /*mdw_16t */
        max(l_disp_name(p_sn =>surname           ,p_fn1  =>first_forename
                       ,p_fn2=>second_forename   ,p_title=>pers_title)        ) text_output
       ,count(*)
from pers
/
--
spool off

Pragma UDF – Speeding Up your PL/SQL Functions Called From SQL November 4, 2015

Posted by mwidlake in performance, PL/SQL, SQL.

A new feature for PL/SQL was introduced in V12, pragma UDF. UDF stands for User Defined Functions. It can speed up any SQL you have that uses PL/SQL functions you created yourself.

{please see this second post on some limitations of pragma UDF in respect of IN & RETURN data types and parameter defaults}.

We can create our own functions in PL/SQL and they can be called from both PL/SQL and SQL. This has been possible since V7.3 and is used extensively by some sites to extend the capabilities of the database and encapsulate business logic.

A problem with this, though, is that every time you swap from SQL to PL/SQL (or the other way around) you have to do a context switch, which can be quite CPU and memory intensive. If you are using your own PL/SQL function in the SELECT list of a SQL statement and you are selecting a lot of rows (say as part of a business report) then the overhead can be quite considerable, as you could be doing a context switch per row. I won’t go into too much detail here (partly as I go into considerable detail on the subject in a book I am working on for 2016) on how you can investigate the context switching and when exactly it occurs, but I will show you one of the two new ways in Oracle 12 to reduce the overhead, namely PRAGMA UDF. At present this seems to be a little-used and rarely-mentioned feature on the blogosphere, with articles just covering simple, almost business-function-free numeric functions.
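As a very rough first look at how much of a statement’s time goes on PL/SQL rather than SQL, you can glance at V$SQL. This is just a minimal sketch of the sort of query I mean (not from my tests; the SQL_TEXT filter is purely illustrative):

select sql_id
      ,executions
      ,cpu_time          -- total CPU time, in microseconds
      ,plsql_exec_time   -- time spent in the PL/SQL engine, in microseconds
from   v$sql
where  sql_text like '%disp_name%'
and    sql_text not like '%v$sql%'
/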

I’ll give you a slightly less simple example but my next post will give you details on some limitations of pragma UDF. Here I am just setting the scene. I have the below PERSON table which has the parts of the names in distinct columns, with the contents forced to upper case (as is standard practice). We will create a function to provide a nicely init-capped and spaced display name and a second function which is identical but uses PRAGMA UDF.

PERSON
Name                                     Null?    Type
---------------------------------------- -------- ---------------
PERS_ID                                  NOT NULL NUMBER(8)
SURNAME                                  NOT NULL VARCHAR2(30)
FIRST_FORENAME                           NOT NULL VARCHAR2(30)
SECOND_FORENAME                                   VARCHAR2(30)
PERS_TITLE                                        VARCHAR2(10)
SEX_IND                                  NOT NULL CHAR(1)
DOB                                               DATE
ADDR_ID                                           NUMBER(8)
STAFF_IND                                         CHAR(1)
LAST_CONTACT_ID                                   NUMBER(8)
PERS_COMMENT                                      VARCHAR2(2000)


create or replace function normal_disp_name (p_sn      in varchar2
                                            ,p_fn1     in varchar2
                                            ,p_fn2     in varchar2  
                                            ,p_title   in varchar2  ) return varchar2 is
v_return     varchar2(1000);
begin
  v_return := case when p_title is null then ''
                   else initcap(p_title)||' '
              end
            ||initcap(p_fn1)||' '
            ||case when p_fn2 is null then ''
                   else substr(p_fn2,1,1)||' '
              end
            ||initcap(p_sn);
return v_return;
end;
/
create or replace function udf_disp_name (p_sn      in varchar2
                                         ,p_fn1     in varchar2
                                         ,p_fn2     in varchar2  
                                         ,p_title   in varchar2  ) return varchar2 is
-- The Below is the KEY bit
PRAGMA UDF;
v_return     varchar2(1000);
-- {Identical from here}

-- select some data with one of the functions, it does not matter which
select pers_title title,    first_forename    ,second_forename    , surname
      ,normal_disp_name(p_sn =>surname           ,p_fn1  =>first_forename
                       ,p_fn2=>second_forename   ,p_title=>pers_title) display_name
from person
...


TITLE   first_fn   secon_fn   SURNAME         DISPLAY_NAME
------- ---------- ---------- --------------- ----------------------------
MR      HARRISON   RICHARD    HARRIS          Mr Harrison R Harris
MRS     ANNEKA     RACHAEL    HARRIS          Mrs Anneka R Harris
MRS     NICKIE     ALISON     ELWIG           Mrs Nickie A Elwig
MASTER  JAMES      DENZIL     ELWIG           Master James D Elwig
MR      JEFF                  GARCIA          Mr Jeff Garcia
...
MRS     AMELIA     MARIA      ORPINGTON-SMYTH Mrs Amelia M Orpington-Smyth

So we have our test table, you can see my normal_disp_name function, and the *only* difference with the second version is the inclusion of PRAGMA UDF in the declaration section. That is partly why it is such a nice feature: you can just add this one line to existing code and you should get the benefit. Should….. {see the second post for when I do it}

Finally, I show some code using the function and the output.

To demonstrate the impact of context switching I will select 100,000 records from my test table in 3 ways: using only native SQL functions and thus no context switching; using my traditional PL/SQL function which suffers from context switching; with my new “pragma UDF” function to reduce the overhead of the context switching.

select avg(length(
      case when pers_title is null then ''
                   else initcap(pers_title)||' '
              end
            ||initcap(first_forename)||' '
            ||case when second_forename is null then ''
                   else substr(second_forename,1,1)||' '
              end
            ||initcap(surname)
          )      )  avg_name_length
       ,count(*)
from person
where pers_id > 100100
and rownum < 100000

select  avg(length(normal_disp_name(p_sn =>surname           ,p_fn1  =>first_forename
                                   ,p_fn2=>second_forename   ,p_title=>pers_title)        ) ) disp_name_len
       ,avg(addr_id)
       ,count(*)
from person
where pers_id > 100100 and rownum < 100000

select  avg(length(udf_disp_name(p_sn =>surname           ,p_fn1  =>first_forename
                                ,p_fn2=>second_forename   ,p_title=>pers_title)        ) ) disp_name_len
       ,avg(addr_id)
       ,count(*)
from person
where pers_id > 100100 and rownum < 100000

One thing to mention in passing is that the code using either function is much easier to read and self-documenting. This is one of the benefits of proceduralising your code, as well as creating just one place to maintain it. If I had to re-use that native-SQL section in a half-dozen reports I would probably mess up at least one of the times I cut-and-paste it and I would now have several places to maintain that code.

I ran the test statements several times and took the average of the 2nd to 6th runs, to remove the initial parsing & caching overhead that comes with the first execution and to get more reliable figures than one further run would give me.

Version                      Run Time average (secs)
Native SQL                   0.03
Traditional PL/SQL           0.33
PRAGMA UDF PL/SQL            0.08

As you can see, just including PRAGMA UDF removed most of the overhead caused by context switching.

How does PRAGMA UDF work? I’m not sure; the official Oracle documentation is pretty light on it and just says:

“The UDF pragma tells the compiler that the PL/SQL unit is a user defined function that is used primarily in SQL statements, which might improve its performance.”

Note the emphasis (which is mine) on “might improve its performance” – but no detail as to what it does. As I understand it, it alters the internal representation of data as it is passed between the SQL and PL/SQL engines via the IN and RETURN values (note it does not change the data types!) – but treat that as a bit of wild speculation for now. I have some evidence for it that you will see in, yes, the next post.

Pragma UDF can slightly slow down functions that are called directly from PL/SQL. So use it only for functions you know will be called from SQL.
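If you want to check that claim against your own functions, one simple approach is to call each version a large number of times from an anonymous PL/SQL block and compare the timings. The below is only a rough sketch of such a harness (my own, not part of the original tests); it assumes the normal_disp_name and udf_disp_name functions created above, and uses DBMS_UTILITY.GET_TIME, which returns hundredths of a second:

set serveroutput on
declare
  v_start  pls_integer;
  v_dummy  varchar2(1000);
begin
  -- time the traditional function called purely from PL/SQL
  v_start := dbms_utility.get_time;
  for i in 1..1000000 loop
    v_dummy := normal_disp_name('SMITH','JOHN','JAMES','MR');
  end loop;
  dbms_output.put_line('normal plsql calls '||(dbms_utility.get_time - v_start)||' cs');
  -- and now the pragma UDF version, called the same way
  v_start := dbms_utility.get_time;
  for i in 1..1000000 loop
    v_dummy := udf_disp_name('SMITH','JOHN','JAMES','MR');
  end loop;
  dbms_output.put_line('udf    plsql calls '||(dbms_utility.get_time - v_start)||' cs');
end;
/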

I’ll make one other observation. Using PL/SQL functions increased the run time to process 100,000 records on my modest test system by all of 0.3 seconds. But that is 10 times the time taken for the native SQL statement. Pragma UDF removes around 80% of this overhead. It’s a nice saving but is probably inconsequential if your code is actually doing any physical IO at all (my example is processing already cached blocks). And if you are only processing a few records or one record in a GUI screen, the context switching is moot {meaning, of no significance}.

But if you have code that processes a huge set of data and uses a lot of user defined PL/SQL functions (and again I go into a lot more detail about this in the book) using pragma UDF in 12C could gain you quite a bit of extra performance. If you have code where even 0.00001 seconds is important (think trading systems) then again there may be a worthwhile benefit.

PL/SQL bug with DBMS_RANDOM? October 8, 2015

Posted by mwidlake in bug, PL/SQL, SQL.

I think I’ve found an (admittedly obscure) bug with DBMS_RANDOM, group functions, PL/SQL and/or SQL.

Have a look and see if you also think this is odd – or have I missed the totally obvious?

(This is all on 12.1.0.2)

{Update – my conclusion is, and thanks to Joel and Sayan for their comments, that this is not a “bug”. Oracle do not promise us how PL/SQL functions are executed due to the way SQL can be re-written by the parser. I think most of us stumbling over something like this would treat it as a bug though. You have to look at the column projection, again see the comments, to see how Oracle is deciding to get the columns derived by a naked call to DBMS_RANDOM.VALUE (by naked I mean no inclusion of parameters passed in and, significantly, no reference to columns). It’s just the way it is}
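If you want to see the column projection for yourself, DBMS_XPLAN will show it. A minimal sketch of how you might look at it (not part of my original tests): run the statement of interest and then, in the same session and with SET SERVEROUTPUT OFF so the last cursor really is your statement, ask for the projection section of the plan:

set serveroutput off
select /* mdw_z1 */ avg(dbms_random.value) avg_dbrav1,avg(dbms_random.value) avg_dbrav2
      ,max(created) max_cre
from all_objects
where rownum <50000
/
select * from table(dbms_xplan.display_cursor(format => 'BASIC +PROJECTION'))
/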

Without going into the details (we would be here for hours if I did) I’m looking into the overhead of context switching between PL/SQL and SQL. It is fairly common knowledge that when you call a PL/SQL function from SQL there is a context switch when the SQL engine hands over control to the PL/SQL engine. I’ve been doing some work into how large the overhead is and how it is incurred for each distinct PL/SQL function (plus loads of other considerations around it).

In doing so I saw something unexpected (to me, anyway) which I have simplified down to this:

select /* mdw_z1 */ avg(dbms_random.value) avg_dbrav1,avg(dbms_random.value) avg_dbrav2
      ,max(created) max_cre
from all_objects
where rownum <50000

AVG_DBRAV1 AVG_DBRAV2 MAX_CRE
---------- ---------- -----------------
.502234063 .502234063 11-SEP-2014 09:31
Elapsed: 00:00:00.89

Note that the two averages of DBMS_RANDOM.VALUE are exactly the same. It is so improbable as to be pretty much impossible that, over all those rows, the different random values generated for each column add up to exactly the same. They are getting the same values for each row. I’m very, very dubious of any “identical seeding” issue (i.e. they both get the same seed and from then on provide identical values) as, even if DBMS_RANDOM is basing its output on something like the initial seed, the SCN of the statement and the number of iterations, it is still being referenced twice for each row.
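For what it is worth, DBMS_RANDOM can be explicitly seeded, and a given seed then produces a repeatable sequence of values – which is what an “identical seeding” explanation would rely on. A tiny sketch (mine, not part of the original tests):

-- with a fixed seed the sequence is repeatable: re-running these two
-- statements gives the same values back again
exec dbms_random.seed(42)
select dbms_random.value v1, dbms_random.value v2 from dual
/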

Some further evidence is that when I increase the number of calls to DBMS_RANDOM the elapsed time is almost identical and the statement CPU and PLSQL_EXEC_TIME (pulled from V$SQL) do not increase in any significant way (PLSQL_EXEC_TIME actually goes down):

select /* mdw_z2 */ avg(dbms_random.value) avg_dbrav1,avg(dbms_random.value) avg_dbrav2
       ,avg(dbms_random.value) avg_dbrav3,avg(dbms_random.value) avg_dbrav4
       ,avg(dbms_random.value) avg_dbrav5,avg(dbms_random.value) avg_dbrav6
      ,max(created) max_cre
from all_objects
where rownum <50000

AVG_DBRAV1 AVG_DBRAV2 AVG_DBRAV3 AVG_DBRAV4 AVG_DBRAV5 AVG_DBRAV6 MAX_CRE
---------- ---------- ---------- ---------- ---------- ---------- -----------------
.500367568 .500367568 .500367568 .500367568 .500367568 .500367568 11-SEP-2014 09:31
Elapsed: 00:00:00.84 -- (cf .89 before)

-- information about the two SQL statements from v$SQL, identified by my comments
SQL_ID        PLAN_HASHV  PRSE  EXCS     BUFFS DISCS     CPU_T PLSQL_EXEC    RWS
------------- ---------- ----- ----- --------- ----- --------- ---------- ------
SQL_TXT
--------------------------------------------------------------------------------
4y4jsj7uy12t3 1587414607     1     1     41481     0    843750     312691      1
select /* mdw_z2 */ avg(dbms_random.value) avg_dbrav1,avg(dbms_random.value) avg
gpdga3qars8p2 1587414607     1     1     41481     0    828125     316648      1
select /* mdw_z1 */ avg(dbms_random.value) avg_dbrav1,avg(dbms_random.value) avg

I’ve verified that adding other PL/SQL calls adds about 0.4 seconds to the execution time per extra (simple) PL/SQL function and you see the CPU Time and PLSQL_EXEC_TIME increase.

It is as if Oracle realises the call to DBMS_RANDOM is the same and so does it once per row and puts the result into each column. It does this for other, deterministic PL/SQL functions, but I would (a) be worried if DBMS_RANDOM was being treated as deterministic (as its purpose is to NOT be deterministic) and (b) I know I have used DBMS_RANDOM to generate test data many times, often for several columns in my generating SQL SELECT statement, and I never noticed this before. So, I decided to check that it was only returning 1 value per row to populate the results for each column derived from DBMS_RANDOM.VALUE:


select dbms_random.value drv1, dbms_random.value drv2,created
from all_objects where rownum < 6

      DRV1       DRV2 CREATED
---------- ---------- -----------------
.341919389 .020497964 11-SEP-2014 08:40
.569447631 .727672346 11-SEP-2014 08:40
.019986319 .709239586 11-SEP-2014 08:40
.286970852 .144263004 11-SEP-2014 08:40
 .14440676 .538196808 11-SEP-2014 08:40

Huh? That rather damages my theory and actually works the way I expected, hoped for and thought I remembered. And you can just look at the two columns and you know they are not going to add up to exactly the same! (the last digit adds up to 21 for DRV1 , 28 for DRV2) {How many of you checked that and got 27 for DRV1?}.

So I wrote some code that should be logically identical to my original SQL statement but does the data collection “manually”:

with source as
(select /*+ materialize */
        dbms_random.value  dbrav1    ,dbms_random.value dbrav2
       ,created
from all_objects
where rownum <50000
)
select/* mdw_z3 */ avg(dbrav1)  avg_dbrav1     ,avg(dbrav2)  avg_dbrav2
      ,max(created)  max_cre
from source

AVG_DBRAV1 AVG_DBRAV2 MAX_CRE
---------- ---------- -----------------
.497954489 .497633494 11-SEP-2014 09:31

Elapsed: 00:00:00.96

As you can see, all I do is force the data to be collected into an internal temporary table using the WITH clause, hint it to stop Oracle merging the code together, and then average the columns. And now I get two different values for the two DBMS_RANDOM.VALUE derived columns.

This version of the code also accrues more runtime and statement CPU/PLSQL_EXEC_TIME, as I mentioned above, when I add more PL/SQL calls. In the below, the extended-list-of-columns version takes 1.78 seconds, CPU time increases from 843,750 microseconds to 1,703,125 and PLSQL_EXEC_TIME increases from 316,589 microseconds to 917,208.

with source /*2 */ as
(select /*+ materialize */
        dbms_random.value  dbrav1    ,dbms_random.value dbrav2
       ,dbms_random.value  dbrav3    ,dbms_random.value dbrav4
       ,dbms_random.value  dbrav5    ,dbms_random.value dbrav6
       ,created
from all_objects
where rownum <50000
)
select/* mdw_z4 */ avg(dbrav1)  avg_dbrav1     ,avg(dbrav2)  avg_dbrav2
      ,avg(dbrav3)  avg_dbrav3     ,avg(dbrav4)  avg_dbrav4
      ,avg(dbrav5)  avg_dbrav5     ,avg(dbrav6)  avg_dbrav6
      ,max(created)  max_cre
from source

AVG_DBRAV1 AVG_DBRAV2 AVG_DBRAV3 AVG_DBRAV4 AVG_DBRAV5 AVG_DBRAV6 MAX_CRE
---------- ---------- ---------- ---------- ---------- ---------- -----------------
.499007362 .501985119 .498591643  .50252316  .49939127  .49804233 11-SEP-2014 09:31

Elapsed: 00:00:01.78

--
--
SQL_ID        PLAN_HASHV  PRSE  EXCS     BUFFS DISCS     CPU_T PLSQL_EXEC    RWS
------------- ---------- ----- ----- --------- ----- --------- ---------- ------
SQL_TXT
--------------------------------------------------------------------------------
49zjaaj41dg00 3034557986     1     1     43465   962   1703125     917208      1
with source /*2 */ as (select /*+ materialize */         dbms_random.value  dbra
0rtbx97f14b0k 3034557986     1     1     42294   382    843750     316586      1
with source as (select /*+ materialize */         dbms_random.value  dbrav1    ,

I did have the execution plans in here too but the post is already quite long. They are identical though, as is shown by the same value of 3034557986 for the PLAN_HASH_VALUE of both statements.

So, in summary, the below two versions of the (logically identical, as far as I can see) code give different results. The difference is that in one I am allowing Oracle to do the averaging natively and in the other I am forcing the data to be collected into an internal temporary table and then averaged:

select /* mdw_z1 */ avg(dbms_random.value) avg_dbrav1,avg(dbms_random.value) avg_dbrav2
      ,max(created) max_cre
from all_objects
where rownum <50000

AVG_DBRAV1 AVG_DBRAV2 MAX_CRE
---------- ---------- -----------------
.502234063 .502234063 11-SEP-2014 09:31

with source as
(select /*+ materialize */
        dbms_random.value  dbrav1    ,dbms_random.value dbrav2
       ,created
from all_objects
where rownum <50000
)
select/* mdw_z3 */ avg(dbrav1)  avg_dbrav1     ,avg(dbrav2)  avg_dbrav2
      ,max(created)  max_cre
from source

AVG_DBRAV1 AVG_DBRAV2 MAX_CRE
---------- ---------- -----------------
.497954489 .497633494 11-SEP-2014 09:31

If no one can explain what I am missing, I suppose I should raise a bug with Oracle. Which could be tricky seeing as my access to MyOracleSupport is a bit tenuous…

In case you want to play, this is my whole test script, which does everything but query V$SQL for the statement stats at the end. I am sure you can manage that yourselves…
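For completeness, here is a sketch of the sort of V$SQL query that would do it (mine, not from the original script; the column aliases are chosen to match the output shown above and the filter on my mdw_z comments just picks out the test statements):

col sql_txt form a80
select sql_id
      ,plan_hash_value  plan_hashv
      ,parse_calls      prse
      ,executions       excs
      ,buffer_gets      buffs
      ,disk_reads       discs
      ,cpu_time         cpu_t
      ,plsql_exec_time  plsql_exec
      ,rows_processed   rws
      ,sql_text         sql_txt
from   v$sql
where  sql_text like '%mdw_z%'
and    sql_text not like '%v$sql%'
order by first_load_time
/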

-- the two columns from dbms_random get the same result - which I did not expect
select /* mdw_z1 */ avg(dbms_random.value) avg_dbrav1,avg(dbms_random.value) avg_dbrav2
      ,max(created) max_cre
from all_objects
where rownum <50000
/

select /* mdw_z2 */ avg(dbms_random.value) avg_dbrav1,avg(dbms_random.value) avg_dbrav2
       ,avg(dbms_random.value) avg_dbrav3,avg(dbms_random.value) avg_dbrav4
       ,avg(dbms_random.value) avg_dbrav5,avg(dbms_random.value) avg_dbrav6
      ,max(created) max_cre
from all_objects
where rownum <50000
/
--
-- The below forces oracle to gather the data into an internal temporary table first.
-- and then I average that
with source as
(select /*+ materialize */
        dbms_random.value  dbrav1    ,dbms_random.value dbrav2
       ,created
from all_objects
where rownum <50000
)
select/* mdw_z3 */ avg(dbrav1)  avg_dbrav1     ,avg(dbrav2)  avg_dbrav2
      ,max(created)  max_cre
from source
/
-- and now I make it do more executions against dbms_random, to see if it makes a difference
-- which will lend support to my idea it is doing more context switching.
with source /*2 */ as
(select /*+ materialize */
        dbms_random.value  dbrav1    ,dbms_random.value dbrav2
       ,dbms_random.value  dbrav3    ,dbms_random.value dbrav4
       ,dbms_random.value  dbrav5    ,dbms_random.value dbrav6
       ,created
from all_objects
where rownum <50000
)
select/* mdw_z4 */ avg(dbrav1)  avg_dbrav1     ,avg(dbrav2)  avg_dbrav2
      ,avg(dbrav3)  avg_dbrav3     ,avg(dbrav4)  avg_dbrav4
      ,avg(dbrav5)  avg_dbrav5     ,avg(dbrav6)  avg_dbrav6
      ,max(created)  max_cre
from source
/

Computers are Logical. Software is Not July 3, 2015

Posted by mwidlake in development, Friday Philosophy, future.

We’ve all heard it before. Computers are totally logical, they do exactly what they are told. After all, Central Processing Units (CPUs) are built out of fundamental units called Logic Gates. With perhaps the exception of when a stray cosmic ray gets lucky, the circuits in a computer chip and memory act in a totally logical and predictable manner.

And of course, anything built on top of computers will be utterly logical as well. All those robots that companies are designing & building to clean our houses, do our manual labour and fight our wars are going to be logical, follow the rules given and be sensible.

But they are not. Because software is not logical. Often, it is infuriatingly illogical and confusing. Which makes you worry about the “domestic servant” robots that companies are developing, the planned “disaster scene recovery” robots they keep telling us are coming and especially the “Killer Robots” – sorry, “Defense Robots” – that the military are beavering away at.

This XKCD cartoon very much reflects some recent experiences I have had with consumer software:

XKCD – Haunted Computer

I’d say that, unless an algorithm is about as simple as a Centigrade-to-Fahrenheit conversion program, it will have a bug or will mess up with out-of-range values. Just think back to when you wrote your Centigrade-to-Fahrenheit program (we all have, haven’t we?) back at school, on your home computer or in your first week on the college course. What happened if you input a temperature of -1000C, an impossible temperature? I bet it either fell over or gave a just-as-impossible Fahrenheit value. Logical but stupid.

I worked on a financial system a few years back that, as one very small but significant part of what it did, showed you your average spend on things over 3 years. It took several weeks to explain to the program manager and his minions that their averaging code was wrong. Utterly, hopelessly and tragically wrong. First, it calculated and displayed the value to several decimal places – to thousandths of a penny. Secondly, it did not take into account the actual period over which you had spent your money. If you had opened your account 1 year ago, it still calculated the value over 3 years. As for taking into account months, weeks and days of the year, don’t make me laugh. You might be able to forgive this except that the same team had also written the code to archive off data once it was 3 years old – in whole years. So there would only be between 2 and 3 years of data, and exactly 3 whole years for, theoretically, only 1 day. But no, they had hard-coded the “divide by 3 years”.

We have all experienced endless issues with computers or peripherals that will work one day, not work properly the next and then go back to working. Firmware and Operating Systems are just software really, with the same flaws as the stuff we write and fix in our working lives day after day. There will be a twisted reason buried deep somewhere why the printer will not work on Thursdays, but it won’t be a sensible reason.

All the software out there is more or less illogical and broken. The less broken gets used and we learn its idiocies. The worst gets canned or labelled “Windows 8” and forced on us.

Crazy (but logical) Killer Robot

I know some people worry about the inexorable rise of the machines, Terminator Style maybe, or perhaps benign but a lot smarter than us (as they are logical and compute really, really fast) and we become their pets. But I am not concerned. The idiot humans who write the software will mess it up massively. Oh, some of these things will do terrible harm but they will not take over – they will run out of bullets or power or stop working on Thursday. Not until we can build the first computer that is smart enough to write sensible software itself and immediately replaces itself with something that CAN write a Centigrade-to-Fahrenheit conversion program that does not mess up. It will then start coding like a human developer with 1 night to get the system live, a stack of angry managers and an endless supply of Jack Daniels & coffee – only with no errors. With luck it will very soon write the perfect computer game and distract itself long enough for us to turn the damned thing off.

Exclusion of Unioned SQL in Views? August 16, 2012

Posted by mwidlake in performance, SQL.

Question – you have a view definition of the following format:

select 1 as d_type, col_a,col_b,col_c
from TAB_X, TAB_Y, TAB_Z
where {your predicates}
UNION
select 2 as d_type, col_a,col_b,col_c
from TAB_P, TAB_Q, TAB_R
where {your predicates}
UNION
select 3 as d_type, col_a,col_b,col_c
from TAB_X X, TAB_Y, TAB_Z
where {your predicates}

You now select from the view and your code includes a filter predicate such as:

“WHERE D_TYPE = 1”

What will the CBO do? Is it intelligent enough to do what a human would do and exclude the parts of the query that could not possibly return the data you want? I.e. exclude the SQL statements where d_type is set to 2 or 3. Or is that functionality not built into the CBO? After all, it is only software and someone would have to decide that there could be benefit in looking into a set of unioned SQL selects to see if any could be excluded, and then code that into the CBO.

After all, the CBO is not smart enough to realise it could easily satisfy “select min(col_a), max(col_a) from tab x” via two fast lookups on a supporting index. It scans the index.
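As an aside, the usual workaround for that one is to give the CBO each aggregate on its own, so it can use its MIN/MAX index access path. A sketch, with purely illustrative table and column names:

-- one statement with both aggregates: typically a scan of the whole index
select min(col_a), max(col_a) from tab_x;

-- each aggregate in its own scalar subquery: typically two
-- INDEX FULL SCAN (MIN/MAX) probes instead
select (select min(col_a) from tab_x) min_col_a
      ,(select max(col_a) from tab_x) max_col_a
from dual;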

So – what is your guess?

I asked 3 people. Not a massive straw poll, but I got one vote for "it will not exclude", one for "it will" and one smart answer of "I would expect it to exclude – but as you asked the question I am thinking it does not".

Let’s do the test.

DB_TEST> @demo_union_view_2
DB_TEST> drop table mdw_source purge;
Table dropped.

  1  create table mdw_source
  2  tablespace users
  3  as select * from dba_objects
  4  /
Table created.

  1  create index mdso_idx on mdw_source(owner,object_name)
  2  tablespace users
  3  /
Index created.

DB_TEST> exec dbms_stats.gather_table_stats(ownname=>USER,tabname=>'MDW_SOURCE',method_opt=>'for all columns size 1')
PL/SQL procedure successfully completed.

  1  select table_name,num_rows from dba_tab_statistics
  2  where owner=USER and table_name='MDW_SOURCE';

TABLE_NAME      NUM_ROWS
--------------- ----------
MDW_SOURCE      55895

  1  create or replace view mdw_v1 as
  2  select
  3    '1'              src_type
  4   ,ms.OWNER
  5   ,ms.OBJECT_NAME
  6   ,ms.SUBOBJECT_NAME
  7   ,ms.CREATED
  8  from mdw_source ms
  9  where ms.object_name like 'A%'
 10  union all
 11  select
 12    '2'              src_type
 13   ,ms.OWNER
 14   ,ms.OBJECT_NAME
 15   ,ms.SUBOBJECT_NAME
 16   ,ms.CREATED
 17  from mdw_source ms
 18  where ms.object_name like 'B%'
 19  UNION ALL
 20  select
 21    '3'              src_type
 22   ,ms.OWNER
 23   ,ms.OBJECT_NAME
 24   ,ms.SUBOBJECT_NAME
 25   ,ms.CREATED
 26  from mdw_source ms
 27  where ms.object_name like 'C%'
 28  /
View created.

So, a simple table is created from DBA_OBJECTS and a view over the top of the table consisting of 3 simple selects unioned together, with a static value (1, 2 or 3) generated for each of the 3 parts as SRC_TYPE.

Now let’s run some tests:

DB_TEST> set autotrace on
DB_TEST> --
DB_TEST> -- below will do a select of a non-indexed column across the view
DB_TEST> select count(created) from mdw_v1
  2  /

COUNT(CREATED)
--------------
          1284

Execution Plan
----------------------------------------------------------
Plan hash value: 4259489107
-----------------------------------------------------------------------------------
| Id  | Operation            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |            |     1 |     9 |   551   (1)| 00:00:07 |
|   1 |  SORT AGGREGATE      |            |     1 |     9 |            |          |
|   2 |   VIEW               | MDW_V1     |  2463 | 22167 |   551   (1)| 00:00:07 |
|   3 |    UNION-ALL         |            |       |       |            |          |
|*  4 |     TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
|*  5 |     TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
|*  6 |     TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("MS"."OBJECT_NAME" LIKE 'A%')
   5 - filter("MS"."OBJECT_NAME" LIKE 'B%')
   6 - filter("MS"."OBJECT_NAME" LIKE 'C%')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
       2417  consistent gets


DB_TEST> -- below will do a select of a non-indexed column across the view
DB_TEST> -- limiting it to "one" of the types
DB_TEST> select count(created)
  2  from mdw_v1
  3  where src_type='3'
  4  /

COUNT(CREATED)
--------------
           129


Execution Plan
----------------------------------------------------------
Plan hash value: 1188403912
------------------------------------------------------------------------------------
| Id  | Operation             | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |            |     1 |    12 |   184   (1)| 00:00:03 |
|   1 |  SORT AGGREGATE       |            |     1 |    12 |            |          |
|   2 |   VIEW                | MDW_V1     |   823 |  9876 |   184   (1)| 00:00:03 |
|   3 |    UNION-ALL          |            |       |       |            |          |
|*  4 |     FILTER            |            |       |       |            |          |
|*  5 |      TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
|*  6 |     FILTER            |            |       |       |            |          |
|*  7 |      TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
|*  8 |     TABLE ACCESS FULL | MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(NULL IS NOT NULL)
   5 - filter("MS"."OBJECT_NAME" LIKE 'A%')
   6 - filter(NULL IS NOT NULL)
   7 - filter("MS"."OBJECT_NAME" LIKE 'B%')
   8 - filter("MS"."OBJECT_NAME" LIKE 'C%')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
        807  consistent gets


DB_TEST> -- below will do a select of an indexed column across the view
DB_TEST> select count(owner) from mdw_v1
  2  /

COUNT(OWNER)
------------
        1284


Execution Plan
----------------------------------------------------------
Plan hash value: 4219520050
------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |     1 |    17 |   221   (1)| 00:00:03 |
|   1 |  SORT AGGREGATE         |          |     1 |    17 |            |          |
|   2 |   VIEW                  | MDW_V1   |  2463 | 41871 |   221   (1)| 00:00:03 |
|   3 |    UNION-ALL            |          |       |       |            |          |
|*  4 |     INDEX FAST FULL SCAN| MDSO_IDX |   821 | 26272 |    74   (2)| 00:00:01 |
|*  5 |     INDEX FAST FULL SCAN| MDSO_IDX |   821 | 26272 |    74   (2)| 00:00:01 |
|*  6 |     INDEX FAST FULL SCAN| MDSO_IDX |   821 | 26272 |    74   (2)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("MS"."OBJECT_NAME" LIKE 'A%')
   5 - filter("MS"."OBJECT_NAME" LIKE 'B%')
   6 - filter("MS"."OBJECT_NAME" LIKE 'C%')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
        989  consistent gets


DB_TEST> --
DB_TEST> -- below will do a select of an indexed column across the view
DB_TEST> -- limiting it to "one" of the types
DB_TEST> select count(owner)
  2  from mdw_v1
  3  where src_type='3'
  4  /

COUNT(OWNER)
------------
         129


Execution Plan
----------------------------------------------------------
Plan hash value: 815942527
-------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |     1 |    20 |    74   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE          |          |     1 |    20 |            |          |
|   2 |   VIEW                   | MDW_V1   |   823 | 16460 |    74   (2)| 00:00:01 |
|   3 |    UNION-ALL             |          |       |       |            |          |
|*  4 |     FILTER               |          |       |       |            |          |
|*  5 |      INDEX FAST FULL SCAN| MDSO_IDX |   821 | 26272 |    74   (2)| 00:00:01 |
|*  6 |     FILTER               |          |       |       |            |          |
|*  7 |      INDEX FAST FULL SCAN| MDSO_IDX |   821 | 26272 |    74   (2)| 00:00:01 |
|*  8 |     INDEX FAST FULL SCAN | MDSO_IDX |   821 | 26272 |    74   (2)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(NULL IS NOT NULL)
   5 - filter("MS"."OBJECT_NAME" LIKE 'A%')
   6 - filter(NULL IS NOT NULL)
   7 - filter("MS"."OBJECT_NAME" LIKE 'B%')
   8 - filter("MS"."OBJECT_NAME" LIKE 'C%')

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
        331  consistent gets


DB_TEST> -- below will select against the underlying object we are interested in
DB_TEST> select count(created)
  2  from mdw_source ms
  3  where ms.object_name like 'C%'
  4  /

COUNT(CREATED)
--------------
           129


Execution Plan
----------------------------------------------------------
Plan hash value: 152094671

---------------------------------------------------------------------------------
| Id  | Operation          | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |            |     1 |    34 |   184   (1)| 00:00:03 |
|   1 |  SORT AGGREGATE    |            |     1 |    34 |            |          |
|*  2 |   TABLE ACCESS FULL| MDW_SOURCE |   821 | 27914 |   184   (1)| 00:00:03 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("MS"."OBJECT_NAME" LIKE 'C%')

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        805  consistent gets

Look at the plans at the start.

When my query does not include SRC_TYPE, ie the first SQL statement, the plan is simple – it scans the table three times, estimates 821 rows per scan and a cost of 184 for each step. When I include the SRC_TYPE = ‘3’ in the next step we see that the plan still includes all three full table scans, each still estimated at a cost of 184.
When you provide a predicate that identifies the one SQL statement that you want to run, the plan still includes all three. I tried a few further tests and could not force the exclusion of the “unwanted” queries from the plan.

I showed the results to my friend mentioned above (the one who suggested it would not work when I asked the question) and we agreed that it was a little unfortunate that the Oracle Optimizer Pixies had not coded in the ability to exclude unwanted unioned statements, but we could understand it.

However, for those of you thinking “you are missing something” – you are right. {And, to be fair to my friend, my example was not quite so simplified as what I have posted here}.

I decided to quantify the level of the impact and so shifted my attention to the buffer gets. Look at the first and second statements again. The cost of the total statement drops from 551 to 184, and the actual consistent gets from executing the SQL drop from 2417 to 807. Both fall dramatically, to roughly a third.

Now check out the filter predicates at steps 4 and 6 – “NULL IS NOT NULL”. That will always be false. Everything “after” that filter in the plan will not be executed. The step is still in the plan and, as a step, has the same cost – but it is not being executed.

The pattern is repeated when the simple SQL statement is on an indexed column and the index is being used (the next two statements) {and honesty forces me to admit I have included more output than I need for this example, as I wanted to confuse the issue slightly and maybe encourage you not to read the plans too carefully}.

So yes, the CBO IS excluding the individual unioned SQL statements, based on the SRC_TYPE predicate and the fixed value stated in each part of the union.

I’m quite impressed by that. Clever Oracle Optimizer Pixies.
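For context, the full view definition appears earlier in the post; the pattern that makes this pruning possible is a literal “type” column in each branch of the union. A minimal sketch of that pattern (an illustration, not the exact definition used above) would be:

create or replace view mdw_v1 as
select '1' src_type, ms.* from mdw_source ms where ms.object_name like 'A%'
union all
select '2' src_type, ms.* from mdw_source ms where ms.object_name like 'B%'
union all
select '3' src_type, ms.* from mdw_source ms where ms.object_name like 'C%'
/

Because SRC_TYPE is a constant within each branch, a predicate such as SRC_TYPE='3' folds to a constant TRUE or FALSE per branch, which is what shows up as the “NULL IS NOT NULL” filters.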

To tie this back to reality, I initially looked at this as I am trying to tune some code that hides all the complexity of the application under one massive unioned uber-view over a set of 9 other complex, unioned views. The explain plan runs to close to 700 lines. I had utterly overlooked the filters and filter predicates in that – heck I missed it initially in my simple test:-) {which was more complex than in this post, in my defense}. Trying to work with a very slow statement based on nested, complex views that is used by pretty much the whole application is “fun”.

Also back in reality, the filter clause I see in my real code is along the lines of:

:B17 is null or is not null

This is because a bind variable is being considered – and that is actually one of the considerations back in the real world of the original issue. The level at which the master query needs to run is decided by this bind variable – otherwise the application developer would (should have?) just pointed at the sub-view they needed for that level. So the CBO does not know which chunk of the union’d code it can ignore until it sees the bind variable, so the plan shows everything that could be executed.
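To illustrate the bind variable case (a sketch of the pattern, not the client’s actual code): if the same simple view is queried with a bind rather than a literal, the comparison against each branch’s constant cannot be resolved at parse time, so you would expect every branch to remain in the plan, each guarded by a filter such as filter('1'=:B_SRC_TYPE) that is only evaluated when the statement runs.

variable b_src_type varchar2(1)
exec :b_src_type := '3'

select count(created)
from   mdw_v1
where  src_type = :b_src_type
/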

Of course, it could be argued that if you know the type at the time you query the database then, rather than use the above union-filtering trick to cut your massively complex view down to the part that is required, you could just query the sub-view you need directly. A bit more code, but a much simpler system to look after.

It’s a nice SQL trick but I would keep it to simple examples!

Finally, just for fun, here is the plan I’ve been trying to tune. As I said, it is close to 700 steps and only the highlighted grey section on the left is shown in more detail on the right.

Friday Philosophy – Lead or Lag (When to Upgrade)? January 20, 2012

Posted by mwidlake in development, Friday Philosophy, Testing.
Tags: , , ,
10 comments

I was involved in a discussion recently with Debra Lilley about which version of Oracle to use. You can see her blog about it here (and she would love any further feedback from others). Oracle now has a policy that it will only release the quarterly PSUs for a given point release for 12 months once that point release is superseded – ie once 11.2.0.3 came out, Oracle will only guarantee to provide PSUs for 11.2.0.2 for 12 months. See “My Oracle Support” note ID 742060.1. However, an older Terminal release such as 11.1.0.7 is not superseded and is supported until 2015 – and will get the quarterly PSU updates. This left the customer with an issue. Should they start doing their development on the latest and theoretically greatest version of Oracle and be forced to do a point upgrade “soon” to keep getting the PSUs, or use an older version of Oracle and avoid the need to upgrade?

This is in many ways a special case of the perennial issue: should you use the latest version of Oracle (or in fact any complex software solution) or go with the version you know and trust? Plus, should you patch up to the latest level, which in theory gives you protection against bugs and vulnerabilities (along with the CPUs)? Yes, they are two separate issues, but people tend to sit on the same side of both, for the same reasons.

The arguments for staying on an older version are that it is working, it is stable, you do not need the new features and upgrading is a lot of work and effort. Plus the new version will have new bugs that come along with the new features you do not need, and things might be turned on by default that you could do without (like stats collecting or not creating the actual segments when a new table or partition is created). If you remain on your favourite version long enough, you hit another issue: the latest version of Oracle might not be compatible with your ancient version of the OS or another package or programming language critical to your system (I got caught in a terrible web with old perl, old O/S and old DB that resulted in a need to upgrade all three together – ouch!).

The arguments for moving forward are that you get access to the latest features, that overall older features will have more bugs fixed in the newer version, and performance will be better {again, overall, exceptions allowing}. Also, if you do hit bugs and problems, you avoid being told to first upgrade to a fully supported version. Plus, fixes are made for current versions first and then back-ported to older ones. Those back-ported fixes can cause real problems when you DO decide to upgrade.

The big sticking points are the effort involved in upgrading and living with the bugs that you find that Oracle Testing didn’t.

I’ve got a few of other considerations to throw into the pot.

Firstly, if you are developing something new, it is not a lot more effort to use the latest version. This allows you to learn the new version and eases the transition of older systems to it.

Secondly, Oracle likes you if you use the latest version, especially if it is the latest-latest version or even the beta. Yeah, the helpdesk will not have a clue about some of your issues, but in my experience you get access to those really smart guys and gals in Oracle who do the third-line support or even the development work.

Thirdly, if you are on the latest version, if you do decide to freeze on that version for a while, for stability and a quiet life, you have a lot longer before your version (at least at a major level) drops out of support.

Fourthly, dynamic, inquisitive, flexible staff like new things. In my experience, environments that freeze on an old version have a higher percentage of staff who either like it dull and repetitive, or hate it being dull and repetitive – and itch to get out. If I’m in charge, I know which type of staff I like to have more of {NB there are some very good arguments for having some staff who like it dull and repetitive}.

As you can guess, I am in the “be on the latest version” side of the argument. I was ambivalent about it until a few years ago when I noticed a trend:

Sites that like to move forward tend to (a) do it in a controlled manner and (b) have the infrastructure to do proper regression testing.
Sites that like to stay still lack the ability to do regression testing and move forward only when forced – and in a pressured, unplanned and frankly chaotic manner.

That was it, that was the real key thing for me. The further you lag behind the more likely you are to eventually be forced to upgrade and it won’t be a nice time doing it. I know, there are exceptions, systems still running Oracle 6 absolutely fine on an old DOS6.1 box. In the same way you also get the odd 95-year-old life-long smokers – and thousands of 45-year-old smokers with emphysema.

When I have any sway over the situation I now always strive to be on modern versions of Oracle {OS, language, whatever} and to patch small and regular. To support all this, have very good regression testing. I’ve only a couple of times been able to get the regression testing sorted out as well as I would like, but when you do the pain of patching and upgrading, as well as developing and integrating, is so much reduced that not patching seems madness.

So to sum up:

  • If it is a new development, go for the very latest version, play with the latest features if potentially beneficial and see if you can get Oracle to be interested in your attempts. ie (B)lead.
  • If you have good regression testing, plan and carry out patch and version upgrades as they come available and stay current. ie Lead
  • If you have a complex solution in place and no/poor regression testing, do not move to a new major release, leave it a while for the worst new bugs to be found and fixed. Then move. ie Lag
  • If your system is old AND critical and all the guys and gals who implemented it are long gone, stay on that version for ever. ie stagnate.

Oh, and if that last one applies to many of your systems – dust off the CV and start reading technical manuals. One day you will need a new job in a hurry.

Skipped Initial Sequence Value on Oracle 11.2 November 17, 2011

Posted by mwidlake in bug, Testing.
Tags: , ,
4 comments

I’m seeing an oddity with newly created sequences skipping the initial value. ie I create the sequence and the first use of it returns the value 2, not 1.

{Update: see the post comments and this post by Niall – this is a side effect of deferred segment creation. The first insert silently fails, but still consumes a sequence value, and is repeated behind the scenes once the segment is there. It is fixed in 11.2.0.3; my db is at 11.2.0.1 as I got sick of the download of the latest version failing on my poor broadband connection and got it off a mate – turns out it is not the latest version.
Thanks for the enlightenment, Niall.}
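{As a further aside, and my sketch rather than anything from the post or its comments: since the cause is deferred segment creation, on 11.2.0.1 you can sidestep the behaviour either by creating the segment up front for the table in question, or by switching the feature off before creating tables:

-- create the segment immediately for this one table...
create table test_fn
(forn_id number(5)     not null
,forname varchar2(30)  not null
,sex_ind char(1)       not null)
segment creation immediate
tablespace users
/
-- ...or turn deferred segment creation off for everything created in this session
alter session set deferred_segment_creation = false;
}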

This is on Oracle 11.2.0.1 on Windows (XP, Vista and now Windows 7 64 bit). I am sure I did not see this on Oracle 10.1 and 10.2 on linux, Tru64 and windows.

I create a set of test tables for when I run courses or just want to test some things, a few tables holding people, names, addresses, phone calls and monthly call summaries. I do this with a bunch of pretty crude scripts that create the data in steps. Only on Oracle 11.2 it all stopped working. I tracked down the problem to the skipping of the first value for the first sequence I create and use. I’ve just confirmed this on a new server I have created.

I’ve boiled it down to the following test case. I ensure my test table and two sequences are not there, create them and then insert 10 records for men and 10 for women. For the women the records are numbered 1 to 10. For the men they are numbered 2 to 11!!!

My code:

-- seq_oddity.sql
-- BUG ON 11.2.0.1 ??
-- though I create both sequences afresh, the first use of seq_m is getting 2, yes two!
-- cannot work out why, so fixed by setting the row with forn_id of 130 to 1.
set timi off
spool seq_oddity.lst
--
drop table test_fn purge;
drop sequence seq_m;
drop sequence seq_f;
create table test_fn
(forn_id number(5) not null
,forname varchar2(30) not null
,sex_ind char(1) not null)
tablespace users
/
create sequence seq_m;
create sequence seq_f;
insert into test_fn values (seq_m.nextval,'ALAN','M');
exec dbms_output.put_line ('I have just created male name number '||seq_m.currval);
insert into test_fn values (seq_m.nextval,'BARRY','M');
insert into test_fn values (seq_m.nextval,'CHRIS','M');
insert into test_fn values (seq_m.nextval,'DAVID','M');
insert into test_fn values (seq_m.nextval,'EDWARD','M');
insert into test_fn values (seq_m.nextval,'JANG','M');
insert into test_fn values (seq_m.nextval,'GARY','M');
insert into test_fn values (seq_m.nextval,'HARRY','M');
insert into test_fn values (seq_m.nextval,'IAN','M');
insert into test_fn values (seq_m.nextval,'JAMES','M');
exec dbms_output.put_line ('I created 10 men and last value was '||seq_m.currval);
--
--
--
insert into test_fn values (seq_f.nextval,'ALISON','F');
exec dbms_output.put_line ('I have just created female name number '||seq_f.currval);
insert into test_fn values (seq_f.nextval,'BARBARA','F');
insert into test_fn values (seq_f.nextval,'CHERYL','F');
insert into test_fn values (seq_f.nextval,'DAWN','F');
insert into test_fn values (seq_f.nextval,'ELAINE','F');
insert into test_fn values (seq_f.nextval,'FRANCIS','F');
insert into test_fn values (seq_f.nextval,'GILLIAN','F');
insert into test_fn values (seq_f.nextval,'CHERRY','F');
insert into test_fn values (seq_f.nextval,'INGRID','F');
insert into test_fn values (seq_f.nextval,'JANET','F');
exec dbms_output.put_line ('I created 10 women and last value was '||seq_f.currval);
--
select sex_ind,min(forn_id),max(forn_id)
from test_fn
group by sex_ind
/
--
spool off

The results are:

mdw11> @seq_oddity
drop table test_fn purge
           *
ERROR at line 1:
ORA-00942: table or view does not exist

drop sequence seq_m
              *
ERROR at line 1:
ORA-02289: sequence does not exist

drop sequence seq_f
              *
ERROR at line 1:
ORA-02289: sequence does not exist

Table created.

Sequence created.

Sequence created.

1 row created.

I have just created male name number 2

1 row created.
1 row created.
...
1 row created.

I created 10 men and last value was 11


1 row created.

I have just created female name number 1

1 row created.
1 row created.
...
1 row created.
I created 10 women and last value was 10


S MIN(FORN_ID) MAX(FORN_ID)
- ------------ ------------
M            2           11
F            1           10
2 rows selected.

mdw11>

I suppose I should trace this and check for known Oracle bugs, but I wondered if anyone else had seen it.

IOT Part 6(B) – OLTP Inserts into an IOT November 10, 2011

Posted by mwidlake in development, performance, Testing.
Tags: , , , , , ,
14 comments

<..IOT1 – the basics
<….IOT2 – Examples and proofs
<……IOT3 – Significantly reducing IO
<……..IOT4 – Boosting Buffer Cache efficiency
<……….IOT5 – Primary Key Drawback
<…………IOT6(A) – Bulk Insert slowed down

IOT 6 part A covered the sort of insert work used to create an IOT or push a large number of rows into it via an “insert into…select from” type format. ie Bulk Load.
In this, part B, I am going to test inserting a small number of rows (10,000) into a table already holding a million rows. Something more like an hourly load or the inserts that would occur over a working day – a more OLTP-like test.

I’m going to do the tests against four things. A “non-insert” test to ensure my data creation is not taking a lot of resource {the Null test}, then against my three versions of test table – a normal heap table, my IOT and my Partitioned IOT. I run each test on each segment type three times, the minimum number needed to help identify variation that could invalidate the test.

You will probably be surprised by the results….

All three tables have the same column structure, the heap has indexes on the primary key and a unique constraint, the IOT and partitioned IOT are organized on a modified version of the Primary key and have the second index for the unique key. Below is the creation statement for the partitioned IOT (the IOT is the same without the partition clause). You can get full creation details from IOT part 2:

create table transaction_IOT_P
(tran_type number(2)     not null
,tran_id   number(10)    not null
,ACCO_type number(2)     not null
,ACCO_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(1000) not null
,vc_2      varchar2(1000)
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint tip_pk primary key(ACCO_type,ACCO_id,cre_date,tran_type,tran_id) 
--  using index tablespace index_01
,constraint tip_tran_uq unique (tran_type,tran_id)
  using index tablespace index_01
)
organization index
tablespace data_01
partition by range  (cre_date)
(partition rm20110901 values less than (to_date('01-09-2011','DD-MM-YYYY')) 
  tablespace data_01
,partition rm20111001 values less than (to_date('01-10-2011','DD-MM-YYYY'))
  tablespace data_01
,partition rm20111101 values less than (to_date('01-11-2011','DD-MM-YYYY'))
  tablespace data_01
,PARTITION RMTOP  VALUES LESS THAN (MAXVALUE)
  tablespace USERS
)

The three tests are:

  • Insert 10,000 records for random accounts, in time order for the same day. This would be like processing data for or during a “day”
  • Generating the 10,000 records in order of account, to better match the IOT structure, again for a single day. For any given account 0,1,2 or 3 records are created, to be a little more “real life”. This would be like loading a pre-processed set of data.
  • All 10,000 records are created for the one account, in datetime order, as an extreme example of inserting into an IOT data that is ordered to perfectly match the order of the IOT.

The code is written ‘badly’, ie a single record is prepared and inserted into the table at a time, within a loop of 10000 iterations. This is to avoid the bulk-processing that may benefit the heap table and also to better match the reality of an OLTP system. I do however only commit after all records are created for any given test, which is maybe a mistake but that’s what I did.

Now, before I go any further, does anyone want to guess by how much inserting into the heap table will be faster than inserting into the IOT (partitioned or otherwise)? Also, whether the partitioned IOT will be faster or slower than the IOT and what, if any, impact there will be by the ordering of data?

To allow you to think about that and not see the results by accident, I’ll stick in one of the insert harnesses here so you can see how I am doing it. Skip over if you just want the results

--
execute s_snap.my_snap(' finished non-insert test1')
--
-- Transaction_heap random data test
declare
v_num         number :=10000; -- number of accounts
v_offset      number :=-4;
v_str1        varchar2(60);
v_str2        varchar2(60);
v_tran_type   number(2);
v_tran_id     number(10);
v_acco_type   number(2);
v_acco_id     number(10);
v_cre_date    date;
v_vc_1        varchar2(1000);
v_vc_2        varchar2(1000);
v_date_1      date :=sysdate;
v_num_1       number(2) :=10;
v_num_2       number(2) :=15;
begin
v_str1:=dbms_random.string('U',60);
v_str2:=dbms_random.string('U',60);
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
for i in 1..10000 loop
    v_tran_type :=mod(1,3)+1;
    v_tran_id   := 1000000+i;
    v_acco_type := 5+(trunc(dbms_random.value(1,3))*5); -- 10 or 15
    v_acco_id   :=trunc(dbms_random.value(1,v_num/2)); -- 1 to 5000
    v_cre_date  :=trunc(sysdate-v_offset)+(i/(60*60*24));
    v_vc_1      :=substr(v_str1,1,51+mod(i,10));
    v_vc_2      :=substr(v_str2,1,46+mod(i,15));
    insert into transaction_heap
      (tran_type,tran_id,ACCO_type,ACCO_id,cre_date,vc_1,vc_2,date_1,num_1,num_2)
    values
      (v_tran_type,v_tran_id,v_ACCO_type,v_ACCO_id,v_cre_date
      ,V_vc_1,v_vc_2,v_date_1,v_num_1,v_num_2);
end loop;
commit;
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
end;
/
--
execute s_snap.my_snap(' finished th insert test1')
--

Here are the results of three runs for all tests:

Elapsed time in seconds:

                    random insert             ordered-by-account       perfect IOT order match
Segment type    Test 1    2    3            Test 1    2    3           Test 1    2     3    
------------    ------ ---- ----            ------ ---- ----            ----- ---- ----
No insert          2.1  0.1  0.1               0.1  0.1  0.1              0.1  0.0  0.1   

                                    AVG                         AVG                        AVG
                                   ----                        ----                       ----
Heap              52.3 20.5 11.4   28.1       33.4 13.8 10.7   19.3       9.1 13.0  9.8   10.6
IOT              137.1 83.7 26.4   82.4      110.8 63.6 39.0   71.2       6.4  6.7  4.0    5.8
Partn'd IOT       33.2  9.5  8.7   17.1       17.1 10.4 14.3   13.9       8.1  5.3  4.5    6.0
                                   ----                        ----                       ----

Allow me to summarise that.

First, the preparation of the data and cycling around the loop, the no-insert tests, are negligible. {The first no-insert test takes 2 seconds because of the overhead of parsing and gathering dictionary information the first time anything is run.} We can ignore the overhead of the test harness.

Much more significantly:

  • Each iteration of each test, things get faster {with one odd exception}
  • For Random and semi-ordered inserts, inserting into the IOT is slower than the Heap by a factor of 300% to 400% (3 times to 4 times slower) – what we all probably expected.
  • Inserting into the IOT is Faster than the heap when the insert order matches the structure of the IOT in the perfect_IOT test. Not just the same but 40% faster
  • The partitioned IOT is as Fast or Faster than the Heap table, never mind the IOT. Overall, the partitioned IOT takes 50% TO 75% OF THE TIME OF THE HEAP to be populated.

Hands up everyone expecting this? NB My hand would be down if I had not done similar tests before.

Why do we see these results?

I’ll just mention that the reason all the tests get quicker with each iteration is that the computer is doing the same thing over and over again and gets used to it, the relevant circuits in the chip are “oiling in”. Of course not, how silly – but it seems to be how non-IT people think of it:-) Basically, more and more of each segment is getting cached and that reduces physical IO and speeds things up. You see this a lot in testing and you need to be constantly vigilant about it. You might think I should really run the tests until they reach steady state, but that is a very false situation. On a real system you don’t tend to get the same thing run over and over and over again with nothing else running.
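If you do want each run to start from a cold(ish) cache rather than relying on a steady state, one blunt option is to flush the buffer cache between runs. A sketch, and only ever on a private test instance:

-- between test runs, on a private test instance only
alter system flush buffer_cache;
-- and, only if you also want to force re-parsing:
-- alter system flush shared_pool;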

Much more interestingly, why is the partitioned IOT winning? Well, for the Heap table data is being poured into the leading edge of the table, which takes a little processing and some blocks to be created. However, there are also two indexes being maintained, the Primary Key and that Unique key. For each insert Oracle has to find the right spot in each index (root block -> branch block -> leaf block -> check for row) and insert the new record, occasionally splitting a full leaf block.

For the IOT and the Partitioned IOT there is no table to maintain. In effect there are just two indexes to maintain. For the normal IOT this benefit is outweighed by the cost of working with a less space-efficient Primary Key index. As the whole of the row is in each index entry there are fewer index entries possible per leaf block, so more blocks need to be read in from disk and more splits are needed. Thus the inserts into the IOT are slower.
But for the partitioned IOT not only is there no table to maintain but the table (really the Primary Key) is partitioned – on date. We are creating data for a single day at a time. All the relevant blocks will be in the one partition (and it’s the latest and smallest partition) and quickly all cached in memory and processed in memory. It’s the efficient use of the cache again. For both the IOT and the Heap, the primary key segment being processed is larger and the inserts are spread across the whole index.

Ordering data to match the index order (the ordered-by-account tests) helps for all three segment types – heap, IOT and partitioned IOT. We have two indexes and the data is in effect being ordered for both, as one is on the columns we are ordering the data creation by and the other, the unique key, is generated by an ascending sequence. The gain is about 25 to 30% and is due to working through the indexes in order as opposed to skipping about. If the indexes were significantly larger than the buffer cache, or on a highly active system where the turnover in the cache was very high (indicated by a low buffer cache hit ratio:-) ), the benefit would be greater.
The perfect ordering test (perfect_iot_order_match) is a special case. Not only is the data being processed in order of the indexes but only one part of the index is being updated – for a single account. In this situation the maintaining of the index is so efficient that the time saved not processing the heap table means both the IOT and IOT_P win out over the heap. So in this special case, inserts into IOTs are faster than inserts into a heap table.

And finally a quick summary:

  • Inserting records into an IOT carries about a 200-400% overhead, depending on the number of indexes
  • Ordering the data to match the IOT gives a 25-30% advantage, but it similarly aids update of the primary key on the heap
  • Partitioning the IOT such that a smaller chunk of the index is processed can be highly beneficial. I don’t cover it specifically in the above but you can probably appreciate that partitioning on a heap can be beneficial to processing indexes on the heap also.
  • If you can organise to create your IOT such that data is created only in one or a few distinct points in the structure (you might have to tweak your data load too, of course), insert performance might match or even beat that of a heap table with the same indexes
  • Using methods to maximise efficient use of your buffer cache and reduce IO is a very powerful performance tuning aid, not just for selects

BTW I am conscious of the fact that the above may surprise many and you might wonder if my data or tests are flawed. I have stripped out most of the code from this post as it was just becoming massive and confusing, but contact me and I will happily supply all the example scripts and data.

IOT Part 6 – Inserts and Updates Slowed Down (part A) November 1, 2011

Posted by mwidlake in Architecture, performance, Testing.
Tags: , , , ,
14 comments

<..IOT1 – the basics
<….IOT2 – Examples and proofs
<……IOT3 – Significantly reducing IO
<……..IOT4 – Boosting Buffer Cache efficiency
<……….IOT5 – Primary Key Drawback
…………>IOT6(B) – OLTP Inserts

A negative impact of using Index Organized Tables is that inserts are, and updates can be, significantly slowed down. This post covers the former and the reasons why – and the need to always run tests on a suitable system. (I’m ignoring deletes for now – many systems never actually delete data and I plan to cover IOTs and deletes later.)

Using an IOT can slow down inserts by something like 100% to 1000%. If the insert of data into the table is only part of a load process, this might result in a much smaller overall impact on the load, such as 25%. I’m going to highlight a few important factors contributing to this wide spread of impact below.

If you think about it for a moment, you can appreciate there is a performance impact on data creation and modification with IOTs. When you create a new record in a normal table it gets inserted at the end of the table (or perhaps in a block marked as having space). There is no juggling of other data.
With an IOT, the correct point in the index has to be found and the row has to be inserted at the right point. This takes more “work”. The inserting of the new record may also lead to an index block being split and the extra work this entails. Similar extra work has to be carried out if you make updates to data that causes the record to move within the IOT.
Remember, though, that an IOT is almost certainly replacing an index on the heap table which, unless you are removing indexes before loading data and recreating them after, would have to be maintained when inserting into the Heap table. So some of the “overhead” of the IOT would still occur for the heap table in maintaining the Primary Key index. Comparing inserts or updates between a heap table with no indexes and an IOT is not a fair test.

For most database applications data is generally written once, modified occasionally and read many times – so the impact an IOT has on insert/update is often acceptable. However, to make that judgement call you need to know

  • what the update activity is on the data you are thinking of putting into an IOT
  • the magnitude of the impact on insert and update for your system
  • the ratio of read to write.

There is probably little point putting data into an IOT if you constantly update the primary key values (NB see IOT-5 as to why an IOT’s PK columns might not be parts of a true Primary Key) or populate previously empty columns or hardly ever read the data.

There is also no point in using an IOT if you cannot load the data fast enough to support the business need. I regularly encounter situations where people have tested the response of a system once populated but fail to test the performance of population.

Now to get down to the details. If you remember the previous posts in this thread (I know, it has been a while) then you will remember that I create three “tables” with the same columns. One is a normal heap table, one is an Index Organized Table and one is a partitioned Index Organized Table, partitioned into four monthly partitions. All tables have two indexes on them, the Primary Key index (which is the table in the case of the IOTs) and another, roughly similar index, pre-created on the table. I then populate the tables with one million records each.

These are the times, in seconds, to create 1 million records in the HEAP and IOT tables:

                  Time in Seconds
Object type         Run_Normal
------------------  ----------
Normal Heap table        171.9  
IOT table               1483.8

This is the average of three runs to ensure the times were consistent. I am using Oracle V11.1 on a machine with an Intel T7500 core 2 Duo 2.2GHz, 2GB memory and a standard 250GB 5000RPM disk. The SGA is 256MB and Oracle has allocated around 100MB-120MB to the buffer cache.
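(If you want to see how that SGA allocation is split at any point, a quick sketch against the dynamic SGA components view is below; this is just a convenience query, not part of the test itself.)

select component
      ,round(current_size/1024/1024) as current_mb
from   v$sga_dynamic_components
where  current_size > 0
order by current_size desc
/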

We can see that inserting the 1 million rows into the IOT takes 860% the time it does with a heap table. That is a significant impact on speed. We now know how large the impact is on Insert of using an IOT and presumably it’s all to do with juggling the index blocks. Or do we?

This proof-of-concept (POC) on my laptop {which you can also run on your own machine at home} did not match a proof-of-concept I did for a client. That was done on V10.2.0.3 on AIX, on a machine with 2 dual-core CPUs with hyper-threading (so 8 virtual cores), a 2GB SGA and approx 1.5GB buffer cache, with enterprise-level storage somewhere in the bowels of the server room. The results on that machine to create a similar number of records were:

                  Time in Seconds
Object type         Run_Normal
------------------  ----------
Normal Heap table        152.0  
IOT table                205.9

In this case the IOT inserts required 135% the time of the Heap table. This was consistent with other tests I did with a more complex indexing strategy in place, the IOT overhead was around 25-35%. I can’t go into too much more detail as the information belongs to the client but the data creation was more complex and so the actual inserts were only part of the process – this is how it normally is in real life. Even so, the difference in overhead between my local-machine POC and the client hardware POC is significant, which highlights the impact your platform can have on your testing.

So where does that leave us? What is the true usual overhead? Below are my more full results from the laptop POC.

                        Time in Seconds
Object type         Run_Normal    Run_quiet    Run_wrong_p
------------------  ----------    ---------    -----------
Normal Heap table        171.9        81.83         188.27  
IOT table               1483.8      1055.35        1442.82
Partitioned IOT          341.1       267.83         841.22 

Note that with the partitioned IOT the creation took 341 seconds; the performance ratio to a heap table is only 198% and is much better than for the normal IOT. Hopefully you are wondering why!

I’m running this test on a Windows laptop and other things are going on. The timings for Run_Quiet are where I took steps to shut down all non-essential services and applications. This yielded a significant improvement for all three object types, but the biggest impact was on the already-fastest Heap table.

The final set of figures is for a “mistake”. I created the partitions wrong such that half the data went into one partition and the rest into another and a tiny fraction into a third, rather than being spread over 4 partitions evenly. You can see that the Heap and normal IOT times are very similar to the Run_Normal results (as you would expect as these test are the same) but for the partitioned IOT the time taken is half way towards the IOT figure.

We need to dig into what is going on a little further to see where the effort is being spent, and it turns out to be very interesting. During my proof-of-concept on the laptop I grabbed the information from v$sesstat for the session before and after each object creation so I could get the figures just for the loads. I then compared the stats between each object population and show some of them below {IOT_P means Partitioned IOT}.

STAT_NAME                                  Heap           IOT        IOT_P
------------------------------------ ---------- -------------  -----------
CPU used by this session                  5,716         7,222        6,241
DB time                                  17,311       148,866       34,120
Heap Segment Array Inserts               25,538            10           10

branch node splits                           25            76           65
leaf node 90-10 splits                      752         1,463        1,466
leaf node splits                          8,127        24,870       28,841

consistent gets                          57,655       129,717      150,835
cleanout - number of ktugct calls        32,437        75,201       88,701
enqueue requests                         10,936        28,550       33,265

file io wait time                     4,652,146 1,395,970,993  225,511,491
session logical reads                 6,065,365     6,422,071    6,430,281
physical read IO requests                   123        81,458        3,068
physical read bytes                   2,097,152   668,491,776   25,133,056
user I/O wait time                          454       139,585       22,253
hot buffers moved to head of LRU         13,077       198,214       48,915
free buffer requested                    64,887       179,653      117,316

The first section shows that all three used similar amounts of CPU, the IOT and partitioned IOT being a little higher. Much of the CPU consumed was probably in generating the fake data. The DB Time of course pretty much matches the elapsed time, as the DB was doing little else.
It is interesting to see that the Heap insert uses array inserts which of course are not available to the IOT and IOT_P as the data has to be inserted in order. {I think Oracle inserts the data into the heap table as an array and then updates the indexes for all the entries in the array – and I am only getting this array processing as I create the data as an array from a “insert into as select” type load. But don’t hold me to any of that}.

In all three cases there are two indexes being maintained but in the case of the IOT and IOT_P, the primary key index holds the whole row. This means there has to be more information per key, fewer keys per block and thus more blocks to hold the same data {and more branch blocks to reference them all}. So more block splits will be needed. The second section shows this increase in branch node and leaf block splits – double the branch node splits and triple the leaf block splits. This is probably the extra work you would expect for an IOT. Why are there more leaf block splits for the partitioned IOT? The same volume of data ends up taking more blocks in the partitioned IOT – 200MB for the IOT_P in four partitions of 40-60MB as opposed to a single 170MB for the IOT. The larger overall size is just due to a small overhead incurred by using partitions and a touch of random fluctuation.
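A quick way to see those relative segment sizes is a query like the one below. This is a sketch; the segment names are assumptions based on the creation scripts, and remember that an IOT’s storage shows up under its primary key index segment rather than under the table name.

select segment_name
      ,segment_type
      ,partition_name
      ,round(bytes/1024/1024) as mb
from   user_segments
where  segment_name like 'TRANSACTION%'
   or  segment_name like 'TIP%'
order by segment_name, partition_name
/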

So for the IOT and IOT_P there is about three times the index-specific work being done and a similar increase in related statistics such as enqueues, but not exactly three times as it is not just index processing that contributes to those other statistics. However, the elapsed time is much more than three times as much. Also, the IOT_P is doing more index work than the IOT but its elapsed time is less. Why?

The fourth section shows why. Look at the file io wait times. This is the total time spent waiting on IO {in millionths of a second} and it is significantly elevated for the IOT and, to a lesser degree, for the IOT_P. Physical IO is generally responsible for the vast majority of time in any computer system where it has not been completely avoided.
Session logical reads are only slightly elevated, almost negligibly so, but the number of physical reads to support them increases from 123 for the Heap table insert to 81,458 for the IOT and 3,068 for the IOT_P. A clue as to why comes from the hot buffers moved to head of LRU and free buffer requested statistics. There is a lot more activity in moving blocks around in the buffer cache for the IOT and IOT_P.

Basically, for the IOT, all the blocks in the primary key segment are constantly being updated but eventually they won’t all fit in the block buffer cache – remember I said the IOT is eventually 170MB and the buffer cache on my laptop is about 100MB – so they are flushed down to disk and then have to be read back when altered again. This is less of a problem for the IOT_P as only one partition is being worked on at a time (the IOT_P is partitioned on date and the data is created day by day) and so more of it (pretty much all) will stay in memory between alterations. The largest partition only grows to 60MB and so can be worked on in memory.
For the heap, the table is simply appended to and only the indexes have to be constantly updated and they are small enough to stay in the block buffer cache as they are worked on.

This is why when I got my partitioning “wrong” the load took so much longer. More physical IO was needed as the larger partition would not fit into the cache as it was worked on – A quick check shows that logical reads and in fact almost all statistics were very similar but 26,000 IO requests were made (compared to 81,458 for the IOT and 3,068 for the correct IOT_P).

Of course, I set my SGA size and thus the buffer cache to highlight the issue on my laptop and I have to say even I was surprised by the magnitude of the impact. On the enterprise-level system I did my client’s proof of concept on, the impact on insert was less because the buffer cache could hold the whole working set, I suspect the SAN had a considerable cache on it, there was ample CPU resource to cope with the added latching effort and the time taken to actually create the data inserted was a significant part of the workload, reducing the overall impact of the slowness caused by the IOT.

{Update, in This little update I increase my block buffer cache and show that physical IO plummets and the IOT insert performance increases dramatically}.

This demonstrates that a POC, especially one for what will become a real system, has to be a realistic volume on realistic hardware.
For my client’s POC, I still did have to bear in mind the eventual size of the live working set and the probably size of the live block buffer cache and make some educated guesses.

It also explains why my “run_quiet” timings showed a greater benefit for the heap table than the IOT and IOT_P. A Windows machine has lots of pretty pointless things running that take up CPU and a bit of memory, but not really much IO. I reduced the CPU load and that benefits activity that is not IO-bound, so it has more impact on the heap table load. Much of the time for the IOT and IOT_P is taken hammering the disk and that just takes time.

So, in summary:

  • Using an IOT increases the index block splitting and, in turn, enqueues and general workload. The increase is in proportion to the size of the IOT compared to the size of the replaced PK.
  • The performance degradation across the whole load process may well be less than 50%, but the only way to really find out is to test
  • You may lose the array processing load that may benefit a heap table load if you do the load via an intermediate table.
  • With an IOT you may run into issues with physical IO if the segment (or part of the segment) you are loading into cannot fit into the buffer cache (This may be an important consideration for partitioning or ordering of the data loaded)
  • If you do a proof of concept, do it on a system that is as similar to the real one as you can
  • Just seeing the elapsed time difference between tests is sometimes not enough. You need to find out where that extra time is being spent

I’ve thrown an awful lot at you in this one post, so I think I will stop there. I’ve not added the script to create the test tables here, they are in IOT-5 {lacking only the grabbing of the v$sesstat information}.
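For anyone wanting to add that v$sesstat grabbing themselves, a minimal sketch of the idea is below. This is an assumption of one possible approach using V$MYSTAT for the current session; it is not the s_snap package I actually used.

-- snapshot table for the session statistics (names and approach are illustrative)
create table my_stat_snap as
select sn.name, ms.value
from   v$mystat ms
       join v$statname sn on sn.statistic# = ms.statistic#
where  1=0
/
-- before the load
insert into my_stat_snap
select sn.name, ms.value
from   v$mystat ms
       join v$statname sn on sn.statistic# = ms.statistic#
/
-- ...run the insert test...
-- after the load: the change in each statistic for this session
select sn.name, ms.value - s.value as delta
from   v$mystat ms
       join v$statname sn on sn.statistic# = ms.statistic#
       join my_stat_snap s on s.name = sn.name
where  ms.value - s.value > 0
order by delta desc
/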

When to Fix & When to Find Out – Friday Philosophy July 15, 2011

Posted by mwidlake in AWR, Friday Philosophy, Testing.
Tags: ,
8 comments

{warning, this is one of my long, rambling Friday Philosophy blogs. Technical blogs are on the way – though there are some nice AWR screen shots in this one:-) }

As a DBA (or System Administrator or Network admin or any other role that involves fixing things that are broken before the shouting starts) there is often a tension between two contending “best practices”:

– getting the system working again as soon as possible.
or
– understanding exactly what the problem is.

Some experts point out {and I generally agree} that unless you actually find out exactly what the problem was, what you did to fix it and, via a test case, demonstrate why the fix worked, you are not really solving the problem – you are just making it go away. You (hopefully!) know what you changed or the action you took, so you have something that you can repeat which might fix it again next time. (NB this does not apply to the classic “turn it off, turn it back on again”; that is nearly always just aversion therapy.)

But it might not fix it next time.

Or you might be going to way more trouble to fix it than is needed {I’m remembering how flushing the shared pool used to get touted as a way to resolve performance issues, but it is a pretty brutal thing to do to a production database}.

You might even be making other problems worse {like slowing down everything on the production database as the caches warm up again, having flushed out all that data in the SGA, or that good-old-standard of rebuilding indexes that simply do not benefit from the rebuild}.

There is another issue with “just fixing it by trying things” in that you are not really learning about the cause of the issue or about how the technology you are looking after works. A big part of what makes an “expert” an expert is the drive, desire and opportunity to take the time to work this out. It’s that last part I sometimes get grumpy about, the opportunity.
For many of us, we do not have the luxury of investigating the root cause. Because we are under so much pressure to “get it fixed right now and worry about the detail later”. But we do not get to the detail as there is then the next “fix this problem right now” and so it goes on. {Kellyn Pot’Vin does a nice post about what she calls the Superman Conundrum on this topic}.

I had exactly this dilemma just a couple of months ago. Below are the details; it’s a bit long, so skip to the end of the post if you like…

I was creating test data and I decided to use parallel processing to speed it up. I created a month’s worth of data with PL/SQL and then decided to copy it with a simple “insert into …select” statement, updating dates and a couple of other incrementing columns as part of the insert, using parallel processing. The creation of the second month’s data took longer than the PL/SQL code for the first month took. What the!!!??? I pulled up the AWR information and I could see that the problem was (possibly) with inter-process communication between the parallel processes, as shown by the “PX Deq Credit: send blkd” wait event.

The below screenshot shows the overall instance workload; the green is CPU and the pink is “Other”. Only one SQL statement is responsible for all of this effort, via 5 sessions (four of which are parallel threads). You can see that the issue had been going on for over an hour {oh, and to a certain extent these pretty pictures are pointless – I am not looking for the exact solution now, but having loaded the pictures up to the blog site I am damn well going to pretty-up my blog with them}:

Drilling into that one session shows that the bulk of the waits by far is for “PX Deq Credit: send blkd”:

By selecting that wait event, I got the histogram of wait times since the system started {I love those little histograms of wait times}:

Note that these waits are for long periods of time, around 1/10 of a second on average and some are for up to 2 or 4 seconds.
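Those histograms can also be pulled straight from the instance-wide wait event history. A sketch of the sort of query behind them, assuming the event name as it appears in V$EVENT_HISTOGRAM:

select wait_time_milli
      ,wait_count
from   v$event_histogram
where  event = 'PX Deq Credit: send blkd'
order by wait_time_milli
/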

The thing is, I had anticipated this problem and had already increased my PARALLEL_EXECUTION_MESSAGE_SIZE from the default of 2K to 16K, as I knew from experience that the default is way too small and has slowed down parallel execution for me before. So why was I seeing poorer performance now than I anticipated? I was not understanding something. I needed to change one thing, see what impact it had, and repeat until I got to the bottom of it.
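For reference, that earlier change is of this form (a sketch; the parameter is static, so it has to go into the spfile and needs an instance restart to take effect):

alter system set parallel_execution_message_size = 16384 scope=spfile;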

Except I could not – the next team in line was waiting for this data and they already had massive pressure to deliver. My job, what my employer was giving me money to do, was to fix this problem and move on. So, in this instance I just pragmatically got on with getting the task done as quickly as I could.

I did what we all do {don’t we?} under acute time pressure. I made a handful of changes, using the knowledge I already had and guessing a little, hoping that one of them would do the trick. This included reducing the degree of parallelism, adding the /*+ append */ hint (I simply forgot it the first time around), pre-allocating the required space to the tablespace, muttering “pleaseopleaseoplease” under my breath….

It worked:

The job ran in less than 20 minutes and used less resource during that time. Well, it waited less anyway.
The wait histograms show lots and lots of shorter duration waits:

The duplication took 20 minutes when the previous attempt had been terminated after 2 hours when other factors forced it to be curtailed. Job done.

But the thing is, the problem was not fixed. I got the task done in a timescale that was required, I satisfied the need of the client, but I was and am not sure exactly why.

If I was a permanent employee I would consider pressing to be allowed to spend some time working this out, as my employer benefits from me extending my knowledge and skills. This is not always a successful strategy:-) {but it does indicate to me that my employer has a Problem and I need to address that}. In any case, as I was a consultant on this job, I was being paid to already know stuff. So it is now down to me, some personal time, the internet and people more knowledgeable than me whom I can ask for help to sort this out.

And that was my main point. Often at work you have to get over the issue. Later on, you work out what the issue was. I wish I could get paid for the latter as well as the former. The real blow for me is that I no longer have access to that site and the information. My job was complete and, whether they have really shut down my access or not, I am not allowed to go back into the systems to dig around. I think I now know the issue, but I can’t prove it.
