
Friday Philosophy – Do good DBAs need PL/SQL Skills? March 1, 2013

Posted by mwidlake in Friday Philosophy, PL/SQL.
14 comments

This Friday Philosophy was prompted by a discussion between some OakTable people about whether we think “good” DBAs should know PL/SQL. Not all the tricks – bulk processing, using all the built-ins – but able to write PL/SQL with cursor loops and some exception handling that could e.g. cycle through tables and archive off data, or implement some logon-trigger functionality.
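{To be clear, I mean something of this flavour – a minimal sketch, with made-up ORDERS and ORDERS_ARCHIVE tables, not production-quality code:}

declare
  cursor c_old is
    select rowid as rid, order_id, order_date, status
    from   orders
    where  order_date < add_months(sysdate,-36);
begin
  for v_rec in c_old loop
    insert into orders_archive (order_id,order_date,status)
    values (v_rec.order_id,v_rec.order_date,v_rec.status);
    delete from orders where rowid = v_rec.rid;
  end loop;
  commit;
exception
  when others then
    -- keep it simple: undo the partial work and re-raise
    rollback;
    raise;
end;
/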

My response was “that depends on the age of the DBA”.

If you had asked me that question 15 years ago I would have said Yes, a good DBA would and should know PL/SQL.
If you had asked me 10 years ago I would have said I’d hope they would, and most DBAs I respected had some PL/SQL skills.
If you had asked me 5 years ago I would have sighed and had a little rant about how they should but the younger ones don’t and that is wrong.

But now I would say that no, a good DBA does not need PL/SQL skills: they have so many other things they have to do, and the tools to manage the database are somewhat better than they were. But inside I would still be thinking that any DBA beyond their first year or two in the job would benefit from knowing the basics of PL/SQL.

It seems to me that a DBA now is generally expected to look after a very large number of instances, application servers, agents etc, and all their time is taken doing the bread-and-butter tasks of backups and recoveries, patching, duplicating data, raising SRs (and each SR seems to take more and more time every year), unlocking accounts, sorting out permissions…

Not only that, but there is more and more to Oracle that a good DBA needs to understand as the technology gets more complex. Oracle has tried to make Oracle look after itself more, but the result seems to be that for larger systems there are more moving parts to go wrong – and when they do, it is often the DBA who has to sort it out. As an example, you no longer need to set several instance parameters to allocate memory to the components of the PGA and SGA. Just set the Memory Target. But if the system starts to throw odd errors about components being out of memory, the DBA needs to sort that out. They need to know about the dynamic memory adjustments and check them out. They need to understand that certain components are now calculated in a different way, like the log buffer size. And probably they will have to revert to the old parameters, so they still need to know all about those!
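{As a sketch of the sort of digging I mean – these V$ views exist from 11g, the MB arithmetic is mine:}

-- what has automatic memory management given each component?
select component
      ,current_size/1024/1024 as current_mb
      ,min_size/1024/1024     as min_mb
      ,max_size/1024/1024     as max_mb
from   v$memory_dynamic_components
where  current_size > 0
order by component;
--
-- and what grow/shrink operations has it carried out recently?
select component,oper_type,oper_mode
      ,initial_size/1024/1024 as initial_mb
      ,target_size/1024/1024  as target_mb
      ,start_time
from   v$memory_resize_ops
order by start_time desc;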

So unless a DBA is an old hand and “grew up” with all this, they have no time to develop PL/SQL skills. Thus my response was “that depends on the age of the DBA”.

Should a good DBA know SQL?

Yes. I still see that as a given. Buttons, widgets and assistants will only get you so far.

DBMS_APPLICATION_INFO for Instrumentation September 3, 2012

Posted by mwidlake in development, Instrumentation, performance.
14 comments

I just wanted to put up a post about DBMS_APPLICATION_INFO. This is a fantastic little built-in PL/SQL package that Oracle has provided since Oracle 8 to allow you to instrument your code, i.e. record what it is doing. I’m a big fan of DBMS_APPLICATION_INFO and have used it several times to help identify where in an application time is being spent and how that pattern of time has altered.

Some PL/SQL developers use it and some don’t. It seems to me that its use comes down to where you work, as most PL/SQL developers are aware of it – but not everyone uses it (a friend of mine commented recently that “all good PL/SQL developers use it”. I can understand his point but don’t 100% agree).

It is incredibly easy to use. You use the procedures SET_MODULE(module_name, action_name), SET_ACTION(action_name) and SET_CLIENT_INFO(client_info) to set the values of the corresponding columns in V$SESSION for your current session. This is done in memory – there is no inserting or updating of any table rows – so it is incredibly quick and lightweight. The below shows a subset of V$SESSION including the columns that get set:

desc v$session
Name Null? Type
----------------------------------------- -------- ----------------
SADDR RAW(8)
SID NUMBER
SERIAL# NUMBER
AUDSID NUMBER
PADDR RAW(8)
USER# NUMBER
USERNAME VARCHAR2(30)
COMMAND NUMBER
...
MODULE VARCHAR2(64)
MODULE_HASH NUMBER
ACTION VARCHAR2(64)
ACTION_HASH NUMBER
CLIENT_INFO VARCHAR2(64)
...

Just something odd to note: MODULE, ACTION and CLIENT_INFO are shown as VARCHAR2(64), but if you check the documentation you will see that:

MODULE is limited to 48 bytes
ACTION is limited to 32 bytes
CLIENT_INFO can be set to the whole 64 bytes

I have no idea why MODULE and ACTION are limited in this way, but it might be historic – backward compatibility with prior versions.
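{In my experience over-long values are silently truncated rather than rejected, but do verify on your own version. A quick check along these lines:}

EXEC DBMS_APPLICATION_INFO.SET_MODULE(RPAD('M',60,'X'),RPAD('A',60,'X'))
select length(module) mod_len
      ,length(action) act_len
from   v$session
where  sid = sys_context('userenv','sid');
-- expect MOD_LEN to come back as 48 and ACT_LEN as 32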

As a quick example of their use:

test_11_2> --demo_dai.sql
test_11_2> -- quick demo of dbms_application_info
test_11_2> COL sid form 9999
test_11_2> col username form a12
test_11_2> col module form a18
test_11_2> col action form a22
test_11_2> col client_info form a15
test_11_2> -- set module and action
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_MODULE(MODULE_NAME=>'SALES_LOAD',ACTION_NAME=>'VALIDATE_ALL_PRICES')
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

SID USERNAME MODULE ACTION CLIENT_INFO
----- ------------ ------------------ ---------------------- ---------------
650 ERIC SALES_LOAD VALIDATE_ALL_PRICES
test_11_2> --
test_11_2> -- Update the action within a module
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_ACTION(ACTION_NAME=>'Update all Prices')
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

SID USERNAME MODULE ACTION CLIENT_INFO
----- ------------ ------------------ ---------------------- ---------------
650 ERIC SALES_LOAD Update all Prices

test_11_2> -- clear them
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_MODULE(null,null)
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

SID USERNAME MODULE ACTION CLIENT_INFO
----- ------------ ------------------ ---------------------- ---------------
650 ERIC

test_11_2> -- you can set just an action
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_action(ACTION_NAME=>'orphan process')
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

SID USERNAME MODULE ACTION CLIENT_INFO
----- ------------ ------------------ ---------------------- ---------------
650 ERIC orphan process

test_11_2> -- set them to something sensible
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_MODULE(MODULE_NAME=>'RETURNS_LOAD',ACTION_NAME=>'MATCH_TO_SALE')
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

SID USERNAME MODULE ACTION CLIENT_INFO
----- ------------ ------------------ ---------------------- ---------------
650 ERIC RETURNS_LOAD MATCH_TO_SALE

test_11_2> -- client_info can be longer
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_client_info(client_info=>'entered this module at '||sysdate)
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

SID USERNAME MODULE ACTION CLIENT_INFO
----- ------------ --------------- --------------- ----------------------------------------
650 ERIC RETURNS_LOAD MATCH_TO_SALE entered this module at 03-SEP-2012 13:07

The intention is that you set the MODULE as you enter a functional chunk of your application (be it a PL/SQL package or a chunk of Java, whatever {but be aware of the stateless nature of systems using middle tiers and pooled connections}) and then update the ACTION as you progress. You can set an action with no module, as demonstrated above, but I would recommend against it. On that topic, I would make the following recommendations:

  • Set the MODULE at the start, e.g. as you enter a PL/SQL package’s main module.
  • Set the MODULE before you do all the validation – you may wish to see if that validation is as quick as you think.
  • Use SET_ACTION to update the action prudently, i.e. when the activity of the code alters, but not all the time. It will become clearer with use, but what you are after is to be able to judge the relative run durations of parts of the application. Too much detail can mask the overall picture. If you can alter the code you can always add more instrumentation.
  • Clear the MODULE and ACTION explicitly as you exit a package. The activity is lightweight and you do not want any code that lacks this instrumentation to inherit values from calling code.
  • Do not set an ACTION without having set the MODULE earlier in the code.
  • Keep the text of these values short and specific.
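Pulling those recommendations together, a minimal sketch of an instrumented procedure {names invented, error handling kept deliberately simple}:

create or replace procedure load_sales (p_batch_id in number) is
begin
  dbms_application_info.set_module(module_name=>'SALES_LOAD'
                                  ,action_name=>'VALIDATE');
  -- ... validate the batch here ...
  dbms_application_info.set_action(action_name=>'INSERT_ROWS');
  -- ... do the main load here ...
  -- clear the values explicitly on the way out
  dbms_application_info.set_module(null,null);
exception
  when others then
    -- clear them even on failure, then re-raise
    dbms_application_info.set_module(null,null);
    raise;
end load_sales;
/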

You do not have to look at V$SESSION to see these values, which is good as you may lack the privileges needed to do so. Since Oracle 9 there have also been procedures to check what the values are set to, e.g.:

test_11_2> BEGIN
2 DECLARE
3 V_MODULE VARCHAR2 (64);
4 V_ACTION VARCHAR2 (64);
5 begin
6 dbms_application_info.READ_MODULE(V_MODULE,V_ACTION);
7 DBMS_OUTPUT.PUT_LINE('Module IS '||V_MODULE||' Action IS '||V_ACTION);
8 END;
9 END;
10 /

Module IS RETURNS_LOAD Action IS MATCH_TO_SALE
test_11_2>
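There is a matching READ_CLIENT_INFO procedure for the third value – a minimal sketch:

declare
  v_client_info varchar2(64);
begin
  dbms_application_info.read_client_info(v_client_info);
  dbms_output.put_line('Client info IS '||v_client_info);
end;
/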

So as you can see, these values can be set, they can be accessed via the data dictionary or the package itself and, if you believe my wild claims, they are very lightweight to use. However, these values really come into their own with e.g. OEM’s performance screens. I’ll leave examples of that to a second post. Here I just wanted to cover how easy it is to set and get the information.

That does lead me on to what I think is the main reason that DBMS_APPLICATION_INFO is not as widely used as it could {and should} be. Developers often lack access to e.g. OEM and so do not see the full benefit of using it. Also, developers tend to be {note the get-out clause} more concerned with making the code work and less with monitoring it after it is released. DBA-types tend to be more concerned with monitoring. Thus the DBA-types would want it in the code and developers would not want to spend the time coding it in. We are all one big team though, so some negotiation and beer-buying (or management enforcement) might well resolve that one.

Pickler Fetch – What is it? August 11, 2011

Posted by mwidlake in performance, PL/SQL.
12 comments

How many of you have seen the step COLLECTION ITERATOR PICKLER FETCH appearing in Explain Plans and wondered “what is a Pickler Fetch”?

{Note – make sure you check out the comments on this post if you are interested in Pickler Fetches. Timur Akhmadeev and Dom Brooks correct and extend my original efforts}.

----------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |            |       |       |    31 (100)|          |
|*  1 |  FILTER                               |            |       |       |            |          |
|   2 |   NESTED LOOPS                        |            |       |       |            |          |
|   3 |    NESTED LOOPS                       |            |     1 |    71 |    31   (4)| 00:00:01 |
|   4 |     SORT UNIQUE                       |            |     4 |     8 |    26   (0)| 00:00:01 |
|   5 |      COLLECTION ITERATOR PICKLER FETCH|            |     4 |     8 |    26   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                  | CHHE_PK    |     1 |       |     2   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID        | CHILD_HEAP |     1 |    69 |     4   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

It is where Oracle works through a SQL array. I’ve usually seen this within PL/SQL, where an array variable is passed into a procedure or package – containing, say, a set of accounts or customers of interest – and is used within a cursor. But you might also see it as the step processing the output of a pipelined function (one that returns a set of “rows”) being cast into a table. See this example by Tom Kyte. Tom also suggests that it is called a “Pickler” fetch as the data is pickled – packed and formatted. I never knew that; I just thought someone in Oracle development was having a giggle and it was “Pickled” as it was preserved from the PL/SQL side of the SQL engine. It seems I was a little off-target with that.

{My thanks to Timur (see comments) who corrected me when I said it was a PL/SQL array. It is not, it is a SQL object}.

The above plan was generated for the below SQL statement:

cursor c_collect is 
select  PARE_ID
       ,CRE_DATE
       ,VC_1
from child_heap
where pare_id in (select  id  from table(p_parents) p)
and   cre_date >=trunc(p_date) and cre_date < trunc(p_date+1);
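As an aside, if you want to see the step coming from a pipelined function rather than a passed-in array, a minimal sketch {all names invented} is:

create or replace type num_tab as table of number;
/
create or replace function first_n (p_n in number)
return num_tab pipelined is
begin
  for i in 1..p_n loop
    pipe row (i);
  end loop;
  return;
end;
/
select * from table(first_n(5));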

Anyway, I just wanted to make a few comments about Pickler Fetches:

  • The CBO is not very good at looking “inside” such arrays to determine the cardinality of that step. This can cause performance issues.
  • Because using them involves a few steps and potentially involves grants, it is possible for them to be “temporarily removed” during testing and so their impact is not seen.
  • They can leak memory. I think.

I’m now going to cover each of those points in turn. If you just wanted to know what a pickler fetch is and I’ve answered that for you, I suggest you go back to whatever you were doing before you started reading this :-)

By “not very good at looking inside pickler fetches” I mean that the CBO either bases its estimate on the first array it sees (V11 up) or utterly fails to identify how many records are inside the SQL array (V10 down), depending on the Oracle version. From an Oracle 10.2.0.4 system I’ve got the following examples:

-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name              | Rows| Bytes | Cost (%CPU)| Time     | Pstart
--------------------------------------------------------------------------------------------------------------------
|   9 |      NESTED LOOPS                       |                   |   1 |   139 |   114   (1)| 00:00:01 |      
|* 10 |       HASH JOIN SEMI                    |                   |   1 |    50 |    11  (10)| 00:00:01 |      
|* 11 |        TABLE ACCESS BY INDEX ROWID      | ABCDE_ABCDEFGHIJ  |   3 |   144 |     4   (0)| 00:00:01 |      
|* 12 |         INDEX RANGE SCAN                | XX_PK             |   3 |       |     3   (0)| 00:00:01 |      
|  13 |        COLLECTION ITERATOR PICKLER FETCH|                   |     |       |            |          |      
|  14 |       PARTITION RANGE ITERATOR          |                   |   6 |   534 |   103   (0)| 00:00:01 |   KEY
|* 15 |        INDEX RANGE SCAN                 | AAXX_PK           |   6 |   534 |   103   (0)| 00:00:01 |   KEY

Note the cost and expected rows for the Pickler Fetch step. Or rather, the lack of them.

This would manifest itself in the following way in the OEM performance screens {screenshot not reproduced here}: the cardinality of the Pickler Fetch step was 0 {sorry, I cut off the column headings}. This resulted in that step having a cost of 0 and all the subsequent steps having an expected cardinality of one and very low costs – between 1 and 3 {again, not shown, sorry}.

The end result of this was that the CBO struggled to accurately cost any given plan and came up with several, usually quite terrible, plans that it swapped between as other table stats varied. The CBO was picking between very complex plans with total costs of 100 or 200 or so! Any difference was “significant”.

Please note, OPTIMIZER_DYNAMIC_SAMPLING was set to 4 on this system, and I tried hints and session settings at higher levels; they did not prompt the CBO to look into the array, on 10.2.0.4 at least.

In 11.1 things seem to be better, as is shown in the explain plan at the top of this post: the step has a cost. I have to confess I have not tested this very much on 11 {and if anyone has, feel free to correct me/add enlightenment via comments or pointers to other sources}, but it seems to set the cardinality to the number of elements the Pickler Fetch finds in the first iteration. Unless it uses the same sort of trick Oracle 11 now uses for bind variables (detecting when the value supplied is out of range and generating a new plan), this is going to lead to the old and much ‘loved’ issue of the plan being fixed by the first execution, irrespective of how suitable that plan is.

How do you fix this issue? Well, I resort to the cardinality hint. Usually the number of records being passed in via the array is not too variable, and any half-decent value is better than nothing in Oracle 10 and before. As for 11, I like stating the value rather than risking a variable ‘first seen at parsing time’ setting. It is a judgement call. The below is from 11.1, but I’ve used it extensively in 10.2, where the impact is much more significant:

select  PARE_ID
       ,CRE_DATE
       ,VC_1
from child_heap
where pare_id in (select  id  from table(p_parents) )
and   cre_date >=trunc(p_date) and cre_date < trunc(p_date+1)
--
----------------------------------------------------------------------------
| Id  | Operation                             | Name       | Rows  | Bytes |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |            |       |       |
|*  1 |  FILTER                               |            |       |       |
|   2 |   NESTED LOOPS                        |            |       |       |
|   3 |    NESTED LOOPS                       |            |     1 |    71 |
|   4 |     SORT UNIQUE                       |            |     4 |     8 |
|   5 |      COLLECTION ITERATOR PICKLER FETCH|            |     4 |     8 |
|*  6 |     INDEX RANGE SCAN                  | CHHE_PK    |     1 |       |
|   7 |    TABLE ACCESS BY INDEX ROWID        | CHILD_HEAP |     1 |    69 |
----------------------------------------------------------------------------
--
-- and now with hint
--
select  PARE_ID
       ,CRE_DATE
       ,VC_1
from child_heap
where pare_id in (select /*+ CARDINALITY(p,11) */ id  from table(p_parents) p)
and   cre_date >=trunc(p_date) and cre_date < trunc(p_date+1)
--
-----------------------------------------------------------------------------
| Id  | Operation                             | Name       | Rows  | Bytes |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |            |       |       |
|*  1 |  FILTER                               |            |       |       |
|   2 |   NESTED LOOPS                        |            |       |       |
|   3 |    NESTED LOOPS                       |            |     1 |    71 |
|   4 |     SORT UNIQUE                       |            |    11 |    22 |
|   5 |      COLLECTION ITERATOR PICKLER FETCH|            |    11 |    22 |
|*  6 |     INDEX RANGE SCAN                  | CHHE_PK    |     1 |       |
|   7 |    TABLE ACCESS BY INDEX ROWID        | CHILD_HEAP |     1 |    69 |
-----------------------------------------------------------------------------

Note the change of ROWS to 11 in step 5. In V10 this is a change from blank to 11 and, in real situations, do not be at all surprised if the plan changes dramatically – away from nested loop access and more towards hash joins. {I should note, the cardinality hint is not documented in Oracle 10 or 11 and any use you make of it in live code is your responsibility. Sorry about that}.

What about my second point, about testing them? Well, as an example of testing Pickler processing of SQL arrays, which are defined SQL types, this is what I had to do to run my basic test:

-- pare_types_demo.sql
--
-- create and use some record and table types.
--
create or replace type pare_rec as object
(id number(10)
,name varchar2(100))
/
create or replace type pare_rec_t
as table of pare_rec
/
--
set serveroutput on size unlimited
--
create or replace procedure get_children
(p_parents in pare_rec_t
,p_date    in date)
is
cursor c_collect is 
select  PARE_ID
       ,CRE_DATE
       ,VC_1
from child_heap
where pare_id in (select id from table(p_parents))
and   cre_date >=trunc(p_date) and cre_date < trunc(p_date+1);
begin
  for v_rec in c_collect
  loop
    dbms_output.put_line('parent '||to_char(v_rec.pare_id)||' on '||to_char(v_rec.cre_date)
    ||' value '||v_rec.vc_1);
  end loop;
end;
/
--
declare
l_date        date           :=sysdate-100;
l_parents     pare_rec_t;
begin
  l_parents :=pare_rec_T(PARE_REC(1,'ERIC')
                        ,PARE_REC(2,'BARRY')
                        ,PARE_REC(3,'KERMIT')
                        ,PARE_REC(4,'GONZO')
                        );
  GET_CHILDREN (L_PARENTS,L_DATE);
END;
/

I had to create a table type, which is the SQL array, and this was based on an object type which I had to create first {you can have table types based on standard SQL types, but very often they are based on a “row” object}. After creating the stored procedure, I had to define and populate the array with a set of records which I then passed in to my procedure call. {If you want to repeat this yourself, check out my postings on IOTs to get the table creation statement for table CHILD_HEAP}.
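For comparison, a table type based on a standard SQL type needs no object type at all, e.g.:

create or replace type name_list_t as table of varchar2(100);
/
-- elements of a scalar collection come back via the pseudocolumn COLUMN_VALUE
select column_value from table(name_list_t('ERIC','BARRY','KERMIT'));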
Now, I created those types so I had access to them. If those types do not belong to you, you have to be granted execute on the types to reference them. Not select – execute. Some sites have a pretty strict attitude to granting execute on anything, and types seem to get forgotten when the execute privileges against packages and procedures are set up. In a recent situation I had, I was forced to do some testing work on Live and it had taken people with big sticks to get me select access on data. Execute privileges were totally refused. Calmly explaining why it was needed and how it was acceptable fell on not so much deaf as bricked-up ears.

So, for testing, the reference to an array passed in is often replaced by a little sub-select. After all, quite often what is being passed in for a pickler fetch is actually a set of records {but a subset of the rows} from a table that has been collected by a previous processing step.
As an example of such a change:

where  ev.SYS_ID                       = pag.SYS_ID
and    ev.EXT_ID                       = pag.EXT_ID
and    P_EXT_ID                        = pag.EXT_ID
and    P_SYS_ID                        = pag.SYS_ID
and    ev.CREATED_DT                  >= pag.START_DT
and (ev.ID, ev.TX) in (select ID,TX from table(P_ARRAY_IN))
--
--BECOMES
--
where  ev.SYS_ID                       = pag.SYS_ID
and    ev.EXT_ID                       = pag.EXT_ID
and    P_EXT_ID                        = pag.EXT_ID
and    P_SYS_ID                        = pag.SYS_ID
and    ev.CREATED_DT                  >= pag.START_DT
--and (ev.ID, ev.TX) in (select ID,TX from table(P_ARRAY_IN)) --NB swap back in before release!
and (ev.ID, ev.TX) in (select ID,TX from CUSTOMERS WHERE SURNAME='WIDLAKE' and FFNAME='MARTIN')

All that has changed is that we now have a little sub-select rather than the casting of the SQL array into a table and, heck, as the developer might say, those were the records that would have been passed in; the code still works as expected and the same data comes back. No identifying which object types you need, no getting the execute permissions, no populating the array yourself in the test harness – we can just swap the array back in later.

{I apologise to good developers, who even now are throwing imaginary darts at me. I know Dawn, you would certainly not do this. But I’ve seen it a couple of times. Developers have enough on their plate to go worrying about esoteric aspects of the CBO}

But the thing is, Oracle can look at that select and evaluate its cost and get an expected cardinality. The pickler fetch version has the issues I’ve just covered. I had to deal with a situation just like this last year; it does happen. In dev it was fine, in pre-live testing it was not.

What about memory leaks? Well, I had lots of issues with SQL arrays and memory leaks with Oracle 10.1 and 10.2 at one client site, and there is a documented bug in Oracle 8 with pickler fetch and memory leaks, but I have to confess a quick MetaLink search did not find any hits for Oracle 10 and 11. So maybe you should not trust me on that one. In the situation I saw, the arrays were massive – several MB at times – and so if you are using SQL arrays to pass in a modest list of e.g. accounts or customers, it is not going to be an issue anyway.

You know, this was just going to be a quick post on something I’ve been meaning to mention for months, not a small essay :-).

Scalar Functions Not Behaving as Scalar Functions December 8, 2009

Posted by mwidlake in performance, Uncategorized.
3 comments

My thanks go to Graeme Hobbs, who I currently have the pleasure of working with, for explaining this to me.

This is all about scalar functions that do not act like scalar functions.

Where I am spending much of my time at the moment, we have an issue with a very simple SQL statement. It takes on average 2 buffer gets per execution. How could that be an issue? Well, it is being executed a few thousand times an hour. By “a few thousand” read 50,000 times. Even a simple statement being executed 15 times a second is not good if it is not needed. Especially as, in this case, there is a context switch.

I got a look at the code and it is doing something like this:

HAVING SUM( col.vol_gbp ) >=
       my_user.pkg_application_settings.get_number_value( 'rp157_eric' )

I quickly batted an email to my esteemed colleague Mr Hobbs saying “but this is a scalar function. It will be executed once for the statement and the value passed into each execution of the whole query; it can’t be the source of the 50,000 executions an hour”.

Oh woe, for my ignorance has undone me. But it’s OK, Graeme not only took time to explain, but knocked up the following nice demo of WHY this is a problem. And said I could put it on my Blog. What a nice chap.

From Graeme:
******************************************************

Consider package:

CREATE OR REPLACE PACKAGE BODY ghtest
IS
l_called_count NUMBER := 0;

-- Function and procedure implementations
FUNCTION f_get_called_count
RETURN NUMBER
IS
BEGIN
RETURN( l_called_count );
END;

FUNCTION f_call
RETURN NUMBER
IS
BEGIN
l_called_count := l_called_count + 1;
RETURN 1;
END;

FUNCTION f_reset
RETURN NUMBER
IS
BEGIN
l_called_count := 0;
RETURN 1;
END;
END ghtest;
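{The original did not show the package specification; a minimal one, so the body above compiles, would be:}

CREATE OR REPLACE PACKAGE ghtest
IS
FUNCTION f_get_called_count RETURN NUMBER;
FUNCTION f_call RETURN NUMBER;
FUNCTION f_reset RETURN NUMBER;
END ghtest;
/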

-- nothing up my sleeve
SELECT ghtest.f_get_called_count
FROM DUAL;

F_GET_CALLED_COUNT
------------------
0

i.e. no calls made yet

SQL> SELECT   id
           , COUNT( * )
        FROM ( SELECT *
                FROM MY_USR.tbl_SOME_DATA
               WHERE ROWNUM < 10 )
    GROUP BY id
      HAVING COUNT( * ) > ghtest.f_call;

ID     COUNT(*)
---------- ----------
1624361594 3

SELECT ghtest.f_get_called_count
FROM DUAL;

F_GET_CALLED_COUNT
------------------
7

NOTE the counter has gone up to 7 as there were 7 IDs found by that code – six IDs with one record and one with 3.

SELECT ghtest.f_reset
FROM DUAL;

F_RESET
----------
1

That just blanks the counter (see the package).

-- nothing up my sleeve
SELECT ghtest.f_get_called_count
FROM DUAL;

F_GET_CALLED_COUNT
------------------
0

SELECT   id
          , COUNT( * )
       FROM ( SELECT *
               FROM my_usr.tbl_some_data
              WHERE ROWNUM < 10 )
   GROUP BY id
     HAVING COUNT( * ) > (SELECT ghtest.f_call FROM dual);

ID     COUNT(*)
---------- ----------
1624361594 3

SELECT ghtest.f_get_called_count
FROM DUAL;

F_GET_CALLED_COUNT
------------------
1

You can see that the scalar subquery
HAVING COUNT( * ) > (SELECT ghtest.f_call FROM dual);
calls f_call once.

Whereas
HAVING COUNT( * ) > ghtest.f_call;
results in it being called 7 times – the number of rows returned by the GROUP BY.

That is precisely the problem.
******************************************************
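Applying that to the statement I started with, the fix is simply to wrap the function call in a scalar subquery:

HAVING SUM( col.vol_gbp ) >=
       ( SELECT my_user.pkg_application_settings.get_number_value( 'rp157_eric' )
         FROM dual )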

Apparently our developers should know this, as it is a “known issue”. But it was not known to me until now (unless I knew it and forgot it).

So, hats off to Graeme.

{apologies, wordpress utterly screwed the layout/truncated text when I pasted in the example, have fixed now}

Accessing Roles in stored PL/SQL October 22, 2009

Posted by mwidlake in development, internals.
7 comments

Whilst looking for the minimum privileges I needed to execute DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO {it is ANALYZE ANY, by the way} I discovered something about PL/SQL and roles that I did not know. Now, any right I had to claim expertise in PL/SQL expired at least 8 years ago, but I asked some friends who ARE still professional PL/SQL experts and they did not know this either.

Privileges granted via roles to a user are not available to stored PL/SQL created by that user, correct? This is widely known and understood. You have to grant privileges directly to the user for them to be seen in PL/SQL packages, functions etc.

Having found that I needed the ANALYZE ANY privilege, as I mentioned above, I asked the DBA team to grant my user that privilege on Production and Staging. They did so – via a role. “It won’t work,” I said, “I run the code via a package; it won’t see the privilege”, and proved it by running DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO via a quickly constructed demo. Except I only proved my ignorance. It worked. WHY?

If a package is created such that it is executed with invoker’s rights then roles are seen.

This is my test script:

create or replace package test1 authid current_user is
procedure run_flush;
end test1;
/
--
create or replace package body test1 is
procedure run_flush is
cursor get_ses_roles is
select role
from session_roles;
begin
  dbms_output.put_line('starting');
  for ses_roles_rec in get_ses_roles loop
    dbms_output.put_line(ses_roles_rec.role);
  end loop;
  dbms_output.put_line('flushing');
  dbms_stats.flush_database_monitoring_info;
  dbms_output.put_line('ending');
end;
begin
  null;
end;
/

I create this package as user MDW.

Now, as a privileged user, I create a role and grant ANALYZE ANY to the role.

MGR>create role mdw_role;
Role created.
MGR>grant analyze any to mdw_role;
Grant succeeded.

I’ll just prove that user MDW cannot yet execute the monitoring procedure:

MDW> exec dbms_stats.flush_database_monitoring_info
BEGIN dbms_stats.flush_database_monitoring_info; END;
*
ERROR at line 1:
ORA-20000: Insufficient privileges
ORA-06512: at "SYS.DBMS_STATS", line 2148
ORA-06512: at "SYS.DBMS_STATS", line 14135
ORA-06512: at line 1

Now I grant the role to MDW:

MGR>grant mdw_role to mdw;
Grant succeeded.

MDW has to log out and back in again to see the role correctly. Having done this I check for the role and then try to execute the test procedure:

MDW> select * from session_roles;

ROLE
------------------------------
CONNECT
MDW_ROLE
2 rows selected.

MDW> exec test1.run_flush
starting
CONNECT
MDW_ROLE
flushing
ending

PL/SQL procedure successfully completed.

You can see that the package sees the roles and it executes the procedure successfully. So, stored PL/SQL can utilise privileges granted via roles if the package is created with authid current_user, i.e. invoker’s rights.
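{If you want to check how an existing package was created, the *_PROCEDURES views expose this – I believe from 11g onwards – via an AUTHID column:}

select object_name, procedure_name, authid
from   user_procedures
where  object_name = 'TEST1';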

I’d better admit, as someone else might raise it, that this is not the best demonstration of this feature. I recreated the package with the first line set to:

create or replace package test1 is

i.e. the default of definer’s rights (the owner’s privileges). I now re-execute the call to the package:

MDW> exec test1.run_flush
starting
flushing
ending

PL/SQL procedure successfully completed.

Note that the roles are no longer seen. However, DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO still executed correctly.
Why?
Well, because if you look at the package specification of DBMS_STATS you see:

create or replace package sys.dbms_stats authid current_user is

It makes sense. It is dangerous for system-owned packages to execute as the owner, i.e. SYS, as execute access to the package would then allow access to SYS privileges.

Which, of course, is why my little proof script executed the flush correctly and I looked a bit silly in front of the DBA team. Oh well.
