Decrypting Histogram Data #3 – is 7 or 15 Characters Significant? September 15, 2009

Posted by mwidlake in internals, performance.

At last I am going to wrap up looking at histogram data and endpoint_value. Sorry about the delay, life is a tad complex at present. This is also a very, very long post, so you may want to skip to the end for the conclusions.

I first posted about how to translate endpoint_values.
I then posted in the second post a week ago about how endpoint_value is limited to 7 or so characters for varchar2 columns.
{Addition – this later post discusses the content of endpoint_value for dates.}

Coskan contacted me to say he believed I had missed something, and of course he was right – but I’d laid big hints that I was using theory and not practical tests, and that there was more to know. {There is usually more to know; only practical tests confirm whether your theory works or not. And even after that, it seems there is still more to know!}

OK, I have a test table, created with the following {oh, this is under 10.2.0.3, Enterprise Edition, on Linux}:

create table hist_test
as select rownum  id
,lpad(chr(mod(rownum,6)+65),5,chr(mod(rownum,6)+65) )vc_1
,lpad('A',2,'A')||dbms_random.string('U',1) vc_2
,lpad('B',3,'B')||dbms_random.string('U',1) vc_3
,lpad('C',4,'C')||dbms_random.string('U',1) vc_4
,lpad('D',5,'D')||dbms_random.string('U',1) vc_5
,lpad('E',6,'E')||dbms_random.string('U',1) vc_6
,lpad('F',7,'F')||dbms_random.string('U',1) vc_7
,lpad('G',8,'G')||dbms_random.string('U',1) vc_8
,lpad('H',16,'H')||dbms_random.string('U',1) vc_16
,lpad('I',40,'I')||dbms_random.string('U',1) vc_40
from dba_objects
where rownum < 1001
/

I now need to add some skew to the data, so that different WHERE clauses will see different expected numbers of rows and you can see the histograms in action.

update hist_test set
 VC_5 =lpad('D',5,'D')||'A'
,VC_7 =lpad('F',7,'F')||'A'
,VC_16=lpad('H',16,'H')||'A'
,VC_40=lpad('I',40,'I')||'A'
where mod(id,10)=1
/

Gather histogram stats, 10 buckets:
exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'HIST_TEST' -
,method_opt=>'FOR ALL COLUMNS SIZE 10')

And looking at ALL_TAB_COLUMNS we can see the general structure of the data (just the ID and VC_ columns are shown):

COL_NAME NUM_DIST LOW_V                HI_V                 N_BUCK  AVG_L
-------------------------------------------------------------
ID          1,000 1                    1000                     10      4
VC_1            6 AAAAA                FFFFF                     6      6
VC_2           26 AAA                  AAZ                      10      4
VC_3           26 BBBA                 BBBZ                     10      5
VC_4           26 CCCCA                CCCCZ                    10      6
VC_5           26 DDDDDA               DDDDDZ                   10      7
VC_6           26 EEEEEEA              EEEEEEZ                  10      8
VC_7           26 FFFFFFFA             FFFFFFFZ                 10      9
VC_8           26 GGGGGGGGA            GGGGGGGGZ                10     10
VC_16          26 HHHHHHHHHHHHHHHHA    HHHHHHHHHHHHHHHHZ        10     18
VC_40           1 IIIIIIIIIIIIIIIIIIII IIIIIIIIIIIIIIIIIIII      1     42

I have created a table with varchar2 columns set to a fixed string of characters, the length indicated by the column name, with the last character varying from A to Z. So VC_5 contains DDDDDx, where x is A to Z. That is 26 distinct values in each column.

If you look at the above, all the VC_ columns are showing 26 distinct values and 10 buckets (bar VC_1, with its 6 values and 6 buckets) EXCEPT VC_40, which claims just 1 value and 1 bucket. But it really does have 26 distinct values:

select count(distinct(VC_40)) from hist_test;
COUNT(DISTINCT(VC_40))
----------------------
                    26

I have my function to unpack the endpoint_value for varchar2 columns, from the number stored back to the character string it represents. My function is not perfect, but it is reasonably good. Go check out post one for the function.
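
If you don’t want to click through, here is a minimal sketch of the decoding idea {a sketch only, not the exact function from post one, which handles more edge cases}. It relies on endpoint_value being, in effect, the first few bytes of the string treated as base-256 digits and padded out to a 15-byte number:

create or replace function hist_numtochar_sketch (p_num in number)
return varchar2 is
-- A sketch: peel bytes off the most significant end of the 15-byte
-- number and turn each printable one back into a character.
  v_remain number := p_num;
  v_byte   number;
  v_result varchar2(10) := '';
begin
  for i in 1..6 loop
    v_byte   := trunc(v_remain/power(256,15-i));
    v_remain := v_remain-(v_byte*power(256,15-i));
    if v_byte between 33 and 126 then  -- printable ASCII only
      v_result := v_result||chr(v_byte);
    end if;
  end loop;
  return v_result;
end;
/
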
Looking at the histograms with my function, you can see the below {I show only some records}:

colname                               END_VAL ROWCOUNT MOD_REAL
------------------------------------------------------------------
VC_5     354460798875962000000000000000000000        1 DDDDDA
VC_5     354460798875972000000000000000000000        2 DDDDDD
VC_5     354460798875986000000000000000000000        3 DDDDDG
{snip}
VC_5     354460798876071000000000000000000000        9 DDDDDY
VC_5     354460798876080000000000000000000000       10 DDDDDZ
VC_7     364886116489977000000000000000000000        1 FFFFFGK
VC_7     364886116489977000000000000000000000        2 FFFFFGK
VC_7     364886116489977000000000000000000000        3 FFFFFGK
VC_7     364886116489977000000000000000000000        4 FFFFFGK
{snip}
VC_7     364886116489977000000000000000000000        9 FFFFFGK
VC_7     364886116489977000000000000000000000       10 FFFFFGK
VC_16    375311434103976000000000000000000000        1 HHHHHI:
VC_16    375311434103976000000000000000000000        2 HHHHHI:
VC_16    375311434103976000000000000000000000        3 HHHHHI:
{snip}
VC_16    375311434103976000000000000000000000        9 HHHHHI:
VC_16    375311434103976000000000000000000000       10 HHHHHI:
VC_40    380524092910976000000000000000000000    1,000 IIIIIJM

For VC_5, there are 10 buckets, all with different endpoint_values.
The VC_7 has 10 buckets, but most have the same endpoint_value.
The VC_16 has 10 records in the HISTOGRAM table, all with the same endpoint_value.
VC_40 has only one record in the HISTOGRAM table.

Basically, if I am right, WHERE filters on a column holding values that are all the same for the first seven or so characters will not be well supported by the histograms. We should see the CBO’s estimated row counts for such WHERE filters be wrong.

I skewed the data with the UPDATE earlier so that the histograms should show more expected rows for DDDDDA than DDDDDS, for FFFFFFFA than FFFFFFFS, etc.

And now I’ll see how the CBO estimates the rows coming back for some WHERE clauses.

select count(*) from hist_test
where VC_5 = 'DDDDDA'
  COUNT(*)
----------
       139

Execution Plan
----------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)|
---------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |     7 |     8   (0)|
|   1 |  SORT AGGREGATE    |           |     1 |     7 |            |
|*  2 |   TABLE ACCESS FULL| HIST_TEST |   100 |   700 |     8   (0)|
---------------------------------------------------------------------

select count(*) from hist_test
where VC_5 = 'DDDDDS'
  COUNT(*)
----------
        35

Execution Plan
----------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)|
---------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |     7 |     8   (0)|
|   1 |  SORT AGGREGATE    |           |     1 |     7 |            |
|*  2 |   TABLE ACCESS FULL| HIST_TEST |    49 |   343 |     8   (0)|
---------------------------------------------------------------------

For column VC_5 it all works as we would expect: the CBO estimates 100 rows for DDDDDA and 49 rows for DDDDDS. I’ve shown the actual count(*) results as well, to show there is a real variation in the numbers. There are good reasons why the estimates do not match reality; I won’t go into them now, but this does highlight that histograms can help, yet they do have flaws.

What about the other rows? We are only interested in the WHERE clause and the estimate for the number of rows from the table access, so I’ll show only those.

where VC_7 = 'FFFFFFFA'
|*  2 |   TABLE ACCESS FULL| HIST_TEST |   100 |   900 |     8   (0)|

where VC_7 = 'FFFFFFFS'
|*  2 |   TABLE ACCESS FULL| HIST_TEST |    49 |   441 |     8   (0)|

where VC_16 = 'HHHHHHHHHHHHHHHHA'
|*  2 |   TABLE ACCESS FULL| HIST_TEST |   100 |  7200 |     8   (0)|

where VC_16 = 'HHHHHHHHHHHHHHHHS'
|*  2 |   TABLE ACCESS FULL| HIST_TEST |    47 |   216 |     8   (0)|

where VC_40 = 'IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIA'
|*  2 |   TABLE ACCESS FULL| HIST_TEST |  1000 | 42000 |     8   (0)|

where VC_40 = 'IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIS'
|*  2 |   TABLE ACCESS FULL| HIST_TEST |  1000 | 42000 |     8   (0)|

{and just to show there is no typo in the string of 'I's above...}
where VC_40 = lpad('I',40,'I')||'S'
|*  2 |   TABLE ACCESS FULL| HIST_TEST |  1000 | 42000 |     8   (0)|

In the above you can see that the CBO estimates the rows for VC_7 exactly as it did for VC_5, despite the endpoint_values no longer distinguishing the buckets.

Even more telling, for VC_16 the CBO is also able to predict that there are more values ending in A than in S.

As for VC_40, the CBO utterly fails to spot the skew. In fact it utterly fails to spot that there is more than one value in the column (look back at the section showing min and max values and number of distinct values), and there is only one histogram bucket, so I suppose it is no surprise.

How does the CBO still detect the skew for VC_7 and VC_16, even though the endpoint_values are the same?
Because it swaps to using the column ENDPOINT_ACTUAL_VALUE.

ENDPOINT_ACTUAL_VALUE is only populated for varchar2 columns when the precision of endpoint_value is exceeded:

colname ROWCOUNT MOD_REAL          ENDPOINT_ACTUAL_VALUE
------- ---------------------------------------------
VC_5           7 DDDDDR
VC_5           8 DDDDDV
VC_5           9 DDDDDY
VC_5          10 DDDDDZ
VC_6           0 EEEEEF8           EEEEEEA
VC_6           1 EEEEEF8           EEEEEEC
VC_6           2 EEEEEF8           EEEEEEF
VC_6           3 EEEEEF8           EEEEEEH
VC_6           4 EEEEEF8           EEEEEEK
VC_6           5 EEEEEF8           EEEEEEN
VC_6           6 EEEEEF8           EEEEEEP
VC_6           7 EEEEEFn           EEEEEES
VC_6           8 EEEEEFn           EEEEEEU
VC_6           9 EEEEEFn           EEEEEEX
VC_6          10 EEEEEFn           EEEEEEZ
VC_7           1 FFFFFGK           FFFFFFFA
VC_7           2 FFFFFGK           FFFFFFFC
VC_7           3 FFFFFGK           FFFFFFFF
VC_7           4 FFFFFGK           FFFFFFFI
VC_7           5 FFFFFGK           FFFFFFFK
VC_7           6 FFFFFGK           FFFFFFFO
VC_7           7 FFFFFGK           FFFFFFFR
VC_7           8 FFFFFGK           FFFFFFFT
VC_7           9 FFFFFGK           FFFFFFFW
VC_7          10 FFFFFGK           FFFFFFFZ
VC_16          1 HHHHHI:           HHHHHHHHHHHHHHHHA
VC_16          2 HHHHHI:           HHHHHHHHHHHHHHHHD
VC_16          3 HHHHHI:           HHHHHHHHHHHHHHHHG
VC_16          4 HHHHHI:           HHHHHHHHHHHHHHHHJ
VC_16          5 HHHHHI:           HHHHHHHHHHHHHHHHL
VC_16          6 HHHHHI:           HHHHHHHHHHHHHHHHO
VC_16          7 HHHHHI:           HHHHHHHHHHHHHHHHS
VC_16          8 HHHHHI:           HHHHHHHHHHHHHHHHU
VC_16          9 HHHHHI:           HHHHHHHHHHHHHHHHX
VC_16         10 HHHHHI:           HHHHHHHHHHHHHHHHZ
VC_40      1,000 IIIIIJM

You can see that VC_5 columns have no ENDPOINT_ACTUAL_VALUEs but VC_6, VC_7 and VC_16 do. VC_40 does not.
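
If you just want a quick look at which columns have actual values stored, without my translation functions, a simple count against the dictionary does it {my query, nothing clever – count(endpoint_actual_value) only counts the non-null ones}:

select column_name
      ,count(*)                      buckets
      ,count(endpoint_actual_value)  actual_vals_stored
from   dba_tab_histograms
where  owner = user
and    table_name = 'HIST_TEST'
group by column_name
order by column_name;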

So, at what point does the CBO stop storing values in ENDPOINT_ACTUAL_VALUE? 32 characters. I created another test table, HIST_TEST2, and these are the low/high values, number of distinct values and number of histogram buckets:

COL_NAME NUM_DIST LOW_V                HI_V                 N_BUCK  AVG_L
-------------------------------------------------------------------------
ID          1,000 1                    1000                     10      4
NUM_1          10 0                    9                        10      3
VC_1            6 AAAAA                FFFFF                     6      6
VC_30          26 AAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAA     10     32
VC_31          26 BBBBBBBBBBBBBBBBBBBB BBBBBBBBBBBBBBBBBBBB     10     33
VC_32           1 CCCCCCCCCCCCCCCCCCCC CCCCCCCCCCCCCCCCCCCC      1     34
VC_33           1 DDDDDDDDDDDDDDDDDDDD DDDDDDDDDDDDDDDDDDDD      1     35
VC_34           1 EEEEEEEEEEEEEEEEEEEE EEEEEEEEEEEEEEEEEEEE      1     36
VC_35           1 FFFFFFFFFFFFFFFFFFFF FFFFFFFFFFFFFFFFFFFF      1     37

The name of the column is the length of the fixed string, so the columns are actually one character longer, as can be seen by the “AVG_L” column.

And just to check {as, after all, this is just a nice theory until you test it}, I skew the number of records ending in 'A' for VC_31 and VC_32:

update hist_test2
set
 VC_31=lpad('B',31,'B')||'A'
,VC_32=lpad('C',32,'C')||'A'
where mod(id,10)=1
/

And see what the CBO thinks the rows to be identified are:-

where VC_31 = lpad('B',31,'B')||'A'
|*  2 |   TABLE ACCESS FULL| HIST_TEST2 |   100 |  3300 |

where VC_31 = lpad('B',31,'B')||'S'
|*  2 |   TABLE ACCESS FULL| HIST_TEST2 |    49 |  1617 |

where VC_32 = lpad('C',32,'C')||'A'
|*  2 |   TABLE ACCESS FULL| HIST_TEST2 |  1000 | 34000 |

where VC_32 = lpad('C',32,'C')||'S'
|*  2 |   TABLE ACCESS FULL| HIST_TEST2 |  1000 | 34000 |

Yes. The CBO can see the skew for the 32-character values of VC_31, and not for the 33-character values of VC_32.

So, in conclusion:

  • endpoint_value is only accurate for varchar2 columns up to around 7 characters.
  • If the varchar2 value exceeds the precision of the endpoint_value column then endpoint_actual_value is used.
  • endpoint_actual_value is used for varchar2 columns up to 32 characters; it is not populated at all for values that differ within the first 7 or so characters, where endpoint_value suffices.
  • Histograms will not help with columns holding many values that share the same leading 32 or more characters.
  • If your column’s values share the same leading 32 or more characters, even the number of distinct values is not gathered by DBMS_STATS.
  • You can’t tell how SQL/the CBO will work from theory, you have to test. And even then, consider your knowledge as open to correction :-)
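
{One last practical tip falling out of all the above: before trusting a histogram on a varchar2 column, check whether the values still differ within those precision limits. A sketch of the sort of query I mean, against my test table:}

-- if dist_7 collapses, endpoint_value cannot tell the buckets apart;
-- if dist_32 collapses, even endpoint_actual_value cannot help
select count(distinct vc_40)              dist_all
      ,count(distinct substr(vc_40,1,7))  dist_7
      ,count(distinct substr(vc_40,1,32)) dist_32
from   hist_test;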

Friday Philosophy – What ever happened to System Design? September 14, 2009

Posted by mwidlake in Architecture, performance.

{OK, so it is not Friday, I’ve got a lot on, OK?! Sheesh…}

I’m a Contractor/Consultant. As a Contractor I am expected to arrive On Site and within a week be making progress.  As a Consultant the week becomes a day.

That’s fine, it’s the territory. However, there is a question I always ask and I know other people ask it when they arrive on site:-

“Can I please have a look at the database design”.

We normally mean an Entity Relationship diagram, or at least a schema design, by this. The usual answer these days?

”We keep meaning to do one, but it never seems to quite get done. Here are some misleading and poor notes on a few of the schemas; if you get an ER diagram together, we would love to see it”.

Why does no one do an ER diagram these days, and why is there never a picture of the tables and relationships that you can look at for a system? Despite the fact that anyone new to the system hopes one will exist? What happened to the skill of logical design?!

When did the ethos of having an overall schema layout just go out of the window??? And how in hell do You (yes, You, the system owner) expect to have a clue about the overall system without one?

Oh, and what is that I hear? You want me to improve the performance of your system without having a description of said system!? You have no clue of the overall database design but you want it to run faster?!? I better set the hidden “_RUN_DATABASE_FASTER” parameter to TRUE immediately then.

Is this just me or WHAT?

Another Day, Another Obscure Oracle Error September 11, 2009

Posted by mwidlake in internals, performance.

I seem to be spending my life hitting odd bugs or errors in Oracle’s performance-related areas at the moment. I guess part of it is because I am using features not widely touched, part is that I’m not working as the SYS or SYSTEM user {in my experience a lot of people working on such things do it when connected as sysdba or at least as SYSTEM}, and part is that Larry Ellison personally hates me {he fills in for God when it comes to Oracle}.

I’m seeing this a lot today, and it seems virtually unknown:-

ERROR at line 1:
ORA-20003: Unable to gather system statistics
ORA-06512: at "SYS.DBMS_STATS", line 15882
ORA-06512: at line 1

I’m also seeing this, but this error is documented:-

ERROR at line 1:
ORA-20000: Unable to gather system statistics : insufficient privileges
ORA-06512: at "SYS.DBMS_STATS", line 15822
ORA-06512: at line 1

The rest of this post will explain my path to only partial enlightenment. If you have hit the above errors and found this page via Google, this might help you out and you might like to skip towards the end.
If you just read my blog, regard this as a very long “tweet” or whatever the hell they are. Again, feel free to skip to the end. Or just skip.

The task which is causing me to hit these issues is that I’m trying to assure myself of the ability to roll back gathering system statistics before getting the DBA team to do it on a production system. {I am not in the DBA team on this one}.

No system statistics have been gathered yet so I have the default set, I’m sure many of you will recognise them:

SNAME           PNAME                     PVAL1 PVAL2
---------------------------------------------------
SYSSTATS_MAIN   CPUSPEED
SYSSTATS_MAIN   CPUSPEEDNW         1,567.798516
SYSSTATS_MAIN   IOSEEKTIM             10.000000
SYSSTATS_MAIN   IOTFRSPEED         4,096.000000
SYSSTATS_MAIN   MAXTHR
SYSSTATS_MAIN   MBRC
SYSSTATS_MAIN   MREADTIM
SYSSTATS_MAIN   SLAVETHR
SYSSTATS_MAIN   SREADTIM
SYSSTATS_INFO   DSTART                          07-25-2006 12:39
SYSSTATS_INFO   DSTOP                           07-25-2006 12:39
SYSSTATS_INFO   FLAGS                  1.000000
SYSSTATS_INFO   STATUS                          COMPLETED

I decided to pull the current system statistics into a statistics table:-

exec dbms_stats.create_stat_table('DWPERF','AD_BF_STATS','DW_COMMON_MG')

EXEC DBMS_STATS.EXPORT_SYSTEM_STATS -
('AD_BF_STATS','SYSSTAT'||TO_CHAR(SYSDATE,'YYMMDDHH24MI'),'DWPERF')

SELECT * FROM AD_BF_STATS;
SYSSTAT0909101309 S 4 1 COMPLETED
07-25-2006 12:39 07-25-2006 12:39 CPU_SERIO

SYSSTAT0909101309 S 4 1
PARIO
10 4096 1567.79852

 
Two records are created: the first is the header record describing what this statid is all about, the second contains the stored values; you can see the 10, 4096 and 1567.79852 from the defaults above.

All well and good, I can save my stats. Why do I want to? Because we may gather system statistics on an infrequent but regular basis, and I want a record of them. The automatically stored history only goes back a month {by default}.
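
{For the record, this is the sort of call I intend to use to put the saved stats back if a gather goes bad – a sketch, where the statid must be whatever was used at save time. DBMS_STATS.RESTORE_SYSTEM_STATS covers the automatic history case:}

-- put back the stats saved earlier in my stats table
exec dbms_stats.import_system_stats(stattab=>'AD_BF_STATS' -
,statid=>'SYSSTAT0909101309',statown=>'DWPERF')

-- or wind the dictionary back to what it held a day ago
exec dbms_stats.restore_system_stats(as_of_timestamp=>systimestamp-interval '1' day)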

Of course, there is no need for me to explicitly export the stats to the table, it can be done as part of the gathering process, by stating the statistics table, owner and an ID when I do the gather, so I did:-

exec dbms_stats.gather_system_stats(gathering_mode => 'INTERVAL',interval => 15 -
,stattab=>'AD_BF_STATS',statown=>'DWPERF',statid=>'SYSSTAT'||TO_CHAR(SYSDATE,'YYMMDDHH24MI'))

I went off for a cup of coffee, came back, checked my system stats.

SNAME           PNAME                     PVAL1 PVAL2
---------------------------------------------------
SYSSTATS_MAIN   CPUSPEED
SYSSTATS_MAIN   CPUSPEEDNW         1,567.798516
SYSSTATS_MAIN   IOSEEKTIM             10.000000
SYSSTATS_MAIN   IOTFRSPEED         4,096.000000
SYSSTATS_MAIN   MAXTHR
SYSSTATS_MAIN   MBRC
SYSSTATS_MAIN   MREADTIM
SYSSTATS_MAIN   SLAVETHR
SYSSTATS_MAIN   SREADTIM
SYSSTATS_INFO   DSTART                          07-25-2006 12:39
SYSSTATS_INFO   DSTOP                           07-25-2006 12:39
SYSSTATS_INFO   FLAGS                  1.000000
SYSSTATS_INFO   STATUS                          COMPLETED

Oh. No change. No system stats gathered. Damn.

Well, it’s a test box, there may be too little workload to detect anything – like none! I checked the SGA for recent code and the only SQL was that for recording the stats gathering :-).
So, repeated with me running some workload in another window for 15 minutes.

Still no change, still no system stats. Double Damn!

I remembered from reading around that if MREADTIM is less than SREADTIM the stats gathered can be ignored, so I tried again.
And again.
And then decided it would not magically work, there was a problem.

I’ll gather the system stats with START and STOP and really hammer the box for a while, checking with autotrace that real work is being done:
exec dbms_stats.gather_system_stats(gathering_mode => 'START',stattab=>'AD_BF_STATS' -
,statown=>'DWPERF',statid=>'SYSSTAT'||TO_CHAR(SYSDATE,'YYMMDDHH24MI'))

work-work-work-work-work

exec dbms_stats.gather_system_stats(gathering_mode => 'STOP');
BEGIN dbms_stats.gather_system_stats(gathering_mode => 'STOP'); END;
*
ERROR at line 1:
ORA-20003: Unable to gather system statistics
ORA-06512: at "SYS.DBMS_STATS", line 15882
ORA-06512: at line 1

Oh.

Oh Hell. Well, at least I now know that the chances are my INTERVAL system stats collection failed with this too, but silently.

I bet if I dug around I would find a few failed jobs recorded where the INTERVAL version of the stats collection failed with this. I have not checked, I am just up to my eyes right now.

I found nothing at all via Google or Metalink about this exact error for gathering system stats. BUT maybe I lacked the correct privileges on my non-SYS account. So I granted ANALYZE ANY DICTIONARY and ANALYZE ANY to my user, as those are the privileges mentioned for many of the DBMS_STATS management procedures.

Nope, did not help.

Blow it. If DBMS_STATS won’t let me keep the stats in my table, I’ll just rely on the default store of them, and not state my stats table in the call:

DWD2_DWPERF> exec dbms_stats.gather_system_stats(gathering_mode => 'START')
BEGIN dbms_stats.gather_system_stats(gathering_mode => 'START'); END;
*
ERROR at line 1:
ORA-20000: Unable to gather system statistics : insufficient privileges
ORA-06512: at "SYS.DBMS_STATS", line 15822
ORA-06512: at line 1

Oh. Well, that’s interesting.
If I don’t state a stats table the gather fails when initiated.
If I state a stats table the gather fails when I stop.
How frustrating.

I decided to give up. Then I decided I won’t be beaten and to grant myself DBA role and see if it works.

DWDEV2> exec dbms_stats.gather_system_stats(gathering_mode => 'START')
PL/SQL procedure successfully completed.

YEEEESSSSSS!!!

But will it now error when I stop the gather…

DWDEV2> exec dbms_stats.gather_system_stats(gathering_mode => 'STOP')
PL/SQL procedure successfully completed.

Deep Joy.

And it gathered stats.

SNAME           PNAME                     PVAL1 PVAL2
------------------------------------------
SYSSTATS_MAIN   CPUSPEED           1,545.000000
SYSSTATS_MAIN   CPUSPEEDNW         1,567.798516
SYSSTATS_MAIN   IOSEEKTIM             10.000000
SYSSTATS_MAIN   IOTFRSPEED         4,096.000000
SYSSTATS_MAIN   MAXTHR
SYSSTATS_MAIN   MBRC
SYSSTATS_MAIN   MREADTIM
SYSSTATS_MAIN   SLAVETHR
SYSSTATS_MAIN   SREADTIM               5.000000
SYSSTATS_INFO   DSTART                          09-11-2009 13:02
SYSSTATS_INFO   DSTOP                           09-11-2009 13:02
SYSSTATS_INFO   FLAGS                  1.000000
SYSSTATS_INFO   STATUS                          COMPLETED

{ignore the lack of MREADTIM and SREADTIM values, nothing happened on the box in the few seconds I was collecting for}

So, the lack of the DBA role was a problem, but it had been hidden by my attempts to use a stats table. Now surely I can do what I wanted to and store the stats in my stats table…

exec dbms_stats.gather_system_stats(gathering_mode => 'START',stattab=>'AD_BF_STATS' -
,statown=>'DWPERF',statid=>'SYSSTAT'||TO_CHAR(SYSDATE,'YYMMDDHH24MI'))

PL/SQL procedure successfully completed.

Wehay.
work-work-work-work-work

exec dbms_stats.gather_system_stats(gathering_mode => 'STOP')
BEGIN dbms_stats.gather_system_stats(gathering_mode => 'STOP'); END;
*
ERROR at line 1:
ORA-20003: Unable to gather system statistics
ORA-06512: at "SYS.DBMS_STATS", line 15882
ORA-06512: at line 1

AAAAAGGGGHHHHHH!!!!!!!!!

Let’s check back.

exec dbms_stats.gather_system_stats(gathering_mode => 'START')
PL/SQL procedure successfully completed.

work-work-work-work-work

exec dbms_stats.gather_system_stats(gathering_mode => 'STOP')
PL/SQL procedure successfully completed.

That works, because I am not involving a stats table. If you look back at the two errors, they come from different (but very similar) line numbers. I’d missed that for a couple of hours; I blame dyslexia.

Conclusion:

Gathering and storing system stats is as buggy and random as the rest of the DBMS_STATS functionality {yes, I am peeved about this}. It seems:

  • If you gather system stats with the INTERVAL method, it can silently fail. This might only be if you involve a stats table.
  • You need the DBA role to gather system stats, otherwise you get ORA-20000, via ORA-06512 at line 15822.
  • ANALYZE ANY DICTIONARY and ANALYZE ANY are not enough to allow a non-DBA account to gather system stats.
  • If you try and store the previous system stats in your stats table as part of the gather you get ORA-20003, via ORA-06512 at line 15882 {a different line}, even if you have the DBA role.
  • If you just rely on Oracle preserving the previous version of the stats, you can gather system stats fine.
  • You can work around the issue by exporting the system stats to your stats table as a single step first, then gathering system stats without involving your stats table (sketched below).
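
In other words, the workaround looks like this {a sketch, using the same stats table as above}:

-- step 1: save the current system stats as a separate, single step
exec dbms_stats.export_system_stats('AD_BF_STATS' -
,'SYSSTAT'||TO_CHAR(SYSDATE,'YYMMDDHH24MI'),'DWPERF')

-- step 2: gather without mentioning the stats table at all
exec dbms_stats.gather_system_stats(gathering_mode => 'START')
-- ...representative workload...
exec dbms_stats.gather_system_stats(gathering_mode => 'STOP')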

There is no Metalink note about the ORA-20003 via line 15882 and no Google hit for it, so I reckon it is a damned rare event. And ever since the last time I blogged about a damned rare error, I have seen 1 or 2 hits on that blog page every day :-).

Your Backups Are Probably Too Simple September 10, 2009

Posted by mwidlake in Architecture.

Following on from my post a few days back on When a Backup is Not a Backup I want to pick up on the topic of Simplicity, something Piet de Visser, Neil Chandler and I regularly discuss.

It’s very good to ask yourself the question “how to keep oracle backups simple“. A simple backup may not be the quickest to run or the fastest to recover from but it is a lot more likely to work. Which is the one critical defining feature of a backup!

However, the backing up of the Oracle database itself is not what I have in mind; after all, any DBA who is beyond the trainee stage should know how to do it, and there are many options with Oracle to achieve a good, simple backup (cold backup, full export, manual hot backups, simple RMAN).

{As an aside, something that really bugs me about Oracle at the moment is their utter focus on RMAN in the backup documentation – for example, the 10gR2 “Backup and Recovery Basics” manual is almost dedicated to RMAN. It should not be; it is not called the “Recovery Manager Basics” manual. If you go to the Advanced guide it is all pretty much RMAN until you get to chapter 17, where you eventually find a reasonable set of information on creating a simple, user-controlled hot backup. Or use Google on “End backup Oracle” and there is endless information and help. Such a simple backup method should be made more of by Oracle Corp, I believe.}
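
{And to show just how simple such a user-controlled hot backup is, here is its core. A sketch, not a complete script – you would loop over all tablespaces, the file paths are made up for illustration, and you still need the archived redo and a controlfile backup to recover:}

alter tablespace users begin backup;
-- copy the tablespace's datafiles at the OS level while in backup mode, e.g.
-- host cp /u01/oradata/mydb/users01.dbf /backups/mydb/
alter tablespace users end backup;
alter system archive log current;            -- make sure the redo needed to recover is archived
alter database backup controlfile to trace;  -- and keep a recreatable controlfile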

My concern is that the Oracle database is very often only one component of the whole system. Note the “whole”.

First up – do you use Database Links to read data between the database you are considering and others? More to the point, do you do two-phase commits between your database of concern and other Oracle databases? If so, you may well need to keep the data in-synch between them. {Don’t forget – your applications may independently update multiple databases without using Database Links}. How exactly do you manage that? Do you have a policy that if database (a) has to be recovered, you also recover database (b), both to the same point-in-time?

Second, and probably most common, do you have files external to the database that are referenced, say as BFILES or via the application? If you do a recovery, do you need to make sure those files are in-synch with the database? You may well have the situation where files on the server/file store are backed up at one time and the database at another. Loss of the whole server will need a full recovery of both backups, which were taken at different times.

Three, are there other dependencies, such as password authentication, account creation, audit maintenance?

Four, if your system crashes when it was half way through a batch process, how do you cleanly resume the batch, or clear it down and start again (having made sure the consideration of external files above has been resolved)?

 I’m sure you get the idea now. Think about it.

Test recoveries are good at identifying the above issues, but only if the test recovery includes testing the application(s) that use the database {as opposed to the usual “select sysdate from dual – yep, it works” test}.

The above considerations are part of why I am a fan of keeping all the things to do with a business application in the Oracle database, even if it might not be the ideal from other perspectives (eg images make the database a lot larger and may not be processed as fast as they can be as external files).

All of the above are probably the first steps towards creating your Disaster Recovery Plan. “Disaster Recovery” is a whole topic in its own right. Actually, it’s a whole business and one people have very lucrative careers in. But if you are the DBA and have the backups nailed, then Disaster Recovery (what to do if the building burns down) is probably next on your list of things to worry about.

When a Backup is Not a Backup September 8, 2009

Posted by mwidlake in Architecture.

I had a call from an old client today who had found themselves in a bit of a pickle. They were facing what I’d judge is the third most common “avoidable disaster” for Oracle databases. And probably the most devastating.

They had a damaged database, just one of those things. Media failure had corrupted a couple of datafiles. This is not the disaster; most DBAs who have been around for a few years have experienced this. Disks are mechanical devices, they go wrong; controllers develop faults; people accidentally pull the plugs out of switches.

The disaster was that they had never tested their backup. Guess what? It turned out that their backup was not a valid backup. It was just a bunch of data on tape {or disk, or wherever they had put the backup}.

A backup is not a backup until you have proven you can recover from it.

If your backup has not been tested, and by that I mean used to recreate the working system which you then test, then it is just a bunch of data and a hopeful wish. You will not know if it works until you are forced to resort to it, and that means (a) you already have a broken system and an impact on your business (b) time is going to be short and pressure is high and (c) if you are the DBA, you could be about to fail in your job in a most visible and spectacular way.

Oddly enough, only two or three weeks ago I had another client in exactly the same position. Database is corrupted, the backup had not been tested, the backup turned out to be a waste of storage. In fact, in this case, I think the backup had not been taken for a considerable period of time as a standby database was being relied on as the backup. Which would have been fine if the Standby had been tested as fulfilling the purpose of “working backup”.

The standby had not, as far as I could deduce, ever been opened and tested as a working and complete database since the last major upgrade to the host system. When tested for real, it proved not to be a working and complete database. It was an expensive “hopeful wish”.

The client from a few weeks ago finally got their system back, but it took a lot of effort for the DBAs and the developers to sort it out and they were a tad lucky. The jury is out on the client who called me today.

I can’t think of anything at all that the DBA function does that is more important than ensuring backups are taken and work. {Maybe an argument could be made that creating the system is more important as nothing can be done about that, but then you could argue that the most important thing you do in the morning is get out of bed}. Admittedly, setting up, running and testing backups is not a very exciting job. In fact it often seems to be a task passed on to a less experienced member of the team {just like creating databases in the first place}. But most of us are not paid to have fun, we are paid to do a job.

I’ll just make a handful more comments before giving up for today.

  • The database backup nearly always needs to be part of a more comprehensive backup solution. You need a way to recreate the oracle install, either a backup of the binaries and auxiliary files (sql*net, initialization files, password files etc) or at least installation media and all patches so you can recreate the installation. You need a method to recover or recreate your application and also your monitoring. You might need to be able to recreate your batch control. O/S? Have you covered all components of your system, can you delete any given file off your system and fix it?
  • Despite the potential complexity resulting from the previous point, you should keep your backup as simple as you can. For example, if you only have a handful of small databases, you might just need old-style hot backups via a shell script, rather than RMAN. Only get complex {like backing up from a standby} if you have a very, very compelling business need to do so.
  • Testing the backup once is a whole world better than never testing it. However, regular, repeated recovery tests not only allow the DBA/Sys Admin teams to become very comfortable with the recovery process and help ensure it is swift and painless, but, by trying different scenarios, you may well discover issues that come under the first two points.

I’ve not even touched on the whole nightmare of Disaster Recovery :-)

Friday Philosophy – Disasters September 4, 2009

Posted by mwidlake in Perceptions.

Some of you may be aware that I occasionally do presentations called something like:-

“5 Ways to Progress your Career Through Systems Disasters”

The intention of the presentations is to comment on things I have “been in the vicinity of” going wrong in I.T., and ways to avoid them, in a light-hearted manner: having a bit of a laugh whilst trying to make some serious points about project management, infrastructure, teams, people and the powers of chaos.

I’ll see about putting one up on my Web Site so that you can take a look {check back this weekend if you like}.

The talks usually go down well but there are two potential issues in giving the talks:

  • The disasters or problems, if in some way my fault, could make me look like an idiot or incompetent.
  • If it is known who I was working for when I “witnessed a disaster”, it could make that company look bad.

I never used to worry too much about this when I worked permanently for a company that was pretty relaxed about “stuff happens, it is good to share”. After all, if I looked like an idiot then that is fair enough and if I said anything that could be linked back to a prior (or current) employer {which, I hasten to point out, I did aim to avoid} then, well, sorry. But I only said things that were true.

However, when I returned to being self-employed, a good friend of mine took me to one side and suggested such talks could harm my career. I argued that it was not malicious and was helpful to people. My friend argued back that potential employing companies would not look so favourably on this, especially if they suspected that they may one day feature.

Hmmmm…. This was true.

So I toned down the talk…

The next time I did the presentation, the sanitised one, it was not such a hit. In fact, it was a bit rubbish.

The question is, should I have toned it down? Hands up anyone who has not personally done something unbelievably stupid at least once in their working life? Can everyone who has worked for an organisation that has not messed up at least one I.T. project please also raise their hand?

I can’t see any raised hands from here :-)

We all make mistakes.
All companies get things wrong at times.

Something you find when you start presenting or organising events is that the talks people most appreciate and learn the most from are about things going wrong.

So why can’t we all be grown-ups about admitting them, talking about them and learning? Personally, when I have interviewed people for jobs, I am always impressed by someone who will admit to the odd failure, especially if they can show what they learnt from it.

Oh, if anyone is reading this before offering me a position, I never made a mistake in my life, honest. I never deleted every patient record from a hospital information system, I was not even on-site when the incident didn’t happen. And if anyone suggest otherwise, it was a long time ago when it didn’t happen. ..

{I got all the data back, anyway. Never start work without a backup…}

Decrypting Histogram Data #2 September 3, 2009

Posted by mwidlake in internals, performance.

Hint – if you have a column where most or all of the entries have the same leading 15-plus characters, don’t bother gathering histograms on it. They will not help you a lot. Or at all. E.g. you have a column that holds “CUSTOMER IDENTIFIER xxxxxxxxx”, “CUSTOMER IDENTIFIER yyyyyyyy”, etc. Of course, good design suggests that the “CUSTOMER IDENTIFIER” bit is probably redundant and could be got rid of, but we live in the real world and may not have the power or ability to enforce that change, or we might have 3 or 4 such “prefix” strings.

Further, histograms on numerics longer than 15 significant digits will also potentially not behave as you would expect.

I better justify my assertion.

In the previous post on decrypting histogram data I covered how Oracle turns a varchar2 value into a numeric value that is held in the ENDPOINT_VALUE column of DBA_TAB_HISTOGRAMS, and I also gave you a cunning (but slightly flawed) function to convert it back. I use it in this post, so you might want to go look at that prior post. Sorry this is long, I can’t stop rambling.

First, I’ll create some test data {oh, this is on 10.2.0.3 on Linux}. The below script creates a table HIST_TEST with columns NUM_1 through to NUM_7, which hold numbers padded out to ever greater lengths, with only the last digit varying. Similarly, columns VC_2 to VC_8 are padded out and have a random character added. VC_1 is a random 5-character string.

create table hist_test
as select rownum  id
,mod(rownum,10)  num_1
,trunc(dbms_random.value(1,10)) num_2
,1000+trunc(dbms_random.value(1,10)) num_3
,1000000+trunc(dbms_random.value(1,10)) num_4
,1000000000+trunc(dbms_random.value(1,10)) num_5
,1000000000000+trunc(dbms_random.value(1,10)) num_6
,1000000000000000+trunc(dbms_random.value(1,10)) num_7
,lpad(chr(mod(rownum,6)+65),5,chr(mod(rownum,6)+65) )vc_1
,lpad('A',2,'A')||dbms_random.string('U',1) vc_2
,lpad('B',3,'B')||dbms_random.string('U',1) vc_3
,lpad('C',4,'C')||dbms_random.string('U',1) vc_4
,lpad('D',5,'D')||dbms_random.string('U',1) vc_5
,lpad('E',6,'E')||dbms_random.string('U',1) vc_6
,lpad('F',7,'F')||dbms_random.string('U',1) vc_7
,lpad('G',8,'G')||dbms_random.string('U',1) vc_8
--below skews the table to AB.
,lpad('H',16,'H')||decode(mod(rownum,3),0,'A'
                                       ,1,'B'
                                       ,   dbms_random.string('U',1)) vc_16
,lpad('I',40,'I')||dbms_random.string('U',1) vc_40
from dba_objects
where rownum < 1001
/
begin
  dbms_stats.gather_table_stats(ownname=>user,tabname=>'HIST_TEST'
  ,method_opt=>'FOR ALL COLUMNS SIZE 10');
END;
/

The below is a simple select against DBA_TAB_COLUMNS to see the information for the columns {oh, I say simple, but you have to use the functions utl_raw.cast_to_number and utl_raw.cast_to_varchar2 to turn the raw values held in the columns LOW_VALUE and HIGH_VALUE into something we humans can read. Why does Oracle Corp have to make life so difficult? *sigh*}.

select
 column_name
,num_distinct num_dist
,decode(data_type,'NUMBER',to_char(utl_raw.cast_to_number(low_value))
                 ,'VARCHAR2',to_char(utl_raw.cast_to_varchar2(low_value))
                 ,          low_value
       ) low_v
,decode(data_type,'NUMBER',to_char(utl_raw.cast_to_number(high_value))
                 ,'VARCHAR2',to_char(utl_raw.cast_to_varchar2(high_value))
                 ,          high_value
       ) hi_v
,num_nulls    n_nulls
,num_buckets  n_buck
,avg_col_len  avg_l
from dba_tab_columns
where table_name ='HIST_TEST'
and owner=USER
order by owner,column_id
/

COLUMN_NAME       NUM_DIST LOW_V              HI_V                    N_NULLS N_BUCK  AVG_L
---------------------------------------------------------------------
ID                   1,000 1                  1000                          0     10      4
NUM_1                   10 0                  9                             0     10      3
NUM_2                    9 1                  9                             0      9      3
NUM_3                    9 1001               1009                          0      9      4
NUM_4                    9 1000001            1000009                       0      9      6
NUM_5                    9 1000000001         1000000009                    0      9      7
NUM_6                    9 1000000000001      1000000000009                 0      9      9
NUM_7                    9 1000000000000001   1000000000000009              0      9     10
VC_1                     6 AAAAA              FFFFF                         0      6      6
VC_2                    26 AAA                AAZ                           0     10      4
VC_3                    26 BBBA               BBBZ                          0     10      5
VC_4                    26 CCCCA              CCCCZ                         0     10      6
VC_5                    26 DDDDDA             DDDDDZ                        0     10      7
VC_6                    26 EEEEEEA            EEEEEEZ                       0     10      8
VC_7                    26 FFFFFFFA           FFFFFFFZ                      0     10      9
VC_8                    26 GGGGGGGGA          GGGGGGGGZ                     0     10     10
VC_16                   26 HHHHHHHHHHHHHHHHA  HHHHHHHHHHHHHHHHZ             0     10     18
VC_40                    1 IIIIIIIIIIIIIIIIII IIIIIIIIIIIIIIIIII            0      1     42

I now select the data out of the DBA_TAB_HISTOGRAMS table to see what is really being stored. For the sake of brevity {which means, “so I can finish this post tonight”} I’ll just show bits, but if you want, download the script right at the end and, if you created the functions from the prior post, you can play with this yourself.

First, here you can see some of the values actually stored in ENDPOINT_VALUE and what they translate into:-

TABLE_NAME   colname                        ENDPOINT_VALUE ROWCOUNT REAL_VAL
------------------------------------------------------------
Numbers
HIST_TEST    NUM_5                              1000000007      760 1000000007
HIST_TEST    NUM_5                              1000000008      870 1000000008
HIST_TEST    NUM_5                              1000000009    1,000 1000000009
HIST_TEST    NUM_6                           1000000000001      103 1000000000001
HIST_TEST    NUM_6                           1000000000002      219 1000000000002
HIST_TEST    NUM_6                           1000000000003      328 1000000000003
HIST_TEST    NUM_6                           1000000000004      427 1000000000004
Varchars
HIST_TEST    VC_2     338824386822815000000000000000000000        8 AAU
HIST_TEST    VC_2     338824624507302000000000000000000000        9 AAW
HIST_TEST    VC_2     338824782963627000000000000000000000       10 AAY
HIST_TEST    VC_3     344035480872391000000000000000000000        0 BBB@
HIST_TEST    VC_3     344035481491361000000000000000000000        1 BBBB
HIST_TEST    VC_3     344035482419816000000000000000000000        2 BBBE
HIST_TEST    VC_3     344035483348271000000000000000000000        3 BBBH
HIST_TEST    VC_3     344035483967241000000000000000000000        4 BBBJ

Note that for numerics the number itself is stored and I do not need to translate it.
For VARCHAR2 columns the value held is a 36-digit number, of which only the first 15 or so digits are significant; the rest is padded out with zeros.
Also note, for VC_2 the range covered seems to end at AAY not AAZ, and column VC_3 starts at BBB@ not BBBA {I am creating values with the last character set to A-Z}. Also, bucket 8 for VC_2 ends in a control character.
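
{You can check the translation the other way around too. Take bucket 8 of VC_2, value AAU: treat the leading bytes as base-256 digits, pad to 15 bytes and round to 15 significant figures, and you get exactly the ENDPOINT_VALUE stored above. A quick sketch:}

select round( ascii('A')*power(256,14)
             +ascii('A')*power(256,13)
             +ascii('U')*power(256,12)
            ,-21) calculated_end_val   -- rounding at -21 keeps the top 15 of the 36 digits
from dual;

                   CALCULATED_END_VAL
-------------------------------------
 338824386822815000000000000000000000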

To reduce this problem I add a fudge to my function {again, see the previous post}. It helps:

TABLE_NAME   colname ROWCOUNT REAL_VAL          MOD_REAL
---------------------------------------------------
HIST_TEST    VC_2           9 AAW               AAX
HIST_TEST    VC_2          10 AAY               AAZ
HIST_TEST    VC_3           0 BBB@              BBBA
HIST_TEST    VC_3           1 BBBB              BBBC
HIST_TEST    VC_3           2 BBBE              BBBF
HIST_TEST    VC_3           3 BBBH              BBBI
HIST_TEST    VC_3           4 BBBJ              BBBK
HIST_TEST    VC_3           5 BBBM              BBBN
HIST_TEST    VC_3           6 BBBO              BBBP
HIST_TEST    VC_3           7 BBBR              BBBS
HIST_TEST    VC_3           8 BBBT              BBBU
HIST_TEST    VC_3           9 BBBW              BBBX
HIST_TEST    VC_3          10 BBBY              BBBZ
HIST_TEST    VC_4           0 CCCC@             CCCCA
HIST_TEST    VC_4           1 CCCCB             CCCCC
HIST_TEST    VC_4           2 CCCCD             CCCCE
HIST_TEST    VC_4           3 CCCCH             CCCCH
HIST_TEST    VC_4           4 CCCCJ             CCCCJ

As you can see, I now get a better translation, but it still goes wrong sometimes (see last couple of rows). So, feel free to take my functions and use them, but be aware even the modified version is not perfect. If YOU perfect it, can I have a copy please :-)

The below shows that Numeric value histograms break when you exceed 15 digits:

colname                  END_VAL ROWCOUNT REAL_VAL
------- ------------------------ -------- --------
NUM_5                 1000000009    1,000 1000000009
NUM_6              1000000000001      103 1000000000001
NUM_6              1000000000002      219 1000000000002
NUM_6              1000000000003      328 1000000000003
NUM_6              1000000000004      427 1000000000004
NUM_6              1000000000005      542 1000000000005
NUM_6              1000000000006      651 1000000000006
NUM_6              1000000000007      771 1000000000007
NUM_6              1000000000008      881 1000000000008
NUM_6              1000000000009    1,000 1000000000009
NUM_7           1000000000000000      133 1000000000000000
NUM_7           1000000000000000      256 1000000000000000
NUM_7           1000000000000000      367 1000000000000000
NUM_7           1000000000000000      467 1000000000000000
NUM_7           1000000000000010      567 1000000000000010
NUM_7           1000000000000010      665 1000000000000010
NUM_7           1000000000000010      784 1000000000000010
NUM_7           1000000000000010      896 1000000000000010
NUM_7           1000000000000010    1,000 1000000000000010

This is the point at which the storage of numbers for histograms breaks. You can see that NUM_6 is fine but NUM_7 is not. That is because NUM_6 is below 15 significant digits but NUM_7 is over 15 significant digits, and it effectively gets truncated.

Histograms on numeric values with more than 15 significant digits will not work as you expect, possibly not at all.
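
{You do not even need a table to see this; the stored values behave as if the number had been rounded to 15 significant figures, which for these 16-digit values you can mimic with a simple ROUND. The results are exactly the two values NUM_7 collapses to above:}

select round(1000000000000001,-1) low_end
      ,round(1000000000000009,-1) high_end
from dual;

         LOW_END         HIGH_END
---------------- ----------------
1000000000000000 1000000000000010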

With VARCHAR2 values, histogram ENDPOINT_VALUEs break even sooner, at around 7 characters:

colname                               END_VAL ROWCOUNT MOD_REAL
------- ------------------------------------- -------- --------
VC_5     354460798876024000000000000000000000        5 DDDDDO
VC_5     354460798876038000000000000000000000        6 DDDDDR
VC_5     354460798876047000000000000000000000        7 DDDDDS
VC_5     354460798876061000000000000000000000        8 DDDDDV
VC_5     354460798876071000000000000000000000        9 DDDDDY
VC_5     354460798876080000000000000000000000       10 DDDDDZ
VC_6     359673457682977000000000000000000000        0 EEEEEF8
VC_6     359673457682977000000000000000000000        1 EEEEEF8
VC_6     359673457682977000000000000000000000        2 EEEEEF8
VC_6     359673457682977000000000000000000000        3 EEEEEF8
VC_6     359673457682977000000000000000000000        4 EEEEEF8
VC_6     359673457682977000000000000000000000        5 EEEEEF8
VC_6     359673457682977000000000000000000000        6 EEEEEF8
VC_6     359673457682977000000000000000000000        7 EEEEEF8
VC_6     359673457682978000000000000000000000        8 EEEEEFn
VC_6     359673457682978000000000000000000000        9 EEEEEFn
VC_6     359673457682978000000000000000000000       10 EEEEEFn
VC_7     364886116489977000000000000000000000        0 FFFFFGK
VC_7     364886116489977000000000000000000000        1 FFFFFGK
VC_7     364886116489977000000000000000000000        2 FFFFFGK
VC_7     364886116489977000000000000000000000        3 FFFFFGK
VC_7     364886116489977000000000000000000000        4 FFFFFGK
VC_7     364886116489977000000000000000000000        5 FFFFFGK
VC_7     364886116489977000000000000000000000        6 FFFFFGK
VC_7     364886116489977000000000000000000000        7 FFFFFGK
VC_7     364886116489977000000000000000000000        8 FFFFFGK
VC_7     364886116489977000000000000000000000        9 FFFFFGK
VC_7     364886116489977000000000000000000000       10 FFFFFGK

You can see that for column VC_5 the actual ENDPOINT_VALUE is varying at the 14th and 15th significant digits, and my translated value changes. But for VC_6 the numeric ENDPOINT_VALUE is hardly changing; each row translates to one of only two values.
For VC_7 the ENDPOINT_VALUE is static. All the histogram ENDPOINT_VALUEs are the same.

This means histograms will not work properly for any VARCHAR2 column whose values do not vary in the first 7 or so characters; any characters after the 7th are effectively ignored.

Or does it? My logic is correct but tomorrow (or soon after) I’ll try some actual tests over theory… {So don’t ping me with corrections just yet}

I’ve not mentioned VC_16 and VC_40 have I? I will tomorrow. Maybe :-|.

Finally, here is the selecting script, as promised.

col column_name form a7 head colname
col rowcount form 99,999
col real_val form a17
col end_val form 999999999999999999999999999999999999
col end_string form a10
col endpoint_actual_value form a40
col mod_real form a17
select --owner
 dth.table_name
,dth.column_name
,dth.endpoint_value  end_val
,dth.endpoint_number rowcount
,decode(dtc.data_type,'NUMBER',to_char(dth.endpoint_value)
            ,'VARCHAR2',hist_numtochar(dth.endpoint_value+1)
                 ,          dth.endpoint_value
       ) real_val
,decode(dtc.data_type,'NUMBER',to_char(dth.endpoint_value)
            ,'VARCHAR2',hist_numtochar2(dth.endpoint_value+1)
                 ,          dth.endpoint_value
       ) mod_real
,endpoint_actual_value
from dba_tab_histograms dth
,dba_tab_columns dtc
where dth.table_name ='HIST_TEST'
and  dth.owner=USER
and dth.owner=dtc.owner
and dth.table_name=dtc.table_name
and dth.column_name=dtc.column_name
order by dth.table_name,dth.column_name,dth.endpoint_number
/