
Data Dictionary Performance – reference September 29, 2009

Posted by mwidlake in internals, Perceptions.

I’ve got a couple more postings on Data Dictionary performance to come before I get to where I plan to go with this, but if you want to deep-dive into far more technical detail then go and check out Dion Cho’s excellent posting on fixed object indexes.

I was not planning on getting into the sys.x$ fixed objects, as you need SYS access to look at them, which not everyone has, but this is where Dion goes. His posts are always good; I should read them far more often.

As a soft-technical aside, I often mention to people when doing courses on SQL or writing standards, or even on the odd occasions I’ve discussed perception, that we Westerners are taught to read from left to right, top to bottom, and we pick out left justification very well. We find code laid out like the example below easy to read:

select pers.name1                surname
      ,pers.name2                first_forename
      ,pers.nameother            middle_names
      ,peap.appdate              appointment_date
      ,decode (addr.name_num_ind
                 ,'N', to_char(addr.housenum)
                 ,'V', addr.housename
                 ,'B', to_char(addr.housenum)
                               ||' '||addr.housename)
                                  house_no_name
      ,addr.address2             addr_street
      ,addr.address3             addr_town
      ,addr.address4             addr_dist
      ,addr.code                 addr_code
from person               pers
    ,address              addr
    ,person_appointments  peap
where pers.addr_id     =addr.addr_uid
and   pers.pers_id     =peap.pers_id
and   pers.active_fl   ='Y'
and   pers.prim_cons   ='ANDREWSDP'
and   peap.latest_fl   ='Y'

But this is not true of other cultures, where people do not read left to right, top to bottom. I have had this confirmed a couple of times when people born in Eastern cultures have been in the course or conversation.

So I was very interested to see Dion’s Korean version of the blog post I reference above (I really hope this link here to the Korean version is stable).
The main body of the page is on the right, not left, but the text appears to be left justified.

Of course, I am horribly ignorant, I do not know which direction Koreans read in :-(. I could be spouting utter rubbish.

Data Dictionary Performance #2 September 28, 2009

Posted by mwidlake in internals, performance.

A couple of posts ago I demonstrated that the DBA_ views are generally faster than the ALL_ views and that adding an owner filter to DBA_ views is potentially quicker than selecting from the USER_ views.

Something else I do automatically, to help performance, is always specify and link via the schema owner.

The below is based on a 10.2.0.3 database with fixed object statistics gathered, though a long while ago. I have a test partitioned table called TEST_P and I want to see how many blocks are in the table and also in the table partitions, so I link between DBA_TABLES and DBA_TAB_PARTITIONS. I only specify the table name as I know there is only one table called TEST_P:

select dbta.table_name,dbta.blocks
       ,dtp.partition_name,dtp.blocks
from dba_tables         dbta
    ,dba_tab_partitions dtp
where dbta.table_name=dtp.table_name
and   dbta.table_name='TEST_P'

TABLE_NAME     BLOCKS PARTITION_     BLOCKS
---------- ---------- ---------- ----------
TEST_P           7394 ID_MAX            370
TEST_P           7394 ID_40K           1756
TEST_P           7394 ID_30K           1756
TEST_P           7394 ID_20K           1756
TEST_P           7394 ID_10K           1756
5 rows selected.
Elapsed: 00:00:00.04

Statistics
---------------------------------------------
          0  recursive calls
          0  db block gets
        782 consistent gets

Now I do the same but specify the schema owner in the join.

select dbta.table_name,dbta.blocks
      ,dtp.partition_name,dtp.blocks
from dba_tables         dbta
    ,dba_tab_partitions dtp
where dbta.table_name =dtp.table_name
AND   DBTA.OWNER      =DTP.TABLE_OWNER
and   dbta.table_name ='TEST_P'

TABLE_NAME     BLOCKS PARTITION_     BLOCKS
---------- ---------- ---------- ----------
TEST_P           7394 ID_MAX            370
TEST_P           7394 ID_40K           1756
TEST_P           7394 ID_30K           1756
TEST_P           7394 ID_20K           1756
TEST_P           7394 ID_10K           1756
5 rows selected.
Elapsed: 00:00:00.04

Statistics
--------------------------------------------
          0  recursive calls
          0  db block gets
        580 consistent gets

Just for the record, I have run these statements several times so the consistent gets are constant. The form with the “AND DBTA.OWNER =DTP.TABLE_OWNER” line consistently runs at 580 consistent gets and the version without at 782 consistent gets.

If I now add in the where clause to look only for tables within the schema I am interested in, the code is even more efficient:

select dbta.table_name,dbta.blocks
      ,dtp.partition_name,dtp.blocks
from dba_tables         dbta
    ,dba_tab_partitions dtp
where dbta.table_name =dtp.table_name
AND   DBTA.OWNER      =DTP.TABLE_OWNER
and   dbta.table_name ='TEST_P'
and   dbta.owner      =user

TABLE_NAME     BLOCKS PARTITION_     BLOCKS
---------- ---------- ---------- ----------
TEST_P           7394 ID_10K           1756
TEST_P           7394 ID_20K           1756
TEST_P           7394 ID_30K           1756
TEST_P           7394 ID_40K           1756
TEST_P           7394 ID_MAX            370
5 rows selected.
Elapsed: 00:00:00.03

Statistics
--------------------------------------------
          0  recursive calls
          0  db block gets
        137  consistent gets

Down to 137 consistent gets.

It’s maybe not at all surprising that fully specifying the schema owner of the table we are interested in allows a more efficient path through the data dictionary, but it is something we all often fail to specify as it means writing more SQL, and it works quickly enough. For one table.
But then if you convert a quick check script into part of your system housekeeping scripts and then use that housekeeping script on a very large database, things can start to run very slowly. I’ve never seen that happen, not with one of my own scripts, honest :-)

The reason to always specify the schema owner, whenever possible, when drilling down into the data dictionary is that so many things link back to sys.obj$, and it has an index with a leading column on owner {well, owner#}. The whole data dictionary tends towards 3rd normal form and parent-child relationships through IDs (name#, obj#, user#, ts#). Schema and name identify the objects and are one of the few links into this structure, which then links to other tables via obj# and other hash IDs.

IND_NAME           TAB_NAME           PSN       COL_NAME
--------------- ------------------ --------- -------------
I_OBJ1          OBJ$               1         OBJ#

I_OBJ2          OBJ$               1         OWNER#
                                   2         NAME
                                   3         NAMESPACE
                                   4         REMOTEOWNER
                                   5         LINKNAME
                                   6         SUBNAME

I_OBJ3          OBJ$               1         OID$
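{For the record, a listing like the one above can be pulled from the data dictionary itself. The below is only a sketch of the sort of query involved, run as a suitably privileged user, and the column aliases are mine:}

select dic.index_name      ind_name
      ,dic.table_name      tab_name
      ,dic.column_position psn
      ,dic.column_name     col_name
from dba_ind_columns dic
where dic.index_owner ='SYS'
and   dic.table_name  ='OBJ$'
order by dic.index_name,dic.column_position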

So my advice is, if you are looking at data dictionary views where the schema owner is involved, always include that column in the join and always filter by schema owner if you can. And if things are still slow, pull out the view code and look at the indexes on the underlying SYS tables. After all, the data dictionary is, to all intents and purposes, just a normal normalised database.
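{If you do want to pull out the view code, the definitions are in DBA_VIEWS. A minimal sketch follows; you need LONG set high enough to see all of the text:}

set long 20000
select text
from dba_views
where owner     ='SYS'
and   view_name ='DBA_TABLES'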

Big Discs are Bad September 27, 2009

Posted by mwidlake in development, performance, Uncategorized.

I recently came across this article on large discs for databases by Paul Vallee. The article is over 3 years old but is still incredibly valid. It’s a very good description of why big discs are a problem for Oracle Database Performance. {Paul also introduces the BAHD-DB campaign – Battle Against Huge Disks for Databases, which I quite like}.

To summarise the article, and the problem in general, IT managers will buy big discs as they provide more GB per pound sterling. It saves money.
However, fewer discs are Bad For Performance. As an extreme example, you can now buy a single disc that is a TB in size, so you could put a 1TB Oracle database on one such disc. This one disc can only transfer so much data per second and it takes this one disc, say, 10ms to search for any piece of data. If you want the index entry from one place and the table row from another, that is at least two seeks. This will not be a fast database {and I am not even touching on the consideration of disc resilience}.

Now spread the data over 10 discs. In theory these 10 discs can transfer 10 times the total data volume and one disc can be looking for information while the others are satisfying IO requests {This is a gross over-simplification, but it is the general idea}.
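{To put rough numbers on it: a disc with a 10ms seek time can service only around 100 random IOs a second, so 10,000 scattered single-block reads take getting on for 100 seconds from one disc, but more like 10 seconds spread across 10 discs, assuming the IO is spread evenly across them, which of course it rarely is.}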

IT Managers will understand this 1-to-10 argument when you go through it.

Kind of.

But then discussions about how many modern “fast” discs are needed to replace the old “slow” discs ensue. It can be very, very hard to get the message through that modern discs are not much faster. A 1TB disc compared to a 4-year-old 100GB disc will not have a transfer speed 10 times faster and it will certainly not have a seek time ten times less; chances are the seek time is the same. And then there are the discussions of how much impact the larger memory caches of modern storage units have. Answer: (a) quite a lot, so long as it is caching what you want, and (b) even if it is perfectly caching what you want, as soon as you have read a cache-sized set of data, you are back to disc IO speed.

Bottom line. Disc Drives are now slower in proportion to the disc acreage than they used to be.

Anyway, I am certainly not the only person to have had these discussions, though I have had them for longer than most {due to my accidental work history of having worked on VLDBs for so long}. There are certainly practitioners of Oracle Arts who understand all of this far better than I and one of them, James Morle, recently started blogging. It’s looking good so far. If he keeps it up for a month, I’ll put him on my blog roll :-)

There is, however, one aspect of the Big Disc Performance issue that does not seem to get much mention but is something I have suffered from more than a couple of times.

As a Database Performance person you have had the argument about needing spindles not disc acreage and won. The IT manager buys enough spindles to provide the I/O performance your system needs. Success.

However, the success has left a situation behind. You need 10 spindles over a couple of RAID 10 arrays to give you the IO you need. 250GB discs were the smallest you could buy. So you have 1.25TB of available storage (RAID 10 halves the storage) and have a 500GB database sitting on it. There is 750GB of empty storage there…

That 750GB of empty storage will not be left inviolate. Someone will use it. Someone will need “a bit of temporary storage” and that nice chunk of fast storage will be too inviting. Especially if it IS fast storage. It will be used.

Now your database, whose storage you specified to support said database, is sharing its storage with another app. An app that steals some of your IO and potentially {heck, let’s say this straight: WILL} impact your database performance. And the galling thing? Twice, I had no idea my storage had become shared until I started getting odd IO latency issues on the database.

You may be able to make a logical argument for the spindles you need at design time. But you have almost no chance of protecting those spindles in the future. But who said working life was easy? :-)

Friday Philosophy – A Comment on Comments September 25, 2009

Posted by mwidlake in development, internals.

This blog is not about blog comments. It’s about table and column comments in the data dictionary.

Some of you may well be going “huh?”. Others are probably going “Oh yes, I remember them!”. Table and column comments appear to be suffering the same fate as ERDs, whose slow demise I bemoaned a couple of weeks ago. They are becoming a feature not known about or used by those with “less personal experience of history” {ie younger}.

It’s a simple principle: you can add a comment against a table or a column, up to 4000 characters. {You can also add comments against index types, materialized views and operators (huh?), at least in 10.2}.

comment on table widlakem.person is
'Test table of fake people for training purposes, approx 50k records'
Comment created.

select * from dba_tab_comments
where owner='WIDLAKEM'
and table_name = 'PERSON'

OWNER      TABLE_NAME                     TABLE_TYPE
---------- ------------------------------ -----------
COMMENTS
--------------------------------------------------------------------------
WIDLAKEM   PERSON                         TABLE
Test table of fake people for training purposes, approx 50k records

--

comment on column widlakem.person.second_forename is
'null allowed, second or middle name. If more than one, delimited by / character'

select * from dba_col_comments
where owner='WIDLAKEM' and table_name = 'PERSON'
and column_name='SECOND_FORENAME'

OWNER      TABLE_NAME                     COLUMN_NAME
---------- ------------------------------ ------------------------------
COMMENTS
-------------------------------------------------------------------------------
WIDLAKEM   PERSON                         SECOND_FORENAME
null allowed, second or middle name. If more than one, delimited by / character

So you can stick basic information into the data dictionary where it will remain with the object until the object is dropped or the comment is updated. (You can’t drop a comment, you can only update it to the empty string ''):

comment on table widlakem.person is '';

It’s simple, it’s sensible, it’s solid.

And it seems to be dying out. In fact, I had stopped adding comments to my tables and columns as no one else seemed to bother. Probably as a consequence of them not being added, no one ever seemed to think to look at them to get hints about the database structure and table/column use.

But Raj sitting next to me is as old a hand at this game as myself and I “caught” him adding comments to the little schema we are working on together. Well, if he is bothering, so will I!

How about Oracle Corp? How do they manage on this front? After all, the Oracle Reference manual has all these short descriptions of tables and columns in the data dictionary {some helpful, some utterly unhelpful}:

select owner,count(*) from dba_tab_comments
where owner in ('SYS','SYSTEM','SYSMAN')
group by owner
OWNER        COUNT(*)
---------- ----------
SYSTEM            151
SYSMAN            472
SYS              3894

3 rows selected.

Heyyyy, nice Oracle.

select owner,table_name,comments
from dba_tab_comments
where owner in ('SYS','SYSTEM','SYSMAN')

OWNER      TABLE_NAME
---------- ------------------------------
COMMENTS
-----------------------------------------------------
SYS        ICOL$

SYS        CON$

SYS        FILE$

SYS        UET$

SYS        IND$
SYSTEM     MVIEW_RECOMMENDATIONS
This view gives DBA access to summary recommendations
SYSTEM     MVIEW_EXCEPTIONS
This view gives DBA access to dimension validation results
SYSTEM     AQ$_QUEUE_TABLES
SYS        SEG$

SYS        COL$

SYS        CLU$
SYSTEM     SQLPLUS_PRODUCT_PROFILE

SYSTEM     PRODUCT_PRIVS

SYSTEM     HELP

SYSMAN     MGMT_NOTIFY_QTABLE

SYSMAN     AQ$MGMT_NOTIFY_QTABLE_S

Oh. Lots of blanks. Not so nice, Oracle. No, scrub that, several lines are not blank, so not a bad attempt, Oracle.

Why all the blanks? Why have Oracle set blank comments? That’s because a blank table comment gets created when you create a table, and a blank column comment is created per column.

create table mdw_temp (col1 number);
Table created.

select * from dba_tab_comments where table_name = 'MDW_TEMP';
OWNER      TABLE_NAME                     TABLE_TYPE
---------- ------------------------------ -----------
COMMENTS
----------------------------------------------------------------------
WIDLAKEM   MDW_TEMP                       TABLE

1 row selected.

select * from dba_col_comments where table_name='MDW_TEMP';
OWNER      TABLE_NAME                     COLUMN_NAME
---------- ------------------------------ ---------------
COMMENTS
-------------------------------------------------------------------------
WIDLAKEM   MDW_TEMP                       COL1

1 row selected.

So what populated comments do we have in these system-like schemas?

select owner,count(*) from dba_tab_comments
where owner in ('SYS','SYSTEM','SYSMAN')
and comments is not null
group by owner

OWNER        COUNT(*)
---------- ----------
SYSTEM             73
SYSMAN             15
SYS               944

OK, there are some, and as you can see below, some are more useful than others…

OWNER      TABLE_NAME
---------- ---------------
COMMENTS
----------------------------------------------------------------------------------------------------
SYS        SYSTEM_PRIVILEG
           E_MAP
Description table for privilege type codes.  Maps privilege  type numbers to type names
SYS        TABLE_PRIVILEGE
           _MAP
Description table for privilege (auditing option) type codes.  Maps privilege (auditing option) type
numbers to type names
SYS        STMT_AUDIT_OPTI
           ON_MAP
Description table for auditing option type codes.  Maps auditing option type numbers to type names
SYS        RESOURCE_MAP
Description table for resources.  Maps resource name to number
SYS        SESSION_PRIVS
Privileges which the user currently has set
SYS        SESSION_ROLES
Roles which the user currently has enabled.
SYS        ROLE_SYS_PRIVS
System privileges granted to roles
SYS        ROLE_TAB_PRIVS
Table privileges granted to roles
SYS        ROLE_ROLE_PRIVS
Roles which are granted to roles
SYSMAN     ORACLE_DATABASEINSTANCE
Oracle_DatabaseInstance contains one entry for each Oracle Instance that is
centrally managed.  A Real Application Cluster has one entry for each of the
instances that manipulate it.  Instances of Oracle_DatabaseInstance are created
using the database instances that are known to the Oracle Enterprise Manager
repository.
SYS        DBA_AUDIT_OBJECT
Audit trail records for statements concerning objects, specifically: table, cluster, view, index, se
uence, [public] database link, [public] synonym, procedure, trigger, rollback segment, tablespace, r
ole, user
SYS        USER_AUDIT_OBJECT
Audit trail records for statements concerning objects, specifically: table, cluster, view, index, se
uence, [public] database link, [public] synonym, procedure, trigger, rollback segment, tablespace, r
ole, user
SYSMAN     ORACLE_DATABASESTATISTICS
Oracle_DatabaseStatistics provides current information about the statistics for
a database.  Database statistics pertain to the database and have the same
value regardless of the database instance that is used.
SYSMAN     ORACLE_DBINSTANCESTATISTICS
Oracle_DBInstanceStatistics contains statistics for a database instance.  These
are retrieved from the Oracle Managment Repository that is managing the
database upon request from a managment client.

If you don’t add comments to tables and columns, you will just have blank entries for them in the data dictionary.

So why not pop a few real comments in there, especially for any tables or columns where the name is not really telling you what that column or table is for? It’s free and easy, and it might just prove useful. And if you add them to your tables, I’ll add them to mine.

Beware Deleted Tables September 22, 2009

Posted by mwidlake in internals.

From version 10 onwards, Oracle introduced the concept of the recycle bin and the purge command. I won’t go into details of the recycle bin, go to Natalka Roshak’s FAQ page on it if you want the details.

I am only concerned with the fact that from Oracle 10 onwards by default if you drop a table or partition it is not dropped but just renamed, unless you use the PURGE command {which can be tricky if you are using a 9i client as PURGE won’t be accepted by SQL*Plus}.
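As a quick reminder of the syntax {just a sketch, the table names are made up}:

drop table my_old_table;          -- renamed to a BIN$... object, still in the data dictionary
drop table my_other_table purge;  -- really gone, nothing left in the recycle bin
purge recyclebin;                 -- clear out everything this schema has already dropped

-- and to see what is sitting in your recycle bin
select object_name,original_name,type,droptime
from user_recyclebin;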

And it keeps catching me out. So I’m blogging about it – more for me than any of you lot :-)

How exactly does it catch me out? Well, as an example,  I’m looking at some partitioning considerations at the moment and ran the below.

select count(distinct(table_name)) from dba_tables
where partitioned='YES'
and owner='ERIC'

COUNT(DISTINCT(TABLE_NAME))
---------------------------
393
Elapsed: 00:00:00.14

Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
       7079  consistent gets

select count(distinct(table_name))
from dba_part_tables
where owner='ERIC'

COUNT(DISTINCT(TABLE_NAME))
---------------------------
419
Elapsed: 00:00:00.07

Statistics
--------------------------------------------
          8  recursive calls
          0  db block gets
      14084  consistent gets

419 records from DBA_PART_TABLES and 393 from DBA_TABLES.

How can that be? How can you have more records with partition information than tables with PARTITIONED flag set to YES?

{And for readers of my post yesterday on querying the data dictionary, check which of the statements performs better – one might expect the view specific to partitions to be quicker than one that is not, but then the partition-specific view is looking at a lot more under the covers}.

OK, let’s find the objects in DBA_PART_TABLES that are not in DBA_TABLES:

SELECT DISTINCT TABLE_NAME FROM DBA_PART_TABLES
WHERE OWNER ='ERIC'
minus
SELECT DISTINCT TABLE_NAME FROM DBA_TABLES
WHERE PARTITIONED='YES'
AND OWNER='ERIC'

TABLE_NAME
------------------------------
BIN$c0wxEyj93/vgQKAKQstN+w==$0
BIN$c59mEBlzOvLgQKAKQssjNA==$0
BIN$c6D0zY7yTrvgQKAKQssmdQ==$0
BIN$c6FJQBU6FG3gQKAKQssxCw==$0
BIN$c9qFcYojb/LgQKAKQssyVQ==$0
BIN$c9qFcYokb/LgQKAKQssyVQ==$0
BIN$c9qFcYolb/LgQKAKQssyVQ==$0
BIN$c9qFcYoqb/LgQKAKQssyVQ==$0
BIN$c9t8m70OdEXgQKAKQss4LA==$0
BIN$c9tu0S7KoUngQKAKQss39w==$0
BIN$c9uGpjGSbcvgQKAKQss4fw==$0
BIN$cWfis1j/BGPgQKAKQss7Lw==$0
BIN$cWft6d2mNcvgQKAKQss7qQ==$0
(snip)
BIN$czz0jembWe/gQKAKQsthhg==$0
BIN$czzu+/hC+yTgQKAKQsthpw==$0

26 rows selected.

All 26 records are for deleted objects. You can tell they are deleted as the name is “BIN$something”.

That’s the thing that keeps catching me out, I keep forgetting to exclude deleted but not purged objects from maintenance scripts. *sigh*.

DBA_TABLES has a DROPPED column but other views do not, so you have to exclude on the name, which is not ideal. Someone might want to create a table called “BIN$something”.
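For example, something like the below {just a sketch} keeps the dropped-but-not-purged tables out of the count:

select count(distinct(table_name)) from dba_part_tables
where owner='ERIC'
and   table_name not like 'BIN$%'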

Mind you, the DROPPED column is not much help:
select count(*) from dba_tables WHERE DROPPED != 'NO'

  COUNT(*)
----------
         0

So that is as much use as a chocolate teapot.
{Some little bell is ringing in my head that you need special privs to see the dropped tables. I have the DBA role and I can’t…}

This does highlight something very, very odd though.

Why have the records gone from DBA_TABLES and not DBA_PART_TABLES?

Because the concept of the wastebasket has not been around long enough with V10.2 of Oracle for it to work consistently :-)

I’ll check DBA_OBJECTS for one of the object names from the above list:

select * from dba_objects
where object_name='BIN$cxIzaly77FzgQKAKQst5tg==$0'

OWNER      OBJECT_NAME                    SUBOBJECT_NAME  OBJECT_ID
---------- ------------------------------ --------------- ----------
DATA_OBJECT_ID OBJECT_TYPE CREATED           LAST_DDL_TIME
-------------- ----------- ----------------- -----------------
TIMESTAMP           STATUS  T G S
------------------- ------- - - -
ERIC       BIN$cxIzaly77FzgQKAKQst5tg==$0                    2613938
               TABLE       08-SEP-2009 13:49 08-SEP-2009 14:10
2009-09-08:14:10:02 VALID   N N N

And I can still look at it {note the use of quotes to allow for an object name with mixed case}:

DESC ERIC."BIN$cxIzaly77FzgQKAKQst5tg==$0"
 Name                                      Null?    Type
 ----------------------------------------- -------- --------------
 LOAD_LOG_ID                                        NUMBER(12)
 LOG_DATE                                           DATE
 ABCDEFG_ID                                         NUMBER(12)
 REMOTE_ADDRESS                                     VARCHAR2(20)
 KGBHFDERS                                          NUMBER(12)
 PRODUCT_ID                                         NUMBER(12)
 KNGFDRET                                           NUMBER(22)
 WKNHGFYTRERDS                                      NUMBER(12)
 CHANNEL_TYPE                                       VARCHAR2(1)
 COUNT_HITS                                         NUMBER(12)
 MFDKEMNFK                                          NUMBER(12)
 COUNT_NON_WEIGHTED                                 NUMBER(12)

I can still select from it with the right privileges as well {I’ve not shown this, I’m out of time}.
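{If you want to try it yourself, something like the below should work, given the right privileges; note the double quotes around the mixed-case name again:}

select count(*) from eric."BIN$cxIzaly77FzgQKAKQst5tg==$0";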

In conclusion:

  • Objects that are dropped but not purged are still in the data dictionary.
  • You may have to exclude such objects using the fact that the name starts ‘BIN$…’.
  • Dropped objects appear in some parts of the data dictionary but not others.
  • You can still describe and in fact select from dropped objects.
  • I have a poor memory.

Data Dictionary Performance September 21, 2009

Posted by mwidlake in internals, performance.

Which data dictionary view you use to look at information about your schemas and objects can be important from a performance point of view.

{Part two is here.}

Try this:

set autotrace on statistics
set timi on
set pause off

select count(tablespace_name) from user_tables;
select count(tablespace_name) from all_tables;
select count(tablespace_name) from dba_tables;

You might want to execute each select statement a couple of times until the consistent gets are stable.

These are the stats off my test system, version 10.2.0.3:

USER_TABLES
8 records
0.14 seconds
14217 consistent gets

ALL_TABLES
4455 records
0.34 seconds
29097 consistent gets

DBA_TABLES
4455 records
0.21 seconds
16834 consistent gets

The select against user_tables is fastest and uses the least consistent gets as the view implicitly limits the search to your objects with the clause “where o.owner# = userenv('SCHEMAID')”. You use the USER_ views when you really only want to see your objects.

More interesting are the timings for ALL_TABLES and DBA_TABLES. I have SELECT ANY TABLE and SELECT ANY DICTIONARY privileges so I see all tables with either view. But the DBA_ view is quicker and takes less consistent gets than the ALL_ view.

The reason is that the ALL_ views take into account checking your privileges to see if you have the right to see any particular object. The DBA_ views do not as you can only see them if you have system privileges to do so.

ALL_TABLES includes the following to check for access rights:

and (o.owner# = userenv('SCHEMAID')
     or o.obj# in
        (select oa.obj#
         from sys.objauth$ oa
         where grantee# in ( select kzsrorol
                             from x$kzsro
                           )
        )
     or /* user has system privileges */
        exists (select null from v$enabledprivs
                where priv_number in (-45 /* LOCK ANY TABLE */,
                                      -47 /* SELECT ANY TABLE */,
                                      -48 /* INSERT ANY TABLE */,
                                      -49 /* UPDATE ANY TABLE */,
                                      -50 /* DELETE ANY TABLE */)
               )
    )

So, if you are using an account that has access rights to see everything (DBA access typically), use the DBA_ views and they will perform better.

If you only want your objects then the best performance will come from using the USER_ views and not the ALL_ views, correct? It makes sense! Remember my cautions in earlier posts about theory and practice. There is a sting in this particular tale…

select count(tablespace_name) from user_tables;

COUNT(TABLESPACE_NAME)
----------------------
                     8
Elapsed: 00:00:00.15

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      14217  consistent gets

Compared to

select count(tablespace_name) from all_tables
where owner=USER;

COUNT(TABLESPACE_NAME)
----------------------
                     8
Elapsed: 00:00:00.10

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       4117  consistent gets

Selecting from ALL_TABLES and filtering by USER performs significantly better on my system than using the USER_TABLES view!

One caveat. I suspect my test system has poor fixed-object statistics, so I would be interested if anyone else finds different results on their system.
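{If you want to rule that out on your own system, fixed object statistics can be gathered with the call below. It needs suitable privileges and is best run while the system is under a representative load:}

exec dbms_stats.gather_fixed_objects_stats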

Friday Philosophy – Who Comes Looking? September 18, 2009

Posted by mwidlake in Blogging.

I’ve been running this blog for a few months now and I find it interesting to see how people come to it. A handful of people come to it as I tell them I have a blog page, but most people come across it by either:

  • Links from other blogs or web pages.
  • Search engines.

WordPress gives me stats on these for Today and Yesterday and I can check back on the referrers and searches for any given day, going back several months. Most blog sites provide the same features, I thought I would just run through them for those who do not have a blog.

I can tell when I have been mentioned on someone else’s blog, as I usually see a spike in my hits and their web page is at or near the top of the list of referrers. Interestingly, I will sometimes see a burst of hits from an old reference on someone else’s blog or webpage. I think this happens when a third person has referenced the page or person which then referenced me.

Another interesting facet is the impact on my hits if an Oracle Name mentions me. My busiest day occurred when Richard Foote mentioned a posting I did on “Unhelpful Helpful People” and a couple of other well-known Oracle Names also picked up on the thread. It’s a bit like a small-time-actor getting into a scene with a Hollywood Star :-).

The most interesting, though, are the search engine hits.

My favorite search term to lead to my blog so far is “martin widlake unhelpful people”. I really hope that was someone looking for the post I mention above, as opposed to anything else…

As time goes by, the search engine hits are generating a larger and larger slice of my traffic (and the personal mentions less and less :-) ). This is going to be partly due to me putting more content on the Blog to be found but also, as I get more hits and links, search engines will give me more prominence. It becomes self-feeding. Search engines find me as I have been visited before, so I get visited again and Search engines see that I have been visited even more and move me up the list…

{This is, of course, how Burleson gets so much traffic, he always references back to himself and his web sites and appears to have several sites that all cross-reference between them, priming the search engine pump (or absolutely flooding it, I suspect)}.

Some of the most common searches that find me are on obscure items I have blogged about. They may not be of such general interest {such as when I blogged about errors with gathering system statistics {{and more to follow on that topic}} } but I guess when someone hits the same issue or topic, I am one of a very few places that has mentioned it. I get a steady trickle of hits for “c_obj#_intcol#” since I blogged about it often being the biggest object in the SYSTEM tablespace. So perhaps to increase my search engine hits I should not blog about mainstream issues but rather really obscure, odd stuff that almost no one is interested in!

Some days I will get several hits by people searching on “Martin Widlake”. I wonder why they are searching on me specifically. Occasionally, it has been just before I am called about a job. Usually not though {so maybe it was about a job – but then they found my blog and decided against it…}.

Some searches that get to my blog are just odd. Yesterday one search that found me was “how to put fingers on keyboard”. Why? I have no idea why a search on that would land on my blog. Maybe I should try it!

Oh, and I suddenly have a favorite search that found me, hot in today, just as I am blogging about the very topic:

“it’s a crock of cr4p and it stinks”

Now what is that about? Why search on it and why find me?

*sigh*

What is a VLDB? September 18, 2009

Posted by mwidlake in Architecture, VLDB.

In a post just a couple of days ago on testing, I complained that VLDBs break at the edges. Coskan posted a comment asking I blog on what I consider to be a VLDB and thus what a VLDB DBA is, and I am more than happy to oblige, especially as Coskan not only comments a lot but provides a really good summary of blogs on his own blog. {in fact, I need to add Coskan to my blog roll, something I have been meaning to do for ages}.

Hopefully, this will link to the comments section of that post as Piet deVisser added a wonderful comment answering the question for me. Go and read, I suspect it is better than my attempt here!

VLDB stands for Very Large DataBase. It is not an acronym I like as it sounds suspiciously like a sexually transmitted disease, but maybe that is just a problem with my mind. The term ULDB appeared for a while but seems to have failed to gain traction. U stands for “Ultra” of course.

So what is a Very Large DataBase?

A VLDB is a database whose very size gives you, as a DBA or database architect, extra work.

Maybe a simpler rule that you can apply is “you can’t back up the database in 24 hours using standard tools”. You can chuck more tape drives and IO channels at a DB but you will quickly hit a limit where your infrastructure or budget can’t cope.
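{As a rough illustration, if you can sustain 200MB a second to your backup media, 10TB is 10,485,760MB / 200MB = over 52,000 seconds, around 14 and a half hours, for a single stream, before the live workload gets in the way.}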

Another, and one I pinch outrageously from Piet is “you can’t afford to duplicate the database for test/QA purposes”. That leads to a whole raft of issues.

I put forward another definition of a VLDB in a comment on the original blog: “Any database whose size makes 8 out of 10 DBAs comment ‘that is a big database’.” That definition takes into account whether a database is generally beyond the experience of most DBAs/Designers. Why do I think that is significant? Because it means most DBAs/Designers will not have worked with a database that size and thus dealt with the associated problems. The database engine may {or may NOT, as I have complained about} cope with the database size, but you need staff to design it and look after it who know how to do so.

The definitive size of a VLDB, of course, goes up year by year. A few weeks ago I found a document I have mentioned in presentations a couple of times, an internal White Paper by Oracle Corp on what a VLDB is, written around 1994. Next time I am at home I’ll scan it. If I remember correctly, at that time 30GB and beyond on a VMS or Mainframe system was considered a VLDB and, in Unix (the new kid on the block back then), 20GB was the threshold.

Right now, as of September 2009, I would judge any database over 10TB of used space is a VLDB. In 12 months, that will be 20TB. In another 12 months, 40 or maybe 50TB.

“Moore’s Law” traditionally states that compute power doubles every 18 months, but I have just suggested that the VLDB limit doubles every 12 months. I say that as, over the last 10 years, I have worked on several systems, systems most DBAs would consider as “challengingly large”, which double in 12 months or less. Data is exploding. More and more of us are dealing with VLDBs.
This “doubling in 12 months” was not the case (I think) back in 1995; it started in 2000 or so. Before then, database size was doubling roughly in line with, or more slowly than, Moore’s law, I would say, but that is only my opinion.

What changed? People swapped from thinking you could store only what you really needed to thinking you could store “everything”. Which is “everything” your Moore’s-law expanding CPUs can process PLUS all the summary and metadata you extract from that data.

I could be wrong in my figures though. If you took size as the determining factor and doubled 20GB every 18 months from 1994, you would now class a VLDB, in 2009, as approx 20TB.

What main issues do you face with a VLDB?

  • Backing up the database. With a VLDB, a daily backup of everything via RMAN or Hot Backup is simply not possible, as you can’t run the backup in 24 hours. You need to: Backup less often; backup only part of the DB; use hardware such as mirror splitting or deltas; some other trick like, say, never backing it up but having 3 standbys. I’ve seen it done.
  • Performance. You need to consider radical changes such as removing RI or designing around full table scans and ignoring the block buffer cache for the largest tables.
  • The number or size of objects starts causing bits of Oracle to break or work less efficiently (so many tables it takes 2 minutes to select them all, or you hit an unexpected limit like the 2TB disk size in ASM, because you need to use bigger disc sizes as otherwise you need more discs than you can easily manage).
  • Maintenance tasks become a challenge in their own right. This could be stats gathering, it could be adding columns to a table, it could be recreating global indexes, all of which now take more time than you can schedule in the maintenance windows {so part of the definition of a VLDB could be down to how active a database is and how small your maintenance windows are – 1TB could be a VLDB if you can never spend more than an hour doing anything!}
  • GUIs are no use to you. Listing all the tablespaces in your database with OEM is a pain in the proverbial when you have 962 tablespaces. You can’t keep track of all of them, visually.
  • You can’t properly test or prototype as you cannot afford to create a full sized test system

I’d like to pick up that last point. With a VLDB, you often end up doing things on the live system that you have been unable to test or prove because you simply lack a test system that is even within an order of magnitude of the size of your live system. RAC is a particular issue; it seems many sites are happy to have the live system as a RAC system but not the test or development systems. When you raise the issue, the response is often “well, there is not that much difference between RAC and non-RAC systems is there?”. You are not allowed to get violent with the client, or even deeply sarcastic. Yes, there is a huge difference.

A VLDB DBA is someone who has had to consider the above for more than a few months, or on more than one system. Or who simply cries when you mention Oracle breaking when it exceeds size limits.

How do you know when you are dealing with a ULDB? When you can find no one else who will speak publicly about a database bigger than yours. When I was working with the Sanger Institute on my pet “it is really quite huge” database I would often have these frustrating conversations with Oracle Corp:

“X is giving me trouble as the database is just so large”

“It is not that large, we have customers with bigger databases”

“Can you introduce me, so we can talk about these issues?”

“Errr, no, they don’t like to talk about their systems”.

Thanks. Thanks a bunch.

Remember, there is always someone with a bigger DB than you. But they probably won’t talk about it.

 

Enough for tonight….

Free Training (next Management and Infrastructure SIG) September 17, 2009

Posted by mwidlake in Meeting notes.

The next Management and Infrastructure SIG is coming up, 1st October at Oracle’s City office in London. The agenda and details can be found here.

I think the line up is a good balance of technical and management, which is what the SIG is all about – but then I am biased :-)
Real Application Testing, ASH/AWR, how to hire staff, Grid Control for complex sites and flexible e-Business Suite architecture are all covered, plus the chance to ask questions of our Oracle Support representative (along the lines of “how do I best escalate SRs?”, not “why does bug number 2324567.1 on Linux not happen on HP-UX?”).

As some of you know, I currently chair the SIG {for those who do not know, a SIG is a Special Interest Group of the UK Oracle User Group}. The event is free to those with UKOUG membership {excluding bronze I think} and anyone can come along for £80.

Numbers for SIGs have been hit by the recession over the last 12 months – people’s travel budgets being slashed, pressure on staff/managers of staff that everyone always “looks busy”, reduced access to things that can be seen as a perk.

However, it strikes me as odd that SIGs don’t become more popular when things are tight. After all, they are in effect free (or cheap) training and opportunities to make or maintain contacts with people you can share experiences with. There are also domain experts around who love to talk about their chosen topic, so potentially free consultancy too.

If you have any interest in the topics covered, I’d encourage you to come along for some of this free stuff :-) Likewise, keep an eye on the other UKOUG SIG events, there is usually very good stuff at all of them.

Testing is Not Just for Code. September 16, 2009

Posted by mwidlake in Architecture, VLDB.

Someone I am currently working with has a wonderful tag line in her emails:

Next time we want to release untested, why don’t we just release undeveloped?

Testing is not limited to testing code of course. I have recently posted about how a backup is not a backup until you have tested it with a practice recovery.  How you think the database will work by looking at the data dictionary is just a nice theory until you run some actual tests to see how the database responds, as I have been doing with Histograms lately. Sadly, you could even say an Oracle feature is not an Oracle feature until you have tested it.

In my experience, this is particularly true when you test the edges of Oracle, when you are working on VLDBs {Very Large DataBases}.

Last month Jonathan Lewis posted about a 2TB ASM disc size bug, where if you allocated a disc over 2TB to ASM, it would fill it up, wrap around and write over the beginning of the file. This week I heard from some past colleagues of mine that they hit this very same bug.
With these very same colleagues we hit a bug in 10.1 where you could not back up a tablespace over 8TB in size with RMAN {I can’t give you a bug number for it as we were working with HP/Oracle direct at the time and they “handled it internally”. But when I mentioned it to him, Jonathan found a similar one, bug 5448714, which stated a 4TB limit on backups. It could be the same bug}.

Yet another VLDB issue was that we wanted to move just under one thousand tablespaces from one database to another {again, 10.1}, using transportable tablespaces. We tried to use the utility for checking you are working on a consistent set of tablespaces, but it could not cope with that many. But to plug them into the new database you have to export the metadata, and we found a 4000 character limit on the variable stating the tablespaces to transport. That’s 2.3 characters per tablespace, as you need commas to delimit them… Yes, you could manage if you renamed all tablespaces to AA, AB, AC… BA, BB, BC etc. If memory serves, the problem was with Data Pump export and we reverted to old-style export, which did not have the problem.
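{The consistency-checking utility I mean is, if memory serves, DBMS_TTS. A sketch of its use, with made-up tablespace names:}

exec dbms_tts.transport_set_check('TS_A,TS_B,TS_C', true)
select * from transport_set_violations;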

Another limit I’ve blogged on is that the automated stats job chokes on very large objects.

Some data dictionary views can become very slow if you have several tens of thousands of tables/extents/tablespaces/indexes.

I can appreciate the issues and problems Oracle has with testing their code base, it is vast and people use the software in odd ways and it has to run on many platforms. You might also feel I am being picky by saying Oracle breaks a little when you have 8TB tablespaces or a thousand tablespaces. But

  • Oracle will say in big, glossy presentations, you can build Petabyte and Exabyte databases with Oracle {and have a product called Exadata, don’t forget}.
  • More and more customers are reaching these sizes as data continues to grow, for many sites faster than Moore’s law.
  • Some of these limits appear with databases well below a Petabyte (say a tiddly small 50TB one :-) ).

I’ve been running into these issues with VLDBs since Oracle 7 and they are often with pretty fundamental parts of the system, like creating and backing up tablespaces! I think it is a poor show that it is so obvious that Oracle has been weak in testing with VLDB-sized databases before release.

I wonder whether, with 11gR2, Oracle actually tested some petabyte data sizes to see if it all works? After all, as is often said, disk is cheap now, I’m sure they could knock one up quite quickly…
