
Friday Philosophy – Presenting Leads to Drinking, Discuss June 15, 2012

Posted by mwidlake in Friday Philosophy, humour, Meeting notes, UKOUG.

Just a quick Friday Philosophy {the day job is very demanding at the moment, thus the silence on the Blog front}

I’m presenting in Leeds at the start of July at the UKOUG AIM SIG on “The First Few Things You Need To Know About Exadata”. As part of the final preparation of the agenda it’s been raised that we should have a beer after the event and put it on the agenda.

Now, when I ran the Management and Infrastructure SIG, there was always a last item of “retire to a pub for a drink or two”. It is a common feature of technical UKOUG SIGs and a great opportunity to chat to the speakers more.

Chat to the speakers. Now I think about it, most of the speakers always make it to the pub after a SIG. If the attendance on the day is 10% speakers, 10% committee and 80% delegates, the make-up in the pub will be 30% speakers, 20% committee and 50% delegates, or similar.

At conferences, the bars in the evening are covered in speakers (all still speaking – loudly and {usually} drunkenly).

So, is it that:

  • Speaking leads to elevated drinking
  • Drinking makes you more of a sucker for speaking
  • Speaking and Drinking have a shared genetic basis
  • It’s just me.

My excuse is that all that hot air coming out my mouth makes it dry and it needs a little wetting afterwards…

You Will Be Our Slave – Err, no, I Won’t May 27, 2012

Posted by mwidlake in contracting, Friday Philosophy, rant.

For the sake of current clients, this posting has been time-shifted.

I’m looking at the paperwork for a possible new job in front of me. Document seven out of 13 is the Working Time Directive Waiver. It’s the one where you sign on the dotted line saying your proposed new client can demand more than 48 hours of work a week out of you. {This may be UK or European Union specific but, frankly, I don’t care}.

I’m not signing it. For one thing, I doubt the legality of the document under EU law – especially in light of the issues the UK government had with this and junior doctors {who often, and still do, end up making life-deciding decisions on patients when they are too tired to play Noughts and Crosses, having worked 80 hours that week}. For another, well, I don’t give a damn. I ain’t signing it.

Now, I’ve just completed about 60 hours this week for my client. Not a problem at all, something needed doing, I could do it and so I have. I have done, am doing and will continue to do long weeks for clients when there is a business need and it fits in with the rest of my life and it is not a chronic situation {chronic is a medical term that means “long lasting and on-going”}.

If I am doing 60 hours plus every week, that means I am trying to do two people’s jobs at the same time and doing both of them badly. I don’t care how great I am at doing what I do, if it is 60 hours each and every week, I’m doing it badly because I am too stressed and tired to be doing it well. Also, where is the rest of my life? I have no “rest of my life”.

If my client is asking me to do 60 hours this week and I say “no” and they sack me under the Working Time Directive waiver – that means it is not a request, it is an enforceable demand. I am their slave. Nope. Not happening. It is best all round if it is acknowledged up front before I arrive on site that the client may ask and I may well say yes – but I can say no.

I know, some of you will be reading this and saying “but I need my job and if that is what it takes, I do it”. Well, I’ve worked for 20+ years and I’ve realised that (a) there are organisations that don’t abuse you and (b) you actually get little real payback for those ridiculous hours. But it can ruin your non-work life, even your family life. I don’t need any individual job and I am bloody well not playing those games any more. Employment in a modern, democratic society is supposed to be a mutual agreement and, if it is not, I ain’t playing. That is my small win for all those years of grind and I’m insisting on it.

I know, some of you will say “look, it never comes to anything, just sign it and ignore it like the rest of us”. No. If you are right, it is a corporate lie and is not required. And, to my detriment, I know you are wrong and sometimes there is an attempt to enforce it. If you cannot get me to do the 60 hours by asking and explaining, either you do not have a valid reason {and history proves I am an utter push-over to a half-reasonable request} or there is a reason very important to me why I can’t comply. If you try and insist, you really are treating me like a slave. That empty space? That’s me having gone for a looong walk.

I am not signing a document saying “you can demand I work over 48 hours any and all weeks you like”. You are not signing a form saying “I can demand any time off I like week in and week out”. All contracts have a clause saying “if this is not working between us, we will curtail the agreement”. We will use that if need be, not a bullying document that says I am your slave.

I am not signing.

Table High Water Mark and How Empty the Table Is April 30, 2012

Posted by mwidlake in performance, statistics.

On a recent post about reducing the high water mark in a table by shrinking it, someone asked if you could tell how much “space” there was in a table that could be recovered by shrinking it.

Yes. As pseudo-code:

select number_of_rows*average_row_length – that is how much data there is.
Divide by the block size to give how many blocks-worth of data you have.
Inflate this by an “overhead” fudge factor of about 20-25% to allow for wasted space (see later).
Compare to the number of blocks used by the table.
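To make that concrete, here is the same arithmetic worked through with the figures from the example ‘HWM’ table used below (50,000 rows, an average row length of 213 bytes, 8k blocks and 1,664 blocks allocated to the segment):

```sql
-- The pseudo-code steps as one query, using the HWM demo figures:
select 50000 * 213                            data_in_bytes    -- 10,650,000 bytes of data
      ,(50000 * 213) / 8192                   data_in_blks     -- ~1,300 blocks-worth
      ,((50000 * 213) / 8192) * 1.25          mod_data_in_blks -- ~1,625 after the fudge factor
      ,(((50000 * 213) / 8192) * 1.25) / 1664 pct_spc_used     -- ~0.98 of the 1,664 blocks
from dual;
```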

Here is some code to do it, showing some extra data and how we calculate the effective percentage of space used (pct_spc_used). It runs against an example table, ‘HWM’:

select uts.blocks                                     blks_used
      ,uts.avg_space                                  avg_space
      ,uts.num_rows                                   num_rows
      ,uts.avg_row_len                                avg_row_len
      ,uts.empty_blocks                               empty_blks
      ,usse.blocks                                    alloc_blks
      ,greatest(uts.blocks,1)/greatest(usse.blocks,1) pct_hwm
      ,uts.num_rows*uts.avg_row_len                   data_in_bytes
      ,(uts.num_rows*uts.avg_row_len)/8192            data_in_blks
      ,((uts.num_rows*uts.avg_row_len)/8192)*1.25     mod_data_in_blks
      ,(((uts.num_rows*uts.avg_row_len)/8192)*1.25)/usse.blocks pct_spc_used
from user_tab_statistics uts
    ,user_segments       usse
where uts.table_name='HWM'
and   uts.table_name=usse.segment_name
/
 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
---------- ---------- ---------- ----------- ---------- ---------- --------
------------- ------------ ---------------- ------------
      1630       1221      50000         213         34       1664   .97957
     10650000   1300.04883       1625.06104    .97659918

I am collecting the data from USER_TAB_STATISTICS and USER_SEGMENTS. For this code to work you must have reasonably good stats against the table.

I’d like to run through some of the columns I have selected:
First of all, columns AVG_SPACE and EMPTY_BLKS are not populated by dbms_stats.gather_table_stats. They are populated by the deprecated ANALYZE command that you should NOT use to gather table stats since V10 came along. These columns are populated here as I did an ANALYZE, as well as a dbms_stats, to get the data in there.
Next, I collect the allocated blocks from USER_SEGMENTS {and for this demo I just ignored the potential for partitioned tables} and I compare this to the used blocks to get the High Water Mark, as a percentage of the table. I do this as EMPTY_BLOCKS is set to zero if you have never used ANALYZE and, even if you did, unless you use this deprecated command all the time, the value will not change.
On the second line of output I calculate the DATA_IN_BYTES as a simple num_rows*avg_row_len and convert it into blocks {for simplicity I do not collect the block size, I know it is 8k}. I then apply my “overhead” fudge factor. A block has a header, using around 100 bytes {I’ve not checked the exact figure for years}, pctfree can be varied but defaults to 10% and, as only whole rows fit, an average of half a row of space is empty in each “full” block. Thus I reduce the space available by 20-25%. In this case, 25% as my rows are large.
Finally, I compare this modified data volume to the used blocks to get the actual percentage of space used.
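As a rough sanity check of where that 20-25% comes from {the figures below are my approximations, not exact Oracle internals}:

```sql
-- Approximate per-block overhead for an 8k block:
--   block header                  ~100 bytes (approximate)
--   pctfree, defaulting to 10%    ~819 bytes
--   half an average row (213/2)   ~107 bytes
-- That is ~1,026 bytes, about 12.5% of the block, before allowing for
-- ITL entries, the row directory and per-row overhead - hence 20-25%.
select round(100 * (100 + 8192*0.10 + 213/2) / 8192, 1) approx_pct_overhead
from dual;
```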

Below I run through creating some test data, looking at the stats and my calculated High Water Mark and pct_space_used and finally shrink my table to see if my guesstimate is a reasonable guesstimate:

drop table hwm purge;
prompt populate table
set feed on
create table hwm
(id   number(10)
,num1 number(2)
,vc1  varchar2(100)
,vc2  varchar2(100)
)
/
insert into hwm
select rownum
      ,mod(rownum,99)
      ,lpad('A',100,'A')
      ,lpad('B',100,'B')
from dual connect by level < 50001
/
50000 rows created.

exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'HWM')

--where is the HWM compared to total segment size

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
---------- ---------- ---------- ----------- ---------- ---------- --------
------------- ------------ ---------------- ------------
      1630          0      50000         210          0       1664   .97957
     10500000   1281.73828       1602.17285   .962844262

The high water mark is 1630 blocks out of 1664 in the segment
My calculated PCT_SPC_USED is 96%. That is probably close enough.
{remember, the last used block will be only partly used, accounting for a bit of the difference}

-- I will use ANALYZE to fill the missing columns
analyze table hwm compute statistics;

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
---------- ---------- ---------- ----------- ---------- ---------- --------
------------- ------------ ---------------- ------------
      1630       1150      50000         213         34       1664   .97957
     10650000   1300.04883       1625.06104    .97659918

Now those two columns are populated.
Note the slightly different AVG_ROW_LEN, even though dbms_stats used a 100% sample
(as the table is so small) and ANALYZE was a compute.

-- clear 90% of the data randomly

45461 rows deleted.

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
---------- ---------- ---------- ----------- ---------- ---------- --------
------------- ------------ ---------------- ------------
      1630       1150       4539         210         34       1664   .97957
       953190   116.356201       145.445251   .087407002

PCT_HWM is not altered of course, but PCT_SPC_USED has dropped dramatically.
The table is now only 8.7% used, according to my calculations (having deleted
90% of the rows).

The BLKS_USED does not change. The AVG_SPACE and EMPTY_BLOCKS are the same as I 
used dbms_stats to update the statistics and it DOES NOT ALTER the columns that it does not
populate. Thus you have no idea how recent those columns are if you use a mixture of commands.

-- clear some blocks completely by deleting a range

2181 rows deleted.

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
---------- ---------- ---------- ----------- ---------- ---------- --------
------------- ------------ ---------------- ------------
      1630       1150       2358        210         34       1664   .97957
       495180   60.4467773       75.5584717   .045407735

Now the PCT_SPC_USED is down to 4.5%

-- has EMPTY_BLOCKS changed if I use ANALYZE?
analyze table hwm compute statistics;

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
---------- ---------- ---------- ----------- ---------- ---------- --------
------------- ------------ ---------------- ------------
      1630       7682       2358         213         34       1664   .97957
       502254   61.3103027       76.6378784   .046056417

As you can see, if I use ANALYZE, AVG_SPACE alters. But EMPTY_BLOCKS does not, even though I cleared
a chunk of the table. So there are blocks that can be reused but they are not listed as empty.

I'll just take a quick side-step and show an "oddity" about dbms_stats
--deleting the stats (using dbms_stats)

exec dbms_stats.delete_table_stats(ownname=>user,tabname=>'HWM')

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
---------- ---------- ---------- ----------- ---------- ---------- --------
------------- ------------ ---------------- ------------

dbms_stats.delete_table_stats clears ALL statistics, even the ones it does not populate

--and now collect them via dbms_stats again

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
---------- ---------- ---------- ----------- ---------- ---------- --------
------------- ------------ ---------------- ------------
      1630          0       2358         210          0       1664   .97957
       495180   60.4467773       75.5584717   .045407735

--now to shrink the table
alter table hwm enable row movement;
alter table hwm shrink space;

 BLKS_USED  AVG_SPACE   NUM_ROWS AVG_ROW_LEN EMPTY_BLKS ALLOC_BLKS  PCT_HWM
DATA_IN_BYTES DATA_IN_BLKS MOD_DATA_IN_BLKS PCT_SPC_USED
---------- ---------- ---------- ----------- ---------- ---------- --------
------------- ------------ ---------------- ------------
        72          0       2358         210          0         80   .90000
       495180   60.4467773       75.5584717   .944480896

So I calculated that there were about 75 blocks of data in that table. Having shrunk it down
to 72 used blocks, I was only a little bit out.

Having run through those examples we can see that the accuracy of the PCT_SPC_USED is down to the fudge factor employed but is probably close enough at 25%. After all, you are only likely to shrink a table that very clearly would benefit from it.

Friday Philosophy – It’s not “Why Won’t It Work!” it’s “What Don’t I Understand?” April 27, 2012

Posted by mwidlake in Friday Philosophy.

I had a tricky performance problem to solve this week. Some SQL was running too slow to support the business need. I made the changes and additions I could see were needed to solve the problem and got the code running much faster – but it would not run faster consistently. It would run like a dream, then run slow, then run like a dream again 2 or 3 times and then run like a wounded donkey 3 or 4 times. It was very frustrating.

For many this would provoke the cry of “Why won’t it work!!!”. But I didn’t, I was crying “What don’t I understand???”. {I think I even did a bit of fist-pounding, but only quietly as my boss was sitting on the desk opposite me.}

I think I’ve always been a bit like that in respect of “How Things Work”, but it has been enhanced in me by being blessed to work with, or meet, people for whom understanding why something is not working is more important than just fixing it.

I was reminded of this by a thoughtful comment in an email that one of the Oak Table members sent to the list. They made the comment that what they felt was common between members of the Oak Table is “that we’re not interested (really) in what the solution is of most of our problems, but actually, what is the underlying issue that really causes the problem?”

It struck a real chord with me. Quite a few people I’ve come across seem to be fixated on wanting to know solutions – so that they can look knowledgeable and be “one of the best”. But that’s just stamp collecting really. It’s like one of those ‘games card’ fads that each generation of children has, where you want to collect the best cards so you can win. I never got it as a kid as there are few rules, tactics or ‘how it works’ to elucidate. What success is there in winning when it’s just down to the cards you have? {And being candid, I didn’t like them partly as I never had the money to buy many cards and partly as I was rubbish at trading them. No sales skills.}

I know the solve-it-don’t-just-fix-it position is a topic I have touched on before, but I think the attitude of trying to fix problems by understanding how things work is far more satisfying than doing so by knowing a set of solutions. You develop a deeper understanding to help solve new problems than any amount of solution-stamp-collecting ever will. However, another wise voice on the Oak Table discussion pointed out that you can be in a work environment where there is no time to investigate and you simply have to try your set of fixes and move on if you hit one that works. Your work environment can strongly influence how you work and, in some ways, the ways you think.

I bet some people are wondering what my problem at the start of this post actually was? Well, a nice technical blog about it may appear over the weekend, but the core reason for the toggling of working/not-working was partition swap. We have data coming into the system very fast. We build a new summary of the key data in one table and then swap it into active play via partition swap. On the live system, stats had not been gathered on the “swap” table we had introduced but had on the active table. So, each time the partition swapped, we went from good stats to “empty” stats or the other way around. The empty stats gave a quite, quite dreadful execution plan.
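For anyone wanting the flavour of the fix, this is a sketch of the approach {the table and partition names here are invented for illustration} – the point is simply that the table being swapped in must not go into active play carrying empty stats:

```sql
-- Give the swap table sensible stats before the exchange...
exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'ORDER_SUMMARY_SWAP')
-- ...or, once representative stats exist, lock them so they cannot
-- be replaced by empty ones:
exec dbms_stats.lock_table_stats(ownname=>user, tabname=>'ORDER_SUMMARY_SWAP')

-- The partition swap itself:
alter table order_summary exchange partition p_active
  with table order_summary_swap;
```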

Rant – Unique means UNIQUE! Argh! April 22, 2012

Posted by mwidlake in rant.

I’m not a die-hard “Queen’s English”, “thou shalt not split infinitives” type, but I am sick of people misusing the word Unique.

The word unique means being one of a kind, the only example, the singular occurrence, the absolute only one. One. Singular. Get it? Still don’t get it? Well it means…unique! As a word that has only one unequivocal meaning, “unique” pretty much bloody well is it, by its absolute definition. It’s a yes/no situation. If you are unique in some respect, it means you are the only example.

Now we lot in the database world should be bang-on certain about this, what with Unique Keys and the like, and you would expect that other group of pedantic types – scientists – would be sticklers for the word as well. But no. Last week I had someone who I thought was a good, solid IT person ask me “how unique” a situation was, and I’ve just seen a scientist on TV describe a rock formation as “quite unique”. You can’t BE “quite unique”. You can be unusual, you can be quite rare, you can be uncommon. They all mean one of a few, or a bit more blagh than usual. Unique means…The One. I can’t even think of another word that means what “unique” means. “One” and “Only” and “Singular” are close, but they all just indicate something is unique. You cannot have a situation that is “quite ‘the only one'”. It is the only one or it is not the only one. Tick or cross. If you claimed a situation was unique, only for someone to point out that it had happened before, they would say “aha! So, it is not unique”.

It would be less of a linguistic stupidity to ask “how dead is the parrot – a bit dead or a lot dead or quite dead”. The parrot is in a binary state, dead or not. {As a biologist you can actually argue about this, but most of us accept the yes/no state of dead}. It is NOT “quite dead”.

Is Usain Bolt’s 100 metres world record unique? Yes. He’s the fastest, not one of the fastest, not “fairly world-record-holding”.

Would it make sense to say “I have the fairly only stamp of its kind in my possession”? No. If someone said “this set of events has approximately never happened before” you would think “huh?” and ask for clarification – maybe ask “do you mean it’s a unique set of circumstances?” – and would expect a yes or no answer. Only no, I would half expect “fairly unique”. Arrrgghh!!!

Friday Philosophy – Identifying and Nullifying Fake Urgency April 20, 2012

Posted by mwidlake in Friday Philosophy, Management.

You know how it goes. You get a call/mail/text with something along the lines of “I need to know all the details of customer orders placed on Tuesday 7th by customers based in Botswana – and I need it ASAP, by end of play today at the latest”. So you skip lunch, drop that task you have been trying to get around to doing all week and work out how to resolve the issue that has just been dropped on you. It takes a lot of effort and you finally get it sorted out around an hour after you told your girlfriend/boyfriend/cat you would be leaving the office that day – and mail it off to the requestor. You might even call them to let them know it is done, but oddly they don’t answer.

Next day, you see the guy who wanted this urgent request and ask if it was what they wanted. “Oh, I have not looked at it yet – but thanks for doing it.”

NO! “Thanks” does not work in this situation. I’d have more respect for this guy if he laughed at me and said “got you again, sucker”. Many of you know what I mean, don’t you – if you are in a support-type role, this can be a big part of your life.

I had a job years back that seemed to consist of 90% such tasks. I was the development DBA team leader responsible for testing, validating and promoting code to production. Everyone’s changes were Urgency Level 1, to be done as an emergency release, and many could not be put in place until after 5pm. I’d be sat there at 18:30 in a massive but virtually empty office, applying changes along with one or two of my guys. Everyone else had gone home. This was not once or twice a month, it was 4 or 5 times a week. What are you to do?

Well, I came up with one tactic that seemed to work pretty well.

Anyone who asked for an emergency change had to be there, on site, available when the change was done.
There were of course cries of protest and people stated it was ridiculous that they had to be there, they were not needed, the change had been tested thoroughly {oh how I laughed at that – a thoroughly tested “emergency” change, huh?}. No, I replied, you have to be there in case it goes wrong as it’s your system, your data and, frankly, your emergency. If it is not urgent enough for you – the guy wanting it to be done – to be inconvenienced, well it sure as hell is not urgent enough to inconvenience me. “You can call if there are problems” – what, after you have escaped the locality? Maybe turned off your phone? And if I do get you, I have to wait for you to come back in? No no no. Urgent emergency now equates to presence in office. After all, I’ll be there.

I stuck to my rule. If the requester could not be bothered to stay, I downgraded the request to “Planned” and put it through the CAB process. If the requester dumped it on one of their team and made them stay, I mentally marked them half a point down and factored it in at the next emergency.

The change was remarkable. I was no longer in the office on my own every evening. I was not there with someone else either. I was simply not there as, when you made the emergency a little bit inconvenient to the requester, it magically stopped being an emergency.

There was another change. Fewer cock-ups. Seeing as these changes now went through the CAB process and got slightly more testing {like, some testing}, the duff changes were more likely to be detected before they caused damage. My bosses went from regarding me as “not a team player” to “not a team player – but we kind of get your point now”.

So my advice is, if someone wants to try and make something your emergency, find some way of making sure it remains inconvenient to them. If they are willing to put up with the inconvenience, then it is a real emergency and you need to crack on with it.

Shrinking Tables to Aid Full Scans April 18, 2012

Posted by mwidlake in performance, SQL.

{This blog is about shrinking tables where the High Water Mark is higher than it needs to be. Prompted by one of the comments, I wrote a follow-up post on finding the High Water Mark and tables that consist mostly of empty space, which would be candidates for shrinking.}

This blog is about one of those things I do as almost an autonomous “not thinking about it” performance housekeeping task, one which I have been meaning to mention for ages.

There can be quite a lot to gain by checking out full scans on “small” tables and seeing if it is as efficient as it can be. Sometimes it is, sometimes it is not. Often it is “good enough”. Occasionally it is awful.

Recently I was just casting an eye over the “top 20” SQL on a system for any code generating a lot of consistent gets. I came across the below:

-------------------- ------------- --------- ----------- -------------- ---------- ------------
-------------- -------------- ---------- ---------- --------------
2-03-14/18:00:10 bk9b5u6zyvy59 36,262 36,262 90,634,158 7 36,261
320,102.3 326,920.9 0 0 546

It caught my eye as it struck me as a lot of buffer gets for what I thought was quite a small table and it is executed quite often. In this case, a few thousand times a day. It takes 2500 buffer gets per count(*). All buffered, but the CPU has to chunk through it.

I check, it is a small table:

ABCD01> select count(*) from dev_ODI_XXXX.snp_session;

  COUNT(*)
----------
        73
ABCD01> @tab_lst
Enter value for tab_name: snp_session

------------------------------ ------------------------------ -------------

-- and a quick check on those key columns

-------- -------------------- ------------ ---------- --------------- --------------- ---- -----

The table IS small, only 73 rows. It is also interesting that the stats are out by quite a percentage, but the table is still seen as small according to the stats. This indicates the table is quite volatile, but consistently small. Another thing to notice is that the number of distinct values for each column in the WHERE clause is only 4, and thus an index lookup would not be very specific. With such a small number of records in the table and a not very specific index, I “know” that an index lookup would not be of much benefit {BTW everything that you “know” in this way – do check it in reality every now and again…}. I don’t look at adding an index, I look at why such a small table needs so many buffer gets.

So, why so many buffer gets?

First of all, I need to be careful. The original source of information I was looking at was V$SQL and that holds cumulative stats for all executions of the same SQL statement. Thus the buffer gets could have been very high a while ago (say, yesterday) and low now, with the average at 2500 buffer gets per select even though right now it is running fine. So I verify that the code is still doing a lot of work for such a small number of records:
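The cumulative figures and the per-execution average can be pulled straight from V$SQL (the sql_id is the one from my system above):

```sql
-- Average buffer gets per execution, from the cumulative V$SQL stats:
select executions
      ,buffer_gets
      ,round(buffer_gets / greatest(executions,1)) gets_per_exec
from   v$sql
where  sql_id = 'bk9b5u6zyvy59';
```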

set autotrace on

Elapsed: 00:00:00.99

Execution Plan
Plan hash value: 1837310352
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 15 | 546 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 15 | | |
|* 2 | TABLE ACCESS STORAGE FULL| SNP_SESSION | 1 | 15 | 546 (1)| 00:00:01 |

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - storage("SESS_STATUS"='D' AND "AGENT_NAME"='DEV_ODI_AGENT')
       filter("SESS_STATUS"='D' AND "AGENT_NAME"='DEV_ODI_AGENT')

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       2013  consistent gets
          0  physical reads

That’s still a lot of consistent gets to scan all of 75 rows and do a count (which is the sort aggregate step). A second to process and 2013 consistent gets to look at 75 rows in a skinny table – just trust me it is a skinny table. And the optimiser cost the full scan step at 546 “equivalent single IOs”.
{Oh, if anyone is wondering about the “table access storage full” operation and the predicate step storage(“SESS_STATUS”=’D’ AND “AGENT_NAME”=’DEV_ODI_AGENT’), it’s because this system is an Exadata box. Just ignore this for now, it is not relevant to this issue. I’ll blog about it *next week*}.

What is the problem?

Well, let’s see how big the table is.

ABCD01> @seg_dets
Enter value for seg_name: snp_session
Enter value for owner: dev_odi%
Any Key>

-------- --------------- --- -------- ---------- --------- ---- ------- -------
DEV_ODI_ SNP_SESSION TAB DEV_ODI_ 16,384 2,048 31 64 1024

The table is 2048 blocks in size, the consistent gets to look at the table is 2013. That’s close.

The problem is that when you full scan a table, it scans up to the high water mark of that table, ie the highest block that Oracle knows there has been data in. As Oracle inserts new data into a table and extends it, it keeps a note of the highest block data has been put into, the High Water Mark (HWM). When you delete data from a table, Oracle marks individual blocks to be reused as they empty (drop below a certain percentage full). But Oracle does NOT check whether many blocks have been fully emptied so that it could reduce the high water mark. In fact, there could well be a few records in the first block of the table and then nothing until a few records in the last block that has been used in the table, the High Water Mark. Oracle just scans all blocks and counts relevant records up to the high water mark. This is actually a very long-standing performance tweak: you can allocate GBs of space to a table but Oracle knows the very highest block it has put data into for that segment and only scans up to the High Water Mark. Usually, the HWM is useful.

I happen to know that this table used to have several thousand records in it but is now used far less extensively and holds only a few records. They are constantly deleted and new ones created. I have seen {very regularly, at least once a year} some extreme situations where a table has had many millions of rows in it at one point, setting the HWM to several hundred thousand blocks into the table segment, but the table now only holds a few hundred records. But a Full Table Scan scans all the way up to the HWM, taking several minutes to access a few hundred records. Often that table is being inefficiently cleared with a “delete from table_x” statement and then re-populated.

What is the answer? Well, I could truncate the table and set the HWM to zero, but this table actually holds some real data, so I can’t truncate it in this situation. I could MOVE the table, ie get Oracle to recreate the table and move the existing data into it. That would set the HWM of the new segment to block one and then raise it as it inserts the data neatly into the first few blocks. The third option, and the one I took, is to SHRINK the table. This basically tells Oracle to shuffle the data down to the start of the table and reset the HWM. You can do this on the fly and I have done it on relatively active systems, but I must give the usual advice to test this yourself, and you may have locking issues on a busy system that hits the table you shrink very hard. Also, you do need to enable row movement on the table for Shrink to work, as is demonstrated below:

ABCD01> alter table dev_ODI_XXXX.snp_session shrink space cascade;
alter table dev_ODI_XXXX.snp_session shrink space cascade
ERROR at line 1:
ORA-10636: ROW MOVEMENT is not enabled

ABCD01> alter table dev_ODI_XXXX.snp_session enable row movement;

Table altered.

ABCD01> alter table dev_ODI_XXXX.snp_session shrink space cascade;

Table altered.

Elapsed: 00:00:01.98

So, how does my little select perform now?

set autotrace on

Elapsed: 00:00:00.04

Execution Plan
Plan hash value: 1837310352
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 15 | 546 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 15 | | |
|* 2 | TABLE ACCESS STORAGE FULL| SNP_SESSION | 1 | 15 | 546 (1)| 00:00:01 |

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - storage("SESS_STATUS"='D' AND "AGENT_NAME"='DEV_ODI_AGENT')
       filter("SESS_STATUS"='D' AND "AGENT_NAME"='DEV_ODI_AGENT')

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads

Fabulous. The run time is very short and the number of consistent gets is tiny. You might notice that the Explain Plan still thinks the cost of the full scan will be 546 (ie the equivalent of 546 single block reads) but that is because I have not re-gathered stats. I left the system to do that automatically that night.

I’ll just mention one more thing. Shrinking a table is a bit like the old trick of rebuilding indexes to compact them. People misunderstood what rebuilding indexes was achieving and why the indexes were so much larger than they “needed” to be and would spend CPU and IO rebuilding indexes overnight – only for them to quickly explode back to the original size (creating a lot of overhead as they did). Shrinking oversized tables can have a great benefit but keep in mind that if some process pushes the HWM right up every day by shoving lots of data into the table and then clearing it down later, shrinking the table is just a waste of time. It gives benefit when the table really has shrunk down in size for a while. Or it only occasionally explodes to a massive size. I have in the past written “regular shrink” scripts for tables I know explode once in a while (eg once a week) but are usually small, but you need to know what the pattern is. Don’t just blindly shrink tables.
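If you do go down the “regular shrink” route, this is a minimal sketch {the table name is invented, and I assume row movement is already enabled}. Doing the COMPACT step first keeps the disruptive part – resetting the HWM itself, which needs a brief exclusive lock – as short as possible:

```sql
-- Shuffle the rows down towards the start of the segment; this can
-- run while the table is in use and does not move the HWM:
alter table my_volatile_table shrink space compact;
-- Then reset the HWM itself - a quick operation, but it takes a
-- short exclusive lock on the table:
alter table my_volatile_table shrink space;
```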

For me in this situation, the code scanning that table dropped out of my “top 20 most demanding SQL statements” and has stayed out of it for weeks. The code now completes in considerably under 0.1 seconds as opposed to a whole second, but no one has noticed. I’d removed about 5% of the workload of the system, but no one noticed. Yet it took me half an hour to investigate and fix, and I’d freed up a non-trivial chunk of processing resource. In my Performance Tuning heart I know I did a good job that day.

Friday Philosophy – The Abuse of Favours March 30, 2012

Posted by mwidlake in Friday Philosophy, humour, Perceptions, rant.
Tags: , , ,

You probably all recognise this situation:

Dave needs something doing that he can’t do himself – let’s say it is creating an API for the file management package. It isn’t your job to do but it is something you can do. Dave is blocked until the API is created.

So, being a nice person, you tell Dave you will see what you can do for him over the next couple of days.

So why is it that what Dave hears is “Dave, I love you more than life itself, I am dedicated to this task and I WILL complete it before the end of tomorrow. My other tasks, emergency production issues and the untimely demise of my cat are all secondary to this endeavour.”.

You see, 24 hours later, Dave is at your desk “When will this be done?! I’m blocked until I get this!!!”. If he’s the guy I had recently his next step is to slap his fist into his palm as he utters, almost shouts “I NEED this!”.

No. No you don’t need it. What you need is for that slap to be in your face, followed by “Wake up! You don’t go shouting at the guy digging you out of the hole!”.

I find this particularly unacceptable when the favour is fixing some mess that Dave created, or doing something Dave told his boss he had finished last week. Of course, those are exactly the situations where Dave is most likely to get upset, as he is in real trouble and most likely to commit that ultimate Favour sin:-

Dave to Boss: “I didn’t get my task done because Martin promised to create the API and he hasn’t. I’d have been done by now if I’d just done it myself”.

If you are thinking “Hmmm, I think I might have been ‘Dave’ recently” then Shame On You and go beg forgiveness. Of course, if you were ‘Dave’ you may well be the sort of sod who will twist the situation around in your head so it was not your fault anyway. Grrr, bad Dave.

For a while I gave up doing work favours as I got sick of the situation above playing out. Then I started doing favours again but being a bore about saying repeatedly, up front, that this was a favour, it was only if I had time, not to rely on me and, if it is that important, go ask someone else. Yeah, sounds really grumpy doesn’t it? That gave me a reputation for not being a Team Player (which is code for “mug”).

Now I have a rule system. As soon as someone starts getting demanding about the favour, I immediately challenge it. If they get shouty they lose their favour rights. No more favours for you until the requisite number of beers have been bought. It’s three.

Of course, you see this scene played out on help forums all the time. The initial message is nearly always in upper-case text speak, “PLS HLP ME, IS URGNT! CN U TELL ME HOW 2 DO MY JOB – THNX!!!”, and soon degrades into a helpful person asking for details of the exact problem while Mr Shouty demands more and more help. I don’t help. After all, this guy is never going to buy me a beer.

OUGN 2012 Third Day March 26, 2012

Posted by mwidlake in Meeting notes, Private Life.
Tags: , ,

The last day of the three (and the second on the ferry and of the conference proper) had a lot of talks I wanted to see, especially Dan Morgan talking about “Insanely Large Databases”. It was a good talk, with an interesting interlude when a very loud announcement let us know we had docked at Kiel; Dan handled it with aplomb. Dan was talking about one specific experience he had suffered recently and he covered quite a few things I did and some I had planned to cover but never got that far. He had more technical details about the issues he had encountered, so all in all it was probably of more immediate use to the audience than my talk. It was a very good session. I have to confess, there were times I laughed out loud as memories flooded in, prompted by his wise words. I fear others may have interpreted it differently but, honestly, I was laughing tears of shared pain.

I was also looking forward to seeing Uwe Hesse talk about Dataguard. I’d had the pleasure of spending a lot of time and a few beers chatting with Uwe over the last few days. His presentation was very well done (as it should be, he’s a professional trainer! He exceeded my expectations, though). And I loved the last bit, where he demonstrated how, under 11G R2 (R1 as well???), if you have a physical standby, a block corruption can be fixed “on the fly” and invisibly to the end user. I just love that feature and, though I knew about it already, seeing it demonstrated and the errors appearing in the alert log – though the user query runs fine – was sweet.

The rest of the sessions I saw were also good {Maria Colgan on preventing sub-optimal plans, which was mostly about avoiding implicit data conversions, something I think all developers and designers should have drummed into their heads with rubber hammers; Doug Burns on those OEM performance graphs, which continue to get better and better}, but I had to give in and go for a sleep. These anti-seasickness pills seem to work but make me dozy. I’d love it if those anti-travel-sickness pills were really placebos and I had a placebo side effect :-)

The last day was rounded off with a nice meal and one or two (or three, or four) beers in a bar and some excellent times. I of course spent time with the Other Martins (we could not disband the cluster too easily), Doug, Holger, Maria, our Norwegian hosts and many more of the other people there. If only I had managed to fit in the other 10, 15 people I wanted to see but I’m getting old and I was very, very, very tired.

I have to say, it was one of the best conferences I have ever been to. OUGN 2013? Get yourself on that boat.

OUGN 2012 Second Day – Out on the Open Seas March 23, 2012

Posted by mwidlake in Meeting notes.
Tags: , , ,
1 comment so far

As I said yesterday, I am not one for blogging about conferences. So what the heck, I’ll do another post on this one :-).

You might have picked up on the fact that I am not very good on the sea and have a lot of nervousness about this idea of spending 2 days on an ocean-going liner. So today is the day we move over to being on the water, the OUGN conference proper. I’m delighted to say I am not the only one nervous about this boat lark; Maria Colgan {or, as I should be calling her, “The Optimizer Lady”, since Doug Burns christened her that back at the UKOUG 2011 conference} feels the same. There is nothing better to reduce one’s apprehension about something than finding someone else who is just as anxious, if not more so. I suspect this is an evil way to think but I can’t help it.

The day went well, my overview of VLDBs could have maybe been a little more polished – I think I was trying to cover too many topics and not spending enough time on anything, apart from “why a small number of spindles is bad” which I spent too long on. But I did make two points that I think are major considerations with VLDBs and are actually not very technical. (1) If you are going to have to look after a massive database, try very, very, very hard to keep everything as simple, standardised and controlled as you can. (2) Find other people to talk to about VLDBs. Most companies that have them are reluctant to talk, either because of the industry type (Defense, Banks, Pharma) or a perceived giving away of corporate advantage (Banks, Pharma, Utilities and pretty much all companies really).

Anyhow, having done my talk I was now free for the next 2 days to enjoy other talks, socialise, relax – and keep internally checking if I felt nauseous or not. The sea has turned out to be very calm and the boat very stable. But I keep checking and, of course, that makes me feel a little anxious – and thus ill.

However, I have to say that the travel-sickness pills I have been taking do seem to be very effective. They are effective in making me feel woozy. But, and this is important, not ill. I’m having to rely on beer for the latter.

One thing I really, really like about this conference. Everyone is stuck on a boat, they can’t escape. Which means you get to see lots of people all day and it makes the whole thing have a nice sense of community.

Right, Maria is about to present again. I’m going to go and sit in the front row and sway lightly from side to side. Apparently it makes her feel even worse…

