
What Have I Let Myself in For! – UKOUG this year November 16, 2011

Posted by mwidlake in development, Meeting notes, UKOUG.
7 comments

One of my favourite Oracle happenings of the year is fast approaching, the UK Oracle User Group technical conference {see/click on the link on the right margin}. I’ve blogged before (like here, last year) about why I think it is so good.

I try to present at the conference each year and I go whether I am presenting or not.

However, this year I think I might have got myself into trouble. I put forward 3 talks, expecting one or possibly two to get through. One on Index Organized Tables, one on IT disasters and one as an introduction to database design – I’ve moaned about it being a dying art so I figured I should get off my backside and do something positive about it. Each talk is in a different stream.

Well, the IOT talk was accepted, the Disasters talk was rejected and the Database Design intro was put on the reserve list. I was happy with that. I did three talks the first year I presented and promised myself never to be that stupid again {I spent most of the conference in the Speaker’s lounge or my hotel putting the talks together and tweaking slides}.

What I was not expecting was for the OakTable to ask me to do the IOT talk on the OakTable Sunday. Yikes! {The OakTable Sunday is a great opportunity to see a set of presentations by people who really know their stuff, in a smaller setting – you really want to get along to it if you can}. However, I had two reasons not to do it:

  1. I would have to miss one of the other OakTable talks.
  2. That thing I said about people presenting who really know their stuff.

I was told that (1) was not a problem as the talks would be repeated in the main conference, so I would have an opportunity to see the one I missed, and (2) I should stop being so British and just do it. {In fact, one friend on the OakTable told me off after the last conference for my criticism of my own presentation that year – “yes it was poor for you but normally you do a good job, so keep doing it”}. Of course I said yes.

Then it struck me: I was now presenting twice. Once on Sunday and again on Wednesday in hall 5 {I’ll probably not simply repeat the contents; at the OakTable Sunday I’ll assume a little more knowledge in the audience and dig a bit deeper technically, whereas in the main conference I’ll cover off the basics more, for those utterly new to IOTs}. At least it was only one set of slides to prepare.

A few days later I get a mail from the UKOUG office. A gap had appeared in the Development stream, would I be willing to do my “Oracle Lego – an introduction to database design” talk – but beef it up a little? Yes, sure. What did they mean by beef it up? The dev stream guys wanted something that went into more detail and was more about some of the more challenging systems I’ve worked on. So we exchanged a few emails and it quickly became apparent that some wanted the intro talk I had originally proposed, to get people going with database design. Others felt there would be more of an audience for a more in-depth talk, so could I span both? I had to say no. I remember attending my Oracle database design course in 1993. It was 5 days long. If my memory serves there was also a second course a couple of weeks later that covered more advanced design for 3 days! I can talk fast but not 8 days fast. They were effectively asking for two quite different presentations, an intro and then a review of more challenging examples. “OK” they said, “do Oracle Lego – but if another gap comes up, could you do the intermediate talk?”. Err, OK… So I wrote a quick synopsis for “Oracle Meccano” {Meccano is a toy construction kit made up of miniature girders, plates, bolts and stuff you can make proper things out of. If you liked Lego you would love Meccano as you got older}.

Since then I have been slightly anxious about getting an email from the UKOUG about a gap in the development stream for the conference…

This week I have started preparing the presentations for real {which so far has resulted in me breaking my server, finding a load of notes on blogs I was going to write and then doing this post}, so I contacted the ladies in charge of the agenda and asked if I was now off the hook for the Oracle Meccano talk. “Yes, no more gaps, it is not on the agenda”. Phew. “But could you put it together in case of last minute cancellations?”. *sigh*. OK.

So I will, but I’m not signing up to do any Session Chairing, which I was about to. If you see me at the conference and I look a little crazed, it’s because I got a mail from the UKOUG just before the event about a sudden gap…

At least there is no chance I will be asked to do the Disasters talk at short notice, I saw the scores it got from the paper reviewers :-).

IOT Part 6(B) – OLTP Inserts into an IOT November 10, 2011

Posted by mwidlake in development, performance, Testing.
14 comments

<..IOT1 – the basics
<….IOT2 – Examples and proofs
<……IOT3 – Significantly reducing IO
<……..IOT4 – Boosting Buffer Cache efficiency
<……….IOT5 – Primary Key Drawback
<…………IOT6(A) – Bulk Insert slowed down

IOT 6 part A covered the sort of insert work used to create an IOT or to push a large number of rows into it via an “insert into…select from” type statement, i.e. a bulk load.
In this, part B, I am going to test inserting a small number of rows (10,000) into a table already holding a million rows – something more like an hourly load or the inserts that would occur over a working day. A more OLTP-like test.

I’m going to do the tests against four things. A “non-insert” test to ensure my data creation is not taking a lot of resource {the Null test}, then against my three versions of the test table – a normal heap table, my IOT and my partitioned IOT. I run each test on each segment type three times, the minimum number needed to help identify variation that could invalidate the test.

You will probably be surprised by the results….

All three tables have the same column structure. The heap has indexes supporting the primary key and a unique constraint; the IOT and partitioned IOT are organized on a modified version of the primary key and have a second index for the unique key. Below is the creation statement for the partitioned IOT (the IOT is the same without the partition clause). You can get full creation details from IOT part 2:

create table transaction_IOT_P
(tran_type number(2)     not null
,tran_id   number(10)    not null
,ACCO_type number(2)     not null
,ACCO_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(1000) not null
,vc_2      varchar2(1000)
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint tip_pk primary key(ACCO_type,ACCO_id,cre_date,tran_type,tran_id) 
--  using index tablespace index_01
,constraint tip_tran_uq unique (tran_type,tran_id)
  using index tablespace index_01
)
organization index
tablespace data_01
partition by range  (cre_date)
(partition rm20110901 values less than (to_date('01-09-2011','DD-MM-YYYY')) 
  tablespace data_01
,partition rm20111001 values less than (to_date('01-10-2011','DD-MM-YYYY'))
  tablespace data_01
,partition rm20111101 values less than (to_date('01-11-2011','DD-MM-YYYY'))
  tablespace data_01
,PARTITION RMTOP  VALUES LESS THAN (MAXVALUE)
  tablespace USERS
)
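
For reference, the non-partitioned IOT is created in the same way minus the partitioning clause. A minimal sketch {the table and constraint names here are illustrative – see IOT part 2 for the exact scripts}:

create table transaction_IOT
(tran_type number(2)     not null
,tran_id   number(10)    not null
,ACCO_type number(2)     not null
,ACCO_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(1000) not null
,vc_2      varchar2(1000)
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint ti_pk primary key(ACCO_type,ACCO_id,cre_date,tran_type,tran_id)
,constraint ti_tran_uq unique (tran_type,tran_id)
  using index tablespace index_01
)
organization index
tablespace data_01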

The three tests are:

  • Insert 10,000 records for random accounts, in time order for the same day. This would be like processing data for or during a “day”.
  • Generating the 10,000 records in order of account, to better match the IOT structure, again for a single day. For any given account 0,1,2 or 3 records are created, to be a little more “real life”. This would be like loading a pre-processed set of data.
  • All 10,000 records are created for the one account, in datetime order, as an extreme example of inserting into an IOT data that is ordered to perfectly match the order of the IOT.

The code is written ‘badly’, ie a single record is prepared and inserted into the table at a time, within a loop of 10000 iterations. This is to avoid the bulk-processing that may benefit the heap table and also to better match the reality of an OLTP system. I do however only commit after all records are created for any given test, which is maybe a mistake but that’s what I did.

Now, before I go any further, does anyone want to guess by how much inserting into the heap table will be faster than inserting into the IOT (partitioned or otherwise)? Also, whether the partitioned IOT will be faster or slower than the IOT and what, if any, impact there will be by the ordering of data?

To allow you to think about that and not see the results by accident, I’ll stick one of the insert harnesses in here so you can see how I am doing it. Skip over it if you just want the results.

--
execute s_snap.my_snap(' finished non-insert test1')
--
-- Transaction_heap random data test
declare
v_num         number :=10000; -- number of accounts
v_offset      number :=-4;
v_str1        varchar2(60);
v_str2        varchar2(60);
v_tran_type   number(2);
v_tran_id     number(10);
v_acco_type   number(2);
v_acco_id     number(10);
v_cre_date    date;
v_vc_1        varchar2(1000);
v_vc_2        varchar2(1000);
v_date_1      date :=sysdate;
v_num_1       number(2) :=10;
v_num_2       number(2) :=15;
begin
v_str1:=dbms_random.string('U',60);
v_str2:=dbms_random.string('U',60);
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
for i in 1..10000 loop
    v_tran_type :=mod(1,3)+1; -- NB as written this always evaluates to 2; mod(i,3)+1 was perhaps intended
    v_tran_id   := 1000000+i;
    v_acco_type := 5+(trunc(dbms_random.value(1,3))*5); -- 10 or 15
    v_acco_id   :=trunc(dbms_random.value(1,v_num/2)); -- 1 to 5000
    v_cre_date  :=trunc(sysdate-v_offset)+(i/(60*60*24)); -- creation time climbs second by second through the "day"
    v_vc_1      :=substr(v_str1,1,51+mod(i,10));
    v_vc_2      :=substr(v_str2,1,46+mod(i,15));
    insert into transaction_heap
      (tran_type,tran_id,ACCO_type,ACCO_id,cre_date,vc_1,vc_2,date_1,num_1,num_2)
    values
      (v_tran_type,v_tran_id,v_ACCO_type,v_ACCO_id,v_cre_date
      ,V_vc_1,v_vc_2,v_date_1,v_num_1,v_num_2);
end loop;
commit;
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
end;
/
--
execute s_snap.my_snap(' finished th insert test1')
--
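
The ordered-by-account and single-account tests use the same harness; only the derivation of the account columns inside the loop changes. As a rough, illustrative sketch {my actual scripts differ a little – mail me for them}, the corresponding lines become something like:

    -- ordered-by-account sketch: the account id climbs with the loop counter
    -- (the real test gives each account 0, 1, 2 or 3 rows; this simplified
    --  version just gives every account two rows, still in ascending order)
    v_acco_id   :=trunc(i/2)+1;
    v_cre_date  :=trunc(sysdate-v_offset)+(i/(60*60*24));

    -- perfect-IOT-order sketch: a single account, strictly ascending datetime
    v_acco_id   :=123;
    v_cre_date  :=trunc(sysdate-v_offset)+(i/(60*60*24));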

Here are the results of three runs for all tests:

Elapsed time in seconds:

                  random insert                 ordered-by-account         perfect IOT order match
Segment type  Test 1    2    3    AVG    Test 1    2    3    AVG    Test 1    2    3    AVG
------------  ------ ---- ----   ----    ------ ---- ----   ----    ------ ---- ----   ----
No insert        2.1  0.1  0.1      -       0.1  0.1  0.1      -       0.1  0.0  0.1      -
Heap            52.3 20.5 11.4   28.1      33.4 13.8 10.7   19.3       9.1 13.0  9.8   10.6
IOT            137.1 83.7 26.4   82.4     110.8 63.6 39.0   71.2       6.4  6.7  4.0    5.8
Partn'd IOT     33.2  9.5  8.7   17.1      17.1 10.4 14.3   13.9       8.1  5.3  4.5    6.0

Allow me to summarise that.

First, the preparation of the data and cycling around the loop – the no-insert tests – is negligible. {The first no-insert test takes 2 seconds due to the overhead of parsing and collecting dictionary information for the first statement run.} We can ignore the overhead of the test harness.

Much more significantly:

  • With each iteration of each test things get faster {with one odd exception}
  • For random and semi-ordered inserts, inserting into the IOT is slower than inserting into the heap by a factor of 3 to 4 (it takes 3 to 4 times as long) – which is probably what we all expected.
  • Inserting into the IOT is faster than inserting into the heap when the insert order matches the structure of the IOT, in the perfect_IOT test. Not just comparable but around 40% faster.
  • The partitioned IOT is as fast as or faster than the heap table, never mind the IOT. Overall the partitioned IOT takes 50% to 75% of the time the heap takes to be populated.

Hands up everyone expecting this? NB My hand would be down if I had not done similar tests before.

Why do we see these results?

I’ll just mention that the reason all the tests get quicker with each iteration is that the computer is doing the same thing over and over again and gets used to it – the relevant circuits in the chip are “oiling in”. Of course not, how silly – but it does seem to be how non-IT people think of it :-). Basically, more and more of each segment is getting cached, which reduces physical IO and speeds things up. You see this a lot in testing and you need to be constantly vigilant about it. You might think I should really run the tests until they reach a steady state, but that is a very false situation: on a real system you don’t tend to get the same thing run over and over and over again with nothing else running.

Much more interestingly, why is the partitioned IOT winning? Well, for the heap table, data is being poured into the leading edge of the table, which takes a little processing and requires new blocks to be created. However, there are also two indexes being maintained, the primary key and that unique key. For each insert Oracle has to find the right spot in each index (root block -> branch block -> leaf block -> check for row) and insert the new record, occasionally splitting a full leaf block.

For the IOT and the partitioned IOT there is no table to maintain; in effect there are just two indexes to maintain. For the normal IOT this benefit is outweighed by the cost of working with a less space-efficient primary key index. As the whole of the row is held in each index entry, fewer index entries fit per leaf block, so more blocks need to be read in from disc and more splits are needed. Thus the inserts into the IOT are slower.
But for the partitioned IOT, not only is there no table to maintain but the table (really the primary key index) is partitioned – on date. We are creating data for a single day at a time, so all the relevant blocks will be in the one partition (and it’s the latest and smallest partition), quickly all cached in memory and processed in memory. It’s the efficient use of the cache again. For both the IOT and the heap, the primary key segment being processed is larger and the inserts are spread across the whole index.
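
If you want to watch this index-maintenance work happening on your own system, the relevant session statistics can be checked before and after a test run. I use my own snapshot package (s_snap) for this; a bare-bones equivalent – a sketch, not my actual code – would be something like:

select sn.name, st.value
from   v$mystat   st
      ,v$statname sn
where  st.statistic# = sn.statistic#
and    sn.name in ('leaf node splits','leaf node 90-10 splits'
                  ,'branch node splits','session logical reads'
                  ,'physical read IO requests')
order by sn.name
/

Run it before and after the insert loop and difference the values to see how much block splitting each segment type is causing.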

Ordering the data to match the index order (the ordered_by_account tests) helps for all three segment types – heap, IOT and partitioned IOT. We have two indexes and the data is in effect ordered for both, as one is on the columns we are ordering the data creation by and the other, the unique key, is populated in ascending order. The gain is about 25-30% and is due to working through the indexes in order as opposed to skipping about. If the indexes were significantly larger than the buffer cache, or this was a highly active system where the turnover in the cache was very high (indicated by a low buffer cache hit ratio :-) ), then the benefit would be higher.
The perfect ordering test (perfect_iot_order_match) is a special case. Not only is the data being processed in the order of the indexes, but only one part of the index is being updated – that for a single account. In this situation maintaining the index is so efficient that the time saved by not having to process a heap table means both the IOT and IOT_P win out over the heap. So in this special case, inserts into IOTs are faster than inserts into a heap table.

And finally a quick summary:

  • Inserting records into an IOT carries an overhead of roughly 200-400%, depending on the number of indexes
  • Ordering the data to match the IOT gives a 25-30% advantage, but it similarly aids maintenance of the primary key index on the heap
  • Partitioning the IOT such that a smaller chunk of the index is processed can be highly beneficial. I don’t cover it specifically in the above, but you can probably appreciate that partitioning a heap can similarly benefit the processing of indexes on the heap
  • If you can organise your IOT such that data is created in only one or a few distinct points in the structure (you might have to tweak your data load too, of course), insert performance might match or even beat that of a heap table with the same indexes
  • Using methods to maximise efficient use of your buffer cache and reduce IO is a very powerful performance tuning aid, not just for selects

BTW I am conscious of the fact that the above may surprise many and you might wonder if my data or tests are flawed. I have stripped out most of the code from this post as it was just becoming massive and confusing, but contact me and I will happily supply all the example scripts and data.

IOT P6(a) Update November 8, 2011

Posted by mwidlake in Architecture, development, performance, Testing.
3 comments

In my last post, IOT part 6, inserts and updates slowed down, I made the point that IOT insert performance on a relatively small Oracle system was very slow, much slower than on a larger system I had used for professional testing. A major contributing factor was that the insert was working on the whole of the IOT as data was created. The block buffer cache was not large enough to hold the whole working set (in this case the whole IOT) once it grew beyond a certain size. Once it no longer fitted in memory, Oracle had to push blocks out of the cache and then read them back in next time they were needed, resulting in escalating physical IO.

I’ve just done another test which backs up this claim. I altered my test database so that the block buffer cache was larger – 232MB compared to 100MB in my first tests. The full IOT is around 200MB.
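
{I don’t show the change itself above. On a system where the cache is sized manually it is a one-liner, something like the below – a sketch only; if you are using automatic SGA or memory management you would adjust sga_target or memory_target instead.}

alter system set db_cache_size = 232M scope=both;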

Bottom line, the creation of the IOT was greatly sped up (almost by a factor of 4) and the physical IO dropped significantly, by a factor of 20. As a result, the creation of the IOT was almost as fast as that of the partitioned IOT. It also shows that the true overhead on insert of using an IOT is more like a factor of 2 to 4, as opposed to 6 to 8.

You can see some of the details below. Just to help you understand them, it is worth noting that I had added one new, larger column to the test tables (to help future tests) so the final segments were a little larger (the IOT now being 210MB as opposed to 180MB in the first tests) and there was a little more block splitting.

                        Time in Seconds
Object type           Run with       Run with
                     100MB cache    232MB cache
------------------  ------------    -----------   
Normal Heap table          171.9          119.4   
IOT table                1,483.8          451.4     
Partitioned IOT            341.1          422.6 

-- First reading 100MB cache
-- second reading 232MB cache 
STAT_NAME                            Heap           IOT       IOT P
-------------------------------- ---------- -----------  ----------
CPU used by this session            5,716         7,222       6,241
                                    5,498         5,967       6,207

DB time                            17,311       148,866      34,120
                                   11,991        45,459      42,320

branch node splits                     25            76          65
                                       25            82         107

leaf node 90-10 splits                752         1,463       1,466
                                      774         1,465       1,465

leaf node splits                    8,127        24,870      28,841
                                    8,162        30,175      40,678

session logical reads           6,065,365     6,422,071   6,430,281
                                6,150,371     6,544,295   6,709,679

physical read IO requests             123        81,458       3,068
                                      36          4,012       1,959

physical read bytes             2,097,152   668,491,776  25,133,056
                                1,400,832    34,037,760  16,048,128

user I/O wait time                    454       139,585      22,253
                                       39        34,510      19,293

The heap table creation was faster with more memory available. I’m not really sure why; the CPU effort was about the same as before and, though there was some reduction in physical IO with the larger cache, I suspect it might be more to do with both the DB and the machine having been recently restarted.

All three tests are doing a little more “work” in the second run due to that extra column and thus slightly fewer rows fitting in each block (more branch node and leaf node splits), but this just highlights even more how much the IOT performance has improved, which correlates with a massive drop in physical IO for the IOT creation. If you check the session logical reads they are increased by a very small, consistent amount. Physical read IO requests have dropped significantly and, in the case of the IOT, plummeted.

I believe the 90:10 leaf node splits are consistent as that will be the maintaining of the secondary index on ACCO_TYPE and ACCO_ID, which are populated in order as the data is created (derived from rownum).

What this second test really shows is that the efficiency with which you are able to make use of the database cache is incredibly significant. Efficiently accessing data via good indexes or tricks like IOTs and hash tables is important but it really helps to also try and consider how data is going to be recycled within the cache or used, pushed out and then reused. A general principle for batch-type work seems to me to be that if you can process it in chunks that can sit in memory, rather than the whole working set, there are benefits to be gained. Of course, partitioning can really help with this.

{If anyone is wondering why, for the heap table, the number of physical IO requests has dropped by 70% but the actual number of bytes has dropped by only 30%, I’m going to point the finger at some multi-block read scans going on, either in recursive code or, more likely, in my code that actually gathers those stats! That would also help explain the drop in user IO wait time for the heap run.}

Just for completeness, here is a quick check of my SGA components for the latest tests, just to show I am using the cache size I claim. All of this is on Oracle 11.1 Enterprise Edition, on a tired old Windows laptop. {NB new laptop arrived today – you have no idea how hard it has been to keep doing this blog and not play with the new toy!!!} If anyone wants the test scripts in full, send me a quick email and I’ll provide them:

-- sga_info.sql
-- Martin Widlake /08
-- summary
set pages 32
set pause on
col bytes form 999,999,999,999,999 head byts___g___m___k___b
spool sga_info.lst
select * 
from v$sgainfo
order by name
/
spool off
clear col
--
NAME                             byts___g___m___k___b RES
-------------------------------- -------------------- ---
Buffer Cache Size                         243,269,632 Yes
Fixed SGA Size                              1,374,892 No
Free SGA Memory Available                           0
Granule Size                                4,194,304 No
Java Pool Size                              4,194,304 Yes
Large Pool Size                             4,194,304 Yes
Maximum SGA Size                          401,743,872 No
Redo Buffers                                6,103,040 No
Shared IO Pool Size                                 0 Yes
Shared Pool Size                          142,606,336 Yes
Startup overhead in Shared Pool            50,331,648 No
Streams Pool Size                                   0 Yes

Friday Philosophy – The One Absolute Requirement for System Success October 14, 2011

Posted by mwidlake in development, Friday Philosophy, Perceptions.
6 comments

What must you always achieve for an IT system to be a success?

  • Bug free? Never happens.
  • Within budget/time frame? That would be nice.
  • Includes critical business functionality? Please define critical.
  • Secure? Well, it’s important for many systems but then it is often lacking (even when it is important).
  • Built to specification? Well, we all know that’s wrong.

There is only one thing that an IT system must always achieve to be a success.

User Acceptance.

For an individual system other considerations may well be very important, but the user acceptance is, I think, non-negotiable.

{Slight update – This blog was written in 2011 with corporate IT systems in mind, where the use of the system is almost if not explicitly mandated. With the explosion of Apps and web-based systems over the last 5 years, for a lot of IT systems the users, the customers, have to be attracted to the system. We now call User Acceptance User Experience and it is, if anything, even more vital. If the user does not find the system easy to use and of some benefit, they will click off somewhere else.}

The user must get enough out of using the system for it to be worth their while, otherwise at best they will resent using it and at worst… Well, at worst they will use it but will put in any old rubbish to fulfill the dictate that it be used. In this worst case you would be better off if they did not use the system at all. Here are a couple of examples from my working past.

In the first one, I was involved in extending a hospital management system so that it kept track of the expected arrival and departure times for ward patients, allowing a prediction of when beds would become available and calculation of expected occupancy rates. Yes, this was a while ago (maybe 1990) and on a system that was old then. The information was needed by someone with the title “bed sister” {or something similar – and would now be titled “head of occupancy management”!} so that they could better manage the flow of patients and keep a higher bed usage ratio. Was this to make the hospital more efficient? No, it was to satisfy a politically demanded report to the NHS executive. Oh, the overall intention was to increase efficiency – but the report soon became more important than any true efficiency. So, we added columns in tables and fields on screens and prompts for the ward staff to fill in the required new information.

And they didn’t.

The nurses were busy, they were pretty demoralized {due to having recently been used by the government as a way to control public sector pay} and they had more nursing duties to do than they could manage. They were not going to waste a couple of minutes trying to check when Mrs Jenkins was going to be sent home when Mrs Leonard needed a bed pan. The nursing staff were given a hospital-wide telling off, this information had to be entered. So they put in the data – but guessed wildly, just to get management off their backs. Thus the design of the system enhancement was fine, the report was logically accurate, only the correct staff could run it, but we failed to achieve User Acceptance and thus the system was a failure.

So I added something else. It was a very crude screen (this was 1990) that showed a “diagram” of a ward – down the left and right sides of a VT220 screen you saw little oblong boxes, each labelled with a bed number and holding the patient name, the consultant’s initials, a medical specialty code and the arrival and departure date-times. This was based on some information we already had plus the new information we wanted, and the screen was quite basic, limited and slow to draw. But it was useful to the ward staff. They could find any patient, they knew who to call if there was an emergency {not the actual consultant of course, but their secretary :-)}, they could check when they expected a patient to leave, they could even see when a new patient was expected. They used it. They put in the expected departure times {sobering thought – the expected departure might not be alive}, patients would be booked into empty beds when they became available (often before they actually arrived), the bed nurse could plan new arrivals and the oh-so-important report could be run. We had achieved User Acceptance and the system became a success.

The second example is also from healthcare but from a different system in a different hospital. We were putting together a system to schedule outpatient clinics. We knew what we were doing, it’s pretty simple. You have some medical staff holding the clinic (a consultant and probably a senior house officer), a period for the clinic (3 or 4 hours) and a set of people to be seen, say 40. We gave some flexibility in slot lengths (some people need 5 minutes, some 15), checked for and stopped double booking and verified against other data such as correct speciality for the outpatient’s clinical data. We did not go and ask the patient admin staff about the spec, we knocked up the design and the screens and asked them to test. After all, I was very experienced now, I’d been doing these systems for 3, 4 years…

They very quickly came back to us and said it was rubbish. Oh dear.

We went and saw them. “What’s the problem?” There were a few – but the main one was that you could not double book a slot. Why would you want to do that? Do two patients really want to be consulted at the same time with the same doctor? {NB this can happen in some special cases – such as fertility treatment, psychology and a couple of others, but not Urology or Orthopedics!}
“Err, maybe, it might happen, can we just be able to double book?” OK, we could maybe alter things to allow two patients to be seen at the same time… The patient admin staff are not looking happy. The hospital liaison is looking confused & concerned and interrupts – “You can’t do that! Patient confidentiality can’t be broken!” he says. It got worse. “We need to be able to book… several people in one slot”. How many? “We need to book all the patients into the first slot, with the consultant, so the letters go out to them saying come to see Mr Winders at 1pm”. The admin staff are now looking very shifty.

If any of you have worked in the health service you are probably way ahead of me. The admin staff needed to book all the patients in at this first slot so that the ill and suffering would all turn up at the start of the session. The consultant would then run through his notes and see the two or three he was interested in – and then go and play golf. The Senior House Officer (or whoever) would then work through the rest of the patients for the following three or four hours. If you have ever had to turn up at the start of a consultancy session and sat there for three hours, now you know why. You see, back then, the consultant was only a very small step away from deity level (and I leave it to you to decide if it was a step up or down). What they said went and if they wanted to go and play golf or store 200 medical records in the boot of their car or refuse to speak to “that stupid idiot in renal medicine” then you worked around it. {I’m assured that things are a lot better now, but I’d love to know how it really is}.

We had designed a sensible solution. The users of the system needed a non-sensible {to our minds} solution. Even the NHS liaison chap had never appreciated exactly how much the consultants abused the system; he thought they just booked the people (s)he wanted at the start of the session, but no. The consultant decided that day who was interesting and as a result every patient had to be there at the start.

I count myself lucky that I learnt from direct experience so soon in my working life that (a) you have to deliver what the user will accept and (b) the only way to know what they want is to show them the system and talk with them.

In Defense of Agile Development (and Their Ilk) September 21, 2011

Posted by mwidlake in development, Management.
10 comments

In my previous post I asked the question “why doesn’t Agile work?”. I’m not sure the nuance of the question came over correctly.

I’d just like to highlight that the question I asked was “Why does agile not work”. It was not “Why is Agile rubbish”. I’ve said a few times in the past couple of weeks that I like the ideology of Agile and I am (and have been for years and years) a strong proponent of prototyping, cyclic development, test driven design and many other things that are part of the Agile or XP methodologies.

That distinction in the title is a really important distinction and one I’d hoped I’d made clear in my post. Looking back at my post though, I think it is clear I failed 😦. I highlighted reasons why I think Agile does not work and in my head I was thinking “if we avoid these, Agile could work” – but when you write something down it does not matter what is in your head if it does not reach the paper.

I’m actually frustrated that in the last few years I have not seen Agile really succeed, and that this seems to be the normal situation, judging by the response you get when the topic of Agile comes up with fellow technicians and by the comments on my own blog.

However, on that post about Agile two people whose opinion I deeply respect came back at me to say “Agile does work!”. Cary Millsap, who many of you will have heard of as the “Method R” guy and the person behind Oracle’s Optimal Flexible Architecture. And Mike Cox, who most of you won’t have heard of, but Mike taught me a lot about sensible development back in the 90’s. He’s one of the best developers I have ever had the pleasure of working with and I know he has had great success with Agile and RED. I’m not sure if they read my post as “Agile is Rubbish” or whether they are, like me, simply frustrated that it can work but so often does not.

So I’ve been thinking about this a lot this weekend and I was helped by Cary’s paper on the topic that he mentioned in his comment. I’d highly recommend downloading it as it is an excellent description not only of why Agile can help but also of how, and of some of the pitfalls {I’d started my own post on that, but go read Cary’s}. I should add, you can see Cary present his case for Agile at the UKOUG conference this year.

So where does this bring me to? Well, I think “Is Agile good or bad” has become almost an “IT religion” topic: people love it or loathe it, and that is based on what they have seen of the methodology in real life. No, that’s wrong – it is based on what they have seen that has been labelled with that methodology in real life. Or worse, it is based on the anecdotal opinion of those around them. The thing is, if you look at what XP or Agile Programming is supposed to consist of, most of us would agree that a great deal of it makes sense in many situations. I’d disagree with some of the details in Cary’s paper but overall I’m in strong agreement. Sadly, what Agile and XP are supposed to be is not well matched by what you see on the ground in most cases. So even if these methodologies are right for the situation, what has been implemented is not the methodology but probably more a slap-dash process that simply jettisons documentation, design and proper testing. This whole thread sprung from my lamenting the demise of database design, and several of the comments highlighted that the introduction of Agile seemed to equate, at least in part, with the demise of design. As Mike and Cary say, and as I think anyone who has successfully utilized Agile would say, design is an integral part of Agile and XP methodology.

Agile can and does work. But many things can and do work, such as taking regular exercise to keep healthy or regularly maintaining your house to keep it weathertight. Like Agile, both take effort but the overall benefit is greater than the cost. And like Agile, do it wrong and you can make things worse. If your window frames are starting to rot and you just slap a new layer of top-coat on them all you will do is seal in the damp and rot and hide the problem – until the glass falls out. Going for a regular 5 mile run is good for you – but not if you are 10 stone (60KG) overweight and have not run in years. A 5 mile run is also not a good idea if you want to be a long-jumper. Right training (methodology) for the right aim. Also, just like keeping healthy, house maintenance or anything that takes effort but works, proponents tend towards extremism – probably as a reaction to the constant {perceived} pig-headedness of critics or the failure of people to just do what now seems so sensible to them {think reformed smokers}. I’ll have to buy Cary and Mike pints to make up for that jibe now, and promise them it was not aimed at them personally…

Sadly, the reality is, Agile does not work 90% of the time it is tried. So, does that mean Agile is actually rubbish? Or at least, not fit for purpose, because many companies are not able to use it? Companies are there to achieve something and the IT systems are part of achieving that something. If Agile cannot aid that IT department then Agile is the wrong way for that department and company.

*sigh* I’ve gone on and on about this and still not got to my own main point, which is this.

– Can we identify the reasons for Agile and XP failing?
– Having identified the reasons, can we fix them in simple ways?
– Can we create some simple guidelines as to when a project should be more Agile and when it should be more up-front design?

I’d love to know people’s opinions on those three points above.

Friday Philosophy – Why doesn’t Agile work? September 16, 2011

Posted by mwidlake in development, Friday Philosophy, Management.
13 comments

Why doesn’t Agile Development Methodology seem to work?

I’m going to say right here at the start that I like much of what is in Agile; for many, many years I’ve used aspects of Rapid Application Development {which Agile seems to have borrowed extensively from} to great success. However, after my post last week on database design, many of the comments were quite negative about Agile – and I had not even mentioned it in my post!

To nail my flag to the post though, I have not seen an Agile-managed project yet that gave me confidence that Agile itself was really helping to produce a better product, or to produce a product more quickly, and it most certainly was not producing a final system that was going to be easy to maintain. Bring up the topic of Agile with other experienced IT people and I would estimate 90% of the feedback is negative.

That last point about ongoing maintenance of the system is the killer one for me. On the last few projects I have been on where the culture was Agile-fixated I just constantly had this little voice in my head going:

“How is anyone going to know why you did that in six months? You’ve just bolted that onto the side of the design like a kludge and it really is a kludge. When you just said in the standup meeting that we will address that issue ‘later’, is that the same “later” that accounts for the other half-dozen issues that seem to have been forgotten?”.

From what I can determine after the fact, that voice turns out to be reason screaming out against insanity. A major reason Agile fails is that it is implemented in a way that has no consideration for post-implementation.

Agile, as it is often implemented, is all about a headlong rush to get the job done super-quick. Ignore all distractions, work harder, be completely focused and be smarter. It really does seem to be the attitude of those who impose Agile that by being Agile your staff will magically come up with more innovative solutions and will adapt to any change in requirements just because they work under an agile methodology. Go Agile, increase their IQ by 10 points and their work capacity by 25%. Well, it doesn’t work like that. Some people can in fact think on their feet and pull solutions out of thin air, but they can do that irrespective of the methodology. People who are more completer-finishers, who need a while to change direction but boy do they produce good stuff – have you just demoralized and hamstrung them? Agile does not suit the way all people work, and to succeed, the people it does not suit need to be considered.

The other thing that seems to be a constant theme under Agile is utterly knackered {sorry, UK slang; knackered means tired, worn out and a bit broken} staff. Every scrum is a mad panic to shove it all out of the door and people stop doing other things to cope – like helping outside the group or keeping an eye on that dodgy process they just adopted because it needed doing. Agile fails when it is used to beat up the team. Also, I believe Agile fails when those ‘distractions’ are ignored by everyone and work that does not fall neatly into a scrum is not done.

I suppose it does not help that my role has usually been one that is more Production Support than development and Agile is incompatible with production support. Take the idea of the scrum, where you have x days to analyse, plan, design, unit test and integrate the 6 things you will do in this round. On average I only spend 50% of my time dealing with urgent production issues, so I get allocated several tasks. Guess what, if I end up spending 75% of my time that week on urgent production issues, and urgent production issues have to take priority, I can screw up the scrum all on my own. No, I can’t pass my tasks onto others in the team as (a) they are all fully assigned to their tasks and (b) passing over a task takes extra time. Agile fails when it is used for the wrong teams and work type.

I’ve come to the conclusion that on most projects Agile has some beneficial impact in getting tasks done, as it forces people to justify what they have done each and every day, encourages communication and gives the developers a reason to ignore anything else that could be distracting them as it is not in the scrum. Probably any methodology would help with all of that.

My final issue with Agile is the idiot fanatics. At one customer site I spent a while at, they had an Agile Coach come around to help the team become more agile. I thought this was a little odd as this team was actually doing a reasonable job with Agile: they had increased productivity and had managed to avoid the worst of the potential negative impacts. This man came along and patronisingly told us we were doing OK, but it was hard for us to raise our game like this; we just needed help to see the core values of Agile and, once we did, once we really believed in it, productivity would go up 500% {that is a direct quote, he actually said “productivity will go up by 500%”}. He was sparkly-eyed and animated and full of the granite confidence of the seriously self-deluded. I think he managed to put back the benefits of Agile by 50%, such was the level of “inspiration” he gave us. Agile fails when it is implemented like a religion. It’s just a methodology, guys.

I find it all quite depressing as I strongly suspect that, if you had a good team in a positive environment, doing a focused job, Agile could reap great rewards. I’m assured by some of my friends that this is the case. {update – it took my good friend Mike less than an hour to chime in with a comment. I think I hit a nerve}.

Friday Philosophy – The Dying Art of Database Design? September 9, 2011

Posted by mwidlake in Architecture, development, Friday Philosophy, rant.
35 comments

How many people under the age of {Martin checks his age and takes a decade or so off} ohh, mid 30’s do any database design these days? You know, ask the business community what they want the system to do, how the information flows through their business, what information they need to report on. And then construct a logical model of that information? Judging by some of the comments I’ve had on my blog in the last couple of years, and also the meandering diatribes of bitter, vitriolic complaints uttered by fellow old(er) hacks in the pub in the evening, it seems to be becoming a very uncommon practice – and thus a rare and possibly dying skill.

{update – this topic has obviously been eating at my soul for many years. Andrew Clark and I had a discussion about it in 2008 and he posted a really good article on it and many, many good comments followed}

Everything seems to have turned into “Ready, Fire, Aim”. I.e. you get the guys doing the work in a room, develop some rough idea of what you want to develop (like, look at the system you are replacing), start knocking together the application and then {on more enlightened projects} ask the users what they think. The key points are that development kicks off before you really know what you need to produce, there is no clear idea of how the stored data will be structured and you steer the ongoing development towards the final, undefined, target. I keep coming across applications where the screen layouts for the end users seem to almost be the design document and then someone comes up with the database – as if the database is just this bucket to chuck the data into and scrape it out of again.

The functionality is the important thing, “we can get ‘someone’ to make the database run faster if and when we have a problem”.

Maybe I should not complain as sometimes I am that ‘someone’ making the database run faster. But I am complaining – I’m mad as hell and I ain’t gonna take it anymore! Oh, OK, in reality I’m mildly peeved and I’m going to let off steam about it. But it’s just wrong, it’s wasting people’s time and it results in poorer systems.

Now, if you have to develop a simple system with a couple of screens and a handful of reports, it might be a waste of time doing formal design. You and Dave can whack it together in a week or two, Chi will make the screens nice, it will be used by a handful of happy people and the job is done. It’s like building a wall around a flower bed. Go to the local builders merchants, get a pallet of bricks, some cement and sand (Ready), dig a bit of a trench where you want to start (Aim) and put the wall up, extending it as you see fit (Fire). This approach won’t work when you decide to build an office block and only a fool from the school of stupid would attempt it that way.

You see, as far as I am concerned, most IT systems are all about managing data. Think about it. You want to get your initial information (like the products you sell), present it to the users (those customers), get the new (orders) data, pass it to the next business process (warehouse team) and then mine the data for extra knowledge (sales patterns). It’s a hospital system? You want information about the patients, the staff, the beds and departments, tests that need doing, results, diagnoses, 15,000 reports for the regulators… It’s all moving data. Yes, a well-designed front end is important (sometimes very important) but the data is everything. If the database can’t represent the data you need, you are going to have to patch an alteration in. If you can’t get the data in quickly enough or out quickly enough, your screens and reports are not going to be any use. If you can’t link the data together as needed you may well not be able to DO your reports and screens. If the data is wrong (loses integrity) you will make mistakes. Faster CPUs are not going to help either; data at some point has to flow onto and off disks, those slow spinning chunks of rust. CPUs have got faster and faster; rust-busting has not. So data flow is even more important than it was.

Also, once you have built your application on top of an inadequate database design, you not only have to redesign it, you have to:

  • do some quick, hacky fixes to get by for now
  • migrate the existing data
  • transform some of it (do some data duplication or splitting maybe)
  • alter the application to cope
  • schedule all of the above to be done together
  • tie it in with the ongoing development of the system as hey, if you are not going to take time to design you are not going to take time to assess things before promising phase 2.

I’m utterly convinced, and experience backs this up, that when you take X weeks up front doing the database design, you save 5*X weeks later on in trying to rework the system, applying emergency hacks and having meetings about what went wrong. I know this is an idea out of the 80’s guys, but database design worked.

*sigh* I’m off to the pub for a pint and to reminisce about the good-old-days.

Friday Philosophy – Tainted by the Team August 26, 2011

Posted by mwidlake in development, Friday Philosophy, humour, Management, rant.
3 comments

A while ago whilst working on one project, a colleague came back to his desk next to mine and exclaimed “I hate working with that team! – they are so bad that it makes everyone who works with them look incompetent!”

Now there is often an argument to be made that working with people who are not good at their job can be great for you, as you always look good in comparison {it’s like the old adage about hanging around with someone less attractive than you – but I’ve never found anyone I can do that with…}. It is to an extent true of course, and though it can seem a negative attitude, it is also an opportunity to teach these people and help them improve, so everyone potentially is a winner. I actually enjoy working with people who are clueless, so long as they will accept the clues. You leave them in a better state than when you joined them.

However, my friend was in the situation where the team he was dealing with was so lacking in the skills required that if you provided them with code that worked as specified, which passed back the values stated in the correct format derived from the database with the right logic… their application code would still fall over with exceptions – because it was written to a very, very “strict” interpretation of the spec.

In one example, the specification for a module included a “screen shot” showing 3 detail items being displayed for the parent object. So the application team had written code to accept only up to 3 detail items. Any more and it would crash. Not error, crash. The other part of the application, which the same people in the application team had also written, would let you create as many detail items for the parent as you liked. The data model stated there could be many more than 3 detail items. I suppose you could argue that the specification for the module failed to state “allow more than three items” – but there was a gap in the screen to allow more data, there was the data model and there was the wider concept of the application. In a second example, the same PL/SQL package was used to populate a screen in several modes. Depending on the mode, certain fields were populated or not. The application however would fail if the variables for these unused fields were null. Or it would fail if they were populated. The decision for each one depended on the day that bit of the module had been written, it would seem. *sigh*

The situation was made worse by the team manager being a skilled political animal, who would always try to shift any blame to any and all other teams as his first reaction. In the above examples he tried to immediately lay the blame with my colleague and then with the specification, but my colleague had managed to interpret the spec fine (he did the outrageous thing of asking questions if he was not sure or checked the data model). Further, this manager did not seem to like his people asking us questions, as he felt it would make it look like they did not know what they were doing. Oddly enough they did NOT know what they were doing. Anyway, as a consequence of the manager’s hostile attitude, the opportunity to actually teach the poor staff was strictly limited.

That was really the root of the problem, the manager. It was not the fault of the team members that they could not do the job – they had not had proper training, were unpracticed with the skills, siloed into their team, not encouraged to think beyond the single task in front of them and there was no one available to show them any better. The issue was that they were being made to do work they were not able to do. The problem, to my mind, was with the manager and with the culture of that part of the organisation that did not deal with that manager. He obviously did not believe that rule one of a good manager is to look after the best interests of your team. It was to protect his own backside.

But the bottom line was that this team was so bad that anything they were involved in was a disaster, and no one wants to be part of a disaster. If you worked with them, you were part of the disaster. So we took the pragmatic approach. When they had the spec wrong, if we could alter our code to cope, we would alter our code. And document that. It gave us a lot of work and we ended up having a lot of “bugs” allocated to our team, but it got the app out almost on time. Ongoing maintenance could be a bit of an issue but we did what we could on our side to spell out the oddities.

I still know my friend from above and he still can’t talk about it in the pub without getting really quite agitated :-)

IOT Part 5 – Primary Key Drawback – and Workaround August 17, 2011

Posted by mwidlake in Architecture, development, performance, Uncategorized.
18 comments

<..IOT1 – the basics
<….IOT2 – Examples and proofs
<……IOT3 – Significantly reducing IO
<……..IOT4 – Boosting Buffer Cache efficiency
……….>IOT6a – Slowing Down Insert
…………>IOT6(B) – OLTP Inserts

One of the drawbacks of IOTs is that they have to be organised by the primary key of the table. If your table does not have a primary key, it cannot be Index Organized.
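
{Just as a reminder of the syntax – covered properly back in IOT part 1 – the organization index clause hangs off a table that has a primary key. A minimal, purely illustrative example:}

create table demo_iot
(id   number(10)     not null
,val  varchar2(100)
,constraint demo_iot_pk primary key (id)
)
organization index
/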

I would argue that any table that holds persistent data (ie it is not transient data about to be loaded into the database proper or a temporary working set) should have a Primary Key. If I am working on a system and come across a table without a Primary Key I immediately challenge it. {There are occasional, valid reasons for a persistent table to lack a PK, but I confess I am struggling right now to come up with one – but I digress}. I’m a big fan of database-enforced referential integrity.

The problem is, if you are making a table into an Index Organized Table so that the records are clustered to match how you process the data, it could well be that the primary key is not related to how you want to order the data. Let me give you an example. {Oh, and for brevity, I’ll put the SQL statements to create the examples at the end of this post}.

mdw11> desc ACCOUNT
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ----------------------
 ACCO_TYPE                                             NOT NULL NUMBER(2)  ---PK
 ACCO_ID                                               NOT NULL NUMBER(10) ---PK
 NAME                                                  NOT NULL VARCHAR2(100)
 DATE_1                                                NOT NULL DATE
 NUM_1                                                          NUMBER(2)
 NUM_2                                                          NUMBER(2)

mdw11> desc TRANSACTION_HEAP
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ----------------------
 TRAN_TYPE                                             NOT NULL NUMBER(2)  ---PK
 TRAN_ID                                               NOT NULL NUMBER(10) ---PK
 ACCO_TYPE                                             NOT NULL NUMBER(2)
 ACCO_ID                                               NOT NULL NUMBER(10)
 CRE_DATE                                              NOT NULL DATE
 VC_1                                                  NOT NULL VARCHAR2(100)
 DATE_1                                                         DATE
 NUM_1                                                          NUMBER(2)
 NUM_2                                                          NUMBER(2)

This is a classic parent-child relationship, each account has a set of transactions. I’ve expanded on my prior example by:

  • changing the parent to be called ACCOUNT and giving it a two-part Primary Key, ACCO_TYPE and ACCO_ID.
  • Changing the child to be called TRANSACTION and giving it a Primary Key of TRAN_TYPE and TRAN_ID.
  • In a real system I would create a foreign key from TRANSACTION.ACCO_TYPE,ACCO_ID to the ACCOUNT table primary key.

Note that the Primary Key on the TRANSACTION table is NOT based on the account columns. Maybe in theory the primary key on the transaction table would be the account columns and the cre_date – if the cre_date held a datetime AND two records could not be created on the same second. If we used a timestamp then you might be able to argue no record would be created in the same fraction of a second – except that often transactions get given a fixed time. Midnight springs to mind (consider when you would add the accrued interest on a savings account). So, a new surrogate Primary Key is introduced, a transaction type and ID. TRAN_TYPE and TRAN_ID are the primary key of the TRANSACTION table.

I’d say that I see such two-part primary keys more often than single-column primary keys these days, possibly because so many databases receive information from other systems or even other applications on the same database.

As before, I create 10,000 parent records (ACCOUNT) and 10,000 random child records (TRANSACTION_HEAP) each day for 100 days. 

Also as before, I want to select information grouped by account. I want all the transactions for an account, not all transactions on a day or for a range of transaction IDs. Hopefully this is a scenario most of you will recognise. 

Selecting a sum of one of the non-indexed columns and a count of records for a given account takes quite a bit of effort on the part of the HEAP table:

select sum(num_1), count(*) from transaction_heap th where acco_type=10 and acco_id=123

SUM(NUM_1)   COUNT(*)
---------- ----------
      1201        116
Elapsed: 00:00:02.68

Execution Plan
---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    10 |  3466   (1)| 00:00:52 |
|   1 |  SORT AGGREGATE    |                  |     1 |    10 |            |          |
|*  2 |   TABLE ACCESS FULL| TRANSACTION_HEAP |   100 |  1000 |  3466   (1)| 00:00:52 |
---------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      13929  consistent gets
      13921  physical reads

Of course, it has to do a full table scan as my Primary Key is on two columns that have nothing to do with the query. I can repeat this statement as often as I like and it takes the same number of physical reads and consistent gets, as the blocks read by the full scan are not being kept in the block buffer cache.

I add an index on the ACCO_TYPE, ACCO_ID and CRE_DATE columns. The creation statement, taken from the script at the end of this post, is simply:
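create index trhe_acco_crda_idx
on transaction_heap(acco_type,acco_id,cre_date)
tablespace index_01
/

With the index in place I re-run the query: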

select sum(num_1),count(*) from transaction_heap th where acco_type=10 and acco_id=123

SUM(NUM_1)   COUNT(*)
---------- ----------
      1201        116
Elapsed: 00:00:00.01

Execution Plan
---------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                    |     1 |    10 |   103   (0)| 00:00:02 |
|   1 |  SORT AGGREGATE              |                    |     1 |    10 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| TRANSACTION_HEAP   |   100 |  1000 |   103   (0)| 00:00:02 |
|*  3 |    INDEX RANGE SCAN          | TRHE_ACCO_CRDA_IDX |   100 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        120  consistent gets
          0  physical reads

I ran it twice to get rid of the parse overhead, but the first time it did a load of physical reads to support those 120 consistent gets.

I could recreate the TRANSACTION_HEAP table as an IOT of course – but it will be organized by the TRAN_TYPE and TRAN_ID columns. That is useless to me. Even if I add a secondary index on the ACCO_TYPE, ACCO_ID and CRE_DATE columns it will at best be no better than the above HEAP table and, because the secondary index will hold rowid guesses and will sometimes have to use the primary key information to walk down the index, it will be worse. {I am not sure I have explained that bit yet about row guesses. Post 6?}

So, if you want the information organized in an order that is not helped by the Primary Key of the table, an IOT is useless to you. You cannot achieve that physical record grouping by the IOT method.

I am going to do something else though. I’m going to sort of change the rules to work around the issue.

As far as the physical implementation is concerned, a Primary Key is in effect just a unique index and two rules. The rules are that all the columns in the Primary Key must be mandatory and there can only be one PK on a table. I can have as many unique indexes as I like, so long as the key combinations lead to no duplicate rows. I can alter my Primary Key – it is not set in stone.
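As an illustration only {this is not in my test scripts and the table and constraint names are made up}, swapping the primary key of a heap table is just a couple of DDL statements, assuming nothing references it via foreign keys:

-- Illustration only: TRANSACTION_COPY is a hypothetical heap copy of the table.
-- Drop the existing primary key (and its index), then add a different one.
alter table transaction_copy drop primary key drop index;

alter table transaction_copy
  add constraint trco_pk primary key (acco_type,acco_id,cre_date,tran_type,tran_id)
  using index tablespace index_01;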

Before I go any further I am going to stress that I am about to abuse the concept of the Primary Key. I’d need a separate blog post to fully justify saying what a Primary Key is, but part of the concept is that no column must be derivable from other columns in the PK and it must be the minimum number of columns required to make the key unique.

We want to group the data by the account columns and the creation date. So let’s define a Primary Key that is ACCO_TYPE, ACCO_ID, CRE_DATE and whatever else we need to guarantee the key is unique. In our case that would be TRAN_TYPE and TRAN_ID – the current Primary Key! If I knew I would always want all records for the account, I could drop the CRE_DATE out of my fake Primary Key, but I know that the creation date is very often important. You may want activity for the last month, last quarter, a stated date or even an exact datetime. For all those cases, including the CRE_DATE column is highly beneficial.

So, I create TRANSACTION_IOT below and populate it with data.
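The creation statement {repeated from the script at the end of this post} is:

create table transaction_IOT
(tran_type number(2)     not null
,tran_id   number(10)    not null
,acco_type number(2)     not null
,acco_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(100) not null
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint trio_pk primary key(acco_type,acco_id,cre_date,tran_type,tran_id)
,constraint trio_tran_uq unique (tran_type,tran_id)
  using index tablespace index_01
)
organization index
tablespace data_01
/

Describing the table and looking at its statistics and indexes gives: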

desc transaction_iot
 Name                                                        Null?    Type
 ----------------------------------------------------------- -------- --------------
 TRAN_TYPE                                                   NOT NULL NUMBER(2)
 TRAN_ID                                                     NOT NULL NUMBER(10)
 ACCO_TYPE                                                   NOT NULL NUMBER(2)
 ACCO_ID                                                     NOT NULL NUMBER(10)
 CRE_DATE                                                    NOT NULL DATE
 VC_1                                                        NOT NULL VARCHAR2(100)
 DATE_1                                                               DATE
 NUM_1                                                                NUMBER(2)
 NUM_2                                                                NUMBER(2)

--
--

OWNER    TABLE_NAME          NUM_ROWS      BLOCKS AVG_L GLS ULS LST_ANL      PRT  SAMP_SIZE
-------- -------------- ------------- ----------- ----- --- --- ------------ --- ----------
MDW      TRANSACTION_IO      1000,000                94 YES NO  160811 23:05 NO     1000000
         T
INDEX_NAME      TYP PRT UNQ BL     L_BLKS   DIST_KEYS       CLUSTF     LB_KEY     DB_KEY LST_ANL
--------------- --- --- --- -- ---------- ----------- ------------ ---------- ---------- ------------
TRIO_PK         IOT NO  UNI  2     21,433    1058,381            0          1          1 160811 23:05
TRIO_TRAN_UQ    NOR NO  UNI  2      4,386    1000,000      999,405          1          1 160811 23:05

INDEX_NAME                   TABLE_NAME       PSN COL_NAME
---------------------------- ---------------- --- ------------------------------------------------
TRIO_PK                      TRANSACTION_IOT  1   ACCO_TYPE
TRIO_PK                      TRANSACTION_IOT  2   ACCO_ID
TRIO_PK                      TRANSACTION_IOT  3   CRE_DATE
TRIO_PK                      TRANSACTION_IOT  4   TRAN_TYPE
TRIO_PK                      TRANSACTION_IOT  5   TRAN_ID
TRIO_TRAN_UQ                 TRANSACTION_IOT  1   TRAN_TYPE
TRIO_TRAN_UQ                 TRANSACTION_IOT  2   TRAN_ID

Now let’s select our data from that IOT.

select sum(num_1),count(*) from transaction_IOT th where acco_type=10 and acco_id=123

SUM(NUM_1)   COUNT(*)
---------- ----------
      1030         97
Elapsed: 00:00:00.00

Execution Plan
-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |     1 |    10 |     5   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE   |         |     1 |    10 |            |          |
|*  2 |   INDEX RANGE SCAN| TRIO_PK |   100 |  1000 |     5   (0)| 00:00:01 |
-----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          5  consistent gets
          0  physical reads

5 consistent gets. It has walked down the IOT and scanned 3 blocks to collect that data. Our IOT based on an abused Primary Key does the job of supporting range scans efficiently, with the benefits to the Block Buffer Cache I referred to in IOT4.

That “Primary Key” I created is NOT a real Primary Key. It is not the minimum number of columns I need to uniquely identify a record. My Primary Key is on ACCO_TYPE, ACCO_ID, CRE_DATE, TRAN_TYPE and TRAN_ID – the account, the datetime of the transaction and the transaction identifier. What if I was to alter the datetime by a second? I could create a record with the same account and the same transaction id as an existing record, but a second into the future. That is just wrong. After all, the whole point of the TRAN_TYPE and TRAN_ID is to uniquely identify a record. If I created the new record I just described, there would be two records for the one TRAN_TYPE/TRAN_ID.

I protect against this ability to create incorrect records by also creating a UNIQUE KEY on the table, on the columns TRAN_TYPE and TRAN_ID. This is the unique index TRIO_TRAN_UQ displayed in the information above. A Primary Key is usually the referenced parent of any referential integrity, ie foreign keys, between this table and any children. However, a Unique Key can also be the target of Referential Integrity {I sketch an example of such a foreign key just after the failed insert below}. I cannot create a record with the same TRAN_TYPE/TRAN_ID as already exists due to this unique constraint {the example below actually uses TRANSACTION_IOT_P, the partitioned version of the table I introduce further down, which has the equivalent constraint TIP_TRAN_UQ}:

insert into transaction_iot_p
values
(2,163 -- existing transaction type and id
,10,11111
,sysdate,'ASCAFWEWEHGWSHERJH',SYSDATE,7,7)
/

insert into transaction_iot_p
*
ERROR at line 1:
ORA-00001: unique constraint (MDW.TIP_TRAN_UQ) violated

Elapsed: 00:00:00.34
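As an example of a Unique Key being the target of Referential Integrity, a hypothetical child table {not in my test scripts – the table and constraint names are made up} could reference TRAN_TYPE/TRAN_ID like this:

-- Hypothetical child table: the foreign key references the unique key
-- (TRAN_TYPE,TRAN_ID), not my abused primary key.
create table transaction_note
(tran_type number(2)     not null
,tran_id   number(10)    not null
,note_text varchar2(200) not null
,constraint trno_tran_fk foreign key (tran_type,tran_id)
   references transaction_iot (tran_type,tran_id)
)
tablespace data_01
/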

So, I have my IOT to support querying code and I have my Unique Constraint to police my original Primary Key and be used as the target for any Foreign Key requirements I might need. This is not a perfect solution – the design will look a little strange to anyone who looks at this database, and the Unique Key is supported by a secondary index on an IOT, which can have some issues. But it does work.

My “primary key” is no longer a true Primary Key. It is just a tool for allowing me to organise the data physically in a way that will support my application. That is what I meant about changing the rules.

I am willing to abuse a Primary Key in this way because of the performance benefits. It is a solution for a system where most of the query access is against a set of records which would be scatter-gunned across the table if you did not use some sort of physical grouping. If you are reading this and thinking “oh, I am not sure about you doing that to a Primary Key, Martin” then you are probably OK to consider this solution. If you can’t see a problem with it then you are either very used to turning off referential integrity and understand the consequences – or you simply do not understand what RI does for your database. If you are in the latter camp, do not even consider doing this. And if you are one of those people who work on data warehouses and for whom it is just part of the DW process to turn off RI, because that is what you do for data warehouses – DON’T do this!

OK, I’m nearly at the end of this topic but I want to touch on partitioning. You can range partition an Index Organized Table from 9i, I think; it is certainly supported in Oracle 10 upwards. Partitioning is important in this technique because a unique index must contain the partition key if the index is to be locally partitioned – otherwise the index must be global, ie the one index object references rows across all the partitions of the table.

Below is my table creation statement for the IOT organized by the account, creation date and transaction. The table is range partitioned by CRE_DATE, into months.

create table transaction_IOT_P
(tran_type number(2)     not null
,tran_id   number(10)    not null
,acco_type number(2)     not null
,acco_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(100) not null
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint tip_pk primary key(ACCO_TYPE,ACCO_ID,CRE_DATE,TRAN_TYPE,TRAN_ID) 
--  using index tablespace index_01
,constraint tip_tran_uq unique (TRAN_TYPE,TRAN_ID)
  using index tablespace index_01
)
organization index
tablespace data_01
partition by range  (cre_date)
(partition rm20110601 values less than (to_date('01-06-2011','DD-MM-YYYY')) 
  tablespace data_01
,partition rm20110701 values less than (to_date('01-07-2011','DD-MM-YYYY'))
  tablespace data_01
,partition rm20110801 values less than (to_date('01-08-2011','DD-MM-YYYY'))
  tablespace data_01
,PARTITION RMTOP  VALUES LESS THAN (MAXVALUE)
  tablespace USERS
)
/

You can see the definition of my fake Primary Key and the fact that it does not have a tablespace defined for it – the ‘organization index’ statement lower down causes the table to be an IOT and the segment will go into the “table” tablespace.
I then state my Unique Index to police the integrity of my table – TIP_TRAN_UQ.
I then state the partition clause, ‘partition by range (cre_date)’, followed by my initial partition definitions. It’s as simple as that to partition an IOT.

What gets created? A set of four segments for the IOT, which are primary key index segments of course, not table segments:

@seg_dets
Enter value for seg_name: tip_pk
Enter value for owner: mdw

OWNER    SEG_NAME        SEG TS_NAME     BYTES_K    BLOCKS exts   INI_K   NXT_K
-------- --------------- --- -------- ---------- --------- ---- ------- -------
MDW      TIP_PK RM201106 IP  DATA_01      45,056     5,632   59      64    1024
         01
MDW      TIP_PK RM201107 IP  DATA_01      60,416     7,552   74      64    1024
         01
MDW      TIP_PK RM201108 IP  DATA_01      61,440     7,680   75      64    1024
         01
MDW      TIP_PK RMTOP    IP  USERS        34,816     4,352   49      64    1024

Note that the SEG (type) is “IP” – my script decodes the type into a short mnemonic and IP is Index Partition. You can see the tablespaces those segments are in and the size of the segments. What about that unique index I created?

@seg_dets
Enter value for seg_name: tip_tran_uq
Enter value for owner: mdw

OWNER    SEG_NAME        SEG TS_NAME     BYTES_K    BLOCKS exts   INI_K   NXT_K
-------- --------------- --- -------- ---------- --------- ---- ------- -------
MDW      TIP_TRAN_UQ     IND INDEX_01     35,840     4,480   50      64    1024

It is a single segment, a normal index. I cannot have it as a locally partitioned index as it is a unique index and lacks the partitioning key in its definition.

This could be a problem. The usual reason you partition a table is because it is too large to be comfortably held as a single segment {and also for the benefit of partition exclusion, but you don’t usually need that on small tables!}. This means that the global index to support that primary key is going to be large. Now, I made a “mistake” when I created my partitioned IOT – I did not create a partition for this month, so some data has gone into the MAXVALUE partition (see the size of the segment above, around 34MB in 49 extents). If I split that last partition to create a new partition for this month and a new MAXVALUE partition, I will invalidate the global index and I will have to rebuild it. Very large indexes can take a long time and a heck of a lot of temporary space to gather and sort the data. That could be an ongoing maintenance nightmare.
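To make that concrete, the maintenance I have in mind would be something along these lines {a sketch only, it is not in my test scripts}:

-- A sketch, not part of my test scripts: split the MAXVALUE partition to
-- create a partition for the missing month plus a new MAXVALUE partition.
alter table transaction_iot_p
  split partition rmtop at (to_date('01-09-2011','DD-MM-YYYY'))
  into (partition rm20110901 tablespace data_01
       ,partition rmtop      tablespace users);

-- The split leaves the global unique index unusable, so it has to be rebuilt.
-- (Adding UPDATE GLOBAL INDEXES to the split does the index maintenance as
-- part of the DDL instead, at a cost in time and undo/redo.)
alter index tip_tran_uq rebuild;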

In a recent implementation I did using IOTs I did not create a global unique index to police the original primary key. I created a non-unique, locally partitioned index to support some queries using those columns, and the table had no children so no Foreign Keys were needed. But there was something else I needed to do, as I had removed the referential integrity rules for that table. Remember I said I am a fan of database-enforced referential integrity? Now, I “know” the application will not create data that breaks the removed Primary Key rule and I “know” I documented what I had done. And I also know that in 12 months’ time there will almost certainly be data with duplicate values for that Primary Key if it is not enforced somehow, because it always happens. I need to implement a little script to regularly check for duplicate TRAN_TYPE/TRAN_ID combinations being created {something like the check below}. If you remove RI from a relational database, you should replace it in some way. Otherwise, you will pretty soon have a non-relational database.
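The check is nothing fancy – something like the below, using my example table as a stand-in for the real one:

-- A simple, regular check for duplicates where the unique key is not enforced.
select tran_type, tran_id, count(*)
from   transaction_iot_p
group  by tran_type, tran_id
having count(*) > 1
/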

That’s it for this topic. The below is my example script for creating most of the above, in case anyone wants it or wants to verify what I have said.

-- test_iot2.sql
-- create test tables to show how you can work around the PK issue and
-- partition an IOT - and the possible impact on my PK workaround.
spool test_iot2.lst
--
set feed on timi on pause off
--
drop table account purge;
drop table transaction_heap purge;
drop table transaction_iot purge;
drop table transaction_iot_p purge;
--
-- create 10,000 parent records
create table mdw.account
(ACCO_type  number(2)     not null 
,ACCO_id       number(10)    not null 
,name     varchar2(100) not null
,date_1   date          not null
,num_1    number(2)
,num_2    number(2) 
,constraint ACCO_pk primary key(ACCO_type,ACCO_id) 
 using index tablespace index_01
)
tablespace data_01
/
insert into account
select 10 
,rownum
,dbms_random.string('U',mod(rownum,10)+50)
,sysdate-(mod(rownum,500)+1000)
,mod(rownum,99)+1
,trunc(dbms_random.value(0,100))
from dual connect by level <= 5000
/
insert into account
select 15 
,rownum
,dbms_random.string('U',mod(rownum,10)+50)
,sysdate-(mod(rownum,500)+1000)
,mod(rownum,99)+1
,trunc(dbms_random.value(0,100))
from dual connect by level <= 5000
/
--
-- create the table to hold the children as a heap table
create table transaction_heap
(tran_type number(2)     not null
,tran_id   number(10)    not null
,ACCO_type number(2)     not null
,ACCO_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(100) not null
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint trhe_pk primary key(tran_type,tran_id) 
 using index tablespace index_01
)
tablespace data_01
/
--
create index trhe_ACCO_crda_idx
on transaction_heap(ACCO_type,ACCO_id,cre_date)
tablespace index_01
/
-- populate the Heap table
-- 100 days, 10000 people
declare
v_num number :=10000; -- number of people
v_str varchar2(60);
begin
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
for i in 1..100 loop --days to do
  v_str:=dbms_random.string('U',60);
  insert into transaction_heap
    (tran_type,tran_id,ACCO_type,ACCO_id,cre_date,vc_1,date_1,num_1,num_2)
  select mod(rownum,3)+1 
   ,((i-1)*v_num)+rownum
   , 5+(trunc(dbms_random.value(1,3))*5)
   ,trunc(dbms_random.value(1,v_num/2))
   ,sysdate-(100-i) + (rownum/(60*60*24) )
   ,substr(v_str,1,51+mod(rownum,10))
   ,sysdate-(100-i) + ((mod(rownum,30)+1)/3)
   ,mod(rownum,20)+1
   ,mod(rownum,99)+1
  from dual connect by level <=v_num;
end loop;
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
end;
/
--
--
--
create table transaction_IOT
(tran_type number(2)     not null
,tran_id   number(10)    not null
,ACCO_type number(2)     not null
,ACCO_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(100) not null
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint trio_pk primary key(ACCO_type,ACCO_id,cre_date,tran_type,tran_id) 
--  using index tablespace index_01
,constraint trio_tran_uq unique (tran_type,tran_id)
  using index tablespace index_01
)
organization index
tablespace data_01
/
--
-- populate the IOT table
-- 100 days, 10000 people
declare
v_num number :=10000; -- number of people
v_str varchar2(60);
begin
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
for i in 1..100 loop --days to do
  v_str:=dbms_random.string('U',60);
  insert into transaction_IOT
    (tran_type,tran_id,ACCO_type,ACCO_id,cre_date,vc_1,date_1,num_1,num_2)
  select mod(rownum,3)+1 
   ,((i-1)*v_num)+rownum
   , 5+(trunc(dbms_random.value(1,3))*5)
   ,trunc(dbms_random.value(1,v_num/2))
   ,sysdate-(100-i) + (rownum/(60*60*24) )
   ,substr(v_str,1,51+mod(rownum,10))
   ,sysdate-(100-i) + ((mod(rownum,30)+1)/3)
   ,mod(rownum,20)+1
   ,mod(rownum,99)+1
  from dual connect by level <=v_num;
end loop;
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
end;
/
create table transaction_IOT_P
(tran_type number(2)     not null
,tran_id   number(10)    not null
,ACCO_type number(2)     not null
,ACCO_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(100) not null
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint tip_pk primary key(ACCO_type,ACCO_id,cre_date,tran_type,tran_id) 
--  using index tablespace index_01
,constraint tip_tran_uq unique (tran_type,tran_id)
  using index tablespace index_01
)
organization index
tablespace data_01
partition by range  (cre_date)
(partition rm20110601 values less than (to_date('01-06-2011','DD-MM-YYYY')) 
  tablespace data_01
,partition rm20110701 values less than (to_date('01-07-2011','DD-MM-YYYY'))
  tablespace data_01
,partition rm20110801 values less than (to_date('01-08-2011','DD-MM-YYYY'))
  tablespace data_01
,PARTITION RMTOP  VALUES LESS THAN (MAXVALUE)
  tablespace USERS
)
/
-- populate the IOT_P table
-- 100 days, 10000 people
declare
v_num number :=10000; -- number of people
v_str varchar2(60);
begin
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
for i in 1..100 loop --days to do
  v_str:=dbms_random.string('U',60);
  insert into transaction_IOT_P
    (tran_type,tran_id,ACCO_type,ACCO_id,cre_date,vc_1,date_1,num_1,num_2)
  select mod(rownum,3)+1 
   ,((i-1)*v_num)+rownum
   , 5+(trunc(dbms_random.value(1,3))*5)
   ,trunc(dbms_random.value(1,v_num/2))
   ,sysdate-(100-i) + (rownum/(60*60*24) )
   ,substr(v_str,1,51+mod(rownum,10))
   ,sysdate-(100-i) + ((mod(rownum,30)+1)/3)
   ,mod(rownum,20)+1
   ,mod(rownum,99)+1
  from dual connect by level <=v_num;
end loop;
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
end;
/
commit;
--
exec dbms_stats.gather_table_stats(ownname=>USER,tabname=>'ACCOUNT')
exec dbms_stats.gather_table_stats(ownname=>USER,tabname=>'TRANSACTION_HEAP')
exec dbms_stats.gather_table_stats(ownname=>USER,tabname=>'TRANSACTION_IOT')
exec dbms_stats.gather_table_stats(ownname=>USER,tabname=>'TRANSACTION_IOT_P')
--
select * from transaction_iot_p
where rownum < 10
/
insert into transaction_iot_p
values
(2,163 -- existing transaction type and id
,1,11111
,sysdate,'ASCAFWEWEHGWSHERJH',SYSDATE,7,7)
/
insert into transaction_iot_p
values
(3,163 -- new transaction type and id
,1,11111 -- but the whole of the rest of the record is the same.
,sysdate,'ASCAFWEWEHGWSHERJH',SYSDATE,7,7)
/
--
BEGIN
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
END;
/
--
spool off

IOT Part 4 – Greatly Boosting Buffer Cache Efficiency August 8, 2011

Posted by mwidlake in development, performance.
Tags: , , ,
11 comments

<..IOT1 – the basics
<….IOT2 – Examples and proofs
<……IOT3 – Significantly reducing IO
……..>IOT5 – Primary Key issues
……….>IOT6a – Slowing Down Insert
…………>IOT6(B) – OLTP Inserts

So far I have covered the basics of Index Organized Tables, created some examples and shown how IOTs can help reduce the cost of single row selects and then how they can greatly aid range scans. Follow the above links for details.

Now I’d like to show that the use of IOTs has the potential to make your block buffer cache (BBC) far more efficient. Going to disc is very, very slow compared to going to memory {NB solid state storage improves this situation but does not remove it}. The block buffer cache has always been critical to Oracle SQL select performance as it allows you to access data in memory rather than on disc, and in general the more block buffer cache you have, the faster your system will be.
{I am of the opinion that the BBC is even more important now than ever. As hard discs get larger we are seeing fewer and fewer spindles per GB of storage and, in essence, disc storage is effectively getting slower – because more data is hosted on the same number of spindles and those spindles are not themselves getting faster – I digress, for more details see posts Big Discs are Bad and IOPs and Form Factors}

In the scenario I’ve covered in my previous posts on IOTs we have a system where child data is coming in for many parents every day for 100 days. With a heap table the data pours into the growing end of the table, usually a record or two per parent each day and no guarantee that if two records come in, they will be put into the same block.

So, when you select a child record for a parent you get the situation shown below:

{Figure: for many systems, the Block Buffer Cache is holding a lot of data no queries asked for – collateral data}

When Oracle needs to collect a record from the table, it has to read the whole block. Oracle only reads in tablespace data in whole blocks. That block comes with many other records in it that you did not ask for or want. I refer to these as Collateral Data – innocent bystander data that has been pulled into the BBC just because it was in the same block as required data. The larger the block size, the more collateral data there is.

To get all the child records for the parent, you need to read all those individual table blocks holding one or two records of interest. For our 100 child records you will probably need to read in close to 100 table blocks. Your Block Buffer Cache is filling up with blocks where only one row out of each block is “of interest”. If that is one row out of 80 in a block, you are effectively wasting 98.75% of the space that table takes up in the block buffer cache.
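If you want to see this for yourself, something like the below {not in my test scripts} counts the distinct table blocks holding the child rows for one parent in my example heap table:

-- Not part of my test scripts: how many table blocks hold the rows for one account?
select count(distinct dbms_rowid.rowid_relative_fno(rowid)||'.'||
             dbms_rowid.rowid_block_number(rowid))  blocks_holding_rows
      ,count(*)                                     rows_found
from   transaction_heap
where  acco_type = 10
and    acco_id   = 123
/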

With an IOT the situation is very different. We have already seen in my previous post on reducing IO that for a range scan on the IOT, oracle does not need to go and collect records from blocks scattered throughout the table. It simply collects the IOT leaf blocks holding the relevant data. Not only does this require less IO, it also results in the fetched blocks mostly holding the required data. The percentage of collateral data is greatly reduced:

{Figure: IOTs are a powerful tool in reducing collateral data and using the BBC more efficiently}

Thus instead of 100 table blocks that mostly hold collateral data, you have 2 or 3 blocks holding mostly the data you are interested in. Your wastage, the collateral data, is about 33-50%. With my example tables from post IOT2, it is indeed an average of 2-3 IOT blocks holding all the data for a single parent and 100 heap table blocks holding the same data.

You can think of it another way.

With my IOT I use only 3% of the memory to cache a single set of records for a parent compared to that needed with a normal HEAP table.

Let’s extend that idea a little. Let’s say I have 100,000 customers and 5% of the customers are active.
Each customer has on average 500 * 200-byte activity records, ie 100K of data.
Each 8K Heap Table block holds 40 records; a very inefficient IOT holds 20 records per block {I’ll go into the details of this in a later post, but I am being a little unfair on the IOT here}.
There is some grouping of records for the customer so that those 500 records are spread over 400 blocks.
The whole working set is 400 blocks * (100,000*0.05) customers * 8K = 16,000,000K – roughly 16GB.

Now replace the table with an IOT keyed on customer ID.
Those 500 records would be spread over (500/20)+2 = 27 blocks. The +2 is the start and end of the range.
That is 27 * (100,000*0.05) * 8K = 1,080,000K – roughly 1GB.

Both would also need the overhead of an index structure to be cached also, for the Heap table it is the Primary Key index, for the IOT it is the rest of the IOT structure.

So you would need 16GB or so of Block Buffer Cache dedicated to caching the working set held as a heap table, and about 1GB of Block Buffer Cache dedicated to caching the IOT equivalent.

How many of you spotted that the space needed to “cache the heap table working set” is actually bigger than the whole table {100,000 customers at 100K each is only about 10GB}? Well, that is because with a 5% working set and 40 records per block, there is a fair chance that some of those Collateral Data records in each block are for the 5% working set. Over all, almost every block will hold data for two or three active users. This is one of the complications of working out how efficient your block buffer cache is likely to be and I’ll have to leave that to another post. In reality you would need to hold nearly all of the heap table in memory to cache the whole working set, so still around 10GB. The IOT is still far more efficient.

I’ll just finish by saying that on one project I worked on we would have needed several hundred GB of Block Buffer Cache to hold the working set of the main tables if they were held as heap tables. That volume of memory was simply not available. With IOTs this reduced to about 40GB, which was available. The majority of the working set was able to stay in the SGA and it meant that, so long as the instance stayed up, that working set of data mostly stayed in cache. I was able to see on the live system that processing of the data for these active customers was mostly being supported by consistent gets and less than 1% disk reads.

So, by using IOTs I reduced consistent gets dramatically, made more efficient use of the block buffer cache and, as a result of that, reduced the number of physical reads needed to support the consistent gets.
