
With Modern Storage the Oracle Buffer Cache is Not So Important. May 27, 2015

Posted by mwidlake in Architecture, Hardware, performance.

With Oracle’s move towards engineered systems we all know that “more” is being done down at the storage layer and modern storage arrays have hundreds of spindles and massive caches. Does it really matter if data is kept in the Database Buffer Cache anymore?

Yes. Yes it does.

Time for a cool beer

With much larger data sets and the still-real issue of fewer disk spindles per GB of data, the Oracle database buffer cache is not less important than it was. It is even more important.

I could give you some figures but let’s put this in a context most of us can easily understand.

You are sitting in the living room and you want a beer. You are the oracle database, the beer is the block you want. Going to the fridge in the kitchen to get your beer is like you going to the Buffer Cache to get your block.

It takes 5 seconds to get to the fridge, 2 seconds to pop it open with the always-to-hand bottle opener and 5 seconds to get back to your chair. 12 seconds in total. Ahhhhh, beer!!!!

But – what if there is no beer in the fridge? The block is not in the cache. So now you have to get your car keys, open the garage, get the car out and drive to the shop to get your beer. And then come back, pop the beer in the fridge for half an hour and now you can drink it. That is like going to storage to get your block. It is that much slower.

Well, it is only that much slower if you live a 6-hour drive from your beer shop. Think of taking the scenic route from New York to Washington DC.

The difference in speed really is that large. If your data happens to be in the memory cache in the storage array, that’s like the beer already being in a fridge – in that shop 6 hours away. Your storage is SSD-based? OK, you’ve moved house to Philadelphia, 2 hours closer.

Let’s go get beer from the shop

To back this up, some rough (and I mean really rough) figures. Access time to memory is measured in microseconds (“us” – millionths of a second) down to hundreds of nanoseconds (“ns” – billionths of a second); somewhere around 500ns seems to be an acceptable figure. Access to disc storage is more like milliseconds (“ms” – thousandths of a second). Go and check an AWR report or statspack or OEM or whatever you use, and you will see that db file scattered reads are anywhere from the low teens down to, say, 2 or 3 ms, depending on what your storage and network are.

For most sites that speed has hardly altered in years because, though hard discs get bigger, they have not got much faster – and often you end up with fewer spindles holding your data, as you get allocated space, not spindles, from storage (and the total sustainable speed of hard disc storage is limited to the total speed of all the spindles involved). Oh, the storage guys tell you that your data is spread over all those spindles? So is the data for every other system, so you have maximum contention.
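
If you want to sanity-check your own figures rather than dig out an AWR report, something like the following against V$SYSTEM_EVENT gives the average single-block and multi-block read times since instance startup. This is only a rough sketch (it assumes you can see the V$ views, and TIME_WAITED_MICRO is in microseconds):

-- Rough sketch: average disc read times for the instance since startup.
select event,
       total_waits,
       round(time_waited_micro / nullif(total_waits, 0) / 1000, 2) as avg_ms
from   v$system_event
where  event in ('db file sequential read', 'db file scattered read');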

However, memory speed has increased over that time, and so has CPU speed (though CPU speed has really stopped improving now; it is more down to having more CPUs).

Even allowing for latching and pinning and messing around, accessing a block in memory is going to be at the very least 1,000 times faster than going to disc, maybe 10,000 times. Sticking to a conservative 2,000 times faster for memory than disc, that 12-second trip to the fridge equates to 24,000 seconds of driving. That’s 6.66 hours.

This is why you want to avoid physical IO in your database if you possibly can. You want to maximise the use of the database buffer cache as much as you can, even with all the new Exadata-like tricks. If you can’t keep all your working data in memory, in the database buffer cache (or In-Memory, or the results cache), then you will have to do that achingly slow physical IO, and that is when the intelligence-at-the-hardware comes into its own – true Data Warehouse territory.
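
To get a feel for how much of the workload your buffer cache is already soaking up, a rough sketch against V$SYSSTAT is below. These are instance-wide figures since startup, so treat them with the usual suspicion:

-- Rough sketch: logical reads (db block gets + consistent gets) versus
-- physical reads since instance startup.
select sum(decode(name, 'physical reads', value, 0))                          as physical_reads,
       sum(decode(name, 'db block gets', value, 'consistent gets', value, 0)) as logical_reads
from   v$sysstat
where  name in ('physical reads', 'db block gets', 'consistent gets');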

So the take-home message is – avoid physical IO, design your database and apps to keep as much as you can in the database buffer cache. That way your beer is always to hand.

Cheers.

Update. Kevin Fries commented to mention this wonderful little latency table. Thanks Kevin.

“Here’s something I’ve used before in a presentation. It’s from Brendan Gregg’s book – Systems Performance: Enterprise and the Cloud”

Friday Philosophy – How many Consistent Gets are Too Much? October 30, 2009

Posted by mwidlake in Perceptions, performance.

One of my good friends, Piet de Visser, commented on a recent post that “380 {consistent gets} is too much” per row returned. He is right. He is wrong. Which is it?

Piet is actually referring to a given scenario I had just described, so he is probably {heck, he is spot on} correct, as his comment was made in context – but it set me to thinking about the number of times I have been asked “is the number of consistent gets good or bad?” without any real consideration of the context. The person asking the question usually just wanted a black/white, good/bad ratio, which is something Piet also mentioned in his comment – the need to discuss such a ratio. I am on holiday in New England with my second bottle of wine, memories of having spent the day eating lobster, kicking through Fall leaves and sitting by a warm fire reading a book, so I am mellow enough to oblige.

Sadly, out of context, no such ratio probably exists. *sigh*. There evaporates the warm glow of the day :-).

The question of “is the number of consistent gets per row good or bad?” is a bit like the question “is the pay rate good enough?”. It really depends on the whole context, but there is probably an upper limit. If I am helping my brother fit {yet another} kitchen then the pay rate is low. He has helped me fit a few, I have helped him fit a few, a couple of pints down the pub is enough and that equates to about 30p an hour. Bog standard production DBA work? 30-60 pounds an hour seems to be the going rate. Project Managing a system move that has gone wrong 3 times already? I need to factor in a long holiday on top of my normal day rate, so probably high hundreds a day. £10,000 a day? I don’t care what it is, I ain’t doing it as it is either illegal, highly unpleasant, both or involves chucking/kicking/hitting a ball around a field and slagging off the ref, and I ain’t good at ball games.

I have a rule of thumb, and I think a rule of thumb is as good as you can manage with a question like “is {some sort of work activity on the database} per row too much?”. With consistent gets, if the query has fewer than 5 tables, no group functions and is asking a sensible question {like details of an order, where this lab sample is, who owes me money} then:

  • below 10 is good
  • 10-100 I can live with, but there may be room for improvement
  • above 100 per record, let’s have a look.

Scary “page-of-A4” SQL statement with no group functions?

  • 100-1,000 consistent gets per row is fine unless you have a business reason to ask for better performance.

Query contains GROUP BY or analytical functions? All bets are pretty much off unless you are looking at

  • a million consistent gets or 100,000 buffer gets, in which case it is once again time to ask “is this fast enough for the business”.

The million consistent gets or 100,000 buffer gets is currently my break-even “it is probably too much”, equivalent to I won’t do anything for £10 grand. 5 years ago I would have looked quizzically at anything over 200,000 consistent gets or 10,000 buffer gets but systems get bigger and faster {and I worry I am getting old enough to start becoming unable to ever look a million buffer gets in the eye and not flinch}. Buffer gets at 10% of the consistent gets, I look at. It might be doing a massive full table scan in which case fair enough, it might be satisfying a simple OLTP query in which case, what the Hell is broken?

The over-riding factor to all the above ratios though is “is the business suffering an impact as performance of the database is not enough to cope”? If there is a business impact, even if the ratio is 10 consistent gets per row, you have a look.
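
To get a rough feel for these ratios on a live system, V$SQL gives buffer gets (consistent gets plus current gets, so slightly more than consistent gets alone), executions and rows processed per statement. A sketch only, and the SQL_ID here is a made-up example:

-- Sketch: rough "gets per row" for a statement of interest.
select sql_id,
       buffer_gets,
       executions,
       rows_processed,
       round(buffer_gets / nullif(rows_processed, 0)) as gets_per_row
from   v$sql
where  sql_id = 'abcd1234efgh5';   -- hypothetical sql_id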

Something I have learnt to look out for though is DISTINCT. I look at DISTINCT in the same way a medic looks at a patient holding a sheaf of website printouts – with severe apprehension. I had an interesting problem a few years back. “Last week” a query took 5 minutes to come back and did so with 3 rows. The query was tweaked and now it comes back with 4 rows and takes 40 minutes. Why?

I rolled up my mental sleeves and dug in. Consistent gets before the tweak? A couple of million. After the tweak? About 130 million or something. The SQL had a DISTINCT clause. Right, let’s remove the DISTINCT. The first version came back with 30 or 40 thousand records, the second with a cool couple of million. The code itself was efficient, except it was traversing a classic Chasm Trap in the database design {and if you don’t know what a Chasm Trap is, well, that is because Database Design is not taught anymore, HA!}. Suffice to say, the code was first finding many thousands of duplicates and now many millions of duplicates.
So, if there is a DISTINCT in the SQL statement, I don’t care how many consistent gets or buffer gets are involved, or how much elapsed time. I take out that DISTINCT and see what the actual number of records returned is.
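
As a purely hypothetical example (the tables and columns below are made up), a classic chasm trap is two unrelated child tables joined only through their shared parent. Strip the DISTINCT and count what actually comes back:

-- Hypothetical sketch: ORDERS has many DELIVERIES and many INVOICE_LINES.
-- Joining both children through the parent multiplies the rows, which the
-- DISTINCT was quietly hiding.
select count(*)                   as all_rows,
       count(distinct o.order_id) as distinct_orders
from   orders        o
join   deliveries    d on d.order_id = o.order_id
join   invoice_lines i on i.order_id = o.order_id;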

Which is a long-winded way of me saying that some factors over-ride even “rule of thumb” rules. So, as a rule of thumb, if a DISTINCT is involved I ignore my other Rules of Thumb. If not, I have a set of Rules of Thumb to guide my level of anxiety over a SQL statement, but all Rules of Thumb are over-ridden by a real business need.

Right, bottle 2 of wine empty, Wife has spotted the nature of my quiet repose, time to log off.

Consistent Gets, db block gets and DML June 11, 2009

Posted by mwidlake in performance.

A few posts back I talked about consistent gets, db block gets and how I had realised I had got by for years without fully understanding the terms. I then described how I went about learning more.

Well, I am about to learn yet more on the subject.

Jonathan Lewis posted a question about just this recently and I’m pretty sure mine is the blog he was looking at. I won’t link to it as WordPress can automatically convert a link into a comment on other WordPress blogs and I don’t want to do that yet. You can find Jonathan’s post easily enough from my blogroll.

When I looked at consistent gets and db block gets I did so from a purely select point of view – and that is my downfall. Oracle does a lot more than select. DB block gets appear when data is being changed…

You can get a copy of my test script here and this is the full output.

Below is a printout of my test script. I simply create an empty copy of the EMP table, populate it, select from it and then delete from it. I do it twice so that the second time round all the code is parsed. All I am looking at right now is how much db block get activity there is.

set pause off timi on echo on
drop table emp2 purge;
spool dml_gets_test1.lst
set autotrace on
-- create an empty table from emp
create table emp2 as select * from emp where rownum < 1;
-- round one of tests, where parsing overhead exists
select empno,ename from emp2;
insert into emp2 select * from emp;
select empno,ename from emp2;
delete from emp2;
-- round two, just the statements
insert into emp2 select * from emp;
select empno,ename from emp2;
delete from emp2;
commit;
select empno,ename from emp2;
spool off
set autotrace off
drop table emp2 purge;

So, what do we see? {I’ve trimmed the output to show only what I think are the relevant bits}

Oh, this is on version 11.1.0.6 of Oracle on Windows Vista, 8k block size, db_file_multiblock_read_count 128 {wow, more than I thought}, straight as created from the download and install.

select empno,ename from emp2;
no rows selected

Execution Plan
----------------------------------------------------------
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    20 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| EMP2 |     1 |    20 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Statistics
----------------------------------------------------------
         44  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads

Just 8 consistent gets. No db block gets, even though the statement is being parsed.

insert into emp2 select * from emp;
14 rows created.

Execution Plan
----------------------------------------------------------
---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |    14 |  1218 |     3   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | EMP2 |       |       |            |          |
|   2 |   TABLE ACCESS FULL      | EMP  |    14 |  1218 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
        129  recursive calls
         19  db block gets
        107  consistent gets
          0  physical reads
       1672  redo size

As you can see, 129 recursive calls, 19 db block gets and 107 consistent gets. I believe a lot of that is due to parsing and maybe other things to do with the first time data is inserted into the table.

The second iteration gave the same plan but the statistics are very different:

Statistics
----------------------------------------------------------
          0  recursive calls
          3  db block gets
          8  consistent gets
          0  physical reads

This helps show the overhead of parsing. However, we still see 3 db block gets.

select empno,ename from emp2;

     EMPNO ENAME
---------- ----------
      7369 SMITH
      7499 ALLEN
{snip}
      7934 MILLER

14 rows selected.

Execution Plan
----------------------------------------------------------
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    14 |   280 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| EMP2 |    14 |   280 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Statistics (run two)
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         14  consistent gets
          0  physical reads

No db block gets, 14 consistent gets as the data is read. Remember, this statement was parsed at the start of this script.

delete from emp2;
14 rows deleted.

Execution Plan
----------------------------------------------------------
-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | DELETE STATEMENT   |      |     1 |     3   (0)| 00:00:01 |
|   1 |  DELETE            | EMP2 |       |            |          |
|   2 |   TABLE ACCESS FULL| EMP2 |     1 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------

First run:

Statistics
----------------------------------------------------------
          4  recursive calls
         15  db block gets
         15  consistent gets
          0  physical reads
       4036  redo size

Second run:

Statistics
----------------------------------------------------------
          0  recursive calls
         15  db block gets
          7  consistent gets
          0  physical reads
       4036  redo size

So, as you can see, with delete there appears to be plenty of db block gets activity that is nothing to do with parsing.
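
If you want to watch the same counters yourself without autotrace, a rough alternative is to query your own session’s statistics before and after each statement and take the difference. A sketch only, joining V$MYSTAT to V$STATNAME (it assumes you can see those views):

-- Sketch: the current session's get/read counters; run before and after a
-- statement and subtract to see what that statement did.
select sn.name, ms.value
from   v$mystat   ms
join   v$statname sn on sn.statistic# = ms.statistic#
where  sn.name in ('db block gets', 'consistent gets', 'physical reads');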

I’m not going to analyse or investigate any further; I’ll wait to see what Jonathan says, or until I have more free time.

What I do know for sure is that I need to know more about the details of db block gets and my advice that it is internal processing that you can do little about may be very questionable advice indeed…
