
Big Discs are Bad
September 27, 2009

Posted by mwidlake in development, performance, Uncategorized.

I recently came across this article on large discs for databases by Paul Vallee. The article is over 3 years old but is still incredibly valid. It’s a very good description of why big discs are a problem for Oracle Database Performance. {Paul also introduces the BAHD-DB campaign – Battle Against Huge Disks for Databases, which I quite like}.

To summarise the article, and the problem in general, IT managers will buy big discs as they provide more GB per pound sterling. It saves money.
However, fewer discs are Bad For Performance. As an extreme example, you can now buy a single disc that is a TB in size, so you could put a 1TB Oracle database on one such disc. This one disc can only transfer so much data per second and it takes this one disc say 10ms to search for any piece of data. If you want the index entry from one place and the table row from another, that is at least two seeks. This will not be a fast database {and I am not even touching on the consideration of disc resilience}.

Now spread the data over 10 discs. In theory these 10 discs can transfer 10 times the total data volume and one disc can be looking for information while the others are satisfying IO requests {This is a gross over-simplification, but it is the general idea}.
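
To put rough numbers on that {a minimal back-of-envelope sketch in Python, using the 10ms seek figure above and ignoring transfer time, caching and queueing entirely}:

    # Back-of-envelope: random IOs per second for 1 spindle versus 10 spindles.
    # Assumes ~10ms per random read (seek plus rotational latency) and nothing else.
    SEEK_MS = 10
    iops_per_spindle = 1000 / SEEK_MS              # ~100 random IOs/sec per disc

    for spindles in (1, 10):
        print(f"{spindles:2d} spindle(s): ~{spindles * iops_per_spindle:.0f} random IOPS")
    # 1 spindle(s): ~100 random IOPS
    # 10 spindle(s): ~1000 random IOPS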

IT Managers will understand this 1-to-10 argument when you go through it.

Kind of.

But then discussions ensue about how many modern “fast” discs are needed to replace the old “slow” discs. It can be very, very hard to get the message through that modern discs are not much faster. A 1TB disc compared to a 4-year-old 100GB disc will not have a transfer speed 10 times faster and it will certainly not have a seek time ten times less; chances are the seek time is the same. And then there is the discussion of how much impact the larger memory caches of modern storage units have. Answer: (a) quite a lot, so long as it is caching what you want and (b) even if it is perfectly caching what you want, as soon as you have read a cache-sized set of data, you are back to disc IO speed.

Bottom line. Disc Drives are now slower in proportion to the disc acreage than they used to be.
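
Putting the same rough numbers against the acreage point {and assuming the old 100GB disc and the new 1TB disc both manage around 100 random IOPS, since seek times have barely moved}:

    # Random IOPS per GB: the ratio that has got worse as discs have grown.
    IOPS_PER_SPINDLE = 100                          # roughly unchanged over the years
    discs_gb = {"old 100GB disc": 100, "new 1TB disc": 1000}

    for name, capacity_gb in discs_gb.items():
        print(f"{name}: {IOPS_PER_SPINDLE / capacity_gb:.2f} random IOPS per GB")
    # old 100GB disc: 1.00 random IOPS per GB
    # new 1TB disc: 0.10 random IOPS per GB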

Anyway, I am certainly not the only person to have had these discussions, though I have had them for longer than most {due to my accidental history of having worked on VLDBs for so long}. There are certainly practitioners of Oracle Arts who understand all of this far better than I do, and one of them, James Morle, recently started blogging. It’s looking good so far. If he keeps it up for a month, I’ll put him on my blog roll 🙂

There is, however, one aspect of the Big Disc Performance issue that does not seem to get much mention but is something I have suffered from more than a couple of times.

As a Database Performance person you have had the argument about needing spindles not disc acreage and won. The IT manager buys enough spindles to provide the I/O performance your system needs. Success.

However, the success has left a situation behind. You need 10 spindles over a couple of RAID 10 arrays to give you the IO you need. 250GB discs were the smallest you could buy. So you have 1.25TB of available storage (RAID 10 halves the storage) and have a 500GB database sitting on it. There is 750GB of empty storage there…
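
{To spell out that arithmetic, in the same sketchy style as above}:

    # Spindles bought for IOPS; the capacity just falls out as a side effect.
    SPINDLES = 10
    DISC_GB = 250          # smallest disc you could buy
    DB_GB = 500

    usable_gb = SPINDLES * DISC_GB / 2             # RAID 10 mirrors, so half the raw capacity
    print(f"usable: {usable_gb:.0f}GB, database: {DB_GB}GB, spare: {usable_gb - DB_GB:.0f}GB")
    # usable: 1250GB, database: 500GB, spare: 750GB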

That 750GB of empty storage will not be left inviolate. Someone will use it. Someone will need “a bit of temporary storage” and that nice chunk of fast storage will be too inviting. Especially if it IS fast storage. It will be used.

Now your database, whose storage you specified to support said database, is sharing its storage with another app. An app that steals some of your IO and potentially {heck, let’s say it straight, WILL} impact your database performance. And the galling thing? Twice, I had no idea my storage had become shared until I started getting odd IO latency issues on the database.

You may be able to make a logical argument for the spindles you need at design time. But you have almost no chance of protecting those spindles in the future. But who said working life was easy? 🙂

Comments»

1. Neil Chandler - September 28, 2009

Martin,

After you have designed the database layout, lie about the size of the database to the SAN and Unix admins. If you need 1.25TB of space for a 250MB database, pad it out. Get all of the disk allocated to the filesystem (not just the bit you need) and, if it is necessary to maintain the lie to the Unix guys, create empty datafiles to occupy the space. This gives the impression of huge utilisation and prevents the space being “stolen”. OK, you might have backup performance problems but it will compress well.
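
{The filesystem-padding half of that can be as crude as a handful of big pre-allocated files. A hypothetical Python sketch rather than Oracle datafiles; the mount point and sizes are made up to match the 750GB example above, and posix_fallocate is Linux-only}:

    import os

    # Hypothetical filler files to make the "spare" fast storage look used.
    # posix_fallocate actually reserves the blocks rather than creating sparse files.
    MOUNT = "/u02/oradata/padding"     # assumed mount point on the fast storage
    FILLER_BYTES = 100 * 1024**3       # 100GB per placeholder file
    COUNT = 7                          # roughly the 750GB of spare space

    os.makedirs(MOUNT, exist_ok=True)
    for i in range(COUNT):
        path = os.path.join(MOUNT, f"do_not_delete_{i:02d}.dat")
        with open(path, "wb") as f:
            os.posix_fallocate(f.fileno(), 0, FILLER_BYTES)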

2. mwidlake - September 28, 2009

{Grinning at you} That’s just sneaky. I like it. Maybe a nice refinement of your plan would be to create a table or two with a BLOB column that would never be referenced and stuff them full of the same picture (nice ones of cats maybe).

Mind you, it could backfire. I got into an almighty argument with a Sys Admin/C programmer a few years ago who claimed Oracle was rubbish as my database took up 1.5 times as much space as the raw data did as flat files {and as you can probably appreciate, it was tough to create a database that skinny}. He would not listen to any arguments about indexing, ease-of-use, extra features etc and just went on about how much faster and more space efficient flat files, hash arrays and lots of C code were. Wish I’d left him to develop the app in ‘C’…

3. Andy - September 28, 2009

Faking the usage? This is funny. Why didn’t I think of that!! 🙂

Seriously, your comment about “…xxxGB of empty storage will not be left inviolate. Someone will use it….” is so true. It has been repeating itself for all of my customers, on the redolog mount ….. blah

4. Neil Chandler - September 28, 2009

For the record, your manager/budget holder needs to be in on the lie and explicitly approve it, or you might be facing an uncomfortable grilling if you get found out and questioned about it. Unless you are the manager and budget holder, of course.

mwidlake - September 28, 2009

Are you suggesting this might be a tactic that could “set you free upon the oceans of opportunity” Neil? And is that the sound of pounding feet coming up behind your desk? 🙂

Only joshing.

5. Log Buffer #164: a Carnival of the Vanities for DBAs | Pythian Group Blog - October 2, 2009

[…] Widlake noted that Big Discs are Bad, his response to an older article by Paul Vallée. Martin writes, “Bottom line. Disc […]

6. Database Sizing – How much Disk do I need? (The Easy Way) « Martin Widlake's Yet Another Oracle Blog - November 11, 2010

[…] of disk you need is only one consideration. If you want your database to perform well you need to consider the number of spindles. After all, you can create a very large database indeed using a single 2TB disc – but any actual […]

7. IOT Part 4 – Greatly Boosting Buffer Cache Efficiency « Martin Widlake's Yet Another Oracle Blog - August 8, 2011

[…] and those spindles are not themselves getting faster – I digress, for more details see posts Big Discs are Bad and IOPs and Form […]

