
How Fast for £1,000 – Architecture
August 5, 2010

Posted by mwidlake in Architecture, performance, Testing.

My previous post proposed the creation of “the fastest Oracle server for a grand”, or at least an investigation into what might be the fastest server. I’ve had some really good feedback {which I very much appreciate and am open to even more of}, so I think I’ll explore this further.

My initial ideas for the hardware configuration, written at the same time as the original post, were:

  • A single-chip, quad-core Intel Core i5 or i7 processor (I would like two chips, but the cost of multi-chip motherboards seems too high for my budget)
  • 8GB of memory as the best price point at present, but maybe push to 16GB
  • Multiple small, fast internal disks for storage, maybe expand via eSATA
  • Backup to an external drive (cost not included in the budget).
  • USB3 and use of memory sticks for temp and online redo.
  • If the budget will stretch, an SSD for the core database components, like key tables and index tablespaces (who does that any more!).
  • ASM or no ASM?
  • If I run out of internal motherboard connections for storage, can I mix and match with USB3, external eSATA or even gigabit Ethernet?

As for the Oracle database considerations, I have a good few things I want to try out also. In the past (both distant and recent) I have had a lot of success in placing components of the database in specific locations. I refer to this as “Physical Implementation” {Physical Implementation, if I remember my old DB Design courses correctly, also includes things like partitioning, extent management, tablespace attributes – how you actually implement the tables, indexes and constraints that came from logical data design}.

Physically placing components like undo and redo logs on your fastest storage is old hat, but I think it gets overlooked a lot these days.
Placing indexes and tables in different tablespaces on different storage is again an old and partially discredited practice, but I’d like to go back and have a new look at it. Again, I had some success improving performance with this approach as recently as 8 years ago, but never got to rigorously test and document it. {As an aside, one benefit I have been (un)fortunate to gain twice from putting tables and indexes in separate tablespaces is when a tablespace has been lost through file corruption – both times it was only an index tablespace, so I was able to just drop the tablespace and recreate the indexes.}
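
To make that concrete, here is a minimal sketch of this kind of placement – the file paths, object names and sizes are all invented for illustration, not recommendations:

    -- Tablespaces on different physical devices (paths are hypothetical)
    CREATE TABLESPACE tabs_fast DATAFILE '/u01/ssd1/tabs01.dbf' SIZE 8G;
    CREATE TABLESPACE inds_fast DATAFILE '/u02/ssd2/inds01.dbf' SIZE 8G;

    -- Table and its index deliberately placed on separate storage
    CREATE TABLE orders_hist (
      order_id NUMBER,
      created  DATE,
      payload  VARCHAR2(200)
    ) TABLESPACE tabs_fast;

    CREATE INDEX orders_hist_ix ON orders_hist (order_id)
      TABLESPACE inds_fast;

The same idea extends to undo and redo: the redo log members simply get created on whichever device proves fastest for small sequential writes.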

Then there is the use of clusters, IOTs, bitmap indexes and Single Table Hash Clusters (are you reading this, Piet?), which I want to explore again under 11g.
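
For instance, a minimal sketch of an IOT and a single table hash cluster – all names and sizings here are invented for illustration:

    -- Index-organised table: the whole row lives in the primary key index
    CREATE TABLE lookup_iot (
      code  VARCHAR2(10),
      descr VARCHAR2(100),
      CONSTRAINT lookup_iot_pk PRIMARY KEY (code)
    ) ORGANIZATION INDEX;

    -- Single table hash cluster: rows are located by hashing the key,
    -- so a lookup by order_id needs no index access at all
    CREATE CLUSTER orders_hc (order_id NUMBER)
      SIZE 256 SINGLE TABLE HASHKEYS 100000;

    CREATE TABLE orders_sthc (
      order_id NUMBER,
      cust_id  NUMBER,
      placed   DATE
    ) CLUSTER orders_hc (order_id);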

I don’t think I am going to bother with mixed block sizes in one DB; I think you need very specialist needs to make it worth the overhead of managing the various caches, and the CBO is not so great at accurately costing operations in non-standard block sizes {issues with the MBRC fudge factor being one}. But I think I will re-visit the use of “keep” and “recycle” caches. For one thing, I want to show that they are just caches with a name and nothing special, by using the recycle cache as a keep cache and the keep cache as a recycle cache.
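
That swap needs nothing more exotic than sizing the two pools and assigning objects “against type” – a rough sketch, with sizes and table names purely illustrative:

    -- Carve out the two named caches (sizes are arbitrary here)
    ALTER SYSTEM SET db_keep_cache_size    = 256M SCOPE=BOTH;
    ALTER SYSTEM SET db_recycle_cache_size = 256M SCOPE=BOTH;

    -- Deliberately reversed: the hot lookup table goes into RECYCLE...
    ALTER TABLE hot_lookup   STORAGE (BUFFER_POOL RECYCLE);
    -- ...and the big, rarely re-read table goes into KEEP
    ALTER TABLE big_scan_log STORAGE (BUFFER_POOL KEEP);

If they really are just plain caches with names, the hot table should stay cached in the recycle pool just as happily as it would in the keep pool.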

Should I be using RAT for testing all of this? I said I was not going to use any special features beyond Enterprise edition but RAT could be jolly useful. But then I would need two servers. Is anyone willing to give me the other £1000 for it? I’d be ever so grateful! 🙂

Comments

1. Alan - August 7, 2010

I’ve made a wild assumption that you’ll be putting this on Linux?
Interested in how you will benchmark this?
Any free tools for performance testing?

mwidlake - August 7, 2010

Hi Alan,

I address these points a little in my first posting. I am pretty sure the server will be Linux. I’ll look at Dom Giles’ Swingbench to maybe do the benchmarking, but I will hunt around the internet to see if anyone has been kind enough to put something else out there already which is free and easy to use.

If the worst comes to the worst, I can use my ancient PL/SQL skills and Oracle’s job scheduler to create workload 🙂
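
Something along these lines, perhaps – the run_fake_oltp procedure is hypothetical and would hold the actual DML, but the scheduler plumbing is standard:

    -- Spawn several repeating jobs to simulate concurrent sessions
    BEGIN
      FOR i IN 1 .. 8 LOOP
        DBMS_SCHEDULER.create_job(
          job_name        => 'FAKE_LOAD_' || i,
          job_type        => 'STORED_PROCEDURE',
          job_action      => 'run_fake_oltp',  -- hypothetical workload proc
          repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
          enabled         => TRUE);
      END LOOP;
    END;
    /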

2. Martin Berger - August 7, 2010

A big fortune gave me the opportunity to rethink and refine my comment:
My first thought was: more data needed!
Primarily, I’d like to know the size of the DB you’d like to run on this server. You can serve a DB of several TB, but if the DB is only some GB in size, that will lead to different hardware.
Also the required availability would be of some interest.
I’m sure you have some assumptions: you are only counting hardware costs – no software licensing, no cost for manpower. We will use this later.
In more detail, I’m afraid you are suffering from a kind of Compulsive Tuning Disorder.
Maybe some smaller steps would bring an even better solution:
I suggest starting with the smallest (cheapest) possible system, just with reasonable extensions in mind.
With such a system, start your well-known tuning: as licensing and manpower are ‘for free’ in this idea, use the tuning and diagnostics packs as well as you can. RAT is a fine thing; it can even be used on the same server. Even Advanced Compression can save a lot of resources. Partitioning – pretty much a default for me 😉
Invest enough manpower to find where you are losing time. First fix this within the application design. At the end, if you have the perfect software system and ‘physical implementation’, use the saved money to improve your biggest remaining time consumers. At the moment you cannot tell whether that will be hard disks, memory, CPU or even network cards, so invest your savings only once you know where they will fit best.
Finally, please do not ever tell any of ‘my’ developers or project managers about your idea. They might not understand why our IT is so much more expensive than yours.
As always, the correct answer to your question ‘How fast…?’ is: ‘It depends!’

Martin Widlake - August 10, 2010

Hi Martin,
Sorry for the delay, lots {non-work} going on!
I initially planned to create a 1TB database so that it was much, much larger than my SGA. I think I will scale that down to half a TB for the sake of quicker backup and recovery {though I am not considering backup performance in my plans as yet}. It will not be small enough to fit on a single SSD 🙂
I am not considering redundant hardware or the impact of using e.g. Data Guard and, yes, I am only considering the cost of the hardware alone. No licence, no manpower, no sustaining coffee is factored in. This is not real-world, but I can’t afford to buy your datacentre.
You may be correct in that this exercise is a case of mild CTD, but I like to think of it as a kind of proof-of-concept. I also want to use it to try out all of the things you mention, like compression and partitioning and lazy commit and altering initialization parameters. I know that compression can boost performance, but by how much? What is the general level of impact of changing MBRC? If I have a cheap system with a repeatable workload (RAT could be very useful here), I can get a feel for the overall impact of such changes. Of course, “How fast? – It depends” is utterly true, but not very helpful when you are asked to speed up a system. If I know I can boost a DSS-type workload by 25% with compression, but that CPU then becomes my limiting factor, that is quite useful.

It will only be one system running fake workload, but knowing how this system responds is far more concrete than a theoretical expectation of some improvement by doing X. Once I have quantified what seems to be useful, I’ll get you to try it on your real-world systems, Martin 🙂
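
As a flavour of the sort of change being measured – the table and parameter setting below are illustrative sketches, not tested recommendations:

    -- Basic table compression (plain Enterprise Edition, effective for
    -- direct-path/DSS-style loads); all names are invented
    CREATE TABLE sales_hist (
      sale_id  NUMBER,
      sale_day DATE,
      amount   NUMBER
    ) COMPRESS;

    -- The MBRC experiment: vary the value and replay the same workload
    ALTER SYSTEM SET db_file_multiblock_read_count = 64 SCOPE=BOTH;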

3. Niall Litchfield - August 22, 2010

Hi Martin

I think you’ll find Swingbench pretty simple to get to grips with. I do have some Java in my background, but it was really fairly easy to get the hang of, and of course it does come with out-of-the-box demo schemas.

Anyway, I suspect that the answer to the fastest server I can get for £1,000 for Oracle demos and experiments is, well, probably not a server at all, but an Amazon EC2 instance (or 4 – for RAC). Now admittedly you can spend well over £1,000 this way, but I suspect it’ll take a surprisingly long time to do so.

There are all sorts of reasons to do what you are describing as well of course.

4. Fastest £1,000 server – what happened? « Martin Widlake's Yet Another Oracle Blog - July 12, 2011

[…] couple of people have asked me recently what happened to that “fastest Oracle server for a grand” idea I had last year, after all I did announce I had bought the […]

5. Fastest £1,000 server – what happened? « Ukrainian Oracle User Group - July 14, 2011

[…] couple of people have asked me recently what happened to that “fastest Oracle server for a grand” idea I had last year, after all I did announce I had bought the […]

