
Oracle Exadata – does it work? July 26, 2009

Posted by mwidlake in development, VLDB.

Does Oracle Exadata work? 

That is a tricky question as, unlike most Oracle database features, you can’t download it and give it a test.

You can try out partitioning, bigfiles, Oracle Text, interMedia {sorry, Multimedia}, all sorts of things by downloading the software. You can even try out RAC pretty cheaply, using either VMware or a couple of old machines and Linux, and many hundreds of Oracle techies have. The conclusion is that it works. The expert conclusion is “yes it works, but is it a good idea? It depends {my fees are reasonable}” :-).

I digress, this ability to download and play allows Oracle technophiles to get some grounding in these things, even if their employer is not currently looking to implement them {BTW how often do you look at something in your own private time that your company will not give you bandwidth for – only to have them so very interested once you have gained a bit of knowledge? Answers on a postcard please…}.

Exadata is another beast, as it is hardware. I think this is an issue.

I was lucky enough to get John Nangle to come and present on Exadata at the last UKOUG Management and Infrastructure meeting, having seen his talk at a prior meeting. John gave a very good presentation and interest was high. I have also seen Joel Goodman talk {another top presenter}, so I understand the theory. I have to say, it looks very interesting, especially in respect of what is, perhaps, my key area of personal expertise: VLDB, databases of tens of terabytes.

I don’t plan to expand here on the concepts or physical attributes of Exadata too much; it is enough to say that it appears to gain its advantage via two main aspects:

  • Intelligence is sited at the “disc controller” level {which in this case is a cheap 4-cpu HP server, not really the disc controller}, which basically pre-filters the data coming off storage so that only the data of interest is passed back to the database. This means that only blocks of interest are chucked across the network to the database.
  • The whole system is balanced. Oracle have looked at the CPU-to-IO requirements of data warehouses and decided on what seems to be a good balance; they have implemented fast, low-latency IO via InfiniBand and made sure there are a lot of network pipes from the storage up the stack to the database servers. That’s good.

The end result is that there is lots of fast, balanced IO from the storage layer to the database and only data that is “of interest” is passed up to the database.

It all sounds great in theory, and Oracle Corp bandy around figures of up to 100 times (not 100%, 100 times) speed-up for data warehouse activity, with no need to re-design your implementation. At the last M&I UKOUG meeting there was also someone who had tried it in anger, and they said it was 70 times faster. Unless this was a clever plant by Oracle, that is an impressive, independently stated increase.

I am still very interested in the technology, but still sceptical. After all, RAC can be powerful, but in my experience it is highly unlikely that by dropping an existing system onto RAC you will get any performance (or high availability) increase. In fact, you are more likely to just make life very, very difficult for yourself. RAC works well when you design your system up-front with the intention of working on the same data on the same nodes. {Please note, this is NOT the oft-cited example of doing different work types on different nodes, i.e. data load on one node, OLTP on another and batch work on the third. If all three are working on the same working set, you could well be in trouble. You are better off having all load, OLTP and batch for one set of data on one node, load-OLTP-batch for another set of data on another node, etc. If your RAC system is not working well, this might be why}. Similarly, partitioning is an absolutely brilliant feature – IF you designed it up-front into your system. I managed to implement a database that has scaled to 100TB with 95% of the database read-only {so greatly reducing the backup and recovery woes} as it was designed in from the start.
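
To give a flavour of the sort of up-front design I mean, here is a minimal sketch {the table, column and tablespace names are invented for illustration, not taken from the real system}:

    -- Range-partition the big table by date so each year lands in its own tablespace.
    CREATE TABLE orders (
      order_id    NUMBER       NOT NULL,
      order_date  DATE         NOT NULL,
      customer_id NUMBER,
      amount      NUMBER(12,2)
    )
    PARTITION BY RANGE (order_date) (
      PARTITION orders_2007 VALUES LESS THAN (TO_DATE('01-01-2008','DD-MM-YYYY'))
        TABLESPACE ts_orders_2007,
      PARTITION orders_2008 VALUES LESS THAN (TO_DATE('01-01-2009','DD-MM-YYYY'))
        TABLESPACE ts_orders_2008,
      PARTITION orders_curr VALUES LESS THAN (MAXVALUE)
        TABLESPACE ts_orders_current
    );

    -- Once a year's data is no longer updated, its tablespace can be made read-only;
    -- after one final backup it never needs backing up again.
    ALTER TABLESPACE ts_orders_2007 READ ONLY;
    ALTER TABLESPACE ts_orders_2008 READ ONLY;

The point is that you cannot easily retro-fit this: the partition key, the tablespace-per-period layout and the read-only policy all have to be there from the start.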

Where was I? Oh yes, I remain unconvinced about Exadata. It sounds great, it sounds like it will work for data warehouse systems where full table scans are used to get all the data and the Oracle instance then filters most of the data out. Now the storage servers will do that for you. You can imagine how, instead of reading 500GB of table off disc, across the network and into Oracle memory and then filtering it, the eight disc servers will do the filtering and send a GB of data each up to the database. It has to be faster.
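
If I ever do get my hands on a box, this is the sort of thing I would want to verify for myself {a sketch only: the SALES table is hypothetical, and the statistic names are as I understand them from the 11.2 documentation, so treat them as an assumption rather than gospel}:

    -- Run a query that ought to qualify for a smart scan...
    SELECT /* offload test */ COUNT(*)
    FROM   sales
    WHERE  sale_date >= DATE '2009-01-01'
    AND    channel_id = 3;

    -- ...then check that the plan shows TABLE ACCESS STORAGE FULL rather than a
    -- plain TABLE ACCESS FULL, and that far fewer bytes came back over the
    -- interconnect than were eligible for offload.
    SELECT n.name, s.value
    FROM   v$mystat s
    JOIN   v$statname n ON n.statistic# = s.statistic#
    WHERE  n.name IN (
             'cell physical IO bytes eligible for predicate offload',
             'cell physical IO interconnect bytes returned by smart scan');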

BUT.

What if you have some OLTP activity and some of the data is in the SGA? That is what stops full table scans working at multi-block read count levels of efficiency.

What happens if some of the table is being updated by a load process at the same time?

What happens if you want some of the data hosted under ASM with full Exadata performance brilliance, but you also have several 10’s of TB of less-active data you just want to store on cheap SATA RAID 5 discs? How does Exadata integrate then?

You can’t test any of this out. I did email John and ask about this inability to play with, and discover stuff about, a solution that is hardware and very expensive. He was good enough to respond, but I think he missed the point of my question {I should ask again, he is a nice chap and will help if he can}. He just said that the DBA does not have to worry about the technology, it just works. There are no special considerations.

Well, there are. And I can’t play with it as I would need to buy a shed load of hardware to do so. I can’t do that, I have a wife and cat to feed.

So even though Exadata sounds great, it is too expensive for anyone but large, seriously interested companies to look into.

And I see that as a problem. Exadata experts will only come out of organisations that have invested in the technology, or out of Oracle itself. And I’m sorry, I’ve worked for Oracle and as an employee you are always going to put the best face forward. So skills in this area are going to stay scarce unless it takes off, and I struggle to see how it will take off unless it is not just as good as Oracle says, but better than Netezza and Teradata by a large margin.

Does anyone have an Exadata system I can play on? I’d love to have a try on it.


Comments»

1. Noons - July 27, 2009

“you have several 10’s of TB of less-active data you just want to store on cheap SATA raid 5 discs as well?”

Indeed. This is one of the aspects of Exadata that hasn’t been explained satisfactorily. Yes: I am fully aware of the advantages of its architecture, but what happens when NOT ALL of my data is of the “super-IO-speed” nature – and therefore does not require the high level of expenditure associated with that?

We have very successfully implemented multiple tiers of storage speed with our SAN for our main DW. In the process we have saved heaps of moolah, as it was found we indeed did NOT need 800MB/s sustained I/O speed for ALL of it, only some portions. How I go about achieving similar savings with Exadata is still not clear. To me at least it isn’t.

Hey, without a shadow of a doubt, a fantastic piece of hardware. And I’d dearly love the chance to use one. But still a bit unclear how it can be integrated into a whole. Our whole, at least.

“He just said that the DBA does not have to worry about the technology, it just works”

ROFL! Where did I hear this one before?…

mwidlake - July 27, 2009

The ability to match storage to requirements, such as you have done Noons, is I suspect an underused architecture more people could implement. For one system I built we put the bulk of the data on slow, big SATA disks and the key 10% of the database on FC storage. It worked well. The next step was to look at MAID storage {Massive Array of Idle Disks}, which promised massive bulk storage at a very reasonable price and low running costs. We never quite got there and I have not seen such a system in reality yet. But if you have spent most of your budget on Exadata, linking it up with cheap bulk storage for non-critical data would help reduce the wallet pain.
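
As a sketch of what I mean by tying the two tiers together {the diskgroup, tablespace and object names are invented, and this is generic Oracle/ASM syntax rather than anything Exadata-specific}:

    -- Bulk, less-active data gets a tablespace carved from the cheap SATA diskgroup...
    CREATE TABLESPACE hist_data DATAFILE '+SATA_DG' SIZE 100G;

    -- ...and older partitions of the big tables are moved onto it, leaving only
    -- the hot slice of the data on the fast (or Exadata) storage.
    ALTER TABLE orders MOVE PARTITION orders_2007 TABLESPACE hist_data;
    -- Moving a partition leaves its local index partitions unusable, so rebuild them:
    ALTER TABLE orders MODIFY PARTITION orders_2007 REBUILD UNUSABLE LOCAL INDEXES;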

2. Doug Burns - July 27, 2009

Well, I know a few people who are looking at Exadata and some who’ve bought it, and I don’t think it’s really intended to be a general-purpose solution – but I emphasise that’s just my opinion.

So then it starts to become super-expensive if it’s just dedicated to a DW or two, right? But how many DWs do you think are sitting on their own storage arrays anyway, just so that they can achieve the performance levels they’re looking for? My guess – quite a few. I think that’s the target for Exadata, and those dedicated storage arrays don’t do off-load processing, which I think is the most important bit. Otherwise it’s just a bunch of commodity hardware, albeit in a nice configuration.

I think it’s good to raise the questions, but I’m not sure Oracle have positioned, or will position, it as a replacement for all of your existing systems and kit; it’s more a DW appliance that runs Oracle, so it has the decent configuration and smarts of an appliance plus the range of functionality Oracle already contains – and that’s not to be sniffed at!

Let me really put the cat amongst the pigeons, though.

What about HA?

OK, so we’re talking Data Guard and I’d be more than happy with that as an alternative to SRDF for Oracle. However, doesn’t that mean that you’re probably really talking about two boxes? So you thought *one* was expensive?!?

Still, at least there’s the half-size now…

mwidlake - July 27, 2009

I agree with your opinion Doug, it is not a general solution. I see it as a top-end solution to high performance requirements for those customers who have not just the need but also the money to implement it.

Your point about dedicated storage is spot-on. I had not thought about that {seems obvious now} but the shared enterprise storage solution seems ubiquitous these days. Even ASM-based databases have a tendency to be getting their storage from a shared enterprise-level solution. For Exadata you are basically buying dedicated hardware for the systems you implement on it. And yes, if you have DR, you could be looking at two or even three of these systems! It makes the potential to tie in cheap storage as well even more compelling.

{Incidentally, for a long time I’ve had a negative opinion of having one huge storage solution for several systems, especially where the storage solution is spreading everything over as many disks as it can. “it gives you maximum disk utilisation and bandwidth” say the vendors. Yes it does, so long as it works. Yes it does, across the whole set of systems attached to it. But it also helps ensure maximum contention and minimum visibility of who is using the system, at least from the end user’s perspective. If my database starts slowing down and it is because my IO wait times have gone through the roof, what do I do about it? Who is nicking my bandwidth? When will I get it back?}

3. Greg Rahn - July 27, 2009

One small correction: “4-cpu HP server” should be “2-cpu HP server”. The DL180 is a 2-socket (8 total cores) Harpertown server.

My comments on price: some may think that a DB Machine is expensive, but I would ask how much a SAN-based solution would cost that could deliver 14GB/s of scan bandwidth (and what would you plug those 35+ 4Gbps HBAs into)? My guess is certainly much more than a DB Machine, and it would certainly take up more than 42U/1 rack of space.

As someone who has had quite a bit of face time with the product with numerous customer POCs, I will say this: Once you go Exadata, you will wonder why you didn’t go sooner.

I cannot offer you time on a DB Machine, but I can offer one hour’s worth of live DB Machine action at Oracle Open World: The Terabyte Hour with the Real-World Performance Group. Highly recommended (though I may be slightly biased) =)

Doug Burns - July 28, 2009

Greg,

Is there any chance of this presentation being at UKOUG this year? I’ll hopefully see it in OOW, but not everyone will make it there …

4. Greg Rahn - July 28, 2009

@Doug

I guess that would depend on the availability of a slot to present it in. At this time it is unplanned, but things can change.

5. mwidlake - July 28, 2009

@Greg
That talk looks very interesting – unfortunately I won’t be at Oracle Open World.

The UKOUG conference agenda is full, I think {they have informed presenters whose papers have been accepted}, but some slots do come up. I think it would be very interesting if the talk could be repeated there – I am pretty sure it would be a very popular talk.

I wonder if I could get someone with TWO slots to give you one of his?…. {No names, no indication which side of the border they live on… :-) }

6. Doug Burns - July 28, 2009

I wonder if I could get someone with TWO slots to give you one of his?

LOL. I would be happy to, but I’m not sure I would want to muck the UKOUG around. He’s not having my one-hour ‘pictures’ slot, though ;-)

I’m not sure you can get a reliable net connection to the Exadata box, either.

Notwithstanding all that, I think it would be a great presentation for the UKOUG to get on the agenda.

7. Amin Adatia - February 22, 2013

Unless you have a DBA team that can think beyond the “don’t change anything – vanilla everywhere” frame of mind, Exadata will be just yet another piece of big iron. There are many things that need to be adjusted, especially the “on demand” resource allocation instead of 2 nodes for App X, 2 nodes for App Y, until all 8 nodes are gone.

mwidlake - March 1, 2013

I agree with you.

I need to blog on this topic again as I posted the above before I had worked on an Exadata system, i.e. in ignorance (and that WAS part of my point). Now I have some experience, I should qualify some things.

8. Amin Adatia - March 1, 2013

I think the out-of-the-box Exadata configuration has to be taken with a ton of salt. A lot of work has to be done to determine what the configuration should be for the application database deployed on Exadata. I don’t see anyone in the DBA teams providing an analysis of the workload. All that is being pushed is the saving in storage with COMPRESS FOR QUERY HIGH, even when Oracle’s own documentation and a zillion white papers suggest that for the environment “you have” the best would be COMPRESS FOR OLTP.
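
For reference, the syntax being compared is roughly this {a sketch with invented table names; the attribute only affects new data – existing rows are compressed on a move or direct-path load}:

    -- Hybrid Columnar Compression: best suited to data loaded in bulk and then queried.
    ALTER TABLE sales_hist COMPRESS FOR QUERY HIGH;
    -- OLTP compression: a lower compression ratio, but it tolerates conventional DML.
    ALTER TABLE sales_live COMPRESS FOR OLTP;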

