
Memory Changes Everything July 12, 2010

Posted by mwidlake in Architecture, performance.

I’ve got this USB memory stick which I use to carry around my scripts, documents, presentations, Oracle manuals and enough music to keep me going for a few days. It is an 8GB Gizzmo Junior and it is tiny. By tiny I mean as wide as my little finger, the length of a matchstick and about the same thickness as said matchstick. So small that I did indeed lose the damn thing for 6 months before I realised it had got trapped behind a credit card in my wallet.

It cost me ten British pounds about 15 months ago (less than most 4GB USB sticks seem to cost now, but then it is nothing more than the memory chip and connectors wrapped in plastic) and it highlights how cheap solid-state “storage” is becoming.

Connected to this, I was looking at buying a new PC this week and this machine comes with 10 USB slots, if you include the ones on the supplied monitor and stubs on the motherboard.
10 USB slots, 8GB Gizzmo memory sticks… That would be 80GB of cheap and fast storage. Now get a few USB hubs and bulk-buy a few dozen cheap USB2 sticks and you could soon have a solid-state database of a few hundred GB for a thousand pounds. Then of course you can have fun seeing where the pinch-points in the system are (USB2 has a maximum speed per port, and going USB3 right now is going to break that 1 grand barrier. But give it a year…).

This really started me thinking about when memory-based storage will take over from spinning disk as the best option for enterprise-level storage, and my gut feeling is in about 5 years. I think it will be both technically possible and financially viable in much less than that, perhaps as little as 2 years; by then the cost of solid-state storage per MB will still be higher than disk, but it will potentially be much faster. A few considerations going through my mind were:-

  • Disk is getting a lot slower in relation to acreage. By this I mean that, for a single disk drive, capacity is doubling about every 18 months but seek time has hardly reduced in a decade, and transfer rate (reading from the physical platters to the unit’s buffer) is again almost stationary, at about 120MB/s for a 10,000rpm disk and up towards 180MB/s for those very expensive and noisy 15,000rpm disks. Being a tad ridiculous to make the point, with modern 3TB disks you could build most Oracle databases on one disk. Let’s make it two in a RAID 10 configuration for redundancy. My point is, your 3TB database could well be running right now, for real, across say 5 physical disks with a total sustainable physical throughput of around 500MB a second.
  • Solid state storage seems to be halving in price in more like 8-10 months.
  • IO subsystems are made faster by using RAID so that several physical disks can contribute to get towards the 300MB/s or so speed of the interface – but solid state is already that fast.
  • IO subsystems are made faster by building big caches into them and pre-fetching data that “might” be requested next. Oh, that is kind of solid state storage already.
  • Solid state storage, at least the cheap stuff in your USB stick, has the problem that you can only write to each bit a thousand or so times before it starts to get unreliable. But physical disk has exactly the same issue.
  • There are new methods of solid-state memory storage coming along – “New Scientist” had a nice article on it a few months ago, and these versions will be even higher density and more long-term reliable.
  • Seek time on solid-state memory is virtually zero, so random IO is going to be particularly fast compared to spinning disk.
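The throughput arithmetic behind the first point above can be sketched as a quick back-of-the-envelope calculation. This is a minimal illustration using the same rough 2010 ballpark figures quoted in the post (100-odd MB/s per spindle, 5 disks, a 3TB database) – not measured values:

```python
# Back-of-envelope: for spinning disk, capacity grows much faster than
# throughput, so a big database on few spindles is bandwidth-starved.

def array_throughput_mb_s(spindles, per_disk_mb_s):
    """Best-case sustained sequential throughput of a striped set of disks."""
    return spindles * per_disk_mb_s

# A 3TB database spread over 5 physical 10,000rpm disks:
total = array_throughput_mb_s(spindles=5, per_disk_mb_s=100)
print(f"{total} MB/s sustained")  # 500 MB/s, the figure from the post

# Time just to read the whole 3TB database once at that rate:
db_gb = 3000
scan_seconds = db_gb * 1024 / total
print(f"full scan: {scan_seconds / 60:.0f} minutes")  # roughly 102 minutes
```

Double the capacity without adding spindles and that scan time doubles too, which is the sense in which disk is “getting slower in relation to acreage”.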

Solid state memory needs less power, and thus less cooling, is silent, is potentially denser and is less vulnerable to temperature and humidity fluctuations. I can see it not needing to be kept in a specialist server room with the need for all that air con and ear defenders when you go in the room.
Just somewhere with normal air con and a lock on the door should suffice.
We do not need solid-state storage to match the size of current disks, or even be as cheap, to take over. As I have already pointed out, it is not acreage you need with physical disks but enough spindles and caches to make it fast enough in relation to the space. Further, we can afford to pay more for solid state if we do not need to keep it in such expensive clean-room-like environments.

I can see that in a couple of years, for a given computer system, say a mixed-workload order processing system, to support the storage needs we will have maybe a dozen solid-state chunks of storage, perhaps themselves consisting of several small units of memory in some sort of RAID for resilience, all able to flood the IO channels into our processing server. The issue will then be getting the network and IO channels into the server to go fast enough. So don’t: stick all the storage directly into the server. You just got rid of half your SAN considerations.

I’m going to stop there. Partly because I have run out of time and partly because, in checking out what I am writing, I’ve just spotted someone did a better job of this before me. Over to James Morle who did a fantastic post on this very topic back in May. Stupid me for not checking out his blog more often. James also mentions that often it is not total throughput you are interested in at all but IOPS. That zero latency of solid-state memory is going to be great for supporting very high IOPS.



1. Noons - July 12, 2010

Not at all redundant, Martin! The more we talk about it and thrash it out now, the less of a disruptive influence it’ll have later on.
Thanks for another thought provoking post on the whole subject of persistent storage.

mwidlake - July 14, 2010

Thanks Noons

2. Neil Chandler - July 12, 2010


As James says, IOPS are everything. When “ordering” disk from the SAN boys, specifying space required is rarely the most appropriate metric. You need to specify all the metrics that you need to satisfy, and the most important is IOPS. You normally find that when asking for 100,000 sustained IOPS, the SAN boys get all coy about being able to supply it (certainly in a guaranteed form), and indignant about the amount of space that entails on their enormous spindles. Simply ask for 2TB and you might as well shop at ebuyer. The HBAs get saturated surprisingly quickly (at much less than their nominal maximum MB/s throughput) with lots of small blocks too.



mwidlake - July 14, 2010

I would not say IOPS are everything {mostly as I said nothing about them initially so I have to justify my woeful omission 🙂 } but you and James are right that they are very, very significant. OK, your storage subsystem has a massive cache, reads ahead very efficiently, has a couple of dozen spindles, reads a couple of MB from a spindle at a time and can throw 4GB a second of data at your box in a burst of eager frenzy. But if that is done with only 50 IOPS per spindle, when you are reading single 4KB blocks via an index lookup, you are going to struggle to get 5MB of information from your storage a second (4KB * 24 spindles * 50 IOPS). Suddenly you are in the real world of physical storage performance and it is not so good.
You need a lot of spindles to get sustained high IOPS and you simply can’t buy small disks anymore, so there will be lots of “wasted” space. And on to one of my favourite rants about storage – if there is spare space, some idiot will go and use it!!! And that someone is now nicking your IOPS.
You need a very high buffer cache hit ratio to avoid this IOPS situation. Aim for 99% {ducks from certain well-known anti-ratio experts :-) }
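The random-read arithmetic above can be made concrete with a tiny sketch, using the same figures as the comment (4KB single-block reads, 24 spindles, about 50 IOPS per spindle):

```python
# Random single-block reads are IOPS-bound, not bandwidth-bound:
# throughput = spindles * IOPS-per-spindle * block size.

def random_read_mb_s(spindles, iops_per_spindle, block_kb):
    """Throughput when every read is a single random block of block_kb KB."""
    return spindles * iops_per_spindle * block_kb / 1024

mb_s = random_read_mb_s(spindles=24, iops_per_spindle=50, block_kb=4)
print(f"{mb_s:.1f} MB/s")  # 4.7 MB/s - a far cry from the 4GB/s burst figure
```

The same array that can burst 4GB a second from cache delivers well under 5MB a second of useful data once every read is a random 4KB index lookup, which is exactly why sizing storage by space rather than IOPS goes wrong.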

3. Graham Oakes - July 14, 2010

Hi Martin,

we have a few databases that use SSD (mainly Fusion IO cards). My personal experience was pretty positive, Doug’s wasn’t. I’m sure he’ll fill you in on the details if you ask him, but it seems like if you have a database that requires the IOPS that SSD can provide, you’re likely to run into all kinds of weird and wonderful bugs / features of Oracle that aren’t usually found.

The biggest drawbacks / limitations I found with the Fusion IO cards were the firmware upgrades and the limitation on the number of slots on the host you could plug the cards into (not really a limitation of the SSD).

At the moment it’s very much a tailored solution, but as you say prices are coming down. Also factor in to the equation that for large organisations the price of SAN will be more expensive than the ‘off the shelf’ price as it will need to go through various engineering hoops before being made available to the business. 5 years actually seems a little conservative to me.

mwidlake - July 14, 2010

Hi Graham,

I was involved in trying an SSD about 6 years ago, providing about 16GB of supposedly super-fast IO. Our storage guys had been working on it for two or three weeks and passed it on to my DBA team as a lost cause – they were getting wildly varying results. Turns out the vendor had sent it with duff drivers and once I got the latest ones off them it worked well. We tried it for online redo logs and a key index tablespace and it sped things up. But by then, we did not trust it. So, it is sad you hit the same issues with more modern equipment.

The issue of starting to hit bugs and gremlins in the Oracle DB when you get things to work fast is a very, very interesting one. Breaking Oracle when you reach the edges of what it can do has been the bane of my working life. I think that is a challenge to our replacing physical storage with memory-based storage.

I’m not sure if you think solid state storage will take over in more or less than 5 years?!

Graham Oakes - July 21, 2010

Taking over?… I’m not sure, as there are an awful lot of systems that just don’t require great performance and never will, but I do think it will become much more widespread in the next 2 years, to the point where it will become a default choice for anything that does require fast IO.

mwidlake - July 21, 2010

NO! You don’t understand, every system needs top performance and tuning. My continuing employment depends on this 🙂

It is a valid point, Graham: you would not spend more money on memory storage if you do not need to, but I think in 5 years’ time the solid-state option could be cost-equal to disk (if manufacturers do not collude to keep the price high). If there are no unresolved issues (like the number of times you can safely rewrite) then I think the advantages of noise, format and power will mean that if you are buying hardware you would not buy physical disks.

After all, how many people have bought CRTs in the last 5 years? Maybe a few people for whom colour reproduction is paramount, and I think LCDs have got good enough for that niche too now.

4. More Memory Meanderings – IOPS and Form Factors « Martin Widlake's Yet Another Oracle Blog - July 19, 2010

[…] Tags: Architecture, system development trackback I had a few comments when I posted on solid state memory last week and I also had a couple of interesting email discussions with […]
