
Friday Philosophy – Tosh Talked About Technology February 17, 2012

Posted by mwidlake in Friday Philosophy, future, Hardware, rant.
9 comments

Sometimes I can become slightly annoyed by the silly way the media puts out total tosh and twaddle(*) that over-states the impact of, or drawbacks to, technology (and science (and especially medicine (and pretty much anything the media decides to talk about))). Occasionally I get very vexed indeed.

My attention was drawn to some such thing about SSDs (Solid State Discs) via a tweet by Gwen Shapira yesterday {I make no statement about her opinion on this in any way, I’m just thanking her for the tweet}. According to Computerworld

SSDs have a ‘bleak’ future, researchers say

So are SSDs somehow going to stop working or no longer be useful? No, absolutely not. Are SSDs not actually going to be more and more significant in computing over the next decade or so? No, they are and will continue to have a massive impact. What this is, is a case of a stupidly exaggerated title over not a lot. {I’m ignoring the fact that SSDs can’t have any sort of emotional future as they are not sentient and cannot perceive – the title should be something like “the future usefulness of SSDs looks bleak”}.

What the article is talking about is a reasonable little paper about how, if NAND-based SSDs continue to use smaller die sizes, error rates could increase and access times lengthen. That is, if the same technology is used in the same way and manufacturers continue to shrink die sizes. It’s something the memory technologists need to know about and perhaps find fixes for. Nothing more, nothing less.

The key argument is that by 2024 we will be using something like 6.4nm dies and at that size the physics of it all means everything becomes a little more flaky. After all, silicon atoms are around 0.28nm wide (most atoms of things solid at room temperature are between 0.2nm and 0.5nm wide), so at that die size we are building structures only an order of magnitude or so bigger than the atoms themselves. We have all heard of quantum effects and tunnelling, which mean that at such scales and below odd things can happen. So error correction becomes more significant.

But taking a reality check, is this really an issue:

  • I look at my now 4-year-old 8GB micro-USB stick (90nm die?) and it is 2*12*30mm, including packaging. The 1TB disc on my desk next to it is 24*98*145mm. I can get about 470 of those chips in the same space as the disc, so that’s 3.8TB based on now-old technology {the quick sums after this list show where those figures come from}.
  • Even if the NAND materials stay the same and the SSD layout stays the same and the packaging design stays the same, we can expect about 10-50 times the current density before we hit any problems.
  • The alternative of spinning platters of metal oxides is pretty much a stagnant technology now; the seek time and per-spindle data transfer rate are hardly changing. We’ve even exceeded the interface bottleneck that was kind-of hiding the non-progress of spinning disk technology.
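Just to show the back-of-envelope sums behind that first bullet, here is a quick check you can run in SQL*Plus. The dimensions are the ones quoted above and the 8GB-per-chip figure is my old USB stick, so treat this as a rough sketch, nothing rigorous.

    -- Rough sums: sizes in mm, capacities in decimal GB/TB.
    SELECT ROUND((24*98*145) / (2*12*30))               AS chips_in_disc_volume,
           ROUND((24*98*145) / (2*12*30) * 8 / 1000, 1) AS capacity_tb
    FROM   dual;
    -- => about 470 chips and roughly 3.8TB, which is where the figures above come from.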

The future of SSD technology is not bleak. There are some interesting challenges ahead, but things are certainly going to continue to improve in SSD technology between now and when I hang up my keyboard. I’m particularly interested to see how the technologists can improve write times and overall throughput to something closer to SDRAM speeds.

I’m willing to lay bets that a major change is going to be in form factor, for both processing chips and memory-based storage. We don’t need smaller dies, we need lower power consumption and a way to stack the silicon slices and package them (for processing chips we also need a way to make thousands of connections between the silicon slices too). What might also work is simply wider chips, though that scales less well. What we see as chips on a circuit board is mostly the plastic wrapper. If part of that plastic wrapper was either a porous honeycomb that air could move through or a heat-conducting strip, the current technology used for SSD storage could be stacked into blocks of storage, rather than the in-effect 2D sheets we have at present.

What could really be a cause of technical issues? The bl00dy journalists and marketing. Look at digital cameras. Do you really need 12, 16 mega-pixels in your compact point-and-shoot camera? No, you don’t, you really don’t, as the optics on the thing are probably not up to the level of clarity those megapixels can theoretically give you, the lens is almost certainly not clean any more and, most significantly, the chip is using smaller and smaller areas to collect photons (the sensor is not getting bigger with more mega-pixels you know – though the sensor size is larger in proper digital SLRs, which is a large part of why they are better). Fewer photons per pixel means less sensitivity and more artefacts. What we really need is maybe to stay at 8MP and improve light sensitivity. But the mega-pixel count is what is used to market the camera at you and me. As a result, most people go for the higher figures and buy something technically worse, so we are all sold something worse. No one really makes domestic-market cameras where the mega-pixel count stays put and the rest of the camera improves.

And don’t forget. IT procurement managers are just like us idiots buying compact cameras.

(*) For any readers where UK English is not a first language, “twaddle” and “tosh” both mean statements or arguments that are silly, wrong, pointless or just asinine. Oh, and asinine means talking like an ass :-) {and I mean the four-legged animal, not one’s bottom, Mr Brooks}

Will the Single Box System make a Comeback? December 8, 2011

Posted by mwidlake in Architecture, future, Hardware.
15 comments

For about 12 months now I’ve been saying to people(*) that I think the single box server is going to make a comeback and nearly all businesses won’t need the awful complexity that comes with the current clustered/exadata/RAC/SAN solutions.

Now, this blog post is more a line-in-the-sand than a well-researched or even thought-out white paper – so forgive me the obvious mistakes that everyone makes when they draft an argument for the first time and before they check their basic facts; it’s the principle that I want to lay down.

I think we should be able to build incredibly powerful machines based on PC-type components, machines capable of satisfying the database server requirements of anything but the most demanding or unusual business systems. And possibly even them. Heck, I’ve helped build a few pretty serious systems where the CPU, memory and inter-box communication is PC-like already. If you take the storage component out of needing to be centralised (and thus shared), I think a major change is just over the horizon.

At one of his talks at the UKOUG conference this year, Julian Dyke showed a few tables of CPU performance, based on a very simple PL/SQL loop test he has been using for a couple of years now. The current winner is 8 seconds by a… Core i7 2600K, i.e. a PC chip, and one that is popular with gamers. It has 4 cores and runs two threads per core at 3.4GHz, and can boost a single core to 3.8GHz. These modern chips are very powerful. However, chips are no longer getting faster so much as wider – more cores. More ability to do lots of the same thing at the same speed.
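I don’t have Julian’s actual script, but the sort of test he describes is just a tight, CPU-bound PL/SQL loop, something like the sketch below {the iteration count and the sums inside the loop are my own guesses, so the timings will not match his table}.

    SET TIMING ON
    DECLARE
      v_num NUMBER := 0;
    BEGIN
      -- No SQL, no IO, just PL/SQL number crunching on one core.
      FOR i IN 1 .. 100000000 LOOP
        v_num := v_num + i * 2;
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('Result: ' || v_num);
    END;
    /

Run it a few times and take the best elapsed time; it only exercises a single thread, which is rather the point of the comparison.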

Memory prices continue to tumble, especially with smart devices and SSD demands pushing up the production of memory of all types. Memory has fairly low energy demands so you can shove a lot of it in one box.

Another bit of key hardware for gamers is the graphics card – if you buy a top-of-the-range graphics card for a PC that is a couple of years old, the graphics card probably has more pure compute grunt than your CPU, and a massive amount of data is pushed to and fro across the PCIe interface. I was saying something about this to some friends a couple of days ago, but James Morle brought it back to mind when he tweeted about this attempt at a standard for using PCIe for SSD. A PCIe 16X interface has a theoretical throughput of 4000MB per second – each way. This compares to 600MB per second for SATA III, which is enough for a modern SSD. A single modern SSD. {What I am not aware of is the latency for PCIe, but I’d be surprised if it was not pretty low.} I think you can see where I am going here.

Gamers and image editors have probably been most responsible for pushing along this increase in performance and intra-system communication.

SSD storage is being produced in packages with a form factor and interface to enable an easy swap into the place of spinning rust – for example a SATA3 interface and a 3.5inch hard disk chassis shape. There is no reason that SSD (or other memory-based) storage cannot be manufactured in all sorts of different form factors; there is no physical constraint of having to house a spinning disc. Density per dollar of course keeps heading towards the basement. TB units will soon be here, but maybe we need cheap 256GB units more than anything. So storage is going to be compact and able to come in form factors like long, thin slabs or even odd shapes.

So when will we start to see cheap machines something like this: four sockets for 8/16/32 core CPUs, 128GB main memory (which will soon be pretty standard for servers), memory-based storage units that clip to the external housing (to provide all the heat loss they require) and combine many chips to give 1Gb IO rates, interfaced via the PCIe 16X or 32X interface. You don’t need an HBA; your storage is internal. You will have multipath 10GbE going in and out of the box to allow for normal network connectivity and backup, plus remote access of local files if need be.

That should be enough CPU, memory and IO capacity for most business systems {though that quote from the 1960s about how many companies could possibly need a computer springs to mind}. You don’t need shared storage for this; in fact I am of the opinion that shared storage is a royal pain in the behind, as you are constantly having to deal with the complexity of shared access and maximising contention on the flimsy excuse of “sweating your assets”. And paying for the benefit of that overly complex, shared, contended solution.

You don’t need a cluster as you have all the CPU, working memory and storage you need in a 1U server. “What about resilience, what if you have a failure?” Well, I am swinging my opinion on RAC back to where I was in 2002 – it is so damned complex it causes way more outage than it saves. Especially when it comes to upgrades. Talking to my fellow DBA types, the pain of migration and the number of bugs that come and go from version to version, plus the mix of CRS, RDBMS and ASM versions, are taking up massive amounts of their time. Dataguard is way simpler and I am willing to bet that for 99.9% of businesses, other IT factors cause costly system outages an order of magnitude more often than the difference between what a good MAA Dataguard solution can provide and what a good stretched RAC one can.

I think we are already almost at the point where most “big” systems that use SAN or similar storage don’t need to be big. If you need hundreds of systems, you can virtualize them onto a small number of “everything local” boxes.

A reason I can see it not happening is cost. The solution would just be too cheap; hardware suppliers will resist it because, hell, how can you charge hundreds of thousands of USD for what is in effect a PC on steroids? But desktop games machines will soon have everything 99% of business systems need except component redundancy and, if your backups are on fast SSD and you use a way simpler Active/Passive/MAA Dataguard-type configuration (or the equivalent for your RDBMS technology) rather than RAC and clustering, you don’t need that total redundancy. Dual power supplies and a spare chunk of solid state you can swap in for a failed RAID 10 element are enough.

Friday Philosophy – Oracle Performance Silver Bullet August 5, 2011

Posted by mwidlake in Architecture, Friday Philosophy, performance.
15 comments

Silver Cartridge and Bullet

For as long as I have been working with Oracle technology {which is now getting towards 2 decades, and isn’t that a pause for thought} there has been a constant search for Performance Silver Bullets – some trick or change or special init.ora parameter {alter system set go_faster_flag=’Y’} you can set to give you a guaranteed boost in performance. For all that time there has been only one.

There are a few performance Bronze Bullets…maybe Copper Bullets. The problem is, though, that the Oracle database is a complex piece of software and what is good for one situation is terrible for another. Often this is not even a case of “good 90% of the time, indifferent 9% of the time and tragic 1% of the time”. Usually it is more like 50%:30%:20%.

Cartridge with copper bullet & spent round

I’ve just been unfair to Oracle software actually; a lot of the problem is not with the complexity of Oracle, it is with the complexity of what you are doing with Oracle. There are the two extremes of OnLine Transaction Processing (lots of short-running, concurrent, simple transactions that many users want to run very quickly) and Data Warehousing, where a small number of users want to process a vast amount of data. You may well want to set certain initialisation parameters to favour quick response time (OLTP) or fastest processing time to completion (DW). Favouring one usually means a negative impact on the other. Many systems have both requirements in one… In between those there are the dozens and dozens of special cases and extremes that I have seen, and I am just one guy. People get their database applications to do some weird stuff.

Partitioning is a bronze bullet. For many systems, partitioning the biggest tables makes them easier to manage, allows some queries to run faster and aids parallel activity. But sometimes (more often than you might think) partitioning can drop rather than increase query or DML performance. In earlier versions of Oracle, setting optimizer_index_caching and optimizer_index_cost_adj was often beneficial, and in Oracle 9/8/7 setting db_file_multiblock_read_count “higher” was good for DWs… Go back to Oracle 7 and doing stuff to increase the buffer cache hit ratio towards 98% was generally good {and I will not respond to any comments citing Connor’s magnificent “choose your BCHR and I’ll achieve it” script}.
You know what? There was an old trick in Oracle 7 you could maybe still look at as a bronze bullet. Put your online redo logs and key index tablespaces on the fastest storage you have and split your indexes/tables/partitions across the faster/slower storage as best fits. Is all your storage the same speed? Go buy some SSD and now it isn’t…
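For what it’s worth, those old CBO nudges were usually tried at session level long before anyone dared touch the whole instance. The values below are purely illustrative {they are not recommendations, and the “right” numbers, if there are any, are very system-specific}.

    -- Illustrative values only - try at session level long before thinking about ALTER SYSTEM.
    ALTER SESSION SET optimizer_index_caching  = 90;
    ALTER SESSION SET optimizer_index_cost_adj = 25;
    -- And the old data warehouse favourite:
    ALTER SESSION SET db_file_multiblock_read_count = 128;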

Cartridge with Wooden Bullet

Then there are bronze bullets that you can use that very often improve performance but where the impact can be catastrophic {let’s call them wooden bullets :-) }. Like running your database in noarchivelog mode. That can speed up a lot of things, but if you find yourself in the situation of needing to do a recovery and your last cold backup is not recent enough – catastrophe. A less serious but more common version of this is doing things nologging. “Oh, we can just re-do that after a recovery”. Have you done a test recovery that involved that “oh, we can just do it” step? And will you remember it when you have a real recovery situation and the pressure is on? Once you have one of these steps, you often end up with many of them. Will you remember them all?
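A typical nologging pattern looks something like this {the table and source names are made up, purely to illustrate}: a direct-path load into a NOLOGGING table generates minimal redo, which is exactly why media recovery cannot bring those rows back.

    -- Made-up names, purely to illustrate the pattern.
    ALTER TABLE sales_staging NOLOGGING;
    INSERT /*+ APPEND */ INTO sales_staging
    SELECT * FROM sales_feed_ext;
    COMMIT;
    -- Any recovery that rolls forward past this load cannot reproduce these rows from
    -- the redo stream - you have to remember to re-run the load (and take a fresh backup).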

How many of you have looked at ALTER SYSTEM SET COMMIT_WRITE=’BATCH,NOWAIT’? It could speed up response times and general performance on your busy OLTP system. And lose you data on crash recovery. Don’t even think about using this one unless you have read up on the feature, tested it, tested it again and then sat and worried for a good while about what could possibly go wrong.
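If you must experiment with it, do so at session level on a test system first. Note that in 11g the same behaviour is split across two parameters, COMMIT_LOGGING and COMMIT_WAIT, with COMMIT_WRITE itself deprecated {check the docs for your exact version}.

    -- Session level, on a test system, and only once you understand the risk.
    ALTER SESSION SET commit_write = 'BATCH,NOWAIT';
    -- The 11g equivalent, split into two parameters:
    ALTER SESSION SET commit_logging = 'BATCH';
    ALTER SESSION SET commit_wait    = 'NOWAIT';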

That last point is maybe at the core of all these Performance Bronze Bullets. Each of these things may or may not work but you have to understand why and you have to understand what the payback is. What could now take longer or what functionality have I now lost? {hint, it is often recovery or scalability}.

So, what was that one Silver Bullet I tantalizingly left hanging out for all you people to wait for? You are not going to like this…

Look at what your application is doing and look at the very best that your hardware can do. Do you want 10,000 IOPS and your storage consists of fewer than 56 spindles? Forget it, your hardware cannot do it, no matter what you tune or tweak or fiddle with {the rough sums after the list below show where that spindle count comes from}. The one and only Performance Silver Bullet is to look at your system and your hardware configuration and work out what is being asked and what can possibly be delivered. Now you can look at:

  • What is being asked of it. Do you need to do all of that? (That might involve turning some functionality off, if it is a massive drain and does very little to support your business.)
  • Are you doing stuff that really is not needed, like management reports that no one has looked at in the last 12 months?
  • Is your system doing a heck of a lot to achieve a remarkably small amount? Like several hundred buffer gets for a single indexed row? That could be a failure to do partition exclusion.
  • Could you do something with physical data positioning to speed things up, like my current blogging obsession with IOTs?
  • You can also look at what part of your hardware is slowing things down. Usually it is spindle count/RAID level, i.e. something dropping your IOPS. Ignore all sales blurb from vendors and do some real-world tests that match what your app does or wants to do.
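As a rough illustration of where that 56-spindle figure comes from {assuming something like 180 random IOPS per 15K rpm spindle, which is a ballpark assumption, not a vendor spec}:

    -- Ballpark only: ~180 random IOPS per 15K rpm spindle is an assumption, not a spec.
    SELECT CEIL(10000 / 180) AS spindles_needed FROM dual;
    -- => 56; whatever per-spindle figure you use, the hardware sets a ceiling
    -- that no amount of tuning will lift.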

It’s hard work but it is possibly the only Silver Bullet out there. Time to roll up our sleeves and get cracking…

{Many Thanks to Kevin Closson for providing all the pictures – except the Silver Bullet, which he only went and identified in his comment!}

Fastest £1,000 Server – back from supplier July 23, 2011

Posted by mwidlake in One Grand Server.
6 comments

At the risk of turning my blog into some sort of half-way-house tweet update thing (correct, I’ve never logged into Twitter), as a couple of people asked about the outcome with the broken £1,000 server, I’m happy to report it came back this week. The motherboard had died. I’d convinced myself it was the PSU when I trawled the net, as it seems to be one of those things most likely to die having fired up in the first place, but no, it was the motherboard. I guess some solder “dried” or the PC pixies just don’t like me. A one-month turnaround is not very impressive…

They had another motherboard exactly the same in stock so I got a like-for-like swap. I was kind of hoping for a different one with more SATA3 and USB3 headers :-)

Now I’m trying to download the latest Oracle 11 for 64-bit Windows. I live out in the wilds of North Essex (for non-UK people, this is all of 62 kilometres north-northeast of London as the crow flies, so not exactly in an obscure and remote part of the UK! For those who DO know the UK, it is nothing like “The Only Way is Essex” out here. We have trees, fields, wildlife and a lack of youth culture.) As such, my broadband connection is sloooow. The connection keeps breaking and I lose the download. *tsch*. I’m sure I had a download manager somewhere which got around these issues…

Fastest £1,000 server – what happened? July 12, 2011

Posted by mwidlake in One Grand Server, performance.
7 comments

A couple of people have asked me recently what happened to that “fastest Oracle server for a grand” idea I had last year; after all, I did announce I had bought the machine.

{Update – it came back.}
Well, a couple of things happened. Firstly, what was a small job for a client turned into a much more demanding job for a client – not so much mentally harder as time-consuming harder, and very time-consuming it was. So the playing had to go on hold; the client comes first. The server sat in the corner of the study, nagging me to play with it, but it remained powered down.
Secondly, when work life quietened down last month and I decided to spend a weekend getting that server set up, I hit an issue. I turned on the server and it turned itself straight off. It then rested for 5 seconds and turned itself back on for half a second – and then straight off. It would cycle like that for as long as I was willing to let it.

OK, duff power switch, motherboard fault, something not plugged in right, PSU not reaching stable voltage… I opened the case and checked everything was plugged in OK, and found the manufacturer had covered everything with that soft resin to hold things in place. I pressed on all the cards etc in hope, but no, it was probably going to have to go back. It is still in warranty; the manufacturer can fix it.

So I rang the manufacturer and had the conversation. They were not willing to try and diagnose over the phone, so I had to agree to ship it back to them to be fixed {I did not go for on-site support as the only time I did, with Evesham Micros, they utterly refused to come out to fix the problem. Mind you, it turns out they were counting down the last week or two before going bust and, I suspect, knew this}. I shipped it back and the waiting began. Emails ignored, hard to get in touch over the phone. Over three weeks on and they only started looking at the machine last Friday (they claim).

On the positive side, this delay means that solid state storage is becoming very affordable and I might be able to do some more interesting things within my budget.
On the bad side, the technology has moved on and I could get a better server for the same money now, but that is always the case. Mine does not have the latest Sandy Bridge Intel processor, for example. Also, I have time now to work on it; I hope not to have time next month as I’d like to find some clients to employ me for a bit!

I better go chase the manufacturer. If it is not fixed and on its way back very, very soon then they will be off my list of suppliers and I’ll be letting everyone know how good their support isn’t.
