
Friday Philosophy – Tosh Talked About Technology February 17, 2012

Posted by mwidlake in Friday Philosophy, future, Hardware, rant.

Sometimes I can become slightly annoyed by the silly way the media puts out total tosh and twaddle(*) that overstates the impact or drawbacks of technology (and science (and especially medicine (and pretty much anything else the media decides to talk about))). Occasionally I get very vexed indeed.

My attention was drawn to one such thing about SSDs (Solid State Discs) via a tweet by Gwen Shapira yesterday {I make no statement about her opinion on this in any way, I’m just thanking her for the tweet}. According to Computerworld:

SSDs have a ‘bleak’ future, researchers say

So are SSDs somehow going to stop working or no longer be useful? No, absolutely not. Are SSDs not going to become more and more significant in computing over the next decade or so? No, they are, and they will continue to have a massive impact. What this is, is a case of a stupidly exaggerated title over not very much. {I’m ignoring the fact that SSDs can’t have any sort of emotional future, as they are not sentient and cannot perceive – the title should be something like “the future usefulness of SSDs looks bleak”}.

What the article is talking about is a reasonable little paper about how, if NAND-based SSDs continue to shrink to smaller feature sizes, error rates could increase and access times lengthen. That is, if the same technology is used in the same way and manufacturers keep shrinking the silicon geometry. It’s something the memory technologists need to know about and perhaps find fixes for. Nothing more, nothing less.

The key argument is that by 2024 we will be using something like 6.4nm feature sizes and, at that size, the physics of it all means everything becomes a little more flaky. After all, silicon atoms are around 0.28nm wide (most atoms of things that are solid at room temperature are between 0.2nm and 0.5nm wide), so at that scale we are building structures only an order of magnitude or so larger than the atoms themselves. We have all heard of quantum effects and tunnelling, which mean that at such scales and below odd things can happen. So error correction becomes more significant.

But taking a reality check, is this really an issue?

  • I look at my now 4-year-old 8GB micro-USB stick (a 90nm process?) and it is 2*12*30mm, including packaging. The 1TB disc on my desk next to it is 24*98*145mm. I can get about 470 of those chips in the same space as the disc, so that’s roughly 3.8TB based on now-old technology (see the quick sanity check after this list).
  • Even if the NAND materials stay the same, the SSD layout stays the same and the packaging design stays the same, we can expect about 10-50 times the current density before we hit any problems.
  • The alternative of spinning platters of metal oxides is pretty much a stagnant technology now; seek times and per-spindle data transfer rates are hardly changing. We’ve even got past the interface bottleneck that was kind-of hiding the non-progress of spinning disk technology.
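
The bullet-point arithmetic is easy to check. A minimal sketch, using only the dimensions quoted above (and treating everything as rough figures, which they are):

```python
# Back-of-envelope check of the stick-vs-disc arithmetic above.
# All dimensions in mm, taken from the post; the stick holds 8GB.
stick_vol = 2 * 12 * 30     # old 8GB micro-USB stick, packaging included
disc_vol = 24 * 98 * 145    # 1TB 3.5" hard disc

sticks_per_disc = disc_vol // stick_vol    # how many sticks fit in the volume
capacity_tb = sticks_per_disc * 8 / 1000   # 8GB each, decimal TB

print(f"{sticks_per_disc} sticks -> ~{capacity_tb:.1f}TB in the same space")
# 473 sticks -> ~3.8TB, matching the ~470 chips / 3.8TB in the text
```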

The future of SSD technology is not bleak. There are some interesting challenges ahead, but things are certainly going to continue to improve in SSD technology between now and when I hang up my keyboard. I’m particularly interested to see how the technologists can improve write times and overall throughput to something closer to SDRAM speeds.

I’m willing to lay bets that a major change is going to be in form factor, for both processing chips and memory-based storage. We don’t need smaller dies, we need lower power consumption and a way to stack the silicon slices and package them (for processing chips we also need a way to make thousands of connections between the silicon slices too). What might also work is simply wider chips, though that scales less well. What we see as chips on a circuit board is mostly the plastic wrapper. If part of that plastic wrapper was either a porous honeycomb that air could move through, or a heat-conducting strip, the current technology used for SSD storage could be stacked into blocks of storage, rather than the in-effect 2D sheets we have at present.

What could really be a cause of technical issues? The bl00dy journalists and marketing. Look at digital cameras. Do you really need 12 or 16 megapixels in your compact point-and-shoot camera? No, you don’t, you really don’t. The optics on the thing are probably not up to the level of clarity those megapixels can theoretically give you, the lens is almost certainly not clean any more and, most significantly, the chip is using smaller and smaller areas to collect photons (the sensor is not getting bigger along with the megapixel count, you know – though the sensor is larger in proper digital SLRs, which is a large part of why they are better). Fewer photons per pixel means less sensitivity and more artefacts – the sketch below puts rough numbers on it. What we really need is to stay at maybe 8MP and gain more light sensitivity. But the megapixel count is what is used to market the camera at you and me. As a result, most people go for the higher figures and buy something technically worse, so we are all sold something worse. No one really makes domestic-market cameras where the megapixel count stays put and the rest of the camera improves.
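
A minimal sketch of the shrinking-photon-buckets point, assuming a typical 1/2.3-inch compact sensor of about 6.2mm by 4.6mm (my assumption for illustration – the post doesn’t quote a sensor size) and square pixels:

```python
import math

# How much light-collecting area each pixel gets on a small compact-camera
# sensor. The sensor size (~6.2mm x 4.6mm, a typical 1/2.3" chip) is an
# assumption for illustration, not a figure from the post.
sensor_w_mm, sensor_h_mm = 6.2, 4.6
sensor_area_um2 = sensor_w_mm * sensor_h_mm * 1e6   # mm^2 -> um^2

for megapixels in (8, 16):
    pixel_area = sensor_area_um2 / (megapixels * 1e6)
    pitch = math.sqrt(pixel_area)                   # side of a square pixel
    print(f"{megapixels}MP: ~{pitch:.2f}um pitch, ~{pixel_area:.2f}um^2 per pixel")

# 8MP: ~1.89um pitch; 16MP: ~1.34um pitch.
# Doubling the megapixel count halves each pixel's photon-collecting area.
```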

And don’t forget: IT procurement managers are just like us idiots buying compact cameras.

(*) For any readers for whom UK English is not a first language, “twaddle” and “tosh” both mean statements or arguments that are silly, wrong, pointless or just asinine. Oh, and “asinine” means talking like an ass :-) {and I mean the four-legged animal, not one’s bottom, Mr Brooks}

The Most Brilliant Science Graphic I Have Ever Seen January 5, 2012

Posted by mwidlake in biology, Perceptions.

The link below takes you to an absolutely fantastic interactive demonstration of the relative size of everything. Everything. Stop reading this and go look at it; when it finishes loading, move the blue blob at the bottom of the screen left and right.

The Relative_scale_of_everything

The raw web link is:

http://www.primaxstudio.com/stuff/scale_of_universe/scale-of-universe-v1.swf

The web page says scale_of_the_universe but it should be relative_scale_of_everything_in_the_universe. Did you go look at it? NO!?! If it’s because you have seen it before then fair enough – otherwise stop reading this stupid blog and Look At It! NOW! GO ON!!!

Yes, I do think it is good.

I have to thank Neil Chandler for his tweet about this web page, which led me to look at it. Neil and I talked about the relative sizes of things in the pub towards the end of last year, in one of the Oracle London Beers sessions. I think it was Neil himself who suggested we should convert MB, GB and TB into time to get a real feel for the size of the data we are talking about when we chuck the phrases GB and TB around with abandon. Think of 1KB as a second: a small amount of time for what is now regarded as a small amount of data – this blog post so far is around 1.2KB of letters. Given this scale {the arithmetic is checked in the little script after the list}:

1KB = 1 second. About the time it takes to blink 5, possibly 6 times, as fast as you can.
1MB = Just under 17 minutes. Time enough to cook fish fingers and chips from scratch.
1GB = 11 and a half days. 1KB->1GB is 1 second -> 1.5 weeks.
1TB = Just under 32 years. Yes, from birth to old enough to see your first returning computer fad.
1PB = around 32,000 years. Pretty much the whole span of human culture, bar the oldest cave paintings – and vastly longer than written history, the Phoenicians having come up with their alphabet only around 1150BC.
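
A quick script reproduces those conversions (decimal units, i.e. 1MB = 1,000KB, which is what the figures above line up with):

```python
# The post's scale: 1KB of data = 1 second of time, in decimal units.
scale_seconds = {"1MB": 1e3, "1GB": 1e6, "1TB": 1e9, "1PB": 1e12}

for label, secs in scale_seconds.items():
    print(f"{label}: {secs / 60:,.1f} minutes"
          f" = {secs / 86_400:,.2f} days"
          f" = {secs / (86_400 * 365.25):,.1f} years")

# 1MB ~ 17 minutes, 1GB ~ 11.6 days, 1TB ~ 31.7 years, 1PB ~ 31,700 years
```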

The wonderful thing about the web page this blog post is about is that you can zoom in and out and see the relative sizes of things, step by step, nice and slowly. Like how small our sun is compared to proper big ones, and how the Earth is maybe not quite as small compared to Saturn as you thought. At the other end of the scale, how small an HIV virus is and how it compares to the pits in a CD and the tiniest of transistors on a silicon chip. I’m particularly struck by the size of DNA compared to a human red blood cell, as in how relatively large DNA is. Red blood cells are pretty big cells, and yet all human cells (except, ahem, red blood cells) have 3.2 billion letters of DNA in each and every one of them. That’s some packaging, as cells have a lot of other stuff in there too.
{NB, do remember that the zooming in and out is logarithmic and not linear, so things that are close to each other in the graphic are more different in size than they first appear, especially when the image becomes large and in effect covers a wide part of the screen}
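
Just how remarkable that packaging is drops out of a one-liner. A hedged back-of-envelope, using the post’s 3.2 billion letters and the standard ~0.34nm of length per base pair of the DNA double helix:

```python
# How much DNA, by length, is folded into a single human cell.
base_pairs = 3.2e9    # letters of DNA, figure from the post
rise_nm = 0.34        # length per base pair in the B-DNA double helix
copies = 2            # a diploid cell carries two copies of the genome

length_m = base_pairs * rise_nm * 1e-9 * copies
print(f"~{length_m:.1f} metres of DNA per cell")   # ~2.2 metres

# All folded into a nucleus a few micrometres across, alongside
# everything else the cell keeps in there.
```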

Down at the sub-atomic scale there are a fair number of gaps, where one graphic is pretty much off the scale before the next one resolves from a dot into anything discernible, but that is what it’s like down at that end of things. Besides, it’s so small that it’s hard to “look around”, as there is nothing small enough to look around with (light waves went by several orders of magnitude ago).

My one criticism? It’s a shame Blue Whale did not make it into the show :-)

I actually had flashbacks looking at this web page. I remember, back in the mid-70s I think, going to the cinema. Back then you still had ‘B’ features – a short film, cartoon or something before the main event. I no longer have a clue what the main event was, but the ‘B’ movie fascinated me. I think it started with a boy fishing next to a pond, and it zoomed in to a mosquito on his arm, then into the skin and through the layers of tissue to blood vessels, to a blood cell… you get the idea, eventually down to an atom. Some of the “zooming in”, where it swapped between real footage and animation, was poor, but it was 1970 or so and we knew no better. It then quickly zoomed back out to the boy, then to an aerial view of the field, out to bird’s-eye… satellite-like… the Earth… the solar system… I think it stopped at the Milky Way. I wish I knew what that documentary was called or how to find it on the web…

{Update, see comments. Someone links to the film. I know I looked for this film a few years back and I did have a quick look again before I posted this message. I did not immediately find it but someone else did, in 10 seconds via Google. Shows how rubbish I am at using web searches…}

Friday Philosophy – The start of Computing October 7, 2011

Posted by mwidlake in Friday Philosophy, history.

This week I finally made a visit to Bletchley Park in the middle of England. Sue and I have been meaning to go there for several years. It is the site of the British code-breaking efforts during the Second World War and, despite difficulties getting any funding, there has been a growing museum there for a number of years. {Hopefully, a grant from the Heritage Lottery Fund, awarded only this month, will secure its future}.

Why is Bletchley Park so significant? Well, for us IT-types it is significant because Alan Turing did a lot of work there and it was the home of Colossus, one of the very first electronic, programmable computers. More generally, the efforts and success in cracking enemy ciphers during WW2 were incredibly important and beneficial to the UK and the rest of the Allies.

In this post I am not going to touch on Colossus or Alan Turing, but rather on a machine called the “Bombe”. The Bombe was used to help discover the daily settings of the German Enigma machines, which were used to encrypt nearly all German and Italian radio messages. All the Bombes were destroyed after the war (at least, all the UK ones were) to help keep secret the work done to crack the ciphers – but at Bletchley Park the volunteers have recreated one. Just like the working model of Babbage’s Difference Engine, it looks more like a work of art than a machine. Here is a slightly rough video I took of it in action:

My slightly rough video of the Bombe

{OK, if you want a better video try a clearer video by someone else.}

I had a chat with the gentleman you see in both videos about the machine and he explained something that the tour we had just been on did not make clear – the Bombe is a parallel processing unit. Enigma machines have three wheels. In the Bombe there are banks of three coloured disks (see the picture below); e.g. in the middle bank the top row of disks are black, the middle yellow and the bottom red. Each vertical set of three disks, black-yellow-red, is the equivalent of a single “Enigma machine”. Each trio of disks is set to a different starting position, based on educated guesses as to what the likely start positions for a given message might be. The colour of the disk matches, I think, one of the known sets of wheels the Enigma machines could be set up with. The machine is then set to run the encrypted message through up to 36 “Enigmas” at once. If the output exceeds a certain level of sense (the crucial check here being that no letter is ever encrypted back to itself) then the settings might be correct and are worth further investigation. This machine has been set up with the top set of “Enigmas” not in place, either to demonstrate the workings or because the machine is set up for one of the more complex deciphering attempts where only some of the banks can be used.

This is the Bombe seen from the front
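
That “no letter is ever encrypted back to itself” property is what made rejecting wrong guesses so mechanisable. As a minimal sketch of the principle (made-up ciphertext and crib, and nothing like the Bombe’s actual electro-mechanics), here is the related crib-alignment test the code-breakers used: any position where the guessed plaintext shares a letter with the ciphertext is impossible and can be discarded at once.

```python
# Enigma never encrypts a letter to itself, so a guessed piece of
# plaintext (a "crib") cannot line up with the ciphertext anywhere the
# two share a letter in the same position. The message text is made up.
ciphertext = "QFZWRWIVTYRESXBFOGKUHQBAISE"
crib = "WETTERVORHERSAGE"   # "weather forecast" -- a classic guessed phrase

def possible_alignments(ciphertext: str, crib: str) -> list[int]:
    """Offsets at which the crib could plausibly sit under the ciphertext."""
    hits = []
    for offset in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[offset:offset + len(crib)]
        # Reject the offset if any letter would have encrypted to itself.
        if all(c != p for c, p in zip(window, crib)):
            hits.append(offset)
    return hits

print(possible_alignments(ciphertext, crib))
```

Each alignment that survives yields a “menu” of letter pairings, and it was menus like that which the Bombe’s banks of disks tested against candidate wheel settings, in parallel.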

The reason the chap I was talking to really became fascinated with this machine is that, back in about 1999, a home PC programmed to do this work took no less time than the original electro-mechanical machines from 1944 were supposed to have taken. So, as an engineer, he wanted to help build one and find out why it was so fast. This struck a chord with me because back in the late 1990s I came across several examples of bespoke computers designed to do specific jobs (stuff to do with natural gas calorific values, DNA matching or protein folding), but by 2000-2002 they had all been abandoned, as a general PC could be programmed to be just as fast as these bespoke machines – because bespoke means specialist, which means longer and more costly development, which means less bang for your buck.

Admittedly the Bombe is only doing one task, but it did it incredibly fast, in parallel, as part of the whole deciphering process that some of the best minds of the time had come up with (part of the reason the Bletchley Park site was chosen was that it was equidistant between Oxford and Cambridge and, at that time, had direct train links to both {Thanks, Dr Beeching}).

Tuning and reliability were as important then as they are now. In the picture below of the back of the machine (sorry about the poor quality, it was dim in that room), you can see all the complex wiring in the “door” and, in the back of the machine itself, three rows of bronze “pipes” which are in fact… pipes. Oil pipes. This is a mechanical machine, and they quickly realised that it was worth a lot of effort to keep those disks oiled, both for speed and reliability.

All the workings of the Bombe from the back

Talking of reliability, one other thing my guide told me: these machines are complex and have some ability to cope with failures or errors built into them. But of course, you needed to know they were working properly. When these machines were built and set up, they came with a set of diagnostic tests, designed to push the machine, try the edge cases and be as susceptible to mechanical error as possible. The first thing you did to a new or freshly maintained machine was run your tests.

In 1943 you had awesome parallel processing, incredible speed, test-driven development and regression testing. We almost caught up with all of this in the early 21st century.

Where is Sun? January 3, 2011

Posted by mwidlake in Friday Philosophy, Perceptions.

First of all, may I wish everyone who comes by my blog a heartfelt Happy New Year.
Secondly, I promise I’ll blog more often and more on technical aspects this year than I have for most of 2010.
Thirdly, I’ll admit the title of this post has nothing to do with the hardware company now owned by Mr Larry Ellison, but is about the huge glowing ball of fire in the sky (which we have not seen a lot of here in England and Wales for the last couple of weeks – not sure about Scotland, but I suspect it has been the same). I apologise for the blatantly misleading (and syntactically poor) title.

A quick question for you. It is the depths of winter for most of us, and it has been unusually cold here in the UK and much of Europe. When, during winter, are we as a planet furthest from the Sun? January the 1st? The shortest day (21st December)? The day the evenings start drawing out (December 14th)?
I think many in the Northern Hemisphere will be surprised to learn that we are closest to the sun today (3rd Jan 2011) – a mere 147.104 million kilometres from the centre of our solar system. I mentioned this to a few friends and they were all taken aback, having assumed we would be furthest from the warmth of the sun in the depths of our winter.

Come the 4th of July 2011, it is not only some strange celebration in the US about having made the terrible decision to go it alone in the world {joke, guys!}, but also the day in the year that the Earth is furthest from the sun – 152.097 million kilometres. That is about 3.39% further away and, as the energy we receive from the sun falls off with the square of the distance, it does account for a bit of a drop in the energy received. {The surface of a sphere is 4*pi*(R{adius} squared), and you can think of the energy from the sun as being spread over that sphere at any given distance}.
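
The inverse-square arithmetic is quick to check. A minimal sketch using the two distances quoted above:

```python
# Inverse-square check of the perihelion/aphelion figures above.
perihelion_mkm = 147.104   # closest approach, 3 Jan 2011 (million km)
aphelion_mkm = 152.097     # furthest point, 4 Jul 2011 (million km)

further = aphelion_mkm / perihelion_mkm - 1
energy_ratio = (perihelion_mkm / aphelion_mkm) ** 2   # energy ~ 1/d^2

print(f"Aphelion is {further:.2%} further out")                     # ~3.39%
print(f"Energy received there: {energy_ratio:.1%} of perihelion")   # ~93.5%

# About a 6.5% drop -- noticeable, but far smaller than the effect of
# the Earth's axial tilt, which is what actually gives us the seasons.
```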

Some of you may be wondering why this furthest/closest approach to the sun does not match the longest/shortest day. As some of you may remember, about this time last year I explained the oddity of the shortest day not matching when the evenings start drawing out. It is because, as we spin around our own pole and travel around the sun, things are complicated by the fact that the Earth “leans over” in its orbit.

Check out this nice web site where you can state the location and month you want to see sunrise, sunset, day length and (of particular relevance here) the distance from the sun for each day.

I find it interesting that many of the things most of us see as “common sense” are often not actually right (I always assumed that the shortest day coincided with both the evenings starting to draw out and the mornings getting earlier, until I stumbled across the truth when looking at sunset times – I had to go and find a nice astronomer friend to explain it all to me). I also like the fact that a very simple system – a regularly spinning ball circling a big “fixed point” in a regular way – throws up some oddities due to little extra considerations that often go overlooked. Isn’t that so like IT?

That lean in the Earth’s axis of spin, relative to the plane in which we revolve around the sun, is slowly rotating too, so eventually (long, long, long after any of us will be around to care) the furthest point in the orbit will indeed match the northern hemisphere winter. Again like IT, even the oddities keep shifting.

Team Work & The Science of Slacking July 23, 2010

Posted by mwidlake in Friday Philosophy, Management, Perceptions.

We all know that working in a team is more efficient than working on your own (and I did say a week or two back how I was enjoying the rare privilege of working in a team of performance guys). Many of us also know about team dynamics and creating a balanced team of ideas people, completer-finishers, implementers, strategists and so forth. Those of us who have been exposed to training courses or books on team management know all these good things about teams and how we are supposed to get the most out of them.

How many of us, though, have been introduced to the work of the French agronomist Max Ringelmann and the aspect of teams named after him, the Ringelmann Effect? In summary, the Ringelmann Effect proposes that people in teams try less hard than they do when working alone – especially if they think no one is watching them.

Back at the start of the 20th century, Ringelmann tested out his ideas with a tug-of-war experiment. He would get people to pull on a rope as hard as they could and record their efforts with a strain gauge. Then he would get them to pull on the rope as part of a team of two to eight people. As soon as people were part of a team, they pulled less hard. With two people in the team, each pulled 93% as hard as on their own; with three people this dropped to 85%, and with four it was just 77%. By the time there were eight people in the team, effort was down to 50%.
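
Those figures are worth playing with, because they also show the effect from the team’s point of view. A minimal sketch using just the percentages above:

```python
# Ringelmann's measured per-person effort by team size (1.0 = pulling solo).
effort = {1: 1.00, 2: 0.93, 3: 0.85, 4: 0.77, 8: 0.50}

for size, per_person in effort.items():
    total_pull = size * per_person   # whole-team output, in "solo pulls"
    print(f"team of {size}: {per_person:.0%} effort each, "
          f"total ~{total_pull:.2f}x one person")

# A team of 8 manages only ~4x one person's pull -- still more in absolute
# terms, but half of what 8 soloists "should" deliver between them.
```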

This idea of shirking work more and more as the team increases in size became established in modern psychology and was given Mr Ringelmann’s name. Psychologists explain that when someone is part of a group effort, the outcome is not solely down to the individual and, as such, is not totally in their control. This acts as a demotivating factor and the person tries that little bit less hard. The larger the team, the greater the demotivation and the more significant the drop in effort. Ringelmann found that effort was down to 50% in a team of only eight, so how bad can the impact of a really large team be? I think most of us have at least witnessed, and quite possibly been in, the position of feeling like just a cog in a massive corporate team machine. Thoroughly demotivating (though, of course, we all of us still tried as hard as we could, didn’t we?).

The effect is also known under the far more entertaining title of Social Loafing.

Monsieur Ringelmann was kinder at the time and pointed out that these chaps pulling on the rope could well have been suffering from a lack of synergy. They had not been trained to pull together as a team, so that could account for the drop in effort: they were simply not synchronising their pulls.

However, in the 1970s Alan Ingham at Washington University revisited Ringelmann’s work, and he was far sneakier. Sorry, he was a more rigorous scientist. He used stooges in his teams of rope-pullers, blindfolds, and the trick of putting the one poor person pulling for real at the front of the rope. Thus he could record the effort of the individual. Ingham found that there was indeed a drop in efficiency due to the team not pulling as one, but sadly this was not the main factor. The drop in effort remained mostly down to the perceived size of the rest of the team. The bottom line was the proven human capacity to try less hard when part of a team, with the drop in effort directly proportional to the size of the team.

We are of course not immune to this effect in the IT world, and some people have even gone to the effort of checking that out: James Suleiman and Richard T. Watson.

It seems the ways to reduce this problem are:

  • Don’t give people boring jobs.
  • Don’t give the same job to several people and let them know they all have the same job.
  • Ask people how they are getting on and give them mini-goals along the way.
  • Actually reward them for success. Like saying “thank you” and NOT giving them yet another boring, hard job to do because they did the last one so well.

I think it is also a good argument for keeping teams small {I personally think 5 or 6 people is ideal} and split up large projects such that a single team can cope. Then give tasks to individuals or pairs of people.

If you like this sort of thing, you might want to check out one of my first blog posts (though it is more an angry rant than a true discussion of the topic), which was on the Dunning-Kruger effect, where some people are unaware of their own limitations – though I did not know it was called the Dunning-Kruger effect until others told me, which only goes to show that maybe I am not aware of my own limits… Read the comments, or click through to the links from there, to get a better description of some people’s inability to gauge their own inabilities.

Friday Philosophy – The Science of Oracle June 11, 2010

Posted by mwidlake in Friday Philosophy, Meeting notes.

The title of this blog post is very misleading. It is not about scientifically understanding how Oracle technologies work, or even about the technology itself.

It is actually about the fact that a lot of scientific organisations, both academic and commercial, work with Oracle technology in ways directly to do with the science {as opposed to using it for CRM, HR or tracking students and results, which they also do, but I’m not interested in that}.

If you have worked in academia or charitable scientific organisations, it can be a little surprising that Oracle is used so much, as it is expensive and corporate – and there is a tendency to be poor and anti-corporate in such environments. But the thing is, Oracle is able to handle large amounts of complex data, in many formats, in many ways, and most programming languages can easily access the data in the database. You can achieve a lot with just PL/SQL and Java, of course.

Commercial scientific organisations, like large pharmaceuticals, use it for the same reasons of course, but for them the cost is not such an issue {I can imagine IT managers in such organisations going “It damn well IS an issue!” but trust me, not in the same critical way}.

What is the point of this blog post? Well, it’s about user communities. The scientific community has a tendency to push the Oracle database further than most Oracle users do. Take data volumes. I worked for many years on the UK side of the Human Genome Project, and part of what I did was create an Oracle database that scaled to 100TB. Even now that is pretty large, but I was designing and implementing it back in 2004-2005. The data volumes CERN are going to have to handle for the Large Hadron Collider just dwarf that – and they only hold summaries of summaries of the actual raw scientific data generated.

Another aspect is coping with very rapid change, for example in systems to support lab processes. This is similar to your standard factory management system, except that the level of change can be daunting. The process can change, well, weekly, as the science and techniques improve in the lab. Those scientists might even completely change what they are doing when some unexpected avenue opens up. I say “might” – it seemed to happen every month.

In scientific organisations there tends to be more openness about what they do and how they do it. Academic and charitable scientific organisations tend to put fewer barriers in the way of exchanging knowledge than corporations do, and that encourages more exchange of information. When I was working in the area I was positively encouraged to go to conferences and present. Obviously this is not always true, and commercial scientific organisations, like pharmaceuticals, have gained {rightly or wrongly} a reputation for being very reticent about sharing any knowledge at all. But often the individuals involved will share.

So, the scientific community push areas of the technology very hard, they tend to be an open bunch of people, cost is often critical and – the final thing, which I have not yet mentioned – they often speak a language only vaguely recognisable as English, due to the jargon. Sounds like a community to me.

The real reason I mention all this is that it looks like, after about four years of considering and discussing having a science SIG {Special Interest Group} in the UKOUG, I will finally be putting together an agenda for an initial meeting of such a thing. I wonder if it will be a success?
