
Finally Getting Broadband – Via 4G! (cable-free, fast-ish internet) February 15, 2021

Posted by mwidlake in Architecture, Hardware, off-topic, Private Life.

I live in a field. Well, I live in a house, but it and a few other houses make up a tiny hamlet surrounded by fields. My back garden is itself a small field. It’s nice to live surrounded by countryside, but one of the drawbacks of this rural idyll is our internet connection. Our house is connected to the World Wide Web by a copper telephone line (via a British Telecom service), and it’s not a very good copper telephone line either. I joke that wet, hairy string would be just as useful. Our connection speed is, at best, about 5Mbps download and 0.4Mbps upload. That is not a typo – nought point four megabits a second. My dual ISDN line back in 2003 could just about manage that. At busy times it’s worse – a lot worse – as all the houses in our hamlet contend for whatever bandwidth we have back to civilisation. Evenings & weekends it has almost become a not-service. It is not broadband, it’s narrowband. I pay £23 a month for the wire charge & unlimited calls, another £25 a month for the calls beyond “unlimited” (yeah, tell me) and £30 a month for the narrowband connection. Ouch!

It’s all fields and very little infrastructure…

 

Good BT service.

My neighbours have complained to BT about our internet speed many times over the years and I did also. I was told (and I paraphrase) “you are at the end of a long piece of old copper-wire-based technology and there are not enough people living there for us to care about, so it is not changing”. Which to me utterly sums up my 30 or so years’ experience of British Telecom, a company I loathe with a passion. I asked if I could get a discount for such a shit service and was told I was on their cheapest deal already. “Would I get better service if I paid more?” No. Well, at least they were honest about that.

About 2 years ago a company called Gigaclear came down the road to our little, rural hamlet. They are being paid a lot of money by Essex county council to lay fibre cable to rural locations all over the district. This raised our hopes. Gigaclear dug channels, laid down the ducting, put a green telecommunications box in place by “Joe on the Donkey” (this is the real name of a neighbour’s house) – and went away. They’ve been back a few times since, but the promised super-mega-fast fibre broadband service they touted has not come to fruition. The last two visits have been to check out why they can’t connect anyone. It might partly be that one pair could not even understand that the green box 100 meters away is probably the one that is supposed to service our house, not the one way across two fields that they have not dug a channel from.

 

Bad Weekend BT service

I first realised there was another solution when, forced by evenings & weekends when download speeds dropped to below 1Mbps, I started using my iPhone as a hotspot for my laptop. 5/0.4Mbps was replaced by 15/2.0Mbps. I was soon using my phone to upload pictures to social media and the charity I foster cats for, plus for video conferencing for work & social purposes. If my mobile phone was giving me better connection speed, why in hell was I using an expensive & slower connection from my physical telephone line? One problem was I only have so much download allowance on my mobile phone (10GB). The other was you need to keep the mobile by the computer to tether it. It was not a mobile anymore!

I was then chatting to a neighbour and he said he’d tried a relative’s 4G broadband from EE – EE is about the only phone network we can get a decent signal with here and we use them for our mobile phones – and he was pleasantly surprised at the speed. He said it was expensive though…

As a result of this chat I did a quick check on the EE website. A 4G EE broadband device (basically a box with the electronics for a mobile phone and a router in it) would be cheaper than my current BT solution! £35 a month, no up-front fee, and their advertising blurb claimed 31Mbps on average “in some places”. I had no expectation of getting anything near that sales-pitch speed, but repeated speed tests on my EE mobile phone were usually confirming 15 Mbps download and 2 Mbps upload, much better than the BT landline. And the offerings of Gigaclear, should they ever plumb us in, were for 30Mbps for a similar cost to EE 4G broadband, and 100Mbps if you spent more spondoolies. All in all, EE seemed not that expensive really and, as I said, cheaper than sod-all bandwidth with BT!

The last thing I checked was if you could get a physical EE 4G signal booster if your signal was poor. Yes, you can, but from 3rd party companies for about £250-£400. Our EE signal in Widlake Towers is better than any other mobile phone operator but it is never an all-bars signal.

The Change to 4G, cable-free Broadband

I decided it was worth a risk. All I wanted was the speed my iPhone was getting and, if it was poorer, a one-off spend of maybe £300 would help matters. I ordered an EE 4G router and 200GB of data a month. Why so much data? Well, I had never exceeded my 10GB data a month on my mobile phone, I am not what you could call a big data user – I do not download films or live stream stuff. But my connection speed had been so bloody awful for so long I had never even dreamed of downloading 1/2 hour TV programs, let alone whole movies! Maybe I might with a better connection. And I was about to start running training courses for a client. I figured I would err on the safe side.

My EE 4G router turned up the next day, I was so excited!

It was broken. It would get the 4G signal no trouble but its wifi service was unstable and it shut down under load. It was so annoying as for 10 minutes I had FORTY Mbps download and FIFTEEN Mbps upload performance! I count this as mental cruelty, to let me see the sunny uplands of normal 1st world internet access but to then immediately remove it from me…

It was clearly a faulty unit so I was straight on to EE. It took over an hour and a half to contact & talk through the issue with EE but, to be fair, they have a process to go through and they have to make sure the customer is not doing something daft like keeping the router in a basement, and they sent me a replacement unit the very next day.

This is more like it!

It arrived. I plugged it in. It worked. It was great! The bandwidth was an order of magnitude better than the old BT router over the fixed telephone cable. Not only that, it also far exceeded both what I had got via my phone and also the estimates of EE. I got over 60Mbps download on average and often above 70 Mbps. The highest I have seen so far is 98Mbps. Upload averages around 14Mbps and has gone up to 30 Mbps at times – but I have to say I see the peak upload when download is a little depressed. On average I am now getting consistently over 60Mbps download and 10Mbps upload speeds, though sometimes when the local network is busy (mid workday afternoon) I see a little less. “Peak performance” comes at weekends and in the evenings, when I get fantastic performance – maybe because business users in the area are quieter and fewer domestic clients are using the 4G network.

So, over 60Mbps download and 10Mbps upload average and sometimes more – I’ll take that! More than 10 times faster download and, well, 30-50 times faster upload than BT over tired copper.

It’s utterly transformed my online experience. I can honestly say that when I see slow internet performance on web pages now I am just as inclined to blame the remote site as my connection. And I can upload pictures & emails in a way I have never been able to before. Until now I was not able to put up short videos of our foster cats to the charity website unless I did it on my phone in the garden, and that was hit-and-miss. Now I can just chuck videos over to them and not worry about it. For me it is a game changer.

My 4G Choice

In the window, catching rays – 4G rays

I had little choice but to go for EE as no other mobile phone company has decent coverage in my area. You may also have only 1 choice but, if you live in an area where many 4G services are available (i.e. you live in a place where other people live!) then look into which is best – not just for speed/cost but also customer service. Many companies are offering wireless 4G and 5G services. Personally I would stick to 4G as 5G is still shiny and new enough to come with a price hike for not-a-lot more total throughput. I’ve always been really pleased with EE customer service. For years I’ve popped over to one of the two local-ish EE shops whenever I have needed to change something or had a problem and they always sort me out quickly. Not only that, on a couple of occasions I’ve suggested I go for a more expensive plan (say to get more roaming data allowance) and they have looked at my historic usage – “Mate, you’ve never been even close to the smaller plan, save yourself £10 a month and take the cheaper one. You can always upgrade”.

I went for EE’s standard 4G Home Router as the only difference I could see between it and their 4G Home Router 2 was that the Home Router 2 supported 64 devices not 32, for an extra £50 up front. Between us Mrs Widlake and I have fewer than 10 devices, let alone over 32…. At the time of writing there is no initial charge for the 4G Home Router, just a £35-£55 monthly charge depending on what data allowance you want (£35 = 100GB, and you get an extra 100GB for each additional £5 up to 500GB, but then at £55 it becomes unlimited). You can choose between an 18-month contract or no contract and an up-front fee, but go look at the website for details, it will have changed by the time you look (I know this as they have introduced a 5G home router between the time I started this blog post and ended it! But I have no 5G signal, so it is of no consideration for me).

In line of sight of the study window

Initially I had the EE 4G home router in the main room of the house so I could fiddle with it if needed, but I soon moved it upstairs to a bedroom where prior tests had shown I got a better 4G signal. (You want as little in the way of building materials and geography between you and the 4G mast, so upstairs by a window is ideal. And in my house the top floor where I put the router is made of wood, straw, mud, & horse shit. Other parts have fully insulated plasterboard which includes a thin metal foil layer to both reflect heat and, unfortunately, block electromagnetic radiation).

Spreading The Network

Another consideration for me was allowing the wifi signal to get to the study. The study is above the garage, a structure covered in black clapperboard which is strangely attached to the neighbour’s house (this is the sort of thing you get with properties hundreds of years old – things just got built). A few years ago when we had the study/garage rebuilt to modern standards we got another company to provide telephone services to the study, to see if it was better than BT. It was. A bit. And it still is. But that company is now part of BT (as is EE to be fair) and is slower than my mobile phone. If the new router reached the study we could stop using BT AND we could stop using this backup supplier (which was cheaper than BT but more limited in some respects). With line-of-sight I hoped the wifi would reach the study. It did – but it was right at the range limit and the signal would drop :-(. If you moved your laptop or tablet away from the window and clear line-of-sight, you lost the wifi signal from the new 4G broadband router.

I see you (just) router

Well, I had a possible solution to this too.

There are many wifi extenders on the market at many prices, some just use wifi and some use your power cables and others create a mesh. If 30 years in I.T. have taught me anything it is that there is something deficient in my head that means I personally have no affinity for networks. I need simple. I knew I could not use a power cable solution. With these you plug one device in a socket and it communicates over your domestic power lines to a second plugged-in device which provides a wifi service. For historical reasons my study is on a different power circuit to the house, I doubt it would work. I did not want to go to Mesh as I felt (based on experience) I would fuck it up. I just wanted a simple, single WiFi extender.

After a few hours on the internet I came to the conclusion that there was a solution, a relatively old device (first sold in 2016) that might be what I wanted. A TP Link RE450, also known as an AC1750. It was simple and excelled at having a long range. I ordered one for £50.

It came and, following the simple instructions and maybe half an hour of my part-time attention, I had it working and connecting to both the 5 and 2.4 GHz networks of my EE 4G broadband router. I moved the TP Link RE450 over to the study and plugged it in so it had line-of-sight to my EE 4G router. The connection light flashed blue and red, which suggested it was not happy – but I worked out that it was happy with the 2.4GHz connection but not the 5GHz one. It was right on the edge of its range. A bit of fiddling with orientation (hat tip to Mrs W who realised it was better on its side) over 2 days, including moving the router a whole 30cm closer, and now both are happy.

The end result is I now have access to the 4G EE broadband router in the study & garage at about 20Mbps download and 12 Mbps upload. I think the limit is the TP Link to EE router connection, which is just down to distance. Bottom line, I now have access to the internet from every part of my house and separate study, and the whole front garden, and the edge of the field opposite the house, and some of the back garden, at speeds substantially faster than my old landline.

British Telecom will be getting a cancellation notice from me by the end of the month (I need to change some email addresses) and the third party service to the study will also be terminated. I will replace a service from BT that was costing me £80 a month and another that was £30 a month with just one at £40 a month, which gives me a much, much better service.

That feels good.

Latest speed test? Done as I completed this post, I recorded 77Mbps download & 30Mbps upload, which I am incredibly pleased with. I don’t expect to get that all the time though.

Speed test the morning I posted this. It will do 🙂

 

Introducing I.T. to an Elderly Relative February 25, 2019

Posted by mwidlake in Hardware, off-topic, Perceptions, Private Life.

Introducing an older person to the connected world can be a challenge. So I thought I would describe my recent experiences in introducing my elderly mother to I.T. and the internet. Each such situation will be different of course, depending on the prior experience of the person and the skills you believe they have. I’m going to run through what I think are the main initial considerations. I knew from the start it was going to be a particular challenge with my mother, so I think she is a good example. Hopefully, for many the task will be a little easier…

From cheezburger dot com

Firstly, why are we doing this?

Not everyone has to be on the internet and I knew it was going to be stressful for everyone involved, so the first question to ask is “Is it in the best interest of Uncle Bob to go through this?”

For years my mother has shown very little interest in computers or the internet, and at times she has been quite “Oh, those damn things!” about it all. But over the last 2 or 3 years Mum’s started showing an interest. This has nothing to do with the fact that her youngest son’s whole working life has been in I.T., I think she’s simply started to feel she is missing out as there are so many references on TV programs and in the newspaper to things on the internet. “Just go to blingy bong for more information!”. And to her, it really is “blingy bong”.

I think it is vital that the person wants to get online – and this is not a one-week wonder.

Before now my mum had mentioned getting online but then lost interest when the one thing she was interested in disappeared, such as checking the state of play in the Vuelta cycling race as it was not on her TV. Setting someone up on the internet is not cheap and I knew she would insist on paying. You have to organise broadband to the property, buy a device and then spend time in training them. If mum lost interest after a couple of days of trying, it would all be a waste of effort. But she had been constant in mentioning this for a couple of months.

Another reason to get Mum online is so she can stay in touch more easily {am I really sure I want this?!?}. Her hearing is not as good as it was and phone calls are a ‘dedicated, binary activity’. What do I mean by that? Well, when you are on the phone, you have to keep the conversation going and you are doing nothing else, this is your only chance to communicate – dedicated. And when you are not on the phone you are not in contact – Binary (all or nothing).

I think those of us in the technology industry or who grew up in the last… 4 decades maybe take this for granted, but with email, texts, messenger, whatsapp etc you can throw a message or two at people when the need occurs to you, and leave them for the person to pick up. It is a more relaxed way of communicating and, in many ways, more reliable. At present if mum needs me to come over and change light bulbs she needs to call me in the evening. She won’t call me during the day, she is convinced nothing short of death is important enough to call during the day! So she also needs to remember to call and mum is getting worse for that. If she is online she can send me a message when she notices the bulb in the hall has blown.

The next step is to assess the capabilities of the person you are helping.

I’ve introduced a few other people (mother-in-law, brother to some degree, relatives of friends) to computers and the internet over the years and the size of the challenge is very much dictated by their skills. I think you need to be honest about how much and how soon people can learn, especially if they are older or have learning needs. It’s great to be surprised by them doing better than you expected, but if they do worse then it can be demoralising for both parties.

My mother-in-law was a retired science teacher, interested in a dozen things, confident, and self-motivated. When she asked me to help her get on the internet I knew it was not going to be too hard. But something I did not consider is that she had never typed at all (which surprised me, but there you go), so the keyboard was an initial, surprise challenge to the task. Just think about it, you have to explain the “enter” key, the “delete” key, “shift” key, special symbols… But the mother-in-law was used to using equipment and took to it well. It did mean that the first session was almost totally about introducing her to the keyboard and just a few basics on turning the machine on and off and using email. After that I went on in later sessions to show her the basics of Windows, email, web browsing and she was soon teaching herself. She got a couple of “computers for dummies” books and went through them.

Learning skills deteriorate as you age – but each individual is different. Be realistic.

My mother had also never used a typewriter – but she is also not good with technology. Getting her to understand how to use a video player was a task way back when.  It is not that she is no good with mechanical things or controlling them, she was a sewing machinist all her career – but she never moved from a simple sewing machine with just a dozen manually selected stitch patterns to ones which you can program or that have a lot of controls. This might be mean to say, but she struggled with an electronic cat-flap when we installed one for her! {Well, we installed it for the cats to be honest, we do not make Mum enter and exit the house on her hands and knees through a small hole in the door}. My mum has also never had (or wanted) a mobile phone, let alone a smart phone. Apps, widgets, icons, touch screens are all things she has never used.  We were going to have to keep it very, very simple. Mum also lacks focus and retention of details. Lots of repetition would be needed to learn, and only a few things at a time.

Third Question – What hardware?

This is a major consideration. A few years ago if you wanted internet access and email the choice was simply “Mac or PC” and probably came down to what you personally preferred and felt most comfortable supporting.

I realised from the very start that my mum would never cope with a Windows PC or a Mac. I know some people are so Mac-fanboy that they will insist it is “so easy anyone could use them” but no, Macs can have issues and there is a lot of stuff to initially learn to get going. And, like PCs, they DO go wrong and have issues.

Choice made – will it be the correct one?

I did initially investigate if I could make a Windows PC work for my mum. I can sort out most issues on a PC and so it would be easier for me to support her. You can set Windows up to be simpler for an older person. I was more than happy setting up other older people with a PC in the past, as I’ve mentioned. Another big advantage with a PC would be I could set it up so I could remote access it and help. I live 2.5 hours from Mum, remote access would be a major boon. In another situation I think I would go down that route, set up a Windows laptop, reduce what was available on it, put on the things I felt they would want initially and ensure I had full access to the machine. I could then do interactive “show and tell” sessions. Of course, you have to consider privacy if you have full access to someone’s machine. But I felt I was trying to come up with a solution that was easier for me rather than easier for the person I was helping.

My final factor in my decision on what to go for was “the internet”. There is bad stuff on the internet (I don’t mean content so much, what my Mum looks at is up to her and I am under no illusions that when someone gets old they do not become a child to protect. I don’t understand why some people seem to think old people are sweet and innocent! Old people used to be young, wild, risk-taking and randy. They’ve lived a life and learnt about the world and they know what they do and do not like). What bothers me about the internet is viruses, spyware, downloads that screw your system over. No matter how much I would explain to my mum, there was a good chance she would end up clicking on something and downloading some crap that messed up the system or stole her details. Machines that are not Windows PCs suffer from this a lot less.

For a while my mum said she wanted an Alexa or something similar. Something she could ask about Lonnie Donegan’s greatest hits (this is a totally true example). But talking to her she also wanted email and BBC news and sport. Also, I’ve seen people using an Alexa and getting it to understand & do what you want is pretty hit & miss, I could see that really frustrating my Mum. And I don’t like the damned, nasty, spying, uncontrolled bloody things – they listen all the time and I don’t think it is at all clear what gets sent back to the manufacturer, how it is processed, how they use it for sales & marketing.

So, for my mum a tablet was the way to go. It is simpler, much more like using a phone (you know, the mobile phone she has never had!) and has no complication of separate components. Plus it is smaller. I decided on an iPad because:

    • The three people she is most likely to be in contact with already have an iPad mini or iPhone,
    • They are simple. Simple-ish. Well, not too complicated.
    • I felt it was big enough for her to see things on it but not so big as to be in the way.
    • The interface is pretty well designed and swish.
    • They are relatively unaffected by viruses and malware (not impervious though)
    • It will survive being dropped on the carpeted floor of her house many, many, many times.
    • You can’t harm them by just typing things and running apps. {Hmm, I’ll come back to that in a later post…}
    • If she really hated it, I could make use of a new iPad 🙂

The biggest drawback to an iPad is I cannot get remote access. I’ve had a play with one remote viewing tool but it is too complex for Mum to do her part of things, at least initially. If anyone has any suggestions for dead simple remote access to iPads (and I don’t mind paying for such a service) please let me know. I have access to all her passwords and accounts, at least until she is happy taking control, so I can do anything to get access.

I did not make the decision on her hardware on my own though. Having thought through all the above myself, the next time I visited Mum I took an iPad mini and an iPhone and I asked her what she thought she wanted. We talked about Alexas and PCs too. She did not want a PC, she hated the home computer my father had had (it made funny noises in the corner and disturbed her watching “Eastenders”). Even a laptop was too big – her table in the living room must remain dedicated to her jigsaws! Mum felt an iPhone was too small for her. I won’t say I did not lead the conversation a little, but if she had been adamant she wanted just a phone or a laptop, I’d have tried to make it happen.

Decision made, it will be a standard iPad.

Are we all set?

No, not quite. There is one last thing before starting down this route: getting advice from others on how to do this (which might be why you are reading this). As well as looking around on the internet a little I tweeted out to my community within I.T. to ask for simple advice. After all, many of us are of an age where we have had to deal with helping our older relatives get online. And I got quite a lot of good advice. I love it when the community helps.

A lot of the advice was on how to set up the device. However, I think it best to cover the setting up of the device under a dedicated post. That will be next.

Friday Philosophy – Size is Relative February 15, 2019

Posted by mwidlake in Architecture, Friday Philosophy, Hardware.

The below is a USB memory stick, a 64GB USB memory stick which I bought this week for about 10€/$. I think the first USB memory stick I bought was 8MB (1/8000 the capacity) and cost me twice as much.

Almost an entry level USB memory stick these days

This is a cheap, almost entry level USB memory stick now – you can get 0.5TB ones for around €50. I know, I know, they keep getting larger. As does the on-board memory of computers, disc/SSD size, and network speed. (Though compute speed seems to be stalling and has dropped below Moore’s law, even taking into account the constant rise in core count). But come on, think about what you can store on 64GB. Think back to some of the large systems you worked on 10 years ago and how vast you might have thought the data was.

What made me sit back a little is that I’ve worked with VLDBs (Very Large DataBases) for most of my career, the first one being in about 1992/3. And that was a pretty big one for its time, it was the largest database Oracle admitted to working on in the UK back then I think. You guessed it – this little USB memory stick would hold the whole database, plus some to spare. What did the VLDB hold? All the patient activity information for a large UK hospital – so all patients who had ever been to the hospital, all their inpatient and outpatient stays, waiting lists, a growing list of lab results (blood tests, x-rays)… The kit to run this took up the space of a large lorry/shipping container. And another shipping container of kit to cool it.

What makes a database a VLDB? Well, this old post here from 2009 probably still covers it, but put simply it is a database where simply the size of it gives you problems to solve – how to back it up, how to migrate it, how to access the data within it in a timely manner. It is not just about raw volume, it also depends on the software you are using and the hardware. Back in the mid-2000s we had two types of VLDB where I worked:

  • Anything above 100GB or so on MySQL was a VLDB as the database technology struggled
  • Anything above 2TB on Oracle was a VLDB as we had trouble getting enough kit to handle the IO requirements and memory for the SGA.

That latter issue was interesting. There was kit that could run a 2TB Oracle database with ease back then, but it cost millions. That was our whole IT budget, so we had to design a system using what were really beefed-up PCs and RAC. It worked. But we had to plan and design it very carefully.
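
As an aside, if you want a rough idea of how close your own database is to whatever you consider the “VLDB” threshold, the allocated size is a quick check against the standard dictionary views. Just a sketch – it reports the space allocated to datafiles and tempfiles, not how much of that space is actually used:

  -- Total allocated database size in GB (datafiles plus tempfiles).
  -- Needs SELECT access on the DBA_* views.
  select round(sum(gb), 1) as total_gb
  from (select sum(bytes)/1024/1024/1024 as gb from dba_data_files
        union all
        select sum(bytes)/1024/1024/1024 from dba_temp_files);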

So size in I.T. is not just about the absolute volume. It is also to do with what you need to do with the data and what hardware is available to you to solve your volume problems.

Size in I.T. is not absolute – It is relative to your processing requirements and the hardware available to you

That USB stick could hold a whole hospital system, possibly even now if you did not store images or DNA information. But with a single port running at a maximum of 200MB/s and, I don’t know, maybe 2,000 IOPS (read – writes can suck for USB memory sticks), could it support a whole hospital system? Maybe just, I think those figures would have been OK for a large storage array in 1993! But in reality, what with today’s chatty application design and larger user base, probably not. You would have to solve some pretty serious VLDB-type issues.

Security would also be an issue – it’s real easy to walk off with a little USB Memory stick! Some sort of data encryption would be needed… 🙂

Friday Philosophy – The Singular Stupidity of the Sole Solution April 22, 2016

Posted by mwidlake in Architecture, Exadata, Friday Philosophy, Hardware.

I don’t like the ‘C’ word, it’s offensive to some people and gets used way too much. I mean “cloud” of course. Across all of I.T. it’s the current big trend that every PR department seems to feel the need to trump about and it’s what all Marketing people are trying to sell us. I’m not just talking Oracle here either, read any computing, technical or scientific magazine and there are the usual ads by big I.T. companies like IBM and they are all pushing clouds (and the best way to push a cloud is with hot air). And we’ve been here before so many times. It’s not so much the current technical trend that is the problem, it is the obsession with the one architecture as the solution to fit all requirements that is damaging.

No clouds here yet

When a company tries to insist that X is the answer to all technical and business issues and promotes it almost to the exclusion of anything else, it leads to a lot of customers being disappointed when it turns out that the new golden bullet is no such thing for their needs. Especially when the promotion of the solution translates to a huge push in sales of it, irrespective of fit. Technicians get a load of grief from the angry clients and have to work very hard to make the poor solution actually do what is needed or quietly change the solution for one that is suitable. The sales people are long gone of course, with their bonuses in the bank.

But often the customer confidence in the provider of the solution is also long gone.

Probably all of us technicians have seen it, some of us time after time and a few of us rant about it (occasionally quite a lot). But I must be missing something, as how can an organisation like Oracle or IBM not realise they are damaging their reputation? But they do it in a cyclical pattern every few years, so whatever they gain by mis-selling these solutions is somehow worth the abuse of the customer – as that is what it is. I suppose the answer could be that all large tech companies are so guilty of this that customers end up feeling it’s a choice between a list of equally dodgy second-hand car salesmen.

Looking at the Oracle sphere, when Exadata came along it was touted by Oracle Sales and PR as the best solution – for almost everything. Wrongly. Utterly and stupidly wrongly. Those of us who got involved in Exadata with the early versions, especially I think V2 and V3, saw it being implemented for OLTP-type systems where it was a very, very expensive way of buying a small amount of SSD. The great shame was that the technical solution of Exadata was fantastic for a sub-set of technical issues. All the clever stuff in the storage cell software and maximizing hardware usage for a small number of queries (small sometimes being as small as 1) was fantastic for some DW work with huge full-segment-scan queries – and of no use at all for the small, single-account-type queries that OLTP systems run. But Oracle just pushed and pushed and pushed Exadata. Sales staff got huge bonuses for selling them and the marketing teams seemed incapable of referring to the core RDBMS without at least a few mentions of Exadata.
Like many Oracle performance types, I ran into this mess a couple of times. I remember one client in particular who had been told Exadata V2 would fix all their problems. I suspect based solely on the fact it was going to be a multi-TB data store. But they had an OLTP workload on the data set and any volume of work was slaying the hardware. At one point I suggested that moving a portion of the workload onto a dirt cheap server with a lot of spindles (where we could turn off archive redo – it was a somewhat unusual system) would sort them out. But my telling them a hardware solution 1/20th the cost would fix things was politically unacceptable.

Another example of the wonder solution is Agile. Agile is fantastic: rapid, focused development that gets a solution to a constrained user requirement in timescales that can be months, weeks, even days. It is also one of the most abused terms in I.T. Implementing Agile is hard work, you need to have excellent designers, programmers that can adapt rapidly and a lot, and I mean a LOT, of control of the development and testing flow. It is also a methodology that blows up very quickly when you try to include fix-on-fail or production support workloads. It also goes horribly wrong when you have poor management, which makes the irony that it is often implemented when management is already failing even more tragic. I’ve seen 5 agile disasters for each success, and on every project there are the shiny-eyed Agile zealots who seem to think just implementing the methodology, no matter what the aims of the project or the culture they are in, is guaranteed success. It is not. For many IT departments, Agile is a bad idea. For some it is the best answer.

Coming back to “cloud”, I think I have something of a reputation for not liking it – which is not a true representation of my thoughts on it, but is partly my fault as I quickly tired of the over-sell and hype. I think some aspects of cloud solutions are great. The idea that service providers can use virtualisation and container technology to spin up a virtual server, a database, an application, an application sitting in a database on a server, all in an automated manner in minutes, is great. The fact that the service provider can do this using a restricted number of parts that they have tested integrate well means they have a way more limited support matrix and thus better reliability. With the Oracle cloud, they are using their engineered systems (which is just a fancy term really for a set of servers, switches, network & storage configured in a specific way with their software configured in a standard manner) so they can test thoroughly and not have the confusion of a type of network switch being used that is unusual or a flavour of Linux that is not very common. I think these two items are what really make cloud systems interesting – fast, automated provisioning and a small support matrix. Being available over the internet is not such a great benefit in my book as that introduces reasons why it is not necessarily a great solution.

But right now Oracle (amongst others) is insisting that cloud is the golden solution to everything. If you want to talk at Oracle Open World 2016 I strongly suspect that not including the magic word in the title will seriously reduce your chances. I’ve got some friends who are now so sick of the term that they will deride cloud, just because it is cloud. I’ve done it myself. It’s a potentially great solution for some systems, ie running a known application that is not performance critical and is accessed in a web-type manner already. It is probably not a good solution for systems that are resource heavy, have regulations on where the data is stored (some clinical and financial data cannot go outside the source country no matter what), alter rapidly or are business critical.

I hope that everyone who uses cloud also insists that the recovery of their system from backups is proven beyond doubt on a regular basis. Your system is running on someone else’s hardware, probably managed by staff you have no say over and quite possibly with no actual visibility of what the DR is. No amount of promises or automated mails saying backups occurred is a guarantee of recovery reliability. I’m willing to bet that within the next 12 months there is going to be some huge fiasco where a cloud services company loses data or system access in a way that seriously compromises a “top 500” company. After all, how often are we told by companies that security is their top priority? About as often as they mess it up and try to embark on a face-saving PR exercise. So that would be a couple a month.

I just wish Tech companies would learn to be a little less single solution focused. In my book, it makes them look like a bunch of excitable children. Give a child a hammer and everything needs a pounding.

The “as a Service” paradigm. October 27, 2015

Posted by mwidlake in Architecture, Hardware, humour.

For the last few days I have been at Oracle Open World 2015 (OOW15) learning about the future plans and directions for Oracle. I’ve come to a striking realisation, which I will reveal at the end.

The message being pressed forward very hard is that of compute services being provided “As A Service”. This now takes three flavours:

  1. Being provided by a 3rd party’s hardware via the internet, ie in The Cloud.
  2. Having your own hardware controlled and maintained by you but providing services with the same tools and quick-provisioning ideology as “cloud”. This is being called On Premise (or just “On Prem” if you are aiming to annoy the audience), irrespective of the probable inaccuracy of that label (think hosting & dedicated compute away from head office).
  3. A mix of the two where you have some of your system in-house and some of it floating in the Cloud. This is called Hybrid Cloud.

There are many types of “as a Service” offerings, the main ones probably being:

  • SaaS – Software as a Service
  • PaaS – Platform as a Service
  • DBaaS – Database as a Service
  • IaaS – Infrastructure as a Service.

Whilst there is no denying that there is a shift of some computer systems being provided by any of these, or one of the other {X}aaS offerings, it seems to me that what we are really moving towards is providing the hardware, software, network and monitoring required for an IT system. It is the whole architecture that has to be considered and provided and we can think of it as Architecture as a Service or AaaS. This quick provisioning of the architecture is a main win with Cloud, be it externally provided or your own internal systems.

We all know that whilst the provision time is important, it is really the management of the infrastructure that is vital to keeping a service running, avoiding outages and allowing for upgrades. We need a Managed Infrastructure (what I term MI) to ensure the service provided is as good as or better than what we currently have. I see this as a much more important aspect of Cloud.

Finally, it seems to me that the aspects that need to be considered are more than initially spring to mind. Technically the solutions are potentially complex, especially with hybrid cloud, but also there are complications of a legal, security, regulatory and contractual aspect. If I have learnt anything over the last 2+ decades in IT it is that complexity of the system is a real threat. We need to keep things simple where possible – the old adage of Keep It Simple, Stupid is extremely relevant.

I think we can sum up the whole situation by combining these three elements of architecture, managed infrastructure and simplicity into one encompassing concept, which is:

KISS MI AaaS.

.

.

And yes, that was a very long blog post for a pretty weak joke. 5 days of technical presentations and non-technical socialising does strange things to your brain.

With Modern Storage the Oracle Buffer Cache is Not So Important. May 27, 2015

Posted by mwidlake in Architecture, Hardware, performance.

With Oracle’s move towards engineered systems we all know that “more” is being done down at the storage layer and modern storage arrays have hundreds of spindles and massive caches. Does it really matter if data is kept in the Database Buffer Cache anymore?

Yes. Yes it does.

Time for a cool beer

With much larger data sets and the still-real issue of fewer disk spindles per GB of data, the Oracle database buffer cache is not so important as it was. It is even more important.

I could give you some figures but let’s put this in a context most of us can easily understand.

You are sitting in the living room and you want a beer. You are the oracle database, the beer is the block you want. Going to the fridge in the kitchen to get your beer is like you going to the Buffer Cache to get your block.

It takes 5 seconds to get to the fridge, 2 seconds to pop it open with the always-to-hand bottle opener and 5 seconds to get back to your chair. 12 seconds in total. Ahhhhh, beer!!!!

But – what if there is no beer in the fridge? The block is not in the cache. So now you have to get your car keys, open the garage, get the car out and drive to the shop to get your beer. And then come back, pop the beer in the fridge for half an hour and now you can drink it. That is like going to storage to get your block. It is that much slower.

It is only that much slower if you live 6 hours drive from your beer shop. Think taking the scenic route from New York to Washington DC.

The difference in speed really is that large. If your data happens to be in the memory cache in the storage array, that’s like the beer already being in a fridge – in that shop 6 hours away. Your storage is SSD-based? OK, you’ve moved house to Philadelphia, 2 hours closer.

Let’s go get beer from the shop

To back this up, some rough (and I mean really rough) figures. Access time to memory is measured in Microseconds (“us” – millionths of a second) to hundreds of Nanoseconds (“ns” – billionths of a second). Somewhere around 500ns seems to be an acceptable figure. Access to disc storage is more like Milliseconds (“ms” – thousandths of a second). Go check an AWR report or statspack or OEM or whatever you use, you will see that db file scattered reads take anywhere from say 2 or 3ms up to the low teens of ms, depending on what your storage and network is. For most sites, that speed has hardly altered in years as, though hard discs get bigger, they have not got much faster – and often you end up with fewer spindles holding your data as you get allocated space not spindles from storage (and the total sustainable speed of hard disc storage is limited to the total speed of all the spindles involved). Oh, the storage guys tell you that your data is spread over all those spindles? So is the data for every system then, you have maximum contention.
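
If you don’t have an AWR report to hand, you can pull a rough version of those figures straight from V$SYSTEM_EVENT – bear in mind they are averages since instance startup, so they smooth over the peaks and troughs. A minimal sketch:

  -- Average single-block and multi-block read times since instance startup.
  -- TIME_WAITED_MICRO is in microseconds, so divide by 1,000 for milliseconds.
  select event,
         total_waits,
         round(time_waited_micro / nullif(total_waits, 0) / 1000, 2) as avg_ms
  from   v$system_event
  where  event in ('db file sequential read', 'db file scattered read');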

However, memory speed has increased over that time, and so has CPU speed (though CPU speed has really stopped improving now, it is more down to more CPUs).

Even allowing for latching and pinning and messing around, accessing a block in memory is going to be at the very least 1,000 times faster than going to disc, maybe 10,000 times. Sticking to a conservative 2,000 times faster for memory than disc, that 12-second trip to the fridge equates to 24,000 seconds driving. That’s 6.66 hours.

This is why you want to avoid physical IO in your database if you possibly can. You want to maximise the use of the database buffer cache as much as you can, even with all the new Exadata-like tricks. If you can’t keep all your working data in memory, in the database buffer cache (or in-memory or use the results cache) then you will have to do that achingly slow physical IO and then the intelligence-at-the-hardware comes into its own, true Data Warehouse territory.
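
A quick way of seeing how much of your workload is being satisfied from the buffer cache rather than by physical IO is to compare the logical and physical read counts in V$SYSSTAT. The old “cache hit ratio” can mislead you badly, but a huge physical reads figure is still a decent clue that you are doing a lot of that achingly slow IO. Again just a sketch, and the figures are cumulative since startup:

  -- Logical reads are 'db block gets' plus 'consistent gets'; compare them to 'physical reads'.
  select sum(decode(name, 'physical reads', value, 0))   as physical_reads,
         sum(decode(name, 'db block gets', value, 0))
       + sum(decode(name, 'consistent gets', value, 0))  as logical_reads
  from   v$sysstat
  where  name in ('physical reads', 'db block gets', 'consistent gets');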

So the take-home message is – avoid physical IO, design your database and apps to keep as much as you can in the database buffer cache. That way your beer is always to hand.

Cheers.

Update. Kevin Fries commented to mention this wonderful little latency table. Thanks Kevin.

“Here’s something I’ve used before in a presentation. It’s from Brendan Gregg’s book – Systems Performance: Enterprise and the Cloud”

Fixing my iPhone with my Backside May 18, 2015

Posted by mwidlake in Hardware, off-topic, Perceptions, Private Life.

{Things got worse with this phone >>}

iPhone 5 battery getting weak? Damn thing is saying “out of charge” and won’t charge? Read on…

NB a link to the battery recall page is at the end of this post and the eventual “death” of my phone and my subsequent experience of the recall with my local apple store can be found by following the above link…

Working with Oracle often involves fixing things – not because of the Oracle software (well, occasionally it is) but because of how it is used or the tech around it. Sometimes the answer is obvious, sometimes you can find help on the web and sometimes you just have to sit on the issue for a while. Very, very occasionally, quite literally.

Dreaded “out of battery” icon

Last week I was in the English Lake District, a wonderful place to walk the hills & valleys and relax your mind, even as you exhaust your body. I may have been on holiday but I did need to try and keep in touch – and it was proving difficult. No phone reception at the cottage, the internet was a bit slow and pretty random, my brother’s laptop died – and then my iPhone gave up the ghost. Up on the hills, midday, it powers off and any attempt to use it just shows the “feed me” screen. Oh well, annoying but not fatal.

However, I get back to base, plug it in…and it won’t start. I still get the “battery out of charge” image. I leave it an hour, still the same image. Reset does not help, it is an ex-iPhone, it has ceased to be.

My iPhone is a version 5 model, black as opposed to the white one shown (picture stolen from “digitaltrends.com” and trimmed), not that the colour matters! I’ve started having issues with the phone’s battery not lasting so well (as, I think, has everyone with the damned thing) and especially with its opinion of how much charge is left being inaccurate. As soon as it gets down to 50% it very rapidly drops to under 20%, starts giving me the warnings about low battery and then hits 1%. And sometimes stays at 1% for a good hour or two, even with me taking the odd picture. And then it will shut off. If it is already down at below 20% and I do something demanding like take a picture with flash or use the torch feature, it just switches off and will only give me the “out of charge” image. But before now, it has charged up fine and, oddly enough, when I put it on to charge it immediately shows say 40-50% charge and may run for a few hours again.

So it seemed the battery was dying and had finally expired. I’m annoyed: with the unreliable internet that phone was the only verbal way to keep in touch with my wife, and for that contact I had to leave the cottage and go up the road 200 meters (thankfully, in the direction of a nice pub).

But then I got thinking about my iPhone and its symptoms. Its opinion of its charge would drop off quickly, sudden drain had a tendency to kill it and I had noticed it lasting less well if it was cold (one night a couple of months ago it went from 75% to dead in 10, 15 mins when I was in a freezing cold car with, ironically, a dead battery). I strongly suspect the phone detects its level of charge by monitoring the amperage drop, or possibly the voltage drop, as the charge is used. And older rechargeable batteries tend to drop in amperage. And so do cold batteries {oddly, fully charged batteries can have a slightly higher voltage as the internal resistance is less, I understand}.

Perhaps my battery is just not kicking out enough amperage for the phone to be able to either run on it or “believe” it can run on it. The damn thing has been charging for 2 or 3 hours now and still is dead. So, let’s warm it up. Nothing too extreme, no putting it in the oven or on top of a radiator. Body temperature should do – we used to do this on scout camps with torches that were almost exhausted. So I took it out of its case (I have a stupid, bulky case) so that its metal body is uncovered and I, yep, sat on it. And drank some wine and talked balls with my brother.

15 minutes later, it fires up and reckons it is 70% charged. Huzzah, it is not dead!

Since then I have kept it out of its case, well charged and, frankly, warm. If I am out it is in my trouser pocket against my thigh. I know I need to do something more long-term but right now it’s working.

I tend to solve a lot of my IT/Oracle issues like this. I think about the system before the critical issue, was there anything already going awry, what other symptoms were there and can I think of a logical or scientific cause for such a pattern. I can’t remember any other case where chemistry has been the answer to a technology issue I’m having (though a few where physics did, like the limits on IO speed or network latency that simply cannot be exceeded), but maybe others have?

Update: if you have a similar problem with your iPhone5, there is a recall on some iPhone5’s sold between December 2012 and January 2013 – https://www.apple.com/uk/support/iphone5-battery
{thanks for letting me know that, Neil}.

Friday Philosophy – Tosh Talked About Technology February 17, 2012

Posted by mwidlake in Friday Philosophy, future, Hardware, rant.

Sometimes I can become slightly annoyed by the silly way the media puts out total tosh and twaddle(*) that over-states the impact or drawbacks of technology (and science (and especially medicine (and pretty much anything the media decides to talk about))). Occasionally I get very vexed indeed.

My attention was drawn to some such thing about SSDs (Solid State Discs) via a tweet by Gwen Shapira yesterday {I make no statement about her opinion in this in any way, I’m just thanking her for the tweet}. According to Computerworld:

SSDs have a ‘bleak’ future, researchers say

So are SSDs somehow going to stop working or no longer be useful? No, absolutely not. Are SSDs not actually going to be more and more significant in computing over the next decade or so? No, they are and will continue to have a massive impact. What this is, is a case of a stupidly exaggerated title over not a lot. {I’m ignoring the fact that SSDs can’t have any sort of emotional future as they are not sentient and cannot perceive – the title should be something like “the future usefulness of SSDs looks bleak”}.

What the article is talking about is a reasonable little paper about how, if NAND-based SSDs continue to use smaller die sizes, errors could increase and access times could increase. That is, if the same technology is used in the same way and manufacturers continue to shrink die sizes. It’s something the memory technologists need to know about and perhaps find fixes for. Nothing more, nothing less.

The key argument is that by 2024 we will be using something like 6.4nm dies and at that size, the physics of it all means everything becomes a little more flaky. After all, Silicon atoms are around 0.28nm wide (most atoms of things solid at room temperature are between 0.2nm and 0.5nm wide), at that size we are building structures with things only an order of magnitude or so smaller. We have all heard of quantum effects and tunneling, which means that at such scales and below odd things can happen. So error correction becomes more significant.

But taking a reality check, is this really an issue:

  • I look at my now 4-year-old 8GB micro-USB stick (90nm die?) and it is 2*12*30mm, including packaging. The 1 TB disc on my desk next to it is 24*98*145mm. I can get 470 of those chips in the same space as the disc, so that’s 3.8TB based on now-old technology.
  • Even if the NAND materials stay the same and the SSD layout stays the same and the packaging design stays the same, we can expect about 10-50 times the current density before we hit any problems
  • The alternative of spinning platters of metal oxides is pretty much a stagnant technology now, the seek time and per-spindle data transfer rate is hardly changing. We’ve even exceeded the interface bottleneck that was kind-of hiding the non-progress of spinning disk technology

The future of SSD technology is not bleak. There are some interesting challenges ahead, but things are certainly going to continue to improve in SSD technology between now and when I hang up my keyboard. I’m particularly interested to see how the technologists can improve write times and overall throughput to something closer to SDRAM speeds.

I’m willing to lay bets that a major change is going to be in form factor, for both processing chips and memory-based storage. We don’t need smaller dies, we need lower power consumption and a way to stack the silicon slices and package them (for processing chips we also need a way to make thousands of connections between the silicon slices too). What might also work is simply wider chips, though that scales less well. What we see as chips on a circuit board is mostly the plastic wrapper. If part of that plastic wrapper was either a porous honeycomb air could move through or a heat-conducting strip, the current technology used for SSD storage could be stacked on top of each other into blocks of storage, rather than the in-effect 2D sheets we have at present.

What could really be a cause of technical issues? The bl00dy journalists and marketing. Look at digital cameras. Do you really need 12, 16 mega-pixels in your compact point-and-shoot camera? No, you don’t, you really don’t, as the optics on the thing are probably not up to the level of clarity those megapixels can theoretically give you, the lens is almost certainly not clean any more and, most significantly, the chip is using smaller and smaller areas to collect photons (the sensor is not getting bigger with more mega-pixels you know – though the sensor size is larger in proper digital SLRs which is a large part of why they are better). This less-photons-per-pixel means less sensitivity and more artefacts. What we really need is maybe staying with 8MP and more light sensitivity. But the mega-pixel count is what is used to market the camera at you and me. As a result, most people go for the higher figures and buy something technically worse, so we are all sold something worse. No one really makes domestic-market cameras where the mega-pixel count stays where it is and the rest of the camera improves.

And don’t forget. IT procurement managers are just like us idiots buying compact cameras.

(*) For any readers where UK English is not a first language, “twaddle” and “tosh” both mean statements or arguments that are silly, wrong, pointless or just asinine. oh, Asinine means talk like an ass 🙂 {and I mean the four-legged animal, not one’s bottom, Mr Brooks}

Will the Single Box System make a Comeback? December 8, 2011

Posted by mwidlake in Architecture, future, Hardware.

For about 12 months now I’ve been saying to people(*) that I think the single box server is going to make a comeback and nearly all businesses won’t need the awful complexity that comes with the current clustered/exadata/RAC/SAN solutions.

Now, this blog post is more a line-in-the-sand and not a well researched or even thought out white paper – so forgive me the obvious mistakes that everyone makes when they make a first draft of their argument and before they check their basic facts, it’s the principle that I want to lay down.

I think we should be able to build incredibly powerful machines based on PC-type components, machines capable of satisfying the database server requirements of anything but the most demanding or unusual business systems. And possibly even them. Heck, I’ve helped build a few pretty serious systems where the CPU, memory and inter-box communication is PC-like already. If you take the storage component out of needing to be centralised (and thus shared), I think a major change is just over the horizon.

At one of his talks at the UKOUG conference this year, Julian Dyke showed a few tables of CPU performance, based on a very simple PL/SQL loop test he has been using for a couple of years now. The current winner is 8 seconds by a… Core i7 2600K. ie a PC chip and one that is popular with gamers. It has 4 cores and runs two threads per core, at 3.4GHz and can boost a single core to 3.8 GHz. These modern chips are very powerful. However, chips are no longer getting faster so much as wider – more cores. More ability to do lots of the same thing at the same speed.
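
I don’t have Julian’s actual script, so the below is just my sketch of the same idea: a tight, single-threaded PL/SQL loop doing trivial arithmetic, timed in SQL*Plus. The loop count is my own pick – adjust it until the run is long enough to time meaningfully on your kit:

  set timing on
  declare
    n number := 0;
  begin
    -- Pure single-core CPU work: no SQL, no IO, just PL/SQL arithmetic.
    for i in 1 .. 100000000 loop
      n := n + 1;
    end loop;
  end;
  /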

Memory prices continue to tumble, especially with smart devices and SSD demands pushing up the production of memory of all types. Memory has fairly low energy demands so you can shove a lot of it in one box.

Another bit of key hardware for gamers is the graphics card – if you buy a top-of-the-range graphics card for a PC that is a couple of years old, the graphics card probably has more pure compute grunt than your CPU and a massive amount of data is pushed to and fro across the PCIe interface. I was saying something about this to some friends a couple of days ago but James Morle brought it back to mind when he tweeted about this attempt at a standard about using PCI-e for SSD. A PCI-e 16X interface has a theoretical throughput of 4000MB per second – each way. This compares to 600MB per second for SATA III, which is enough for a modern SSD. A single modern SSD. {what I am not aware of is the latency for PCI-e but I’d be surprised if it was not pretty low}. I think you can see where I am going here.

Gamers and image editors have probably been most responsible for pushing along this increase in performance and intra-system communication.

SSD storage is being produced in packages with a form factor and interface to enable an easy swap into the place of spinning rust, with for example a SATA3 interface and 3.5inch hard disk chassis shape. There is no reason that SSD (or other memory-based) storage cannot be manufactured in all sorts of different form factors, there is no physical constraint of having to house a spinning disc. Density per dollar of course keeps heading towards the basement. TB units will soon be here but maybe we need cheap 256GB units more than anything. So, storage is going to be compact and able to be in form factors like long, thin slabs or even odd shapes.

So when will we start to see cheap machines something like this: Four sockets for 8/16/32 core CPUs, 128GB main memory (which will soon be pretty standard for servers), memory-based storage units that clip to the external housing (to provide all the heat loss they require) that combine many chips to give 1Gb IO rates, interfaced via the PCIe 16X or 32X interface. You don’t need a HBA, your storage is internal. You will have multipath 10GbE going in and out of the box to allow for normal network connectivity and backup, plus remote access of local files if need be.

That should be enough CPU, memory and IO capacity for most business systems {though some quote from the 1960s about how many companies could possibly need a computer springs to mind}. You don’t need shared storage for this, in fact I am of the opinion that shared storage is a royal pain in the behind as you are constantly having to deal with the complexity of shared access and maximising contention on the flimsy excuse of “sweating your assets”. And paying for the benefit of that overly complex, shared, contended solution.

You don’t need a cluster as you have all the CPU, working memory and storage you need in a 1U server. “What about resilience, what if you have a failure?”. Well, I am swapping back my opinion on RAC to where I was in 2002 – it is so damned complex it causes way more outage than it saves. Especially when it comes to upgrades. Talking to my fellow DBA-types, the pain of migration and the number of bugs that come and go from version to version, mix of CRS, RDBMS and ASM versions, that is taking up massive amounts of their time. Dataguard is way simpler and I am willing to bet that for 99.9% of businesses other IT factors cause costly system outages an order of magnitude more times than the difference between what a good MAA dataguard solution can provide you compared to a good stretched RAC one can.

I think we are already almost at the point where most “big” systems that use SAN or similar storage don’t need to be big. If you need hundreds of systems, you can virtualize them onto a small number of “everything local” boxes.

A reason I can see it not happening is cost. The solution would just be too cheap, hardware suppliers will resist it because, hell, how can you charge hundreds of thousands of USD for what is in effect a PC on steroids? But desktop games machines will soon have everything 99% of business systems need except component redundancy and, if your backups are on fast SSD and you use a way simpler Active/Passive/MAA dataguard type configuration (or the equivalent for your RDBMS technology) rather than RAC and clustering, you don’t need that total redundancy. Dual power supply and a spare chunk of solid-state you can swap in for a failed raid 10 element is enough.