
Testing Methodology – Getting Community Information January 22, 2010

Posted by mwidlake in Testing, Uncategorized.

Last post I tried to highlight that knowing specifically what you want to understand before you start testing is important, and that it is beneficial to start with the boring official documentation. This is so that you can filter what is Out There on the web, as well as get a good grounding in what is officially possible.

There is no doubt that you can get a lot more information on Oracle features from the community than you can from the manuals, and that it is often presented in a more easily understood and relevant way. But some of it is not right for your version, or is just plain wrong. I can do no better than to refer to this recent posting by Jonathan Lewis on the topic. In this case, it is the persistence of old and now irrelevant information, perpetuated by the kind but flawed habit people have of pulling information together – but not testing it.

Treat everything on the net with caution if it has no proof. Even then be cautious :-) – I know I have “proved” a couple of things that turned out to be misunderstandings…

You can of course post to forums to ask for information and help, and people will often come to your aid. But you often do not really understand the subject by doing that; you are just learning facts. It’s like learning the dates of the two World Wars and not understanding why the wars occurred. I would point you to this posting by Lisa Dobson from a couple of years back, which makes the point about what you stand to learn, and also says something about how to ask for help. If it helps you to do the “right thing”, think of it selfishly. You will get better and more specific help if you have already looked into your issue and can ask specific questions. Also, if you properly learn something rather than just get a couple of facts about it, you are less likely to look foolish when you repeat those facts in a context they are not valid in. And you will.

Sorry, side tracked a little into one of my pet annoyances. Back to topic.

I want to know about SQL AUDIT. So I google SQL AUDIT ORACLE. Lots of hits. Let’s start filtering. First off, anything by “experts-exchange” and the other pay-for forums I can forget. Sorry guys, but if I am going to spend money I’ll buy a manual. I will also ignore any stuff by sites that constantly suggest you go on training courses on boats with them, or are “Team America” {everyone seen the film by Trey Parker? Puts the claim “We are the American Team” into a different light…}.

Now I go and look at the sites and I try making my search more intelligent, maybe adding in the word DBA_AUDIT_TRAIL and the like. I try and think what words and phrases someone explaining the topic would use which would not be common for other topics. That is why I often use database object names and commands in my searches.

In the case of SQL AUDIT, there are plenty of sites that generally pull together the basics from the manuals and tell you how to audit a few things and see the results and give some nice examples. It gets you further forward, but it’s mostly not very detailed. Just cookery instructions on what to do but not how it works. Thankfully, there is an excellent article by one of the experts in the field, Pete Finnigan.
Maybe I could have replaced this post by simply suggesting you just go to Pete’s web site, read what is there and follow the link from it to the ones he likes. It would work for the topic of SQL AUDIT. However, although it is always good to go to the sites of people you know and trust, it is always worth doing a google and going beyond the first page or two of results. Those first pages are mostly for a handful of very popular sites and sources. A new article or someone who is relatively unknown but has the answers you need may be further down the list, like pages 3 or 4. It is worth 5 mins just checking.

However, you are not likely to find everything you need. Certainly with SQL AUDIT you won’t find much about performance impact, other than bland and not-very-helpful generic warnings that “Use of SQL AUDIT can carry a significant performance overhead”. How much overhead? For what sort of audit? And how do you know? In fact, when searching I found this, admittedly old, comment by Pete about there being relatively little “out there” about the performance impact of Auditing. That made me smile, as that information was exactly what I was after.

*sigh*. The information I want does not seem to be out there. I need to do some testing.

That is for the next post (“at LAST”, I hear you all cry).

Testing Methodology – The Groundwork January 20, 2010

Posted by mwidlake in Perceptions, Testing, Uncategorized.


I want to start learning about using SQL AUDIT, as I mentioned a couple of days ago.

First question. What do I want to know about SQL AUDIT? I might just have thought “well, I do not know much about this area and I think I should – so I want to know everything”. That is OK, but personally I find it helps to make up a specific aim. Otherwise you can just flounder about {well, I do}. In this case I have decided I want to know:

  1. The options for auditing who is selecting key tables.
  2. How to audit when those key tables get changed.
  3. The impact on performance of that audit, and if it could be an issue.
  4. Following on from (3), how to control that performance impact.
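To make aims (1) and (2) concrete, they translate into statements along these lines {the table name is made up for illustration, and on 10.2 the AUDIT_TRAIL initialisation parameter must be set to something other than NONE before anything gets recorded}:

```sql
-- Aim 1: audit who is selecting a key table (APP.PAYMENTS is a made-up example)
AUDIT SELECT ON app.payments BY ACCESS;

-- Aim 2: audit changes to that key table
AUDIT INSERT, UPDATE, DELETE ON app.payments BY ACCESS;

-- See what object-level auditing is now switched on for the schema
SELECT object_name, sel, ins, upd, del
FROM   dba_obj_audit_opts
WHERE  owner = 'APP';
```

Aims (3) and (4) are what the testing itself is for.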

For any of you who have been or are code developers, you hopefully appreciate test-driven coding. That is, you decide up front what the new code must achieve and design tests to ensure the code does it. You do not write any code until you have at least thought of the tests and written them down in a document. {Ideally you have written the tests and built some test data before you start, but then in an ideal world you would get paid more, have more holidays and the newspapers would tell the truth rather than sensational rubbish, but there you go :-) }

I do not think that learning stuff/testing is much different from developing code, hence the list above. I now know what I want to understand.

What next? I’m going to go and check the Oracle Documentation for the feature. And I am very careful to check the documentation for the version I will use. This is 10.2 for me. I know, the Oracle documentation can be pretty dry, it can be rather unhelpful and it is not very good at examples. But you can expect it to be 90% accurate in what it tells you. You can also expect it to be not-very-forthcoming about the issues, gotchas and bits that don’t work. {I have this pet theory that the official documentation only mentions how things do not work once a feature has been out for a version and people have complained that the documentation should let on about the limitations}.

So, for SQL AUDIT I suggest you go and read:

  • Concepts Manual, chapter 20, Database Security. If I am not rushed I read the whole chapter; I might realise that what I want to do is better done with some other tool (if I wanted to see who had changed records months down the line, I think I would pick up that database triggers were a better bet, for example).
  • SQL Reference, chapter 13, the section on AUDIT (no surprises there). I do not do much more than read through the SQL manual once though, as frankly I find it pretty useless for explaining stuff, but it puts into your head what the parts of the command are and points to other parts of the documentation. I’ll read the Concepts manual with more attention. In this case, the manual will lead me to:
  • Database Security Guide chapter 8. Which is pretty good, actually.
  • My next source of information may not immediately spring to mind, but I find it very valuable: find out which data dictionary objects are involved in the feature. In this case, the previous sources would lead me to go to the Database Reference and check out:
  • DBA_AUDIT_TRAIL, DBA_OBJ_AUDIT_OPTS, DBA_PRIV_AUDIT_OPTS, DBA_STMT_AUDIT_OPTS. And of course, SYS.AUD$. {I actually queried DBA_OBJECTS for anything with the word “AUDIT” in it, checked out all the tables and also had a quick peek at the text for any views, which would have led me to SYS.AUD$ if I did not already know about it}.
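That data dictionary trawl is simple enough to sketch {run as a suitably privileged user; note that searching on “AUDIT” will not itself find AUD$, which is why knowing about the underlying table helps}:

```sql
-- Find audit-related dictionary objects
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  object_name LIKE '%AUDIT%'
ORDER  BY owner, object_name;

-- And confirm the underlying audit trail table exists
SELECT owner, table_name
FROM   dba_tables
WHERE  owner = 'SYS'
AND    table_name = 'AUD$';
```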

Why do I go and look at the data dictionary objects and the reference guide? After all, is it not nerdy enough to have put myself through reading the SQL Reference manual? {And let us be honest, it is rarely enjoyable reading the SQL Reference manual}. Because I want to know how it works, not just how to use it. Seeing the tables gives me a lot of information, and the description of the columns may well tell me a lot more. First thing: SYS.AUD$ only has one index, on a column SESSIONID# (I know, there is another column in the index, I need to get to a DB to check this). Any queries not via this index are going to scan the whole damned thing. Right off, I suspect this will be an issue.
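Rather than trusting my memory about that index, it can be checked straight from the dictionary {a quick sketch; the query simply lists whatever indexes and columns are there on your version}:

```sql
-- Which indexes exist on SYS.AUD$, and on which columns?
SELECT i.index_name, c.column_name, c.column_position
FROM   dba_indexes     i
JOIN   dba_ind_columns c
  ON   c.index_owner = i.owner
 AND   c.index_name  = i.index_name
WHERE  i.owner      = 'SYS'
AND    i.table_name = 'AUD$'
ORDER  BY i.index_name, c.column_position;
```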

I will DESC the tables, see if there is already any information in them. In this case, there was not. A clean sheet.

Why did I not just go to Google (Bing, Yahoo, whatever) and search? Because I can trust the manuals to tell me stuff which is mostly true and is valid for my release. Not stuff about older versions or later versions, urban myths or just outright fallacies. The manuals certainly will not tell me everything, far from it, but what they do say is solid stuff. With this reliable start I can now go to other sources and have some chance of filtering all the other stuff that is Out There. Once filtered, it is probably a lot more informative and easier to digest than the manuals.

I’ll ramble on about that next posting.

New Year, same old rambling thoughts January 5, 2010

Posted by mwidlake in Blogging, Perceptions.

It’s not Friday but, heck, it’s a New Year; there are many of us who might appreciate a non-techie, pointless ramble at the start of the first full working week of a new decade… A Friday Philosophy for the New Year. {If anyone wants to point out the New Decade starts on 1st Jan 2011, go take a running jump – popular opinion is against you, even if logic is for you}.

I found the UKOUG techie conference this year particularly interesting as it was the first major meeting I have been to since I started blogging, and I came across two main opinions about my attempts:

Those who like my blog as it is “chatty” and rambles a bit.
Those who dislike it – because it is “chatty” and rambles a bit…
{oh, and the third opinion, the most common, of utter ignorance of my blog – there goes the ego}.

Well, you can’t please everyone. I was a little saddened, however, as I spoke to a couple of people I really admire in the Oracle Knowledge world and they landed on the “chatty and rambling – bad” side of things. Damn. But they are so good at what they do, I forgive them. The swines.

But then I remembered what I said to a fellow blogger the other month. We bloggers/twitterers all put forward what we blog about in our own style. We might not blog something that is new, we might blog something that is “well known”, but we put it in our own style. Some like it, some do not. It matters not, so long as it adds to the sum of decent knowledge out there.
Some will hate our style and not read, some will read and enjoy. So long as the information gets out there to more people, that is fine.

So, do I think everything I blog is decent knowledge? Oh, I wish. I like to think it is mostly there {and I wish it was all correct} but I am realistic. I test most of what I blog, or I have lived for real most of what I blog, but I will make mistakes. And sometimes I will hit the edge of something good and I put it up there in the hope others will contribute {like the recent one on translating min-max column values into human readable stuff}. And often people do contribute, and that is really, really good.

But I do and will continue to make mistakes, be daft, or just put things poorly. I have learned a fair bit in the last 8 months about written communication, the art of communicating to a global audience and also about how not to spread a topic over several weeks as you hope you can “just finish off those half-written blogs in an hour or two” and find it takes weeks. If anyone wants to give me any constructive criticism, please do, but maybe use my email (mwidlake@btinternet.com) rather than flame my postings.

And my last rambling thought for the start of 2010? I am probably going to post less in the next 6 months. I am always sad when the blog by someone I enjoy goes very quiet, but then we all have real jobs to do, so I try to be patient. In my own case, I have noticed I now read a lot less of other people’s blogs as writing my own takes so long. And I am missing too much. There are blogs I really admire or I have discovered in the last 6 months (sometimes both) that I simply fail to really read and they know stuff. So I need to read them. I am going to try and maintain a steady 2-3 blog entries a week, but for the next 6 months I am going to concentrate on learning. Something blogging has taught me is I am really quite ignorant :-)

Good wishes for 2010 to all and everyone who stumbles across my ramblings.

Friday Philosophy – Memory (of the Human Kind) November 14, 2009

Posted by mwidlake in Perceptions, Uncategorized.

One of the Joys of having been around for a while (I’ve hit that point where I occasionally find myself working with people who were not born when I got my first job in IT) is that you start to re-learn things. Well, I do. {I have to point out that, this being a Friday Philosophy, you will learn nothing yourself, especially not about Oracle, from this post. If you want Oracle knowledge you might want to pop by Dion Cho, Tanel Poder or Richard Foote}.

Several times over the last couple of weeks I have learnt something “new” and realised a little while later that I already knew it, but had forgotten (BLEVEL is one; consistent gets being impacted by array fetch size is another). This is not something to be proud of – not knowing something you once did is far worse, and slightly worrying, than never having known it…

I like to think that it is all the new stuff I am learning, pushing previous knowledge out of my brain’s buffer cache {if it is not warmed by re-using it}, but I suspect that it might just be an uptime-related memory leak. A bit like having your database open for a few months and doing a lot of work that includes massive PL/SQL in-memory tables and hash joins (pickler fetches). No one seems sure of the exact mechanics, but after a while you have a hell of a lot less spare memory in your server than you started with :-)

Maybe the memory loss is closer to the “pickler query” analogy than I thought, you can preserve soft tissue for a long time in alcohol. I’ll have another glass of wine and think on that.

This Forgetting Stuff was actually a major factor in my starting a blog. I was in the process of putting a load of notes on Oracle Oddities and things I had learnt from years past onto a web site, so I could get at them from wherever I was in the world, and a friend told me I was being stupid – I should put them on a blog. So I did. There are only two little issues with this.

  • I can’t remember what I have already put on my blog.
  • I’ve forgotten where I put the ZIP disk with the notes on.

So I was contemplating this drift of knowledge and two things struck me.

1) I reckon that the very best people, in any area of expertise, are blessed with excellent memories. There is a comedian in the UK called Stephen Fry, and he is renowned for being stunningly clever. I think he probably is stunningly clever, but he also remembers everything he has learnt (Rory McGrath is another UK comedian with a perfect memory, but somehow he lacks the charm of Stephen Fry, and he has a beard, so not so highly renowned).
2) My poor memory is not maybe such a bad thing. I don’t have to get so upset when a whole chunk of my Oracle knowledge becomes obsolete. I used to be really good at sizing objects and utilizing space, taking into account the 5-block minimum, getting the extents to fit nicely into the datafiles, little scripts to resize segments into a small but sensible number of extents to reduce wasted space, considering initrans, pctfree, pctused… Pretty much all pointless now :-)

Buffer Cache Hit Ratio – my “guilty” Confession November 1, 2009

Posted by mwidlake in Perceptions, performance.

My Friday Philosophy this week was on Rules of Thumb on buffer gets per row returned.

Piet de Visser responded with a nice posting of his own, confessing to using ratios to help tuning {We seem to be playing some sort of blog-comment tag team game at the moment}.

Well, I have a confession so “guilty” or “dirty” that I feel I cannot inflict it on someone else’s blog as a comment.

I use the Buffer Cache Hit Ratio.

And the Library Cache Hit Ratio and the other Ratios.

As has been blogged and forum’d extensively, using these ratios is bad and stupid, and anyone doing so does not know what they are doing, as they do not help you solve performance problems. I mean, hell, you can download Connor McDonald’s/Jonathan Lewis’s script to set it to what you want, so it must be rubbish {go to the link, choose “tuning” and pick “Custom Hit Ratio” – it’s a rather neat little script}.

The point I am trying to make is that the Buffer Cache Hit Ratio (BCHR) was once wrongly elevated to the level of being regarded as a vital piece of key information, but the reaction against this silly situation has been that it is now viewed by many (I feel) as the worst piece of misleading rubbish. Again, a silly situation.

I think of the BCHR as similar to a heart rate. Is a heart rate of 120 good or bad? It’s bad if it is an adult’s resting heart rate, but pretty good if it is a kitten’s resting heart rate. It’s also probably pretty good if it is your heart rate as you walk briskly. Like the BCHR it can be fudged. I can go for a run to get mine higher, I can drain a couple of pints of my blood from my body and it will go up {I reserve the right not to prove that last one}. I can go to sleep and it will drop. Comparing my resting heart rate to yours (so like comparing BCHRs between systems) is pretty pointless, as I am a different size, age and metabolism to you {probably} but looking at mine over a year of dieting and exercising is very useful. If only I could keep up dieting and exercising for a year…

So what do I think the much-maligned Buffer Cache Hit Ratio gives me? It gives me what percentage of SQL block access, across the whole database activity, is satisfied from memory as opposed to disc. Or, put another way, the percentage of occurrences on which a block has to be got from the I/O subsystem. Not how many blocks are read from storage or memory though, but you can get that information easily enough. As physical IO is several orders of magnitude slower than memory access {ignoring I/O caches, I should add}, it gives me an immediate feel for where I can and can’t look for things to improve.
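For the record, the instance-wide figure comes straight out of V$SYSSTAT with the classic calculation {a sketch; these are totals since instance startup, so for anything serious you would take deltas over an interval rather than the raw cumulative values}:

```sql
-- Instance-wide buffer cache hit ratio since startup, as a percentage
SELECT ROUND( 100 * ( 1 - phy.value / (db.value + con.value) ), 2 )
         AS bchr_pct
FROM   v$sysstat phy
     , v$sysstat db
     , v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';
```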

If I am looking at a system that is overall very slow (e.g. high process wait queues under Linux/Unix, the client has said the system is generally slow) and I see that the BCHR is low, say below 90%, this tells me I probably can get some performance increase by reducing physical access. I’ll go and look for those statements with the highest physical IO and the hottest tablespaces/objects in the DB.
If the BCHR is already up at the 99% level, I need to look at other things, such as tuning sort, looking at removing activity in the database, to be very mindful of nested loop access where maybe it is not the best access method (very likely due to old stats on tables).

When I have got to know a system and what its BCHR generally sits at, a sudden change, especially a drop, means there is some unusual physical IO going on. If the phones start going and someone is complaining “it’s all slow”, the BCHR is one of the first things to look at – especially as it is available from so many places.

Another thing the BCHR gives me: if I am looking at a given SQL statement or part of an application, its specific BCHR can be compared to the system BCHR. This does not help me tune the statement itself, but I know that if its specific BCHR is low then it has unusually high IO demands compared to the rest of the system. Further, reducing it might help the whole system, so I might want to keep an eye on overall system throughput. If I reduce the statement’s execution time by 75% and the whole system IO by 1%, the client is likely to be more happy, especially if that 1% equates to other programs running a little faster “for free”.

So, I don’t use the BCHR to tune individual statements but I feel confident using it to track the general health of my database, so long as I am mindful of the impact of new functionality or upgrades. It’s a rule of thumb. It’s a database heart rate. (and so is redo generation and half a dozen other things).

When do We Learn #2 October 20, 2009

Posted by mwidlake in Blogging, Perceptions.

I exchanged a couple of mails with a friend a few weeks back about how the same topic can arise in a couple of blogs at the same time. Well, I had just blogged myself on when we learn and, blow me over with a feather, that Jonathan Lewis goes and posts in a similar vein. He must be nicking ideas off my blog :-) {and yes, I am being tongue-in-cheek here}. We both posted thoughts about needing spare capacity in your life to be able to spend the time to really understand how something works. Yes, you learn a lot in the heat of a crisis, but you rarely really understand the details, i.e. become an expert, without having time to digest and qualify that knowledge.

I did write a long comment on his posting, including some links back to my own meandering thoughts on the topic, then realised that I would come across as a bit “me too”, so I trimmed it and took out the links. But that is part of why I do my own blog: I found I was spamming other people’s pages with my diatribes and so decided to spam my own. {And I know I am meandering; I’m a bit sleep-deprived, stream of consciousness at the moment}. So here I can refer back to my own stuff and say “me too”, but you are already here reading this, so you only have yourself to blame :-) … Anyway, I wanted to refer back to a very early blog of mine about how much knowledge is enough. I try and make the point that you do not need to know everything; you can become a small-field or local expert just by being willing to learn a bit more.

Jonathan raises the point that he does not have a full time commitment to one client and so he has the luxury to investigate the details and oddities of what he looks into. He suggests this is a large part of why he is an expert, which I feel is true, and I am very happy to see one of the Oracle Names acknowledging that relative freedom from other pressures is key to having the luxury to chase down the details. Those of us in a full time role doing, say, DBA, development or design work have more than enough on our workday plates to keep us too busy. We cannot be top experts; we have a boss to satisfy and a role to fulfill. {Jonathan does not mention that choosing a career where you have the luxury of time is also a pretty brave choice – you stand a good chance of earning a lot, lot less whilst working very hard to establish enough of a reputation to be able to earn enough to feed yourself and the cat}.

But this is not a black and white situation. There is room for many of us to become experts in our domain or in our locality. Our breadth of knowledge may never be as wide as others’, we may not know more than anyone else in a given area {and let’s face it, logically there can only be one person who knows the most about a given topic, and that one person is probably in denial about their superiority, which seems to be a defining quality of an expert – it is not so much humility, I think, as an acknowledgement of there being more to know and a desire to know it}. However, most of us can become the person in our organisation who knows most about X, or who can tie A, B and C together in a more holistic way than others (and that can be a real trick, you know). There are always the top experts that you can call on for the worst problems, but you could become the person people come to first.

My advice would be to not try and learn everything about all aspects of Oracle, because you can’t, but rather learn a lot about one or two areas {and consider areas that are more unusual, not just “tuning SQL” or “the CBO”} and expand just your general knowledge of the wider field. And never forget that there is more to learn. So long as you are taking in more knowledge and understanding, you are improving. The best way to do it? Don’t just read other people’s stuff, try teaching someone else. It never ceases to amaze me how stupid I realise I am when I try and show someone else how something works. But that’s OK, so long as they learn it’s fine. If I learn as well, it’s great, and I nearly always do.

I’m getting on a bit, I think I am finally getting the hang of the idea that the more you know the more you realise you don’t know, I wish I knew that when I knew nothing.

A Tale of Two Meetings – 11GR2 and MI SIG October 5, 2009

Posted by mwidlake in Meeting notes, Perceptions.

Last week I attended two Oracle events, each very different from the other.

The first was an Oracle Corp event, giving details of the new 11GR2 release and what it was introducing. It was in a nice hotel in London with maybe 250, 300 attendees and all quite swish.

The other was a UK Oracle User Group meeting, the last Management and Infrastructure SIG for 2009. 30 people in the Oracle City office and far more unassuming {And note, as I chair the MI SIG, anything I say about the day is liable to bias…}.

Both events were useful to attend and I learnt things at both, but I also found the difference between the two quite interesting.

Oracle 11G Release 2

The official Oracle 11GR2 presentation was where you went for the definitive information on what Oracle Corp feel are the new features of 11G R2 that are of interest (though some of it was not R2-specific but general 11G).

Chris Baker started off by telling us “there has never been a better time” to move to the latest technology, or a greater need to gain business advantage through using said latest technology. You know, it would be really nice, just once, to go to such a corporate event and not be given this same thread of pointless posturing. I know it is probably just me being old and grumpy and contrary, but after 20 years in the business I am sick to the hind teeth of Keynotes or Announcements that say the same empty “Raa-Raa” stuff as the previous 19 years – the need “now” to get the best out of your technology has been the same need since the first computers were sold to businesses, so give it a rest. Just tell us about the damned technology; we are smart enough to make our own decision as to whether it is a big enough improvement to warrant the investment in time and effort to take on. If we are not smart enough to know this, we will probably not be in business too long.

Sorry, I had not realised how much the Corporate Fluff about constantly claiming “Now is the time”, “Now things are critical” gets to me these days. Anyway, after that there were some good overviews of the latest bits of technology and, following on from them, some dedicated sessions in two streams on specific areas, split between semi-technical and management-oriented talks, which was nice.

There was plenty of talk about the Oracle Database Machine, which appears to be Exadata version 2 and sits on top of Sun hardware, which is no surprise given the latest Oracle acquisition. I have to say, it looks good: all the hardware components have taken a step up (so now 40Gb InfiniBand interconnect, more powerful processors, even more memory), plus a great chunk of memory as Sun’s “FlashFire” technology to help cache data and thus help OLTP work. More importantly, you can get a 1/4 machine now, which will probably make it of interest to more sites with less money to splash out on a dedicated Oracle system. I’ll save further details for another post, as this is getting too long.

The other interesting thing about the new Oracle Database Machine was the striking absence of the two letters ‘P’ and ‘H’. HP was not mentioned once. I cannot but wonder how those who bought into the original Exadata on HP hardware feel about their investment, given that V2 seems only available on Sun kit. If you wanted the latest V2 features, such as the much-touted two-level disc compression: is Oracle porting that over to the older HP systems, are Oracle offering a mighty nice deal to upgrade to the Sun systems, or are there some customers with the HP kit currently sticking needles into a clay model of top Oracle personnel?

The other new feature I’ll mention is RAT – Real Application Testing. You can google for the details but, in a nutshell, you can record the activity on the live database and play it back against an 11g copy of the database. The target needs to be logically identical to the source {so same tables, data, users etc.} but you can alter initialisation parameters, physical implementation, patch set, OS, RAC… RAT will tell you what will change.

For me as a tuning/architecture guy this is very, very interesting. I might want to see the impact of implementing a system-wide change, but currently this would involve either only partial testing and releasing on a wing and a prayer, or a full regression test on an expensive and invariably over-utilised full test stack, which often does not exist. There was no dedicated talk on it though; it was mentioned in parts of more general “all the great new stuff” presentations.
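For the curious, the workflow boils down to a handful of PL/SQL calls {a sketch based on the 11g DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY packages; the capture name and directory are made up, and you also need one or more wrc replay clients running on the test side, which I omit}:

```sql
-- On the source: capture the workload into an OS directory
-- (CAP_DIR is an assumed Oracle DIRECTORY object)
BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name     => 'PEAK_LOAD_TEST',
    dir      => 'CAP_DIR',
    duration => 3600);   -- seconds; NULL means run until FINISH_CAPTURE
END;
/
BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
END;
/

-- On the 11g test system: process the capture files, then replay them
BEGIN
  DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'CAP_DIR');
  DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(
    replay_name => 'PEAK_LOAD_TEST',
    replay_dir  => 'CAP_DIR');
  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY;
  DBMS_WORKLOAD_REPLAY.START_REPLAY;
END;
/
```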

Management and Infrastructure SIG

RAT leads me on to the MI SIG meeting. We had a talk on RAT by Chris Jones from Oracle, which made it clearer that there are two elements to Real Application Testing. One is Database Replay and the other is SQL Performance Analyzer, SPA. Check out this Oracle datasheet for details.

SPA captures the SQL from a source system but then simply replays the SELECT only statements, one by one, against a target database. The idea is that you can detect plan changes or performance variations in just the Select SQL. Obviously, if the SELECTS are against data created by other statements that are not replayed then the figures will be different, but I can see this being of use in regression testing and giving some level of assurance. SPA has another advantage in that it can be run against a 10g database, as opposed to RAT which can only be run against 11 (though captured from a terminal 10g or 9i system – that is a new trick).
There are no plans at all to backport RAT to 10, it just ain’t gonna happen guys.

The SIG also had an excellent presentation on GRID for large sites (that is, many oracle instances) and how to manage it all. The presentation was as a result of requests for a talk on this topic by people who come to this SIG and Oracle {in the form of Andrew Bulloch} were good enough to oblige.

The two Oracle Corp talks were balanced by technical talks by James Ball and Doug Burns, on flexible GRID architectures and using OEM/ASH/AWR respectively. These were User presentations, mentioning Warts as well as Wins. Not that many Warts though; some issues with licence daftness were about it, as the technology had been found to work and do its job well. Both talks were excellent.

The fifth talk was actually an open-forum discussion, on Hiring Staff, chaired by Gordon Brown {No, not THAT Gordon Brown, as Gordon points out}. Many people joined in and shared opinions on or methods used in getting new technical staff. I found it useful, as I think did many. These open sessions are not to everyone’s taste and they can go wrong, but Gordon kept it flowing and all went very well.


The difference between the two meetings was striking. Both had strong support from Oracle {which I really appreciate}. Both included talks about the latest technology. However, the smaller, less swish event gave more information and better access to ask questions and get honest answers. There was also almost no Fluff at the SIG; it was all information or discussion, no “Raa-Raa”. But then, the lunch was very nice and there were free drinks after the Corporate event {we shared rounds at a local pub after the SIG event – maybe one round too much}.

I guess I am saying that whilst I appreciate the Big Corporate event, I get a lot more out of the smaller, user group event. Less fluff, more info. Thankfully, Oracle support both, so I am not complaining {except about the “there has never been a better time” bit, I really AM sick of that :-( }.

So if you don’t support your local Oracle user group, I’d suggest you consider doing so. And if, like so many sites seem to, you have membership but don’t go along to the smaller events, heck, get down there! Some of the best stuff is at these SIG meetings.

Friday Philosophy – Disasters September 4, 2009

Posted by mwidlake in Perceptions.
Tags: , ,
3 comments

Some of you may be aware that I occasionally do presentations called something like:-

“5 Ways to Progress your Career Through Systems Disasters”

The intention of the presentations is to comment, in a light-hearted manner, on things I have “been in the vicinity of” going wrong in I.T. and on ways to avoid them: having a bit of a laugh whilst trying to make some serious points about project management, infrastructure, teams, people and the powers of chaos.

I’ll see about putting one up on my Web Site so that you can take a look {check back this weekend if you like}.

The talks usually go down well but there are two potential issues in giving the talks:

  • The disasters or problems, if in some way my fault, could make me look like an idiot or incompetent.
  • If it is known who I was working for when I “witnessed a disaster”, it could make that company look bad.

I never used to worry too much about this when I worked permanently for a company that was pretty relaxed about “stuff happens, it is good to share”. After all, if I looked like an idiot then that is fair enough and if I said anything that could be linked back to a prior (or current) employer {which, I hasten to point out, I did aim to avoid} then, well, sorry. But I only said things that were true.

However, when I returned to being self-employed, a good friend of mine took me to one side and suggested such talks could harm my career. I argued that it was not malicious and was helpful to people. My friend argued back that potential employing companies would not look so favourably on this, especially if they suspected that they may one day feature.

Hmmmm…. This was true.

So I toned down the talk…

The next time I did the presentation, the sanitised one, it was not such a hit. In fact, it was a bit rubbish.

The question is, should I have toned it down? Hands up anyone who has not personally done something unbelievably stupid at least once in their working life? Can everyone who has worked for an organisation that has not messed up at least one I.T. project please also raise their hand?

I can’t see any raised hands from here :-)

We all make mistakes.
All companies get things wrong at times.

Something you find when you start presenting or organising events is that the talks people most appreciate and learn the most from are about things going wrong.

So why can’t we all be grown-ups about admitting them, talking about them and learning? Personally, when I have interviewed people for jobs, I am always impressed by someone who will admit to the odd failure, especially if they can show what they learnt from it.

Oh, if anyone is reading this before offering me a position, I never made a mistake in my life, honest. I never deleted every patient record from a hospital information system, and I was not even on-site when the incident didn’t happen. And if anyone suggests otherwise, well, it was a long time ago when it didn’t happen…

{I got all the data back, anyway. Never start work without a backup…}

Friday Philosophy – Friends who Know Stuff August 29, 2009

Posted by mwidlake in Uncategorized.
Tags: ,
2 comments

Being a consultant/contractor can be hard work. You are expected to turn up on-site and become useful before the second cup of coffee has been drunk. Then, if you manage that, there is an expectation that you will continue to pull IT Rabbits out of computer Hats for as long as you are there. Which is actually reasonable; we are, after all, usually paid well for being consultants/contractors.

This need to always have an answer can become quite hard, especially as those of us who chose not to be permanent staff have a bit of a disadvantage, one which those who have normal jobs may not appreciate.

Permies have a group around them called “the team” whom they can call upon to talk issues over with. Permies tend to stay in an organisation for many years and build up strong contacts with people, contacts they can still call upon years after they have moved on. For us nomads, it can be far harder to make strong links to people. That is not to say most teams are unfriendly when you go on site; it is just that, when you start off as a temporary member of the group and move on after a year, six months, even a week or two, developing ties strong enough that you feel able to badger someone two years later is less likely to happen.

Don’t underestimate the benefit of being able to call on old friends for a second opinion (or of being called yourself to help some old friend who has got to get to grips with a section of IT knowledge that you had to deal with for 2 years). It really helps.

Some of you are probably thinking “well, you Consultant types just ask each other, the other experts that you all know”. Well, sometimes, but we tend not to work with each other much. Contractors rarely get to work together repeatedly on different projects, unless the client needs someone with X skills and you happen to have a friend with X skills who is looking for a new position.

This is something I have become very aware of, having gone from contractor for half a decade, to Permy for 6 years, and back again to contractor/consultant. I miss having a stable team of colleagues to discuss things with.

So, Friends with Skills are important. And it is a two-way thing: if you expect to be able to call on old colleagues for help, then you need to be helpful when they call on you.

Is this a case of “who you know not what you know”? Yes and no. It is not about contacts getting you a leg up. It’s about developing and keeping a group of work-related friends where you all help each other when there is a need. Proper friendship is about sharing, not using.

Friday Philosophy – Should the Software or the User be the Stupid One? August 7, 2009

Posted by mwidlake in internals, performance.
Tags: , ,
5 comments

Oracle’s performance engine is complex and copes with a lot of database situations automatically – or to be more precise, it tries to cope with lots of database situations automatically.

Over the last few versions, Oracle has added many, many things to allow the database to cope automatically with all sorts of different data volumes, spreads of data, relationships between tables, and uses of different Oracle technologies (by this I mean bitmap indexes, index-organized tables, partitions, clusters, external tables). All of these things aim to allow the database to just “cope” with whatever you need it to do, with less and less effort by the users {by users, I mean technical users: DBAs and Developers}. Thus it allows for “stupid” users. {I mean no offence, maybe read “inexperienced” instead of stupid}.

As an example, you can now have some very large tables consisting of several partitions, plus some status look-ups, and you query against them. Oracle’s CBO will automatically ignore the partitions it can ignore, use indexes or full table scans so as to do the least amount of IO, use histograms to spot where clauses on low-cardinality values, use hash joins rather than nested loops as appropriate depending on memory availability, use bitmap indexes when it thinks it can and merge the results from several bitmap indexes, use function-based indexes to support functions in where clauses…
It even gathers the information to look after all this by itself. Column usage and table modifications are tracked, statistics are gathered when needed and in ways that allow for data skew, and the PGA and SGA can be self-monitoring and self-managing…

It all sounds great. In fact, most of the time, for most people, it is great. {I know, most people reading this post are probably people who have encountered the problem systems, and so know it goes wrong, and so need more knowledge to cope – you are a biased set of people. In the nicest way, I should add :-) } The idea is, I believe, that you do not need to be smart to look after Oracle.

If it is not great, if this highly complex system gets it wrong and tries to satisfy SQL statements in sub-optimal ways, then the User has to step in and fix things. ie You.

It is now horrifically complex for us technical users to understand what is going on. You have to not only be “not stupid”, but “not average” either. Sometimes you have to be “not great”, ie brilliant.

In my example, we need to look at whether the SQL is constructed to allow the indexes to be used, whether functions are correctly laid out to use function-based indexes, whether partitions are being maintained correctly, when stats were last gathered, whether they included histograms and whether those help, whether Oracle has missed the need for histograms, whether the indexes are analyzed at a high enough sample size, whether the bitmaps are greatly slowing down inserts, whether hints have been used in the code, whether initialisation parameters are set to override default functionality…

You get the idea, I won’t drone on further. I didn’t even mention memory considerations though {OK, I’ll shut up}.
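Just to give a flavour of the digging involved, a couple of the checks above can be done from the data dictionary. {The table name BIG_ORDERS is invented for illustration; you would use your own and, depending on version, may prefer the ALL_ or USER_ views.}

```sql
-- When were stats last gathered on the table, and are they flagged stale?
SELECT table_name, last_analyzed, sample_size, stale_stats
FROM   dba_tab_statistics
WHERE  table_name = 'BIG_ORDERS';

-- Which columns have histograms, and of what kind?
SELECT column_name, num_distinct, histogram, num_buckets
FROM   dba_tab_col_statistics
WHERE  table_name = 'BIG_ORDERS';
```

And that is just two items off the list; each of the others has its own set of views, traces and tests.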

My point is, the more complex the software, the more “intelligent” it is, the more it is designed to allow for “stupid” users, then the more super-intelligent the user has to be to cope when it breaks.

How about an alternative?

How would it be if we went back to the Rule Based Optimizer and no automatic management of complex situations?

Oracle would maybe need to add a few rules to the RBO for it to cope with later developments, so it would be slightly more complex than in V6, but not a lot.
Everything else, the User decides. You only gather stats you decide to gather, on objects you decide need them. No you don’t, it’s a Rule Based Optimizer, no stats gathering! {But see below}.

No automatic memory management. No automatic anything.

The User {the technical user, the DBA and Developer} would have to be smart. Not brilliant, just smart. You would probably have to do more, but most of it would be easier as the levels of complexity and interdependence are reduced. All those tweaks and tricks in the CBO and all the monitoring to cope with “complex” would not exist to go wrong.

Plus it might solve another concern I have. I think there is a chasm growing: there is no need to solve simple problems, as Oracle copes, but you then have to solve complex problems when Oracle does not cope. If you don’t develop skills and experience solving the simple problems, how do you solve the complex ones? I think this is why most Oracle performance and architecture experts are old {sorry, pleasantly middle-aged}. Young people new to the arena have a massive learning mountain to climb.

So, if we have stupid software, maybe we can get away with more stupid “smart” expert users, ie ALL of us can cope. You cut your teeth on smaller, simpler systems and learn how to cope with the stupid software beast. As you learn more, you learn to cope with more complex situations, and they never get that complex, as the database is not so “clever”.

I’d actually still argue that all the intelligence gathering the Oracle database does should continue – stats gathered on objects, the ability to gather information on memory usage and thus advise on changes, the tracking of column usage and table changes. But We, the Stupid Users, get to look at it and use it as we see fit for our systems.
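That “We decide” approach might look something like this. {A sketch only: the schema, table and column names are invented, and the exact parameters you would choose depend entirely on your system.}

```sql
-- The User decides: gather stats only on the table we choose, with a
-- histogram only on the one column we know holds skewed data.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_OWNER',
    tabname          => 'BIG_ORDERS',
    estimate_percent => 10,                            -- sample size we picked
    method_opt       => 'FOR COLUMNS STATUS SIZE 254', -- histogram on the skewed column only
    cascade          => TRUE);                         -- do the indexes too
END;
/
```

The point being that every choice in there is an explicit, human one, made because we looked at the data, not because a background job decided for us.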

I’m sure many systems would not work quite so fast in my scenario, but I’d rather have a system working at 75% of its theoretical fastest all the time than one working at 95% and breaking regularly, and in ways so complex it needs weeks to work out and fix.

I now await all the comments telling me how stupid I am {I can be blindingly stupid, especially on Fridays}.
