
The Frustrated User’s perspective. November 28, 2009

Posted by mwidlake in Perceptions.
1 comment so far

I got the email below from a friend this evening. Said friend does not work in IT. He works in a large organisation with a large and active IT department that might just be forgetting they provide a service, as opposed to laying down the law…

****************************************************************
Hi Martin

For the last few weeks since {an edited out software virus disaster} we have been bombarded with unsolicited security policies from I.T. They pop up during the 10-15 minutes it takes to log on to our computers. You then have to download the policy and sign at the bottom to say whether you accept or decline the policy. When I scanned through the 10th policy I was struck by the fact that none of it applied to my area of responsibility, except for one small part that had been covered in excruciating detail in one of their previous pathetic attempts at communicating what is expected of us. And all said missives use what looks like a variation of the English language. Having skipped the policy during a number of recent logons, I was now being informed that it is “mandatory” to accept the policy or decline it giving a reason. I declined, giving the above observation on the lack of relevance to my role as a reason.

I have now been informed that it is not possible to issue only the relevant policies to individuals (and presumably, having identified this is not possible, they have not bothered trying in the first place?) and in any case there might come a time when I “might” be given a task where the latest I.T. policy applies and therefore I have to be aware of the existence of the policy. I think this latest one was something to do with purchasing software packages from suppliers – although this isn’t entirely clear. There is no way that I would be allowed to purchase software packages, which is a shame as there are off-the-shelf products that do what we require, whereas the in-house system foisted upon us simply does not provide any reliable or useful information whatsoever.

The following scenario occurs to me. I write a policy on controlling legionella – not unreasonable given that we have swimming pools, showers, air con etc. in our premises. I then send a copy to every employee, requiring them to open it, expecting them to read it, understand it and accept it, “just in case” they get asked to go and run a sports centre. What response do you think I would get?

Although the risk of catching legionella is low, people have died as a result; yet we do not require everyone to sign a policy for this or any of the other more serious hazards they face at work. I am not aware of any software-purchasing-related deaths of late. For dangerous stuff, employees sign one policy when they join the organisation. If they have to deal with a hazard we make them aware by warning them about it and, if necessary, give them additional training, guidance and support so that they can manage the risk in accordance with the overall policy.

Perhaps we have got this wrong. Maybe we should require all computer users (just for example) to complete a workstation assessment online every day when they start work – and if they don’t, their computer should blow up in their face and a guillotine then drop from the ceiling removing their hands so they can’t sue for RSI or eyestrain.

That’ll teach them
************************************************************

I hope I have never been responsible for inflicting enough inconvenience on my users to make them as aggrieved and angry as my friend. The thing is, I now worry that I might have…

Friday Philosophy – Memory (of the Human Kind) November 14, 2009

Posted by mwidlake in Perceptions, Uncategorized.
7 comments

One of the Joys of having been around for a while (I’ve hit that point where I occasionally find myself working with people who were not born when I got my first job in IT) is that you start to re-learn things. Well, I do. {I have to point out that, this being a Friday Philosophy, you will learn nothing yourself, especially not about Oracle, from this post. If you want Oracle knowledge you might want to pop by Dion Cho, Tanel Poder or Richard Foote}

Several times over the last couple of weeks I have learnt something “new” and realised a little while later that I already knew it, but had forgotten (BLEVEL is one, consistent gets being impacted by array fetch size is another). This is not something to be proud of – not knowing something you once did is far worse, and slightly worrying, than never having known it…
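The array fetch size effect, incidentally, can be sketched with back-of-an-envelope arithmetic (a rough model with made-up numbers, not figures from any real system): each fetch call revisits the block it stopped on, so for a simple full scan the consistent gets are roughly the table blocks plus one extra get per fetch.

```python
import math

def approx_consistent_gets(table_blocks, rows_fetched, array_size):
    # Rough model: a full scan reads each block once, plus each fetch
    # call re-visits the block it stopped on, i.e. one extra get per fetch.
    return table_blocks + math.ceil(rows_fetched / array_size)

# Same query, different array (fetch) sizes - illustrative numbers only
print(approx_consistent_gets(1_000, 100_000, 10))    # 11000 gets
print(approx_consistent_gets(1_000, 100_000, 500))   # 1200 gets
```

The shape of the effect is the point: a tiny fetch array can multiply the consistent gets for the same statement several times over, which is exactly the kind of thing one learns, forgets and re-learns.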

I like to think that it is all the new stuff I am learning, pushing previous knowledge out of my brain’s buffer cache {if it is not warmed by re-using it}, but I suspect that it might just be an uptime-related memory leak. A bit like having your database open for a few months and doing a lot of work that includes massive PL/SQL in-memory tables and hash joins (pickler fetches). No one seems sure of the exact mechanics, but after a while you have a hell of a lot less spare memory in your server than you started with :-)

Maybe the memory loss is closer to the “pickler query” analogy than I thought, you can preserve soft tissue for a long time in alcohol. I’ll have another glass of wine and think on that.

This Forgetting Stuff was actually a major factor in my starting a blog. I was in the process of putting a load of notes on Oracle Oddities and things I had learnt from years past onto a web site so I could get at them from wherever I was in the world, and a friend told me I was being stupid – I should put them on a blog. So I did. There are only two little issues with this.

  • I can’t remember what I have already put on my blog.
  • I’ve forgotten where I put the ZIP disk with the notes on.

So I was contemplating this drift of knowledge and two things struck me.

1) I reckon that the very best people, in any area of expertise, are blessed with excellent memories. There is a comedian in the UK called Stephen Fry, and he is renowned for being stunningly clever. I think he probably is stunningly clever, but he also remembers everything he has learnt (Rory McGrath is another UK comedian with a perfect memory, but somehow he lacks the charm of Stephen Fry, and he has a beard, so not so highly renowned).
2) My poor memory is not maybe such a bad thing. I don’t have to get so upset when a whole chunk of my Oracle knowledge becomes obsolete. I used to be really good at sizing objects and utilizing space, taking into account the 5-block minimum, getting the extents to fit nicely into the datafiles, little scripts to resize segments into a small but sensible number of extents to reduce wasted space, considering initrans, pctfree, pctused… Pretty much all pointless now :-)

Buffer Cache Hit Ratio – my “guilty” Confession November 1, 2009

Posted by mwidlake in Perceptions, performance.
14 comments

My Friday Philosophy this week was on Rules of Thumb on buffer gets per row returned.

Piet de Visser responded with a nice posting of his own, confessing to using ratios to help tuning {We seem to be playing some sort of blog-comment tag team game at the moment}.

Well, I have a confession so “guilty” or “dirty” that I feel I cannot inflict it on someone else’s blog as a comment.

I use the Buffer Cache Hit Ratio.

And the Library Cache Hit Ratio and the other Ratios.

As has been blogged and forum’d extensively, using these ratios is bad and stupid and anyone doing so does not know what they are doing as they do not help you solve performance problems. I mean, hell, you can download Connor McDonald’s/Jonathan Lewis’s script to set it to what you want, so it must be rubbish {go to the link and choose “tuning” and pick “Custom Hit Ratio” – it’s a rather neat little script}.

The point I am trying to make is that the Buffer Cache Hit Ratio (BCHR) was once wrongly elevated to the status of a vital piece of key information, but the reaction against that silly situation has been that it is now viewed by many (I feel) as the worst piece of misleading rubbish. Again, a silly situation.

I think of the BCHR as similar to a heart rate. Is a heart rate of 120 good or bad? It’s bad if it is an adult’s resting heart rate, but pretty good if it is a kitten’s resting heart rate. It’s also probably pretty good if it is your heart rate as you walk briskly. Like the BCHR it can be fudged. I can go for a run to get mine higher, I can drain a couple of pints of my blood from my body and it will go up {I reserve the right not to prove that last one}. I can go to sleep and it will drop. Comparing my resting heart rate to yours (so like comparing BCHRs between systems) is pretty pointless, as I am a different size, age and metabolism to you {probably} but looking at mine over a year of dieting and exercising is very useful. If only I could keep up dieting and exercising for a year…

So what do I think the much-maligned Buffer Cache Hit Ratio gives me? It gives me the percentage of SQL block accesses, across the whole database activity, that are satisfied from memory as opposed to disc. Or, put the other way around, the percentage of occurrences where a block has to be got from the I/O subsystem. Not how many blocks are read from storage or memory though, but you can get that information easily enough. As physical IO is several orders of magnitude slower than memory access {ignoring I/O caches, I should add}, it gives me an immediate feel for where I can and can’t look for things to improve.
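The arithmetic itself is trivial. As a sketch (the statistic names echo those in v$sysstat, but the figures below are made up purely for illustration):

```python
def buffer_cache_hit_ratio(db_block_gets, consistent_gets, physical_reads):
    """Classic BCHR: the percentage of block requests satisfied from the
    buffer cache rather than from the I/O subsystem."""
    logical_reads = db_block_gets + consistent_gets
    return 100 * (1 - physical_reads / logical_reads)

# 50,000 current-mode gets, 950,000 consistent gets, 40,000 physical reads
print(round(buffer_cache_hit_ratio(50_000, 950_000, 40_000), 1))  # 96.0
```

Exactly the same caveat as the heart rate applies: the number only means something against that system’s own history, not against someone else’s database.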

If I am looking at a system that is overall very slow (e.g. high process wait queues under Linux/Unix, the client has said the system is generally slow) and I see that the BCHR is low, say below 90%, this tells me I probably can get some performance increase by reducing physical access. I’ll go and look for those statements with the highest physical IO and the hottest tablespaces/objects in the DB.
If the BCHR is already up at the 99% level, I need to look at other things, such as tuning sorts, removing unnecessary activity in the database, or being very mindful of nested loop access where maybe it is not the best access method (very likely due to old stats on tables).

When I have got to know a system and what its BCHR generally sits at, a sudden change, especially a drop, means there is some unusual physical IO going on. If the phones start going and someone is complaining “it’s all slow”, the BCHR is one of the first things to look at – especially as it is available from so many places.

Another thing the BCHR gives me is that, if I am looking at a given SQL statement or part of an application, its specific BCHR can be compared to the system BCHR. This does not help me tune the statement itself, but I know that if its specific BCHR is low then it has unusually high IO demands compared to the rest of the system. Further, reducing it might help the whole system, so I might want to keep an eye on overall system throughput. If I reduce the statement’s execution time by 75% and the whole system IO by 1%, the client is likely to be more happy, especially if that 1% equates to other programs running a little faster “for free”.
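The comparison is just the same ratio computed from two sets of numbers (again, purely illustrative figures, not from a real system):

```python
def bchr(logical_reads, physical_reads):
    # Percentage of block requests satisfied from memory
    return 100 * (1 - physical_reads / logical_reads)

system_bchr = bchr(logical_reads=2_000_000, physical_reads=20_000)  # ~99%
stmt_bchr = bchr(logical_reads=50_000, physical_reads=15_000)       # ~70%

if stmt_bchr < system_bchr:
    print(f"statement at {stmt_bchr:.0f}% vs system at {system_bchr:.0f}%: "
          "unusually IO-hungry compared to the rest of the workload")
```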

So, I don’t use the BCHR to tune individual statements but I feel confident using it to track the general health of my database, so long as I am mindful of the impact of new functionality or upgrades. It’s a rule of thumb. It’s a database heart rate. (and so is redo generation and half a dozen other things).

Friday Philosophy – How many Consistent Gets are Too Much? October 30, 2009

Posted by mwidlake in Perceptions, performance.
2 comments

One of my good friends, Piet de Visser commented on a recent post that “380 {consistent gets} is too much” per row returned. He is right. He is wrong. Which?

Piet is actually referring to a given scenario I had just described, so he is probably {heck, he is spot on} correct as his comment was made in context – but it set me to thinking about the number of times I have been asked “is the number of consistent gets good or bad” without any real consideration of the context. The person asking the question usually just wanted a black/white good/bad ratio, which is what Piet also mentioned in his comment, a need to discuss such a ratio. I am on holiday in New England with my second bottle of wine, memories of having spent the day eating lobster, kicking through Fall leaves and sitting by a warm fire reading a book, so I am mellow enough to oblige.

Sadly, out of context, no such ratio probably exists. *sigh*. There evaporates the warm glow of the day :-).

The question of “is the number of consistent gets per row good or bad?” is a bit like the question “is the pay rate good enough?”. It really depends on the whole context, but there is probably an upper limit. If I am helping my brother fit {yet another} kitchen then the pay rate is low. He has helped me fit a few, I have helped him fit a few, a couple of pints down the pub is enough and that equates to about 30p an hour. Bog-standard production DBA work? 30-60 pounds an hour seems to be the going rate. Project managing a system move that has gone wrong 3 times already? I need to factor in a long holiday on top of my normal day rate, so probably high hundreds a day. £10,000 a day? I don’t care what it is, I ain’t doing it, as it is either illegal, highly unpleasant, both, or involves chucking/kicking/hitting a ball around a field and slagging off the ref, and I ain’t good at ball games.

I have a rule of thumb, and I think a rule of thumb is as good as you can manage with such a question as “is {some sort of work activity on the database} per row too much?”. With consistent gets, if the query has fewer than 5 tables, no group functions and is asking a sensible question {like details of an order, where this lab sample is, who owes me money} then:

  • below 10 is good
  • 10-100 I can live with, but there may be room for improvement
  • above 100 per record, let’s have a look.

Scary “page-of-A4” SQL statement with no group functions?

  • 100-1,000 consistent gets per row is fine unless you have a business reason to ask for better performance.

If the query contains GROUP BY or analytic functions, all bets are pretty much off unless you are looking at

  • a million consistent gets or 100,000 buffer gets, in which case it is once again time to ask “is this fast enough for the business”.

The million consistent gets or 100,000 buffer gets is currently my break-even “it is probably too much” point, equivalent to “I won’t do anything for £10 grand”. 5 years ago I would have looked quizzically at anything over 200,000 consistent gets or 10,000 buffer gets, but systems get bigger and faster {and I worry I am getting old enough to start becoming unable to ever look a million buffer gets in the eye and not flinch}. If buffer gets are at 10% of the consistent gets, I look at it. It might be doing a massive full table scan, in which case fair enough; it might be satisfying a simple OLTP query, in which case, what the Hell is broken?

The over-riding factor to all the above ratios, though, is “is the business suffering an impact because the performance of the database is not enough to cope?”. If there is a business impact, even if the ratio is 10 consistent gets per row, you have a look.
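Purely to show the shape of the rules rather than to hand out a tool, the thresholds above could be sketched like this (the function name and categories are mine, made up for illustration):

```python
def gets_per_row_verdict(gets_per_row, query_kind="simple"):
    """Rough triage of consistent gets per row returned.

    query_kind: 'simple'  - fewer than ~5 tables, no group functions
                'monster' - scary page-of-A4 SQL, no group functions
    GROUP BY / analytic queries are judged on total gets, not per row.
    """
    if query_kind == "simple":
        if gets_per_row < 10:
            return "good"
        if gets_per_row <= 100:
            return "liveable, maybe room for improvement"
        return "let's have a look"
    if query_kind == "monster":
        if gets_per_row <= 1000:
            return "fine unless the business needs better"
        return "let's have a look"
    raise ValueError("aggregate queries: compare total gets to ~1,000,000")

print(gets_per_row_verdict(380))  # "let's have a look" - Piet's case
```

And, as just said, a genuine business impact trumps whatever verdict the thresholds give.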

Something I have learnt to look out for though is DISTINCT. I look at DISTINCT in the same way a medic looks at a patient holding a sheaf of website printouts – with severe apprehension. I had an interesting problem a few years back. “Last week” a query took 5 minutes to come back and did so with 3 rows. The query was tweaked and now it comes back with 4 rows and takes 40 minutes. Why?

I rolled up my mental sleeves and dug in. Consistent gets before the tweak? A couple of million. After the tweak? About 130 million or something. The SQL had a DISTINCT clause. Right, let’s remove the DISTINCT. The first version came back with 30 or 40 thousand records, the second with a cool couple of million. The code itself was efficient, except it was traversing a classic Chasm Trap in the database design {and if you don’t know what a Chasm Trap is, well that is because Database Design is not taught anymore, HA!}. Enough to say, the code was first finding many thousands of duplicates and now many millions of duplicates.
So, if there is a DISTINCT in the SQL statement, I don’t care how many consistent gets or buffer gets or how much elapsed time is involved. I take out that DISTINCT and see what the actual number of records returned is.
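The tell-tale is the inflation factor between what the engine builds and what DISTINCT hands back. Using round numbers in the spirit of that story (the exact counts are approximate):

```python
# rows the query returned with DISTINCT vs rows built before de-duplication
before = {"distinct_rows": 3, "raw_rows": 40_000}      # the 5-minute version
after = {"distinct_rows": 4, "raw_rows": 2_000_000}    # the 40-minute version

for label, q in (("before tweak", before), ("after tweak", after)):
    inflation = q["raw_rows"] / q["distinct_rows"]
    print(f"{label}: DISTINCT collapses ~{inflation:,.0f} rows per result row")
```

An inflation factor in the thousands is the smell of a Chasm Trap (or some other join fanning out), not of an inefficient access path.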

Which is a long-winded way of me saying that some factors over-ride even “rule of thumb” rules. So, as a rule of thumb, if a DISTINCT is involved I ignore my other Rules of Thumb. If not, I have a set of Rules of Thumb to guide my level of anxiety over a SQL statement, but all Rules of Thumb are over-ridden by a real business need.

Right, bottle 2 of wine empty, Wife has spotted the nature of my quiet repose, time to log off.

When do We Learn #2 October 20, 2009

Posted by mwidlake in Blogging, Perceptions.
4 comments

I exchanged a couple of mails with a friend a few weeks back about how the same topic can arise in a couple of blogs at the same time. Well, I had just blogged myself on when we learn and, blow me over with a feather, Jonathan Lewis goes and posts in a similar vein. He must be nicking ideas off my blog :-) {and yes, I am being tongue-in-cheek here}. We both posted thoughts about needing spare capacity in your life to be able to spend the time to really understand how something works. Yes, you learn a lot in the heat of a crisis, but you rarely really understand the details, i.e. become an expert, without having time to digest and qualify that knowledge.

I did write a long comment on his posting, including some links back to my own meandering thoughts on the topic, then realised that I would come across as a bit “me too”, so I trimmed it and took out the links. But that is part of why I do my own blog; I found I was spamming other people’s pages with my diatribes and so decided to spam my own. {And I know I am meandering, I’m a bit sleep-deprived, stream of consciousness at the moment}. So here I can refer back to my own stuff and say “me too”, but you are already here reading this, so you only have yourself to blame :-)… Anyway, I wanted to refer back to a very early blog of mine about how much knowledge is enough. I try and make the point that you do not need to know everything; you can become a small-field or local expert just by being willing to learn a bit more.

Jonathan raises the point that he does not have a full time commitment to one client and so he has the luxury to investigate the details and oddities of what he looks into. He suggests this is a large part of why he is an expert, which I feel is true, and I am very happy to see one of the Oracle Names acknowledging that relative freedom from other pressures is key to having the luxury to chase down the details. Those of us in a full time role doing, e.g., DBA, development or design work have more than enough on our workday plates to keep us too busy. We cannot be top experts; we have a boss to satisfy and a role to fulfill. {Jonathan does not mention that choosing a career where you have the luxury of time is also a pretty brave choice – you stand a good chance of earning a lot, lot less whilst working very hard to establish enough of a reputation to be able to earn enough to feed yourself and the cat}.

But this is not a black and white situation. There is room for many of us to become experts in our domain or in our locality. Our breadth of knowledge may never be as wide as others’, and we may not know more than anyone else in a given area {and let’s face it, logically there can only be one person who knows the most about a given topic, and that one person is probably in denial about their superiority, which seems to be a defining quality of an expert – it is not so much humility, I think, as an acknowledgement of there being more to know and a desire to know it}. However, most of us can become the person in our organisation who knows most about X, or who can tie A, B and C together in a more holistic way than others (and that can be a real trick, you know). There are always the top experts that you can call on for the worst problems, but you could become the person people come to first.

My advice would be to not try and learn everything about all aspects of Oracle, because you can’t, but rather learn a lot about one or two areas {and consider areas that are more unusual, not just “tuning SQL” or “the CBO”} and expand just your general knowledge of the wider field. And never forget that there is more to learn. So long as you are taking in more knowledge and understanding, you are improving. The best way to do it? Don’t just read other people’s stuff, try teaching someone else. It never ceases to amaze me how stupid I realise I am when I try and show someone else how something works. But that’s OK, so long as they learn it’s fine. If I learn as well, it’s great, and I nearly always do.

I’m getting on a bit, and I think I am finally getting the hang of the idea that the more you know, the more you realise you don’t know. I wish I had known that when I knew nothing.

Friday Philosophy – when do we learn? October 17, 2009

Posted by mwidlake in Perceptions, Private Life.
5 comments

I’ve had a theory for a while that there are two times when we learn:

  • When we are under extreme duress
  • When we are under no duress at all

I think all technicians would agree with the former. We learn a lot when something very important needs doing urgently, like getting the database back up or finding out why the application has suddenly gone wrong {Hint, very often the answer is to find What Changed}. Another example is when a decision has been made to implement something a manager has seen a nice sales presentation on and they really like the look of it. We technicians have to make it actually work {and I admit to once or twice having been the Manager in this situation :-). I apologise to my people from back then}.

I’ve also believed for a while that the other time you learn, or at least can learn, is when things are unusually quiet. When work is just at its normal hectic pace, it’s hard to spend the extra effort on reading manuals, trying things out and checking out some of those technical blogs. You spend all your spare effort on The Rest Of Your Life. You know, friends, partners, children, the cat.

So I think you need some slack time to learn, and that is when the most complete learning is done. Yes, you learn a lot when the pressure is on, but you are generally learning “how to get the damned problem resolved” and probably not exactly why the problem occurred; did you fix the problem or just cover it over? Did you implement that new feature your boss’s boss wanted in the best way, or in a way that just about works? You need the slack time to sort out the details.

When do we get slack time? Weekends and holidays. How many of us have snuck the odd technical book or two into our luggage when going on holiday? {And how many of us have had that look from our partners when they find out?}.

Well, at the end of this week I am going on two and a half weeks holiday, over to New England in the US. A few days in Boston, up through Maine, across to Mount Washington to a little hotel where we had possibly the best meal of our lives, down to Mystic and then over to Washington to see some friends.

I am not taking any manuals. I am not taking any technical books.  I am not taking a laptop with Oracle on it. I am not even likely to blog for the duration. Why? I have not been as mentally and physically shattered as I am now since I finished my degree 20 years ago. I just want to switch off for a while.

So I am revising my theory of when we learn. I now think we learn:

  • When we are under extreme duress {that just does not change}
  • When we have spare mental capacity and the drive to use it.

Right now, I think I have the mental capacity of a drunk squirrel. So from the end of next week, I’m going to sleep, read sci-fi, eat and drink well and maybe do a bit of culture.  The computers and the learning can wait for a little while.

Friday Philosophy -Do I think Oracle is Rubbish? October 8, 2009

Posted by mwidlake in Blogging, Perceptions.
1 comment so far

This should be a “Friday Philosophy” posting really, but heck it’s my blog, I can do what I want {quiet smile}. Besides, by the time I finish this, it might well BE Friday. Oh, heck, I’ll just change the title now to a Friday Philosophy one…

I’ve been reviewing some of my blog this week {it is coming up to 6 months since I started so I was looking back at how it has gone}. Something struck me, which is I can be pretty negative about Oracle software and even Oracle Corp at times.

I mostly seem to pick up on oddities, things that do not work as they first seem, even outright bugs. I do not often post about “this is how this cool Oracle feature works” or “I used this part of Oracle to solve this problem”. Partly the reason is that there are a lot of blogs and web pages about “how this feature works”, so the need is generally already met. Partly it is that I, like most people, am more interested in exceptions, gotchas and things going wrong. If it works, heck, you just need to read the manual, don’t you?

So, do I like Oracle?

Yes. Over all I really like working with Oracle. This is because:

  • I can store and work with pretty much whatever data I have ever needed to with Oracle. It is rare for me to be utterly stumped as to how to achieve something; it might take time and the result may be a tad slow or a little inelegant, but it can be done.
  • Despite my recent complaints, you can chuck a hell of a lot of data at Oracle. Back in 2002 I was asked if I could put 7 or 8 Terabytes of data into an Oracle database. I did not even pause before saying “Yes!” – though I knew it would be a big job to do so in a way that was maintainable. I’d now feel the same about a couple of hundred TB.
  • The core technology works really well. We all complain about bits and pieces, admittedly, but if I have a complex SQL statement with 15 tables and 25 where clauses, I don’t worry about the database giving me the wrong answer, I worry about the developer having written it wrongly {or Oracle running it slowly, but that keeps me in work, hehe}. I can back up Oracle in many ways and, once I have proven my recovery, I know I can rely on the backup continuing to work, at least from an Oracle perspective. I’ve never yet lost any production data. Do I worry about transactional consistency? Never. Maybe I should, I’ve seen a couple of blogs showing how it can happen, but in my real-work life, I never even think about it.
  • Oracle does continue to improve the core products and they will listen to the community. It might not seem like it at times, I know, but they do. It can just take a long time for things to come through. As an example, I worked with the Oracle InterMedia developers back with the Oracle 10 beta program in 2003. They {well, to be specific, a very clever lady, Melli Annamalia} were adding stuff back then that we and others needed that did not get to see the light of day in 10GR1, but was there as a load of PL/SQL to do it in 10GR2. Melli said she was adding it into the code base as ‘C’ as well, but it would take a while. It did; I think it was part of the 11G release.

Will this stop me complaining and whining on about bits of Oracle I don’t like or that do not work as they should? Absolutely not. As Piet de Visser said in a comment on one of my recent blogs, it is incumbent on us users to keep Oracle Corp honest. But I thought I ought to mention, at least once, that I do actually like Oracle.

I Like Oracle, OK?

Grudgingly :-)

A Tale of Two Meetings – 11GR2 and MI SIG October 5, 2009

Posted by mwidlake in Meeting notes, Perceptions.
7 comments

Last week I attended two Oracle events, each very different from the other.

The first was an Oracle Corp event, giving details of the new 11GR2 release and what it was introducing. It was in a nice hotel in London with maybe 250, 300 attendees and all quite swish.

The other was a UK Oracle User Group meeting, the last Management and Infrastructure SIG for 2009. 30 people in the Oracle City office and far more unassuming {And note, as I chair the MI SIG, anything I say about the day is liable to bias…}.

Both events were useful to attend and I learnt things at both, but I also found the difference between the two quite interesting.

Oracle 11G Release 2

The official Oracle 11GR2 presentation was where you went for the definitive information on what Oracle Corp feel are the new features of 11G R2 that are of interest (though some of it was not R2-specific but general 11G).

Chris Baker started off by telling us “there has never been a better time” to move to the latest technology, or a greater need to gain business advantage through using said latest technology. You know, it would be really nice, just once, to go to such a corporate event and not be given this same thread of pointless posturing. I know it is probably just me being old and grumpy and contrary, but after 20 years in the business I am sick to the hind teeth of Keynotes or Announcements that say the same empty “Raa-Raa” stuff as the previous 19 years – the need “now” to get the best out of your technology has been the same need since the first computers were sold to businesses, so give it a rest. Just tell us about the damned technology; we are smart enough to make our own decision as to whether it is a big enough improvement to warrant the investment in time and effort to take on. If we are not smart enough to know this, we will probably not be in business too long.

Sorry, I had not realised how much the Corporate Fluff about constantly claiming “Now is the time”, “Now things are critical” gets to me these days. Anyway, after that there were some good overviews of the latest bits of technology and, following on from them, some dedicated sessions in two streams on specific areas, split between semi-technical and management-oriented talks, which was nice.

There was plenty of talk about the Oracle Database Machine, which appears to be Exadata version 2 and sits on top of Sun hardware, which is no surprise given the latest Oracle acquisition. I have to say, it looks good; all the hardware components have taken a step up (so now 40Gb/s InfiniBand interconnect, more powerful processors, even more memory), plus a great chunk of memory as Sun’s “FlashFire” technology to help cache data and thus help OLTP work. More importantly, you can get a 1/4 machine now, which will probably make it of interest to more sites with less money to splash out on a dedicated Oracle system. I’ll save further details for another post, as this is getting too long.

The other interesting thing about the new Oracle Database Machine was the striking absence of the two letters ‘P’ and ‘H’. HP was not mentioned once. I cannot but wonder how those who bought into the original Exadata on HP hardware feel about their investment, given that V2 seems only available on Sun kit. If you want the latest V2 features, such as the much-touted two-level disc compression, is Oracle porting them over to the older HP systems? Are Oracle offering a mighty nice deal to upgrade to the Sun systems, or are there some customers with the HP kit currently sticking needles into a clay model of top Oracle personnel?

The other new feature I’ll mention is RAT – Real Application Testing. You can google for the details but, in a nutshell, you can record the activity on the live database and play it back against an 11g copy of the database. The target needs to be logically identical to the source {so same tables, data, users etc.} but you can alter initialisation parameters, physical implementation, patch set, OS, RAC… RAT will tell you what will change.

For me as a tuning/architecture guy this is very, very interesting. I might want to see the impact of implementing a system-wide change, but currently this would involve either only partial testing and releasing on a wing and a prayer, or a full regression test on an expensive and invariably over-utilised full test stack, which often does not exist. There was no dedicated talk on it though; it was mentioned in parts of more general "all the great new stuff" presentations.

Management and Infrastructure SIG

RAT leads me on to the MI SIG meeting. We had a talk on RAT by Chris Jones from Oracle, which made it clear that there are two elements to Real Application Testing: one is Database Replay and the other is the SQL Performance Analyzer, SPA. Check out this Oracle datasheet for details.

SPA captures the SQL from a source system but then replays only the SELECT statements, one by one, against a target database. The idea is that you can detect plan changes or performance variations in just the SELECT SQL. Obviously, if the SELECTs are against data created by other statements that are not replayed then the figures will be different, but I can see this being of use in regression testing and in giving some level of assurance. SPA has another advantage in that it can be run against a 10g database, as opposed to RAT, which can only be run against 11g (though the workload can be captured from a terminal-release 10g or even 9i system – that is a new trick).
There are no plans at all to backport RAT to 10g; it just ain't gonna happen, guys.
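SPA has its own package, DBMS_SQLPA, which works against a SQL Tuning Set captured from the source system. A hedged sketch of a before/after comparison {assuming a SQL Tuning Set called MY_STS already exists on the target and a made-up task name – again from memory, so verify against the documentation}:

```sql
DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- Create an analysis task over an existing SQL Tuning Set.
  l_task := DBMS_SQLPA.CREATE_ANALYSIS_TASK(
              sqlset_name => 'MY_STS',
              task_name   => 'SPA_UPGRADE_TEST');

  -- Run the SELECTs once before the change...
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'SPA_UPGRADE_TEST',
    execution_type => 'TEST EXECUTE',
    execution_name => 'before_change');

  -- {make the change here: optimizer parameter, new index, patch set...}

  -- ...run them again afterwards, then compare the two runs.
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'SPA_UPGRADE_TEST',
    execution_type => 'TEST EXECUTE',
    execution_name => 'after_change');
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'SPA_UPGRADE_TEST',
    execution_type => 'COMPARE PERFORMANCE');
END;
/
-- Then pull out a report of plan and performance changes:
SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_UPGRADE_TEST', 'HTML') FROM dual;
```

The COMPARE PERFORMANCE step is where the plan-change detection mentioned above actually happens; the report flags statements whose plans or statistics differ between the two executions.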

The SIG also had an excellent presentation on GRID for large sites (that is, many Oracle instances) and how to manage it all. The presentation was the result of requests for a talk on this topic from people who come to this SIG, and Oracle {in the form of Andrew Bulloch} was good enough to oblige.

The two Oracle Corp talks were balanced by technical talks by James Ball and Doug Burns, on flexible GRID architectures and using OEM/ASH/AWR respectively. These were User presentations, mentioning Warts as well as Wins. Not that many Warts though; some issues with licence daftness were about it, as the technology had been found to work and do its job well. Both talks were excellent.

The fifth talk was actually an open-forum discussion on Hiring Staff, chaired by Gordon Brown {No, not THAT Gordon Brown, as Gordon points out}. Many people joined in and shared opinions on, and methods used in, getting new technical staff. I found it useful, as I think did many. These open sessions are not to everyone's taste and they can go wrong, but Gordon kept it flowing and all went very well.

 

The difference between the two meetings was striking. Both had strong support from Oracle {which I really appreciate}. Both included talks about the latest technology. However, the smaller, less swish event gave more information and better access to ask questions and get honest answers. There was also almost no Fluff at the SIG; it was all information or discussion, no "Raa-Raa". But then, the lunch was very nice and there were free drinks at the Corporate event {we shared rounds at a local pub after the SIG event – maybe one round too much}.

I guess I am saying that whilst I appreciate the Big Corporate event, I get a lot more out of the smaller, user group event. Less fluff, more info. Thankfully, Oracle support both, so I am not complaining {except about the "there has never been a better time" bit, I really AM sick of that :-( }.

So if you don't support your local Oracle user group, I'd suggest you consider doing so. And if, like so many sites seem to, you have membership but don't go along to the smaller events, heck, get down there! Some of the best stuff is at these SIG meetings.

Friday Philosophy – Cats and Dogs October 2, 2009

Posted by mwidlake in Perceptions.
Tags: , ,
2 comments

I like cats. Cats are great. I don’t like dogs. I’ve been attacked by a nasty bitie dog and that is my reason. And dogs growl at you. And woof.

This is of course unfair; I have been bitten by cats a lot more than by dogs (seeing as I own cats and have never owned a dog, this is to be expected), cats scratch, cats hiss at you and yowl, and they have been known to leave "presents" in my slippers.

My animal preference comes down to personal, even personality, reasons as opposed to logic. A dog needs a walk twice a day; they follow you around, always want attention and tend to be unquestioning in their affection. Cats can often take you or leave you, will come when called only if they had already decided to come over, and the issue of who owns whom is certainly not clear. If you do not keep your cat happy, there is always Mrs Williams down the road who Tiddles can up and go and live with instead.

These same illogical preferences riddle IT, I think. People make decisions for what I sometimes term "religious" reasons. As an example, I've worked with a lot of people who are either strongly for or against Open Source. There are logical and business reasons for and against Open Source, but it seems to me that many people have decided which they prefer for personal reasons {often, Open Source people tend towards anti-establishment and anti-corporation views, Open Source detractors tend towards supporting business and personal wealth}. They will then argue their corner with the various pros and cons, but you know there is no swaying their opinion as it was not derived from logic.

In the same way I will not stop preferring cats to dogs. And I know I personally have a couple of Religious decisions about IT that are not based on cold logic {And I am not changing them, OK!}.

I think it helps to realise that people do make decisions this way (some make most of them this way, most make some decisions this way) and it’s not worth getting that angry or annoyed when someone seems to be intractable in their stance against your ideas. After all, you might have made a “religious” decision which side you are on and they can’t understand why you don’t agree with them :-)

Opinions formed in this manner are difficult to change. They can and do change, but usually only over time and in a gradual way, certainly not from someone telling them they are an idiot for preferring Sybase to Ingres and verbally berating them with various arguments for and against.

So, if it is only a work thing {and heck, computers and software really are not that important} be passionate, but try and be a little flexible too.

This post was, of course, just a shallow excuse to include a link to a Cat thing – my favourite cat animation. Sorry, Dog lovers {it's your own fault for liking nasty, smelly dogs}.

Data Dictionary Performance – reference September 29, 2009

Posted by mwidlake in internals, Perceptions.
Tags: , ,
1 comment so far

I’ve got a couple more postings on Data Dictionary performance to get where I plan to go to with this, but if you want to deep-dive into far more technical details then go and check out Dion Cho’s excellent posting on fixed object indexes.

I was not planning on getting into the sys.x$ fixed objects, as you need SYS access to look at them, which not everyone has, but this is where Dion goes. His posts are always good; I need to check them far more often.

As a soft-technical aside, I often mention to people when doing courses on SQL, or writing standards, or even on the odd occasions I've discussed perception, that we Westerners are taught to read left to right, top to bottom, and we pick out left justification very well. Code laid out like the example below we find easy to read:

select pers.name1                surname
      ,pers.name2                first_forename
      ,pers.nameother            middle_names
      ,peap.appdate              appointment_date
      ,decode (addr.name_num_ind
                 ,'N', to_char(addr.housenum)
                 ,'V', addr.housename
                 ,'B', to_char(addr.housenum
                              ||' '||addr.housename)
              )                  house_no_name
      ,addr.address2             addr_street
      ,addr.address3             addr_town
      ,addr.address4             addr_dist
      ,addr.code                 addr_code
from person              pers
     ,address            addr
     ,person_appointments peap
where pers.addr_id     =addr.addr_uid
and   pers.pers_id     =peap.pers_id
and   pers.active_fl   ='Y'
and   pers.prim_cons   ='ANDREWSDP'
and   peap.latest_fl   ='Y'

But this is not true of other cultures, where people do not read left to right, top to bottom. I have had this confirmed just a couple of times, when people who were born in Eastern cultures were in the course or conversation.

So I was very interested to see Dion's Korean version of the blog post I reference above (I really hope this link to the Korean version is stable).
The main body of the page is on the right, not the left, but the text appears to be left justified.

Of course, I am horribly ignorant, I do not know which direction Koreans read in :-(. I could be spouting utter rubbish.
