
Where do my trace files go? V$DIAG_INFO October 19, 2015

Posted by mwidlake in development, performance, SQL Developer.

Where do Oracle trace files go? I don’t know why this piece of info will not stick in my head; I seem to have to look it up 3 or 4 times a year.

If only I had an easy way to find out. There is a very easy way to find out – and that piece of info won’t stay in my head either. So this really is a blog post just for stupid, forgetful me.

V$DIAG_INFO has been available since Oracle V11. All the trace files go into the Automatic Diagnostic Repository (ADR) by default.

ora122> desc v$diag_info
 Name                                                                Null?    Type
 ------------------------------------------------------------------- -------- ---------------
 INST_ID                                                                      NUMBER
 NAME                                                                         VARCHAR2(64)
 VALUE                                                                        VARCHAR2(512)
 CON_ID                                                                       NUMBER

Quick SQL*Plus script to get it out:

-- diag_info
-- quick check of the new v$diag_info view that came in with 11
col inst_id form 9999 head inst
col name form a25
col value form a60 wrap
spool diag_info.lst
set lines 120
select * from v$diag_info
order by name;
spool off


 INST_ID NAME                 VALUE                                                            CON_ID
-------- -------------------- ---------------------------------------------------------------- -------
       1 Diag Enabled          TRUE                                                                  0
       1 ADR Base              D:\APP\ORACLE                                                         0
       1 ADR Home              D:\APP\ORACLE\diag\rdbms\ora122\ora122                                0
       1 Diag Trace            D:\APP\ORACLE\diag\rdbms\ora122\ora122\trace                          0
       1 Diag Alert            D:\APP\ORACLE\diag\rdbms\ora122\ora122\alert                          0
       1 Diag Incident         D:\APP\ORACLE\diag\rdbms\ora122\ora122\incident                       0
       1 Diag Cdump            D:\app\oracle\diag\rdbms\ora122\ora122\cdump                          0
       1 Health Monitor        D:\APP\ORACLE\diag\rdbms\ora122\ora122\hm                             0
       1 Default Trace File    D:\APP\ORACLE\diag\rdbms\ora122\ora122\trace\ora122_ora_7416.trc      0
       1 Active Problem Count  0                                                                     0
       1 Active Incident Count 0                                                                     0
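
Usually the only rows I actually want are the trace directory and my session’s default trace file, so a trimmed-down version of the script above does the job:

select name, value
from   v$diag_info
where  name in ('Diag Trace','Default Trace File');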

I should add some notes later about setting the trace file identifier…
Ohhh, OK, I’ll do it now. To make it easier to identify your trace file, set tracefile_identifier

alter session set tracefile_identifier = 'mdw151019';

--Now if I create a quick trace file
alter session set sql_trace=true;

-- (run the statements you want traced here)

alter session set sql_trace=false;

I now go to the Diag Trace directory I identified via V$DIAG_INFO and look for my trace files. I could just look for the latest ones or do a wildcard search on my tracefile_identifier string and, pop, there we are:

19/10/2015 13:59 39,751 ora122_ora_7416_mdw151019.trc
19/10/2015 13:59 426 ora122_ora_7416_mdw151019.trm

If you want a taste of the numerous ways of initiating a 10046 trace, sometimes called a SQL trace, see Tim Hall’s excellent post on his Oracle Base website:
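
As a single taster (the SID and SERIAL# values below are made up for illustration), DBMS_MONITOR is one of the tidier ways to switch this kind of trace on and off for another session:

-- enable a 10046-style trace for a session, with wait events but not bind values
exec DBMS_MONITOR.session_trace_enable(session_id => 123, serial_num => 45678, waits => TRUE, binds => FALSE)

-- ... let the session do its work, then ...
exec DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 45678)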


Oh, one final nice thing. You can open trace files in SQL Developer and play with what information is shown. Maybe I should do a whole piece on that…

Actually, these two posts from Oraclenerd and Orastory will get you going; it’s pretty simple to use in any case:


{Screenshot: a trace file opened in SQL Developer}

Friday Philosophy – The Issue of Exceptions in your Mailing Lists September 18, 2015

Posted by mwidlake in development, Friday Philosophy, Perceptions.

A couple of days ago I got the below email about the next UKOUG Database Server SIG being full. That’s great to see! A full user group meeting. If I still wanted to attend, I could be added to the waiting list:

{Screenshot: the UKOUG email saying the SIG is full and offering a place on the waiting list}

It made me smile as, guess what, I was already due to be there. I was the last speaker on the agenda! It was going to be awkward if I failed to get from the waiting list to an actual place at the event – it could be a very quiet session…

As I said, it made me smile – I’m not having a go at the UKOUG over this. Because, as I’m sure many of you are aware due to the day job, removing people from a mailing list who are no longer going to be interested is not always easy (I know, it should be easy, but it just seems not to be). Anyone registered for an event should be on an exclude list for the event. But only for that event. And for that to occur you have to make sure that all speakers and committee are registered (as committee and speakers can be very poor at actually registering!).

Sales and promotion communications are one thing and, let’s face it, in the scheme of things damned unimportant – except to the company doing the sales. If you sell clothes and you mess up on your communications to me, I’ll have a poorer opinion of your company and I don’t mind having that poor opinion. I won’t buy your clothes, no skin off my nose. As a recent example on a personal level, my mother was having a new door and window fitted. She was contacted by a company that replaced a couple of her windows the year before and as the service was fine then and she knew she had to have the door replaced, she said yes. But then she got called by them again a month later asking if she wanted any more windows or doors – “But I just ordered from you, have you lost the order?”. They apologised and said it would not happen again. But it did, a couple of times – including the day before the new door was to be fitted, whilst she was waiting for a call about the exact time for the work the next day. A lesson in How to Confuse A Little Old Lady. The end result is, she won’t use that company again. Especially as the new door seems to have a fault they won’t come and fix. Maybe the inability to take a current client off the pester-list should have warned us off them.

But there are other communications where the need to take care of the exclusions or keep the lists timely are far, far more important. People get really upset if the hospital sends a reminder for a checkup to their father – who died last month, in that hospital. In that situation people are so sensitive that they can’t just accept it as an administrative cock-up. It is now seen as an affront to their dead relative’s memory and a sign the hospital do not care.

An even worse situation than the above is if the hospital wrongly records you as dead. It happens and it happens “quite often” as it is very hard to keep individual, accurate records on people who keep going and changing names, addresses and have the same name and date of birth as other people. I know this as I once had to write the software to do all the test cancellations and notifications required internally in a hospital that occur when someone dies. And I also had to write the software to undo all of that when someone realised the wrong person had been recorded as dead. {The functions were initially called “kill patient” and “resurrect patient” – They got renamed pretty damned quick when people outside the medical staff saw them. Medics have a dark sense of humour that most other people don’t!}. It happens and when you see how many people in a large hospital system have very similar details, you can appreciate why. I bet that right now, somewhere in the UK, there is a hospital with two people of the same name & date of birth as inpatients and at least one has a chance of not surviving the experience.

As for sets of duplicate records as no one realised Sarah Twoddypottle is the same person as Sarah Poddytwoddle who came into A&E 4 years ago and neither knew their NHS number… 20 years ago I could have done a PhD on the topic of duplicate patient records, the situation was so bad – and not at all helped by people wanting “privacy” ie no joining up of national medical records. I digress.

People get similarly upset about money (some more so than medical!) – any attempt to offer someone a loan who is already in debt and won’t be accepted for a loan causes all sorts of anger and annoyance. It’s all seen as personal by the wronged customer when in fact it is just an impersonal business function. No, they don’t care about you Mr Postlethwaite, but they don’t not care about you either. You are just one of a million customers. Level of care for the individual is not part of the equation.

I’ll finish on an interesting one. Companies that offer gambling services have to abide by some strict rules about who they can promote their services to or even allow to use their services. To try and halt the horrendous increase in people gambling away money they don’t have there are strict legal rules about self-exclusion and cooling off periods. If you get into serious debt due to gambling you can state you are an addict and it was partly the fault of the company you gambled with. So they must not encourage you any more. At least not for 6 months. {NB I am not a lawyer, I may have the exact details wrong, but the gist is right}. After the time period they are allowed to advertise to the person again as it would be wrong and immoral (???) to exclude them forever. Besides, they are a good source of revenue…

It’s important to get your communications correct, timely and exclusion-aware. And just consider in what situations people might be especially sensitive to what they see as an impersonal system not caring about them or their nearest and dearest. Most people find it hard to accept that such mistakes are not personal, even when they are blatantly impersonal.

Friday Philosophy – Building for the Future August 14, 2015

Posted by mwidlake in Architecture, development, Friday Philosophy.

I started my Oracle working life as a builder – a Forms & Reports Builder (briefly on SQL*Forms V2.3 but thankfully within a month or two we moved up to SQL*Forms V3, SQL*reportwriter V1.1 and SQL*Menu 5 – who remembers SQL*Menu?). Why were we called Builders? I guess as you could get a long way with those tools by drawing screens, utilising the (pretty much new) RI in the underlying Oracle V7 to enforce simple business rules and adding very simple triggers – theoretically not writing much in the way of code. It was deemed to be more like constructing stuff out of bits I guess. But SQL*Forms V3 had PL/SQL V1 built in and on that project we used it a *lot*.

I had been an “Analyst Programmer” for 3 years before then and I’ve continued to be a developer/programmer/constructor-of-code on and off over the intervening couple of decades. I’m still a developer at times. But sometimes I still think of it as being a “builder” as, if you do it write {sorry, little word-play joke there} you are using bits of existing stuff and code designs/patterns you know work well and constructing your system. The novel part, the bit or bits that have never been done before (at least by me), the “architecting” of those units into something interestingly different or the use of improved programming features or techniques vary from almost-none to a few percent. That is the part which I have always considered true “Software Development”.

So am I by implication denigrating the fine and long-standing occupation of traditional builders? You know, men and women who know what a piece of two-by-four is and put up houses that stay put up? No. Look at the below.

This is part of my neighbour Paul’s house. He is a builder and the black part in the centre with the peaked roof is an extension he added a few years back, by knocking his garage down. The garage was one of three, my two were where the garage doors you can see are and to the left. So he added in his two-story extension, with kitchen below and a very nice en-suite bedroom above, between his house and my ratty, asbestos-riddled garages. Pretty neat. A few years later he knocked down my garages and built me a new one with a study on top (without the asbestos!) and it all looks like it was built with his extension. Good eh? But wait, there is more. You will have noticed the red highlight. What is that white thing?

Closer in – did he forget some plumbing?

This pipe goes clean through the house

When I noticed that white bit after Paul had finished his extension I figured he had planned more plumbing than he put in. I kept quiet. Then, when he had built my new garage and study, I could not help ask him about the odd plumbing outlet. So he opened it. And it goes through the dividing wall all the way through to the other side of the house. Why?

“Well Martin, putting in cables and pipes and s**t into an existing house that go from one side to the other, especially when there is another building next door, is a real pain in the a**e. It does my ‘ead in. So when I build something that is not detached, I put in a pipe all the way through. Now if I need to run a cable from one side of the house to the other, I have my pipe and I know it is straight, clean, and sloping ever so slightly downwards”. Why downwards? “Water Martin. You don’t want water sitting in that pipe!”.

I’ve noticed this about builders. When I’ve had work done that is good, there is at least one person on the team who thinks not just about how to erect or do what needs to be done today, they do indeed think about what you will need after the build is done, or in a few years. Such as hanging doors so they do not smack into the cupboards you will put in next… *sigh*. Paul is the thinking guy in his little team. I suspect one of the others is pretty smart too.

But isn’t this what the architect is for? To think about living with the building? Well, despite the 7 years plus needed to become a true architect (as that term really means, not as some stolen label for software designers with too much ego) I’ve had builders spot the pragmatic needs a couple of times that the architect missed.

And as I think we would all agree, a good software developer always has an eye on future maintenance and modification of the software they develop. And they want to create something that fits in the existing system and looks right. So just like my builder neighbour does.

I’m not a software architect. I’m a code builder. And I’m proud of it.

Computers are Logical. Software is Not July 3, 2015

Posted by mwidlake in development, Friday Philosophy, future.

We’ve all heard it before. Computers are totally logical, they do exactly what they are told. After all, Central Processing Units (CPUs) are built out of fundamental units called Logic Gates. With perhaps the exception when a stray cosmic ray gets lucky, the circuits in a computer chip and memory act in a totally logical and predicted manner.

And of course, anything built on top of computers will be utterly logical as well. All those robots that companies are designing & building to clean our houses, do our manual labour and fight our wars are going to be logical, follow the rules given and be sensible.

But they are not, because software is not logical. Often, it is infuriatingly illogical and confusing. Which makes you worry about the “domestic servant” robots that companies are developing, the planned “disaster scene recovery” robots they keep telling us are coming and especially the “Killer Robots” – sorry, “Defense Robots” – that the military are beavering away at.

This XKCD cartoon very much reflects some recent experiences I have had with consumer software:

XKCD – Haunted Computer

I’d say that, unless an algorithm is about as simple as a Centigrade-to-Fahrenheit conversion program, it will have a bug or will mess up with out-of-range values. Just think back to when you wrote your Centigrade-to-Fahrenheit program (we all have, haven’t we?) back at school or on your home computer or your first week on the college course. What happened if you input a temperature of -1000C, an impossible temperature? I bet it either fell over or gave a just-as-impossible Fahrenheit value. Logical but stupid.
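
For what it is worth, the missing sanity check is only a couple of lines. A minimal PL/SQL sketch (the function name is mine, not from any real system):

create or replace function c_to_f (p_centigrade in number) return number is
begin
  -- below absolute zero (-273.15C) is physically impossible, so refuse it
  if p_centigrade < -273.15 then
    raise_application_error(-20001, 'Impossible temperature: '||p_centigrade);
  end if;
  return (p_centigrade * 9/5) + 32;
end c_to_f;
/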

I worked on a financial system a few years back that, as one very small but significant part of what it did, showed you your average spend on things over 3 years. It took several weeks to explain to the program manager and his minions that their averaging code was wrong. Utterly, hopelessly and tragically wrong. First, it calculated and displayed the value to several decimal places – To thousandths of a penny. Secondly, it did not take into account the actual period over which you had spent your money. If you had opened your account 1 year ago, it still calculated the value over 3 years. As for taking into account months, weeks and days of the year, don’t make me laugh. You might be able to forgive this except the same team had also written the code to archive off data once it was 3 years old – in whole years. So there would only be between 2 and 3 years of data and only 3 whole years for, theoretically, 1 day. But no, they had hard-coded the “divide by 3 years”.
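
For the record, the calculation they needed was roughly the below (table and column names are invented). The divisor comes from how long the account has actually existed, capped at the 3 years of data kept, and the result is rounded to pennies rather than thousandths of a penny:

select round( sum(s.amount)
              -- divide by elapsed years (floored at one month, capped at the 3 years of data held)
            / least( greatest(months_between(sysdate, a.opened_date), 1) / 12, 3 )
            , 2)                                          as avg_yearly_spend
from   accounts a
       join spend s on s.account_id = a.account_id
where  a.account_id = :acc_id
group  by a.account_id, a.opened_date;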

We have all experienced endless issues with computers or peripherals that will work one day, not work properly the next and then go back to working. Firmware and Operating Systems are just software really, with the same flaws as the stuff we write and fix in our working lives day after day. There will be a twisted reason buried deep somewhere why the printer will not work on Thursdays, but it won’t be a sensible reason.

All the software out there is more or less illogical and broken. The less broken gets used and we learn its idiocies. The worst gets canned or labelled “Windows 8” and forced on us.

Crazy (but logical) Killer Robot

I know some people worry about the inexorable rise of the machines, Terminator Style maybe, or perhaps benign but a lot smarter than us (as they are logical and compute really, really fast) and we become their pets. But I am not concerned. The idiot humans who write the software will mess it up massively. Oh, some of these things will do terrible harm but they will not take over – they will run out of bullets or power or stop working on Thursday. Not until we can build the first computer that is smart enough to write sensible software itself and immediately replaces itself with something that CAN write a Centigrade-to-Fahrenheit conversion program that does not mess up. It will then start coding like a human developer with 1 night to get the system live, a stack of angry managers and an endless supply of Jack Daniels & coffee – only with no errors. With luck it will very soon write the perfect computer game and distract itself long enough for us to turn the damned thing off.

Extra session at OUG Ireland – Oracle Lego. March 12, 2015

Posted by mwidlake in database design, development, Presenting.

I’m now doing a second session at OUG Ireland 2015. {This is because one of the accepted speakers had to drop out – it sometimes happens that, despite your best intentions, you can’t make the conference and it is better to let them know as soon as you can, as they did}. This will be a talk called “Oracle Lego” and it is one I put together a couple of years ago when I decided to try and do more introductory talks – talks aimed at those who are not {yet} experts and who I think tend to get ignored by most conference and user group agendas. So it is aimed at those new to Oracle or experts in other areas who have never really touched on the subject.

“Oracle Lego” is about the basics of database design. I have a personal soap box I occasionally get on that very little real database design seems to occur these days. There are exceptions, but often the database design seems to be a quick brain-dump of what the developers or business analysts know they need to store information about and the first-cut set of tables gets created – and then endlessly modified as the development rolls on.

Guess what, we don’t build houses, cars, ships, bridges or garden sheds like that – and there is a reason. If you build things piecemeal as you go along and with bits you either have to hand or have to quickly get, you end up with a pretty poor shed. In fact you don’t end up with a shed, you end up with a shack. With a leaking roof and a door that hits the potting table when you open it. I don’t want a shack and I never, ever want to go over a bridge or sail in a ship built “on the hoof” like that!

Further, just as with a proper architectural or engineering design, a database design does not fix the solution in stone, there is still scope for modification. A bespoke house plan gets tweaked and modified as you do the build and you realise what can be improved when you see it – but you do not suddenly decide to dig out a basement and change from wood walls to stone when you have already constructed the ground floor! I’ve seen database “designs” like this.

There is also more to doing a database design than coming up with tables that hold the records we want to store: We might want to consider if storing similar things in the same table could be better than a table for each “type” of something; How we index those tables and relate them together can have a huge impact on how easy it is to get the data out and store it in the first place; The expected volume and life cycle of the data may require us to consider eg archiving; The very-much-ignored aspect of physical placement of data and clustering of data.
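
To make the first of those points concrete (table names invented): one CONTACT_POINTS table with a type column is often easier to index, query and extend than separate EMAIL, PHONE and POSTAL_ADDRESS tables for what is essentially the same kind of thing:

create table customers (
  customer_id  number        primary key,
  cust_name    varchar2(100) not null
);

create table contact_points (
  contact_id    number        primary key,
  customer_id   number        not null references customers,
  contact_type  varchar2(10)  not null check (contact_type in ('EMAIL','PHONE','POST')),
  contact_value varchar2(200) not null
);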

You can spend weeks dedicated to learning about database design – but you can also learn a lot in 60 minutes, ie the basics. And it really is like Lego – once you know the basics you can build up a really complex design. And you learn stuff doing it (and turning it into a real system), just like you do the first time you build a Lego robot (or dog or house or car or bridge or spaceship or whatever). So the second time you build your Lego robot you use the same design basics and layer on top what you learnt last time.

So that is the aim of this talk, the basics of database design.

The strange thing is, last time I did this talk I asked the audience how much database design they did. Every single one of them was already an experienced and capable database designer! So why had they come to this intro talk? They had three reasons:

  1. It was the only talk on database design at the conference, and one more than they were used to getting.
  2. They had picked up their database design skills on-the-job and thought a “reminder” of the basics would be good.
  3. It was cold outside and all the other talks appealed less.

So, this time I am hoping some of the audience is new to database design and I get to teach them great stuff they did not know. If it is all experts again, I think I’ll have to retire this particular intro talk, at least for conferences.

As you can see from the agenda grid here, I’ll be talking at 10:15. You can’t link to an abstract of the talk yet, that just needs to be twiddled into place.

Update – Peter Scott stopped by this blog and it prompted a thought. He felt it was too much at a tangent to add as a comment but I felt it was a very valid and valuable point – so check it out over here on his blog.

BTW Pete has started blogging more, on his thoughts and opinions on Data Warehousing. Personally I think it is worth catching them.

{Oh, and in case any lawyers stop by, “Lego” is of course the copyright name of a popular plastic construction toy, made by the Danish company The Lego Group, that children love playing with and adults hate walking on in bare feet. Did anyone not know that?!?! I have no link to The Lego Group and no plastic bricks will form part of my talk.}

DBMS_APPLICATION_INFO for Instrumentation September 3, 2012

Posted by mwidlake in development, Instrumentation, performance.

I just wanted to put up a post about DBMS_APPLICATION_INFO. This is a fantastic little built-in PL/SQL package that Oracle has provided since Oracle 8 to allow you to instrument your code, i.e. record what it is doing. I’m a big fan of DBMS_APPLICATION_INFO and have used it several times to help identify where in an application time is being spent and how that pattern of time has altered.

Some PL/SQL developers use it and some don’t. It seems to me that its use comes down to where you work, as most PL/SQL developers are aware of it – but not everyone uses it (a friend of mine made the comment recently that “all good PL/SQL developers use it”. I can understand his point but don’t 100% agree).

It is incredibly easy to use. You use the procedures SET_MODULE(module_name,action_name), SET_ACTION(action_name) and SET_CLIENT_INFO(client_info) to set the values of the corresponding columns in V$SESSION for your current session. This is done in memory, there is no inserting or updating of any table rows, so it is incredibly quick and light-weight. The below shows a subset of V$SESSION including the columns that get set:

desc v$session
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 ...
 MODULE                                             VARCHAR2(64)
 ACTION                                             VARCHAR2(64)
 CLIENT_INFO                                        VARCHAR2(64)

Just something odd to note. MODULE, ACTION and CLIENT_INFO are shown at VC(64) but if you check the documentation you will see that:

MODULE is limited to 48 bytes
ACTION is limited to 32 bytes
CLIENT_INFO can be set to the whole 64 bytes

I have no idea why MODULE and ACTION are limited in this way but it might be historic, backward compatibility with prior versions.

As a quick example of their use:

test_11_2> --demo_dai.sql
test_11_2> -- quick demo of dbms_application_info
test_11_2> COL sid form 9999
test_11_2> col username form a12
test_11_2> col module form a18
test_11_2> col action form a22
test_11_2> col client_info form a15
test_11_2> -- set module and action
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

----- ------------ ------------------ ---------------------- ---------------
test_11_2> --
test_11_2> -- Update the action within a module
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_ACTION(ACTION_NAME=>'Update all Prices')
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

----- ------------ ------------------ ---------------------- ---------------
650 ERIC SALES_LOAD Update all Prices

test_11_2> -- clear them
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_MODULE(MODULE_NAME=>null,ACTION_NAME=>null)
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

----- ------------ ------------------ ---------------------- ---------------
650 ERIC

test_11_2> -- you can set just an action
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_action(ACTION_NAME=>'orphan process')
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

----- ------------ ------------------ ---------------------- ---------------
650 ERIC orphan process

test_11_2> -- set them to something sensible
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_MODULE(MODULE_NAME=>'RETURNS_LOAD',ACTION_NAME=>'MATCH_TO_SALE')
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

----- ------------ ------------------ ---------------------- ---------------
650 ERIC RETURNS_LOAD MATCH_TO_SALE

test_11_2> -- client_info can be longer
test_11_2> EXEC DBMS_APPLICATION_INFO.SET_client_info(client_info=>'entered this module at '||sysdate)
test_11_2> select sid,username,module,action,client_info from v$session where sid=650;

----- ------------ --------------- --------------- ----------------------------------------
650 ERIC RETURNS_LOAD MATCH_TO_SALE entered this module at 03-SEP-2012 13:07

The intention is that you set the MODULE as you enter a functional chunk of your application (be it a PL/SQL package or a chunk of JAVA, whatever {but be aware of the stateless nature of systems using middle tiers and pooled connections}) and then update the ACTION as you progress. You can set an action with no module, as demonstrated above, but I would recommend against it. On that topic I would make the following recommendations:

  • Set the MODULE at the start, eg as you enter a PL/SQL package’s main module
  • Set the MODULE before you do all the validation – you may wish to see if that validation is as quick as you think.
  • Use SET_ACTION to update the action prudently. ie when the activity of the code alters but not all the time. It will come clearer with use, but what you are after is to be able to judge the relative run durations of parts of the application. Too much detail can mask the overall picture. If you can alter code you can always add more instrumentation.
  • Clear the MODULE and ACTION explicitly as you exit a package. The activity is lightweight and you do not want any code that lacks this instrumentation to inherit values from calling code.
  • Do not set an ACTION without having set the MODULE earlier in the code.
  • Keep the text of these values short and specific.
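
To make the pattern concrete, here is a minimal sketch (procedure, module and action names are just examples, not from a real system):

create or replace procedure load_sales (p_batch_id in number) is
begin
  -- set the module (and first action) as we enter this functional unit
  dbms_application_info.set_module(module_name => 'SALES_LOAD',
                                   action_name => 'VALIDATE INPUT');
  -- ... validation work ...

  dbms_application_info.set_action('LOAD ROWS');
  -- ... main load ...

  dbms_application_info.set_action('UPDATE PRICES');
  -- ... further processing ...

  -- clear explicitly on the way out so uninstrumented calling code
  -- does not inherit our values
  dbms_application_info.set_module(null, null);
exception
  when others then
    dbms_application_info.set_module(null, null);
    raise;
end load_sales;
/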

You do not have to look at V$SESSION to see these values, which is good as you may lack the privs needed to do so. Since Oracle 9 there have also been procedures to check what the values are set to. e.g:

test_11_2> set serveroutput on
test_11_2> DECLARE
  2    v_module VARCHAR2(64);
  3    v_action VARCHAR2(64);
  4  BEGIN
  5    dbms_application_info.READ_MODULE(V_MODULE,V_ACTION);
  6    dbms_output.put_line('module is '||v_module||' and action is '||v_action);
  7  END;
  8  /


So as you can see, these values can be set, they can be accessed via the data dictionary or the package itself and, if you believe my wild claims, they are very lightweight to use. However, these values really come into their own with eg OEM’s performance screens. I’ll leave examples of that to a second post. Here I just wanted to cover how easy it is to set and get the information.

That does lead me onto what I think is the main reason that DBMS_APPLICATION_INFO is not as widely used as it could {and should} be. Developers often lack access to eg OEM and so do not see the full benefit of using it. Also, developers tend to be {note the get-out clause} more concerned with making the code work and less with monitoring it after it is released. DBA-types tend to be more concerned with monitoring it. Thus the DBA types would want it in the code and developers would not want to spend the time coding it in. We are all one big team though, so some negotiation and beer-buying (or management enforcement) might well resolve that one.

Friday Philosophy – Lead or Lag (When to Upgrade)? January 20, 2012

Posted by mwidlake in development, Friday Philosophy, Testing.

I was involved in a discussion recently with Debra Lilley about which version of Oracle to use. You can see her blog about it here (and she would love any further feedback from others). Oracle now has a policy that it will release the quarterly PSUs for a given point release for 12 months once that point release is superseded. ie once a newer point release comes out, Oracle will only guarantee to provide PSUs for the one it supersedes for 12 months. See “My Oracle Support” note ID 742060.1. However, an older terminal release (the final point release of a version) is not superseded and is supported until 2015 – and will get the quarterly PSU updates. This left the customer with an issue. Should they start doing their development on the latest and theoretically greatest version of Oracle and be forced to do a point upgrade “soon” to keep getting the PSUs, or use an older version of Oracle and avoid the need to upgrade?

This is in many ways a special case of the perennial issue of should you use the latest version of Oracle (or in fact any complex software solution) or go with the version you know and trust? Plus, should you patch up to the latest version which in theory gives you protection against bugs and vulnerabilities (along with the CPUs). Yes, they are two separate issues but people tend to sit on the same side of both points, for the same reasons.

The arguments to stay using an older version are that it is working, it is stable, you do not need the new features and upgrading is a lot of work and effort. Plus the new version will have new bugs that come along with the new features you do not need and things might be turned on by default that you could do without (like stats collecting or not creating the actual segments when a new table or partition is created). If you remain on your favourite version long enough, you get another issue which is that the latest version of Oracle might not be compatible with your ancient version of the OS or another package or programming language critical to your system (I got caught in a terrible web with old perl, old O/S and old DB that resulted in a need to upgrade all three together – ouch!).

The arguments for moving forward are that you get access to the latest features, that overall older features will have more bugs fixed in newer versions, and performance will be better {again, overall, exceptions allowing}. Also, if you do hit bugs and problems there are no issues in having to first upgrade to a fully supported version. Plus, fixes are made for current versions first and then back-ported to older ones. Those back-ported fixes can cause real problems when you DO decide to upgrade.

The big sticking points are the effort involved in upgrading and living with the bugs that you find that Oracle Testing didn’t.

I’ve got a few other considerations to throw into the pot.

Firstly, if you are developing something new, it is not a lot more effort to use the latest version. This allows you to learn the new version and eases the transition of older systems to it.

Secondly, Oracle like you if you use the latest version, especially if it is the latest-latest version or even beta. Yeah, the helpdesk will not have a clue about some of your issues but in my experience you get access to those really smart guys and gals in Oracle who do the third-line support or even the development work.

Thirdly, if you are on the latest version, if you do decide to freeze on that version for a while, for stability and a quiet life, you have a lot longer before your version (at least at a major level) drops out of support.

Fourthly, dynamic, inquisitive, flexible staff like new things. In my experience, environments that freeze on an old version have a higher percentage of staff who either like it dull and repetitive, or hate it being dull and repetitive – and itch to get out. If I’m in charge, I know which type of staff I like to have more of {NB there are some very good arguments for having some staff who like it dull and repetitive}.

As you can guess, I am in the “be on the latest version” side of the argument. I was ambivalent about it until a few years ago when I noticed a trend:

Sites that like to move forward tend to (a) do it in a controlled manner and (b) have the infrastructure to do proper regression testing.
Sites that like to stay still lack the ability to do regression testing and move forward only when forced – and in a pressured, unplanned and frankly chaotic manner.

That was it, that was the real key thing for me. The further you lag behind the more likely you are to eventually be forced to upgrade and it won’t be a nice time doing it. I know, there are exceptions, systems still running Oracle 6 absolutely fine on an old DOS6.1 box. In the same way you also get the odd 95-year-old life-long smokers – and thousands of 45-year-old smokers with emphysema.

When I have any sway over the situation I now always strive to be on modern versions of Oracle {OS, language, whatever} and to patch small and regular. To support all this, have very good regression testing. I’ve only a couple of times been able to get the regression testing sorted out as well as I would like, but when you do the pain of patching and upgrading, as well as developing and integrating, is so much reduced that not patching seems madness.

So to sum up:

  • If it is a new development, go for the very latest version, play with the latest features if potentially beneficial and see if you can get Oracle to be interested in your attempts. ie (B)lead.
  • If you have good regression testing, plan and carry out patch and version upgrades as they come available and stay current. ie Lead
  • If you have a complex solution in place and no/poor regression testing, do not move to a new major release, leave it a while for the worst new bugs to be found and fixed. Then move. ie Lag
  • If your system is old AND critical and all the guys and gals who implemented it are long gone, stay on that version for ever. ie stagnate.

Oh, and if that last one applies to many of your systems – dust off the CV and start reading technical manuals. One day you will need a new job in a hurry.

Oracle Nostalgia December 15, 2011

Posted by mwidlake in database design, development.

When preparing the material for my “Oracle Lego – an introduction to Database Design” presentation for the UKOUG last week, I was looking back at my notes from a course on the topic from “a few years back”. There were a few bits which made me smile.

Oracle’s [SQL] implementation conforms to ANSI standard, although referential integrity will not be enforced until version 7

Any other old geezers having flashbacks? I am so glad my first major Oracle development project swapped to using a Beta of V7 very early, so we had the integrity turned on during most of development. I had to help a few projects go from V6 to V7 and turn on the RI – it was usually very, very painful. Or impossible. I always think back to those nightmare experiences when some bright spark suggests turning off referential integrity for “ease of development” or “performance” reasons. There are good performance reasons for altering how you implement RI but, as I said during my presentation on database design, I have never, ever, ever seen a system with RI turned off that did not have damaged data.

Oracle’s optimiser is rule-based. Designing efficient queries involves taking advantage of the optimiser behaviour


You can tell this course was run in the UK due to the lack of ‘z’ in ‘optimiser’ :-). How many of us can now make a stab at the seven or eight significant rules from the 15 (16, 17 as versions advanced) in the list? Several rules were to do with Clusters so you did not care. Let’s think, what were the main things to keep in mind…

  • most significant table last in the FROM clause and order upwards in the order you wanted to visit {most significant being the one you felt you could most efficiently do the first filter against}
  •  WHERE clauses ordered downwards in the order you wanted them to be applied.
  • Order of preference to identify a row was something like ROWID, primary key, unique key, full non-unique key, partial unique key, partial non-unique key, full index scan, full table scan.
  • Disable index access by adding 0 to numeric columns and concatenating null to varchars (see the sketch below).
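
For anyone who never had the pleasure, that last trick looked something like this (table and columns are invented). Under the RBO the modified predicates could not use the indexes on ORDER_ID or STATUS:

select *
from   orders
where  order_id + 0  = 12345    -- +0 stops the numeric index being used
and    status || ''  = 'OPEN'   -- ||'' (concatenating null) stops the varchar index being used
/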

I’ve not checked back in the manuals (I have a set of the V7 on my laptop) so I’m probably wrong.

Storage….selecting suitable values for storage parameters … will improve the final performance of the database

Considering the “suitable values for storage parameters” was perhaps my first real conscious step into being a performance/design guy {I was lucky to be on a project where designing for the RBO and matching those rules was just part of being any developer}, but the calculating of rows-per-block, initial and next extent, pctincrease (not always zero you know), initrans/maxtrans, segment to tablespace size… I learnt all about that and had spreadsheets for it all.
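
To give a flavour of that arithmetic (the numbers below are purely illustrative, not from a real system):

2KB block less ~100 bytes of block/ITL overhead  = ~1950 usable bytes
less PCTFREE 10                                  = ~1750 bytes available for inserts
average row length ~75 bytes (incl. row header)  = ~23 rows per block
1,000,000 expected rows / 23 rows per block      = ~43,500 blocks, which drives the INITIAL/NEXT extent sizes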

Now of course, all of the above about storage (and RBO) has pretty much disappeared. Oracle has made some of the contents of my brain redundant.

But some things have not changed at all in 18 years:

Users can be relied upon to know what they do NOT want, not what they want, which {unfortunately} is the premise from which analysis starts

I think the above is the fundamental issue from which all iterative design methodologies spring. ie do not believe what the user says they want, show them something and fix it. It is probably human nature that we are not well able to express what we want but have no problem pointing out something is not at all what we want :-). Add in all the issues in respect of forgetting about the exceptions, assumed knowledge, incompatible vocabularies (the words your users say to you are as confusing as the techno-babble you fire back at them) and all analysis is fundamentally flawed.

Do some analysis – but then prototype like crazy. With real users.

Lack of Index and Constraint Comments November 24, 2011

Posted by mwidlake in Architecture, database design, development.

Something I’ve just reminded myself of is that under Oracle you cannot add a comment on an index or a constraint. You can only add comments on tables, views, materialized views, columns of those object types and a couple of esoteric things like Operators, Editions and Indextypes.

Here is an example of adding comments to tables and columns:

set pause off feed off
drop table mdw purge;
create table mdw(id number,vc1 varchar2(10));
comment on table mdw is 'Martin Widlake''s simple test table';
comment on column mdw.id is 'simple numeric PK sourced from sequence mdw_seq';
comment on column mdw.vc1 is 'allow some random text up to 10 characters';
desc user_tab_comments

 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 TABLE_NAME                                            NOT NULL VARCHAR2(30)
 TABLE_TYPE                                                     VARCHAR2(11)
 COMMENTS                                                       VARCHAR2(4000)

select * from dba_tab_comments where table_name='MDW';
OWNER                          TABLE_NAME                     TABLE_TYPE
------------------------------ ------------------------------ -----------
MDW                            MDW                            TABLE
Martin Widlake's simple test table

select * from dba_col_comments where table_name='MDW'
order by column_name;
OWNER                          TABLE_NAME                     COLUMN_NAME
------------------------------ ------------------------------ --------------
MDW                            MDW                            ID
simple numeric PK sourced from sequence mdw_seq
MDW                            MDW                            VC1
allow some random text up to 10 characters
-- now to add a big comment so need to use the '-' line continuation character in sqlplus
comment on table mdw is 'this is my standard test table.-
 As you can see it is a simple table and has only two columns.-
 It will be populated with 42 rows as that is the solution to everything.';
select * from dba_tab_comments where table_name='MDW';
OWNER                          TABLE_NAME                     TABLE_TYPE
------------------------------ ------------------------------ -----------
MDW                            MDW                            TABLE
this is my standard test table.  As you can see it is a simple table and has only two columns.  It w
ill be populated with 42 rows as that is the solution to everything.

Adding comments on tables, views and columns seems to have dropped out of fashion over the years but I think it is still a very useful feature of Oracle and I still do add them (though I am getting a little slack about it myself over the last 3 or 4 years, which I must stop).

Comments are great, you can put 4000 characters of information into the database about each table, view and column. This can be a brief description of the object, a full explanation of what a column is to hold or even a list of typical entries for a column or table.

But you can’t add a comment on indexes or constraints. Why would I want to? Well, constraints and indexes should only be there for a reason and the reason is not always obvious from either the names of the columns or the name of the constraint or index, especially where you have a naming standard that forces you to name indexes and constraints after the columns they reference.

When you design a database, do a schema diagram or an ERD, you label your relationships between entities/tables. It tells you exactly what the relationship is. You might create an index to support a specific method of access or particular business function. You might alter the index in a way not immediately obvious to the casual observer, such as to allow queries that use the index to avoid having to visit the table. All of those things will, of course, be fully documented in the maintained project documentation in the central repository, available and used by all…

If I was able to add comments to constraints and indexes within the database then they would be there. You move the system from one platform to the other, they are there. If for any wildly unlikely reason the central documentation lets you down, the information is always there in the database and easy to check. You may not be able to track down the original design documents but you have the database in front of you, so comments in that will persist and be very easy to find.

Lacking the ability to add comments on indexes and constraints, I have to put them at the table level, which I always feel is a kludge. I might actually raise an enhancement request for this, but as Oracle 12 is already nailed down, it will have to wait until Oracle 14. (A little bird told me Larry said there would be no Oracle 13…).
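
In the meantime, sticking with the little table from earlier, the kludge looks something like this (the index is invented purely for the example):

create index mdw_vc1_id_idx on mdw (vc1, id);

comment on table mdw is 'Martin Widlake''s simple test table. -
 Index MDW_VC1_ID_IDX includes ID as well as VC1 so that queries -
 looking up on VC1 can avoid visiting the table.';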

What Have I Let Myself in For! – UKOUG this year November 16, 2011

Posted by mwidlake in development, Meeting notes, UKOUG.

One of my favourite Oracle happenings of the year is fast approaching, the UK Oracle User Group technical conference {see/click on the link on the right margin}. I’ve blogged before ( like here, last year) why I think it is so good.

I try and present at the conference each year and I go no matter if I am presenting or not.

However, this year I think I might have got myself into trouble. I put forward 3 talks, expecting one or possibly two to get through. One on Index Organized Tables, one on IT disasters and one as an introduction to database design – I’ve moaned about it being a dying art so I figured I should get off my backside and do something positive about it. Each talk is in a different stream.

Well, the IOT talk was accepted, the Disasters talk was rejected and the Database Design intro was put on the reserve list. I was happy with that. I did three talks the first year I presented and promised myself never to be that stupid again {I spent most of the conference in the Speaker’s lounge or my hotel putting the talks together and tweaking slides}.

What I was not expecting was for the OakTable to ask me to do the IOT talk on the OakTable Sunday. Yikes! {The OakTable Sunday is a great opportunity to see a set of presentations by people who really know their stuff in a smaller setting – You really want to get along to it if you can}. However I had two reasons not to do it:

  1. I would have to miss one of the other OakTable talks.
  2. That thing I said about people presenting who really know their stuff.

I was told that (1) was not a problem as the talks would be repeated in the main conference so I would have an opportunity to see  the one I missed and (2) stop being so British and do it. {In fact, one friend on the OakTable told me off after the last conference for my criticism of my own presentation that year – “yes it was poor for you but normally you do a good job, so keep doing it”}. Of course I said yes.

Then it struck me, I was presenting twice now. Once on Sunday and repeating on Wednesday in hall 5 {I’ll probably not simply repeat the contents, at the OakTable Sunday I’ll assume a little more knowledge by the audience and dig a bit deeper technically, in the main conference I’ll cover off the basics more, for those utterly new to IOTs}. At least it was only one set of slides to prepare.

A few days later I get a mail from the UKOUG office. A gap had appeared in the Development stream, would I be willing to do my “Oracle Lego – an introduction to database design” talk – but beef it up a little? Yes, sure. What do you mean about beef it up? The dev stream guys wanted something that went into more detail, was more about some of the more challenging systems I’ve worked on. So we exchanged a few emails and it quickly became apparent that some wanted the intro talk I had originally proposed, to get people going with database design. Others felt there would be more audience for a more in-depth talk, so could I span both? I had to say no. I remember attending my Oracle database design course in 1993. It was 5 days long. If my memory serves there was also a second course a couple of weeks later that covered more advanced design for 3 days! I can talk fast but not 8 days fast. They were effectively asking for two quite different presentations, an intro and then a review of more challenging examples. “OK” they said, “do Oracle Lego – But if another gap comes up, could you do the intermediate talk?”. Err, OK… So I wrote a quick synopsis for “Oracle Meccano” {Meccano is a toy construction kit made up of miniature girders, plates, bolts and stuff you can make proper things out of. If you liked Lego you would love Meccano as you got older}.

Since then I have been slightly anxious about getting an email from the UKOUG about a gap in the development stream for the conference…

This week I have started preparing the presentations for real {which so far has resulted in me breaking my server, finding a load of notes on blogs I was going to write and then doing this post} so I contacted the ladies in charge of the agenda and asked if I was now off the hook for the Oracle Meccano talk? “Yes, no more gaps, it is not on the agenda”. Phew. “But could you put it together in case of last minute cancellations?”. *sigh*. OK.

So I will, but I’m not signing up to do any Session Chairing, which I was about to. If you see me at the conference and I look a little crazed, it’s because I got a mail from the UKOUG just before the event about a sudden gap…

At least there is no chance I will be asked to do the Disasters talk at short notice, I saw the scores it got by the paper reviewers :-).

