Really Simple Naming Standard To Simplify Everything November 29, 2022
Posted by mwidlake in Architecture, database design, development, PL/SQL, SQL.Tags: PL/SQL, SQL, standards, system development
5 comments
Many years ago I came up with a way of naming indexes, variables, table aliases, and several other things that is incredibly simple and, for me at least, Just Works. This might not sound like a big deal but it’s one of the things that has really helped me create readable & easily understandable code, consistent naming for objects, and database designs that make a lot of sense just by looking at them.
Not only that, but this naming convention also saves me a lot of time & mental effort and allows me to apply most of my limited brain capacity on making my SQL & PL/SQL code work, and not on stuff that ultimately should not matter – like what to call a variable.
The Starting Point
We generally name our tables after the objects or data they hold, so they are called PERSON, ORDER, ORDER_LINE, ADDRESS, INPATIENT_STAY, and for intersection entities they are called things like PERSON_NAME, EMPLOYEE_ROLE. Anyone looking at a database like this will get the general idea of what is in each table and how some tables relate to each other. That’s all good.
Naming columns is generally easy too. FIRST_NAME, LAST_NAME, HOUSE_NUMBER, HOUSE_NAME, ORDER_DATE – but some columns are more generic, like ID or UNIQUE_ID for a system generated primary key. START_DATE & END_DATE, CREATED_DATE, STATUS all generally make sense within the table. A lot of places have some simple standards for column names, such as naming DATE columns so that the name indicates if there is a time element (ADMITTED_DATETIME), or indicating in the name when a column is an inherited key from a parent table (CUSTOMER_ID, CUST_ID, CTSMR_ID… hmm, maybe this is where issues start occurring).
Table and column names are generally defined once, mostly when the application is first designed (please tell me you at least design your schema at the start) and then groups of new ones are introduced when new functionality is added.
But then you need to come up with things like index names, or what your variable names will be in code, and often there are no standards for that. This can also be true of those columns inherited from the other end of a foreign key. So try this:
Naming Secondary Objects
You use a short (generally 4-letter) acronym or alias based on the leading letters of the words in the parent object name.
If the name of the parent object is one word, like PERSON, then the alias is the first four letters – PERS
If the name of the parent object is two words, like PERSON_NAME, then the alias is the first two letters of each word – PENA
If the name of the parent object has more than two words like PATIENT_CASENOTES_FRONTSHEET then the alias is the first letter of each word – PCF. A long table like REGIONAL_SALES_SUMMARY_BY_MONTH would be RSSBM (these are true acronyms)
Applying this naming convention takes no thought and no decision making in 99% of cases. And, especially once you have been looking at the system for a couple of hours, the aliases make total sense:

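For example, running the tables from earlier through the rules:

PERSON                           =>  PERS
ADDRESS                          =>  ADDR
ORDER_LINE                       =>  ORLI
EMPLOYEE_ROLE                    =>  EMRO
INPATIENT_STAY                   =>  INST
PATIENT_CASENOTES_FRONTSHEET     =>  PCF
REGIONAL_SALES_SUMMARY_BY_MONTH  =>  RSSBM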
You get the idea.
You might have wanted to use ORD as the alias for ORDER, as it reads better or you are used to it being reduced to ORD, but stick to the rules. As soon as you break the rules you start losing the benefit, as you have to remember the exceptions, and mistakes are likely to happen “further down” the naming hierarchy. Over the years I have been using this method, it is when I have been less firm about applying the rules rigorously that I have got less benefit. But see the end of this post for what to do when you are forced to make exceptions.
Using The Alias
The first thing you do with the alias is use it in column names. I don’t like having a column called simply “ID” for a system generated unique identifier. It means you have loads of columns in the data dictionary called ID and it makes looking up information on the column a bit more clunky. In many systems I’ve seen a real mix of full_table_name_id, whitespaceremoved_id, part_of_name_id. Not any more; now you can have a rule that it is always tablealias_id. Similarly the primary key created on a table is tablealias_pk.

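As a minimal sketch of the sort of thing (the column list is just for illustration):

create table person
(pers_id      number        not null
,first_name   varchar2(50)
,second_name  varchar2(50)
,dob          date
);

alter table person add constraint pers_pk primary key (pers_id);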
Foreign keys? It’s tablealiasfrom_tablealiasto_fk, so the foreign key from the PERSON table to their current ADDRESS is PERS_ADDR_FK. If things are a little more complex, such as having links from a CUSTOMER table to a business’s head office address and delivery address, you might amend the relevant column names to include a brief descriptor of the address type, and then use the alias rules for that descriptor too, as I show below:

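A sketch of one plausible reading of the rule, assuming an ADDRESS table keyed on ADDR_ID – the descriptors HEAD_OFFICE and DELIVERY get aliases of HEOF and DELI by the same two-word rule:

create table customer
(cust_id       number  not null
,heof_addr_id  number  -- head office address
,deli_addr_id  number  -- delivery address
);

alter table customer add constraint cust_heof_addr_fk
  foreign key (heof_addr_id) references address (addr_id);
alter table customer add constraint cust_deli_addr_fk
  foreign key (deli_addr_id) references address (addr_id);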
The name of the sequence for generating a unique identifier for the CATALOGUE table? CATA_SEQ { I don’t know why people insist on putting SEQ or similar in the name of sequences, it just seems almost universal…}
I also use this naming convention for my indexes. What do you think the below index is on?
PERS_SENA_FINA_DOB
Yes, SECOND_NAME, FIRST_NAME, DOB on the person table. If I see that index name in an explain plan I know immediately what columns are involved on what table. Now, if I was being lazy that index would be SN_FN_DOB and in some ways it is easier to read – but once you start breaking the standard then you have to make a decision when adding indexes if you stick to the rule or not, you get more situations where you might not be sure which column SN really is for (OK, maybe not in this case).
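That is, with the obvious column names, the index DDL pretty much writes itself:

create index pers_sena_fina_dob on person (second_name, first_name, dob);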
Simplification Of Code
Where this all really comes into its own is naming variables and table aliases in code. I strongly believe any production-quality SQL should always use table aliases, and the use of A, B, C etc for them should be punishable with chilli juice in the left eye. With the naming method I describe here you no longer have to think about variable names at all and it makes the code incredibly readable.
I did not have to think about the table aliases when I wrote the below. I did not have to check what I had set them to when I used them in the code. Anyone looking at the code can see that the table aliases mean something, and it makes it clear what each column is and what table it is from, even if the actual column name is a little ambiguous. With aliases of A, B, C or T2134, T3341 etc you endlessly have to keep checking which table (or subquery) is being referenced.

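By way of a sketch (the tables and columns here are invented for the example):

select pers.pers_id
      ,pers.first_name
      ,pers.second_name
      ,addr.house_number
      ,addr.post_code
from   person  pers
      ,address addr
where  pers.addr_id     = addr.addr_id
and    pers.second_name = 'WIDLAKE'
and    pers.dob         = to_date('01-JAN-1990','DD-MON-YYYY');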
(please don’t critique the code or tell me about bugs or *anything*, it’s just a stupid example I knocked up in a text editor to make the point, OK?)
I use the same alias method for naming variables in PL/SQL too, as then I don’t need to think much about the names of many of them. The below is a sample of some PL/SQL code, using a lot of variables. I have a standard batch processing architecture using the tables PROCESS_MASTER, PROCESS_RUN and PROCESS_LOG. What to call the package level record variables derived from them? I prefix all package variables “PV” and record variables are suffixed with “REC” (not RECO, it’s not derived from an object name is my excuse) so they are PV_PRRU_REC & PV_PRMA_REC, no thought needed. And once you spot the other three table names referenced in the code, the other variables make sense. For as long as you look at the code you’ll know what TONA, RONA and ROTY are referring to:

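A cut-down sketch of the idea (the package name, PROCESS_LOG columns and the sequence are my own assumptions, the real code is much longer):

create or replace package body process_batch as

  -- package level record variables: prefixed PV, suffixed REC
  pv_prma_rec  process_master%rowtype;
  pv_prru_rec  process_run%rowtype;

  procedure write_log (p_text in varchar2) is
    prlo_rec  process_log%rowtype;
  begin
    prlo_rec.prlo_id      := prlo_seq.nextval;     -- assumed sequence, named by the same rules
    prlo_rec.prru_id      := pv_prru_rec.prru_id;  -- which run this log row belongs to
    prlo_rec.log_text     := p_text;               -- assumed column
    prlo_rec.created_date := sysdate;
    insert into process_log values prlo_rec;
  end write_log;

end process_batch;
/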
Pesky Issues
The most common drawback of the “four letter alias” (as I inaccurately think of it) is alias clashes. This tends to happen most with table names that are two words long. CONSULTATION_DEPENDENCIES and COUNTRY_DETAILS would both have an alias of CODE. So you pick one and make its alias the first letter of the first word and three letters of the second, so COUNTRY_DETAILS gets an alias of CDET. In my experience of using this naming convention for, oh, 25 years, I’ve only had a few clashes and the switch to a 1:3 letter pattern has worked.
The other drawback is getting an alias that is not, let us say, acceptable in polite company. AREA_SERVICES was an early one. SOFTWARE_HARDWARE_INTEGRATION_TESTS was another, but I strongly suspect that was an aimed-for alias by someone who was not happy in their role. Again, the swapping to a 1:3 letter derivation cures the problem.
There are usually so few exceptions that you are forced to introduce that everyone involved on the system gets to know them very easily.
When I was working on a very, very large project for an energy company I got them to start using this naming convention. People got really quite worked up about the potential for alias clashes, so they introduced a reference table that you populated with all the table names; it would generate the alias and, if that clashed with an existing one, swap to 1:3, then 3:1, then raise an error for manual intervention. I always felt they had taken things a little too far.
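That said, the core derivation is simple enough to wrap in a small function if you want one. A sketch of my own (none of their clash handling, and the regular expressions are just one way to do it):

create or replace function gen_alias (p_name in varchar2) return varchar2 is
  v_name   varchar2(128) := upper(p_name);
  v_words  pls_integer   := regexp_count(v_name,'_') + 1;
begin
  if v_words = 1 then
    -- one word: first four letters
    return substr(v_name,1,4);
  elsif v_words = 2 then
    -- two words: first two letters of each
    return substr(regexp_substr(v_name,'[^_]+',1,1),1,2)
         ||substr(regexp_substr(v_name,'[^_]+',1,2),1,2);
  else
    -- three or more words: first letter of each
    return regexp_replace(v_name,'(^|_)([^_])[^_]*','\2');
  end if;
end gen_alias;
/

select gen_alias('COUNTRY_DETAILS') from dual;  -- CODE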
SUMMARY
Generate a short alias for all objects by taking the first letter(s) of the object name.
Use that in column, constraint, index, and sequence names.
Now use the alias in your code for table aliases and variables.
Greatly reduce both the time you spend coming up with names for these things, and the potential for code confusion.
Spend that time and brain effort on making the actual code do what it is supposed to do and go home early.
Friday Philosophy – Early Mistakes Uncorrected Means Poor Final Systems August 13, 2021
Posted by mwidlake in Architecture, development, Friday Philosophy.Tags: DIY, Humour, system development
4 comments
A few days ago I fitted a sink (hand basin) in my garage. Bear with me, this IS related to I.T.
It’s maybe not so clear from the pictures, but that sink is at an angle (see the spirit level in the second shot, that bubble is way over from being in the centre), leaning back towards the wall on the left. And the splash back is also not straight, it is a good 5 degrees from the vertical. I’m pretty good at DIY – putting up shelves, building cupboards, fixing furniture, building the odd bit of wall, erecting & insulating sheds (which I take power and lighting down to)… And I am very, very meticulous about things like getting things straight and level. As I progressed with the job of putting up this sink and reflected on the process, it reminded me of several application development projects I’ve been involved in (or was brought in to help crowbar back into shape).
Where Did I Go Wrong?
I’ll add some pictures to the bottom of this blog, but for now I’ll just describe what happened. When I offered up the sink to the wall I realised that the pipework (put in by competent builders but with no real consideration of my planned improvements) was exactly where the sink should go. And I also realised that even if the sink was put higher (way too high for me) or lower (well too low for a person of normal height) the pipework would still be an issue. Also, the back wall is not quite vertical and the two walls are not quite at 90 degrees. This sort of thing is not unusual in any building (though slightly vexing in something built only 12 years ago) and to me it’s like the not-quite-perfect situation most application developments start at. You might not be using the best database for the solution, or you might have to use a computer language that is not the best but that you have staff skilled in. There will be bizarre parts of the application you are replacing that you have to keep as the end users have got used to them – or even built business processes around, and no one wants to change those now.
Good developers and systems engineers can work around such issues. A good DIYer can take into account badly placed pipes and wonky walls. I could have cut out and replaced the pipework – but it would have been a lot of work and I am not very good at plumbing (which is why I use push-fit plumbing: less neat & more expensive, but dead easy & quick). This would be like changing those slightly crazy business practices forced by the old system for ones that would benefit the business eventually, but it’s a lot of effort to change. As for the wonky walls, sorting that out would be like re-skilling or replacing my whole development team to move to better technology – Possible, but not going to happen.
The pragmatic solution was to mount the sink on wooden battens and make the battens thick enough to avoid the pipework and allow me to correct for the wonky walls. A cheap, pragmatic solution to the imperfect starting conditions.
Only, I messed up. I spent hours first doing the extra plumbing work to get an isolation valve beyond where the sink & splashbacks would be (so the water could be turned off for maintenance or leaks), and a feed for the hot water unit to come. Then I failed to plane off the battens for the sink properly and they had to come off and be messed with several times. When it came time to mount the sink, which goes on two metal pegs, I screwed one of them in too high. Bloody hell.
I was tired, annoyed. To fix the peg I would have to replace the batten. I did not want to take that sodding batten off the wall and replace it (it had been the hard one to make). It was also the end of the day. So I did a “bodge job” – which means a quick and poor quality alteration to get you past a problem. I packed out the space between one mounting peg and a guessed proper height for the sink. I replaced proper planning and good development with something you throw together at the end of an Agile Sprint, so I could meet my objectives and go home.
The next morning I realised the sink was more than a little uneven – but I decided to continue. And that was it, I had messed up the whole job. In my head I thought I could modify the upright battens, get a little leeway with the tiling and grouting, and make up for the “bodge job”. I won’t bore you with the full details but I was so utterly wrong. What would have been two totally square and vertical battens for one splashback and a pair of slightly trimmed ones for the other turned into endless alterations and re-working, especially as, towards what should have been the end of the job, it was obvious the tiles were going to look awful (good tiling is all about straight, parallel, horizontal, well squared lines – they magnify imperfections, of which I now had many) so more reworking was required.
If I had fixed the sink mounts first thing that morning, I think I would have finished the job in that same morning, not one and a half days later.
It’s Not The Starting Conditions, It’s The Cutting Corners
The more I have thought about that sink, the more it echoes what I’ve seen in the world of computer application development.
Many times, when the development of an application has gone wrong, I hear people blame the starting conditions. “We had to use Java Struts” or “The analytics team demanded they could query anything” or “We had to make the system talk to the old warehouse system on Informix” or whatever. I doubt these starting conditions were really a big problem, you knew about them and your design should have taken them into account. A good designer will plan around them, a good developer will code in an interface, a good DBA will add in the required indexes.
Not properly planning the system or not prototyping the solution will invariably lead to disaster. In my sink scenario, that would have been the case if I had not used my experience and thought about the long term and decided to try to attach the sink to the plasterboard walls directly, just under the pipework. I’d never have got the tiles on and the sink would have fallen off the wall very soon. I’m not saying you don’t see this failure to plan upfront in application development, it has sadly become more common. Too many places think “Agile” or similar is just hacking the system together and fixing what is wrong. That is, to use a coarse term, just bollocks. Agile only works well (in my experience) when done on top of a very good design, created by skilled people. It’s not easy.
However, the most common cause of issues I have witnessed, by a long way, is that of “bodging a solution” – and again it is very strongly associated in my experience with “agile” done wrong, or even worse the “develop to fail – and then fix quickly” mentality. Yes, you will get something that works. Very quickly. And it will probably be an “uneven sink”. It will do the job (hold water, let people register for an event) but as soon as you try and extend the system, it gets harder and harder and harder. And you spend more and more time working through that technical debt you inflicted on yourself. And the end result will be very much sub-optimal. OK, a simple thing will be fine when put together like this (a shelf to hold photographs, a simple calendar app) but a complicated system like a hospital patient administration system or a fraud detection application, a whole bathroom refurbishment… It’s going to be a mess. I’ve seen and suffered from this cascade of crap so many times over the years.
The whole idea of things like sprints, where you are put under tremendous pressure to just make your bit work, causes a lot of bodging of things. Yes, it does, I’ve seen it. I’m sure if things like code review, pair development, and periods of planning are put into Agile (like they are supposed to be) the bodges could be un-bodged. But I simply don’t see this. And the whole “fail fast and fix” idea seems to me to absolutely guarantee a system covered in duct tape and almost unmaintainable. All for the sake of get it done NOW.
I learnt many years ago in I.T. that it is far more efficient if you fix the mistakes when you first see them and fix them well. Always build on a stable base and keep it solid. Otherwise, as you progress, the pain just builds and builds and builds. This latest DIY experience just hammered it home.
The sink I put up does work. None of the pipework leaks, the splashbacks do not have gaps to let water in, the tiles are aligned (mostly), and they are high enough to catch splashes. I know I can put the water heater above it… but it will be higher than I would want and, if I do not make it lean to the left, it will make the sink look even more skew-whiff. But by the end I had put in twice the effort (if not more) than I needed to in order to have a functional sink – and it looks weird.
Do not bodge the beginning of a project, make it right and save yourself from endless pain later on!
For the interested, a pictorial progress of the job.
APEX Connect – A Slightly Different Conference May 13, 2019
Posted by mwidlake in conference, development, Meeting notes.Tags: behaviour, design, knowledge, Meeting
add a comment
I wanted to do a write-up for the APEX Connect conference that happened in Bonn, Germany, a few days ago, as it was a slightly different conference than I normally go to and a slightly different experience for me.
APEX Connect is a German event (put on by DOAG) that is focused on APEX, SQL & PL/SQL, and JavaScript. So more of a developers’ conference. It was an unusual conference for me for a few reasons:
- I don’t generally go to developer-focused events, my current roles tend to be performance & architecture based and, despite having been a developer or development DBA for most of the last 30 years, I (wrongly) mentally class myself as a “DBA type”.
- I was doing a keynote! I’m not used to that, but I was honoured to be asked.
- I was doing only the opening keynote, so once that was out the way I had pretty much 3 days of being a normal delegate. That made a really nice change.
The conference was well attended and well run. A couple of things they did right were the good catering and having coffee always available – good coffee! It really makes a difference and it is something some conferences (including the ones I help organise) struggle with. You have no idea how much venues in the UK want to charge to make coffee available all day, let alone *good* coffee!
Something else that struck me was that the audience was a little younger than many Oracle-focused conferences. This was helped by DOAG’s #NextGen programme which encourages students and recent graduates to come to conferences and meet professionals working in the industry. I met a few of these students/recent students as they had been tasked with asking all the keynote speakers a question, which I thought was a nice way for them to meet these “stars” and realise we are just normal people, doing a job.
Another thing was the male:female ratio. Looking at the sessions I was in, it was about 75%:25%, which in our industry is a little unusual – and very encouraging to see. I had a good discussion on the subject of sex (balance) with Niels de Bruijn, who is the main organiser, and it is a topic I have discussed a few times with Sabine Heimsath, who organised the SQL & PL/SQL stream and who asked me to present. Niels was pleased with the balance at the conference, and I shared my experiences of trying to increase the balance in the UK (I’d love 25%!). It is not a simple issue and I think (note, these are my words and not theirs) that it is almost a lost cause for my generation. I think things went wrong in the home computer industry in the 80’s and continued through the 90’s, with IT being portrayed by the general media as typically male and the IT-focused industry keeping to a very male-focused stance. I won’t stop trying to address the balance but I wonder if where we can really make the difference is in events where the audience is younger…
Anyway, APEX Connect had a nicely balanced and lively feel.
As I said earlier, I had been asked to do the opening keynote. My job was to say something that would be relevant to the whole audience, which was not too technically deep, and set the scene for APEX Connect. An added bonus would be for the audience to leave the talk energised for the rest of the conference. My normal talks are about tech… but I do branch out into talks on presenting, disasters and, err, beer. Talks to entertain basically. So that is what I aimed for, to entertain and to energise.
I’m one of those annoying presenters who does not usually get particularly nervous before a talk, I simply step up to (OK, behind) the lectern and go. But for the second time in 6 months (the other being the opening words for UKOUG 2018) I was really nervous for this. I think it is because when you talk on a technical subject, you are describing how something works and, so long as the audience understand, the rest of the talk (little stories, extra facts) are window dressing – enjoying the talk is a good thing to achieve but is secondary. With a keynote the Window Dressing is actually the main thing, and if the audience does not enjoy the talk you have not done the job. I’m glad to say I got a lot of positive feedback and I was very much relieved. I think I hit “peak enjoyment” for this talk when I described my early career (Builder, Oracle 7 PL/SQL developer, server-side developer, Development DBA) and used the graphical slide here.
Server. (on its) Side. Developer.
I have to say, with that talk out the way I was able to really enjoy being simply “at a conference”, seeing some great talks (Toon Koppelaars, Connor McDonald, Kamil Stawiarski, Eric van Roon – I missed Luiza Nowak & Heli Helskyaho but I had seen those talks before), chatting to lots of people, and enjoying the socialising.
I want to say a big Thank You to the organisers – Simone Fischer, Niels de Bruijn, Tobias Arnhold, Kai Donato and all the others behind the scenes. I know what it’s like doing this! And a special Thank You to Sabine Heimsath for asking me to present and for helping me get places and answering odd questions on German culture!
I’ll finish with what I think conferences and the community are all about. This does not just apply to developers of course, but to all of us in the industry:
Friday Philosophy – Despair of the Dyslexic Developer and Your Help Please June 1, 2018
Posted by mwidlake in development, Friday Philosophy, Perceptions, Private Life.Tags: development, perception, private
2 comments
Like a surprisingly large number of people, I’m dyslexic. I’ve mentioned this before, describing how I found out I was dyslexic and also how I think it is sometimes used as an odd sort-of badge of distinction. I am mildly dyslexic, the letters do not try to “merge or run away” from my eye, if I hit a large word I am unfamiliar with I can visually chop it up and get through it. But that is just me. So, today, I want to ask you all, if you are a dyslexic developer or know one, are there any steps you have taken to reduce the impact?
A recent, slightly jokey, conversation on twitter reminded me of the issues I have had in typing the wrong thing (over and over and over again – my usual example is how often I have tried to “shitdwon” an oracle instance). And that in turn reminded me of a more serious conversation I had when at the OUG Ireland conference back in March.
As a developer, I sometimes struggle to spot spelling mistakes or use of the wrong (or missing) punctuation in my code. As my friend JimTheWhyGuy said in the twitter conversation, try spotting that you have spelt UDPATE wrong. At OUG Ireland I was telling the audience that I was something of a slow developer, partly due to dyslexia. I can stare at code for ages, especially if I am using a construct that is new to me, not understanding why I am getting an error. It is often not a syntax problem but a spelling one. I had real problems with the word “partitioning” (and still do) when I started using that feature – it is a little long and has almost-repeated sections in the middle and I “spin” in the middle if I read it or try to write it. It’s a little too long for my wrists to learn to automatically tap it out.
After the talk a lady came over and asked me if I had any advice on how to reduce the impact of dyslexia when writing code. She’d been diagnosed at school and so had grown up knowing she was dyslexic. (I was not diagnosed as a child, which oddly enough I am still glad about – as I learnt to cope with it in my own way. But I am NOT glad I am dyslexic). I do not know what support and advice she had been given through school, but it was obviously still something that impacted things. All I could come up with were a couple of tricks I use.
One is to copy text into MS Word and see if it highlights anything. You have to teach your version of MS Word (*other word processors with spell checkers are available) that the normal syntax words are real, but all the punctuation and special characters get in the way. Where it does help a lot is reducing the number of errors in specifications & documentation I produce and, now, articles I write. But as I know most of you who come by here have already realised – spelling errors that give another correct word are not picked up by a lot of spell checkers, such as this WordPress site. My blogs are full of missing words, wrong words and other crap.
The other major advance is the use of Software Development Tools (SDTs – and YES, I spelt SDT wrong first time around writing this!) or Interactive Development Environments (IDEs). These highlight syntax errors (so highlighting typos), allow auto-completion of command words and provide common code constructs. They help, but I’ve never been that good at getting the best out of them. I use SQL*Developer more than the others and it does help.
The final thing is that I just factor in that it’s going to take me more time to write or read stuff. Like many dyslexics, there is nothing wrong with my comprehension (I went off the scale for reading age when I was 12) but it takes me longer and is more effort.
Looking around on the web about this, there is a lot of stuff, the above point about IDEs being a main one. One common thing is to use different fonts to help stop letters skipping about or moving, but I don’t have that sort of dyslexia so I’ve never looked into that. I was going to review the topic of dyslexic developers more before putting this article together, but reading it all was taking me too long! That and I found the constant “It gave me an advantage” to be bloody annoying.
So, knowing a few of you out there are also dyslexic to some degree or another, have you any tips to share? If you have something to share but do not want to be identified, contact me directly.
I’d really appreciate it, if not for me then for if ever anyone else asks me how I cope as a dyslexic developer.
My main opt-out of course was to move into performance. It’s somehow more “pictorial” in my mind and you write less code…
Overloaded Indexes (for ODC Appreciation Day) October 10, 2017
Posted by mwidlake in database design, development, performance, SQL.Tags: index organized tables, performance, SQL
add a comment
ODC Appreciation Day is an idea that Tim Hall (aka Oracle-Base) came up with, to show our appreciation for the Oracle Technology Network (OTN)/Oracle Developer Community.
I want to show my support but rather than mention an Oracle “feature” I particularly like I’d like to mention a “trick” that I (and many others) often employ to help performance. That trick is Overloaded Indexes.
We can all appreciate that accessing data in a table via an index is a very good thing to do when we want just the few records for a given index key from the thousands or millions of rows in the table. As an example, we are looking up all the records for someone and we know their first name, last name and date of birth – a very common occurrence in things like hospital or billing systems. So our PERSON table has an index on those three columns. Oracle will read the root block of the index, read the one relevant block in each of the branch levels for the key, find all of the leaf block entries for that key – and then collect the relevant rows from the table. Fig 1 shows how we think of this working, i.e. most of the records we want to find will be close together in the table.
Actually, a lot of people who just “use” the Oracle database as a store of information sort-of think this is how an index always works. It efficiently identifies the rows that you want and that is the end of it. If the index is on the value(s) you are looking up rows for (say LAST_NAME, FIRST_NAME, DATE_OF_BIRTH) the index is ideal and that is as good as it gets.
But in reality, the index lookup is often far less efficient than this and is more like fig 2. Working down the index structure to get all of the required index entries is exactly the same, but the rows you want are scattered all over the table. Oracle has to fetch many table blocks to get your data, maybe as many blocks as there are records to be found. This is far from efficient.
So what can you do about this? You already have the “perfect” index, on LAST_NAME, FIRST_NAME, DATE_OF_BIRTH, the values you are looking up. Maybe you could add another column to the index to avoid those situations where there are many people with the same name and date of birth. But you may not have that extra information, or it is simply not possible to identify the values in the table any more accurately; you really do need all the rows scattered through that table for the given search key.
There are “architectural” things you can do such as create the table as an Index Organised Table (see my little set of blogs about them starting here). You can also use various methods to group the relevant rows together in the table. But all of those methods are Big Impact. You need to recreate the table or plan for this eventuality up-front when you design the system.
But there is a very specific, easy thing you can do to address this particular problem, for the SQL statement you need to speed up. You can add all the columns your query needs into the index. This is an Overloaded Index.
An Overloaded Index holds not only the table columns in the WHERE clause but all the columns needed from that table for the query.
Why does this work? Because when Oracle identifies the range of index entries for the key (LAST_NAME, FIRST_NAME, DATE_OF_BIRTH), all the other columns the query needs are also in those leaf entries and there is no need to get the rows from the table. All those lookups to the table are avoided. Fig 3 at the end of this article demonstrates this.
However, I’ll give you a real world example I had recently. A client had a SELECT statement something like the below, with the execution plan shown, and it was running too slowly for the business requirements:
SELECT (SUM(NVL(T.TRAN_VALUE_CR,0))-SUM(NVL(T.TRAN_VALUE_DB,0)))
     , COUNT(*)
FROM   ACCOUNTS     A
     , TRANSACTIONS T
WHERE  A.ACC_XYZ_IND     =:3
AND    A.ACC_ACCOUNT_NO  =:1
AND    A.ACC_SUBACC_NO   =:2
AND    T.TRAN_XYZ_IND    =A.ACC_XYZ_IND
AND    T.TRAN_ACCOUNT_NO =A.ACC_ACCOUNT_NO
AND    T.TRAN_SUBACC_NO  =A.ACC_SUBACC_NO
AND    T.TRAN_P_IND      =:4
AND    T.TRAN_DLM_DATE  >=TO_DATE(:5,'YYYYMMDD')

--------------------------------------------------------------
| Id | Operation                           | Name            |
--------------------------------------------------------------
|  0 | SELECT STATEMENT                    |                 |
|  1 |  SORT AGGREGATE                     |                 |
|  2 |   NESTED LOOPS                      |                 |
|  3 |    NESTED LOOPS                     |                 |
|* 4 |     INDEX RANGE SCAN                | ACC_PRIME       |
|* 5 |     INDEX RANGE SCAN                | TRAN2_3         |
|* 6 |   TABLE ACCESS BY LOCAL INDEX ROWID | TRANSACTIONS    |
--------------------------------------------------------------

Statistics
----------------------------------------------------------
      4740  consistent gets
      3317  physical reads

The index used on TRANSACTIONS is:

INDEX_NAME                   TABLE_NAME       PSN COL_NAME
---------------------------- ---------------- --- ---------------
TRAN2_3                      TRANSACTIONS     1   TRAN_ACCOUNT_NO
TRAN2_3                      TRANSACTIONS     2   TRAN_SUBACC_NO
TRAN2_3                      TRANSACTIONS     3   TRAN_DLM_DATE
The index TRAN2_3 on the TRANSACTION table that you can see being used in the plan was for all the columns being used in the WHERE clause that actually helped identify the TRANSACTION records required – TRAN_ACCOUNT_NO, TRAN_SUBACC_NO and TRAN_DLM_DATE (the TRAN_XYZ_IND and TRAN_P_IND were always the same so “pointless” to index).
I added a new index to the TRANSACTION table. I added a new index rather than change the existing index as we did not want to impact other code, and we wanted to be able to drop this new index if there were any unexpected problems. I added all the columns on the TRANSACTION table that were in the SELECT list, plus those in the WHERE clause even though they did not help better identify the rows needed. If there had been TRANSACTION columns in an ORDER BY or windowing clause, I would have added them too. So my index looked like this:
create index TRAN2_FQ on TRANSACTIONS
(TRAN_ACCOUNT_NO
,TRAN_SUBACC_NO
,TRAN_DLM_DATE
,TRAN_P_IND
,TRAN_XYZ_IND
,TRAN_VALUE_CR
,TRAN_VALUE_DB)
It is very, very important that the new index holds every column from the TRANSACTION table that the query needs. To prevent accessing the table, all the data the query needs for that table must be in the index.
Oracle could now satisfy the query by just using the new index, as the below explain plan shows.
----------------------------------------------------
| Id | Operation           | Name      |
----------------------------------------------------
|  0 | SELECT STATEMENT    |           |
|  1 |  SORT AGGREGATE     |           |
|  2 |   NESTED LOOPS      |           |
|* 3 |    INDEX RANGE SCAN | ACC_PRIME |
|* 4 |    INDEX RANGE SCAN | TRAN2_FQ  |
----------------------------------------------------

Statistics
----------------------------------------------------
        56  consistent gets
        52  physical reads
There is now no line in the plan for visiting the table TRANSACTIONS and we are using the new TRAN2_FQ index. The consistent gets and physical reads to satisfy the query have gone down from 4740 and 3317 respectively to 56 and 52. I think that is good enough.
Fig 3 shows what is happening. The new index is effectively a “mini IOT” designed to support the given query.
There are of course a few caveats. The new index needs to be maintained, which is an overhead on all INSERT/UPDATE/MERGE/DELETE activity on the table. The index will only remove the need to visit the table for queries that are very, very similar to the one it is designed for – ones that use the same rows from the TRANSACTIONS table or a subset of them. If you alter the query to, e.g., select another column from the TRANSACTION table, you would need to revisit this overloaded index.
Finally, be careful of modifying existing indexes to overload them to support specific queries. If the index is there to support referential integrity you need to think carefully about this, and the modified index may be less efficient for other queries that used the original index (as adding columns to an index makes it “wider”).
BTW if you think you recognise this from a recent Oracle Scene article then you would be right. I was pushed for time, I used something I had written before 🙂
Friday Philosophy – Sometime The Solution Has To Not Only Match The Problem But Also… August 4, 2017
Posted by mwidlake in Architecture, development, Friday Philosophy, Perceptions, Programming.Tags: behaviour, perception, performance, system development
3 comments
…The People!
When you design a system for end users, a good designer/developer considers the “UX” – User eXperience. The system has to be acceptable to the end user. This is often expressed as “easy to use” or “fun” or “quick”. But in reality, the system can fail in all sorts of ways but still be a success if the end user gets something out of using it. I’ve said it before and I’ll say it again and again until I give up on this career. In my opinion:
User Acceptance is the number one aim of any I.T. system.
OK, you all know about UX probably. But what about solutions that have no End Users? I’m thinking about when you create a technical solution or fix for an internal system, to be used by fellow I.T. professionals. How many of you have considered the skills and temperament of the people who are going to house-keep the solution you create? I suppose I have had opportunity to think about this more than some of you due to how I work: I’m a consultant who gets called in to fix things and then leave. At times I have chosen a solution that has been influenced by the people who will be looking after it.
I’ll give you an example. At one site that I worked at for about 9 months, I did a lot of work for one system. The developer/systems administrator who looked after the system was…stupid. I don’t really like saying that, we all vary in our skill set, experience, intelligence, *type* of intelligence (I know some people who can speak 3 languages or know a lot about history but could not wire a plug). But this guy really seemed to struggle with logic, cause-and-effect or learning anything new. And I had to help him look after this database application with one main, huge, hulking table. It had to be partitioned, those partitions maintained and the data archived. I implemented the partitioning, I explained partitions to him several times, what was needed to maintain them, where to look in the data dictionary for information. It was like talking to my mum about it. He just seemed not to understand and his efforts to code something to do what needed to be done were woeful.
I knew it was not me, I’ve run enough training sessions and presented so often that I know I can explain myself (Well, I hope so! Maybe I am deluded). He just was not getting it. Maybe he was in the wrong job. So I wrote him a set of SQL-generating scripts to get him going. He kept messing up running them. In the end, I knew I was about to leave and that when I did, within 3 months the real customer would have a broken system. So I wrote a mini-application in PL/SQL for him to do what needed to be done. And set it to email a central team if it failed. The team he would call when he broke it all again. I also simplified the solution. My original system had some bells and whistles to help with future changes, such as over-riding where new partitions went or how old ones were compressed. I stripped those out to keep it as simple as possible. I altered the solution to suit the person that would run it.
I’ve done something like this a few times over the years. Usually it is more to do with the skill set of the team as opposed to actual ability. I’ve on occasion worked with people who are new to Oracle and my time is limited so, rather than give them a solution written in PL/SQL that none of them know, I have done so with SQL and cookery instructions/shell scripts. It’s not the best solution but it is something they can live with.
More recently I had to look at fixing the performance of some SQL statements. Baselines would have done the job perfectly. However, the team were all Java experts and had no desire at all to learn about database administration. (To be frank, they had no time to learn either, it was the usual situation of them having 75 hours of work each every week as management thought just shouting would get things fixed, not hiring enough people). I strongly suspected that they would forget about the baselines and if they had a problem they would be confused as to what was going on. So I fixed the key SQL statements with a set of hints to force both the overall structure of the execution plans as well as which indexes to use etc – and said over and over and over and over that if they ever changed indexes or migrated to a later version of Oracle, those hints would need reviewing. They were, in effect, part of their code base. A big advantage of the hints was that they would see them in their code and it would remind them what had been done. They seemed happy with that.
My point is, sometimes the “best” solution is not the correct one, even when you are keeping within the walls of the computing department(s). Sometimes you need to think about who you are giving the solution to and change the solution accordingly.
Friday Philosophy – “Technical Debt” is a Poor Term. Try “Technical Burden”? June 30, 2017
Posted by mwidlake in database design, development, Friday Philosophy, Management.Tags: Architecture, design, system development
5 comments
Recently my friend Sabine Heimsath asked a few of us native English speakers what the opposite of “technical debt” was. My immediate reaction was to say:
I’d say (sarcastically) “proper development” or “decent designer” or even “what we did 25 bloody years ago when we were allowed to take pride in the software we created!”
But my next comment was less reactive and more considered. And that was to say that I did not like the phrase “Technical Debt”:
A debt is when you owe something to someone, to be paid back. You do not owe anything to someone when you build poor systems, you are actually creating a “technical burden” – something those in the future will need to live with and may eventually have to sort out. Those who created the bad app or design will probably not be the ones fixing it – as in paying the debt.
That is of course not always true. Some of us have had to end up fixing a poor solution that we implemented – usually implemented despite our protestations that it was a daft thing to do. But the usual case is that a badly thought-out solution is implemented in a rush, with little design, or with inadequate testing, because of a management pressure to be “agile” or “fast moving”. And it is done with cheap or over-stretched resource.
Also, “technical debt” to me sounds too organised and too easy to fix. If you have a financial debt, you simply pay it back with some interest. In almost all situations I have seen where there is a “technical debt”, the interest to pay – the extra effort and time – is considerably more than was saved in the first place. Sometimes it is more than the original cost of the whole project! Loan Shark territory.
When the poorly designed/implemented system falls over in a heap, sometimes the hard-pressed local staff lack the skills or bandwidth to fix it and “Experts” are called in to sort it out. And part of the time taken to fix it is the expert going “why in f**k did you ever think this was a good idea?” (Maybe using better terminology, but that is what they mean!). Being more serious, sometimes the largest slice of time is when, as an “Expert”, you have to persuade the people who own this mess that it really does need sorting out properly, not just another quick hack – and it really will take much, much more effort than what they originally saved by deciding to implement this fast & dirty. Sorry, I mean “lean & mean”.
This situation often has a secondary impact – it makes the people who originally implemented the solution look poor. And that may or may not be fair. I’ve seen many cases where the original staff (including myself) were forced to do things they did not like by timing constraints, lack of budget or simply the ridiculous demands of someone higher up the organisation who thought simply shouting and demanding would make good things happen. They don’t, you end up creating a burden. Though I have also seen poor solutions because the original team were poor.
I think at the moment a lot of what is called “systems development” is more like a desperate drive to constantly cut corners and do things quicker. If it goes wrong, it’s just a debt, we pay it back. No, no it is not. It’s often a bloody mess and a Burden for years. I keep hoping that, like many things in I.T., this will be a phase we can cycle out of and back into doing proper planning and implementation again. Yes, anything that speeds things up without losing planning/design is great. And if you have the skills, you can do proper Agile, designing the detail as you go – IF you have the general over-arching design already in place. But I wish there was more consideration of the later cost of quick & dirty.
So what was the term Sabine wanted? Well, I can’t speak for her, I am not 100% sure what she was looking for. But from my perspective, we should not say “Technical Debt” but “Technical Burden”. And the opposite might be “Technical Investment”. You spend a bit of time and effort now in considering how you can create a solution that can expand or is flexible. I know from my own personal experience that it is when you are given the chance to do those things that you provide a solution that lasts and lasts and lasts. Only when I have been allowed to properly consider the business need do I create something still used in 10 years. Or 15. Or maybe even 20. That might need some checking!
So, if you really want to build systems to support a successful business, and not a short-lived flash, perhaps you should be saying something like:
You are asking me to create a Technical Burden. Would you rather not help me create a Technical Investment?
If nothing else, you might at least be adding a new entry to “Buzzword Bingo”.
Friday Philosophy – When Tech Fails to Deliver, is it Always a Problem? December 9, 2016
Posted by mwidlake in Architecture, development, ethics, Friday Philosophy.Tags: behaviour, knowledge, system development
11 comments
I nipped out to the local supermarket this lunch time to get stuff. I use one of those self-use barcode scanners to log all the goods I put in my basket (apart from the bottle of whisky I was stealing). I then go to the payment machine, scan the “finish shopping” barcode and try to pay. I can’t pay.
I can’t pay as I bought some paracetamol (note to US readers, you know it as acetaminophen). It turns out you need to be 12 to buy paracetamol. Fair enough, but why did I have to stand there and waste 30 seconds of my life before the assistant for the area noticed and came over? She had to use her special device to access the permissions screen, check I was 12 (the greying beard helps) and authorise it.
I asked why I had to wait. “So I can ensure you are old enough – the machine does not know.” But it does! Or at least it should. I’m using their self-scan service for which I have to be registered. They know my name, address, age, hair colour and inside leg measurement. The system knows I am old enough. Plus I have to pay with a credit/debit card (no cash option with this system). You can’t have a credit card until you are 18 in the UK so by using one of them it knows I am old enough to buy the pills – and even the bottle of whisky I was stealing. And when you use any card, it checks your details. So if I was using a debit card it could check my age at that point and stop me when it makes the check. It’s possible and should be done!
The assistant had wandered off long before I finished making this logical case. I was just an annoying customer and she’d done what I needed her to do. But it really annoyed me – it is possible for the system to check me using technology and the data at hand, and not make me wait. The problem is, they were too lazy to build this limited smarts into the system!
There is a lesson here. And that lesson is this – I should stop being such a self-centred, argumentative and miserable old sod. Firstly, I had to wait 30 seconds (and I am probably exaggerating that). Big deal, I had hardly been inconvenienced and it was a lot quicker than going to a normal till. Secondly, the assistant can’t do anything about the software behind the system. I mean, many of us spend our lives working on computer systems and often we can’t make any changes. Thirdly, I am aware that some parents give their children their credit card & number (the idiots!) so even though it is illegal to do this, the result is there are a lot of people under the age of credit who have the means to pay for dangerous things (booze, cigarettes, paracetamol, knives, DIY expanding foam, ‘Viz’ magazine…).
Just because something is possible with the data to hand, sometimes it is not really worth much effort to make it happen.
And sometimes, although it seems logical & sensible given all the parameters (they have my info, no one but me should be using that card) in the real world those rules and data associations are not reliable. There is no enforced RI on our lives, at best there is a set of intended/encouraged limits. A person checking my age is way more reliable than some algorithm in this case.
So next time I whine about waiting 30 seconds in the queue, I hope the assistant just gives me a withering look and tells me to grow up.
I also hope they do not check my basket for un-scanned booze.
(* Just for the record, everything about the whisky was untrue. It was gin).
((And being serious, such systems prevent fraud by two methods.
The first is that 1 in X times you get re-scanned. The assistant has no idea if they scanned anything you had not, and this is on purpose – so there is no scene in the shop. But the comparison is made and recorded, for further action.
The second is that apparently they can spot likely cheats just by the data you give them when you sign up and your spending habits. Now that is ‘Big Data Analysis’.
))
Getting Your Transaction SCN – USERENV(COMMITSCN) January 19, 2016
Posted by mwidlake in development, performance, SQL.Tags: data dictionary, performance, PL/SQL, SQL
3 comments
A few days ago I was introduced (or re-introduced) to USERENV(‘COMMITSCN’) by Jonathan Lewis. This is an internal function that allows limited access to the SCN of your transaction.
I was trying to find a way to get the actual commit SCN easily as it struck me that Oracle would have it to hand somewhere and it would be unique to the change and generated very efficiently. I could not find anything to do it so I asked Jonathan and he pointed me straight to this post he did about it a while back. What a nice chap. However, the post is from 1999 (last CENTURY!) so I thought I should just check it out first…
I’ll say up front that this is an undocumented feature of a now-deprecated function. Well, almost undocumented – older SQL reference manuals mention that ‘commitscn’ returns a NUMBER as opposed to the VARCHAR2 returned by most parameters you can use with USERENV, but it does not list it as a valid parameter for that function.
USERENV has been deprecated since Oracle 10g (see the old 10g documentation link here about USERENV) and you have been instructed to use SYS_CONTEXT(‘userenv’,’parameter‘) as described in the 12c database SQL reference manual here. However, there is no way to get the commit SCN from SYS_CONTEXT that I can find, so I thought I’d check out if USERENV(‘COMMITSCN’) still works. It does, on my version of Oracle 12.1.0.2!
There are some strict limits to this function. To begin with, you can’t select it, you can only use it on insert/update:
-- attempt a simple select of the SCN
mdw> select userenv('commitscn') from dual;
select userenv('commitscn') from dual
       *
ERROR at line 1:
ORA-01725: USERENV('COMMITSCN')  not allowed here

-- But I can use it in an insert
mdw> insert into test_scn (seq_no,vc1,scn_no)
  2  values (100,'abcd',userenv('commitscn'))

1 row created.

mdw> select * from test_scn where seq_no=100

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
       100 abcd         11144739

-- Now commit my new record
mdw> commit;

Commit complete.

mdw> select * from test_scn where seq_no=100
  2  /

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
       100 abcd         11144753

-- LOOK at the value for SCN_NO now! Compare to before the commit!
If you look at the above you will see a couple of things. The first is that, as I said, you cannot SELECT the function USERENV(‘COMMITSCN’).
The other thing is that, though a value is put into the column when I insert a row using the function, and I see that when I query the information back, it changes when I commit. This is because Oracle is recording something at the point of commit, not at the point of the SQL statement running – and the new SCN is only generated when you commit. A lot could have happened since I did the INSERT; I might have gone for a cup of tea and a batch job kicked off doing 1 million transactions, each with its own SCN. So though a placeholder of the current SCN is put into your view of the table row, the value put in the actual table is generated at the time of the commit.
Another limiting rule is that you can only reference USERENV(‘COMMITSCN’) once in a transaction, for one row. If I try and create 2 rows using the function in the same transaction I get an error, and if I try to update more than 1 row I get an error:
mdw> insert into test_scn (seq_no,vc1,scn_no)
  2  values (101,'abcd',userenv('commitscn'))

1 row created.

mdw> insert into test_scn (seq_no,vc1,scn_no)
  2  values (102,'abcd',userenv('commitscn'))
insert into test_scn (seq_no,vc1,scn_no)
*
ERROR at line 1:
ORA-01721: USERENV(COMMITSCN) invoked more than once in a transaction

-- now test updating several records
mdw> commit;

Commit complete.

mdw> select * from test_scn;

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
         1 AAAAAAA      11143743
         2 AAAAAAA      11143746
         3 AAAAAAA      11143749
         4 AAAAAAA      11143774
         5 AAAAAAA      11143777
       100 abcd         11144753
       101 abcd         11145543

mdw> update test_scn set scn_no = userenv('commitscn');
update test_scn set scn_no = userenv('commitscn')
*
ERROR at line 1:
ORA-01721: USERENV(COMMITSCN) invoked more than once in a transaction

-- but one row works
mdw> update test_scn set scn_no = userenv('commitscn') where rownum =1;

1 row updated.
USERENV(‘COMMITSCN’) is old, undocumented and limited in use. So why am I looking at it, let alone even telling you all about it? {Apart from the fact that a lot of you *love* this sort of obscure, tid-bitty stuff}. Well, there is no replacement for it that I am aware of. You can get the current SCN in a couple of ways, the easiest probably being to get it from V$DATABASE:
mdw> select current_scn from v$database;
any key>

CURRENT_SCN
-----------
   11146718
However, that is the last SCN used at the time you check it and is not the SCN when you commit, i.e. it is a different thing. I always find it irksome on those odd occasions when something is deprecated in Oracle with nothing really to replace it.
I just demonstrate again that USERENV(‘COMMITSCN’) is a little special below, and not the same as just selecting SCN from V$DATABASE. Before I go any further, I think the value USERENV(‘COMMITSCN’) puts into the table is the actual COMMIT SCN minus 1. I mostly think this as Jonathan said so :-). I do see each time I run this test that the first select from V$DATABASE and then my insert and a commit straight away results in a value in the table 2 higher than the select.
Further iterations (2nd and 3rd in this case) show the value selected from V$DATABASE and the value inserted into TEST_SCN immediately after are the same, and are 3 higher than the previous iteration. I anticipated an increase of two, once for the change to the UNDO tablespace for the insert and once for the insert. I am not sure where the third one comes in.
However, in the fourth iteration I have a PAUSE in my SQL*Plus script between checking V$DATABASE and doing my insert and, in a second session, I do some simple inserts and commits {it does not matter what, so I don’t show it}. Thus there is a larger difference between the SCN I collected from V$DATABASE and the value inserted into the table.
Finally, in the fifth iteration, I check the value in V$DATABASE, do the insert, query it back and see the two are the same. And THEN I pause so I can do some changes and commit them in my second session. After I’ve done that I continue my first session which commits my latest insert into TEST_SCN. I check the values actually stored in the table and, just as at the top of this post, you see that the value actually preserved in the table is a later SCN than the placeholder one. It is doing something special.
(the below has the noddy code to create my test table and sequence as well as the test)
-- test_scn1
--create table test_scn (seq_no number,vc1 varchar2(10),scn_no number)
-- create sequence scn_seq;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
pause 'commit some stuff in second session and then press any key'
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
commit;
select * from test_scn;
select current_scn from v$database;
insert into test_scn values (scn_seq.nextval,'AAAAAAA',userenv('commitscn'));
pause 'commit some stuff in second session again and then press any key'
select * from test_scn;
commit;
select * from test_scn;
select current_scn from v$database;

-- the output of the test
mdw> @test_scn1

--check V$DATABASE SCN
CURRENT_SCN
-----------
   11147809

-- Create and commit 1 row
1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811

-- note that the inserted SCN is 2 higher than the current SCN.

-- Same steps, 2nd iteration
CURRENT_SCN
-----------
   11147814

1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814

-- select SCN and inserted SCN are the same, 3 higher than first iteration

-- same steps, 3rd iteration
CURRENT_SCN
-----------
   11147817

1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817

-- select SCN and inserted SCN are the same, 3 higher than the 2nd iteration

-- 4th iteration, a pause
CURRENT_SCN
-----------
   11147820

'commit some stuff in second session and then press any key'
-- I did indeed change and commit some stuff in second session, before I create my record in test_scn

1 row created.
Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
        14 AAAAAAA      11147831

-- larger gap in SCN (11147817 to 11147831)

-- 5th iteration, pause now after insert and before commit
CURRENT_SCN
-----------
   11147834

1 row created.

'commit some stuff in second session again and then press any key'
-- I did indeed change and commit some stuff in second session

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
        14 AAAAAAA      11147831
        15 AAAAAAA      11147834

-- Notice the current_scn from V$DATABASE and the last row in the table match, 11147834.

Commit complete.

    SEQ_NO VC1            SCN_NO
---------- ---------- ----------
        11 AAAAAAA      11147811
        12 AAAAAAA      11147814
        13 AAAAAAA      11147817
        14 AAAAAAA      11147831
        15 AAAAAAA      11147842

-- But after the commit the SCN in row "15" has increased to 11147842.

CURRENT_SCN
-----------
   11147851

-- and the next check of V$DATABASE SCN shows the usual increase of 3 by the commit.
As you can see from the above, USERENV(‘COMMITSCN’) is doing something a little special and, despite all the limitations, I might actually have a use for it…
Where do my trace files go? V$DIAG_INFO October 19, 2015
Posted by mwidlake in development, performance, SQL Developer.Tags: performance, SQL, SQL developer
1 comment so far
Where do Oracle trace files go? I don’t know why this piece of info will not stick in my head, I seem to have to look it up 3 or 4 times a year.
If only I had an easy way to find out. There is a very easy way to find out – and that piece of info won’t stay in my head either. So this really is a blog post just for stupid, forgetful me.
V$DIAG_INFO has been available since Oracle 11. All the trace files go into the Automatic Diagnostic Repository (ADR) by default.
ora122> desc v$diag_info
 Name                  Null?    Type
 --------------------- -------- ---------------
 INST_ID                        NUMBER
 NAME                           VARCHAR2(64)
 VALUE                          VARCHAR2(512)
 CON_ID                         NUMBER
Quick sql*plus script to get it out:
-- diag_info
-- quick check of the new v$diag_info view that came in with 11
col inst_id form 9999 head inst
col name form a25
col value form a60 wrap
spool diag_info.lst
set lines 120
select * from v$diag_info
order by name
/
spool off
Contents:
 INST_ID NAME                  VALUE                                                              CON_ID
-------- --------------------- ----------------------------------------------------------------- -------
       1 Diag Enabled          TRUE                                                                     0
       1 ADR Base              D:\APP\ORACLE                                                            0
       1 ADR Home              D:\APP\ORACLE\diag\rdbms\ora122\ora122                                   0
       1 Diag Trace            D:\APP\ORACLE\diag\rdbms\ora122\ora122\trace                             0
       1 Diag Alert            D:\APP\ORACLE\diag\rdbms\ora122\ora122\alert                             0
       1 Diag Incident         D:\APP\ORACLE\diag\rdbms\ora122\ora122\incident                          0
       1 Diag Cdump            D:\app\oracle\diag\rdbms\ora122\ora122\cdump                             0
       1 Health Monitor        D:\APP\ORACLE\diag\rdbms\ora122\ora122\hm                                0
       1 Default Trace File    D:\APP\ORACLE\diag\rdbms\ora122\ora122\trace\ora122_ora_7416.trc         0
       1 Active Problem Count  0                                                                        0
       1 Active Incident Count 0                                                                        0
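And if all I want is the trace file for my current session, it is right there in the view:

select value
from   v$diag_info
where  name = 'Default Trace File';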
I should add some notes later about setting the trace file identifier…
Ohhh, OK, I’ll do it now. To make it easier to identify your trace file, set tracefile_identifier
alter session set tracefile_identifier = 'mdw151019';

--Now if I create a quick trace file
alter session set sql_trace=true;
@test_code
alter session set sql_trace=false;
I now go to the Diag Trace directory I identified via V$DIAG_INFO and look for my trace files. I could just look for the latest ones or do a wildcard search on my tracefile_identifier string and, pop, there we are:
19/10/2015 13:59 39,751 ora122_ora_7416_mdw151019.trc
19/10/2015 13:59 426 ora122_ora_7416_mdw151019.trm
If you want a taste of the numerous ways of initiating a 10046 trace, sometimes called a SQL trace, see Tim Hall’s excellent post on his Oracle Base website:
https://oracle-base.com/articles/misc/sql-trace-10046-trcsess-and-tkprof
Oh, one final nice thing. You can open trace files in SQL Developer and play with what information is shown. Maybe I should do a whole piece on that…
Actually, these two posts from Oraclenerd and Orastory will get you going, it’s pretty simple to use in any case:
http://www.oraclenerd.com/2010/02/soug-sql-developer-with-syme-kutz.html
https://orastory.wordpress.com/2015/02/27/sql-developer-viewing-trace-files/