
Counting the Cost #4 – The speedy way May 26, 2009

Posted by mwidlake in performance.

This is my fourth look at counting the number of rows in a table. Here I will look at even faster ways of finding the number of records in a table, which are particularly useful for tables with millions or billions of rows. You can get a good estimate in much less than a second for a table of any size.

So far I have shown with this posting here that the myths and arguments about whether to use count(*), count(1), count(pk) etc. are immaterial with modern versions of Oracle: they all take the same effort under the CBO.
This next post looks at how indexes are used when you count a single column in a table and why it is not always a good idea.
The third post shows how Oracle will use the smallest index it can to select count(*) and in particular how a bitmap index is so useful for this.

To count the number of rows in your table for sure, you have little choice but to scan either the table or an index on a mandatory column (or columns). You may well not have bitmap indexes on your biggest tables (bitmap indexes do have some major drawbacks), so the best the CBO can usually do is scan the primary key index (usually the smallest index on mandatory columns). On a table with 10 million rows, this index could easily hold 20,000 leaf blocks (8k block size) and could take dozens of seconds to minutes to scan, depending on how powerful your system is and how much of the index is in the block buffer cache.

{This is my script for the tests below, but note that it uses a table, PERSON, that it does not create. You can still check out the syntax of things, though.}

This is select count(*) on our small, 20,000 row table.

test102>select count(*) from test_1;

  COUNT(*)
----------
     19999

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 454320086

----------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Cost (%CPU)| Time     |
----------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    11   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE       |      |     1 |            |          |
|   2 |   INDEX FAST FULL SCAN| T_PK | 19999 |    11   (0)| 00:00:01 |
----------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         44  consistent gets
          0  physical reads

For a table with more than a few thousand records, selecting the stored number of rows (NUM_ROWS, as held in the data dictionary and exposed via ALL_TABLES) is quicker:

-- a different way to get the info
select table_name,num_rows from all_tables
where owner=USER and table_name='TEST_1';

TABLE_NAME                       NUM_ROWS
------------------------------ ----------
TEST_1                              19999

Elapsed: 00:00:00.04

Execution Plan
----------------------------------------------------------
Plan hash value: 711593612

--------------------------------------------------------------------------
| Id  | Operation                             | Name      | Rows | Bytes | Cost
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |           |    1 |   180 |    9
|*  1 |  FILTER                               |           |      |       |
|*  2 |   HASH JOIN                           |           |    1 |   180 |    9
|   3 |    MERGE JOIN CARTESIAN               |           |    1 |   167 |    8
|   4 |     NESTED LOOPS OUTER                |           |    1 |   112 |    8
|   5 |      NESTED LOOPS OUTER               |           |    1 |   107 |    8
|   6 |       NESTED LOOPS OUTER              |           |    1 |   104 |    7
|   7 |        NESTED LOOPS OUTER             |           |    1 |    96 |    6
|   8 |         NESTED LOOPS                  |           |    1 |    85 |    5
|   9 |          NESTED LOOPS                 |           |    1 |    82 |    4
|  10 |           NESTED LOOPS                |           |    1 |    50 |    3
|  11 |            TABLE ACCESS BY INDEX ROWID| USER$     |    1 |    14 |    1
|* 12 |             INDEX UNIQUE SCAN         | I_USER1   |    1 |       |    0
|* 13 |            TABLE ACCESS BY INDEX ROWID| OBJ$      |    1 |    36 |    2
|* 14 |             INDEX RANGE SCAN          | I_OBJ2    |    1 |       |    1
|* 15 |           TABLE ACCESS CLUSTER        | TAB$      |    1 |    32 |    1
|* 30 |    INDEX RANGE SCAN                   | I_OBJAUTH |    1 |     7 |    2
|* 31 |    FIXED TABLE FULL                   | X$KZSRO   |    2 |    26 |    0
|* 32 |     FIXED TABLE FULL                  | X$KZSPR   |    1 |    26 |    0
<snip>
Statistics
----------------------------------------------------------
        224  recursive calls
          0  db block gets
         60  consistent gets
          0  physical reads
          0  redo size
        480  bytes sent via SQL*Net to client
        381  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          8  sorts (memory)
          0  sorts (disk)
          1  rows processed

Now, have I lied? You can see from the above that it took 0.04 seconds to select NUM_ROWS from the ALL_TABLES view. I trimmed the plan, which was 32 lines long, but you can still see that it is looking at some very odd things: internal objects in the data dictionary. (You may not see this if you run my script; I use a DBA-privileged account and so I see the messy Oracle internals.) {It is very interesting to look at Oracle's internals this way, but not now.}

This was the first execution, so Oracle has to parse what looks like a simple statement but, because it is on a view (ALL_TABLES) that sits on top of many internal objects, the plan is quite complex and the parse effort is quite high. However, the predicted execution cost is low (9) and a second run shows that the number of buffer gets is also low:

-- again without the extensive plan output
set autotrace on stat
select table_name,num_rows from all_tables
where owner=USER and table_name='TEST_1';

TABLE_NAME                       NUM_ROWS
------------------------------ ----------
TEST_1                              19999

Elapsed: 00:00:00.00

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         22  consistent gets
          0  physical reads

So the number of consistent gets drops to 22 and the elapsed time to below the 0.01-second resolution of the timer. That compares to a cost of 11 and 44 consistent gets for the primary key scan.
This cost of checking the data dictionary will be pretty much the same for any normal table, no matter how large.

Here I use another test table of mine (not created by the test script, I am afraid). I show the second run of selecting count(*) and also of selecting NUM_ROWS from ALL_TABLES.

-- Now to look at a bigger table
set autotrace on
select count(*) from person;

  COUNT(*)
----------
     82563

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 1755611177

-------------------------------------------------------------------------
| Id  | Operation             | Name    | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |         |     1 |    44   (5)| 00:00:01 |
|   1 |  SORT AGGREGATE       |         |     1 |            |          |
|   2 |   INDEX FAST FULL SCAN| PERS_ID | 82563 |    44   (5)| 00:00:01 |
-------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        191  consistent gets
          0  physical reads

select table_name,num_rows from all_tables
where owner=USER and table_name='PERSON';

TABLE_NAME                       NUM_ROWS
------------------------------ ----------
PERSON                              82563

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 711593612

-----------------------------------------------------
| Id  | Operation        | Name | Rows | Bytes | Cost
-----------------------------------------------------
|   0 | SELECT STATEMENT |      |    1 |   180 |    9

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         22  consistent gets
          0  physical reads

Table PERSON is bigger: the CBO estimates the cost of a scan on the index as 44 and the run takes 191 consistent gets. The select from ALL_TABLES has a cost of 9 and 22 consistent gets, the same as for the smaller table.

Isn’t that great?

Well, maybe. The number of rows we are getting is the count of rows for that table the last time it was analysed. On version 9 of Oracle, that is probably when the DBA last scheduled an analyze or dbms_stats gather statement to run against the table {either directly or via a schema or whole database gather}. On Oracle 10 or 11 with the auto stats job, it should be within 10% as Oracle automatically re-gathers stats for any table changed by 10% or more since the last analyze, every night/over the weekend.
The accuracy of the count will be down to when it last ran, what the sample size was and how active the table is.

The question is, do you actually need to know the EXACT number of records, or just the number to a reasonable level of precision? Is the level of precision of the last gather OK? You can check the column LAST_ANALYZED on ALL_TABLES to see when that was. Sometimes it is accurate enough.
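For example, a quick sanity check before trusting NUM_ROWS might look like the sketch below, using the TEST_1 table from earlier. SAMPLE_SIZE is a real ALL_TABLES column and tells you how many rows the last gather actually looked at:

```sql
-- How fresh, and how well sampled, are the stored stats for this table?
select table_name
      ,num_rows
      ,sample_size
      ,last_analyzed
from   all_tables
where  owner      = USER
and    table_name = 'TEST_1';
```

If LAST_ANALYZED is recent and the table is not heavily modified, NUM_ROWS is probably close enough.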

As a quick aside, if you have wide enough privileges (DBA or SELECT ANY DICTIONARY) and the time to play with the internal objects, you can construct a little script like the one below, which is even faster, as it only gets what you want and skips the access-privilege checks {second iteration shown}:

set autotrace on
-- If you have the right account privs, you can look direct
-- into the data dictionary.
select o.name, o.owner#
      ,t.obj#, t.rowcnt
from  sys.obj$  o
     ,sys.tab$  t
     ,sys.user$ u
where o.name  = 'PERSON'
and   o.obj#  = t.obj#
and   u.name  = 'TRAINER'
and   u.user# = o.owner#;

NAME                               OWNER#       OBJ#     ROWCNT
------------------------------ ---------- ---------- ----------
PERSON                                 62      52732      82563

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 3546182522

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |    1 |    55 |     4   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                 |         |    1 |    55 |     4   (0)| 00:00:01 |
|   2 |   NESTED LOOPS                |         |    1 |    47 |     3   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| USER$   |    1 |    14 |     1   (0)| 00:00:01 |
|*  4 |     INDEX UNIQUE SCAN         | I_USER1 |    1 |       |     0   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID| OBJ$    |    1 |    33 |     2   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN          | I_OBJ2  |    1 |       |     1   (0)| 00:00:01 |
|   7 |   TABLE ACCESS CLUSTER        | TAB$    |    1 |     8 |     1   (0)| 00:00:01 |
|*  8 |    INDEX UNIQUE SCAN          | I_OBJ#  |    1 |       |     0   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         10  consistent gets
          0  physical reads

So, looking at the data in the data dictionary is fast {and it should be, this is some of the basic information the CBO looks at to decide on its execution path and so Oracle Corp make it as fast as it reasonably can}. But it is potentially pretty inaccurate.

Do you remember me mentioning that Oracle 10 upwards {and Oracle 9 if you turn on monitoring on a table and call dbms_stats correctly} collects stats on a table when it changes by more than 10%?

How does Oracle know?

Oracle keeps track of changes to tables. It seems to me to be pretty damned accurate in most circumstances {I’ll elaborate in a bit}.

The view to look at is ALL_TAB_MODIFICATIONS or DBA_TAB_MODIFICATIONS.

desc all_tab_modifications
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 TABLE_OWNER                                        VARCHAR2(30)
 TABLE_NAME                                         VARCHAR2(30)
 PARTITION_NAME                                     VARCHAR2(30)
 SUBPARTITION_NAME                                  VARCHAR2(30)
 INSERTS                                            NUMBER
 UPDATES                                            NUMBER
 DELETES                                            NUMBER
 TIMESTAMP                                          DATE
 TRUNCATED                                          VARCHAR2(3)
 DROP_SEGMENTS                                      NUMBER

As you can see, it shows the number of inserts, updates and deletes to the table. A few warnings:

– It does not record changes that avoided the SQL layer, i.e. direct-path loads, the /*+ append */ hint, or the many other tricks for shoving a great chunk of data into a table quickly. All normal inserts/updates/deletes are caught, though.

– The data is flushed from memory to the data dictionary “regularly”. Every 3 hours in Oracle 9, every 15 minutes in Oracle 10.1 and it seems only when a dbms_stats.gather statement is run on 10.2. I am afraid I have not yet checked on 11.

But you can {if you have the privs} flush the data down yourself, and it seems very quick:

test102>-- You need to flush the stats down
test102>exec dbms_stats.flush_database_monitoring_info

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.23
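Once flushed, you can query the view directly for the raw counters. This is a minimal sketch using the owner and table names from this post; substitute your own:

```sql
-- Raw change counters for one table since its stats were last gathered
select inserts, updates, deletes, timestamp, truncated
from   all_tab_modifications
where  table_owner    = 'TRAINER'
and    table_name     = 'PERSON'
and    partition_name is null;
```

The `partition_name is null` predicate picks out the table-level row; partitioned tables also get a row per partition.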

And now a script for you to get the information. If you look {here} there is the SQL*Plus script I use all the time to do this. Feel free to steal and use it; I'd appreciate it if you left my initials in the top comment, though:

select dbta.owner||'.'||dbta.table_name                    tab_name
      ,dbta.num_rows                                       anlyzd_rows
      ,to_char(dbta.last_analyzed,'yymmdd hh24:mi:ss')     last_anlzd
      ,nvl(dbta.num_rows,0)+nvl(dtm.inserts,0)
       -nvl(dtm.deletes,0)                                 tot_rows
      ,nvl(dtm.inserts,0)+nvl(dtm.deletes,0)+nvl(dtm.updates,0) chngs
      ,(nvl(dtm.inserts,0)+nvl(dtm.deletes,0)+nvl(dtm.updates,0))
        /greatest(nvl(dbta.num_rows,0),1)                  pct_c
      ,dtm.truncated                                       trn
from dba_tables dbta
-- replace below with all_tab_modifications if you need
left outer join sys.dba_tab_modifications dtm
   on  dbta.owner          = dtm.table_owner
   and dbta.table_name     = dtm.table_name
   and dtm.partition_name is null
where dbta.table_name = 'PERSON'
and   dbta.owner      = 'TRAINER';

TAB_NAME        ANLYZD_ROWS LAST_ANLZD        TOT_ROWS  CHNGS PCT_C
--------------------------------------------------------------------
TRAINER.PERSON       82,563 090517 18:42:22     82,563      0  .000

1 row selected.

Elapsed: 00:00:00.06

Execution Plan
----------------------------------------------------------
Plan hash value: 1769260204

-----------------------------------------------------
| Id  | Operation        | Name | Rows | Bytes | Cost
-----------------------------------------------------
|   0 | SELECT STATEMENT |      |    3 |   396 |   22

Statistics
----------------------------------------------------------
        224  recursive calls
          0  db block gets
         98  consistent gets
          0  physical reads
          0  redo size
        796  bytes sent via SQL*Net to client
        381  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          9  sorts (memory)
          0  sorts (disk)
          1  rows processed

OK, 0 changes. Not so informative? Well, it is: it means NUM_ROWS in the data dictionary is up to date, and if you check back to the count(*) on that table, it is indeed 82563. But, just for you, I am going to damage my test PERSON table, to show what happens.
I added about 27,500 rows and deleted about 1,250, committed the changes and re-ran the script.
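If you want to reproduce that sort of damage yourself, the DML might look like the sketch below. This is purely illustrative: the original post does not show the actual statements, and the PERSON_BACKUP table is invented for the example:

```sql
-- Hypothetical: copy in roughly 27,500 rows from a backup copy of the table
insert into person
select * from person_backup where rownum <= 27500;

-- Hypothetical: remove roughly 1,250 rows
delete from person where rownum <= 1250;

commit;
```

The exact numbers do not matter; any committed DML that goes through the SQL layer will show up in the modification counters once they are flushed.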

@tab_count
exec dbms_stats.flush_database_monitoring_info

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03

Enter value for tab_name: person
old  15: where dbta.table_name like upper(nvl('&Tab_name','WHOOPS'))
new  15: where dbta.table_name like upper(nvl('person','WHOOPS'))
Any Key>
TAB_NAME        ANLYZD_ROWS LAST_ANLZD        TOT_ROWS  CHNGS PCT_C
--------------------------------------------------------------------
TRAINER.PERSON       82,563 090517 18:42:22    108,795 28,730  .348

1 row selected.

Elapsed: 00:00:00.95

Execution Plan
----------------------------------------------------------
Plan hash value: 4225397431

-----------------------------------------------------
| Id  | Operation        | Name | Rows | Bytes | Cost
-----------------------------------------------------
|   0 | SELECT STATEMENT |      |    2 |   478 |   63

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         71  consistent gets
          0  physical reads

I forgot to turn off the pause, so the statement took almost a second! But the cost and consistent gets show it is a lightweight statement. How accurate is it?

select count(*) from person;

  COUNT(*)
----------
    108795

1 row selected.

Elapsed: 00:00:00.56

Execution Plan
----------------------------------------------------------
Plan hash value: 1755611177

-------------------------------------------------------------------------
| Id  | Operation             | Name    | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |         |     1 |    44   (5)| 00:00:01 |
|   1 |  SORT AGGREGATE       |         |     1 |            |          |
|   2 |   INDEX FAST FULL SCAN| PERS_ID | 82563 |    44   (5)| 00:00:01 |
-------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        250  consistent gets
          0  physical reads

The count(*) has a cost of 44 and takes 250 consistent gets. The "estimate" from using SYS.DBA_TAB_MODIFICATIONS is 108,795, which exactly matches the count(*).

I use SYS.DBA_TAB_MODIFICATIONS a lot now. After all, the fastest way to do something is to not do it, so I don’t count large tables, I get Oracle to tell me to the best of its knowledge with what it has already stored.

It will not work properly if you do direct inserts, but I have never really seen the information in DBA_TAB_MODIFICATIONS be out by much at all if you are dealing with normal DML. There is no need to constantly gather stats to keep the information current, and it is a lot, lot faster than select count(*).

So, next time your boss wants to know how many records are in the ORDER_ITEMS table, don't wait an hour for a count(*) to finish; look at ALL/DBA_TAB_MODIFICATIONS. OK, it might be out by a few, but then, depending on whether you had your cup of tea before or after you kicked off the count(*), which would give the more "accurate" count to email to your boss?

Comments»

1. Randolf Geist - June 25, 2009

Martin,

another idea for quickly counting the number of rows of a very large table, if you can't rely on the statistics, is to use the SAMPLE option, e.g.:


select count(*) * 100 as approx_cnt from some_large_tab sample (1) [block];

This will also use indexes if applicable and apply the sampling to the index, too. The result of course needs to be scaled up by the sampling factor. The default is row sampling; the optional "block" keyword enables random block sampling, which might be faster (but in most tests isn't really) and is less accurate if your blocks have significantly different numbers of rows.

Note that DBA/ALL/USER_TAB_MODIFICATIONS can be misleading if you have a misbehaving application that often rolls back table modifications, because the rolled-back changes are still counted.

This can be demonstrated using a simple test case:


drop table tab_mod_test purge;

create table tab_mod_test
as
select * from all_objects
where 1 = 2;

exec dbms_stats.flush_database_monitoring_info

select * from user_tab_modifications;

insert into tab_mod_test
select * from all_objects where rownum <= 1000;

rollback;

exec dbms_stats.flush_database_monitoring_info

select * from user_tab_modifications;

Which will show you 1,000 inserts which didn’t really happen.

Regards,
Randolf

2. mwidlake - June 25, 2009

That’s a nice alternative Randolf. I wonder if that is more accurate than taking the rows_per_block from the segment stats and multiplying it by the number of blocks currently in the segment.

I think that in general, so long as direct operations are not being used, the use of xxx_tab_modifications is very powerful. Most live systems do not have a lot of rollback of changes, though that view might be skewed as I simply have not seen a lot of systems that regularly roll back the data they change or create. It comes down to the usual cover-all in the Oracle world, "it depends". Some Oracle sites probably do a lot of rollback of DML. In my experience, the biggest problem is direct load activity, but then you can add the number of rows processed by the direct loads to the xxx_tab_modifications figures and the current row count.

Like you, I have not found block sampling to gain you much at all, but others might find it does.

I need to return to this topic as I have not done the last section about partitioned tables… 🙂

3. Counting the Cost #5 – accurate and fast « Martin Widlake’s Yet Another Oracle Blog - July 9, 2009

[…] is the previous post on the […]

4. Automated Statistics Gathering Silently Fails #2 « Martin Widlake’s Yet Another Oracle Blog - July 23, 2009

[…] tables/partitions with more than 10% difference. You do this via DBA_TAB_MODIFICATIONS. I’ve already posted about this table {look towards the end of the post}, I am strangely attached to it, I think it is jolly […]

5. DBA_TAB_MODIFICATIONS « Martin Widlake's Yet Another Oracle Blog - July 2, 2010

[…] to see what else I said about DBA_TAB_MODIFICATIONS then the end of this {way too long} post shows how to use it to get a very fast count of rows in a table and if you have issues with permissions flushing database monitoring see this link about […]

6. Kostas Hairopoulos - July 4, 2010

Interesting article. The URL for the script seems broken

mwidlake - July 4, 2010

Thanks for letting me know about the broken link Kostas. Seems that the web service I used in my early days of blogging (hotlink files) no longer links to the files I put there. I have my own web site now so, once I have time (which I am afraid looks like August), I’ll do some housekeeping and sort out those file links.

In the meantime, if anyone really wants any of the scripts you can no longer link to, email me and I'll send them on.

