Recently one of my articles was published in IOUG SELECT magazine (Q4 2010). In the article I share some scripts I use in my day-to-day performance analysis. Here is the link to the document:
Oracle AWR Trending
Oracle AWR Trending - IOUG SELECT Magazine - Quarter 4 2010
Saturday, December 25, 2010
Thursday, January 29, 2009
AWR Trending (Load Profile statistics)
The following query can be used on 10g for trending of Load Profile statistics. Just pass the date, and it will print the hourly (or whatever snapshot interval you have set) physical reads/sec, physical writes/sec, and other statistics.
--alter session set nls_date_format='dd-mon-yy';
set lines 130 pages 1000
col stat_name for a25
col BEGIN_INTERVAL_TIME for a25
col END_INTERVAL_TIME for a25
--col redo_size for '999,999,990.99'
--col sess_l_reads for '999,999,990.99'
--col blk_change for '999,999,990.99'
--col phy_reads for '999,999,990.99'
--col phy_writes for '999,999,990.99'
--col user_calls for '999,999,990.99'
--col parse_count_tot for '999,999,990.99'
--col parse_count_hard for '999,999,990.99'
--col sort_disk for '999,999,990.99'
--col logons for '999,999,990.99'
--col execute_count for '999,999,990.99'
--col trans for '999,999,990.99'
select
date_time,
sum(case WHEN stat_name='redo size' then round((e_val - b_val)/sec,2) else null end) redo_size,
sum(case WHEN stat_name='session logical reads' then round((e_val - b_val)/sec,2) else null end) sess_l_reads,
sum(case WHEN stat_name='db block changes' then round((e_val - b_val)/sec,2) else null end) blk_change,
sum(case WHEN stat_name='physical reads' then round((e_val - b_val)/sec,2) else null end) phy_reads,
sum(case WHEN stat_name='physical writes' then round((e_val - b_val)/sec,2) else null end) phy_writes,
sum(case WHEN stat_name='user calls' then round((e_val - b_val)/sec,2) else null end) user_calls,
--sum(case WHEN stat_name='parse count (total)' then round((e_val - b_val)/sec,2) else null end) parse_count_tot,
--sum(case WHEN stat_name='parse count (hard)' then round((e_val - b_val)/sec,2) else null end) parse_count_hard,
--sum(case WHEN stat_name='sorts (disk)' then round((e_val - b_val)/sec,2) else null end) sort_disk,
sum(case WHEN stat_name='logons cumulative' then round((e_val - b_val)/sec,2) else null end) logons,
sum(case WHEN stat_name='execute count' then round((e_val - b_val)/sec,2) else null end) execute_count,
round((sum(case WHEN stat_name='user commits' then (e_val - b_val)/sec else null end) +
sum(case WHEN stat_name='user rollbacks' then (e_val - b_val)/sec else null end)),2) trans
from
(
select
to_char(sn.BEGIN_INTERVAL_TIME,'mm/dd/yy_hh24_mi')|| to_char(sn.END_INTERVAL_TIME,'_hh24_mi') Date_Time,
b.stat_name stat_name,
e.value e_val,
b.value b_val,
(extract( day from (end_interval_time-begin_interval_time) )*24*60*60+
extract( hour from (end_interval_time-begin_interval_time) )*60*60+
extract( minute from (end_interval_time-begin_interval_time) )*60+
extract( second from (end_interval_time-begin_interval_time)) ) sec
FROM
dba_hist_sysstat b,
dba_hist_sysstat e,
dba_hist_snapshot sn
where
trunc(sn.begin_interval_time) ='&Date' and
b.snap_id(+) = e.snap_id-1
and e.snap_id = sn.snap_id
and b.dbid(+) = e.dbid
and e.dbid = (select dbid from v$database)
and sn.dbid = (select dbid from v$database)
and b.instance_number(+) = e.instance_number
and e.instance_number = (select instance_number from v$instance)
and sn.instance_number = (select instance_number from v$instance)
and b.stat_name = e.stat_name
and b.stat_name in (
'redo size',
'session logical reads',
'db block changes',
'physical reads',
'physical writes',
'user calls',
'parse count (total)',
'parse count (hard)',
'sorts (disk)',
'logons cumulative',
'execute count',
'transactions',
'user commits',
'user rollbacks'
)
)
group by
date_time
Order by date_time
;
Sample output:
Enter value for date: 02-aug-09
old 32: trunc(sn.begin_interval_time) ='&Date' and
new 32: trunc(sn.begin_interval_time) ='02-aug-09' and
DATE_TIME REDO_SIZE SESS_L_READS BLK_CHANGE PHY_READS PHY_WRITES USER_CALLS LOGONS EXECUTE_COUNT TRANS
-------------------- ---------- ------------ ---------- ---------- ---------- ---------- ---------- ------------- ----------
08/02/09_00_00_01_00 15765.21 174.27 117.67 1.22 2.32 1.53 .05 6.43 .15
08/02/09_01_00_02_00 812.08 40.25 4 .03 .35 1.52 .05 3.35 .15
08/02/09_02_00_03_00 899.51 40.45 4.66 .05 .36 1.51 .05 3.31 .15
08/02/09_03_00_04_00 756.55 38.48 3.7 .03 .36 1.51 .05 3.21 .15
08/02/09_04_00_05_00 787.75 38.62 3.84 .03 .35 1.9 .07 3.48 .15
08/02/09_05_00_06_00 1171.49 289.58 41.65 .22 .49 1.69 .06 15.19 .16
08/02/09_06_00_07_00 11280.69 912.41 279.12 2.85 1.89 1.6 .06 13.59 .23
08/02/09_07_00_08_00 793.5 37.21 3.7 .04 .36 1.94 .07 3.45 .14
08/02/09_08_00_09_00 972.47 40.36 4.51 .04 .36 1.77 .06 3.48 .15
08/02/09_09_00_10_00 799.36 42.63 3.97 .03 .35 1.83 .06 3.41 .15
08/02/09_10_00_11_00 2206.13 430.48 90.97 .84 .63 1.69 .06 16.9 .18
08/02/09_11_00_12_00 743.9 36.13 3.47 .03 .35 1.49 .06 3.4 .14
08/02/09_12_00_13_00 845.58 39.75 4.14 .03 .35 1.89 .06 3.54 .15
08/02/09_13_00_14_00 826.71 40.49 3.95 .03 .35 1.69 .06 3.35 .15
08/02/09_14_00_15_00 1682.08 317.59 59.09 .06 .54 1.91 .06 15.66 .17
08/02/09_15_00_16_00 902.46 41.58 4.61 .03 .34 1.69 .06 3.44 .15
08/02/09_16_00_17_00 725.42 37.65 3.31 .03 .35 1.68 .06 3.38 .14
08/02/09_17_00_18_00 892.22 41.14 4.53 .03 .34 1.69 .06 3.42 .16
08/02/09_18_00_19_00 1866.91 340.55 66.31 .06 .56 1.69 .06 16.04 .17
08/02/09_19_00_20_00 907.04 41.34 4.42 .07 .39 2.47 .08 3.9 .16
08/02/09_20_00_21_00 880.26 38.45 3.84 .04 .39 1.71 .06 3.37 .14
08/02/09_21_00_22_00 796.58 39.68 3.91 .03 .35 1.69 .06 3.38 .15
08/02/09_22_00_23_00 3413.67 424.14 94.61 .2 .78 1.68 .06 16.68 .18
08/02/09_23_00_00_00 1345.72 52.08 8.42 .04 .36 1.53 .05 5.9 .43
24 rows selected.
I hope it will be helpful for someone.
Sunday, August 31, 2008
Batch mode AWR report generation
Sometimes it is necessary to generate the AWR report for a whole day or for a couple of intervals (I generally do it for trend analysis). Doing it manually for every interval can be a tedious job. I have written a small script which you can run with a starting and ending snap_id, and it generates reports named in the format below:
instance_name_startsnapid_endsnapid_date_starttime_endtime.lst
Eg:
instancename_8837_8838_08_25_2008_15_00_16_00.lst
instancename_8838_8839_08_25_2008_16_00_17_00.lst
instancename_8839_8840_08_25_2008_17_00_18_00.lst
instancename_8840_8841_08_25_2008_18_00_19_00.lst
instancename_8841_8842_08_25_2008_19_00_20_00.lst
instancename_8842_8843_08_25_2008_20_00_21_00.lst
instancename_8843_8844_08_25_2008_21_00_22_00.lst
instancename_8844_8845_08_25_2008_22_00_23_00.lst
instancename_8845_8846_08_25_2008_23_00_00_00.lst
Script:
AWR_report_generation.sh
#!/bin/ksh
# AWR_report_generation.sh V1.2 03-Aug-09
#
# Author - Kapil Goyal
# Batch mode AWR report generation
# This script needs a minimum of 2 arguments: first - starting snap_id, second - ending snap_id
# You need to change username/password to any account that has been granted execute on DBMS_WORKLOAD_REPOSITORY
# If you encounter any issue while running it, contact me at kapil_goyal@yahoo.com
# It can also take a third argument for incremental reports. E.g., if you take snapshots every 20 minutes but want a report
# for every hour, you can pass 3 as the third argument.
s_snap_id=$1
e_snap_id=$2
if [ "$3" = "" ]; then
step=1
else
step=$3
fi
while [ $s_snap_id -lt $e_snap_id ]
do
incr=`expr $s_snap_id + $step`
sqlplus /nolog << !
connect username/password
define num_days = 0;
define report_type='text';
define begin_snap='$s_snap_id'
define end_snap ='$incr'
column i_name new_value i_name
col dt new_value dt
select instance_name i_name from v\$instance;
select a.snap_id||'_'||b.snap_id||'_'||
to_char(a.end_interval_time,'MM_DD_YYYY_HH24_MI')||
'_'|| to_char(b.end_interval_time,'HH24_MI') dt
from dba_hist_snapshot A, dba_hist_snapshot B
where a.instance_number = (select instance_number from v\$instance) and
a.dbid = (select dbid from v\$database)
and a.snap_id=&begin_snap and b.snap_id=$incr;
define report_name = &i_name\_&dt
@?/rdbms/admin/awrrpt
!
s_snap_id=`expr $incr`;
done
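For example, to generate a report for each snapshot pair in the range shown above (assuming hourly snapshots and that the username/password line has been edited), run the script as below; the second form shows the incremental mode, producing one report per three snapshots:
./AWR_report_generation.sh 8837 8846
./AWR_report_generation.sh 8837 8846 3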
Tuesday, August 26, 2008
Leveraging the Result Cache to enhance SQL performance
Everyone uses a cache for performance, as accessing data from disk is (currently) always slower than from memory. Storage arrays like EMC have cache so that reads and writes can happen in cache instead of on disk. The Oracle Database has a buffer cache to avoid physical reads.
The SQL query result cache enables explicit caching of query result sets and query fragments in database memory. A dedicated memory buffer stored in the shared pool is used for storing and retrieving the cached results. Good candidates for the result cache are statements that access many rows but return few, though it can be used for any statement.
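Before testing, it is worth checking how the result cache is configured. A minimal sketch using the standard 11g parameters (in the default MANUAL mode, results are cached only for statements carrying the result_cache hint; FORCE caches all eligible statements):
SQL> show parameter result_cache_mode
SQL> show parameter result_cache_max_size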
Create a TEST User and Table:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> create user TEST identified by test default tablespace USERS temporary tablespace TEMP;
User created.
SQL> grant connect,resource to TEST;
Grant succeeded.
SQL> conn test/test
Connected.
SQL> create table test_cache ( c1 varchar2(4000));
Table created.
SQL> insert into test_cache values('This is a test table to demonstrate Result Cache Feature of Oracle 11g');
1 row created.
Insert approximately 2 million rows:
SQL> insert into test_cache select * from test_cache;
1 row created.
SQL> /
2 rows created.
SQL> /
4 rows created.
SQL> /
8 rows created.
SQL> /
16 rows created.
SQL> /
32 rows created.
SQL> /
64 rows created.
SQL> /
128 rows created.
SQL> /
256 rows created.
SQL> /
512 rows created.
SQL> /
1024 rows created.
SQL> /
2048 rows created.
SQL> /
4096 rows created.
SQL> /
8192 rows created.
SQL> /
16384 rows created.
SQL> /
32768 rows created.
SQL> /
65536 rows created.
SQL> /
131072 rows created.
SQL> /
262144 rows created.
SQL> /
524288 rows created.
SQL> /
1048576 rows created.
SQL> insert into test_cache values('ONE');
1 row created.
SQL> commit;
Commit complete.
SQL> conn / as sysdba
Connected.
SQL> grant dba to TEST;
Grant succeeded.
SQL>
Check whether anything is cached or not:
SQL> conn test/test
Connected.
SQL> select TYPE,STATUS,NAME,CREATION_TIMESTAMP,ROW_COUNT,ROW_SIZE_MAX,OBJECT_NO,ROW_SIZE_AVG,INVALIDATIONS from v$result_cache_objects;
no rows selected
Run a Complex Self-Join query with AUTOTRACE ON
SQL> alter system flush buffer_cache;
System altered.
SQL> set autotrace on
SQL> SELECT count(*)
from
TEST_CACHE t1,
TEST_CACHE t2,
TEST_CACHE t3,
TEST_CACHE t4,
TEST_CACHE t5,
TEST_CACHE t6
Where t1.c1='ONE' and
T2.c1='ONE' and
T3.c1='ONE' and
T4.c1='ONE' and
T5.c1='ONE' and
T6.c1='ONE';
COUNT(*)
----------
1
Elapsed: 00:00:04.77
Execution Plan
----------------------------------------------------------
Plan hash value: 156414347
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 12012 | 6513T (2)|999:59:59 |
| 1 | SORT AGGREGATE | | 1 | 12012 | | |
| 2 | MERGE JOIN CARTESIAN | | 260T| 2780P| 6513T (2)|999:59:59 |
| 3 | MERGE JOIN CARTESIAN | | 1031G| 9389T| 25T (2)|999:59:59 |
| 4 | MERGE JOIN CARTESIAN | | 4080M| 29T| 101G (2)|999:59:59 |
| 5 | MERGE JOIN CARTESIAN | | 16M| 90G| 403M (2)|999:59:59 |
| 6 | MERGE JOIN CARTESIAN| | 63879 | 243M| 1597K (2)| 05:19:33 |
|* 7 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6292 (2)| 00:01:16 |
| 8 | BUFFER SORT | | 253 | 494K| 1591K (2)| 05:18:18 |
|* 9 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
| 10 | BUFFER SORT | | 253 | 494K| 403M (2)|999:59:59 |
|* 11 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
| 12 | BUFFER SORT | | 253 | 494K| 101G (2)|999:59:59 |
|* 13 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
| 14 | BUFFER SORT | | 253 | 494K| 25T (2)|999:59:59 |
|* 15 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
| 16 | BUFFER SORT | | 253 | 494K| 6513T (2)|999:59:59 |
|* 17 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
7 - filter("T6"."C1"='ONE')
9 - filter("T5"."C1"='ONE')
11 - filter("T4"."C1"='ONE')
13 - filter("T3"."C1"='ONE')
15 - filter("T2"."C1"='ONE')
17 - filter("T1"."C1"='ONE')
Note
-----
- dynamic sampling used for this statement
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
133314 consistent gets
133254 physical reads
0 redo size
522 bytes sent via SQL*Net to client
523 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
1 rows processed
I ran the above SQL statement a couple of times; even after the database blocks were cached, it consistently took 4.77 seconds and ~133K consistent gets and physical reads.
Execute the query with the result_cache hint
Alter system flush buffer_cache;
First time:
SQL> SELECT /*+ result_cache */ count(*)
from
TEST_CACHE t1,
TEST_CACHE t2,
TEST_CACHE t3,
TEST_CACHE t4,
TEST_CACHE t5,
TEST_CACHE t6
Where t1.c1='ONE' and
T2.c1='ONE' and
T3.c1='ONE' and
T4.c1='ONE' and
T5.c1='ONE' and
T6.c1='ONE'
/
2 3 4 5 6 7 8 9 10 11 12 13 14 15
COUNT(*)
----------
1
Elapsed: 00:00:04.64
Execution Plan
----------------------------------------------------------
Plan hash value: 156414347
---------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 12012 | 6513T (2)|999:59:59 |
| 1 | RESULT CACHE | ag7yp31pfqyv00m3vhbfwxjjz1 | | | | |
| 2 | SORT AGGREGATE | | 1 | 12012 | | |
| 3 | MERGE JOIN CARTESIAN | | 260T| 2780P| 6513T (2)|999:59:59 |
| 4 | MERGE JOIN CARTESIAN | | 1031G| 9389T| 25T (2)|999:59:59 |
| 5 | MERGE JOIN CARTESIAN | | 4080M| 29T| 101G (2)|999:59:59 |
| 6 | MERGE JOIN CARTESIAN | | 16M| 90G| 403M (2)|999:59:59 |
| 7 | MERGE JOIN CARTESIAN| | 63879 | 243M| 1597K (2)| 05:19:33 |
|* 8 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6292 (2)| 00:01:16 |
| 9 | BUFFER SORT | | 253 | 494K| 1591K (2)| 05:18:18 |
|* 10 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
| 11 | BUFFER SORT | | 253 | 494K| 403M (2)|999:59:59 |
|* 12 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
| 13 | BUFFER SORT | | 253 | 494K| 101G (2)|999:59:59 |
|* 14 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
| 15 | BUFFER SORT | | 253 | 494K| 25T (2)|999:59:59 |
|* 16 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
| 17 | BUFFER SORT | | 253 | 494K| 6513T (2)|999:59:59 |
|* 18 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
8 - filter("T6"."C1"='ONE')
10 - filter("T5"."C1"='ONE')
12 - filter("T4"."C1"='ONE')
14 - filter("T3"."C1"='ONE')
16 - filter("T2"."C1"='ONE')
18 - filter("T1"."C1"='ONE')
Result Cache Information (identified by operation id):
------------------------------------------------------
1 - column-count=1; dependencies=(TEST.TEST_CACHE); attributes=(single-row); parameters=(nls); name="SELECT /*+ result_cache */ count(*)
from
TEST_CACHE t1,
TEST_CACHE t2,
TEST_CACHE t3,
TEST_CACHE t4,
TEST_CACHE t5,
TEST_CACHE t"
Note
-----
- dynamic sampling used for this statement
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
133314 consistent gets
133261 physical reads
0 redo size
522 bytes sent via SQL*Net to client
523 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> SQL>
The first time, it took the same time as the previous query.
Second time
SQL> /
COUNT(*)
----------
1
Elapsed: 00:00:00.01
Execution Plan
----------------------------------------------------------
Plan hash value: 156414347
---------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 12012 | 6513T (2)|999:59:59 |
| 1 | RESULT CACHE | ag7yp31pfqyv00m3vhbfwxjjz1 | | | | |
| 2 | SORT AGGREGATE | | 1 | 12012 | | |
| 3 | MERGE JOIN CARTESIAN | | 260T| 2780P| 6513T (2)|999:59:59 |
| 4 | MERGE JOIN CARTESIAN | | 1031G| 9389T| 25T (2)|999:59:59 |
| 5 | MERGE JOIN CARTESIAN | | 4080M| 29T| 101G (2)|999:59:59 |
| 6 | MERGE JOIN CARTESIAN | | 16M| 90G| 403M (2)|999:59:59 |
| 7 | MERGE JOIN CARTESIAN| | 63879 | 243M| 1597K (2)| 05:19:33 |
|* 8 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6292 (2)| 00:01:16 |
| 9 | BUFFER SORT | | 253 | 494K| 1591K (2)| 05:18:18 |
|* 10 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
| 11 | BUFFER SORT | | 253 | 494K| 403M (2)|999:59:59 |
|* 12 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
| 13 | BUFFER SORT | | 253 | 494K| 101G (2)|999:59:59 |
|* 14 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
| 15 | BUFFER SORT | | 253 | 494K| 25T (2)|999:59:59 |
|* 16 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
| 17 | BUFFER SORT | | 253 | 494K| 6513T (2)|999:59:59 |
|* 18 | TABLE ACCESS FULL | TEST_CACHE | 253 | 494K| 6290 (2)| 00:01:16 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
8 - filter("T6"."C1"='ONE')
10 - filter("T5"."C1"='ONE')
12 - filter("T4"."C1"='ONE')
14 - filter("T3"."C1"='ONE')
16 - filter("T2"."C1"='ONE')
18 - filter("T1"."C1"='ONE')
Result Cache Information (identified by operation id):
------------------------------------------------------
1 - column-count=1; dependencies=(TEST.TEST_CACHE); attributes=(single-row); parameters=(nls); name="SELECT /*+ result_cache */ count(*)
from
TEST_CACHE t1,
TEST_CACHE t2,
TEST_CACHE t3,
TEST_CACHE t4,
TEST_CACHE t5,
TEST_CACHE t"
Note
-----
- dynamic sampling used for this statement
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
522 bytes sent via SQL*Net to client
523 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Note: from the second time onwards it took only .01 seconds, with 0 consistent gets and 0 physical reads.
Check v$result_cache_objects:
SQL> set autot off
SQL> l
1* select TYPE,STATUS,NAME,CREATION_TIMESTAMP,ROW_COUNT,ROW_SIZE_MAX,OBJECT_NO,ROW_SIZE_AVG,INVALIDATIONS from v$result_cache_objects
SQL> /
TYPE STATUS NAME
---------- --------- --------------------------------------------------------------------------------------------------------------------------------
CREATION_ ROW_COUNT ROW_SIZE_MAX OBJECT_NO ROW_SIZE_AVG INVALIDATIONS
--------- ---------- ------------ ---------- ------------ -------------
Dependency Published TEST.TEST_CACHE
26-AUG-08 0 0 70304 0 0
Result Published SELECT /*+ result_cache */ count(*)
from
TEST_CACHE t1,
TEST_CACHE t2,
TEST_CACHE t3,
TEST_CACHE t4,
TEST_CACHE t5,
TEST_CACHE t
26-AUG-08 1 5 0 5 0
As is clear, the result of the query is cached and Published, which means the result is available for use.
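The Dependency row above is what drives invalidation: committed DML on TEST.TEST_CACHE should mark the cached result Invalid and increment INVALIDATIONS, and the next execution of the hinted query rebuilds the cache. A quick hedged check (the inserted value is illustrative):
SQL> insert into test_cache values('TWO');
SQL> commit;
SQL> select type, status, invalidations from v$result_cache_objects;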
Check how much memory is being used by Result Cache
SQL> set serveroutput on
SQL> exec dbms_result_cache.memory_report(detailed=>TRUE)
R e s u l t C a c h e M e m o r y R e p o r t
[Parameters]
Block Size = 1K bytes
Maximum Cache Size = 7872K bytes (7872 blocks)
Maximum Result Size = 393K bytes (393 blocks)
[Memory]
Total Memory = 147280 bytes [0.029% of the Shared Pool]
... Fixed Memory = 2400 bytes [0.000% of the Shared Pool]
....... Cache Mgr = 152 bytes
....... Memory Mgr = 200 bytes
....... Bloom Fltr = 2K bytes
....... State Objs = bytes
... Dynamic Memory = 144880 bytes [0.029% of the Shared Pool]
....... Overhead = 112112 bytes
........... Hash Table = 64K bytes (4K buckets)
........... Chunk Ptrs = 24K bytes (3K slots)
........... Chunk Maps = 12K bytes
........... Miscellaneous = 9712 bytes
....... Cache Memory = 32K bytes (32 blocks)
........... Unused Memory = 30 blocks
........... Used Memory = 2 blocks
............... Dependencies = 1 blocks (1 count)
............... Results = 1 blocks
................... SQL = 1 blocks (1 count)
PL/SQL procedure successfully completed.
Flush the Result Cache
The following supplied package can be used to flush the result cache:
SQL> exec dbms_result_cache.flush;
PL/SQL procedure successfully completed.
SQL> execute dbms_result_cache.memory_report
R e s u l t C a c h e M e m o r y R e p o r t
[Parameters]
Block Size = 1K bytes
Maximum Cache Size = 7872K bytes (7872 blocks)
Maximum Result Size = 393K bytes (393 blocks)
[Memory]
Total Memory = 2400 bytes [0.000% of the Shared Pool]
... Fixed Memory = 2400 bytes [0.000% of the Shared Pool]
... Dynamic Memory = 0 bytes [0.000% of the Shared Pool]
PL/SQL procedure successfully completed.
How to dump a database block and how to read it
I did this small test on my 11g instance. I created a small table EMP with 3 rows and then I dumped the block.
Create table, Insert 3 rows and commit:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> create table emp(empno number(5), ename varchar2(30)) tablespace users;
Table created.
SQL> insert into emp values(7369,'SCOTT');
1 row created.
SQL> insert into emp values(7812,'SMITH');
1 row created.
SQL> insert into emp values(9241,'TOM');
1 row created.
SQL> commit;
Commit complete.
Get the block-number for rows:
select rowid,empno,ename,
dbms_rowid.rowid_relative_fno(rowid) fileno,
dbms_rowid.rowid_block_number(rowid) blockno,
dbms_rowid.rowid_row_number(rowid) rowno, rownum
from emp;
ROWID EMPNO ENAME FILENO BLOCKNO ROWNO ROWNUM
------------------ ---------- ------------------------------ ---------- ---------- ---------- ----------
AAARJGAAEAAAABEAAA 7369 SCOTT 4 68 0 1
AAARJGAAEAAAABEAAB 7812 SMITH 4 68 1 2
AAARJGAAEAAAABEAAC 9241 TOM 4 68 2 3
Dump the block:
SQL> alter system dump datafile 4 block 68;
System altered.
Trace file can be found in User Dump Dest.
SQL> show parameter user
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
license_max_users integer 0
parallel_adaptive_multi_user boolean TRUE
redo_transport_user string
user_dump_dest string /dmecsoid/oracle/diag/rdbms/de
v11g/dev11g/trace
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@dmecsoid->cd /dmecsoid/oracle/diag/rdbms/dev11g/dev11g/trace
oracle@dmecsoid->
oracle@dmecsoid->ls -lrt dev11g_ora_10038.trc
-rw-r----- 1 oracle dba 37429 Aug 25 23:26 dev11g_ora_10038.trc
oracle@dmecsoid->
How to read Block Data:
block_row_dump:
tab 0, row 0, @0x1f8b
tl: 13 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 3] c2 4a 46 --- 4a (hex to decimal)=74-1=73, 46(hex to decimal)=70-1=69-> 73 69(empno)
col 1: [ 5] 53 43 4f 54 54--- 53=S, 43=C, 4f=O, 54=T, 54=T -> S C O T T (ename)
tab 0, row 1, @0x1f7e
tl: 13 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 3] c2 4f 0d --- 4f (hex to decimal)=79-1=78, 0d(hex to decimal)=13-1=12 -> 78 12(empno)
col 1: [ 5] 53 4d 49 54 48--- 53=S, 4d=M, 49=I, 54=T, 48=H -> S M I T H (ename)
tab 0, row 2, @0x1f73
tl: 11 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 3] c2 5d 2a --- 5d (hex to decimal)=93-1=92, 2a(hex to decimal)=42-1=41 -> 92 41 (empno)
col 1: [ 3] 54 4f 4d --- 54=T, 4f=O, 4d=M -> T O M (ename)
end_of_block_dump
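You can cross-check the decoding above with the DUMP function, which prints the internal representation of a value (the second argument, 16, requests hexadecimal output):
SQL> select dump(7369,16) from dual;
This should return Typ=2 Len=3: c2,4a,46 - the same three bytes (c2 4a 46) shown for EMPNO 7369 in the block dump.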
Thursday, July 26, 2007
11g New Features
This is my first blog post. As Oracle has just launched 11g, I thought I would post some of the new features I liked most.
11g Dataguard New Feature
Real-Time Query
Snapshot Standby
Transient Logical Standby
Enhanced Data Protection
Enhanced Security
Enterprise Manager Grid Control 11g Enhancements
ROLLING DATABASE UPGRADES
CASCADED DESTINATIONS
11g RMAN Features
Simplified active database duplication
Increased speed of compression when preserving backup space
Optimized undo backup
Ease-of-use by Automatic Network File System (NFS)
Enhanced configuration of deletion policies
11g ASM Enhancement
ASM Fast Disk Resync Overview
ASM Preferred Read Failure Groups Overview
Restricted Mount Disk Group For Fast Rebalance
11g Dataguard New Feature
Real-Time Query:
A physical standby database can be open read-only while apply is active. This allows users attached to a physical standby database to query and report against data that is up to date with the primary database.
Snapshot Standby:
This is a new type of standby database that is created from a physical standby database. Once created, a snapshot standby can be opened read-write to process transactions that are independent of the primary database, for test or other purposes. A snapshot standby database will continue to receive and archive updates from the primary database; however, redo data received from the primary will not be applied until the snapshot standby is converted back into a physical standby database, and all updates that were made while it was a snapshot standby are discarded. This enables production data to remain in a protected state at all times.
Transient Logical Standby:
Users can convert a physical standby to a transient logical standby database to effect a rolling database upgrade, and then revert to a physical standby once the upgrade is complete (using the KEEP IDENTITY clause). This benefits physical standby users who wish to execute a rolling database upgrade without investing in redundant storage otherwise needed to create a logical standby database.
Enhanced Data Protection:
A physical standby can now detect lost datafile writes caused by faulty storage hardware or firmware that lead to data corruption. Data Guard will compare the versions of blocks on the standby with those in the incoming redo stream. A version discrepancy implies a lost write. The user can then fail over to the standby database and restore data consistency.
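As a hedged aside, I believe this detection is driven by the 11g initialization parameter DB_LOST_WRITE_PROTECT, which logs buffer cache block reads in the redo stream so the standby can compare block versions; it would be set on both the primary and the standby:
SQL> ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL;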
Enhanced Security:
SSL authentication can be used in lieu of password file to authenticate redo transmission. Note: SSL authentication requires use of PKI Certificates, ASO and OID.
Enterprise Manager Grid Control 11g Enhancements:
Enterprise Manager further simplifies management in the areas of:
• Creation of standby databases from existing RMAN backups
• Creation of an Oracle RAC standby database from an Oracle RAC primary.
• Automated standby clones for reporting, development, and test
• Automatic propagation of Enterprise Manager jobs and metric thresholds to the new primary database upon switchover or failover
• Fault-tolerant observer for Fast-Start Failover
• Enterprise Manager Data Recovery Advisor will utilize available standby databases when making recommendations for Intelligent Data Repair (IDR).
ROLLING DATABASE UPGRADES
Oracle Database software upgrades for major releases and patchsets (10.1.0.3 onwards) can be performed in a rolling fashion, with near zero database downtime, by using Data Guard SQL Apply (see Figure 2).
Figure 2 - Rolling Database Upgrades with SQL Apply
The steps in a rolling upgrade process involve upgrading the logical standby database to the next release, running in a mixed mode to test and validate the upgrade, doing a role reversal by switching over to the upgraded database, and then finally upgrading the old primary database. While running in a mixed mode for testing purposes, the upgrade can be aborted and the software downgraded, without data loss. For additional data protection during these steps, a second standby database may be used.
Beginning with Oracle Database 11g, a physical standby database can also take advantage of the rolling upgrade feature provided by a logical standby. Through the new KEEP IDENTITY clause of the ALTER DATABASE RECOVER TO LOGICAL STANDBY statement, a physical standby database can be temporarily converted into a logical standby database for the rolling upgrade, and then reverted back to the original configuration of a primary database and a physical standby database when the upgrade is done.
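A minimal sketch of the conversion step on the physical standby (only the statement named above; the rest of the rolling upgrade procedure is omitted):
SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY KEEP IDENTITY;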
CASCADED DESTINATIONS
Data Guard provides many flexible configuration options. Using Cascaded Destinations, a physical standby database can forward the redo it receives from the primary database to another standby database. Since the primary database sends redo data only to the first standby database, this feature reduces the load on the primary system, and can also reduce network traffic and the use of valuable network resources at the primary site when multiple standby databases are required. Note that Oracle RAC and the Data Guard Broker are not supported in a Data Guard configuration that includes cascaded destinations.
11g RMAN Features
Simplified active database duplication
You can use the "network-aware" DUPLICATE command to create a duplicate or a standby database over the network without a need for pre-existing database backups. The ease-of-use is especially apparent through the Enterprise Manager GUI.
Increased speed of compression when preserving backup space:
You can use the CONFIGURE command to choose between the BZIP2 and ZLIB compression algorithms for RMAN backups. The new ZLIB backup compression algorithm can be 40% faster than the previous BZIP2 algorithm. A real-world data-warehousing database from one large pharmaceutical company had a compression ratio of 2.0:1 with the BZIP2 algorithm and 1.68:1 with the ZLIB algorithm.
Configure the backup compression algorithm with the following command (replace alg_name with either BZIP2 or ZLIB):
CONFIGURE COMPRESSION ALGORITHM TO 'alg_name';
Note: For more details, see the Oracle Database Backup and Recovery Reference.
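For example, to select the faster algorithm:
RMAN> CONFIGURE COMPRESSION ALGORITHM TO 'ZLIB';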
Optimized undo backup:
Undo data that is not needed for transaction recovery (for example, undo for committed transactions) is not backed up. The benefit is reduced overall backup time and storage, by not backing up undo that applies to committed transactions. This optimization is automatically enabled.
Ease-of-use by Automatic Network File System (NFS)
The NFS client is implemented as part of the Oracle kernel, in the ODM library. This improves the ease of use and stability of accessing NAS storage systems, as well as increasing availability across different platforms while maintaining a consistent interface.
Enhanced configuration of deletion policies
Archived redo logs are eligible for deletion only when not needed by any required consumers such as Data Guard, Streams, Flashback Database, and so on.
In a Data Guard environment, all standby destinations are considered (instead of just mandatory destinations) before archived logs are marked for deletion. This configuration is specified using the CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY command.
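For example, from the RMAN prompt:
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;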
When you CONFIGURE an archived log deletion policy, the configuration applies to all archiving destinations, including the flash recovery area. Both BACKUP ... DELETE INPUT and DELETE... ARCHIVELOG use this configuration, as does the flash recovery area.
When you back up the recovery area, RMAN can fail over to other archived redo log destinations if the archived redo log in the flash recovery area is inaccessible or corrupted.
11g ASM Enhancement
ASM Fast Disk Resync Overview
ASM fast disk resync significantly reduces the time required to resynchronize a disk after a transient failure. When a disk goes offline following a transient failure, ASM tracks the extents that are modified during the outage. When the transient failure is repaired, ASM can quickly resynchronize only the ASM disk extents that were affected during the outage.
This feature assumes that the content of the affected ASM disks has not been damaged or modified.
When an ASM disk path fails, the ASM disk is taken offline but not dropped if you have set the DISK_REPAIR_TIME attribute for the corresponding disk group. The setting for this attribute determines the duration of a disk outage that ASM tolerates while still being able to resynchronize after you complete the repair.
Note: The tracking mechanism uses one bit for each modified allocation unit, which makes it very efficient.
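A hedged example of setting the attribute (the disk group name DATA is illustrative; 3.6 hours is, I believe, the default):
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '3.6h';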
Benefits:
• Fraction of time to establish redundancy
• Only changed blocks are resync'ed
• Fast recovery from transient failures
• Enables pro-active maintenance
ASM Preferred Read Failure Groups Overview
When you configure ASM failure groups, ASM in Oracle Database 10g always reads the primary copy of a mirrored extent. It may be more efficient for a node to read from a failure group extent that is closest to the node, even if it is a secondary extent. This is especially true in extended cluster configurations where reading from a local copy of an extent provides improved performance.
With Oracle Database 11g, you can do this by configuring preferred read failure groups using the new initialization parameter, ASM_PREFERRED_READ_FAILURE_GROUPS, to specify a list of preferred read failure group names. The disks in those failure groups become the preferred read disks. Thus, every node can read from its local disks. This results in higher efficiency and performance and reduced network traffic. The setting for this parameter is instance-specific.
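A hedged example for a two-site extended cluster (the disk group name DATA, failure group SITEA, and the ASM SID are illustrative; the parameter is set per ASM instance):
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.SITEA' SID = '+ASM1';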
•Benefits:
•Allow local mirror read operations
•Eliminate network latencies in extended clusters
•Better performance
Restricted Mount Disk Group For Fast Rebalance
A new mount mode to mount a disk group in Oracle Database 11g is called RESTRICTED. When a disk group is mounted in RESTRICTED mode, clients cannot access the files in a disk group. When ASM instance knows that there are no clients, it can improve the performance of the rebalance operation by not attempting to message clients for locking/unlocking extent maps.
A disk group mounted in RESTRICTED mode is mounted exclusively on only one node and clients of ASM on that node cannot use that disk group.
The RESTRICTED mode allows you to perform all maintenance tasks on a disk group in the ASM instance without any external interaction.
At the end of the maintenance cycle you have to explicitly dismount the disk group and remount it in normal mode.
The ALTER DISKROUP diskgroupname MOUNT command is extended to allow for ASM to mount the diskgroup in restricted mode. An example is shown on the slide.
When you use RESTRICTED option to startup a ASM instance, all the disk groups defined in ASM_DISKGROUPS parameter are mounted in RESTRICTED mode.
11g Data Guard New Features
Real-Time Query
Snapshot Standby
Transient Logical Standby
Enhanced Data Protection
Enhanced Security
Enterprise Manager Grid Control 11g Enhancements
ROLLING DATABASE UPGRADES
CASCADED DESTINATIONS
11g RMAN Features
Simplified active database duplication
Increased speed of compression when preserving backup space
Optimized undo backup
Ease-of-use by Automatic Network File System (NFS)
Enhanced configuration of deletion policies
11g ASM Enhancements
ASM Fast Disk Resync Overview
ASM Preferred Read Failure Groups Overview
Restricted Mount Disk Group For Fast Rebalance
11g Data Guard New Features
Real-Time Query:
A physical standby database can be open read-only while Redo Apply is active. This allows users connected to the physical standby database to query and report against data that is up to date with the primary database.
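A minimal sketch of enabling real-time query on an existing physical standby, assuming managed recovery is currently running and standby redo logs are configured:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;   -- stop apply
SQL> ALTER DATABASE OPEN READ ONLY;                            -- open for queries
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
     USING CURRENT LOGFILE DISCONNECT FROM SESSION;            -- restart apply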
Snapshot Standby:
This is a new type of standby database that is created from a physical standby database. Once created, a snapshot standby can be opened read-write to process transactions that are independent of the primary database, for test or other purposes. A snapshot standby database continues to receive and archive redo from the primary database; however, the redo received from the primary is not applied until the snapshot standby is converted back into a physical standby database, at which point all updates made while it was a snapshot standby are discarded. This keeps production data in a protected state at all times.
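A sketch of the two conversion commands, assuming a flash recovery area is configured on the standby (the conversions are issued with the database mounted):
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
-- open read-write, run the tests, then discard all test changes:
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;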
Transient Logical Standby:
Users can convert a physical standby to a transient logical standby database to effect a rolling database upgrade, and then revert to a physical standby once the upgrade is complete (using the KEEP IDENTITY clause). This benefits physical standby users who wish to execute a rolling database upgrade without investing in redundant storage otherwise needed to create a logical standby database.
Enhanced Data Protection:
A physical standby can now detect lost datafile writes caused by faulty storage hardware or firmware that lead to data corruption. Data Guard compares the block versions on the standby with those in the incoming redo stream; a version discrepancy implies a lost write. The user can then fail over to the standby database and restore data consistency.
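The comparison is controlled by the DB_LOST_WRITE_PROTECT initialization parameter, set on both primary and standby; a sketch (TYPICAL covers read-write tablespaces, FULL adds read-only ones):
SQL> ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL;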
Enhanced Security:
SSL authentication can be used in lieu of a password file to authenticate redo transmission. Note: SSL authentication requires the use of PKI certificates, Oracle Advanced Security (ASO), and Oracle Internet Directory (OID).
Enterprise Manager Grid Control 11g Enhancements:
Enterprise Manager further simplifies management in the areas of:
• Creation of standby databases from existing RMAN backups
• Creation of an Oracle RAC standby database from an Oracle RAC primary
• Automated standby clones for reporting, development, and test
• Automatic propagation of Enterprise Manager jobs and metric thresholds to the new primary database upon switchover or failover
• Fault-tolerant observer for Fast-Start Failover
• Use of available standby databases by the Enterprise Manager Data Recovery Advisor when making recommendations for Intelligent Data Repair (IDR)
ROLLING DATABASE UPGRADES
Oracle Database software upgrades for major releases and patch sets (10.1.0.3 onwards) can be performed in a rolling fashion, with near-zero database downtime, by using Data Guard SQL Apply (see Figure 2).
Figure 2: Rolling Database Upgrades with SQL Apply
The steps in a rolling upgrade process involve upgrading the logical standby database to the next release, running in a mixed mode to test and validate the upgrade, doing a role reversal by switching over to the upgraded database, and then finally upgrading the old primary database. While running in a mixed mode for testing purposes, the upgrade can be aborted and the software downgraded, without data loss. For additional data protection during these steps, a second standby database may be used.
Beginning with Oracle Database 11g, a physical standby database can also take advantage of the rolling upgrade feature provided by a logical standby. Through the new KEEP IDENTITY clause of the ALTER DATABASE RECOVER TO LOGICAL STANDBY statement, a physical standby database can be temporarily converted into a logical standby database for the rolling upgrade, and then returned to the original configuration of a primary database and a physical standby database when the upgrade is done.
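A sketch of the conversion on the standby (KEEP IDENTITY preserves the standby's DB_NAME and DBID so it can later revert to a physical standby):
SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY KEEP IDENTITY;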
CASCADED DESTINATIONS
Data Guard provides many flexible configuration options. Using cascaded destinations, a physical standby database can forward the redo it receives from the primary database to another standby database. Since the primary database sends redo data only to the first standby database, this feature reduces the load on the primary system, and can also reduce network traffic and the use of valuable network resources at the primary site when multiple standby databases are required. Note that Oracle RAC and the Data Guard Broker are not supported in a Data Guard configuration that includes cascaded destinations.
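A sketch of the forwarding destination defined on the cascading standby; the DB_UNIQUE_NAME and net service name stby2 are hypothetical:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
     'SERVICE=stby2 VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=stby2';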
11g RMAN Features
Simplified active database duplication
You can use the "network-aware" DUPLICATE command to create a duplicate database or a standby database over the network, without needing pre-existing database backups. The ease of use is especially apparent through the Enterprise Manager GUI.
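A minimal sketch, assuming the auxiliary instance has been started NOMOUNT and both instances use a password file; the net service names prod and stby are hypothetical:
RMAN> CONNECT TARGET sys@prod
RMAN> CONNECT AUXILIARY sys@stby
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE;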
Increased speed of compression when preserving backup space:
You can use the CONFIGURE command to choose between the BZIP2 and ZLIB compression algorithms for RMAN backups. The new ZLIB backup compression algorithm can be 40% faster than the previous BZIP2 algorithm. A real-world data-warehousing database at one large pharmaceutical company had a compression ratio of 2.0:1 with the BZIP2 algorithm and 1.68:1 with the ZLIB algorithm.
Configure the backup compression algorithm with the following command (replace alg_name with either BZIP2 or ZLIB):
CONFIGURE COMPRESSION ALGORITHM TO 'alg_name';
Note: For more details, see the Oracle Database Backup and Recovery Reference.
Optimized undo backup:
Undo data that is not needed for transaction recovery (for example, undo for committed transactions) is not backed up. The benefit is reduced overall backup time and storage. This optimization is enabled automatically.
Ease-of-use by Automatic Network File System (NFS)
The NFS client is implemented as part of the Oracle kernel in the ODM (Oracle Disk Manager) library. This improves the ease of use and stability of accessing NAS storage systems, and increases availability across different platforms while maintaining a consistent interface.
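A sketch of a Direct NFS entry in the oranfstab configuration file; the server name, network path, export, and mount point below are hypothetical:
server: mynas
path: 192.168.1.10
export: /vol/oradata mount: /u02/oradata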
Enhanced configuration of deletion policies
Archived redo logs are eligible for deletion only when not needed by any required consumers such as Data Guard, Streams, Flashback Database, and so on.
In a Data Guard environment, all standby destinations are considered (instead of just the mandatory destinations) before archived logs are marked as eligible for deletion. This configuration is specified using the CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY command.
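For example:
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;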
When you CONFIGURE an archived log deletion policy, the configuration applies to all archiving destinations, including the flash recovery area. Both BACKUP ... DELETE INPUT and DELETE ... ARCHIVELOG honor this configuration.
When you back up the recovery area, RMAN can fail over to other archived redo log destinations if the archived redo log in the flash recovery area is inaccessible or corrupted.
11g ASM Enhancements
ASM Fast Disk Resync Overview
ASM fast disk resync significantly reduces the time required to resynchronize a disk after a transient failure. When a disk goes offline following a transient failure, ASM tracks the extents that are modified during the outage. When the failure is repaired, ASM can quickly resynchronize only the ASM disk extents that were affected during the outage.
This feature assumes that the content of the affected ASM disks has not been damaged or modified.
When an ASM disk path fails, the ASM disk is taken offline but not dropped if you have set the DISK_REPAIR_TIME attribute for the corresponding disk group. The setting for this attribute determines the duration of a disk outage that ASM tolerates while still being able to resynchronize after you complete the repair.
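A sketch of setting the attribute and bringing a repaired disk back online; the disk group and disk names below are hypothetical:
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '3.6h';
-- after the transient failure is repaired:
SQL> ALTER DISKGROUP data ONLINE DISK data_0001;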
Note: The tracking mechanism uses one bit for each modified allocation unit, which makes it very efficient.
Benefits:
• Redundancy is re-established in a fraction of the time
• Only changed extents are resynchronized
• Fast recovery from transient failures
• Enables proactive maintenance
ASM Preferred Read Failure Groups Overview
When you configure ASM failure groups, ASM in Oracle Database 10g always reads the primary copy of a mirrored extent. It may be more efficient for a node to read from a failure group extent that is closest to the node, even if it is a secondary extent. This is especially true in extended cluster configurations where reading from a local copy of an extent provides improved performance.
With Oracle Database 11g, you can do this by configuring preferred read failure groups with the new initialization parameter ASM_PREFERRED_READ_FAILURE_GROUPS, which specifies a list of preferred read failure group names. The disks in those failure groups become the preferred read disks, so every node can read from its local disks. This results in higher efficiency and performance and reduced network traffic. The setting for this parameter is instance-specific.
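A sketch for a two-site extended cluster; the disk group DATA and failure groups SITEA and SITEB are hypothetical, and each entry takes the form diskgroup.failuregroup:
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.SITEA' SID = '+ASM1';
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.SITEB' SID = '+ASM2';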
Benefits:
• Allows local mirror read operations
• Eliminates network latencies in extended clusters
• Better performance
Restricted Mount Disk Group For Fast Rebalance
Oracle Database 11g introduces a new disk group mount mode called RESTRICTED. When a disk group is mounted in RESTRICTED mode, clients cannot access its files. Because the ASM instance knows that there are no clients, it can improve the performance of the rebalance operation by not attempting to message clients to lock and unlock extent maps.
A disk group mounted in RESTRICTED mode is mounted exclusively on only one node and clients of ASM on that node cannot use that disk group.
The RESTRICTED mode allows you to perform all maintenance tasks on a disk group in the ASM instance without any external interaction.
At the end of the maintenance cycle you have to explicitly dismount the disk group and remount it in normal mode.
The ALTER DISKGROUP diskgroupname MOUNT command is extended to allow ASM to mount the disk group in RESTRICTED mode, as shown below.
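A minimal sketch, using a hypothetical disk group named data:
SQL> ALTER DISKGROUP data MOUNT RESTRICTED;
-- perform maintenance (for example, add disks and rebalance), then:
SQL> ALTER DISKGROUP data DISMOUNT;
SQL> ALTER DISKGROUP data MOUNT;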
When you use the RESTRICTED option to start up an ASM instance, all disk groups defined in the ASM_DISKGROUPS parameter are mounted in RESTRICTED mode.