Tom Kyte Teaches You How to Build a Terrible Oracle Database

This post comes a bit late, but better late than never. This time our beloved Uncle Tom kindly teaches us how to build an Oracle database that is terrible in both performance and manageability, one that nobody wants to administer. It must be said that this is not easy to achieve: you must never consider bind variables, ignore every normal form while cramming two or three hundred columns into a single table (some people really do this and see nothing wrong with it), and know nothing about Oracle's features; or, if you do know them, you must sneer at them: "I would never use that stuff, I'll do it with my own code." Yes, you are content with the satisfaction of reinventing the wheel again and again. To build a thoroughly, incomparably terrible Oracle database you still need to push a bit harder, and that of course also requires documentation, which is exactly what Tom's piece provides. After reading this Worst Practices article you may still not be able to build the worst Oracle database in the world, but never mind: China has plenty of talent for developing terrible Oracle applications (or terrible anything else), and you can all form the ACOWPG (All China Oracle Worst Practices Group) to exchange further insights on the subject.

All right, enough of that! If you are a DBA like me, having one such "deliberate" troublemaker at work is already hard enough; having a whole crowd of them would be doomsday.

Script:AWR Trending

Kapil Goyal shared several very useful AWR performance-diagnosis scripts in his IOUG presentation; they are reproduced here:

 

 

Select extract(day from snap_interval) * 24 * 60 +
       extract(hour from snap_interval) * 60 +
       extract(minute from snap_interval) "Snapshot Interval",
       extract(day from retention) * 24 * 60 +
       extract(hour from retention) * 60 + extract(minute from retention) "Retention Interval(Minutes) ",
       extract(day from retention) "Retention(in Days) "
  from dba_hist_wr_control;
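If the snapshot interval or retention reported above needs adjusting, the DBMS_WORKLOAD_REPOSITORY package can change both (values are in minutes). A minimal sketch, assuming the target is hourly snapshots kept for 30 days:

begin
  dbms_workload_repository.modify_snapshot_settings(
    interval  => 60,             -- snapshot interval in minutes (assumed target)
    retention => 30 * 24 * 60);  -- retention in minutes, i.e. 30 days (assumed target)
end;
/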

 

 

System Event Trending

 

event_response.sql
alter session set nls_date_format='dd-mon-yy';
set lines 150 pages 100 echo off feedback off
col date_time heading 'Date time|mm/dd/yy_hh_mi_hh_mi' for a30
col event_name for a26
col waits for 99,999,999,999 heading 'Waits'
col time for 99,999 heading 'Total Wait|Time(sec)'
col avg_wait_ms for 99,999 heading 'Avg Wait|(ms)'
prompt "Enter the date in DD-Mon-YY Format:"
WITH system_event AS
 (select sn.begin_interval_time begin_interval_time,
         sn.end_interval_time end_interval_time,
         se.event_name event_name,
         se.total_waits e_total_waits,
         lag(se.total_waits, 1) over(order by se.snap_id) b_total_waits,
         se.total_timeouts e_total_timeouts,
         lag(se.total_timeouts, 1) over(order by se.snap_id) b_total_timeouts,
         se.time_waited_micro e_time_waited_micro,
         lag(se.time_waited_micro, 1) over(order by se.snap_id) b_time_waited_micro
    from dba_hist_system_event se, dba_hist_snapshot sn
   where trunc(sn.begin_interval_time) = '&Date'
     and se.snap_id = sn.snap_id
     and se.dbid = sn.dbid
     and se.instance_number = sn.instance_number
     and se.dbid = (select dbid from v$database)
     and se.instance_number = (select instance_number from v$instance)
     and se.event_name = '&event_name')
select to_char(se1.begin_interval_time, 'mm/dd/yy_hh24_mi') ||
       to_char(se1.end_interval_time, '_hh24_mi') date_time,
       se1.event_name,
       se1.e_total_waits - nvl(se1.b_total_waits, 0) waits,
       (se1.e_time_waited_micro - nvl(se1.b_time_waited_micro, 0)) / 1000000 time,
       ((se1.e_time_waited_micro - nvl(se1.b_time_waited_micro, 0)) / 1000) /
       (se1.e_total_waits - nvl(se1.b_total_waits, 0)) avg_wait_ms
  from system_event se1
 where (se1.e_total_waits - nvl(se1.b_total_waits, 0)) > 0
   and nvl(se1.b_total_waits, 0) > 0
/

 

 

Load Profile Trending

 

alter session set nls_date_format='dd-mon-yy';
set lines 160 pages 1000 echo off feedback off
col stat_name for a25
col date_time for a40
col BEGIN_INTERVAL_TIME for a20
col END_INTERVAL_TIME for a20
prompt "Enter the date in DD-Mon-YY Format and Stats you want to trend like 'redo size','physical reads','physical writes','session logical reads' etc."

WITH sysstat AS
 (select sn.begin_interval_time begin_interval_time,
         sn.end_interval_time end_interval_time,
         ss.stat_name stat_name,
         ss.value e_value,
         lag(ss.value, 1) over(order by ss.snap_id) b_value
    from dba_hist_sysstat ss, dba_hist_snapshot sn
   where trunc(sn.begin_interval_time) = '&Date'
     and ss.snap_id = sn.snap_id
     and ss.dbid = sn.dbid
     and ss.instance_number = sn.instance_number
     and ss.dbid = (select dbid from v$database)
     and ss.instance_number = (select instance_number from v$instance)
     and ss.stat_name = '&stat_name')
select to_char(BEGIN_INTERVAL_TIME, 'mm/dd/yy_hh24_mi') || to_char(END_INTERVAL_TIME, '_hh24_mi') date_time,
stat_name,
round((e_value - nvl(b_value,0)) / (extract(day from(end_interval_time - begin_interval_time)) * 24 * 60 * 60
+ extract(hour from(end_interval_time - begin_interval_time)) * 60 * 60
+ extract(minute from(end_interval_time - begin_interval_time)) * 60 + extract(second from(end_interval_time - begin_interval_time))),0) per_sec
from sysstat where(e_value - nvl(b_value,0)) > 0 and nvl(b_value,0) > 0
/

 

 

Time Model Statistics Trend

 

 

alter session set nls_date_format='dd-mon-yy';
set lines 160 pages 1000
col date_time heading 'Date time' for a40
col stat_name heading 'Statistics Name' for a25
col time heading 'Time (s)' for 99,999,999,999
prompt "Enter the date in DD-Mon-YY Format and Stats you want to trend like 'DBtime', 'DB CPU', 'sql execute elapsed time', 'PL/SQL execution elapsed time','parse time elapsed', 'background elapsed time'"
WITH systimemodel AS
 (select sn.begin_interval_time begin_interval_time,
         sn.end_interval_time end_interval_time,
         st.stat_name stat_name,
         st.value e_value,
         lag(st.value, 1) over(order by st.snap_id) b_value
    from DBA_HIST_SYS_TIME_MODEL st, dba_hist_snapshot sn
   where trunc(sn.begin_interval_time) = '&Date'
     and st.snap_id = sn.snap_id
     and st.dbid = sn.dbid
     and st.instance_number = sn.instance_number
     and st.dbid = (select dbid from v$database)
     and st.instance_number = (select instance_number from v$instance)
     and st.stat_name = '&stat_name')
select to_char (BEGIN_INTERVAL_TIME, 'mm/dd/yy_hh24_mi') || to_char (END_INTERVAL_TIME, '_hh24_mi') date_time,
stat_name, round((e_value - nvl(b_value, 0)) / 1000000) time
  from systimemodel
 where(e_value - nvl(b_value, 0)) > 0 and nvl(b_value, 0) > 0
/

 

 

Want to Know if Execution Plan Changed Recently?

 

 

set lines 150 pages 150
col BEGIN_INTERVAL_TIME for a23
col PLAN_HASH_VALUE for 9999999999
col date_time for a30
col snap_id heading 'SnapId'
col executions_delta heading "No. of exec"
col sql_profile heading "SQL|Profile" for a7
col date_time heading 'Date time'

col avg_lio heading 'LIO/exec' for 99999999999.99
col avg_cputime heading 'CPUTIM/exec' for 9999999.99
col avg_etime heading 'ETIME/exec' for 9999999.99
col avg_pio heading 'PIO/exec' for 9999999.99
col avg_row heading 'ROWs/exec' for 9999999.99
SELECT distinct
s.snap_id ,
PLAN_HASH_VALUE,
to_char(s.BEGIN_INTERVAL_TIME,'mm/dd/yy_hh24mi')|| to_char(s.END_INTERVAL_TIME,'_hh24mi') Date_Time,
SQL.executions_delta,
SQL.buffer_gets_delta/decode(nvl(SQL.executions_delta,0),0,1,SQL.executions_delta) avg_lio,
--SQL.ccwait_delta,
(SQL.cpu_time_delta/1000000)/decode(nvl(SQL.executions_delta,0),0,1,SQL.executions_delta) avg_cputime ,
(SQL.elapsed_time_delta/1000000)/decode(nvl(SQL.executions_delta,0),0,1,SQL.executions_delta) avg_etime,
SQL.DISK_READS_DELTA/decode(nvl(SQL.executions_delta,0),0,1,SQL.executions_delta) avg_pio,
SQL.rows_processed_total/decode(nvl(SQL.executions_delta,0),0,1,SQL.executions_delta) avg_row
--,SQL.sql_profile
FROM
dba_hist_sqlstat SQL,
dba_hist_snapshot s
WHERE
SQL.instance_number =(select instance_number from v$instance)
and SQL.dbid =(select dbid from v$database)
and s.snap_id = SQL.snap_id
AND sql_id in
('&SQLID') order by s.snap_id
/

 

 

xp_awr.sql

 

 

select plan_table_output from table (dbms_xplan.display_awr('&sql_id',null,null,
'ADVANCED +PEEKED_BINDS'));

Script:Common SQL Tuning Scripts

SQLT download: https://www.askmac.cn/wp-content/uploads/2011/01/sqlt.zip

[oracle@PD009 xplore]$ pwd
/home/oracle/sqlt/utl/xplore
[oracle@PD009 xplore]$ ls



create_xplore_script.sql  drop_sys_views.sql  drop_user_objects.sql  install.sql  readme.txt  star.sql  sys_views.sql  uninstall.sql  user_objects.sql  xplore_2.zip  xplore.pkb  xplore.pks  xplore_script_1.log  xplore_script_1.sql


SQL> start install
Test Case User: SH
Password: oracle



Installation completed.
You are now connected as SH.

1. Set CBO env if needed
2. Execute @create_xplore_script.sql



SQL> @create_xplore_script.sql

Parameter 1:
XPLORE Method: XECUTE (default) or XPLAIN
"XECUTE" requires /* ^^unique_id */ token in SQL
"XPLAIN" uses "EXPLAIN PLAN FOR" command
Enter "XPLORE Method" [XECUTE]: 

Parameter 2:
Include CBO Parameters: Y (default) or N
Enter "CBO Parameters" [Y]: 

Parameter 3:
Include Exadata Parameters: Y (default) or N
Enter "EXADATA Parameters" [Y]: 

Parameter 4:
Include Fix Control: Y (default) or N
Enter "Fix Control" [Y]: 

Parameter 5:
Generate SQL Monitor Reports: N (default) or Y
Only applicable when XPLORE Method is XECUTE
Enter "SQL Monitor" [N]: 


Review and execute @xplore_script_1.sql




SQL>@xplore_script_1.sql
SH

Parameter 1:
Name of SCRIPT file that contains SQL to be xplored (required)
Note: SCRIPT must contain comment /* ^^unique_id */

Enter value for 1: star.sql

[oracle@PD009 xplore]$ cat star.sql


SELECT /* ^^unique_id */ c.cust_city,
       t.calendar_quarter_desc,
       SUM(s.amount_sold) sales_amount
  FROM sh.sales s, sh.times t, sh.customers c, sh.channels ch
 WHERE s.time_id = t.time_id
   AND s.cust_id = c.cust_id
   AND s.channel_id = ch.channel_id
   AND c.cust_state_province = 'FL'
   AND ch.channel_desc = 'Direct Sales'
   AND t.calendar_quarter_desc IN ('2000-01', '2000-02','1999-12')
 GROUP BY c.cust_city, t.calendar_quarter_desc;


awr trend

alter session set nls_date_format='dd-mon-yy';
set lines 160 pages 1000 echo off feedback off
col stat_name for a25
col date_time for a40
col BEGIN_INTERVAL_TIME for a20
col END_INTERVAL_TIME for a20
prompt "Enter the date in DD-Mon-YY Format and Stats you want to trend like 'redo size','physical reads','physical writes','session logical reads' etc."

WITH sysstat AS
 (select sn.begin_interval_time begin_interval_time,
         sn.end_interval_time end_interval_time,
         ss.stat_name stat_name,
         ss.value e_value,
         lag(ss.value, 1) over(order by ss.snap_id) b_value
    from dba_hist_sysstat ss, dba_hist_snapshot sn
   where  
       ss.snap_id = sn.snap_id
     and ss.dbid = sn.dbid
     and ss.instance_number = sn.instance_number
     and ss.dbid = (select dbid from v$database)
     and ss.instance_number = (select instance_number from v$instance)
     and ss.stat_name = 'DB time')
select to_char(BEGIN_INTERVAL_TIME, 'mm/dd/yy_hh24_mi') || to_char(END_INTERVAL_TIME, '_hh24_mi') date_time,
stat_name,
round((e_value - nvl(b_value,0)) / (extract(day from(end_interval_time - begin_interval_time)) * 24 * 60 * 60
+ extract(hour from(end_interval_time - begin_interval_time)) * 60 * 60
+ extract(minute from(end_interval_time - begin_interval_time)) * 60 + extract(second from(end_interval_time - begin_interval_time))),0) per_sec
from sysstat where(e_value - nvl(b_value,0)) > 0 and nvl(b_value,0) > 0
/


 

The generated XPLORE report: xplore_report_2

 

 

 

 

select /*+ dynamic_sampling(b 10) dynamic_sampling_est_cdn(b) gather_plan_statistics*/ count(*) from tvb b;
SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL,NULL,'ALLSTATS LAST'));

Note that the "dynamic sampling used for this statement (level=2)" line shown in the plan is not the real level! With the dynamic_sampling(b 10) hint above, level 10 here really is level 10.

EXPLAIN PLAN SET STATEMENT_ID = 'abc' FOR
select count(*) from tvb ;

SELECT * FROM TABLE(dbms_xplan.display('PLAN_TABLE','abc','ALL'));

set linesize 200 pagesize 1400;

select /* FINDSQLID */ SQL_ID,SQL_FULLTEXT from V$SQL  where SQL_TEXT LIKE '%&SQLTEXT%'  and SQL_FULLTEXT NOT LIKE '%FINDSQLID%'
union all
select SQL_ID,SQL_TEXT FROM DBA_HIST_SQLTEXT where SQL_TEXT LIKE '%&SQLTEXT%'
and SQL_TEXT NOT LIKE '%FINDSQLID%';

alter session set events '10046 trace name context forever,level 12';
alter session set events '10053 trace name context forever,level 1';

alter session set tracefile_identifier='10046';
alter session set timed_statistics = true;
alter session set statistics_level=all;
alter session set max_dump_file_size = unlimited;
alter session set events '10046 trace name context forever,level 12';
-- Execute the queries or operations to be traced here --

1.- Please provide AWR and ADDM report from each instance for interval of 30 minutes when the problem is present.
2.- Upload OS log file /var/log/messages
3.- Please upload background process trace files for each instance. LMD, LMS, LMON, DBWR, LGWR, diag, pmon, smon, etc.

If you have questions, please ask at http://t.askmac.cn/forum-4-1.html; you will get a reply within a reasonable time.
When asking, please state the database version, OS version, and problem type.
For performance problems, provide AWR, ASH and ADDM reports plus a 10046 trace.
For ORA-600/7445 errors, provide ALERT.LOG and the related trace files.
For RAC Clusterware problems, provide CRSD.LOG and CSSD.LOG.

SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
FROM SYS.x$ksppi x, SYS.x$ksppcv y
WHERE x.inst_id = USERENV ('Instance')
AND y.inst_id = USERENV ('Instance')
AND x.indx = y.indx
AND x.ksppinm like '%disable%';

select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from test;

select spid,pid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat));

select name,value from v$system_parameter where ISDEFAULT!='TRUE'  order by 1;

set linesize 200 pagesize 1400
@?/rdbms/admin/utllockt

==========================================================================================>

Execution Plan History

Want to Know if Execution Plan Changed Recently?

set lines 150 pages 150
col BEGIN_INTERVAL_TIME for a23
col PLAN_HASH_VALUE for 9999999999
col date_time for a30
col snap_id heading 'SnapId'
col executions_delta heading "No. of exec"
col sql_profile heading "SQL|Profile" for a7
col date_time heading 'Date time'

col avg_lio heading 'LIO/exec' for 99999999999.99
col avg_cputime heading 'CPUTIM/exec' for 9999999.99
col avg_etime heading 'ETIME/exec' for 9999999.99
col avg_pio heading 'PIO/exec' for 9999999.99
col avg_row heading 'ROWs/exec' for 9999999.99
SELECT distinct
s.snap_id ,
PLAN_HASH_VALUE,
to_char(s.BEGIN_INTERVAL_TIME,'mm/dd/yy_hh24mi')|| to_char(s.END_INTERVAL_TIME,'_hh24mi') Date_Time,
SQL.executions_delta,
SQL.buffer_gets_delta/decode(nvl(SQL.executions_delta,0),0,1,SQL.executions_delta) avg_lio,
--SQL.ccwait_delta,
(SQL.cpu_time_delta/1000000)/decode(nvl(SQL.executions_delta,0),0,1,SQL.executions_delta) avg_cputime ,
(SQL.elapsed_time_delta/1000000)/decode(nvl(SQL.executions_delta,0),0,1,SQL.executions_delta) avg_etime,
SQL.DISK_READS_DELTA/decode(nvl(SQL.executions_delta,0),0,1,SQL.executions_delta) avg_pio,
SQL.rows_processed_total/decode(nvl(SQL.executions_delta,0),0,1,SQL.executions_delta) avg_row
--,SQL.sql_profile
FROM
dba_hist_sqlstat SQL,
dba_hist_snapshot s
WHERE
SQL.instance_number =(select instance_number from v$instance)
and SQL.dbid =(select dbid from v$database)
and s.snap_id = SQL.snap_id
AND sql_id in
('&SQLID') order by s.snap_id
/

xp_awr.sql

select plan_table_output from table (dbms_xplan.display_awr('&sql_id',null,null,
'ADVANCED +PEEKED_BINDS'));

SELECT to_char(TIME,'hh24:mi') , S.*
  FROM (SELECT NVL(WAIT_CLASS, 'CPU') ACTIVITY,
               TRUNC(SAMPLE_TIME, 'MI') TIME
          FROM GV$ACTIVE_SESSION_HISTORY) V   PIVOT(COUNT(*)  FOR ACTIVITY IN ('CPU' AS "CPU", 'Concurrency' AS "Concurrency", 'System I/O' AS "System I/O", 'User I/O' AS "User I/O", 'Administrative' AS "Administrative", 'Configuration' AS "Configuration", 'Application' AS "Application", 'Network' AS "Network", 'Commit' AS "Commit", 'Scheduler' AS "Scheduler", 'Cluster' AS "Cluster", 'Queueing' AS "Queueing", 'Other' AS "Other"))
 S
 WHERE TIME > SYSDATE - INTERVAL '500' MINUTE
 ORDER BY TIME

   SELECT *
    FROM (SELECT '1.v$sql'||'实例号:'||GV$SQL.inst_id source,
                 SQL_ID,
                 plan_hash_value,
                 TO_CHAR (FIRST_LOAD_TIME) begin_time,
                 '在cursor cache中' end_time,
                 executions "No. of exec",
                 (buffer_gets / executions) "LIO/exec",
                 (cpu_time / executions / 1000000) "CPUTIM/exec",
                 (elapsed_time / executions / 1000000) "ETIME/exec",
                 (disk_reads / executions) "PIO/exec",
                 (ROWS_PROCESSED / executions) "ROWs/exec"
            FROM Gv$SQL 
           WHERE sql_id = '&A'
          UNION ALL
          SELECT '2.sqltuning set' source,
                 sql_id,
                 plan_hash_value,
                 'JUST SQLSET NO DATE' begin_time,
                 'JUST SQLSET NO DATE' end_time,
                 EXECUTIONS "No. of exec",
                 (buffer_gets / executions) "LIO/exec",
                 (cpu_time / executions / 1000000) "CPUTIM/exec",
                 (elapsed_time / executions / 1000000) "ETIME/exec",
                 (disk_reads / executions) "PIO/exec",
                 (ROWS_PROCESSED / executions) "ROWs/exec"
            FROM dba_sqlset_statements
           WHERE SQL_ID = '&A'
          UNION ALL
          SELECT '3.dba_advisor_sqlstats' source,
                 sql_id,
                 plan_hash_value,
                 'JUST SQLSET NO DATE' begin_time,
                 'JUST SQLSET NO DATE' end_time,
                 EXECUTIONS "No. of exec",
                 (buffer_gets / executions) "LIO/exec",
                 (cpu_time / executions / 1000000) "CPUTIM/exec",
                 (elapsed_time / executions / 1000000) "ETIME/exec",
                 (disk_reads / executions) "PIO/exec",
                 (ROWS_PROCESSED / executions) "ROWs/exec"
            FROM dba_sqlset_statements
           WHERE SQL_ID = '&A'
          UNION ALL
          SELECT DISTINCT
                 '4.dba_hist_sqlstat' || '实例号:' || SQL.INSTANCE_NUMBER
                    source,
                 sql_id,
                 PLAN_HASH_VALUE,
                 TO_CHAR (s.BEGIN_INTERVAL_TIME ,'YYYY-MM-DD hh24:mi:ss') begin_time,
                 TO_CHAR (s.END_INTERVAL_TIME,'YYYY-MM-DD hh24:mi:ss') end_time,
                 SQL.executions_delta,
                 SQL.buffer_gets_delta
                 / DECODE (NVL (SQL.executions_delta, 0),
                           0, 1,
                           SQL.executions_delta)
                    "LIO/exec",
                 (SQL.cpu_time_delta / 1000000)
                 / DECODE (NVL (SQL.executions_delta, 0),
                           0, 1,
                           SQL.executions_delta)
                    "CPUTIM/exec",
                 (SQL.elapsed_time_delta / 1000000)
                 / DECODE (NVL (SQL.executions_delta, 0),
                           0, 1,
                           SQL.executions_delta)
                    "ETIME/exec",
                 SQL.DISK_READS_DELTA
                 / DECODE (NVL (SQL.executions_delta, 0),
                           0, 1,
                           SQL.executions_delta)
                    "PIO/exec",
                 SQL.ROWS_PROCESSED_DELTA
                 / DECODE (NVL (SQL.executions_delta, 0),
                           0, 1,
                           SQL.executions_delta)
                    "ROWs/exec"
            FROM dba_hist_sqlstat SQL, dba_hist_snapshot s
           WHERE     SQL.INSTANCE_NUMBER = s.INSTANCE_NUMBER
                 AND SQL.dbid = (SELECT dbid FROM v$database)
                 AND s.snap_id = SQL.snap_id
                 AND sql_id IN ('&A'))
ORDER BY source, begin_time DESC;


prompt 15 Most expensive SQL in the cursor cache


SELECT *
  FROM (SELECT SQL_ID,
               ELAPSED_TIME / 1000000 AS ELAPSED,
               SQL_TEXT
          FROM V$SQLSTATS
         ORDER BY ELAPSED_TIME DESC)
 WHERE ROWNUM <= 15;

prompt 15 Most expensive SQL in the workload repository


select * from (
select stat.sql_id as sql_id, sum(elapsed_time_delta) / 1000000 as elapsed,
(select to_char(substr(st.sql_text,1,55))
from dba_hist_sqltext st
where st.dbid = stat.dbid and st.sql_id = stat.sql_id) as sql_text_fragment
from dba_hist_sqlstat stat, dba_hist_sqltext text
where stat.sql_id = text.sql_id and
stat.dbid = text.dbid
group by stat.dbid, stat.sql_id
order by elapsed desc
) where ROWNUM <= 15;

19c New Feature:Real-time SQL Monitoring for Developers

The main scenarios are as follows:

    1.Database users can generate and view SQL Monitor reports for the SQL statements they themselves issued, without being granted any additional privileges.
    2.Users who have not been granted SELECT_CATALOG_ROLE cannot generate or view SQL Monitor reports for SQL statements executed by other users.
    3.Users who have been granted SELECT_CATALOG_ROLE can see SQL Monitor reports for SQL executed by other users.
You can generate and view SQL monitoring reports from the SQL*Plus command line with the DBMS_SQLTUNE.REPORT_SQL_MONITOR_LIST and DBMS_SQLTUNE.REPORT_SQL_MONITOR functions, or from Enterprise Manager (EM), just as in 11g (see Doc ID 1229904.1 and Doc ID 1380492.1).

TEST CASE:


    --1.create users
    conn / as sysdba

    --1-1.create Low-privileged users without DBA privilege
    --dep1 for SQL Statement using Tables

    drop user dep1 cascade;
    create user dep1 identified by dep1;
    alter user dep1 quota unlimited on users;
    grant connect,resource to dep1;

    --2.prepare test data
    conn dep1/dep1

    drop table testa_dep1;
    drop table testb_dep1;
    create table testa_dep1(c1 number, c2 char(100));
    create table testb_dep1(c1 number, c2 char(100));

    begin
      for i in 1 .. 200 loop
        for j in 1 .. 100 loop
          insert into testa_dep1 values(i,'A');
          commit;
        end loop;
      end loop;
    end;
    /

    begin
      for i in 1 .. 200 loop
        for j in 1 .. 100 loop
          insert into testb_dep1 values(i,'A');
          commit;
        end loop;
      end loop;
    end;
    /

    --2.Executing SQL and check SQL Monitor Active Report 
    conn dep1/dep1

    select /*+ use_nl(a b) */ count(*)
     from testa_dep1 a, testb_dep1 b
     where a.c1=b.c1;


    --3.Generate and view SQL Monitor List and Active Report 
    --should be able to view SQL monitor report of the SQL statements issued by user-self, without granting any additional privileges.
    --should not be able to view SQL monitor report of SQL statements issued by other users.

    --REPORT_SQL_MONITOR_LIST
    SET LONG 1000000
    SET LONGCHUNKSIZE 1000000
    SET LINESIZE 1000
    SET PAGESIZE 0
    SET TRIM ON
    SET TRIMSPOOL ON
    SET ECHO OFF
    SET FEEDBACK OFF
    spool monitor_list_sql_dep1_active.html
    SELECT DBMS_SQLTUNE.report_sql_monitor_list(type =>'ACTIVE',report_level => 'ALL') AS report FROM dual;
    spool off


    --REPORT_SQL_MONITOR
    set trimspool on
    set trim on
    set pages 0
    set linesize 1000
    set long 1000000
    set longchunksize 1000000
    spool monitor_sql_dep1.html
    select dbms_sqltune.report_sql_monitor(type=>'active') from dual;
    spool off

Oracle Optimizer: Star Transformation

Oracle 8i introduced the star transformation optimizer feature to process star queries more efficiently. Star queries are mostly found in data warehouse applications built on a star schema. The star schema gets its name because, drawn graphically, it looks like a starfish: the center of the star consists of one or more fact tables, while multiple dimension tables sit on the arms, as shown in the figure below:

The basic idea of star transformation is to avoid scanning the fact tables of the star schema directly: fact tables are always huge because they hold large volumes of data, and a full scan of them causes a great deal of physical I/O and is inefficient. In a typical star query, the fact table is joined to several dimension tables that are much smaller than it. A typical fact table holds one foreign key per dimension table, plus measure columns such as sales amount; the corresponding key acts as the primary key of the dimension table, and the join between fact table and dimension table normally takes place between that foreign key and the dimension table's primary key. Such queries also tend to carry highly selective filter predicates on other columns of the dimension tables. Combining the filter predicates of these dimension tables can sharply reduce the set of data that has to be accessed from the fact table. That is precisely the goal of star transformation: to access only the relevant, filtered-down subset of the fact table.
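As a rough sketch (hypothetical table and column names, not the SH demo used below), a minimal star schema looks like this:

-- Hypothetical minimal star schema: dimension tables around a central fact table
create table channels_dim (channel_id number primary key, channel_desc varchar2(30));
create table times_dim    (time_id date primary key, calendar_quarter_desc varchar2(10));
create table sales_fact (
  time_id     date   references times_dim(time_id),        -- foreign key to a dimension
  channel_id  number references channels_dim(channel_id),  -- foreign key to a dimension
  amount_sold number                                        -- measure column
);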

Oracle's Sample Schemas already include a star-schema design, for example SH:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> select * from global_name;

GLOBAL_NAME
-----------------------------------
www.askmac.cn

SQL> conn maclean/maclean
Connected.

SQL> select table_name,comments
  2    from dba_tab_comments
  3   where owner = 'SH'
  4     and table_name in ('SALES', 'CUSTOMERS', 'CHANNELS', 'TIMES');

TABLE_NAME                     COMMENTS
------------------------------ --------------------------------------------------------------------------------
CHANNELS                       small dimension table
CUSTOMERS                      dimension table
SALES                          facts table, without a primary key; all rows are uniquely identified by the comb
TIMES                          Time dimension table to support multiple hierarchies and materialized views

As the table comments above show, SALES is a fact table in the SH schema without a primary key, while the three small tables CHANNELS, CUSTOMERS and TIMES play the role of dimension tables. Let us build the following star query, which retrieves the direct-sales amounts, by city and calendar quarter, for all cities in Florida between December 1999 and February 2000.

SQL> col name for a35
SQL> col description for a45
SQL> col value for a8
SQL> select name,value,description from v$system_parameter where name='star_transformation_enabled';

NAME                                VALUE    DESCRIPTION
----------------------------------- -------- ---------------------------------------------
star_transformation_enabled         FALSE    enable the use of star transformation

/* The initialization parameter star_transformation_enabled controls whether and how
   star transformation is enabled. It defaults to FALSE and can be changed dynamically.
*/

SELECT c.cust_city,
       t.calendar_quarter_desc,
       SUM(s.amount_sold) sales_amount
  FROM sh.sales s, sh.times t, sh.customers c, sh.channels ch
 WHERE s.time_id = t.time_id
   AND s.cust_id = c.cust_id
   AND s.channel_id = ch.channel_id
   AND c.cust_state_province = 'FL'
   AND ch.channel_desc = 'Direct Sales'
   AND t.calendar_quarter_desc IN ('2000-01', '2000-02','1999-12')
 GROUP BY c.cust_city, t.calendar_quarter_desc;

SQL> select * from table(dbms_xplan.display_cursor(format => 'IOSTATS'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------

SQL_ID  ddjm7k72b8p2a, child number 1
-------------------------------------
SELECT /*+ gather_plan_statistics */ c.cust_city,
t.calendar_quarter_desc,        SUM(s.amount_sold) sales_amount   FROM
sh.sales s, sh.times t, sh.customers c, sh.channels ch  WHERE s.time_id
= t.time_id    AND s.cust_id = c.cust_id    AND s.channel_id =
ch.channel_id    AND c.cust_state_province = 'FL'    AND
ch.channel_desc = 'Direct Sales'    AND t.calendar_quarter_desc IN
('2000-01', '2000-02','1999-12')  GROUP BY c.cust_city,
t.calendar_quarter_desc

Plan hash value: 382868716

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |           |      1 |        |     24 |00:00:00.62 |    1735 |   1726 |
|   1 |  HASH GROUP BY                 |           |      1 |     24 |     24 |00:00:00.62 |    1735 |   1726 |
|*  2 |   HASH JOIN                    |           |      1 |   1580 |   6015 |00:00:00.42 |    1735 |   1726 |
|*  3 |    TABLE ACCESS FULL           | CUSTOMERS |      1 |   2438 |   2438 |00:00:01.73 |    1459 |   1455 |
|*  4 |    HASH JOIN                   |           |      1 |   4575 |  74631 |00:00:00.18 |     276 |    271 |
|   5 |     PART JOIN FILTER CREATE    | :BF0000   |      1 |    227 |    182 |00:00:00.04 |      59 |     60 |
|   6 |      MERGE JOIN CARTESIAN      |           |      1 |    227 |    182 |00:00:00.04 |      59 |     60 |
|*  7 |       TABLE ACCESS FULL        | CHANNELS  |      1 |      1 |      1 |00:00:00.01 |       3 |      6 |
|   8 |       BUFFER SORT              |           |      1 |    227 |    182 |00:00:00.02 |      56 |     54 |
|*  9 |        TABLE ACCESS FULL       | TIMES     |      1 |    227 |    182 |00:00:00.02 |      56 |     54 |
|  10 |     PARTITION RANGE JOIN-FILTER|           |      1 |    117K|    117K|00:00:00.09 |     217 |    211 |
|  11 |      TABLE ACCESS FULL         | SALES     |      2 |    117K|    117K|00:00:00.07 |     217 |    211 |
---------------------------------------------------------------------------------------------------------------

As you can see, the plan above (without star transformation) performs a full table scan of the SALES fact table, which is exactly what we want to avoid. Every row in SALES corresponds to one sale, so the table may contain millions of rows, yet only a tiny fraction of them are direct sales in Florida during the quarters specified in the query. If we enable star transformation, does the plan improve?

SQL> alter session set star_transformation_enabled=temp_disable;
Session altered.

SQL> alter session set events '10053 trace name context forever,level 1';
Session altered.

Ideally, star transformation would rewrite the original query into the following form:

SELECT c.cust_city,
       t.calendar_quarter_desc,
       SUM(s.amount_sold) sales_amount
  FROM sh.sales s, sh.times t, sh.customers c
 WHERE s.time_id = t.time_id
   AND s.cust_id = c.cust_id
   AND c.cust_state_province = 'FL'
   AND t.calendar_quarter_desc IN ('2000-01', '2000-02', '1999-12')
   AND s.time_id IN
       (SELECT time_id
          FROM sh.times
         WHERE calendar_quarter_desc IN ('2000-01', '2000-02', '1999-12'))
   AND s.cust_id IN
       (SELECT cust_id FROM sh.customers WHERE cust_state_province = 'FL')
   AND s.channel_id IN
       (SELECT channel_id
          FROM sh.channels
         WHERE channel_desc = 'Direct Sales')
 GROUP BY c.cust_city, t.calendar_quarter_desc;

/* Adding the AND ... IN subqueries makes explicit how the combined filter predicates reduce the data set that has to be processed */

The 10053 optimizer trace shows how the Oracle optimizer actually generates these filter predicates:

FPD: Considering simple filter push in query block SEL$C3AF6D21 (#1)
"S"."CHANNEL_ID"=ANY (SELECT /*+ SEMIJOIN_DRIVER */ "CH"."CHANNEL_ID" FROM "SH"."CHANNELS" "CH")
AND "S"."CUST_ID"=ANY (SELECT /*+ SEMIJOIN_DRIVER */ "C"."CUST_ID" FROM "SH"."CUSTOMERS" "C") AND
"S"."TIME_ID"=ANY (SELECT /*+ SEMIJOIN_DRIVER */ "T"."TIME_ID
FPD: Considering simple filter push in query block SEL$ACF30367 (#4)
"T"."CALENDAR_QUARTER_DESC"='2000-01' OR "T"."CALENDAR_QUARTER_DESC"='2000-02' OR "T"."CALENDAR_QUARTER_DESC"='1999-12'
try to generate transitive predicate from check constraints for query block SEL$ACF30367 (#4)
finally: "T"."CALENDAR_QUARTER_DESC"='2000-01' OR "T"."CALENDAR_QUARTER_DESC"='2000-02' OR "T"."CALENDAR_QUARTER_DESC"='1999-12'

FPD: Considering simple filter push in query block SEL$F6045C7B (#3)
"C"."CUST_STATE_PROVINCE"='FL'
try to generate transitive predicate from check constraints for query block SEL$F6045C7B (#3)
finally: "C"."CUST_STATE_PROVINCE"='FL'

FPD: Considering simple filter push in query block SEL$6EE793B7 (#2)
"CH"."CHANNEL_DESC"='Direct Sales'
try to generate transitive predicate from check constraints for query block SEL$6EE793B7 (#2)
finally: "CH"."CHANNEL_DESC"='Direct Sales'

try to generate transitive predicate from check constraints for query block SEL$C3AF6D21 (#1)
finally: "S"."CHANNEL_ID"=ANY (SELECT /*+ SEMIJOIN_DRIVER */ "CH"."CHANNEL_ID" FROM "SH"."CHANNELS" "CH")
AND "S"."CUST_ID"=ANY (SELECT /*+ SEMIJOIN_DRIVER */ "C"."CUST_ID" FROM "SH"."CUSTOMERS" "C")
AND "S"."TIME_ID"=ANY (SELECT /*+ SEMIJOIN_DRIVER */ "T"."TIME_ID

Final query after transformations:******* UNPARSED QUERY IS *******

The query after the final transformation:

SELECT "C"."CUST_CITY" "CUST_CITY",
       "T"."CALENDAR_QUARTER_DESC" "CALENDAR_QUARTER_DESC",
       SUM("S"."AMOUNT_SOLD") "SALES_AMOUNT"
  FROM "SH"."SALES" "S", "SH"."TIMES" "T", "SH"."CUSTOMERS" "C"
 WHERE "S"."CHANNEL_ID" = ANY (SELECT /*+ SEMIJOIN_DRIVER */
         "CH"."CHANNEL_ID" "ITEM_1"
          FROM "SH"."CHANNELS" "CH"
         WHERE "CH"."CHANNEL_DESC" = 'Direct Sales')
   AND "S"."CUST_ID" = ANY (SELECT /*+ SEMIJOIN_DRIVER */
         "C"."CUST_ID" "ITEM_1"
          FROM "SH"."CUSTOMERS" "C"
         WHERE "C"."CUST_STATE_PROVINCE" = 'FL')
   AND "S"."TIME_ID" = ANY
 (SELECT /*+ SEMIJOIN_DRIVER */
         "T"."TIME_ID" "ITEM_1"
          FROM "SH"."TIMES" "T"
         WHERE "T"."CALENDAR_QUARTER_DESC" = '2000-01'
            OR "T"."CALENDAR_QUARTER_DESC" = '2000-02'
            OR "T"."CALENDAR_QUARTER_DESC" = '1999-12')
   AND "S"."TIME_ID" = "T"."TIME_ID"
   AND "S"."CUST_ID" = "C"."CUST_ID"
   AND "C"."CUST_STATE_PROVINCE" = 'FL'
   AND ("T"."CALENDAR_QUARTER_DESC" = '2000-01' OR
       "T"."CALENDAR_QUARTER_DESC" = '2000-02' OR
       "T"."CALENDAR_QUARTER_DESC" = '1999-12')
 GROUP BY "C"."CUST_CITY", "T"."CALENDAR_QUARTER_DESC"

/* Somewhat more complex than we imagined: the IN clauses have been resolved into subqueries,
   and the filter predicates have been appended in AND ... = ANY form
*/

------------------------------------------------------------------+-----------------------------------+---------------+
| Id  | Operation                              | Name             | Rows  | Bytes | Cost  | Time      | Pstart| Pstop |
------------------------------------------------------------------+-----------------------------------+---------------+
| 0   | SELECT STATEMENT                       |                  |       |       |  1710 |           |       |       |
| 1   |  HASH GROUP BY                         |                  |  1254 |   77K |  1710 |  00:00:21 |       |       |
| 2   |   HASH JOIN                            |                  |  1254 |   77K |  1283 |  00:00:16 |       |       |
| 3   |    HASH JOIN                           |                  |  1254 |   45K |   877 |  00:00:11 |       |       |
| 4   |     TABLE ACCESS FULL                  | TIMES            |   227 |  3632 |    18 |  00:00:01 |       |       |
| 5   |     PARTITION RANGE SUBQUERY           |                  |  1254 |   26K |   858 |  00:00:11 | KEY(SUBQUERY)| KEY(SUBQUERY)|
| 6   |      TABLE ACCESS BY LOCAL INDEX ROWID | SALES            |  1254 |   26K |   858 |  00:00:11 | KEY(SUBQUERY)| KEY(SUBQUERY)|
| 7   |       BITMAP CONVERSION TO ROWIDS      |                  |       |       |       |           |       |       |
| 8   |        BITMAP AND                      |                  |       |       |       |           |       |       |
| 9   |         BITMAP MERGE                   |                  |       |       |       |           |       |       |
| 10  |          BITMAP KEY ITERATION          |                  |       |       |       |           |       |       |
| 11  |           BUFFER SORT                  |                  |       |       |       |           |       |       |
| 12  |            TABLE ACCESS FULL           | CHANNELS         |     1 |    13 |     3 |  00:00:01 |       |       |
| 13  |           BITMAP INDEX RANGE SCAN      | SALES_CHANNEL_BIX|       |       |       |           | KEY(SUBQUERY)| KEY(SUBQUERY)|
| 14  |         BITMAP MERGE                   |                  |       |       |       |           |       |       |
| 15  |          BITMAP KEY ITERATION          |                  |       |       |       |           |       |       |
| 16  |           BUFFER SORT                  |                  |       |       |       |           |       |       |
| 17  |            TABLE ACCESS FULL           | TIMES            |   227 |  3632 |    18 |  00:00:01 |       |       |
| 18  |           BITMAP INDEX RANGE SCAN      | SALES_TIME_BIX   |       |       |       |           | KEY(SUBQUERY)| KEY(SUBQUERY)|
| 19  |         BITMAP MERGE                   |                  |       |       |       |           |       |       |
| 20  |          BITMAP KEY ITERATION          |                  |       |       |       |           |       |       |
| 21  |           BUFFER SORT                  |                  |       |       |       |           |       |       |
| 22  |            TABLE ACCESS FULL           | CUSTOMERS        |  2438 |   38K |   406 |  00:00:05 |       |       |
| 23  |           BITMAP INDEX RANGE SCAN      | SALES_CUST_BIX   |       |       |       |           | KEY(SUBQUERY)| KEY(SUBQUERY)|
| 24  |    TABLE ACCESS FULL                   | CUSTOMERS        |  2438 |   62K |   406 |  00:00:05 |       |       |
------------------------------------------------------------------+-----------------------------------+---------------+
Predicate Information:
----------------------
2 - access("S"."CUST_ID"="C"."CUST_ID")
3 - access("S"."TIME_ID"="T"."TIME_ID")
4 - filter(("T"."CALENDAR_QUARTER_DESC"='1999-12' OR "T"."CALENDAR_QUARTER_DESC"='2000-01' OR "T"."CALENDAR_QUARTER_DESC"='2000-02'))
12 - filter("CH"."CHANNEL_DESC"='Direct Sales')
13 - access("S"."CHANNEL_ID"="CH"."CHANNEL_ID")
17 - filter(("T"."CALENDAR_QUARTER_DESC"='1999-12' OR "T"."CALENDAR_QUARTER_DESC"='2000-01' OR "T"."CALENDAR_QUARTER_DESC"='2000-02'))
18 - access("S"."TIME_ID"="T"."TIME_ID")
22 - filter("C"."CUST_STATE_PROVINCE"='FL')
23 - access("S"."CUST_ID"="C"."CUST_ID")
24 - filter("C"."CUST_STATE_PROVINCE"='FL')

The demonstration above shows that star transformation adds the necessary subquery predicates corresponding to the dimension-table constraints. These subquery predicates are also called bitmap semi-join predicates (see the SEMIJOIN_DRIVER hint). By iterating over the key values returned by the subqueries and then combining bitmaps with AND and OR operations (the bitmaps can come from bitmap indexes, but can also be built from ordinary B*tree indexes), we access only the rows of the fact table that are relevant to the query. Ideally the filter predicates on the dimension tables eliminate a large amount of data, which makes the plan far more efficient. Once the relevant fact-table rows have been fetched, that result set may still need to be joined back to the dimension tables using the original predicates. In some cases this join-back can be omitted, as we will see later.
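Star transformation here leans on bitmap indexes on the fact table's foreign-key columns. In the SH sample schema these already exist (SALES_CHANNEL_BIX, SALES_TIME_BIX, SALES_CUST_BIX); a sketch of how such indexes would be created on a partitioned fact table, if they were missing, is:

-- Sketch only: SH ships with these bitmap indexes pre-created
create bitmap index sales_channel_bix on sales (channel_id) local nologging;
create bitmap index sales_time_bix    on sales (time_id)    local nologging;
create bitmap index sales_cust_bix    on sales (cust_id)    local nologging;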

The plan above is the execution plan of the star-transformed query. Here Oracle accesses the SALES fact table with TABLE ACCESS BY LOCAL INDEX ROWID rather than a full table scan. We only care about plan lines 7-23: at lines (12, 17, 22) the server process fetches the relevant key values from each dimension table and applies a BUFFER SORT to those partial result sets; in the BITMAP INDEX RANGE SCAN operations at lines (13, 18, 23) it obtains the raw bitmaps from the three bitmap indexes on the fact table's dimension foreign keys (SALES_CHANNEL_BIX, SALES_TIME_BIX, SALES_CUST_BIX). Each bit in a bitmap corresponds to one row of the fact table: if the key value obtained from the subquery matches the value in the fact table, the bit is set to 1, otherwise 0. For example, the bitmap [1][0][1][1][0][0][0]..[0] (all zeros afterwards) says that only the first, third and fourth rows of the fact table match the key value supplied by the subquery. Assume this bitmap is the fact-table bitmap for one of the many key values supplied by the times subquery (say '2000-01').

Next, the BITMAP KEY ITERATION operations at plan lines (10, 15, 20) iterate over every key value supplied by the subqueries and fetch the corresponding bitmap. Assume the other two key values from the times subquery, '2000-02' and '1999-12', map to the bitmaps [0][0][0][0][0][1]..[0] and [0][0][0][0][1][0]...[0] respectively, i.e. each key value matches only one row.

The iteration obviously produces many bitmaps, so the bitmaps for the different key values then have to be merged (BITMAP MERGE, effectively an OR of the bitmaps), seen at lines (9, 14, 19) of the plan above. For our hypothetical times subquery, the merge yields a single bitmap [1][0][1][1][1][1][0][0]..[0], covering the first, third, fourth, fifth and sixth rows of the fact table; it is the OR of the bitmaps for the three key values '2000-01', '2000-02' and '1999-12'.

Before arriving at the final bitmap, the bitmaps from the three subqueries still need further processing: because the constraints in the original query are combined with AND, the merged bitmaps are ANDed together, shown as BITMAP AND at line 8 of the plan. Being an AND operation, this step again filters out a large number of rows. Assume the final bitmap is [1][0][1][0]...[0], i.e. only the first and third rows remain.

From the final bitmap Oracle can generate fact-table ROWIDs extremely efficiently, shown as BITMAP CONVERSION TO ROWIDS at line 7; these ROWIDs are then used to access the fact table and fetch the small set of truly relevant rows. Under our assumptions the final bitmap has only two bits set, so only two rows need to be fetched from the fact table by ROWID, avoiding an inefficient full table scan.

Omitting the join-back

Because the subqueries and the bitmap tree only use the dimension-table filters to strip large amounts of data off the fact table, the relevant rows fetched from the fact table may still have to be joined back to the dimension tables. The join-back can be omitted only when all predicates on the dimension table are part of the semi-join subquery predicate, AND the columns retrieved by the subquery are unique, AND no column of the dimension table appears in the SELECT list or GROUP BY. In the example above, CHANNELS does not need to be joined again because no CHANNELS column is selected (or grouped by) and channel_id is unique.

Temporary table transformation

When the join-back to a dimension table cannot be omitted in a star transformation, Oracle can store the result set of the dimension subquery in an in-memory global temporary table to avoid scanning the dimension table repeatedly. Furthermore, because the subquery result set is materialized, with parallel query each parallel slave can read directly from the temporary table holding the materialized results instead of re-executing the subquery.

The following example shows how Oracle uses the materialized temporary table to avoid repeated joins:

SQL> alter session set star_transformation_enabled=true;
Session altered.

SQL> alter session set events '10053 trace name context forever,level 1';
Session altered.

SELECT "T1"."C1" "CUST_CITY",
       "T"."CALENDAR_QUARTER_DESC" "CALENDAR_QUARTER_DESC",
       SUM("S"."AMOUNT_SOLD") "SALES_AMOUNT"
  FROM "SH"."SALES"                      "S",
       "SH"."TIMES"                      "T",
       "SYS"."SYS_TEMP_0FD9D660E_1DF5D6" "T1"
 WHERE "S"."CUST_ID" = ANY (SELECT /*+ SEMIJOIN_DRIVER CACHE_TEMP_TABLE ("T1") */
         "T1"."C0" "C0"
          FROM "SYS"."SYS_TEMP_0FD9D660E_1DF5D6" "T1")
   AND "S"."CHANNEL_ID" = ANY
 (SELECT /*+ SEMIJOIN_DRIVER */
         "CH"."CHANNEL_ID" "ITEM_1"
          FROM "SH"."CHANNELS" "CH"
         WHERE "CH"."CHANNEL_DESC" = 'Direct Sales')
   AND "S"."TIME_ID" = ANY
 (SELECT /*+ SEMIJOIN_DRIVER */
         "T"."TIME_ID" "ITEM_1"
          FROM "SH"."TIMES" "T"
         WHERE "T"."CALENDAR_QUARTER_DESC" = '2000-01'
            OR "T"."CALENDAR_QUARTER_DESC" = '2000-02'
            OR "T"."CALENDAR_QUARTER_DESC" = '1999-12')
   AND "S"."TIME_ID" = "T"."TIME_ID"
   AND "S"."CUST_ID" = "T1"."C0"
   AND ("T"."CALENDAR_QUARTER_DESC" = '2000-01' OR
       "T"."CALENDAR_QUARTER_DESC" = '2000-02' OR
       "T"."CALENDAR_QUARTER_DESC" = '1999-12')
 GROUP BY "T1"."C1", "T"."CALENDAR_QUARTER_DESC"

The above is the star-transformed query once the temporary table is enabled; the corresponding execution plan is as follows:
---------------------------------------------------------------------------+-----------------------------------+---------------+
| Id  | Operation                               | Name                     | Rows  | Bytes | Cost  | Time      | Pstart| Pstop |
---------------------------------------------------------------------------+-----------------------------------+---------------+
| 0   | SELECT STATEMENT                        |                          |       |       |   911 |           |       |       |
| 1   |  TEMP TABLE TRANSFORMATION              |                          |       |       |       |           |       |       |
| 2   |   LOAD AS SELECT                        |                          |       |       |       |           |       |       |
| 3   |    TABLE ACCESS FULL                    | CUSTOMERS                |  2438 |   62K |   406 |  00:00:05 |       |       |
| 4   |   HASH GROUP BY                         |                          |  1254 |   64K |   506 |  00:00:07 |       |       |
| 5   |    HASH JOIN                            |                          |  1254 |   64K |   479 |  00:00:06 |       |       |
| 6   |     HASH JOIN                           |                          |  1254 |   45K |   475 |  00:00:06 |       |       |
| 7   |      TABLE ACCESS FULL                  | TIMES                    |   227 |  3632 |    18 |  00:00:01 |       |       |
| 8   |      PARTITION RANGE SUBQUERY           |                          |  1254 |   26K |   456 |  00:00:06 | KEY(SUBQUERY)| KEY(SUBQUERY)|
| 9   |       TABLE ACCESS BY LOCAL INDEX ROWID | SALES                    |  1254 |   26K |   456 |  00:00:06 | KEY(SUBQUERY)| KEY(SUBQUERY)|
| 10  |        BITMAP CONVERSION TO ROWIDS      |                          |       |       |       |           |       |       |
| 11  |         BITMAP AND                      |                          |       |       |       |           |       |       |
| 12  |          BITMAP MERGE                   |                          |       |       |       |           |       |       |
| 13  |           BITMAP KEY ITERATION          |                          |       |       |       |           |       |       |
| 14  |            BUFFER SORT                  |                          |       |       |       |           |       |       |
| 15  |             TABLE ACCESS FULL           | CHANNELS                 |     1 |    13 |     3 |  00:00:01 |       |       |
| 16  |            BITMAP INDEX RANGE SCAN      | SALES_CHANNEL_BIX        |       |       |       |           | KEY(SUBQUERY)| KEY(SUBQUERY)|
| 17  |          BITMAP MERGE                   |                          |       |       |       |           |       |       |
| 18  |           BITMAP KEY ITERATION          |                          |       |       |       |           |       |       |
| 19  |            BUFFER SORT                  |                          |       |       |       |           |       |       |
| 20  |             TABLE ACCESS FULL           | TIMES                    |   227 |  3632 |    18 |  00:00:01 |       |       |
| 21  |            BITMAP INDEX RANGE SCAN      | SALES_TIME_BIX           |       |       |       |           | KEY(SUBQUERY)| KEY(SUBQUERY)|
| 22  |          BITMAP MERGE                   |                          |       |       |       |           |       |       |
| 23  |           BITMAP KEY ITERATION          |                          |       |       |       |           |       |       |
| 24  |            BUFFER SORT                  |                          |       |       |       |           |       |       |
| 25  |             TABLE ACCESS FULL           | SYS_TEMP_0FD9D660E_1DF5D6|  2438 |   12K |     4 |  00:00:01 |       |       |
| 26  |            BITMAP INDEX RANGE SCAN      | SALES_CUST_BIX           |       |       |       |           | KEY(SUBQUERY)| KEY(SUBQUERY)|
| 27  |     TABLE ACCESS FULL                   | SYS_TEMP_0FD9D660E_1DF5D6|  2438 |   36K |     4 |  00:00:01 |       |       |
---------------------------------------------------------------------------+-----------------------------------+---------------+

Predicate Information:
----------------------
3 - filter("C"."CUST_STATE_PROVINCE"='FL')
5 - access("S"."CUST_ID"="C0")
6 - access("S"."TIME_ID"="T"."TIME_ID")
7 - filter(("T"."CALENDAR_QUARTER_DESC"='1999-12' OR "T"."CALENDAR_QUARTER_DESC"='2000-01' OR "T"."CALENDAR_QUARTER_DESC"='2000-02'))
15 - filter("CH"."CHANNEL_DESC"='Direct Sales')
16 - access("S"."CHANNEL_ID"="CH"."CHANNEL_ID")
20 - filter(("T"."CALENDAR_QUARTER_DESC"='1999-12' OR "T"."CALENDAR_QUARTER_DESC"='2000-01' OR "T"."CALENDAR_QUARTER_DESC"='2000-02'))
21 - access("S"."TIME_ID"="T"."TIME_ID")
26 - access("S"."CUST_ID"="C0")

The trace above shows that the system-named temporary table SYS_TEMP_0FD9D660E_1DF5D6 caches the CUSTOMERS table; CUSTOMERS is then replaced by SYS_TEMP_0FD9D660E_1DF5D6, and the cust_id and cust_city columns of the original CUSTOMERS table become columns C0 and C1 of the temporary table aliased T1. In fact the temporary table only needs these two columns to satisfy the plan, so it is populated with the following query:

ST: Subquery text:******* UNPARSED QUERY IS *******
SELECT "C"."CUST_ID" "ITEM_1","C"."CUST_CITY" "ITEM_2" FROM "SH"."CUSTOMERS" "C" WHERE "C"."CUST_STATE_PROVINCE"='FL'
Copy query block qb# -1 () : SELECT /*+ CACHE_TEMP_TABLE(T1) */  "C0" FROM "SYS"."SYS_TEMP_0FD9D660E_1DF5D6" T1
ST: Subquery (temp table) text:******* UNPARSED QUERY IS *******
SELECT /*+ CACHE_TEMP_TABLE ("T1") */ "T1"."C0" "C0" FROM "SYS"."SYS_TEMP_0FD9D660E_1DF5D6" "T1"
Copy query block qb# -1 () : SELECT /*+ CACHE_TEMP_TABLE(T1) */  "C0", "C1" FROM "SYS"."SYS_TEMP_0FD9D660E_1DF5D6" T1
ST: Join back qbc text:******* UNPARSED QUERY IS *******
SELECT /*+ CACHE_TEMP_TABLE ("T1") */ "T1"."C0" "C0","T1"."C1" "C1" FROM "SYS"."SYS_TEMP_0FD9D660E_1DF5D6" "T1"

Plan lines 1, 2 and 3 (TEMP TABLE TRANSFORMATION / LOAD AS SELECT / TABLE ACCESS FULL CUSTOMERS) show how Oracle materializes the subquery into a temporary table. At line 25, Oracle uses that temporary table in place of the subquery to build the bitmap we need, and at line 27 it uses the temporary table directly for the join-back, avoiding another scan of CUSTOMERS. Because the filter predicate was already applied when the temporary table was built (see the subquery text shown above), no further filtering of the temporary table is required.

How to enable star transformation

Star transformation is controlled by the initialization parameter star_transformation_enabled, which takes three values:

  • TRUE: the optimizer automatically recognizes the fact table and the constrained dimension tables in the statement and performs the star transformation. All of this happens within the bounds of the CBO: the optimizer must determine that the cost of the transformed plan is lower than that of the untransformed one. It will also try to use materialized temporary tables, if doing so really helps.
  • FALSE: the optimizer does not consider star transformation.
  • TEMP_DISABLE: when a dimension table exceeds 100 blocks, simply setting star_transformation_enabled to TRUE makes the session create an in-memory global temporary table to hold the filtered dimension data, which has historically caused quite a few problems. The 100-block threshold is the default of the hidden parameter _temp_tran_block_threshold ("number of blocks for a dimension before we temp transform"); in addition, the hidden parameter _temp_tran_cache ("determines if temp table is created with cache option", default TRUE) decides whether such temporary tables are cached. To avoid the problems that creating global temporary tables can bring, the TEMP_DISABLE option disables the temporary tables, so the optimizer no longer considers materializing them.

The parameter defaults to FALSE. Why? Because star transformation only pays off in data-warehouse environments with a star-schema design, and only when every join column on the fact table is well indexed. If those conditions are met, you can use star transformation with confidence: set star_transformation_enabled to TRUE or TEMP_DISABLE.
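A minimal sketch of enabling it at session level and re-checking the plan of the demo query for star-transformation operators:

alter session set star_transformation_enabled = temp_disable;
-- re-run the star query here, then look for BITMAP AND / BITMAP KEY ITERATION /
-- BITMAP CONVERSION TO ROWIDS steps in the resulting plan
select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));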

Summary

Star transformation can effectively improve queries that join a large fact table to several dimension tables with good selectivity. It avoids the performance trap of full table scans by fetching only the truly relevant rows of the fact table. Because the transformation is cost-based, Oracle is well placed to decide whether it is beneficial; once the dimension-table filters cannot effectively reduce the data set that has to be processed from the fact table, a full table scan may well be the more appropriate choice.

We have tried to illustrate the basic idea of star transformation with simple queries and plans, but real production queries can be far more complex; for example, once a query touches several fact tables in the star schema, its complexity rises sharply. Building the right indexes on the fact tables, gathering histograms on the relevant columns, and gently steering the optimizer when it cannot decide correctly on its own are all challenges DBAs face in large data-warehouse environments.

11g New Feature: the IGNORE_ROW_ON_DUPKEY_INDEX Hint

11g introduced several powerful hints, among them IGNORE_ROW_ON_DUPKEY_INDEX. Its official definition is:
“The IGNORE_ROW_ON_DUPKEY_INDEX hint applies only to single-table INSERT operations. It is not supported for UPDATE, DELETE, MERGE, or multitable insert operations. IGNORE_ROW_ON_DUPKEY_INDEX  causes the statement to ignore a unique key violation for a specified set of columns or for a specified index. When a unique key violation is encountered, a row-level rollback occurs and execution resumes with the next input row. If you specify this hint when inserting data with DML error logging enabled, then the unique key violation is not logged and does not cause statement termination.”

For keys with a unique constraint, if the application was not designed to handle the ORA-00001 "unique constraint violated" error raised when a row with a duplicate key is inserted, and the error can therefore abort the program, reworking the application directly can be a very involved effort. Fortunately, 11g gives us the IGNORE_ROW_ON_DUPKEY_INDEX hint: adding it to a single-table INSERT statement makes Oracle silently ignore rows with duplicate key values instead of raising ORA-00001, letting the program carry on, which is a very convenient compromise. Before using the feature in production, it is worth comparing its performance against handling the unique-constraint violation with an exception.
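As a minimal sketch of the hint on its own (using the youyus table and the youyus_uk unique index created in the test case below), a duplicate key simply inserts zero rows instead of raising ORA-00001:

insert into youyus (t1, t2, t3) values (1, 'FIRST', 'INSERT TEST');      -- 1 row created
insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(youyus,youyus_uk) */
  into youyus (t1, t2, t3) values (1, 'DUPLICATE', 'INSERT TEST');       -- 0 rows created, no ORA-00001
commit;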

SQL> drop table youyus ;

Table dropped.

SQL> create table youyus (t1 int ,t2 varchar2(20),t3 varchar2(30)) tablespace users;

Table created

SQL> create unique index youyus_uk on youyus(t1) tablespace users;

Index created
/* Clean up and create the unique index */

SQL> alter system set optimizer_dynamic_sampling=1;

System altered.

SQL> alter system flush shared_pool;

System altered.

SQL> alter system flush buffer_cache;

System altered.

/* The following block draws 200,000 random integers between 0 and 30,000, guaranteeing that many rows
with duplicate t1 key values are attempted, so we can measure the cost of handling them with the
DUP_VAL_ON_INDEX exception. Here and below the commit is placed outside the loop so that massive
commits do not skew the results. */

declare
  rnd int;
begin
  /* Handle duplicate-key inserts with an exception */
  for i in 1 .. 200000 loop
    BEGIN
      select round(dbms_random.value * 30000) into rnd from dual;
      insert into youyus
        (t1, t2, t3)
      values
        (rnd, 'DUPLICATE', 'INSERT TEST');
    exception
      when DUP_VAL_ON_INDEX then
        continue;
    end;
  end loop;
  commit;
end;
/

SQL> select plsql_exec_time,cpu_time,elapsed_time,user_io_wait_time from v$sql where sql_text like 'declare%exception%';

PLSQL_EXEC_TIME   CPU_TIME ELAPSED_TIME USER_IO_WAIT_TIME
--------------- ---------- ------------ -----------------
        4392268   88296566     92345066            621020
/* Elapsed time 92s, CPU time 88s, PL/SQL execution time 4s */

SQL> select count(*) from youyus;

  COUNT(*)
----------
     29958

SQL> truncate table youyus;

Table truncated.

SQL> alter system flush shared_pool;

System altered.

SQL> alter system flush buffer_cache;

System altered.

/* Checking whether the value violates the unique constraint before inserting sounds like a good idea, though it is somewhat more tedious to write */

declare
  dup_count int;
  rnd       int;
begin
   /* Check-before-insert approach to avoid violating the unique constraint */
  for i in 1 .. 200000 loop
    BEGIN
      select round(dbms_random.value * 30000) into rnd from dual;
      select count(*) into dup_count from youyus where t1 = rnd;
      IF (dup_count = 0) then
        insert into youyus
          (t1, t2, t3)
        values
          (rnd, 'DUPLICATE', 'INSERT TEST');
      END IF;
    END;
  END LOOP;
  commit;
end;
/

SQL> select plsql_exec_time,cpu_time,elapsed_time,user_io_wait_time  from v$sql where sql_text like 'declare%check%';

PLSQL_EXEC_TIME   CPU_TIME ELAPSED_TIME USER_IO_WAIT_TIME
--------------- ---------- ------------ -----------------
        2153769   15709301     18265730            679813

/* PL/SQL execution time drops to 2s, and total CPU time falls sharply to 15s */
/*** The comparison shows that exception handling is a CPU-intensive operation ***/

SQL> select count(*) from youyus;

  COUNT(*)
----------
     29968

SQL> truncate table youyus;

Table truncated.

SQL> alter system flush shared_pool;

System altered.

SQL> alter system flush buffer_cache;

System altered.

declare
  rnd int;
begin
/* IGNORE_ROW_ON_DUPKEY_INDEX hint approach */
  for i in 1 .. 200000 loop
    select round(dbms_random.value * 30000) into rnd from dual;
    insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(YOUYUS,YOUYUS_UK) */
    into youyus
      (t1, t2, t3)
    values
      (rnd, 'DUPLICATE', 'INSERT TEST');
  end loop;
  commit;
end;
/

SQL> select plsql_exec_time,cpu_time,elapsed_time,user_io_wait_time  from v$sql where sql_text like 'declare%IGNORE%';

PLSQL_EXEC_TIME   CPU_TIME ELAPSED_TIME USER_IO_WAIT_TIME
--------------- ---------- ------------ -----------------
        2377262   78452903     84209306            623539

SQL> select count(*) from youyus;

  COUNT(*)
----------
     29959

/***
With the IGNORE_ROW_ON_DUPKEY_INDEX hint, CPU_TIME is about 11% lower than with the exception
approach, but still far higher than with check-before-insert. In terms of how much the application
has to change, IGNORE_ROW_ON_DUPKEY_INDEX is cheaper than the exception approach, which in turn
is cheaper than check-before-insert; check-before-insert has the lowest CPU cost but the highest
cost in time spent modifying the program.
***/

/* Note that IGNORE_ROW_ON_DUPKEY_INDEX differs slightly from the hints we are used to: using it incorrectly raises an error */

declare
  rnd int;
begin
/* IGNORE_ROW_ON_DUPKEY_INDEX hint with an invalid index name */
  for i in 1 .. 200000 loop
    select round(dbms_random.value * 30000) into rnd from dual;
    insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(YOUYUS,I_AM_ERROR) */
    into youyus
      (t1, t2, t3)
    values
      (rnd, 'DUPLICATE', 'INSERT TEST');
  end loop;
  commit;
end;
/
ERROR at line 1:
ORA-38913: Index specified in the index hint is invalid
ORA-06512: at line 7

Time spent in each phase by the three approaches above (chart):

In one sentence: IGNORE_ROW_ON_DUPKEY_INDEX is tailor-made for the lazy developer.

The SQL Parsing Flow in Oracle

The main flow of SQL parsing in Oracle:
(diagram: sql_parse_digram)

The notion of a "cursor" is rather overloaded: it can mean a cursor in the client program, a private cursor in the server process, or a shared cursor in the server-side shared pool. Once a cursor has been opened, its shared-cursor information (execution plan, optimizer tree, and so on) generally stays in the SQL AREA, so no further soft or hard parse is needed.

SESSION_CACHED_CURSORS is an Oracle initialization parameter (changing it at the system level requires an instance restart); it sets an upper bound on the number of cursors cached per session (kept in the PGA). An "open cursor" request from the client program is still passed to the server process; the server process first scans its own cached cursor information, and if it finds a hit it can skip the soft parse, which some people call a "softer soft parse".

HOLD_CURSOR is a precompiler option that controls whether private cursors should be cached; we will not go into it here.

In the tkprof analysis tool, hard parses and soft parses are treated alike, both counted simply as parses; a soft parse therefore still increases the total parse count.

A soft parse skips several steps of a hard parse, but it still includes the initial syntax and semantic analysis and the computation of the statement's hash value, which is compared with the statements already in the SQL AREA; if a match is found, expensive operations such as query optimization are avoided.

Also note that the 10053 event fires only during a hard parse.
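A minimal sketch (standard v$ views) for watching how many parses the current session performs and how often the session cursor cache spares one:

select n.name, s.value
  from v$statname n, v$mystat s
 where n.statistic# = s.statistic#
   and n.name in ('parse count (total)', 'parse count (hard)',
                  'session cursor cache hits', 'session cursor cache count');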

How to check and disable Adaptive Cursor Sharing in 11g

_optimizer_adaptive_cursor_sharing=false disables the feature.

There are two new columns in V$SQL, IS_BIND_SENSITIVE and IS_BIND_AWARE, that indicate the status of individual cursors.
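A minimal sketch for checking those flags for a statement of interest (substitute the SQL_ID):

select sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
  from v$sql
 where sql_id = '&sql_id';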

1.) The parameter "_optimizer_adaptive_cursor_sharing" can be changed on the fly. This means that issuing alter system set "_optimizer_adaptive_cursor_sharing" = false | true; will be reflected in any existing session.

Remember, to disable ACS in 11g you should also set: alter system set "_optimizer_extended_cursor_sharing_rel" = 'NONE';

The parameter can be set at session or system level.
When set to NONE it stops the code from maintaining the internal statistical data about the binds.

 

I also advise setting "_optimizer_extended_cursor_sharing" = 'NONE'.

 

2.) show parameter will also retrieve non-default settings of hidden parameters:

sho parameter adapt
_optimizer_adaptive_cursor_sharing boolean FALSE

1.) non-default hidden (= underscore) parameters are shown with "show parameter"
2.) the settings of hidden (= underscore) parameters are not supposed to be queried by end users.
3.) You may use 10053 tracing to obtain the values of optimizer-related parameters

sqlplus
set lines 200
set null null
set pages 99
set timi on
set time on

alter session set max_dump_file_size=unlimited;
alter session set events '10053 trace name context forever, level 1';

-- execute a statement causing a hard parse:

select /* a new comment */ * from dual;

exit

-> Use an editor or a Unix command (e.g. grep) to search for the _optimizer_adaptive_cursor_sharing parameter in the trace file.

If you want to restore optimizer_features_enable behaviour from 11.2.0.1 to 10.2.0.4, then you set:

alter session set "_optimizer_undo_cost_change" = '10.2.0.4'; -- 11.2.0.1
alter session set "_optimizer_null_aware_antijoin" = false; -- true
alter session set "_optimizer_extend_jppd_view_types" = false; -- true
alter session set "_replace_virtual_columns" = false; -- true
alter session set "_first_k_rows_dynamic_proration" = false; -- true
alter session set "_bloom_pruning_enabled" = false; -- true
alter session set "_optimizer_multi_level_push_pred" = false; -- true
alter session set "_optimizer_group_by_placement" = false; -- true
alter session set "_optimizer_extended_cursor_sharing_rel" = none; -- simple
alter session set "_optimizer_adaptive_cursor_sharing" = false; -- true
alter session set "_optimizer_improve_selectivity" = false ; -- true
alter session set "_optimizer_enable_density_improvements" = false; -- true
alter session set "_optimizer_native_full_outer_join" = off; -- force
alter session set "_optimizer_enable_extended_stats" = false; -- true
alter session set "_nlj_batching_enabled" = 0; -- 1
alter session set "_optimizer_extended_stats_usage_control" = 255; -- 224
alter session set "_bloom_folding_enabled" = false; -- true
alter session set "_optimizer_coalesce_subqueries" = false; -- true
alter session set "_optimizer_fast_pred_transitivity" = false; -- true
alter session set "_optimizer_fast_access_pred_analysis" = false; -- true
alter session set "_optimizer_unnest_disjunctive_subq" = false; -- true
alter session set "_optimizer_unnest_corr_set_subq" = false; -- true
alter session set "_optimizer_distinct_agg_transform" = false; -- true
alter session set "_aggregation_optimization_settings" = 32; -- 0
alter session set "_optimizer_connect_by_elim_dups" = false; -- true
alter session set "_optimizer_eliminate_filtering_join" = false; -- true
alter session set "_connect_by_use_union_all" = old_plan_mode; --true
alter session set "_optimizer_join_factorization" = false; -- true
alter session set "_optimizer_use_cbqt_star_transformation" = false; -- true
alter session set "_optimizer_table_expansion" = false ; -- true
alter session set "_and_pruning_enabled" = false ; -- true
alter session set "_optimizer_distinct_placement" = false ; -- true
alter session set "_optimizer_use_feedback" = false ; -- true
alter session set "_optimizer_try_st_before_jppd" = false ; -- true

[SQL Tuning] Top-N and Pagination Queries in Oracle

Below are some recommended ways to write Top-N and pagination queries in Oracle:



CREATE INDEX i_pop ON cities(population);


TOP-N


SELECT * FROM (
  SELECT name, population
  FROM cities
  ORDER BY population DESC
) WHERE rownum <= 5;


SELECT * FROM (
  SELECT name, population
  FROM cities
  WHERE state='Florida'
  ORDER BY population DESC
) WHERE rownum <= 5;



Pagination

SELECT * FROM (
  SELECT t.*, rownum AS rn
  FROM (
    SELECT name, population
    FROM cities
    WHERE state='Florida'
    ORDER BY population DESC
  ) t
  WHERE rownum <= 20
) WHERE rn > 10;


Top-N with joins


SELECT * FROM (
  SELECT c.name as city, c.population, s.capital
  FROM cities c, states s
  WHERE c.state_id = s.id
    AND c.state='Florida'
  ORDER BY c.population DESC
) WHERE rownum <= 5
/




OPT_PARAM Hint

Applies to:
Oracle Server – Enterprise Edition – Version: 10.2.0.1 to 10.2.0.3 – Release: 10.2 to 10.2
Information in this document applies to any platform.
Goal
This article explains the new optimizer hint "OPT_PARAM" introduced in 10g R2.
Solution
"OPT_PARAM" is a new optimizer hint introduced in 10g Release 2. The hint behaves the same way as
setting a parameter (e.g. using alter session), except that the effect applies to the statement only. The hint only works for optimizer parameters. Global parameters such as optimizer_features_enable are not covered, but optimizer_features_enable specifically has its own hint:

/*+ optimizer_features_enable('9.2.0') */

For an INTERNAL list of usable parameters see Note 986618.1 "Parameters useable by OPT_PARAM hint".
Hint Syntax
The syntax is:

/*+ opt_param(<parameter_name> [,] <parameter_value>) */

parameter_name is the name of a parameter
parameter_value is its value.

If the parameter contains a numeric value, the parameter value has to be specified without quotes.
The hint can be used to set multiple parameters by repeating the hint, i.e.

/*+ opt_param(<parameter_name> [,] <parameter_value>)
    opt_param(<parameter_name> [,] <parameter_value>)
*/

Basic Usage Example
For example, the following hint sets hash_join_enabled to 'false' when added to a statement:

/*+ opt_param('hash_join_enabled','false') */

e.g.:
Without the hint:


SQL> select empno from emp e, dept d where e.ename=d.dname

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     8 |   160 |     7  (15)| 00:00:01 |
|*  1 |  HASH JOIN         |      |     8 |   160 |     7  (15)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| DEPT |     4 |    40 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| EMP  |    28 |   280 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

With the hint the hash join is disabled choosing a different plan:


SQL> select /*+ opt_param('hash_join_enabled','false') */ empno 
from emp e, dept d where e.ename=d.dname;


----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     8 |   160 |     8  (25)| 00:00:01 |
|   1 |  MERGE JOIN         |      |     8 |   160 |     8  (25)| 00:00:01 |
|   2 |   SORT JOIN         |      |     4 |    40 |     4  (25)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| DEPT |     4 |    40 |     3   (0)| 00:00:01 |
|*  4 |   SORT JOIN         |      |    28 |   280 |     4  (25)| 00:00:01 |
|   5 |    TABLE ACCESS FULL| EMP  |    28 |   280 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------



Multiple Parameter Settings example
The OPT_PARAM hint can be specified more than once in the same statement to adjust more than one parameter at a time, as follows:


/*+ OPT_PARAM('_always_semi_join' 'off')
      OPT_PARAM('_b_tree_bitmap_plans' 'false')
      OPT_PARAM('query_rewrite_enabled' 'false')
      OPT_PARAM('_new_initial_join_orders' 'false')
      OPT_PARAM('optimizer_dynamic_sampling' 1)
      OPT_PARAM('optimizer_index_cost_adj' 1) */

Tune Very Large Hash Join

set timing on;
alter session set workarea_size_policy=MANUAL;
alter session set workarea_size_policy=MANUAL;

alter session set db_file_multiblock_read_count=512;
alter session set db_file_multiblock_read_count=512;

alter session set events '10351 trace name context forever, level 128';

alter session set hash_area_size=524288000;
alter session set hash_area_size=524288000;

alter session set "_hash_multiblock_io_count"=128;
alter session set "_hash_multiblock_io_count"=128;

alter session enable parallel query;

select /*+   pq_distribute(a hash,hash) parallel(a) parallel(b) */ column1,column2....
  from source_tab a, driving_tab b
 where  condition
 ;



--PQ_DISTRIBUTE(tab,out,in) How to distribute rows from tab in a PQ
--(out/in may be HASH/NONE/BROADCAST/PARTITION)
