The Price of Oracle Exadata V2

Conor O'Mahony, the well-known DB2 product marketing manager, has listed on his blog the costs of a full-rack Exadata V2 system:

  1. $1,150,000 hardware
  2. $1,680,000 storage server software
  3. $369,600 storage server software support and maintenance (at 22%)
  4. $1,520,000 Oracle Database Enterprise Edition licenses ($47.5k*8 servers*8 cores*0.5 Intel core factor)
  5. $736,000 Oracle RAC licenses ($23k*8 servers*8 cores*0.5 Intel core factor)
  6. $368,000 Oracle Partitioning option ($11.5k*8 servers*8 cores*0.5 Intel core factor)
  7. $368,000 Oracle Advanced Compression option ($11.5k*8 servers*8 cores*0.5 Intel core factor)
  8. $160,000 Oracle Enterprise Manager Diagnostics Pack (recommended)
  9. $160,000 Oracle Enterprise Manager Tuning Pack (recommended)
  10. $728,640 first-year software support and maintenance, at 22%, for everything above except the storage server software
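As a quick sanity check on the arithmetic: each per-core license line is the list price times 8 servers times 8 cores times the 0.5 Intel core factor, e.g. item 4 is $47,500 * 8 * 8 * 0.5 = $1,520,000; and item 10 is 22% of the database-tier software in items 4 through 9, i.e. 0.22 * ($1,520,000 + $736,000 + $368,000 + $368,000 + $160,000 + $160,000) = 0.22 * $3,312,000 = $728,640.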

Note that the hybrid columnar compression capability mentioned here alongside Advanced Compression can only be used on Exadata V2 storage, so if you want that feature you had better have plenty of cash ready. For more on hybrid columnar compression, see here.
Including software licenses and first-year maintenance and support, the total comes to $7,240,240; and the figures above do not include installation costs.
All of these prices are publicly documented, and they show that the Oracle Exadata V2 database machine is still a rather "luxurious" piece of equipment, even though it is moving ever closer to our ideal Database Machine.
That said, given Oracle's sales model, when a product has just been launched, or urgently needs to win market share against fierce competition, the vast majority of orders carry some amount of discount. Those details are of course a commercial secret between Oracle and each customer, and outsiders have no way of knowing them. The total quoted on Conor O'Mahony's blog can therefore only be called the most "pessimistic estimate"; it is also, quite obviously, an attack on a competitor and a piece of cheap advertisement.

While Oracle 11g is pushing out a whole series of new technologies, what is IBM's DB2 doing?

DB2 is working on Oracle compatibility: PL/SQL support, easy migration from Oracle to DB2, SAP certification; it is courting Oracle's customers and attacking Oracle on price. This series of moves is presumably an important stage in IBM's campaign against Oracle, but it also suggests that its RDBMS technology really does lag behind Oracle's.

Heh, a worthy army should have no need to win by unfair means! IBM is simply living up to its nature as a commercial company.

Technology-driven companies such as SUN have proved with their own demise that a company's most fundamental reason for existing is its commercial value, not how advanced its technology is or how novel its ideas are; and yet, to ordinary people who worship technology, companies of that kind seem the more ideal.

Oracle and IBM are both commercial companies, unlike SUN; it is just that, by comparison, IBM is far more seasoned and far more commercial.

In memory of SUN!

Script: Collecting UNDO Diagnostic Information

The following script collects the diagnostic information needed for Automatic Undo Management; run it as sysdba:
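For example, assuming the script below has been saved as undo_diag.sql (a hypothetical file name), it can be run like this; the output is spooled to Undo_Diag.out in the current directory:

SQL> connect / as sysdba
Connected.
SQL> @undo_diag.sql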

spool Undo_Diag.out  

ttitle off
set pages 999
set lines 150
set verify off 

set termout off
set trimout on
set trimspool on

REM   
REM ------------------------------------------------------------------------  
  
REM   
REM  -----------------------------------------------------------------  
REM  
  
set space 2  

REM  REPORTING TABLESPACE INFORMATION: 
REM   
REM  This looks at Tablespace Sizing - Total bytes and free bytes  
REM   
 
column tablespace_name  format a30            heading 'TS Name'  
column sbytes           format 9,999,999,999  heading 'Total MBytes'  
column fbytes           format 9,999,999,999  heading 'Free MBytes'  
column file_name        format a30            heading 'File Name'
column kount            format 999            heading 'Ext'  
 
compute sum of fbytes on tablespace_name  
compute sum of sbytes on tablespace_name  
compute sum of sbytes on report  
compute sum of fbytes on report  
 
break on tablespace_name skip 2  
 
select a.tablespace_name,  a.file_name,  round(a.bytes/1024/1024,0) sbytes,  
       round(sum(b.bytes/1024/1024),0) fbytes,  count(*) kount, autoextensible  
from   dba_data_files a,  dba_free_space b  
where  a.file_id  =  b.file_id  
and a.tablespace_name in (select z.tablespace_name from dba_tablespaces z where retention like '%GUARANTEE')
group  by a.tablespace_name, a.file_name, a.bytes, autoextensible
order  by a.tablespace_name  
/  
 
set linesize 160  
 
 
REM   
REM  If you can significantly reduce physical reads by adding incremental  
REM  data buffers...do it.  To determine whether adding data buffers will  
REM  help, set db_block_lru_statistics = TRUE and  
REM  db_block_lru_extended_statistics = TRUE in the init.ora parameters.  
REM  You can determine how many extra hits you would get from memory as  
REM  opposed to physical I/O from disk.  **NOTE:  Turning these on will  
REM  impact performance.  One shift of statistics gathering should be enough  
REM  to get the required information.  
REM   
  

REM   
REM  -----------------------------------------------------------------  
REM

set lines 160

col tablespace_name format a30 heading "Tablespace"
col tb format a15 heading "TB Status"
col df format a10 heading "DF Status"
col extent_management format a15 heading "Extent|Management"
col allocation_type format a8 heading "Type"
col segment_space_management format a7 heading "Auto|Segment"
col retention format a11 heading "Retention|Level"
col autoextensible format a5 heading "Auto?"
col mx format 999,999,999 heading "Max Allowed"

select t.tablespace_name, t.status tb, d.status df,
extent_management, allocation_type, segment_space_management, retention,
autoextensible, (maxbytes/1024/1024) mx
from dba_tablespaces t, dba_data_files d
where t.tablespace_name = d.tablespace_name
and retention like '%GUARANTEE'
/


col status format a20 head "Status"
col cnt format 999,999,999 head "How Many?"

select status, count(*) cnt
from dba_rollback_segs
group by status
/


  

set termout on
set trimout off
set trimspool off
set lines 120
set pages 999

set termout off
set trimout on
set trimspool on

alter session set nls_date_format='dd-Mon-yyyy hh24:mi';


prompt
prompt  ############## RUNTIME ############## 
prompt

col rdate head "Run Time"

select sysdate rdate from dual;

prompt 
prompt  ############## DATAFILES ############## 
prompt 

col retention head "Retention"
col tablespace_name format a30 head "TBSP Name"
col file_id format 999 head "File #"
col a format 999,999,999,999,999 head "Bytes Alloc (MB)"
col b format 999,999,999,999,999 head "Max Bytes Used (MB)"
col autoextensible head "Auto|Ext"
col extent_management head "Ext Mngmnt"
col allocation_type head "Type"
col segment_space_management head "SSM"

select tablespace_name, file_id, sum(bytes)/1024/1024 a, 
       sum(maxbytes)/1024/1024 b, 
       autoextensible
from dba_data_files
where tablespace_name in (select tablespace_name from dba_tablespaces
   where retention like '%GUARANTEE' )
group by file_id, tablespace_name, autoextensible
order by tablespace_name
/

set termout on
set trimout off
set trimspool off

ttitle off
set pages 999
set lines 150
set verify off 

set termout off
set trimout on
set trimspool on

REM   
REM ------------------------------------------------------------------------  
  
REM   
REM  -----------------------------------------------------------------  
REM  
  
REM
REM  REPORTING UNDO EXTENTS INFORMATION:  
REM   
REM  -----------------------------------------------------------------  
REM 
REM  Undo Extents breakdown information
REM

ttitle center "Rollback Segments Breakdown" skip 2

col status format a20
col cnt format 999,999,999 head "How Many?"

select status, count(*) cnt from dba_rollback_segs
group by status
/

ttitle center "Undo Extents" skip 2

col segment_name format a30 heading "Name"
col "ACT BYTES" format 999,999,999,999 head "Active|Extents"
col "UNEXP BYTES" format 999,999,999,999 head "Unxpired|Extents"
col "EXP BYTES" format 999,999,999,999 head "Expired|Extents"

select segment_name,
 nvl(sum(act),0) "ACT BYTES",
 nvl(sum(unexp),0) "UNEXP BYTES",
 nvl(sum(exp),0) "EXP BYTES"
 from (
  select segment_name,
         nvl(sum(bytes),0) act,00 unexp, 00 exp
    from DBA_UNDO_EXTENTS
   where status='ACTIVE' group by segment_name
  union
  select segment_name,
         00 act, nvl(sum(bytes),0) unexp, 00 exp
    from DBA_UNDO_EXTENTS
   where status='UNEXPIRED' group by segment_name
  union
  select segment_name,
         00 act, 00 unexp, nvl(sum(bytes),0) exp
    from DBA_UNDO_EXTENTS
   where status='EXPIRED' group by segment_name
) group by segment_name;

ttitle center "Undo Extents Statistics" skip 2

col size format 999,999,999,999 heading "Size"
col "HOW MANY" format 999,999,999 heading "How Many?"
col st format a12 heading "Status"

select distinct status st, count(*) "HOW MANY", sum(bytes) "SIZE"
from dba_undo_extents
group by status
/

col segment_name format a30 heading "Name"
col TABLESPACE_NAME for a20
col BYTES for 999,999,999,999
col BLOCKS for 999,999,999
col status for a15 heading "Status"
col segment_name heading "Segment"
col extent_id heading "ID"


select SEGMENT_NAME, TABLESPACE_NAME, EXTENT_ID, 
      FILE_ID, BLOCK_ID, BYTES, BLOCKS, STATUS
from dba_undo_extents
order by 1,3,4,5
/


REM
REM  -----------------------------------------------------------------  
REM 
REM  Undo Extents Contention breakdown
REM  Take out column TUNED_UNDORETENTION if customer 
REM   prior to 10.2.x
REM
REM   The time frame can be adjusted with this query
REM   By default using around 4 hour window of time
REM
REM   Ex.
REM   Using sysdate-.04 looking at the last hour
REM   Using sysdate-.16 looking at the last 4 hours
REM   Using sysdate-.32 looking at the last 8 hours
REM   Using sysdate-1 looking at the last 24 hours
REM

set linesize 140

ttitle center "Undo Extents Error Conditions (Default - Last 4 Hours)" skip 2


col UNXPSTEALCNT format 999,999,999  heading "# Unexpired|Stolen"
col EXPSTEALCNT format 999,999,999   heading "# Expired|Reused"
col SSOLDERRCNT format 999,999,999   heading "ORA-1555|Error"
col NOSPACEERRCNT format 999,999,999 heading "Out-Of-space|Error"
col MAXQUERYLEN format 999,999,999   heading "Max Query|Length"
col TUNED_UNDORETENTION format 999,999,999  heading "Auto-Adjusted|Undo Retention"
col hours format 999,999 heading "Tuned|(HRs)"

select inst_id, to_char(begin_time,'MM/DD/YYYY HH24:MI') begin_time, 
     UNXPSTEALCNT, EXPSTEALCNT , SSOLDERRCNT, NOSPACEERRCNT, MAXQUERYLEN,
     TUNED_UNDORETENTION, TUNED_UNDORETENTION/60/60 hours
from gv$undostat
where begin_time between (sysdate-.16) 
                     and sysdate
order by inst_id, begin_time
/

  
set termout on
set trimout off
set trimspool off


ttitle off
set pages 999
set lines 150
set verify off 
set termout off
set trimout on
set trimspool on

REM   
REM ------------------------------------------------------------------------  
  
col name format a30  
col gets format 9,999,999  
col waits format 9,999,999  
 
PROMPT  ROLLBACK HIT STATISTICS:  
REM   
  
REM  GETS - # of gets on the rollback segment header 
REM  WAITS - # of waits for the rollback segment header  
  
set head on;  
 
select name, waits, gets  
from   v$rollstat, v$rollname  
where  v$rollstat.usn = v$rollname.usn  
/  
 
col pct head "< 2% ideal"
 
select 'The average of waits/gets is '||  
   round((sum(waits) / sum(gets)) * 100,2)||'%' PCT 
From    v$rollstat  
/  
  

  
PROMPT  REDO CONTENTION STATISTICS:

REM   
REM  If the ratio of waits to gets is more than 1% or 2%, consider  
REM  creating more rollback segments  
REM   
REM  Another way to gauge rollback contention is:  
REM   
  
column xn1 format 9999999  
column xv1 new_value xxv1 noprint  
 

 
select class, count  
from   v$waitstat  
where  class in ('system undo header', 'system undo block', 
                 'undo header',        'undo block'          )  
/  

set head off

select 'Total requests = '||sum(count) xn1, sum(count) xv1  
from    v$waitstat  
/  
 
select 'Contention for system undo header = '||  
       (round(count/(&xxv1+0.00000000001),4)) * 100||'%'  
from  v$waitstat  
where   class = 'system undo header'  
/  
 
select 'Contention for system undo block = '||  
       (round(count/(&xxv1+0.00000000001),4)) * 100||'%'  
from    v$waitstat  
where   class = 'system undo block'  
/  
 
select 'Contention for undo header = '||  
       (round(count/(&xxv1+0.00000000001),4)) * 100||'%'  
from    v$waitstat  
where   class = 'undo header'  
/  
 
select 'Contention for undo block = '||  
       (round(count/(&xxv1+0.00000000001),4)) * 100||'%'  
from    v$waitstat  
where   class = 'undo block'  
/  

REM   
REM  NOTE: Not as useful with AUM configured 
REM 
REM  If the percentage for an area is more than 1% or 2%, consider  
REM  creating more rollback segments.  Note:  This value is usually very  
REM  small 
REM  and has been rounded to 4 places.  
REM   
REM ------------------------------------------------------------------------  
  
REM   
REM  The following shows how often user processes had to wait for space in  
REM  the redo log buffer:  
  
select name||' = '||value  
from   v$sysstat  
where  name = 'redo log space requests'  
/  
 
REM   
REM  This value should be near 0.  If this value increments consistently,  
REM  processes have had to wait for space in the redo buffer.  If this  
REM  condition exists over time, increase the size of LOG_BUFFER in the  
REM  init.ora file in increments of 5% until the value nears 0.  
REM  ** NOTE: increasing the LOG_BUFFER value will increase total SGA size.  
REM   
REM  -----------------------------------------------------------------------  
  
  
col name format a15  
col gets format 9999999  
col misses format 9999999  
col immediate_gets heading 'IMMED GETS' format 9999999  
col immediate_misses heading 'IMMED MISS' format 9999999  
col sleeps format 999999  
 
PROMPT  LATCH CONTENTION:  
REM   
REM  GETS - # of successful willing-to-wait requests for a latch  
REM  MISSES - # of times an initial willing-to-wait request was unsuccessful  
REM  IMMEDIATE_GETS - # of successful immediate requests for each latch  
REM  IMMEDIATE_MISSES = # of unsuccessful immediate requests for each latch  
REM  SLEEPS - # of times a process waited and requests a latch after an  
REM           initial willing-to-wait request  
REM   
REM  If the latch requested with a willing-to-wait request is not  
REM  available, the requesting process waits a short time and requests  
REM  again.  
REM  If the latch requested with an immediate request is not available,  
REM  the requesting process does not wait, but continues processing  
REM   

set head on  
select name,          gets,              misses,  
       immediate_gets,  immediate_misses,  sleeps  
from   v$latch  
where  name in ('redo allocation',  'redo copy')  
/  

set head off 

select 'Ratio of MISSES to GETS: '||  
        round((sum(misses)/(sum(gets)+0.00000000001) * 100),2)||'%'  
from    v$latch  
where   name in ('redo allocation',  'redo copy')  
/  
 

select 'Ratio of IMMEDIATE_MISSES to IMMEDIATE_GETS: '||  
        round((sum(immediate_misses)/  
       (sum(immediate_misses+immediate_gets)+0.00000000001) * 100),2)||'%' 
from    v$latch  
where   name in ('redo allocation',  'redo copy')  
/  
 
set head on
REM   
REM  If either ratio exceeds 1%, performance will be affected.  
REM   
REM  Decreasing the size of LOG_SMALL_ENTRY_MAX_SIZE reduces the number of  
REM  processes copying information on the redo allocation latch.  
REM   
REM  Increasing the size of LOG_SIMULTANEOUS_COPIES will reduce contention  
REM  for redo copy latches.  
  
REM   
REM  -----------------------------------------------------------------  
REM  This looks at overall i/o activity against individual  
REM  files within a tablespace  
REM   
REM  Look for a mismatch across disk drives in terms of I/O  
REM   
REM  Also, examine the Blocks per Read Ratio for heavily accessed  
REM  TSs - if this value is significantly above 1 then you may have  
REM  full tablescans occurring (with multi-block I/O)  
REM   
REM  If activity on the files is unbalanced, move files around to balance  
REM  the load.  Should see an approximately even set of numbers across files  
REM   
  
set space 1  

PROMPT  REPORTING I/O STATISTICS:
 
column pbr       format 99999999  heading 'Physical|Blk Read'  
column pbw       format 999999    heading 'Physical|Blks Wrtn'  
column pyr       format 999999    heading 'Physical|Reads'  
column readtim   format 99999999  heading 'Read|Time'  
column name      format a55       heading 'DataFile Name'  
column writetim  format 99999999  heading 'Write|Time'  
 
compute sum of f.phyblkrd, f.phyblkwrt on report  
 
select fs.name name,  f.phyblkrd pbr,  f.phyblkwrt pbw, 
       f.readtim,     f.writetim  
from   v$filestat f, v$datafile fs  
where  f.file#  =  fs.file#  
order  by fs.name  
/  
 
REM   
REM  -----------------------------------------------------------------  
  
PROMPT  GENERATING WAIT STATISTICS:  
REM   
REM  This will show wait stats for certain kernel instances.  This  
REM  may show the need for additional rbs, wait lists, db_buffers  
REM   
 
column class  heading 'Class Type'  
column count  heading 'Times Waited'  format 99,999,999 
column time   heading 'Total Times'   format 99,999,999  
 
select class,  count,  time  
from   v$waitstat  
where  count > 0  
order  by class  
/  
 
REM   
REM  Look at the wait statistics generated above (if any). They will  
REM  tell you where there is contention in the system.  There will  
REM  usually be some contention in any system - but if the ratio of  
REM  waits for a particular operation starts to rise, you may need to  
REM  add additional resource, such as more database buffers, log buffers,  
REM  or rollback segments  
REM   
REM  -----------------------------------------------------------------  
  
PROMPT  ROLLBACK EXTENT STATISTICS:  
REM   


 
column usn        format 999          heading 'Undo #'
column extents    format 999          heading 'Extents'  
column rssize     format 999,999,999  heading 'Size in|Bytes'  
column optsize    format 999,999,999  heading 'Optimal|Size'  
column hwmsize    format 99,999,999   heading 'High Water|Mark'  
column shrinks    format 9,999        heading 'Num of|Shrinks'  
column wraps      format 9,999        heading 'Num of|Wraps'  
column extends    format 999,999      heading 'Num of|Extends'  
column aveactive  format 999,999,999  heading 'Average size|Active Extents'  
column rownum noprint  
 
select usn, extents, rssize,    optsize,  hwmsize,  
       shrinks,   wraps,    extends,  aveactive  
from   v$rollstat  
order  by rownum  
/  



set termout on
set trimout off
set trimspool off

set lines 120
set pages 999

set termout off
set trimout on
set trimspool on



prompt
prompt  ############## RUNTIME ############## 
prompt

col rdate head "Run Time"

select sysdate rdate from dual;

prompt 
prompt  ############## HISTORICAL DATA ############## 
prompt 

col x format 999,999 head "Max Concurrent|Last 7 Days"
col y format 999,999 head "Max Concurrent|Since Startup"

select max(maxconcurrency) x from v$undostat
/
select max(maxconcurrency) y from sys.wrh$_undostat
/

col i format 999,999 head "1555 Errors"
col j format 999,999 head "Undo Space Errors"

select sum(ssolderrcnt) i from v$undostat
where end_time > sysdate-2
/

select sum(nospaceerrcnt) j from v$undostat
where end_time > sysdate-2
/

prompt 
prompt  ############## CURRENT STATUS OF SEGMENTS  ############## 
prompt  ##############   SNAPSHOT IN TIME INFO     ##############
prompt  ##############(SHOWS CURRENT UNDO ACTIVITY)##############
prompt 

col segment_name format a30 head "Segment Name"
col "ACT BYTES" format 999,999,999,999 head "Active Bytes"
col "UNEXP BYTES" format 999,999,999,999 head "Unexpired Bytes"
col "EXP BYTES" format 999,999,999,999 head "Expired Bytes"

select segment_name, nvl(sum(act),0) "ACT BYTES", 
  nvl(sum(unexp),0) "UNEXP BYTES",
  nvl(sum(exp),0) "EXP BYTES"
from (select segment_name, nvl(sum(bytes),0) act,00 unexp, 00 exp
   from dba_undo_extents where status='ACTIVE' group by segment_name
union 
select segment_name, 00 act, nvl(sum(bytes),0) unexp, 00 exp
from dba_undo_extents where status='UNEXPIRED' group by segment_name
union
select segment_name, 00 act, 00 unexp, nvl(sum(bytes),0) exp
from dba_undo_extents where status='EXPIRED' group by segment_name)
group by segment_name
order by 1
/

prompt 
prompt  ############## UNDO SPACE USAGE ############## 
prompt 

col usn format 999,999 head "Segment#"
col shrinks format 999,999,999 head "Shrinks"
col aveshrink format 999,999,999 head "Avg Shrink Size"

select usn, shrinks, aveshrink from v$rollstat
/
set termout on
set trimout off
set trimspool off
set pages 999

set termout off
set trimout on
set trimspool on


prompt
prompt  ############## RUNTIME ############## 
prompt

col rdate head "Run Time"

select sysdate rdate from dual;

col inst_id format 999 head "Instance #"
col Parameter format a35 wrap
col "Session Value" format a25 wrapped
col "Instance Value" format a25 wrapped

prompt
prompt  ############## PARAMETERS ############## 
prompt

select  a.inst_id, a.ksppinm  "Parameter",
             b.ksppstvl "Session Value",
             c.ksppstvl "Instance Value"
      from x$ksppi a, x$ksppcv b, x$ksppsv c
     where a.indx = b.indx and a.indx = c.indx
       and a.inst_id=b.inst_id and b.inst_id=c.inst_id
       and a.ksppinm in ('_undo_autotune', '_smu_debug_mode',
                         '_highthreshold_undoretention',
                'undo_tablespace','undo_retention','undo_management')
order by 2;

set termout on
set trimout off
set trimspool off
set pages 999

set termout off
set trimout on
set trimspool on

prompt
prompt  ############## RUNTIME ############## 
prompt

col rdate head "Run Time"

select sysdate rdate from dual;

prompt 
prompt  ############## WAITS FOR UNDO (Since Startup) ############## 
prompt 

col inst_id head "Instance#"
col eq_type format a3 head "Enq"
col total_req# format 999,999,999,999,999,999 head "Total Requests"
col total_wait# format 999,999 head "Total Waits"
col succ_req# format 999,999,999,999,999,999 head "Successes"
col failed_req# format 999,999,999,999 head "Failures"
col cum_wait_time format 999,999,999 head "Cumulative|Time"

select * from v$enqueue_stat where eq_type='US'
union
select * from v$enqueue_stat where eq_type='HW'
/

prompt 
prompt  ############## LOCKS FOR UNDO ############## 
prompt 

col addr head "ADDR"
col KADDR head "KADDR"
col sid head "Session"
col osuser format a10 head "OS User"
col machine format a15 head "Machine"
col program format a17 head "Program"
col process format a7 head "Process"
col lmode head "Lmode"
col request head "Request"
col ctime format 9,999 head "Time|(Mins)"
col block head "Blocking?"

select /*+ RULE */  a.SID, b.process,
b.OSUSER,  b.MACHINE,  b.PROGRAM, 
addr, kaddr, lmode, request, round(ctime/60/60,0) ctime, block 
from 
v$lock a, 
v$session b 
where 
a.sid=b.sid
and a.type='US'
/

prompt 
prompt  ############## TUNED RETENTION HISTORY (Last 2 Days) ############## 
prompt  ##############        LOWEST AND HIGHEST DATA        ############## 
prompt 

col low format 999,999,999,999 head "Undo Retention|Lowest Tuned Value"
col high format 999,999,999,999 head "Undo Retention|Highest Tuned Value"

select end_time, tuned_undoretention from v$undostat where tuned_undoretention = (
select min(tuned_undoretention) low
from v$undostat
where end_time > sysdate-2)
/

select end_time, tuned_undoretention from v$undostat where tuned_undoretention = (
select max(tuned_undoretention) high
from v$undostat
where end_time > sysdate-2)
/

prompt 
prompt  ############## CURRENT TRANSACTIONS ############## 
prompt 

col sql_text format a40 word_wrapped head "SQL Code"

select a.start_date, a.start_scn, a.status, c.sql_text
from v$transaction a, v$session b, v$sqlarea c
where b.saddr=a.ses_addr and c.address=b.sql_address
and b.sql_hash_value=c.hash_value
/

select current_scn from v$database
/

col a format 999,999 head "UnexStolen"
col b format 999,999 head "ExStolen"
col c format 999,999 head "UnexReuse"
col d format 999,999 head "ExReuse"

prompt 
prompt  ############## WHO'S STEALING WHAT? (Last 2 Days) ############## 
prompt 

select unxpstealcnt a, expstealcnt b,
  unxpblkreucnt c, expblkreucnt d
from v$undostat
where (unxpstealcnt > 0 or expstealcnt > 0)
and end_time > sysdate-2
/

set termout on
set trimout off
set trimspool off
set pages 999

set termout off
set trimout on
set trimspool on

prompt
prompt  ############## RUNTIME ############## 
prompt

col rdate head "Run Time"

select sysdate rdate from dual;

col current_scn head "SCN Now"
col start_date head "Trans Started"
col start_scn head "SCN for Trans"
col ses_addr head "ADDR"

prompt 
prompt  ############## Historical V$UNDOSTAT (Last 2 Days) ############## 
prompt 


col end_time format a18 Head "Date/Time"
col maxq format 999,999 head "Query|Maximum|Minutes"
col maxquerysqlid head "SqlID"
col undotsn format 999,999 head "TBS"
col undoblks format 999,999,999 head "Undo|Blocks"
col txncount format 999,999,999 head "# of|Trans"
col unexpiredblks format 999,999,999 head "# of Unexpired"
col expiredblks format 999,999,999 head "# of Expired"
col tuned format 999,999 head "Tuned Retention|(Minutes)"

select end_time, round(maxquerylen/60,0) maxq, maxquerysqlid,
undotsn, undoblks, txncount, unexpiredblks, expiredblks, 
round(tuned_undoretention/60,0) Tuned
from dba_hist_undostat
where end_time > sysdate-2
order by 1
/

prompt 
prompt  ############## RECENT MISSES FOR UNDO (Last 2 Days) ############## 
prompt 

set lines 500
select * from v$undostat where maxquerylen > tuned_undoretention
and end_time > sysdate-2
order by 2
/

select * from sys.wrh$_undostat where maxquerylen > tuned_undoretention
and end_time > sysdate-2
order by 2
/

prompt 
prompt  ############## AUTO-TUNING TUNE-DOWN DATA    ############## 
prompt  ############## ROLLBACK DATA (Since Startup) ############## 
prompt 

col name format a60 head "Name"
col value format 999,999,999 head "Counters"

select name, value from v$sysstat
where name like '%down retention%' or name like '%tune down%'
or name like '%undo segment%' or name like '%rollback%'
or name like '%undo record%'
/

prompt 
prompt  ############## Long Running Query History ############## 
prompt 

col end_time head "Date"
col maxquerysqlid head "SQL ID"
col runawayquerysqlid format a15 head "Runaway SQL ID"
col results format a35 word_wrapped head "Space Issues"
col status head "Status"
col newret head "Tuned Down|Retention"

select end_time, maxquerysqlid, runawayquerysqlid, status,
        decode(status,1,'Slot Active',4,'Reached Best Retention',5,'Reached Best Retention',
                    8, 'Runaway Query',9,'Runaway Query-Active',10,'Space Pressure',
                   11,'Space Pressure Currently',
                   16, 'Tuned Down (to undo_retention) due to Space Pressure', 
                   17,'Tuned Down (to undo_retention) due to Space Pressure-Active',
                   18, 'Tuning Down due to Runaway', 19, 'Tuning Down due to Runaway-Active',
                   28, 'Runaway tuned down to last tune down value',
                   29, 'Runaway tuned down to last tune down value',
                   32, 'Max Tuned Down - Not Auto-Tuning',
                   33, 'Max Tuned Down - Not Auto-Tuning (Active)',
                   37, 'Max Tuned Down - Not Auto-Tuning (Active)', 
                   38, 'Max Tuned Down - Not Auto-Tuning', 
                   39, 'Max Tuned Down - Not Auto-Tuning (Active)', 
                   40, 'Max Tuned Down - Not Auto-Tuning', 
                   41, 'Max Tuned Down - Not Auto-Tuning (Active)', 
                   42, 'Max Tuned Down - Not Auto-Tuning', 
                   44, 'Max Tuned Down - Not Auto-Tuning', 
                   45, 'Max Tuned Down - Not Auto-Tuning (Active)', 
                   'Other ('||status||')') Results, spcprs_retention NewRet
from sys.wrh$_undostat
where status > 1
/



prompt 
prompt  ############## Details on Long Run Queries ############## 
prompt 

col sql_fulltext head "SQL Text"
Col sql_id heading "SQL ID"

select sql_id, sql_fulltext, last_load_time "Last Load", 
round(elapsed_time/60/60/24,0) "Elapsed Days" 
from v$sql where sql_id in 
(select maxquerysqlid from sys.wrh$_undostat 
where status > 1)
/

set termout on
set trimout off
set trimspool off
set pages 999

set termout off
set trimout on
set trimspool on

prompt
prompt  ############## RUNTIME ############## 
prompt

col rdate head "Run Time"

select sysdate rdate from dual;

prompt 
prompt  ############## IN USE Undo Data ############## 
prompt 

select 
((select (nvl(sum(bytes),0)) 
from dba_undo_extents 
where tablespace_name in (select tablespace_name from dba_tablespaces
   where retention like '%GUARANTEE' )
and status in ('ACTIVE','UNEXPIRED')) *100) / 
(select sum(bytes) 
from dba_data_files 
where tablespace_name in (select tablespace_name from dba_tablespaces
   where retention like '%GUARANTEE' )) "PCT_INUSE" 
from dual; 


select tablespace_name, extent_management, allocation_type,
segment_space_management, retention
from dba_tablespaces where retention like '%GUARANTEE'
/

col c format 999,999,999,999 head "Sum of Free"

select (nvl(sum(bytes),0)) c from dba_free_space
where tablespace_name in
(select tablespace_name from dba_tablespaces where retention like '%GUARANTEE')
/

col d format 999,999,999,999 head "Total Bytes"

select sum(bytes) d from dba_data_files
where tablespace_name in
(select tablespace_name from dba_tablespaces where retention like '%GUARANTEE')
/


PROMPT
PROMPT  ############## UNDO SEGMENTS ############## 
PROMPT

col status head "Status"
col z format 999,999 head "Total Extents"
break on report
compute sum on report of z

select status, count(*) z from dba_undo_extents
group by status
/

col z format 999,999 head "Undo Segments"

select status, count(*) z from dba_rollback_segs
group by status
/


prompt 
prompt  ############## CURRENT STATUS OF SEGMENTS  ############## 
prompt  ##############   SNAPSHOT IN TIME INFO     ##############
prompt  ##############(SHOWS CURRENT UNDO ACTIVITY)##############
prompt 


col segment_name format a30 head "Segment Name"
col "ACT BYTES" format 999,999,999,999 head "Active Bytes"
col "UNEXP BYTES" format 999,999,999,999 head "Unexpired Bytes"
col "EXP BYTES" format 999,999,999,999 head "Expired Bytes"

select segment_name, nvl(sum(act),0) "ACT BYTES", 
  nvl(sum(unexp),0) "UNEXP BYTES",
  nvl(sum(exp),0) "EXP BYTES"
from (select segment_name, nvl(sum(bytes),0) act,00 unexp, 00 exp
   from dba_undo_extents where status='ACTIVE' group by segment_name
union 
select segment_name, 00 act, nvl(sum(bytes),0) unexp, 00 exp
from dba_undo_extents where status='UNEXPIRED' group by segment_name
union
select segment_name, 00 act, 00 unexp, nvl(sum(bytes),0) exp
from dba_undo_extents where status='EXPIRED' group by segment_name)
group by segment_name
order by 1
/

prompt 
prompt  ############## UNDO SPACE USAGE ############## 
prompt 

col usn format 999,999 head "Segment#"
col shrinks format 999,999,999 head "Shrinks"
col aveshrink format 999,999,999 head "Avg Shrink Size"

select usn, shrinks, aveshrink from v$rollstat
/
set termout on
set trimout off
set trimspool off
spool off

Script: Listing Foreign Keys with No Matching Index on the Child Table

This script lists foreign keys that have no matching index on the child table; the missing index can lead to additional table locks:

"You should almost always index foreign keys. 
The only exception is when the matching unique or primary key is never updated or deleted."

When a foreign key is unindexed, DML on the parent primary key results in a share row exclusive table lock 
(or share-subexclusive table lock, SSX) on the child table, preventing DML from other transactions 
against the child table. If the DML affects several rows in the parent table, the lock on the child table 
is obtained and released immediately for each row in turn. Despite the speed of the lock-release process, 
this can cause significant amounts of contention on the child table during periods of 
heavy update/delete activity on the parent table.

When a foreign key is indexed, DML on the parent primary key results in a row share table lock 
(or subshare table lock, SS) on the child table. This type of lock prevents other transactions 
from issuing whole table locks on the child table, but does not block DML on either the parent or 
the child table. Only the rows relating to the parent primary key are locked in the child table.
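A minimal sketch of the usual remedy, using hypothetical tables for illustration only, is simply to index the foreign key column(s) on the child table:

-- hypothetical parent/child tables
create table dept_demo (deptno number primary key, dname varchar2(30));
create table emp_demo  (empno  number primary key,
                        deptno number references dept_demo(deptno));

-- without this index, DML on dept_demo's primary key takes an SSX lock on emp_demo;
-- with it, only a row share (SS) table lock on the child table is needed
create index emp_demo_deptno_idx on emp_demo(deptno);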

Script:

REM  List foreign keys with no matching index on child table - causes locks

set linesize 150;

col owner for a20;
col COLUMN_NAME for a20;

SELECT c.owner,
         c.constraint_name,
         c.table_name,
         cc.column_name,
         c.status
    FROM dba_constraints c, dba_cons_columns cc
   WHERE c.constraint_type = 'R'
         AND c.owner NOT IN
                ('SYS',
                 'SYSTEM',
                 'SYSMAN',
                 'EXFSYS',
                 'WMSYS',
                 'OLAPSYS',
                 'OUTLN',
                 'DBSNMP',
                 'ORDSYS',
                 'ORDPLUGINS',
                 'MDSYS',
                 'CTXSYS',
                 'AURORA$ORB$UNAUTHENTICATED',
                 'XDB',
                 'FLOWS_030000',
                 'FLOWS_FILES')
         AND c.owner = cc.owner
         AND c.constraint_name = cc.constraint_name
         AND NOT EXISTS
                    (SELECT 'x'
                       FROM dba_ind_columns ic
                      WHERE     cc.owner = ic.table_owner
                            AND cc.table_name = ic.table_name
                            AND cc.column_name = ic.column_name
                            AND cc.position = ic.column_position
                            AND NOT EXISTS
                                       (SELECT owner, index_name
                                          FROM dba_indexes i
                                         WHERE     i.table_owner = c.owner
                                               AND i.index_Name = ic.index_name
                                               AND i.owner = ic.index_owner
                                               AND (i.status = 'UNUSABLE'
                                                    OR i.partitioned = 'YES'
                                                       AND EXISTS
                                                              (SELECT 'x'
                                                                 FROM dba_ind_partitions ip
                                                                WHERE status =
                                                                         'UNUSABLE'
                                                                      AND ip.
                                                                           index_owner =
                                                                             i.
                                                                              owner
                                                                      AND ip.
                                                                           index_Name =
                                                                             i.
                                                                              index_name
                                                               UNION ALL
                                                               SELECT 'x'
                                                                 FROM dba_ind_subpartitions isp
                                                                WHERE status =
                                                                         'UNUSABLE'
                                                                      AND isp.
                                                                           index_owner =
                                                                             i.
                                                                              owner
                                                                      AND isp.
                                                                           index_Name =
                                                                             i.
                                                                              index_name))))
ORDER BY 1, 2
/

New Compression Features in 11g (1)

11g introduces a large number of compression-related features, one of which is the dbms_compression package. Its GET_COMPRESSION_RATIO function helps us understand the likely effect of compressing a given table; in other words, it gives us a basic picture of the result before we actually implement or test table compression. The package was first introduced in 11gR2, so earlier releases, 11gR1 included, cannot use it. In addition, apart from the OLTP compression mode, the hybrid columnar compression modes can only be used on tablespaces residing on Exadata storage. The compression figures obtained through DBMS_COMPRESSION are quite accurate, because during the evaluation Oracle actually samples the data and builds a model table to reproduce it as faithfully as possible. By tracing the evaluation we can see what it does; it breaks down into two steps:

1. Create a sample table from the original table, with a sample percentage based on the size of the original table:

SQL> create table samp_dss_nation tablespace SCRATCH as select * from dss_nation sample block (50);

Table created.

2. Create a model table with the requested compression type, based on the sample table:

SQL> create table model_dss_nation tablespace SCRATCH compress for query high as select * from samp_dss_nation;
create table model_dss_nation tablespace SCRATCH compress for query high as select * from samp_dss_nation
*
ERROR at line 1:
ORA-64307: hybrid columnar compression is only supported in tablespaces
residing on Exadata storage

As you can see, during the actual CREATE Oracle refuses to build any of the hybrid columnar compression types (COMP_FOR_QUERY_HIGH, COMP_FOR_QUERY_LOW, COMP_FOR_ARCHIVE_HIGH, COMP_FOR_ARCHIVE_LOW) in a tablespace that does not reside on Exadata storage. DBMS_COMPRESSION, however, is able to bypass this restriction while performing its evaluation.

To use the dbms_compression package to evaluate the hybrid columnar compression modes (anything beyond OLTP compression) without an Exadata storage server, i.e. to get past "hybrid columnar compression is only supported in tablespaces residing on Exadata storage", you first need to apply patch 8896202:

[oracle@rh2 admin]$ /s01/dbhome_1/OPatch/opatch lsinventory
Invoking OPatch 11.1.0.6.6

Oracle Interim Patch Installer version 11.1.0.6.6
Copyright (c) 2009, Oracle Corporation.  All rights reserved.

Oracle Home       : /s01/dbhome_1
Central Inventory : /s01/oraInventory
from           : /etc/oraInst.loc
OPatch version    : 11.1.0.6.6
OUI version       : 11.2.0.1.0
OUI location      : /s01/dbhome_1/oui
Log file location : /s01/dbhome_1/cfgtoollogs/opatch/opatch2010-06-02_23-08-33PM.log

Patch history file: /s01/dbhome_1/cfgtoollogs/opatch/opatch_history.txt

Lsinventory Output file location : /s01/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2010-06-02_23-08-33PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Database 11g                                                  11.2.0.1.0
There are 1 products installed in this Oracle Home.

Interim patches (1) :

Patch  8896202      : applied on Wed Jun 02 21:55:44 CST 2010
Unique Patch ID:  11909460
Created on 29 Oct 2009, 15:21:45 hrs US/Pacific
Bugs fixed:
8896202

The purpose of this patch is: ENABLE COMPRESSION ADVISOR TO ESTIMATE EXADATA HCC COMPRESSION RATIOS.

Next we need to run the modified scripts that create the DBMS_COMPRESSION package:

SQL> @?/rdbms/admin/prvtcmpr.plb

Package created.

Grant succeeded.

Package body created.

No errors.

Package body created.

No errors.

Type body created.

No errors.
SQL> @?/rdbms/admin/dbmscomp.sql

Package created.

Synonym created.

Grant succeeded.

No errors.

When DBMS_COMPRESSION evaluates table compression, by default the table must contain at least 1,000,000 rows; your test database may not have that much data, so we can lower this threshold.

By lowering the COMP_RATIO_MINROWS constant (reduced from 1,000,000 to 10 in the package specification below), much smaller tables can be analyzed:

SQL>create or replace package sys.dbms_compression authid current_user is

  COMP_NOCOMPRESS       CONSTANT NUMBER := 1;
  COMP_FOR_OLTP         CONSTANT NUMBER := 2;
  COMP_FOR_QUERY_HIGH   CONSTANT NUMBER := 4;
  COMP_FOR_QUERY_LOW    CONSTANT NUMBER := 8;
  COMP_FOR_ARCHIVE_HIGH CONSTANT NUMBER := 16;
  COMP_FOR_ARCHIVE_LOW  CONSTANT NUMBER := 32;

  COMP_RATIO_MINROWS CONSTANT NUMBER := 10;
  COMP_RATIO_ALLROWS CONSTANT NUMBER := -1;

  PROCEDURE get_compression_ratio(scratchtbsname IN varchar2,
                                  ownname        IN varchar2,
                                  tabname        IN varchar2,
                                  partname       IN varchar2,
                                  comptype       IN number,
                                  blkcnt_cmp     OUT PLS_INTEGER,
                                  blkcnt_uncmp   OUT PLS_INTEGER,
                                  row_cmp        OUT PLS_INTEGER,
                                  row_uncmp      OUT PLS_INTEGER,
                                  cmp_ratio      OUT NUMBER,
                                  comptype_str   OUT varchar2,
                                  subset_numrows IN number DEFAULT COMP_RATIO_MINROWS);

  function get_compression_type(ownname IN varchar2,
                                tabname IN varchar2,
                                row_id  IN rowid) return number;

  PROCEDURE incremental_compress(ownname         IN dba_objects.owner%type,
                                 tabname         IN dba_objects.object_name%type,
                                 partname        IN dba_objects.subobject_name%type,
                                 colname         IN varchar2,
                                 dump_on         IN number default 0,
                                 autocompress_on IN number default 0,
                                 where_clause    IN varchar2 default '');

end dbms_compression;

Package created.

SQL> alter package dbms_compression compile body;

Package body altered.

Next we build a test schema based on TPC-D, which ensures that the tables hold a reasonable amount of data with a degree of realism:

SQL> select table_name,num_rows,blocks from user_tables ;

TABLE_NAME                       NUM_ROWS     BLOCKS
------------------------------ ---------- ----------
DSS_SUPPLIER                        20000        496
DSS_PART                           400000       7552
DSS_REGION                              5          5
DSS_PARTSUPP                      1600000      29349
DSS_LINEITEM                     12000000     221376
DSS_ORDER                         3000000      48601
DSS_CUSTOMER                       300000       6922
DSS_NATION                             25          5

Now we can run the compression estimate; for this test schema we write the following anonymous block and execute it:

SQL> set serveroutput on;
SQL> declare
  cmp_blk_cnt   binary_integer;
  uncmp_blk_cnt binary_integer;
  cmp_rows      binary_integer;
  uncmp_rows    binary_integer;
  cmp_ratio     number;
  cmp_typ       varchar2(100);
BEGIN
  for i in (SELECT TABLE_NAME
              from dba_tables
             where compression = 'DISABLED'
               and owner = 'MACLEAN' and num_rows>1000000) loop
    for j in 1 .. 5 loop
      dbms_compression.get_compression_ratio(scratchtbsname => 'SCRATCH',
                                             ownname        => 'MACLEAN',
                                             tabname        => i.table_name,
                                             partname       => NULL,
                                             comptype       => power(2, j),
                                             blkcnt_cmp     => cmp_blk_cnt,
                                             blkcnt_uncmp   => uncmp_blk_cnt,
                                             row_cmp        => cmp_rows,
                                             row_uncmp      => uncmp_rows,
                                             cmp_ratio      => cmp_ratio,
                                             comptype_str   => cmp_typ);
      dbms_output.put_line(i.table_name || '--' || 'compress_type is ' ||
                           cmp_typ || ' ratio :' ||
                           to_char(cmp_ratio, '99.9') || '%');

    end loop;
  end loop;
end;
/
DSS_ORDER--compress_type is "Compress For OLTP" ratio :  1.1%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
DSS_ORDER--compress_type is "Compress For Query High" ratio :  2.7%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
DSS_ORDER--compress_type is "Compress For Query Low" ratio :  1.7%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
DSS_ORDER--compress_type is "Compress For Archive High" ratio :  2.9%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
DSS_ORDER--compress_type is "Compress For Archive Low" ratio :  2.7%
DSS_PARTSUPP--compress_type is "Compress For OLTP" ratio :   .9%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
DSS_PARTSUPP--compress_type is "Compress For Query High" ratio :  1.8%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
DSS_PARTSUPP--compress_type is "Compress For Query Low" ratio :  1.2%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
DSS_PARTSUPP--compress_type is "Compress For Archive High" ratio :  1.9%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
DSS_PARTSUPP--compress_type is "Compress For Archive Low" ratio :  1.8%
DSS_LINEITEM--compress_type is "Compress For OLTP" ratio :  1.4%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
DSS_LINEITEM--compress_type is "Compress For Query High" ratio :  3.5%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
DSS_LINEITEM--compress_type is "Compress For Query Low" ratio :  2.3%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
DSS_LINEITEM--compress_type is "Compress For Archive High" ratio :  4.3%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
DSS_LINEITEM--compress_type is "Compress For Archive Low" ratio :  3.7%

PL/SQL procedure successfully completed.

As the test above shows, "Compress For Archive High" achieves the highest compression ratio and is best suited to archival storage, but its algorithm is more complex than "Compress For Archive Low", so compression takes correspondingly longer.
The overall ratios are all fairly low, which has something to do with how the TPC-D test data is modelled; let us run the same estimate against a set of TPC-H test data:

SQL> conn liu/liu;
Connected.

SQL> select num_rows,blocks,table_name from user_tables;

  NUM_ROWS     BLOCKS TABLE_NAME
---------- ---------- ------------------------------
   3000000      46817 H_ORDER
    300000       6040 H_CUSTOMER
  12000000     221376 H_LINEITEM
        25          5 H_NATION
    400000       7552 H_PART
         5          5 H_REGION
   1600000      17491 H_PARTSUPP
     20000        496 H_SUPPLIER

8 rows selected.

SQL> set serveroutput on;
SQL> declare
  cmp_blk_cnt   binary_integer;
  uncmp_blk_cnt binary_integer;
  cmp_rows      binary_integer;
  uncmp_rows    binary_integer;
  cmp_ratio     number;
  cmp_typ       varchar2(100);
BEGIN
  for i in (SELECT TABLE_NAME
              from dba_tables
             where compression = 'DISABLED'
               and owner = 'LIU' and num_rows>1000000) loop
    for j in 1 .. 5 loop
      dbms_compression.get_compression_ratio(scratchtbsname => 'SCRATCH',
                                             ownname        => 'LIU',
                                             tabname        => i.table_name,
                                             partname       => NULL,
                                             comptype       => power(2, j),
                                             blkcnt_cmp     => cmp_blk_cnt,
                                             blkcnt_uncmp   => uncmp_blk_cnt,
                                             row_cmp        => cmp_rows,
                                             row_uncmp      => uncmp_rows,
                                             cmp_ratio      => cmp_ratio,
                                             comptype_str   => cmp_typ);
      dbms_output.put_line(i.table_name || '--' || 'compress_type is ' ||
                           cmp_typ || ' ratio :' ||
                           to_char(cmp_ratio, '99.9') || '%');

    end loop;
  end loop;
end;
/
H_ORDER--compress_type is "Compress For OLTP" ratio :  1.1%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
H_ORDER--compress_type is "Compress For Query High" ratio :  5.2%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
H_ORDER--compress_type is "Compress For Query Low" ratio :  2.9%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
H_ORDER--compress_type is "Compress For Archive High" ratio :  7.2%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
H_ORDER--compress_type is "Compress For Archive Low" ratio :  5.5%
H_PARTSUPP--compress_type is "Compress For OLTP" ratio :   .9%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
H_PARTSUPP--compress_type is "Compress For Query High" ratio :  5.1%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
H_PARTSUPP--compress_type is "Compress For Query Low" ratio :  2.7%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
H_PARTSUPP--compress_type is "Compress For Archive High" ratio :  7.2%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
H_PARTSUPP--compress_type is "Compress For Archive Low" ratio :  5.3%
H_LINEITEM--compress_type is "Compress For OLTP" ratio :  1.4%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
H_LINEITEM--compress_type is "Compress For Query High" ratio :  5.2%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
H_LINEITEM--compress_type is "Compress For Query Low" ratio :  3.0%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
H_LINEITEM--compress_type is "Compress For Archive High" ratio :  7.4%
Compression Advisor self-check validation successful. select count(*) on both
Uncompressed and EHCC Compressed format = 1000001 rows
H_LINEITEM--compress_type is "Compress For Archive Low" ratio :  5.6%

PL/SQL procedure successfully completed.

Compared with the TPC-D test data, the data generated for TPC-H is clearly more compressible.

PS:
TPC-D represents a broad range of decision support (DS) applications that require complex, long running queries against large complex data structures. Real-world business questions were written against this model, resulting in 17 complex queries.
The TPC Benchmark™H (TPC-H) is a decision support benchmark. It consists of a suite of business oriented ad-hoc queries and concurrent data modifications. The queries and the data populating the database have been chosen to have broad industry-wide relevance. This benchmark illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions.
The performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@Size), and reflects multiple aspects of the capability of the system to process queries. These aspects include the selected database size against which the queries are executed, the query processing power when queries are submitted by a single stream, and the query throughput when queries are submitted by multiple concurrent users. The TPC-H Price/Performance metric is expressed as $/QphH@Size.

11gr2 Pseudo-error debugging events

Quite different from 10gR2: many more debugging events have been added.

For example, event 10949:

10949, 00000, "Disable autotune direct path read for full table scan"

and

10978, 00000, "general event for materialized view logs"

You can view the full event list HERE.

The Effect of Direct Path Reads on Delayed Block Cleanout

In Oracle 11g, a serial full table scan may be performed with direct path reads instead of the DB FILE SCATTERED READ that earlier versions always used. Direct path reads clearly have several advantages:

1. They use far fewer latches, avoiding potential latch contention.

2. The size of a physical I/O no longer depends on which blocks happen to be in the buffer cache. Imagine an extent of 8 blocks in which blocks 1, 3, 5 and 7 are cached while blocks 2, 4, 6 and 8 are not: reading that extent the traditional way means four separate db file sequential reads for blocks 2, 4, 6 and 8, a dreadful situation that is often far less efficient than simply reading all 8 blocks of the extent in one pass, even though Oracle tries hard not to cache the blocks of large tables (after reading them it places them at the cold end of the LRU list). Direct path reads avoid this problem entirely and read as many physical blocks as possible in a single I/O.

Of course, direct path reads also bring some drawbacks:

1. Before a segment can be read via direct path reads, a segment-level checkpoint must be performed on that object.

2. They may cause delayed block cleanout to be performed over and over again (we assume you know what delayed block cleanout is).
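A quick way to see whether serial scans on an instance are actually going direct is to look at the relevant system statistics (a minimal sketch; note that 'physical reads direct' also counts direct reads from parallel query and LOB access):

select name, value
  from v$sysstat
 where name in ('table scans (long tables)',
                'table scans (direct read)',
                'physical reads direct');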

MetaLink note [ID 793845.1] describes this change in the new release:

Applies to:

Oracle Server – Enterprise Edition – Version: 11.1.0.6 to 11.1.0.7
This problem can occur on any platform.

Symptoms

After migrating an 11g database from a standalone server to a 4-node RAC, a noticeable
increase in 'direct path read' waits was observed at times.
Here are the Cache sizes and Top 5 wait events.

Cache Sizes                       Begin        End
~~~~~~~~~~~                  ---------- ----------
               Buffer Cache:     3,232M     3,616M  Std Block Size:         8K
           Shared Pool Size:     6,736M     6,400M      Log Buffer:     8,824K
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                           Avg
                                                          wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
DB CPU                                           13,916          42.1
direct path read                  1,637,344      13,359      8   40.4 User I/O
db file sequential read              47,132       1,111     24    3.4 User I/O
DFS lock handle                     301,278       1,028      3    3.1 Other
db file parallel read                14,724         554     38    1.7 User I/O

Changes

Migrated from a standalone database to a 4-node RAC.
Moved from Unix file system storage to ASM.

Using Automatic Shared Memory Management (ASMM).
The setting of db_cache_size in spfile/pfile is low compared to normal workload requirements.

Cause

There have been changes in 11g in the heuristics to choose between direct path reads or reads through buffer cache for serial table scans.
In 10g, serial table scans for “large” tables used to go through cache (by default) which is not the case anymore.  In 11g, this decision to read via direct path or through cache is based on the size of the table, buffer cache size and various other stats.
Direct path reads are faster than scattered reads and have less impact on other processes because they avoid latches.

Solution

When using Automatic Shared Memory Management (ASMM) and with buffer cache low limit set at a low end compared to the normal workload requirements and usually after startup, 11g might choose to do serial direct path read scans for large tables that do not fit in the SGA. When ASMM increases the buffer cache due to increased demand, 11g might not again do serial direct path read scans for these same large tables.  If you like to avoid this from happening, you should note the buffer cache and share pool requirements for a normal workload and set the low limits of buffer cache and shared pool in spfile/pfile close to these normal workload values.
db_cache_size
shared_pool_size
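For example (placeholder sizes, assuming an spfile is in use; adjust them to your own normal workload), the low limits could be pinned like this:

alter system set db_cache_size = 4G scope=both;
alter system set shared_pool_size = 6G scope=both;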

Below we test the effect that direct path reads have on delayed block cleanout:

SQL> create table tv as select rownum rn,rpad('A',600,'Z') rp from dual
2       connect by level <=300000;

Table created.

Open a new session A:

SQL> set linesize 200 pagesize 1400;
SQL> select count(*) from tv;

COUNT(*)
----------
300000

SQL> select vm.sid, vs.name, vm.value
2    from v$mystat vm, v$sysstat vs
3   where vm.statistic# = vs.statistic#
4     and vs.name in ('cleanouts only - consistent read gets',
5                     'session logical reads',
6                     'physical reads',
7                     'physical reads direct');

SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
25 session logical reads                                          27281
25 physical reads                                                 27273
25 physical reads direct                                          27273         
25 cleanouts only - consistent read gets                            0

-- the query clearly used direct path reads

SQL> update tv set rn=rn+1;                        -- attempt a bulk update

SQL> alter system flush buffer_cache;
-- flush the buffer cache to set up a delayed block cleanout scenario, then commit

System altered.

SQL> commit;

Commit complete.

Open a new session B:

SQL> set linesize 200 pagesize 1400;
SQL> select count(*) from tv;

COUNT(*)
----------
300000

SQL> select vm.sid, vs.name, vm.value
2    from v$mystat vm, v$sysstat vs
3   where vm.statistic# = vs.statistic#
4     and vs.name in ('cleanouts only - consistent read gets',
5                     'session logical reads',
6                     'physical reads',
7                     'physical reads direct','redo size');

SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
25 session logical reads                                                 54554
25 physical reads                                                        27273
25 physical reads direct                                                 27273
25 redo size                                                                 0
25 cleanouts only - consistent read gets                           27273      
-- the direct path read query performed delayed block cleanout but generated no redo

SQL> select count(*) from tv;

COUNT(*)
----------
300000

SQL> select vm.sid, vs.name, vm.value
2    from v$mystat vm, v$sysstat vs
3   where vm.statistic# = vs.statistic#
4     and vs.name in ('cleanouts only - consistent read gets',
5                     'session logical reads',
6                     'physical reads',
7                     'physical reads direct','redo size');

SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
25 session logical reads                                                109104
25 physical reads                                                        54546
25 physical reads direct                                                 54546
25 redo size                                                                 0
25 cleanouts only - consistent read gets                                 54546

The second query again used direct path reads and performed the same number of delayed block cleanouts, still without generating any redo. Evidently the cleanout done under direct path read applies only to the block images read from disk into PGA memory; the actual blocks are not modified in any way, hence no redo either.

Next we use an ordinary serial full table scan; setting event 10949 prevents the direct path read mode. More about this event can be found here.

SQL> ALTER SESSION SET EVENTS '10949 TRACE NAME CONTEXT FOREVER';

Session altered.

SQL> select count(*) from tv;

COUNT(*)
----------
300000

SQL> select vm.sid, vs.name, vm.value
2    from v$mystat vm, v$sysstat vs
3   where vm.statistic# = vs.statistic#
4     and vs.name in ('cleanouts only - consistent read gets',
5                     'session logical reads',
6                     'physical reads',
7                     'physical reads direct','redo size');

SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
25 session logical reads                                                163662
25 physical reads                                                        81819
25 physical reads direct                                                 54546
25 redo size                                                           1966560
25 cleanouts only - consistent read gets                                 81819

SQL> select count(*) from tv;

COUNT(*)
----------
300000

SQL> select vm.sid, vs.name, vm.value
2    from v$mystat vm, v$sysstat vs
3   where vm.statistic# = vs.statistic#
4     and vs.name in ('cleanouts only - consistent read gets',
5                     'session logical reads',
6                     'physical reads',
7                     'physical reads direct','redo size');

SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
25 session logical reads                                                190947
25 physical reads                                                        95673
25 physical reads direct                                                 54546
25 redo size                                                           1966560
25 cleanouts only - consistent read gets                                 81819

The first ordinary full table scan performed the same amount of delayed block cleanout as the direct path reads did, and this time generated a large amount of redo; this mode falls back to the classic delayed block cleanout scenario. The subsequent read no longer needs to clean out blocks or generate redo: we are now reading a "clean" table segment.

From the tests above we can see how the direct path read mode, used much more widely in 11g, can affect segments that still require delayed block cleanout: since no block is actually modified, the cleanout has to be repeated on every such read, yet no dirty blocks are produced, so there is nothing to write back and no redo is generated. The extra cost shows up mostly as CPU time.
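
As a side note, if the pre-11g behaviour is wanted instance-wide rather than per session, the same undocumented event can be placed in the spfile; this is only a sketch, and such events should not be set in production without a concrete reason (ideally a Support recommendation):

alter system set event='10949 trace name context forever' scope=spfile;   -- takes effect after a restart
-- per-session alternative, as used in the test above:
ALTER SESSION SET EVENTS '10949 TRACE NAME CONTEXT FOREVER';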

How to make BBED (Oracle Block Browser and EDitor Tool) on Unix/Linux/Windows

BBED (Oracle Block Browser and EDitor Tool) is a tool for viewing and modifying the contents of data files directly. It is an Oracle-internal tool that can edit Oracle data file blocks; put simply, it is a binary editor aimed at Oracle. The tool is not supported by Oracle, so no executable is shipped by default and it has to be relinked before use.

In 10g, building the tool is fairly simple:

[maclean@rh2 ~]$ cd $ORACLE_HOME/rdbms/lib

[maclean@rh2 lib]$ make -f ins_rdbms.mk $ORACLE_HOME/rdbms/lib/bbed
make: `/s01/10gdb/rdbms/lib/bbed' is up to date.

[maclean@rh2 lib]$ rm bbed

[maclean@rh2 lib]$ make -f ins_rdbms.mk $ORACLE_HOME/rdbms/lib/bbed

Linking BBED utility (bbed)
rm -f /s01/10gdb/rdbms/lib/bbed
gcc -o /s01/10gdb/rdbms/lib/bbed -L/s01/10gdb/rdbms/lib/ -L/s01/10gdb/lib/ -L/s01/10gdb/lib/stubs/  /s01/10gdb/lib/s0main.o /s01/10gdb/rdbms/lib/ssbbded.o /s01/10gdb/rdbms/lib/sbbdpt.o `cat /s01/10gdb/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 /s01/10gdb/rdbms/lib/defopt.o -ldbtools10 -lclntsh  `cat /s01/10gdb/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lnro10 `cat /s01/10gdb/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lclient10 -lnnetd10  -lvsn10 -lcommon10 -lgeneric10 -lmm -lsnls10 -lnls10  -lcore10 -lsnls10 -lnls10 -lcore10 -lsnls10 -lnls10 -lxml10 -lcore10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 `cat /s01/10gdb/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lnro10 `cat /s01/10gdb/lib/ldflags`    -lnsslb10 -lncrypt10 -lnsgr10 -lnzjs10 -ln10 -lnnz10 -lnl10 -lclient10 -lnnetd10  -lvsn10 -lcommon10 -lgeneric10   -lsnls10 -lnls10  -lcore10 -lsnls10 -lnls10 -lcore10 -lsnls10 -lnls10 -lxml10 -lcore10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10 -lclient10 -lnnetd10  -lvsn10 -lcommon10 -lgeneric10 -lsnls10 -lnls10  -lcore10 -lsnls10 -lnls10 -lcore10 -lsnls10 -lnls10 -lxml10 -lcore10 -lunls10 -lsnls10 -lnls10 -lcore10 -lnls10   `cat /s01/10gdb/lib/sysliblist` -Wl,-rpath,/s01/10gdb/lib -lm    `cat /s01/10gdb/lib/sysliblist` -ldl -lm   -L/s01/10gdb/lib

[maclean@rh2 lib]$ cp bbed $ORACLE_HOME/bin

[maclean@rh2 lib]$ bbed
Password:

BBED: Release 2.0.0.0.0 - Limited Production on Thu Sep 2 14:18:27 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

BBED>

/* You may well ask what the password is? Heh .. :) */

In 11.2.0.1 the object files ssbbded.o and sbbdpt.o needed to link the bbed executable have been removed; fortunately we can take these two object files from a 10g home and link bbed in 11.2.0.1 with them.

[maclean@rh2 lib]$ make -f ins_rdbms.mk $ORACLE_HOME/rdbms/lib/bbed

Linking BBED utility (bbed)
rm -f /s01/11gdb/rdbms/lib/bbed
gcc -o /s01/11gdb/rdbms/lib/bbed -m64 -L/s01/11gdb/rdbms/lib/ -L/s01/11gdb/lib/ -L/s01/11gdb/lib/stubs/  /s01/11gdb/lib/s0main.o /s01/11gdb/rdbms/lib/ssbbded.o /s01/11gdb/rdbms/lib/sbbdpt.o `cat /s01/11gdb/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -ldbtools11 -lclntsh  `cat /s01/11gdb/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /s01/11gdb/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11 -lztkg11 -lztkg11 -lclient11 -lnnetd11  -lvsn11 -lcommon11 -lgeneric11 -lmm -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 `cat /s01/11gdb/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /s01/11gdb/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lclient11 -lnnetd11  -lvsn11 -lcommon11 -lgeneric11   -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lclient11 -lnnetd11  -lvsn11 -lcommon11 -lgeneric11 -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11   `cat /s01/11gdb/lib/sysliblist` -Wl,-rpath,/s01/11gdb/lib -lm    `cat /s01/11gdb/lib/sysliblist` -ldl -lm   -L/s01/11gdb/lib
gcc: /s01/11gdb/rdbms/lib/ssbbded.o: No such file or directory
gcc: /s01/11gdb/rdbms/lib/sbbdpt.o: No such file or directory

[maclean@rh2 ~]$ cp /s01/10gdb/rdbms/lib/ssbbded.o /s01/11gdb/rdbms/lib

[maclean@rh2 ~]$ cp /s01/10gdb/rdbms/lib/sbbdpt.o  /s01/11gdb/rdbms/lib

[maclean@rh2 ~]$ cp /s01/10gdb/rdbms/mesg/bbedus.ms* /s01/11gdb/rdbms/mesg/

/* bbed needs the two message files bbedus.msg and bbedus.msb */

[maclean@rh2 lib]$ make -f ins_rdbms.mk $ORACLE_HOME/rdbms/lib/bbed

Linking BBED utility (bbed)
rm -f /s01/11gdb/rdbms/lib/bbed
gcc -o /s01/11gdb/rdbms/lib/bbed -m64 -L/s01/11gdb/rdbms/lib/ -L/s01/11gdb/lib/ -L/s01/11gdb/lib/stubs/  /s01/11gdb/lib/s0main.o /s01/11gdb/rdbms/lib/ssbbded.o /s01/11gdb/rdbms/lib/sbbdpt.o `cat /s01/11gdb/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -ldbtools11 -lclntsh  `cat /s01/11gdb/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /s01/11gdb/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11 -lztkg11 -lztkg11 -lclient11 -lnnetd11  -lvsn11 -lcommon11 -lgeneric11 -lmm -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 `cat /s01/11gdb/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /s01/11gdb/lib/ldflags`    -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lclient11 -lnnetd11  -lvsn11 -lcommon11 -lgeneric11   -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lclient11 -lnnetd11  -lvsn11 -lcommon11 -lgeneric11 -lsnls11 -lnls11  -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11   `cat /s01/11gdb/lib/sysliblist` -Wl,-rpath,/s01/11gdb/lib -lm    `cat /s01/11gdb/lib/sysliblist` -ldl -lm   -L/s01/11gdb/lib

[maclean@rh2 lib]$ file bbed
bbed: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9, not stripped

[maclean@rh2 lib]$ size bbed
   text    data     bss     dec     hex filename
 154473   43448      32  197953   30541 bbed

[maclean@rh2 lib]$ ldd bbed
        libclntsh.so.11.1 => /s01/11gdb/lib/libclntsh.so.11.1 (0x00002b042b883000)
        libnnz11.so => /s01/11gdb/lib/libnnz11.so (0x00002b042dead000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00000039f2400000)
        libm.so.6 => /lib64/libm.so.6 (0x00000039f2000000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00000039f2800000)
        libnsl.so.1 => /lib64/libnsl.so.1 (0x00000039f5c00000)
        libc.so.6 => /lib64/libc.so.6 (0x00000039f1c00000)
        libaio.so.1 => /usr/lib64/libaio.so.1 (0x00002b042e293000)
        /lib64/ld-linux-x86-64.so.2 (0x00000039f1800000)

[maclean@rh2 lib]$ cp bbed $ORACLE_HOME/bin

[maclean@rh2 lib]$ which bbed
/s01/11gdb/bin/bbed

[maclean@rh2 lib]$ bbed
Password:

BBED: Release 2.0.0.0.0 - Limited Production on Thu Sep 2 15:18:37 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

BBED>

 

(Screenshots of the resulting BBED session are omitted here.)

ORA-00600 [KCBZPB_1], [59033077], [4], [1], [] example

Below is the ORA-600 entry from the alert log:

alert.log:
Hex dump of Absolute File 14, Block 312821 in trace file /u01/ORAHOME/app/oracle/admin/TIGERS7/bdump/tigers7_dbw0_10999.trc
***
Corrupt block relative dba: 0x0384c5f5 (file 14, block 312821)
Bad header found during preparing block for write
Data in bad block -
type: 6 format: 1 rdba: 0x00000384
last change scn: 0xf90b.c5f55f7c seq: 0x9 flg: 0x72
consistency value in tail: 0x0001f90b
check value in block header: 0x102, block checksum disabled
spare1: 0x6, spare2: 0x2, spare3: 0x0
***
Thu Apr 16 18:32:48 2009
Errors in file /u01/ORAHOME/app/oracle/admin/TIGERS7/bdump/tigers7_dbw0_10999.trc:
ORA-00600: internal error code, arguments: [kcbzpb_1], [59033077], [4], [1], [], [], [], []
Thu Apr 16 18:32:49 2009
Errors in file /u01/ORAHOME/app/oracle/admin/TIGERS7/bdump/tigers7_dbw0_10999.trc:
ORA-00600: internal error code, arguments: [kcbzpb_1], [59033077], [4], [1], [], [], [], []
DBW0: terminating instance due to error 600
Instance terminated by DBW0, pid = 10999
Thu Apr 16 19:04:58 2009

After that, we ran DBVERIFY against the identified datafile and it reported no errors:

DBVERIFY: Release 9.2.0.8.0 - Production on Thu Apr 16 19:31:32 2009

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


DBVERIFY - Verification starting : FILE = /u32/ORAINDX/oradata/TIGERS7/indx01.dbf

DBVERIFY - Verification complete

Total Pages Examined : 1280000
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 1262823
Total Pages Failing (Index): 0
Total Pages Processed (Other): 8751
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 8426
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
Highest block SCN : 10386833124905 (2418.1602203177)

We opened an SR, and Oracle Support suggested the following action plan:
ACTION PLAN
===========
1) please describe the sequence of events leading up to the problem

2) please upload the alert.log. ZIP if >2MB. Do not use RAR.

3) please describe your backup strategy:
a) when was your last valid backup?
b) are you using RMAN to perform this backup?
c) do you have all archivelogs from the last backup to now?
d) was this a hot or cold backup?

4) even if you're not using RMAN, run the following in RMAN:
$ rman target /
RMAN> backup validate check logical database;

5) Once RMAN validate is completed, run the following in SQL*Plus as SYSDBA:
SQL> select * from v$database_block_corruption;

6) Please run the following query in SQL*Plus as SYSDBA
-- db must be in either MOUNT or OPEN mode
-- Save the queries to a file, eg. rec_query1.sql, then run it in SQL*Plus
----------------- start ------------------
set echo on
set pagesize 2000 linesize 200 trimspool on
col name form a60
col status form a10
col dbname form a15
col member form a60
col inst_id form 999
col resetlogs_time form a25
col created form a25
col DB_UNIQUE_NAME form a15
col stat form 9999999999
col thr form 99999
col "Uptime" form a80

spool rec_query1.out
show user
alter session set nls_date_format='DD-MM-RR hh24:mi:ss';

select inst_id, instance_name, status,
to_char(STARTUP_TIME,'dd-Mon-yyyy hh24:mi') || ' - ' ||
trunc(SYSDATE-(STARTUP_TIME) ) || ' day(s), ' ||
trunc(24*((SYSDATE-STARTUP_TIME) - trunc(SYSDATE-STARTUP_TIME)))||' hour(s), ' ||
mod(trunc(1440*((SYSDATE-STARTUP_TIME) - trunc(SYSDATE-STARTUP_TIME))), 60) ||' minute(s), ' ||
mod(trunc(86400*((SYSDATE-STARTUP_TIME) - trunc(SYSDATE-STARTUP_TIME))), 60) ||' seconds'
"Uptime"
from gv$instance
order by inst_id
/

select dbid, name dbname, open_mode, database_role,
to_char(created,'dd-Mon-YYYY hh24:mi:ss') created,
to_char(resetlogs_time,'dd-Mon-YYYY hh24:mi:ss') resetlogs_time
from v$database;

archive log list;

select count(*) from v$backup where status = 'ACTIVE';

select * from v$log;
select * from v$logfile;
select * from v$recover_file order by 1;

select distinct(status) from v$datafile;
select FILE#,TS# , status, NAME from v$datafile
where status not in ('SYSTEM','ONLINE')
order by 1;

select fhsta, count(*)
from X$KCVFH group by fhsta;

select min(fhrba_Seq), max(fhrba_Seq)
from X$KCVFH;
select hxfil FILE#,fhsta STAT,fhscn SCN,
fhthr thr, fhrba_Seq SEQUENCE,fhtnm TABLESPACE
from x$kcvfh order by 1;

7) dump the block. Run the following as SYSDBA in SQL*Plus:
SQL> alter session set max_dump_file_size=unlimited;
SQL> oradebug setmypid;
SQL> alter system dump datafile 'full pathname for file 14' block 312821;
SQL> oradebug tracefile_name;

==> upload the said trace file

8) run dbv against datafile 14:
$ dbv file= logfile=

spool off
----------------- end ------------------

RESEARCH
===============
ORA-600 [4519] “Block Corruption Detected – Cache type wrong”
We found a corrupted block when trying to read a block using
consistent read. An invalid block type was found.
Possible Block Corruption in Memory.

ORA-600 [kcbzpb_1] A block has been read cleanly from disk and updated successfully by the
clients of the cache layer.
Before the cache layer writes the block back to disk it does a health
check on the cache header.
If requested to do so (default), it generates a checksum for the block.
The health check is failing.
MEMORY CORRUPTION

ORA-600 [kcbzpb_1] was raised because DBA 59033077 => 14,312821 was found corrupted in the cache before we wrote it to disk.
Alert.log shows the same block as corrupted, BAD HEADER, meaning the block was overwritten.
Now DBV doesn't show any corruption in file 14.
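
As a quick cross-check, the first ORA-600 argument (59033077) is the data block address; DBMS_UTILITY can split it back into the relative file number and block number, which here should come out as file 14, block 312821, matching the alert.log:

select dbms_utility.data_block_address_file(59033077)  rel_file#,
       dbms_utility.data_block_address_block(59033077) block#
  from dual;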

ACTION PLAN
====================

Hi,

I reviewed the information and the crash was caused by an in-memory corruption.
If restarted, your database should be fine.

RESEARCH
================
Db crashed with ORA-600 [KCBZPB_1] because of corrupted block in memory:

STACK: kcbbxsv kcbbwlru kcbbdrv ksbabs ksbrdp

Bug.5866883/5845232 (36) INSTANCE GOES DOWN DUE TO ORA-600 [KCBZPB_1] V9208:
Bug.5845843/5845232 (96) DATABASE CRASH BY ORA-00600 [2032] , ORA-00600 [KCBZPB_1]

Bug:5845232: Block corruption / errors from concurrent dequeue operations
Tags: AQ CORR/PHY DUMP OERI R9208 REGRESSION SUPERCEEDED
Details:
This problem is introduced in 9.2.0.8 by the fix for bug 4144683.
Concurrent dequeue operations can lead to block corruption
/ memory corruption with varying symptoms such as ORA-600 [6033],
ORA-600 [6101] and ORA-600 [kcoapl_blkchk] if DB_BLOCK_CHECKING is enabled.
The fix for this bug is Patch 6401576.

Bug:6401576 ORA-600 [KTBAIR1] / ORA-600 [KCBZPB_1] / CORRUPTION MESSAGES -> DB CRASH
Abstract: OERI[ktbair1] / ORA-600 [6101] index corruption possible
Fixed-Releases: WIN:9208P22
Tags: CORR/IND OERI
Details:
Note: This fix replaces the fix in bug 5845232.
Certain index operations can lead to block corruption
/ memory corruption with varying symptoms such as ORA-600 [6033],
ORA-600 [6101] , ORA-600 [ktbair1] , ORA-600 [kcbzpb_1],
ORA-600 [4519] and ORA-600 [kcoapl_blkchk] if DB_BLOCK_CHECKING is enabled.

ISSUE CLARIFICATION
====================
Db crashed with ORA-600 [KCBZPB_1]

ISSUE VERIFICATION
===================
alert.log and trace file

CAUSE DETERMINATION
======================
in memory corruption

CAUSE JUSTIFICATION
====================
Bug:6401576 ORA-600 [KTBAIR1] / ORA-600 [KCBZPB_1] / CORRUPTION MESSAGES -> DB CRASH

POTENTIAL SOLUTION(S)
======================
apply patch for Bug:6401576

POTENTIAL SOLUTION JUSTIFICATION(S)
====================================
to fix the issue

SOLUTION / ACTION PLAN
=======================

Hi,

These errors look very similar to Bug:6401576 ORA-600 [KTBAIR1] / ORA-600 [KCBZPB_1] / CORRUPTION MESSAGES -> DB CRASH

Please download and apply one-off patch for Bug:6401576 from

Metalink->Patches->patch#=6401576 ->Platform=Hp_UX

Thanks, Rodica

About the parameter log_file_name_convert

The Oracle documentation's description of this parameter is easily misread: "converts the filename of a new log file on the primary database to the filename of a log file on the standby database". It is sometimes misunderstood as a conversion of archived log file names, whereas the test below shows it acting on the online redo log file names recorded in the standby controlfile.

For example, run the following test on a standby database:

 

alter system set log_file_name_convert='orcl','ZZZZZZ' scope=spfile;

SQL> select fnnam,fnonm from x$kccfn;

FNNAM
--------------------------------------------------------------------------------
FNONM
--------------------------------------------------------------------------------
/u01/oradata/ZZZZZZ/redo03.log
/u01/oradata/orcl/redo03.log
/u01/oradata/ZZZZZZ/redo02.log
/u01/oradata/orcl/redo02.log
/u01/oradata/ZZZZZZ/redo01.log
/u01/oradata/orcl/redo01.log


alter system set log_file_name_convert='orcl','8888888' scope=spfile;

SQL> select fnnam,fnonm from x$kccfn;

FNNAM
--------------------------------------------------------------------------------
FNONM
--------------------------------------------------------------------------------
/u01/oradata/8888888/redo03.log
/u01/oradata/orcl/redo03.log
/u01/oradata/8888888/redo02.log
/u01/oradata/orcl/redo02.log
/u01/oradata/8888888/redo01.log
/u01/oradata/orcl/redo01.log

Most of the information in v$datafile comes from the internal view x$kccfn; kccfn stands for the [F]ile [N]ames held in the controlfile. In it, fnnam is the file name after the name recorded in the controlfile has been converted (by parameters such as db_file_name_convert or log_file_name_convert), while fnonm is the original file name (file path) in the controlfile. If you run into log file name or data file name conversion problems while setting up Data Guard, querying this view helps with further analysis.
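
In a real Data Guard setup the conversion is usually done on directory paths rather than on the DB name as in the test above. A minimal sketch, assuming the primary's online redo logs sit under /u01/oradata/orcl and the standby keeps its copies under /u01/oradata/stdby (both paths are illustrative), run on the standby:

alter system set log_file_name_convert='/u01/oradata/orcl','/u01/oradata/stdby' scope=spfile;
-- static parameter: restart the standby instance, then verify the converted names
select member from v$logfile;
select fnnam, fnonm from x$kccfn;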

author: maclean
permanent link:https://www.askmac.cn/2010/05/31/%E5%85%B3%E4%BA%8E%E5%8F%82%E6%95%B0log_file_name_convert/
date:2010-05-31
All rights reserved.

One case of the ORA-00600 [kcbz_check_objd_typ_3] error

In the early hours of May 26, a customer instance's alert log reported "ORA-00600: internal error code, arguments: [kcbz_check_objd_typ_3], [0], [0], [1], [], [], [], []". Analysis showed the symptoms were identical to the ORA-600 error previously seen on the CR instance.

The SQL statement being executed at the time can be seen in the trace file m1_m0001_4209.trc (see attachment).

The PL/SQL call stack is:

----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
3f3e89300        10  package body SYS.PRVT_HDM
3f5e9d3d8        16  SYS.WRI$_ADV_HDM_T
3f3f90898      1535  package body SYS.PRVT_ADVISOR
3f3f90898      1618  package body SYS.PRVT_ADVISOR
3f3e89300       106  package body SYS.PRVT_HDM
3eb69a3f8         1  anonymous block

The function call stack is:

kgerinv kgeasnmierr kcbassertbd3 kcbz_check_objd_typ kcbzib kcbgtcr ktrget kdsgrp kdsfbr qertbFetchByRowid rwsfcd…….

The information above matches the description in Metalink Bug 4430244 exactly: the Segment Advisor code erroneously loads blocks belonging to already-dropped objects into the buffer cache, causing subsequent operations to fail.

Earlier investigation had already confirmed that the patch for Bug 4430244 is applied on this system, so this is believed to be either an unknown bug that Oracle has not yet resolved, or a defect in the previously supplied 4430244 patch, for the following reasons:

1. It has been confirmed that the one-off patches currently applied do not conflict with each other, i.e. the functionality delivered by each one-off patch is intact;

2. The database already has the one-off patch for bug 4430244, which matches this error message exactly;

3. Even after the relevant one-off patches were applied, customers were still reporting exactly the same error on Metalink (see bug 7032704 and bug 6818725);

4. The data block involved this time differs from the one seen previously (July 2008), so this is not a corruption on physical storage;

5. This error matches the description in Bug 6388743 "ORA-00600 [KCBZ_CHECK_OBJD_TYP_3],[0],[0],[1],[],[],[],[] OCCURRED" exactly. Oracle suspected but never confirmed that it was BUG 4430244, only recommending the 4430244 one-off patch or the 10.2.0.4 patch set; that bug was eventually closed when the customer applied the 10.2.0.4 patch set. The real cause remains unknown.

In the SR raised for the earlier occurrence of this error on this instance, Metalink proposed the following solutions:

1. Flush the buffer cache on all RAC instances

After flushing the buffer cache with this command, all buffers are marked free, i.e. previously cached data is evicted from memory. The main impact is an increase in physical reads for a while, so it is best done during an idle period, avoiding a flush while a heavy workload is requesting buffer space. This only temporarily prevents the error from being raised.

2. Disable the Segment Advisor job (see the sketch after this list)

This job collects space-usage information for object segments (tables, indexes, etc.), determines whether a segment is a candidate for shrinking based on its free space, and gives advice, e.g. whether there is significant wasted space, whether a table's high water mark is too high, whether the table should be shrunk, and so on. If you do not need or do not care about segment space usage, these operations can be skipped. This is a long-term way to stop the problem from occurring.

3. Upgrade the database to 10.2.0.4
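
A minimal sketch of the first two workarounds, assuming a 10gR2 database where the automatic Segment Advisor runs as the SYS scheduler job AUTO_SPACE_ADVISOR_JOB (the job name is an assumption; confirm it in dba_scheduler_jobs first):

-- 1. flush the buffer cache; run on every RAC instance, preferably during an idle period
alter system flush buffer_cache;

-- 2. locate and disable the automatic Segment Advisor job
select owner, job_name, enabled from dba_scheduler_jobs where job_name like '%SPACE_ADVISOR%';
exec dbms_scheduler.disable('SYS.AUTO_SPACE_ADVISOR_JOB');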

author: maclean
permanent link:https://www.askmac.cn/2010/05/31/ora-00600-kcbz_check_objd_typ_3%E9%94%99%E8%AF%AF%E4%B8%80%E4%BE%8B/
date:2010-05-31
All rights reserved.
