Below is the ORA-600 entry from the alert.log:
alert.log:
Hex dump of Absolute File 14, Block 312821 in trace file /u01/ORAHOME/app/oracle/admin/TIGERS7/bdump/tigers7_dbw0_10999.trc
***
Corrupt block relative dba: 0x0384c5f5 (file 14, block 312821)
Bad header found during preparing block for write
Data in bad block -
 type: 6 format: 1 rdba: 0x00000384
 last change scn: 0xf90b.c5f55f7c seq: 0x9 flg: 0x72
 consistency value in tail: 0x0001f90b
 check value in block header: 0x102, block checksum disabled
 spare1: 0x6, spare2: 0x2, spare3: 0x0
***
Thu Apr 16 18:32:48 2009
Errors in file /u01/ORAHOME/app/oracle/admin/TIGERS7/bdump/tigers7_dbw0_10999.trc:
ORA-00600: internal error code, arguments: [kcbzpb_1], [59033077], [4], [1], [], [], [], []
Thu Apr 16 18:32:49 2009
Errors in file /u01/ORAHOME/app/oracle/admin/TIGERS7/bdump/tigers7_dbw0_10999.trc:
ORA-00600: internal error code, arguments: [kcbzpb_1], [59033077], [4], [1], [], [], [], []
DBW0: terminating instance due to error 600
Instance terminated by DBW0, pid = 10999
Thu Apr 16 19:04:58 2009
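As a side note, the second ORA-600 argument (59033077) is the decimal data block address; it can be decoded back into a relative file and block number with DBMS_UTILITY. A minimal sketch, runnable from any SQL*Plus session:

-- decode the DBA reported in the ORA-600 arguments above
select dbms_utility.data_block_address_file(59033077)  rel_file#,
       dbms_utility.data_block_address_block(59033077) block#
  from dual;

Here this resolves to relative file 14, block 312821, matching the corrupt block message in the alert.log.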
After that, we ran DBVERIFY against the identified datafile, and it reported no errors:
DBVERIFY: Release 9.2.0.8.0 - Production on Thu Apr 16 19:31:32 2009
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
DBVERIFY - Verification starting : FILE = /u32/ORAINDX/oradata/TIGERS7/indx01.dbf
DBVERIFY - Verification complete
Total Pages Examined         : 1280000
Total Pages Processed (Data) : 0
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 1262823
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 8751
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 8426
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Highest block SCN            : 10386833124905 (2418.1602203177)
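For the record, the verification above was run along these lines; the BLOCKSIZE and LOGFILE values shown here are only illustrative and must match your environment:

$ dbv file=/u32/ORAINDX/oradata/TIGERS7/indx01.dbf blocksize=8192 logfile=dbv_indx01.log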
We opened an SR, and Oracle Support came back with the following action plan:
ACTION PLAN
===========
1) please describe the sequence of events leading up to the problem
2) please upload the alert.log. ZIP if >2MB. Do not use RAR.
3) please describe your backup strategy:
a) when was your last valid backup?
b) are you using RMAN to perform this backup?
c) do you have all archivelogs from the last backup to now?
d) was this a hot or cold backup?
4) even if you’re not using RMAN, run the following in RMAN:
$ rman target /
RMAN> backup validate check logical database;
5) Once RMAN validate is completed, run the following in SQL*Plus as SYSDBA:
SQL> select * from v$database_block_corruption;
6) Please run the following query in SQL*Plus as SYSDBA
-- db must be in either MOUNT or OPEN mode
-- Save the queries to a file, e.g. rec_query1.sql, then run it in SQL*Plus
----------------- start ------------------
set echo on
set pagesize 2000 linesize 200 trimspool on
col name form a60
col status form a10
col dbname form a15
col member form a60
col inst_id form 999
col resetlogs_time form a25
col created form a25
col DB_UNIQUE_NAME form a15
col stat form 9999999999
col thr form 99999
col "Uptime" form a80
spool rec_query1.out
show user
alter session set nls_date_format='DD-MM-RR hh24:mi:ss';
select inst_id, instance_name, status,
to_char(STARTUP_TIME,'dd-Mon-yyyy hh24:mi') || ' - ' ||
trunc(SYSDATE-(STARTUP_TIME)) || ' day(s), ' ||
trunc(24*((SYSDATE-STARTUP_TIME) - trunc(SYSDATE-STARTUP_TIME))) || ' hour(s), ' ||
mod(trunc(1440*((SYSDATE-STARTUP_TIME) - trunc(SYSDATE-STARTUP_TIME))), 60) || ' minute(s), ' ||
mod(trunc(86400*((SYSDATE-STARTUP_TIME) - trunc(SYSDATE-STARTUP_TIME))), 60) || ' seconds'
"Uptime"
from gv$instance
order by inst_id
/
select dbid, name dbname, open_mode, database_role,
to_char(created,'dd-Mon-YYYY hh24:mi:ss') created,
to_char(resetlogs_time,'dd-Mon-YYYY hh24:mi:ss') resetlogs_time
from v$database;
archive log list;
select count(*) from v$backup where status = 'ACTIVE';
select * from v$log;
select * from v$logfile;
select * from v$recover_file order by 1;
select distinct(status) from v$datafile;
select FILE#, TS#, status, NAME from v$datafile
where status not in ('SYSTEM','ONLINE')
order by 1;
select fhsta, count(*)
from X$KCVFH group by fhsta;
select min(fhrba_Seq), max(fhrba_Seq)
from X$KCVFH;
select hxfil FILE#,fhsta STAT,fhscn SCN,
fhthr thr, fhrba_Seq SEQUENCE,fhtnm TABLESPACE
from x$kcvfh order by 1;
7) dump the block. Run the following as SYSDBA in SQL*Plus:
SQL> alter session set max_dump_file_size=unlimited;
SQL> oradebug setmypid;
SQL> alter system dump datafile 'full pathname for file 14' block 312821;
SQL> oradebug tracefile_name;
==> upload the said trace file
8) run dbv against datafile 14:
$ dbv file=
spool off
----------------- end ------------------
RESEARCH
===============
ORA-600 [4519] "Block Corruption Detected - Cache type wrong"
We found a corrupted block when trying to read a block using
consistent read. An invalid block type was found.
Possible Block Corruption in Memory.
ORA-600 [kcbzpb_1]
A block has been read cleanly from disk and updated successfully by the
clients of the cache layer.
Before the cache layer writes the block back to disk it does a health
check on the cache header.
If requested to do so (default), it generates a checksum for the block.
The health check is failing.
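Whether this checksum/checking machinery is active can be confirmed quickly from SQL*Plus (the alert.log above reports "block checksum disabled"):

SQL> show parameter db_block_checksum
SQL> show parameter db_block_checking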
MEMORY CORRUPTION
ORA-600 [kcbzpb_1] was raised because DBA 59033077 => file 14, block 312821 was found corrupt in the cache before being written to disk.
The alert.log shows the same block as corrupt with a bad header, meaning the block header was overwritten in memory.
Now DBV doesn’t show any corruption in file 14.
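To double-check which segment owns the reported block, a lookup against DBA_EXTENTS can be run as SYSDBA; a minimal sketch for absolute file 14, block 312821:

-- find the segment covering absolute file 14, block 312821
select owner, segment_name, segment_type, tablespace_name
  from dba_extents
 where file_id = 14
   and 312821 between block_id and block_id + blocks - 1;

If this returns no rows, the block currently sits in free space.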
ACTION PLAN
====================
Hi,
I reviewed the information; the crash was caused by an in-memory corruption.
Once restarted, your database should be fine.
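A reasonable sanity check after restarting is to repeat the logical validation from the action plan above; a sketch:

SQL> startup
$ rman target /
RMAN> backup validate check logical database;
SQL> select * from v$database_block_corruption;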
RESEARCH
================
Db crashed with ORA-600 [KCBZPB_1] because of a corrupted block in memory:
STACK: kcbbxsv kcbbwlru kcbbdrv ksbabs ksbrdp
Bug.5866883/5845232 (36) INSTANCE GOES DOWN DUE TO ORA-600 [KCBZPB_1] V9208:
Bug.5845843/5845232 (96) DATABASE CRASH BY ORA-00600 [2032] , ORA-00600 [KCBZPB_1]
Bug:5845232: Block corruption / errors from concurrent dequeue operations
Tags: AQ CORR/PHY DUMP OERI R9208 REGRESSION SUPERCEEDED
Details:
This problem is introduced in 9.2.0.8 by the fix for bug 4144683.
Concurrent dequeue operations can lead to block corruption
/ memory corruption with varying symptoms such as ORA-600 [6033],
ORA-600 [6101] and ORA-600 [kcoapl_blkchk] if DB_BLOCK_CHECKING is enabled.
The fix for this bug is Patch 6401576.
Bug:6401576 ORA-600 [KTBAIR1] / ORA-600 [KCBZPB_1] / CORRUPTION MESSAGES --> DB CRASH
Abstract: OERI[ktbair1] / ORA-600 [6101] index corruption possible
Fixed-Releases: WIN:9208P22
Tags: CORR/IND OERI
Details:
Note: This fix replaces the fix in bug 5845232.
Certain index operations can lead to block corruption
/ memory corruption with varying symptoms such as ORA-600 [6033],
ORA-600 [6101] , ORA-600 [ktbair1] , ORA-600 [kcbzpb_1],
ORA-600 [4519] and ORA-600 [kcoapl_blkchk] if DB_BLOCK_CHECKING is enabled.
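Before chasing the one-off, it is worth checking whether either fix is already present in the ORACLE_HOME; a quick check, assuming OPatch is installed under $ORACLE_HOME/OPatch:

$ cd $ORACLE_HOME/OPatch
$ ./opatch lsinventory | egrep '6401576|5845232'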
ISSUE CLARIFICATION
====================
Db crashed with ORA-600 [KCBZPB_1]
ISSUE VERIFICATION
===================
alert.log and trace file
CAUSE DETERMINATION
======================
in-memory corruption
CAUSE JUSTIFICATION
====================
Bug:6401576 ORA-600 [KTBAIR1] / ORA-600 [KCBZPB_1] / CORRUPTION MESSAGES --> DB CRASH
POTENTIAL SOLUTION(S)
======================
apply patch for Bug:6401576
POTENTIAL SOLUTION JUSTIFICATION(S)
====================================
to fix the issue
SOLUTION / ACTION PLAN
=======================
Hi,
These errors look very similar to Bug:6401576 ORA-600 [KTBAIR1] / ORA-600 [KCBZPB_1] / CORRUPTION MESSAGES --> DB CRASH
Please download and apply the one-off patch for Bug:6401576 from
Metalink->Patches->patch#=6401576->Platform=HP-UX
Thanks, Rodica
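For completeness, applying a 9.2.0.8 one-off with OPatch typically follows the pattern below. The unzip location is an assumption on my part, and the README shipped with the patch remains the authoritative set of steps:

# shut down any instance and listener running out of this ORACLE_HOME first
$ cd /tmp/6401576                      # assumed location where the downloaded patch was unzipped
$ $ORACLE_HOME/OPatch/opatch apply
# restart the instance, then confirm the patch is registered in the inventory
$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 6401576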