PRM-DUL Display Problems under VNC on Linux/Unix

If PRM-DUL cannot display its menu bar when run over a remote VNC graphical session on Linux/Unix, press Alt+F7 and drag the window until the full PRM-DUL GUI becomes visible.

Note: when using PRM-DUL to recover a database with a large volume of data, prefer VNC. If a tool such as Xmanager is interrupted mid-run, the recovery may have to be restarted from scratch.

Oracle STARTUP GIVES ORA-1172 AND ORA-600[3020]

PROBLEM:
Received ORA-1172: recovery of thread %s stuck at block 15902 of file 6
during a startup of the database after a backup.
It looks like the database was NOT shut down cleanly before the
backup, but was instead aborted.
On trying to issue: recover datafile '<file 6>',
the customer then receives ORA-600[3020][402669086][1][64][1]......
402669086 being the dba mentioned in the ORA-1172 above.
=========================
DIAGNOSTIC ANALYSIS:
We received the customer's database and tried to recover it.
We hit the same problem and took a dd dump of block 15902.
The beginning of the block dump shows:
0000000 0601 0000 1800 3e1c 0000 007b 0000 000a  <<======
0000020 0000 0000 0100 0000 0000 0509 4000 65e5
0000040 0001 7b88 0001 0200 0000 0000 0002 0016
0000060 0000 05c7 0800 1055 004c 01df 8000 0001
0000100 4000 65e4 0001 0005 ffff 001c 01d7 01bb
0000120 01bb 0000 0005 068b 055e 0431 0304 01d7
0000140 0000 0000 0000 0000 0000 0000 0000 0000
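
The same block can also be dumped from inside Oracle, which formats the
cache-header fields (dba, inc, seq, type) seen in the raw dump above. A
minimal sketch, assuming the stuck block is still file 6, block 15902; the
output goes to a trace file in user_dump_dest:

 SQL> ALTER SYSTEM DUMP DATAFILE 6 BLOCK 15902;

The formatted dump shows the block's sequence number (seq: A here), which is
what recovery compares against the SEQ of each redo change vector.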
 
Traced the recovery process and received the following in the trace file:
RECOVERY OF THREAD 1 STUCK AT BLOCK 15902 OF FILE 6
REDO RECORD - Thread:1 RBA:0x000040:0x00000402:0x0076 LEN:0x0260 VLD:0x01
CONTINUE SCN scn: 1.40006607 02/24/97 14:19:12
CHANGE #3 CLASS:1 DBA:0x18003e1e INC:0x0000007b SEQ:0x00400007 OPCODE 11.2
buffer dba: 18003E1E inc: 7B seq: A ver: 1 type: 6=trans data
.... (rest of the trace file is included)
Also dumped the logfile ....
CHANGE #1 CLASS:1 DBA:0x18003e1e INC:0x0000007b SEQ:0x00000001 OPCODE 13.6
ktsnb redo: seg:0x509 typ:1 inx:1
Can see changes being made all the way to SEQ:0x00000009.
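
The redo for just this one block can be isolated when dumping the log file,
instead of dumping everything. A minimal sketch with a hypothetical log file
path (very old releases separate the file and block numbers in the DBA
MIN/MAX clause with a dot):

 SQL> ALTER SYSTEM DUMP LOGFILE '/u01/oradata/prod/redo01.log'
   2  DBA MIN 6 15902 DBA MAX 6 15902;

This prints only the change vectors targeting file 6, block 15902, which
makes the SEQ progression easy to follow.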
 
QUESTION:  why, then, is recovery stuck on
CHANGE #3 CLASS:1 DBA:0x18003e1e INC:0x0000007b SEQ:0x00400007?
Where is this coming from?
=========================
REPRODUCIBLE?:
Yes, with 7.1.3 and 7.1.6.  I have the customer's database
if needed.  Tried to recover in 7.1.3 and 7.1.6 and hit
exactly the same RECOVERY OF THREAD 1 STUCK AT BLOCK 15902 OF FILE 6
problem.

CUSTOMER IMPACT:
The customer needs to know why the recovery did NOT go through.
Although they did a shutdown abort, how could that have
damaged the database so badly that normal crash recovery fails?
The customer needs to know what has happened to cause the
recovery to get "stuck".

=========================
WORKAROUND:
The customer had to rebuild the database, but because of export
problems, they ended up using DUL.

=========================

For more Oracle DUL (Data Unloader) information, refer to
http://parnassusdata.com/en/emergency-services

If you cannot recover the data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638

E-mail: service@parnassusdata.com


Oracle ASM COMMUNICATION ERROR CAUSING THE INSTANCE TO CRASH

An ASM communication error has been reported by the RDBMS, leading to an
instance crash. This has happened a couple of times in the last few months.

Two kinds of ASM communication error have occurred:

WARNING: ASM communication error: op 17 state 0x40 (21561)
WARNING: ASM communication error: op 0 state 0x0 (15055)
 


We are seeing this kind of crash frequently, causing disruption to the
service and availability of this critical production database.
 
DIAGNOSTIC ANALYSIS:
--------------------
This is a 4-node RAC database. The last occurrence of the issue was on instance 1.
 
Diagnostics time frame to focus on:
===========================================
Wed Feb 27 10:29:50 2013 <==== ariesprd1 communication failure with ASM reported
WARNING: ASM communication error: op 17 state 0x40 (21561)
..
..
WARNING: ASM communication error: op 0 state 0x0 (15055)
..
Wed Feb 27 12:56:04 2013
Errors in file
D:\ORABASE\diag\rdbms\ariesprd\ariesprd1\trace\ariesprd1_dbw0_10068.trc:
ORA-21561: OID generation failed
..
..
Wed Feb 27 12:56:04 2013 <===== leading to instance crash
System state dump requested by (instance=1, osid=10068 (DBW0)), 
summary=[abnormal instance termination].
System State dumped to trace file
D:\ORABASE\diag\rdbms\ariesprd\ariesprd1\trace\ariesprd1_diag_6420.trc
DBW0 (ospid: 10068): terminating the instance due to error 63997
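
When these warnings start appearing, a quick check of whether the RDBMS can
still reach ASM is to query the diskgroup views from the database instance.
A minimal sketch; this is a health probe, not a fix:

 SQL> SELECT name, state, total_mb, free_mb
   2  FROM   v$asm_diskgroup;

If this query hangs or errors while the ASM instance itself looks healthy,
the fault is in the RDBMS-to-ASM connection (the op/state codes in the
warnings above) rather than in the storage.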
 
 
WORKAROUND:
-----------
Generally, the instance crash itself clears the condition, but last time it
led to a block recovery issue (a kind of logical corruption) that caused
all four nodes to hang indefinitely.

This creates a hang in the system until the database instance ultimately crashes.

The last crash led to a block recovery issue, and in the end we had to
deploy DUL to retrieve the data.



Oracle ASM DISKGROUP WILL NOT MOUNT AFTER ADDING DISKS

This environment is using SecureFiles. There is no backup, because the
customer ordered the wrong hardware from Sun to perform the backup to. The
customer tried to add 8 disks to the diskgroup. One of the disks added was
slice2, which held the partition table for the disk. After the add failed
and they realized what had happened, they worked with the system
administrators and, according to the customer, successfully switched slice2
and slice6. After this they used the disks to successfully create a dummy
diskgroup DATA3. The diskgroup holds critical production data, and its
failure to mount is keeping the production database from mounting, resulting
in significant revenue loss for the company. As there is presently no backup
of this data and SecureFiles are in use, DUL is not an option to extract the
data from the failed diskgroup. The diskgroup will not mount because the
disks that were just added cannot be discovered. The customer's last attempt
to use AMDU resulted in core dumps and no AMDU output. The customer requests
that the existing disk headers be repaired so that the diskgroup can be
mounted and the correct disks then added to it.
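
Before any header repair is attempted, it is worth recording how ASM
currently classifies every candidate disk. A minimal sketch, run from the
ASM instance; the HEADER_STATUS column (MEMBER, FORMER, CANDIDATE, ...)
shows which disks lost their membership headers during the failed add:

 SQL> SELECT path, header_status, mount_status, state
   2  FROM   v$asm_disk
   3  ORDER  BY path;

Disks that should belong to the diskgroup but report CANDIDATE or FORMER,
or do not appear at all, are the ones blocking discovery and hence the mount.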

 

Refer  http://parnassusdata.com/en/emergency-services  for more info.


Oracle ORA-600 [3020], ORA-353 DURING DATABASE RECOVERY

This happened 3 times on 3 different archived logs in a recent recovery.
Finally the customer had to abort the recovery process and then use DUL to
rebuild the database.

About DUL & third party DUL-like tools Refer  http://parnassusdata.com/en/emergency-services  for more info.
Are you sure that they have write-back caching rather than write-through
caching? If so, how did they enable it? Is this a function of the hard
drive they are using, or some special software?
The reason this is important is that write-back caching is known to corrupt
Oracle databases on all platforms, while write-through is safe. Oracle must
be absolutely guaranteed that when NT claims a write has completed, the data
is really on disk. If data that Oracle believes it has written is still in
system memory when NT crashes, the database will be unrecoverable at that
point: the state of the database will differ from what the undo/redo logs
claim it should be.
 



Oracle CORRUPTED DATABASE: OPEN FAILS WITH ORA-704, ORA-376 AND ORA-1092

A full database restore was performed from the hot backup taken 03-SEP.
There has never been a backup of the undo tablespace, so
it was not restored. We updated the init.ora parameters
to offline and corrupt the rollback segments and to allow
corruption at resetlogs. After the restore, and before recovery, the
three undo datafiles (files 2, 103, and 103) were offline dropped.
Recovery was then started on the remaining datafiles, applying
from log sequence# 2559 and cancelling after 2576 was applied.
This recovers the database to 09-SEP. Undo datafile
number 2 is still failing validation, even after the offline
drop and with _offline and _corrupt set for all undo segments listed
in the alert.log since before the database backup.
 
DIAGNOSTIC ANALYSIS:
--------------------
init.ora parameters changed or added:
 
 undo_management = manual
 _corrupted_rollback_segments = (_SYSSMU1$, thru _SYSSMU11$)
 _offline_rollback_segments = (_SYSSMU1$, thru _SYSSMU11$)
 _allow_resetlogs_corruption = true
 max_dump_file_size=unlimited
 event = "10013 trace name context forever, level 10"
 event = "376 trace name errorstack level 3"
 
 create the controlfile to mount
    
 sql trace your recovery session
    
 SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
 SQL> alter database datafile  
'/oracle/index03/oradata/medprod/undo_dat_01.dbf'
      offline drop;
 SQL> alter database datafile  
'/oracle/data04/oradata/medprod/undo_dat_02.dbf'
      offline drop;
 SQL> alter database datafile  
'/oracle/data01/oradata/medprod/undo_dat_02.dbf'
      offline drop;
 SQL> recover database until cancel using backup controlfile;
      cancel after 2576 is applied
 SQL> alter database open resetlogs;
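
The "create the controlfile to mount" step above recreates the controlfile
with a CREATE CONTROLFILE statement, typically generated beforehand with
ALTER DATABASE BACKUP CONTROLFILE TO TRACE. A minimal sketch; the log and
datafile names below are hypothetical placeholders, and the real statement
must list every datafile being kept:

 SQL> STARTUP NOMOUNT
 SQL> CREATE CONTROLFILE REUSE DATABASE "MEDPROD" RESETLOGS ARCHIVELOG
   2    MAXLOGFILES 16
   3    MAXDATAFILES 1024
   4    LOGFILE
   5      GROUP 1 '/oracle/redo01/oradata/medprod/redo01.log' SIZE 100M
   6    DATAFILE
   7      '/oracle/data01/oradata/medprod/system01.dbf'
   8  ;

At this point the undo datafiles were still listed in the controlfile,
which is why the three OFFLINE DROP commands above were needed before
recovery.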
 
Executed 10046 trace level 12 with event 376 set:
 
medprod_ora_52745.trc:

ksedmp: internal or fatal error
ORA-376: file 2 cannot be read at this time
ORA-1110: data file 2: '/oracle/index03/oradata/medprod/undo_dat_01.dbf'
Current SQL statement for this session:
select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1, 
spare2 from obj$ where owner#=:1 and name=:2 and namespace=:3 and remoteowner 
is null and linkname is null and subname is null
 

Oracle DUL (Data Unloader) may help in this case:

Refer  http://parnassusdata.com/en/emergency-services  for more info.



ORA-600 [KTSSDRO_SEGMENT1] ON STARTUP – DB OPENS, BUT CRASHES IMMEDIATELY

The customer was initially receiving an ORA-600[25012][1][3]
while doing an insert into a user table. The customer attempted
an index rebuild on an index on the table, and the
instance crashed. Now, all attempts to open the DB
result in ORA-600[ktssdro_segment1][1][12621530][0].
/*
* Since we are committing after freeing every eight extents it is
* possible that the number of extents as indicated by the seg$ entry
* is different from the number of used extents in uet$. This will
* happen if the earlier instance had crashed in the midst of freeing
* extents. However since the segment header is itself freed only at
* the end the extent number should not be zero
*/
   ASSERTNM3(key.ktsukext != 0, OERINM("ktssdro_segment1"),
          segtid->tsn_ktid, segtid->dba_ktid, key.ktsukext);
   KSTEV4(key.ktsuktsn, key.ktsukfno, key.ktsukbno, key.ktsukext,
         KSTID(409));
    }
 
From reading this, it looks like possible corruption in UET$ or SEG$.
I have suggested that the customer set event 10061 to stop SMON from
freeing extents. This would mean no deletes from UET$, but it is not
certain that this will solve the problem.
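
Event 10061 is set instance-wide in the init.ora, in the same style as the
events already shown earlier in this document, because it must take effect
in the SMON background process rather than in a foreground session. A
minimal sketch; level 10 is assumed here as the conventional level for this
event:

 event = "10061 trace name context forever, level 10"

With this in place SMON stops freeing extents, so no deletes are issued
against UET$ while the dictionary inconsistency is investigated.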
 
Unfortunately, the customer does not have a good backup or backup strategy.
The data was unloaded using Oracle DUL (Data Unloader).

Refer  http://parnassusdata.com/en/emergency-services  for more info.



ORA-00600 [KDDUMMY_BLKCHK] RECURRING

The exact error in that file is:
ORA-600: internal error [kddummy_blkchk], [255], [2392], [6401]
That is, check code 6401 was detected for file 255, block 2392.
 
Block Checking: DBA = 1069549912, Block Type = KTB-managed data block
*** actual free space = 2612 < kdxcoavs = 2617
---- end index block validation
rechecking block failed with error code 6401
 
The current SQL is a parallel select; the trace file is from a PQ slave.
The stack trace suggests we are doing block cleanout.

The block dump shows an IOT with 6 key and 3 non-key columns.
 
These are indeed all the symptoms of bug 6646613, so this situation is
caused by that bug.
 
Checking the backport files:
43810, 0000,  "check-and-skip corrupt blocks in index scans"
 
The event is implemented in kdi.c and kdir.c; it sets some flags
that should cause the block to be checked and skipped if corrupt.

But that scenario does not apply here: in this case we are cleaning out
the block, and the corruption only becomes visible after the cleanout.




The problem is that the blocks are already corrupt, but our code
does not detect it until the blocks are cleaned out.

In general the problem is a small difference in the recorded available
free space. If the blocks are not updated any further, they can still
be queried and give correct results.

A further update, however, can seriously corrupt a block: we may then
try to insert a row for which there is in reality no space, severely
thrashing the block in the process by overwriting important structures.
 
To salvage the data you either use DUL, or
clone the database and disable all events and block checking.
You may then introduce further corruption, but you can query the data,
so you can use any method to salvage it.
We advise doing this on a clone to protect yourself from
unexpected side effects.
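
On the clone, the salvage described above amounts to switching the checks
off and copying the data out before any further DML can thrash the blocks.
A minimal sketch, with hypothetical owner and table names for the affected
IOT:

 SQL> ALTER SYSTEM SET db_block_checking = FALSE;
 SQL> ALTER SYSTEM SET db_block_checksum = FALSE;
 SQL> CREATE TABLE scott.iot_salvage AS
   2  SELECT * FROM scott.problem_iot;

Because the inconsistency is only a few bytes of free-space accounting, a
pure read (the CTAS select) still returns correct rows; it is further
writes that risk making the corruption worse.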
Refer  http://parnassusdata.com/en/emergency-services  for more info.


ORA-600 [KTSPSCANINIT-D]

PROBLEM:
--------
Ran the script and it came back with TABLE_ACT_ENTRY.
We can select from the TABLE_ACT_ENTRY table with rownum<200, but other
rownum predicates raise an error. We also attempted to analyze the
"SA"."TABLE_ACT_ENTRY" table, but received the same ORA-600 error, as the
pasted output below shows. Why can't TABLE_ACT_ENTRY be analyzed?
 
SQL> select * from TABLE_ACT_ENTRY where rownum>100 and rownum<150;
select * from TABLE_ACT_ENTRY  where rownum>100 and rownum<150
              *
ERROR at line 1:
ORA-600: internal error code, arguments: [ktspScanInit-d], [35758097], 
[],[], [], [], [], []
 
SQL> select * from TABLE_ACT_ENTRY where rownum>18315000;
select * from TABLE_ACT_ENTRY where rownum>18315000
              *
ERROR at line 1:
ORA-600: internal error code, arguments: [ktspScanInit-d], [35758097], 
[],[], [], [], [], []
 
SQL>  analyze table "SA"."TABLE_ACT_ENTRY" validate structure cascade;
 analyze table "SA"."TABLE_ACT_ENTRY" validate structure cascade
*
ERROR at line 1:
ORA-600: internal error code, arguments: [ktspScanInit-d], [35758097], 
[],[], [], [], [], []
 
 
DIAGNOSTIC ANALYSIS:
--------------------
ERROR:              
   ORA-600 [ktspscaninit-d] [a]
 
 VERSIONS:
   versions 9.2
 
 DESCRIPTION:
 
   Oracle has encountered an inconsistency in the metadata for an ASSM
   (Automatic Segment Space Management) segment. 
 
   An ASSM segment has two Highwater marks, a Low Highwater mark (LHWM) and 
   a High Highwater mark (HHWM - this is the same as a traditional HWM).
 
   This error is raised when we fail to locate the Low Highwater mark block.
 
   Stored in the segment header is information identifying the Level 1
   Bitmap Block (L1BMB) for the LHWM block; this BMB manages the range
   of data blocks that holds the LHWM block.

   If, during a scan of the ranges in this L1BMB, we fail to locate the
   LHWM block, then this error is raised.
  
 ARGUMENTS:
   Arg [a] Block address of Low HWM block  
  
 FUNCTIONALITY:      
   TRANSACTION SEGMENT PAGETABLE
 
---------------
 
Tried to corrupt the bitmap and rebuild it - this did not work
 
WORKAROUND:
-----------
Drop the table and import from a backup - this is not an option, as
the table is critical to the operation of the entire database.
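
Before resorting to an external unloader, one avenue sometimes tried for
scan errors like this is DBMS_REPAIR, which marks corrupt blocks and lets
full scans skip them. A minimal sketch; whether it helps here is an
assumption, since this ORA-600 is raised from ASSM metadata rather than
from an ordinary soft-corrupt data block:

 SQL> BEGIN
   2    DBMS_REPAIR.SKIP_CORRUPT_BLOCKS(
   3      schema_name => 'SA',
   4      object_name => 'TABLE_ACT_ENTRY',
   5      object_type => DBMS_REPAIR.TABLE_OBJECT,
   6      flags       => DBMS_REPAIR.SKIP_FLAG);
   7  END;
   8  /
 SQL> SELECT /*+ FULL(t) */ COUNT(*) FROM "SA"."TABLE_ACT_ENTRY" t;

In this case the bitmap-driven scan fails before individual blocks are even
reached, which is consistent with the bitmap rebuild attempt above not
working, and is why an unloader was ultimately used.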



The 'DUL' tool was used to extract the table data and write it to a flat file, and they are now trying to use SQL*Loader to load it back into a table.

Refer  http://parnassusdata.com/en/emergency-services  for more info.


Oracle system01.dbf corruption

My Oracle database died. The datafile system01.dbf was corrupted after a power outage.

Several files on the server disks are corrupt, including archive logs.

The backup was lost.

Does anyone know a strategy to recover this database?

If not, I need to extract the data from the Oracle datafiles to CSV. Does anyone know a tool that can do this?

 

Depending on the ‘corruption’ and the condition of the other files, Oracle Support may be able to help you force the database open in an inconsistent state just to export your data. You will need to log an SR for them to work directly with you.
As for ‘extracting data’, there is a service which field support performs.

 

There may be a possibility of salvaging the data by using DUL (Data Unloader). If you have lost everything, Oracle Consulting may be able to assist with the DUL tool, but note that it comes at a cost above your normal support, so it should be a last resort.

There is also a third-party DUL-like tool available under the name Oracle PRM-DUL
(data unloading by direct data extraction). Refer to http://parnassusdata.com/en/emergency-services for more info.

 
