DUL Oracle Data Unloader Download

Oracle DUL is an internal Oracle database recovery tool, developed by Bernard van Duijnen of Oracle Support in the Netherlands:
DUL is not an Oracle product
DUL is not a product supported by Oracle
DUL is strictly restricted to internal use by Oracle Support's after-sales support organization
Use of DUL outside Oracle requires internal Oracle approval; you must first have purchased Oracle's standard support service (PS) before DUL may be used for you at all, otherwise you are not even eligible for it
One reason DUL is so tightly controlled is that it incorporates parts of the Oracle source code, which must itself be strictly controlled

Starting roughly with DUL 9, Bernard van Duijnen added a software time lock to DUL in order to restrict its use outside Oracle. He periodically compiles DUL (which is written in C) for the various platforms and uploads the builds to Oracle's internal DUL workspace (a stbeehive-based space), from which Oracle Support staff can download them after logging in over the internal VPN. In other words, if bernard.van.duijnen releases a build on October 1 with a 30-day lock, that build essentially stops working by November 1. DUL does not simply read the OS time, so changing the OS clock does not help; Oracle datafiles also record a current time, and that is what DUL reads. An ordinary user cannot realistically change that time just to keep using DUL.
Note that because bernard.van.duijnen does not provide DUL for HP-UX, there is no HP-UX build of DUL.
Also, early Oracle DUL versions are simply too old to be used against current 10g, 11g and 12c databases. In the United States the use of DUL is strictly controlled; in China it is essentially only Oracle's ACS (Advanced Customer Services) organization that uses it for customers, and ACS on-site service is expensive to purchase.
The attachment is an Oracle ACS presentation describing the DUL service (on-site service from Oracle is of course fairly expensive, and it requires that the customer already buys the PS standard support every year; otherwise the ACS on-site service cannot even be purchased):

https://www.askmac.cn/wp-content/uploads/2014/01/DUL.pdf

 

 

DUL 10 Manual:

DUL User’s and Configuration Guide V10.2.4.27

https://www.askmac.cn/wp-content/uploads/2014/01/DUL-Users-and-Configuration-Guide-V10.2.4.27.pdf

 

The following are the download links for DUL 10, but because of the time lock they stop working periodically.

 

DUL FOR LINUX platform

DUL FOR Windows platform

 

ParnassusData Software (the company Maclean works for) has developed PRM-DUL, a product in the same category as DUL. On top of DUL's capabilities it adds a graphical user interface (GUI) and DataBridge (the data does not need to land on disk as SQL*Loader files; it is transferred directly into a target database, much as over a DBLINK). And because PRM-DUL is written in Java, it runs on every platform, including HP-UX.

 

PRM-DUL free edition download:
http://parnassusdata.com/sites/default/files/ParnassusData_PRMForOracle_3206.zip

 

PRM-DUL manual: http://www.parnassusdata.com/sites/default/files/ParnassusData%20Recovery%20Manager%20For%20Oracle%20Database%E7%94%A8%E6%88%B7%E6%89%8B%E5%86%8C%20v0.3.pdf

By default the free edition of PRM-DUL extracts at most 10,000 rows per table. If your database is small enough that no table exceeds 10,000 rows, you can simply use the free PRM-DUL. If your database is larger and the data really matters, you can consider buying the Enterprise Edition of PRM-DUL; it is licensed per database, and one license costs 7,500 RMB (including 17% VAT).
PRM-DUL also offers a number of free licenses:
Several PRM-DUL Enterprise Edition license keys have been published free of charge

If your Oracle database recovery case still cannot be solved after using DUL, you can consider a recovery service:
ParnassusData Software currently covers almost every Oracle recovery scenario, including: database cannot be opened, tables accidentally DROPped, TRUNCATEd or DELETEd, ASM diskgroups that cannot be MOUNTed, and so on.

If you cannot handle it yourself, the ParnassusData professional ORACLE database recovery team can help you recover!
ParnassusData professional database recovery team

 

Phone: 086-13764045638        Email: service@parnassusdata.com

 

 

 

Current recovery options 
restore and rollforward
export/import
use SQL*Loader to re-load the data
(parallel) create table as select (PCTS)
Transportable Tablespace


Diagnostic tools
orapatch
BBED (block browser/editor) 
Undocumented parameters
_corrupted_rollback_segments, _allow_resetlogs_corruption  etc... 


No alternatives in the case of loss of SYSTEM tablespace datafile(s) 
The database must be in ‘reasonably’ good condition or else recovery is not possible (even with the undocumented parameters!) 
Patching is very ‘cumbersome’ and is not always guaranteed to work
Certain corruptions are beyond patching
Bottom line - loss of data!!


The most common problem is that the customer's backup strategy does not match their business needs. 
E.g. the customer takes weekly backups of the database, but in the event of a restore their business need is to be up and running within (say) 10 hours. This is not feasible, since the ‘rollforward’ of one week's worth of archive logs would (probably) take more than 10 hours!!


Building a cloned database, exporting data, and importing it into the recovery database.
Building a cloned database and using Transportable Tablespaces for recovery. 


DUL could be a possible solution
DUL (?) - Bernard says ‘Life is DUL without it!’
bottom line - salvage as much data as possible



DUL is intended to retrieve data that cannot be retrieved otherwise
It is NOT an alternative for restore/rollforward, EXP, SQL*Plus etc. 
It is meant to be a last resort, not for normal production usage
Note: There are logical consistency issues with the data retrieved


DUL should not be used where data can be salvaged using one of the supported mechanisms (restore/rollforward, exp/imp etc…)


Doesn’t need the database or the instance to be open
Does not use recovery, archive logs etc…
It doesn’t care about data consistency
more tolerant to data corruptions
Does not require the SYSTEM tablespace to recover


DUL is a utility that can unload data from “badly damaged” databases. 
DUL will scan a database file, recognize table header blocks, access extent information, and read all rows 
Creates a SQL*Loader or Export formatted output
matching SQL*Loader control file is also generated
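
For example, in SQL*Loader mode a single unload typically leaves behind a data file plus a matching control file named after the owner and table (the table name and row count below are invented for illustration; the file naming matches the examples later in this document):

DUL> unload table scott.dept;
. unloading table                      DEPT       4 rows unloaded

$ ls SCOTT_DEPT.*
SCOTT_DEPT.ctl  SCOTT_DEPT.dat

The generated .ctl file can then be fed straight to sqlldr once a target database is available.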




DUL version 3 (still in testing!) supports IMP loadable dump file.  More on DUL version 3 later...



Read the Oracle data dictionary if the SYSTEM tablespace files are available 
Analyze all rows to determine 
number of columns, column datatypes and column lengths


If the SYSTEM tablespace datafiles are not available DUL does its own analysis, more on this later...



DUL can handle all row types
normal rows, migrated rows, chained rows, multiple extents, clustered tables, etc. 
The utility runs completely unattended, minimal manual intervention is needed.
Cross platform unloading is supported



DUL can open other datafiles if there are extents in those datafiles.
Although DUL can handle it, LONG RAW presents a problem for SQL*Loader - we’ll talk about this shortly...

For cross platform unloading the configuration parameters within "init.dul" will have to be modified to match those of the original platform and O/S rather than the platform from which the unload is being done.
DUL unloads in the physical order of the columns. The cluster key columns are always unloaded first.
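
As an illustration of the cross-platform note above, the OSD parameters in init.dul describe the platform the datafiles came from, not the machine running DUL. A minimal sketch for unloading files copied from a big-endian source database onto a little-endian Linux host might look like this (all values are examples and must be matched to the source platform and database):

# init.dul fragment - describes the ORIGINAL platform of the datafiles
osd_big_endian_flag  = true      # source platform was big-endian (e.g. SPARC Solaris, AIX)
osd_dba_file_bits    = 6
osd_file_leader_size = 1
db_block_size        = 8192      # block size of the source database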


Recovers data directly from Oracle data files 
the Database (RDBMS) is bypassed 
Does dirty reads, it assumes that every transaction is committed
Does not check if media recovery has been done
DATABASE CORRUPT - BLOCKS OK 
Support for Locally Managed Tablespaces


DUL does not require that media recovery be done.
Since DUL reads the data directly from datafiles, it reads data that is committed as well as uncommitted. Therefore the data that is salvaged by DUL can potentially be logically corrupt. It is up to the DBA and/or the application programmers to validate the data.


The database can be copied from a different operating system than the DUL-host 
Supports all database constructs: 
row chaining, row migration, hash/index clusters, longs, raws, rowids, dates, numbers, multiple free list groups, segment high water mark, NULLS, trailing NULL columns etc...
DUL should work with all versions 6 , 7, 8 and 9
Enhanced to support 9i functionality. 


DUL has been tested with versions from 6.0.36 up to 7.2.2. The old block header layout (pre 6.0.27.2) also works! 


The main new features are: 
  Support for Oracle version 6, 7, 8 and 9 
  Support for Automatic Space Managed Segments 
  New bootstrap procedure: just use ‘bootstrap;’.   No more 
       dictv6.ddl, dictv7.ddl or dictv8.ddl files 
  LOBs are supported in SQL*Loader mode only 
  (Sub)Partitioned tables can be unloaded 
  Unload a single (Sub)Partition 
  Improved the SCAN TABLES command 
  Support for the timestamp and interval datatypes 
  Stricter checking of negative numbers 
  (Compressed) Index Organized Tables can be unloaded 
  Very strict checking of row status flags 
  Unload an index to see which rows you are missing 
  Objects, nested tables and varrays are not supported (internal 
        preparation for varray support) 


DUL has been tested with versions from 6.0.36 up to 9.0.1. The old block header layout (pre 6.0.27.2) also works! 
DUL 92 is mostly bug fixes:
The latest version is DUL92. The main new features are: 
     fix for problem with startup when db_block_size = 16K 
     fix for scan database and Automatic Space Managed Segments 
     fix for block type errors with high block types; new max is 51 
     Support for Automatic Space Managed Segments 
     phase zero of new unformat command 
     internal preparation for varray support 
     Bug fix in the stricter checking of negative numbers 
     Bug fix in the unloading of clustered tables 


The database can be corrupted, but each individual data block used must be 100% correct
blocks are checked to make sure that they are not corrupt and belong to the correct segment
DUL can and will only unload table/cluster data. 
it will not dump triggers, constraints, stored procedures nor create scripts for tables or views
But the data dictionary tables describing these can be unloaded


Note: If during an unload a bad block is encountered, an error message is printed in the loader file and to standard output. Unloading will continue with the next row or block. 


MLSLABELS (trusted oracle) are not supported
No special support for multi byte character sets
DUL can unload (LONG) RAWs, but there is no way to reload these 1-to-1 with SQL*Loader
SQL*Loader cannot be used to load  LONG RAW data.



DUL can unload (long) raws, but there is no way to reload these 1-to-1 with SQL*Loader. There is no suitable format in SQL*Loader
to preserve all long raws. Use the export mode instead or write a Pro*C program to load the data.
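
In practice that means switching DUL into export mode in init.dul before unloading such tables, e.g. (the same parameter appears in the sample configuration later in this document):

export_mode = true    # write Export/IMP style dump files instead of SQL*Loader files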



DUL and large files (files > 2GB) 
Starting from DUL version 8.0.6.7 DUL will report whether it can do 32-bit I/O (no large file support) or 64-bit I/O with large file support.
DUL support for raw devices
DUL will work on raw devices. But DUL is not raw device aware.


Raw Devices:
On some platforms we skip the first part of the raw device. DUL does not automatically skip this extra part. The easiest way to configure DUL in this case is the optional extra offset in the control file. The extra offsets that I am aware of are 4K on AIX raw devices and 64K for Dec Unix. 
DUL does not use the size as stored in the file header. So DUL will read the whole raw device including the unused part at the end.


There are two configuration files for DUL
init.dul
control.dul
Configuration parameters are platform specific.


If you do decide that DUL is the only way to go, then here is how to go about configuring and using DUL.  Good Luck!!


Contains parameters to help DUL understand the format of the database files 
Has information on  
DUL cache size
Details of header layout
Oracle block size
Output file format
SQL*Loader format and record size. 
etc...


Sample init.dul file for Solaris looks like:
# The dul cache must be big enough to hold all entries from the Dictionary dollar tables.
dc_columns = 200000
dc_tables = 20000
dc_objects = 20000
dc_users = 40
# OS specific parameters
big_endian_flag = true
dba_file_bits = 6
align_filler = 1
db_leading_offset = 1
# Database specific parameters
db_block_size = 2048
# Sql*Loader format parameters
ldr_enclose_char = "
ldr_phys_rec_size = 81


Used to translate the file numbers to file names
Each entry on a separate line, first the file_number then the data_file_name
A third optional field is an extra positive or negative byte offset, that will be added to all fseek() operations for that datafile.


This optional field makes it possible to skip over the extra block for AIX on raw devices or to unload from fragments of a datafile.

The control file would look like : 
  1  /u04/bugmnt/tar9569610.6/gs/sysgs.dbf                                
  2  /u04/bugmnt/tar9569610.6/gs/rbs.dbf                                  
  3  /u04/bugmnt/tar9569610.6/gs/user.dbf         
  4  /u04/bugmnt/tar9569610.6/gs/index.dbf                   
  5  /u04/bugmnt/tar9569610.6/gs/test.dbf
When the database is up and running v$dbfile contains the above information.
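
When the database (or a clone of it) can still be mounted, the same list can be produced with a simple query, e.g.:

SQL> select file#, name from v$dbfile order by file#;

A ready-made SQL*Plus script that spools this kind of information into control.dul appears later in this document.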



# sample init.dul configuration parameters
# these must be big enough for the database in question
# the cache must hold all entries from the dollar tables.
dc_columns = 200000
dc_tables = 10000
dc_objects = 10000
dc_users = 40

# OS specific parameters
osd_big_endian_flag = false
osd_dba_file_bits = 6
osd_c_struct_alignment = 32
osd_file_leader_size = 1

# database parameters
db_block_size = 8192

# loader format definitions
LDR_ENCLOSE_CHAR = "
LDR_PHYS_REC_SIZE = 81

#ADD PARAMETERS
export_mode=true  # still needed with dul9
compatible=9


# AIX version 7 example with one file on raw device
   1 /usr/oracle/dbs/system.dbf
   8 /dev/rdsk/data.dbf 4096

   # Oracle8 example with a datafile split in multiple parts, each part smaller than 2GB
   0  1 /fs1/oradata/PMS/system.dbf
   1  2 /tmp/huge_file_part1 startblock 1 endblock 1000000
   1  2 /tmp/huge_file_part2 startblock 1000001 endblock 2000000
   1  2 /mnt3/huge_file_part3 startblock 2000001 endblock 2550000



Case 1: Data dictionary usable


Case 1:  
SYSTEM tablespace available
Case 2:  
Using DUL without the SYSTEM tablespace


Straightforward method
Execute ‘dul’ from the OS prompt, then ‘bootstrap’ from DUL (a short session sketch follows below)
Don’t need to know about the application tables structure, column types etc...
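
A minimal interactive session for this straightforward case might look like the following (the table name is only an example):

$ ./dul
DUL> bootstrap;
DUL> unload table scott.emp;

The bootstrap; command reads BOOTSTRAP$ and then the dictionary tables (OBJ$, TAB$, COL$, USER$, ...) from the SYSTEM datafile, after which tables can be unloaded simply by owner and name.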


DUL> unload table hr.emp_trunc;

DUL: Error: No entry in OBJ$ for "EMP_TRUNC" type = 2
DUL: Error: Could not resolve object id
DUL: Error: Missing dictionary information, cannot unload table
DUL> scan database;

Case 2: Without the SYSTEM tablespace 

Needs in-depth knowledge about the application and the application tables
The unloaded data does not have any value if you do not know which table it came from
Column types can be guessed by DUL but table and column names are lost
The guessed column types can be wrong


Note: 
1) Any old SYSTEM tablespace from the same database, even weeks old, can be of great help!
2) If you recreate the tables (from the original CREATE TABLE scripts) then the structural information of a "lost" table can be matched to the "seen" tables scanned with two SQL*Plus scripts. (fill.sql and getlost.sql)

Steps to follow: 
1. Configure DUL for the target database. This means creating a correct init.dul and control.dul. 
2. SCAN DATABASE; : scan the database for extents and segments. 
3. SCAN TABLES; : scan the found segments for rows. 
4. SCAN EXTENTS; : scan the found extents. 
5. Identify the lost tables from the output of step 3. 
6. UNLOAD the identified tables. 
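
Expressed as DUL commands, steps 2 to 4 and 6 correspond roughly to the sketch below (the table name, column list and dataobjno are placeholders; the real values come from the SCAN TABLES / SCAN EXTENTS output):

DUL> alter session set use_scanned_extent_map = true;
DUL> scan database;
DUL> scan tables;
DUL> unload table unknown_tab ( col1 number, col2 varchar2(30) )
     storage (dataobjno 12345);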



DUL will not find “last” columns that only contain NULLs
Trailing NULL columns are not stored in the database
Tables that have been dropped can be seen
When a table is dropped, the description is removed from the data dictionary only
Tables without rows will go unnoticed


During startup DUL goes through the following steps: 
the parameter file "init.dul" is processed
the DUL control file (default "control.dul") is scanned
try to load dumps of the USER$, OBJ$, TAB$ and COL$ if available into DUL's data dictionary cache
try to load seg.dat and col.dat. 
accept DDL-statements or run the DDL script specified as first argument
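
In other words DUL can be used interactively or be pointed at a script on the command line, e.g. (the script name is hypothetical):

$ ./dul                  # interactive, gives the DUL> prompt
$ ./dul unload_all.ddl   # runs the DDL statements in unload_all.ddl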



DUL version 3, 8, 9 and 92 are available. 

http://www-sup.nl.oracle.com/dul/index.html

 executables, user’s and configuration guide
Available on most common platforms
Solaris
AIX
NT
HP etc...


DUL version 9 is currently available on: 
aix
alphavms62
att3000
dcosx
hp.tar.bin
osf1
rm4000.tar.bin   
sco
sequen
sunos
sunsol2
vaxvms55
vaxvms61
win95
winnt 

DUL with Dictionary


 Configure init.dul and control.dul
 Load DUL
 Bootstrap
 Unload database, user, table
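
The last step can be done at several granularities, for example (schema and table names are illustrative only):

DUL> unload database;
DUL> unload user scott;
DUL> unload table scott.emp;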


DUL without Dictionary


 Configure init.dul and control.dul (control will include
   the datafiles needing to be recovered only).
 Load DUL
 alter session set use_scanned_extent_map = true
 scan database
 scan tables
 Using the found table definitions, construct an unload 
   statement:
unload table dul2.emp ( EMPLOYEE_ID number(22), FIRST_NAME varchar2(20),
    LAST_NAME varchar2(25), EMAIL varchar2(25), PHONE_NUMBER varchar2(20),
    HIRE_DATE date, JOB_ID varchar2(10), SALARY number(22),
    COMMISSION_PCT number(22), MANAGER_ID number(22),
    DEPARTMENT_ID number(22) )
storage (dataobjno 28200);





 


A case where ORA-00600[3705] prevents the database from being OPENed

 

If you cannot handle it yourself, the ParnassusData professional ORACLE database recovery team can help you recover!

ParnassusData professional database recovery team

Hotline: 13764045638 QQ: 47079569 Email: service@parnassusdata.com

 

 

Consider this note when the error ORA-00600: internal error code, arguments: [3705], [1], [1], [1], [1] is raised and the Oracle database cannot be opened.

The related error messages may look like the following:

ksedmp: internal or fatal error
ORA-00345: redo log write error block 2798 count 2
ORA-00312: online log 2 thread 1: 'J:\MCS_REDO\REDO02.LOG'
ORA-27072: skgfdisp: I/O error
OSD-04008: WriteFile() failure, unable to write to file
O/S-Error: (OS 21) The device is not ready.

The call stack of the error may look like this:
ksedmp ksfdmp kgeriv kgesiv ksesic4 kctopn kcttha ksbabs ksbrdp

The checkpoint SCNs of the relevant datafiles are all consistent, and the database was shut down cleanly.

This problem may be caused by Bug 3397131
Abstract: CONTROL FILE / REDO FLAG MISMATCH ORA-600[3705]

The root cause is a problem at the OS level rather than inside Oracle. Each time a control file transaction updates the tail of the control file, Oracle updates the SEQ# in the control file; this SEQ# is also recorded in the current redo logfile. The next time Oracle reads the control file it validates the SEQ# stored there. The error ORA-00600[3705] means that the SEQ# read from the control file was found to be stale.

 

A relatively simple way to resolve this problem is to recreate the control file.
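
A rough sketch of that approach, assuming a CREATE CONTROLFILE script is available (for example from an earlier ALTER DATABASE BACKUP CONTROLFILE TO TRACE, or written by hand from the known datafile and redo log names), could look like this; the database name, file names, sizes and character set below are placeholders and must of course match the actual database:

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "MCS" NORESETLOGS ARCHIVELOG
LOGFILE
  GROUP 1 '...' SIZE 50M,
  GROUP 2 'J:\MCS_REDO\REDO02.LOG' SIZE 50M
DATAFILE
  '...',
  '...'
CHARACTER SET ...;
ALTER DATABASE OPEN;

Since the datafile checkpoints are consistent and the shutdown was clean, NORESETLOGS with the existing online redo logs is normally enough; if the online logs are unusable, RESETLOGS together with ALTER DATABASE OPEN RESETLOGS would be needed instead.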

Getting started with Oracle DUL

init.dul


osd_big_endian_flag=false                        ==> this parameter specifies the endianness
osd_dba_file_bits=10
osd_c_struct_alignment=32
osd_file_leader_size=1
osd_word_size=32


dc_columns=200000
dc_tables=10000
dc_objects=10000000

DC_USERS=40000
dc_segments=100000



control_file=control.dul



db_block_size=8192


compatible=10

LDR_ENCLOSE_CHAR = |                      ==> specifies the ENCLOSE character used in the unloaded table data
LDR_PHYS_REC_SIZE = 81


control.dul can be generated with the following script:


    sqlplus /nolog
    connect / as sysdba
    startup mount
    set trimspool on pagesize 0 linesize 256 feedback off
    column name format a200
    spool control.dul
    select ts#, rfile#, name from v$datafile;
    exit

For example:

         0          1 /s01/oradata/G10R25/datafile/o1_mf_system_8nx5srds_.dbf
         1          2 /s01/oradata/G10R25/datafile/o1_mf_undotbs1_8nx5srg3_.dbf
         2          3 /s01/oradata/G10R25/datafile/o1_mf_sysaux_8nx5srdx_.dbf
         4          4 /s01/oradata/G10R25/datafile/o1_mf_users_8nx5srgb_.dbf
         6          5 /s01/oradata/G10R25/datafile/o1_mf_example_8nx5tqoy_.dbf
         7          6 /s01/oradata/G10R25/datafile/o1_mf_tbs5_8nx7pwqp_.dbf
         8          7 /s01/oradata/G10R25/datafile/o1_mf_mac_tb1_8ot5bdph_.dbf
         8          8 /s01/oradata/G10R25/datafile/o1_mf_mac_tb1_8ot5c081_.dbf
         8          9 /s01/oradata/G10R25/datafile/o1_mf_mac_tb1_8ot5cto7_.dbf
         8         10 /s01/oradata/G10R25/datafile/o1_mf_mac_tb1_8ot5ctpx_.dbf
         8         11 /s01/oradata/G10R25/datafile/o1_mf_mac_tb1_8ot5ctss_.dbf
         8         12 /s01/oradata/G10R25/datafile/o1_mf_mac_tb1_8ot5ctvl_.dbf
         8         13 /s01/oradata/G10R25/datafile/o1_mf_mac_tb1_8ot5ctwz_.dbf


[oracle@vrh8 dul_dir]$ ./dul 

Data UnLoader: 10.2.0.5.22 - Internal Only - on Sat Jul 20 07:54:48 2013
with 64-bit io functions

Copyright (c) 1994 2013 Bernard van Duijnen All rights reserved.

 Strictly Oracle Internal Use Only


DUL: Warning: Recreating file "dul.log"
Found db_id = 2696593743
Found db_name = G10R25


Found db_name = G10R25
DUL> bootstrap
  2  ;
Probing file = 1, block = 377
. unloading table                BOOTSTRAP$
DUL: Warning: block number is non zero but marked deferred trying to process it anyhow
      57 rows unloaded
DUL: Warning: Dictionary cache DC_BOOTSTRAP is empty
Reading BOOTSTRAP.dat 57 entries loaded
Parsing Bootstrap$ contents
Generating dict.ddl for version 10
 OBJ$: segobjno 18, file 1 block 121
 TAB$: segobjno 2, tabno 1, file 1  block 25
 COL$: segobjno 2, tabno 5, file 1  block 25
 USER$: segobjno 10, tabno 1, file 1  block 89
Running generated file "@dict.ddl" to unload the dictionary tables
. unloading table                      OBJ$   78608 rows unloaded
. unloading table                      TAB$    1672 rows unloaded
. unloading table                      COL$   57314 rows unloaded
. unloading table                     USER$      67 rows unloaded
Reading USER.dat 67 entries loaded
Reading OBJ.dat 78608 entries loaded and sorted 78608 entries
Reading TAB.dat 1672 entries loaded
Reading COL.dat 57314 entries loaded and sorted 57314 entries
Reading BOOTSTRAP.dat 57 entries loaded

DUL: Warning: Recreating file "dict.ddl"
Generating dict.ddl for version 10
 OBJ$: segobjno 18, file 1 block 121
 TAB$: segobjno 2, tabno 1, file 1  block 25
 COL$: segobjno 2, tabno 5, file 1  block 25
 USER$: segobjno 10, tabno 1, file 1  block 89
 TABPART$: segobjno 266, file 1 block 2121
 INDPART$: segobjno 271, file 1 block 2161
 TABCOMPART$: segobjno 288, file 1 block 2297
 INDCOMPART$: segobjno 293, file 1 block 2345
 TABSUBPART$: segobjno 278, file 1 block 2217
 INDSUBPART$: segobjno 283, file 1 block 2257
 IND$: segobjno 2, tabno 3, file 1  block 25
 ICOL$: segobjno 2, tabno 4, file 1  block 25
 LOB$: segobjno 2, tabno 6, file 1  block 25
 COLTYPE$: segobjno 2, tabno 7, file 1  block 25
 TYPE$: segobjno 181, tabno 1, file 1  block 1297
 COLLECTION$: segobjno 181, tabno 2, file 1  block 1297
 ATTRIBUTE$: segobjno 181, tabno 3, file 1  block 1297
 LOBFRAG$: segobjno 299, file 1 block 2393
 LOBCOMPPART$: segobjno 302, file 1 block 2425
 UNDO$: segobjno 15, file 1 block 105
 TS$: segobjno 6, tabno 2, file 1  block 57
 PROPS$: segobjno 96, file 1 block 721
Running generated file "@dict.ddl" to unload the dictionary tables
. unloading table                      OBJ$
DUL: Warning: Recreating file "OBJ.ctl"
   78608 rows unloaded
. unloading table                      TAB$
DUL: Warning: Recreating file "TAB.ctl"
    1672 rows unloaded
. unloading table                      COL$
DUL: Warning: Recreating file "COL.ctl"
   57314 rows unloaded
. unloading table                     USER$
DUL: Warning: Recreating file "USER.ctl"
      67 rows unloaded
. unloading table                  TABPART$     291 rows unloaded
. unloading table                  INDPART$     413 rows unloaded
. unloading table               TABCOMPART$      12 rows unloaded
. unloading table               INDCOMPART$      60 rows unloaded
. unloading table               TABSUBPART$    4392 rows unloaded
. unloading table               INDSUBPART$   21960 rows unloaded
. unloading table                      IND$    2399 rows unloaded
. unloading table                     ICOL$    3847 rows unloaded
. unloading table                      LOB$     577 rows unloaded
. unloading table                  COLTYPE$    1795 rows unloaded
. unloading table                     TYPE$    1990 rows unloaded
. unloading table               COLLECTION$     568 rows unloaded
. unloading table                ATTRIBUTE$    7414 rows unloaded
. unloading table                  LOBFRAG$       1 row  unloaded
. unloading table              LOBCOMPPART$       0 rows unloaded
. unloading table                     UNDO$      21 rows unloaded
. unloading table                       TS$      13 rows unloaded
. unloading table                    PROPS$      29 rows unloaded
Reading USER.dat 67 entries loaded
Reading OBJ.dat 78608 entries loaded and sorted 78608 entries
Reading TAB.dat 1672 entries loaded
Reading COL.dat 57314 entries loaded and sorted 57314 entries
Reading TABPART.dat 291 entries loaded and sorted 291 entries
Reading TABCOMPART.dat 12 entries loaded and sorted 12 entries
Reading TABSUBPART.dat 4392 entries loaded and sorted 4392 entries
Reading INDPART.dat 413 entries loaded and sorted 413 entries
Reading INDCOMPART.dat 60 entries loaded and sorted 60 entries
Reading INDSUBPART.dat 21960 entries loaded and sorted 21960 entries
Reading IND.dat 2399 entries loaded
Reading LOB.dat 577 entries loaded
Reading ICOL.dat 3847 entries loaded
Reading COLTYPE.dat 1795 entries loaded
Reading TYPE.dat 1990 entries loaded
Reading ATTRIBUTE.dat 7414 entries loaded
Reading COLLECTION.dat 568 entries loaded
Reading BOOTSTRAP.dat 57 entries loaded
Reading LOBFRAG.dat 1 entries loaded and sorted 1 entries
Reading LOBCOMPPART.dat 0 entries loaded and sorted 0 entries
Reading UNDO.dat 21 entries loaded
Reading TS.dat 13 entries loaded
Reading PROPS.dat 29 entries loaded
Database character set is AL32UTF8
Database national character set is AL16UTF16


PROPS.dat ==> props$ stores the character set properties
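
Since props$ holds the NLS properties, the character set reported above can also be cross-checked directly against the unloaded file (the exact row layout depends on the loader settings), e.g.:

[oracle@vrh8 dul_dir]$ grep NLS_CHARACTERSET PROPS.dat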




[oracle@vrh8 dul_dir]$ ls -ltr
total 12060
-rwxr-xr-x 1 oracle oinstall 1098504 Jul 20 07:53 dul
-rw-r--r-- 1 oracle oinstall     318 Jul 20 07:53 init.dul
-rw-r--r-- 1 oracle oinstall    1021 Jul 20 07:54 control.dul
-rw-r--r-- 1 oracle oinstall    1185 Jul 20 07:54 USER.dat
-rw-r--r-- 1 oracle oinstall     252 Jul 20 07:54 USER.ctl
-rw-r--r-- 1 oracle oinstall     717 Jul 20 07:54 UNDO.dat
-rw-r--r-- 1 oracle oinstall     532 Jul 20 07:54 UNDO.ctl
-rw-r--r-- 1 oracle oinstall  159360 Jul 20 07:54 TYPE.dat
-rw-r--r-- 1 oracle oinstall     392 Jul 20 07:54 TYPE.ctl
-rw-r--r-- 1 oracle oinstall     269 Jul 20 07:54 TS.dat
-rw-r--r-- 1 oracle oinstall     318 Jul 20 07:54 TS.ctl
-rw-r--r-- 1 oracle oinstall  226175 Jul 20 07:54 TABSUBPART.dat
-rw-r--r-- 1 oracle oinstall     684 Jul 20 07:54 TABSUBPART.ctl
-rw-r--r-- 1 oracle oinstall   14577 Jul 20 07:54 TABPART.dat
-rw-r--r-- 1 oracle oinstall     678 Jul 20 07:54 TABPART.ctl
-rw-r--r-- 1 oracle oinstall   97023 Jul 20 07:54 TAB.dat
-rw-r--r-- 1 oracle oinstall     880 Jul 20 07:54 TAB.ctl
-rw-r--r-- 1 oracle oinstall     255 Jul 20 07:54 TABCOMPART.dat
-rw-r--r-- 1 oracle oinstall     334 Jul 20 07:54 TABCOMPART.ctl
-rw-r--r-- 1 oracle oinstall     907 Jul 20 07:54 PROPS.dat
-rw-r--r-- 1 oracle oinstall     254 Jul 20 07:54 PROPS.ctl
-rw-r--r-- 1 oracle oinstall 4519576 Jul 20 07:54 OBJ.dat
-rw-r--r-- 1 oracle oinstall     600 Jul 20 07:54 OBJ.ctl
-rw-r--r-- 1 oracle oinstall      38 Jul 20 07:54 LOBFRAG.dat
-rw-r--r-- 1 oracle oinstall     608 Jul 20 07:54 LOBFRAG.ctl
-rw-r--r-- 1 oracle oinstall   32487 Jul 20 07:54 LOB.dat
-rw-r--r-- 1 oracle oinstall     810 Jul 20 07:54 LOB.ctl
-rw-r--r-- 1 oracle oinstall       0 Jul 20 07:54 LOBCOMPPART.dat
-rw-r--r-- 1 oracle oinstall     336 Jul 20 07:54 LOBCOMPPART.ctl
-rw-r--r-- 1 oracle oinstall 1149629 Jul 20 07:54 INDSUBPART.dat
-rw-r--r-- 1 oracle oinstall     684 Jul 20 07:54 INDSUBPART.ctl
-rw-r--r-- 1 oracle oinstall   20616 Jul 20 07:54 INDPART.dat
-rw-r--r-- 1 oracle oinstall     678 Jul 20 07:54 INDPART.ctl
-rw-r--r-- 1 oracle oinstall  135890 Jul 20 07:54 IND.dat
-rw-r--r-- 1 oracle oinstall     810 Jul 20 07:54 IND.ctl
-rw-r--r-- 1 oracle oinstall    1275 Jul 20 07:54 INDCOMPART.dat
-rw-r--r-- 1 oracle oinstall     334 Jul 20 07:54 INDCOMPART.ctl
-rw-r--r-- 1 oracle oinstall   86622 Jul 20 07:54 ICOL.dat
-rw-r--r-- 1 oracle oinstall     392 Jul 20 07:54 ICOL.ctl
-rw-r--r-- 1 oracle oinstall    6400 Jul 20 07:54 dict.ddl
-rw-r--r-- 1 oracle oinstall  114017 Jul 20 07:54 COLTYPE.dat
-rw-r--r-- 1 oracle oinstall     608 Jul 20 07:54 COLTYPE.ctl
-rw-r--r-- 1 oracle oinstall   72235 Jul 20 07:54 COLLECTION.dat
-rw-r--r-- 1 oracle oinstall     754 Jul 20 07:54 COLLECTION.ctl
-rw-r--r-- 1 oracle oinstall 3579150 Jul 20 07:54 COL.dat
-rw-r--r-- 1 oracle oinstall     950 Jul 20 07:54 COL.ctl
-rw-r--r-- 1 oracle oinstall   18145 Jul 20 07:54 BOOTSTRAP.dat
-rw-r--r-- 1 oracle oinstall     332 Jul 20 07:54 BOOTSTRAP.ctl
-rw-r--r-- 1 oracle oinstall  779009 Jul 20 07:54 ATTRIBUTE.dat
-rw-r--r-- 1 oracle oinstall     752 Jul 20 07:54 ATTRIBUTE.ctl
-rw-r--r-- 1 oracle oinstall   16018 Jul 20 07:56 dul.log



[oracle@vrh8 dul_dir]$ vi dict.ddl

REM DDL Script to unload the dictionary cache for DUL

REM force the settings to get the expected DUL self readable format
alter session set profile DUL_READABLE_FORMAT;

unload table OBJ$( OBJ# number, DATAOBJ# number, OWNER# number,
    NAME clean varchar2(30), NAMESPACE ignore, SUBNAME varchar2(30),
    TYPE# number, CTIME ignore, MTIME ignore, STIME ignore,
    STATUS ignore, REMOTEOWNER ignore, LINKNAME ignore,
    FLAGS ignore, OID$ hexraw)
    storage ( tablespace 0 segobjno 18 file 1 block 121);

unload table TAB$( OBJ# number, DATAOBJ# number,
    TS# number, FILE# number, BLOCK# number,
    BOBJ# number, TAB# number, COLS number, CLUCOLS number,
    PCTFREE$ ignore, PCTUSED$ ignore, INITRANS ignore, MAXTRANS ignore,
    FLAGS ignore, AUDIT$ ignore, ROWCNT ignore, BLKCNT ignore,
    EMPCNT ignore, AVGSPC ignore, CHNCNT ignore, AVGRLN ignore,
    AVGSPC_FLB ignore, FLBCNT ignore,
    ANALYZETIME ignore, SAMPLESIZE ignore,
    DEGREE ignore, INSTANCES ignore,
    INTCOLS ignore, KERNELCOLS number, PROPERTY number)
    cluster  C_OBJ#(OBJ#)
    storage ( tablespace 0 segobjno 2 tabno 1 file 1 block 25);

unload table COL$ ( OBJ# number, COL# number , SEGCOL# number,
    SEGCOLLENGTH ignore, OFFSET ignore, NAME char(30),
    TYPE# number, LENGTH number, FIXEDSTORAGE ignore,
    PRECISION# number, SCALE number, NULL$ ignore, DEFLENGTH ignore,
    DEFAULT$ ignore, INTCOL# number, PROPERTY number,
    CHARSETID number, CHARSETFORM number)
    cluster C_OBJ#(OBJ#)
    storage ( tablespace 0 segobjno 2 tabno 5 file 1 block 25);

[oracle@vrh8 dul_dir]$ 
[oracle@vrh8 dul_dir]$ cat dict.ddl
REM DDL Script to unload the dictionary cache for DUL

REM force the settings to get the expected DUL self readable format
alter session set profile DUL_READABLE_FORMAT;

unload table OBJ$( OBJ# number, DATAOBJ# number, OWNER# number,
    NAME clean varchar2(30), NAMESPACE ignore, SUBNAME varchar2(30),
    TYPE# number, CTIME ignore, MTIME ignore, STIME ignore,
    STATUS ignore, REMOTEOWNER ignore, LINKNAME ignore,
    FLAGS ignore, OID$ hexraw)
    storage ( tablespace 0 segobjno 18 file 1 block 121);

unload table TAB$( OBJ# number, DATAOBJ# number,
    TS# number, FILE# number, BLOCK# number,
    BOBJ# number, TAB# number, COLS number, CLUCOLS number,
    PCTFREE$ ignore, PCTUSED$ ignore, INITRANS ignore, MAXTRANS ignore,
    FLAGS ignore, AUDIT$ ignore, ROWCNT ignore, BLKCNT ignore,
    EMPCNT ignore, AVGSPC ignore, CHNCNT ignore, AVGRLN ignore,
    AVGSPC_FLB ignore, FLBCNT ignore,
    ANALYZETIME ignore, SAMPLESIZE ignore,
    DEGREE ignore, INSTANCES ignore,
    INTCOLS ignore, KERNELCOLS number, PROPERTY number)
    cluster  C_OBJ#(OBJ#)
    storage ( tablespace 0 segobjno 2 tabno 1 file 1 block 25);

unload table COL$ ( OBJ# number, COL# number , SEGCOL# number,
    SEGCOLLENGTH ignore, OFFSET ignore, NAME char(30),
    TYPE# number, LENGTH number, FIXEDSTORAGE ignore,
    PRECISION# number, SCALE number, NULL$ ignore, DEFLENGTH ignore,
    DEFAULT$ ignore, INTCOL# number, PROPERTY number,
    CHARSETID number, CHARSETFORM number)
    cluster C_OBJ#(OBJ#)
    storage ( tablespace 0 segobjno 2 tabno 5 file 1 block 25);

unload table USER$( USER# number, NAME varchar2(30))
    cluster C_USER#(USER#)
    storage ( tablespace 0 segobjno 10 tabno 1 file 1 block 89);

unload table TABPART$( OBJ# number, DATAOBJ# number, BO# number,
    PART# number, HIBOUNDLEN ignore, SPARE3 ignore,
    TS# number, FILE# number, BLOCK# number,
    PCTFREE$ ignore, PCTUSED$ ignore,
    INITRANS ignore, MAXTRANS ignore,
    FLAGS number)
    storage ( tablespace 0 segobjno 266 file 1 block 2121);

unload table INDPART$( OBJ# number, DATAOBJ# number, BO# number,
   PART# number, HIBOUNDLEN ignore, HIBOUNDVAL ignore, FLAGS number,
   TS# number, FILE# number, BLOCK# number)
    storage ( tablespace 0 segobjno 271 file 1 block 2161);

unload table TABCOMPART$( OBJ# number, DATAOBJ# ignore, BO# number,
    PART# number)
    storage ( tablespace 0 segobjno 288 file 1 block 2297);

unload table INDCOMPART$( OBJ# number, DATAOBJ# ignore, BO# number,
    PART# number)
    storage ( tablespace 0 segobjno 293 file 1 block 2345);

unload table TABSUBPART$( OBJ# number, DATAOBJ# number,
    POBJ# number, SUBPART# number, FLAGS number,
    TS# number, FILE# number, BLOCK# number)
    storage ( tablespace 0 segobjno 278 file 1 block 2217);

unload table INDSUBPART$( OBJ# number, DATAOBJ# number,
    POBJ# number, SUBPART# number, FLAGS number,
    TS# number, FILE# number, BLOCK# number)
    storage ( tablespace 0 segobjno 283 file 1 block 2257);

unload table IND$( BO# number, OBJ# number,
    DATAOBJ# number, TS# number, FILE# number, BLOCK# number,
    INDMETHOD# ignore, COLS number, PCTFREE$ ignore,
    INITRANS ignore, MAXTRANS ignore, PCTTHRESH$ ignore,
    TYPE# number, FLAGS number, PROPERTY number)
    cluster  C_OBJ#(BO#)
    storage ( tablespace 0 segobjno 2 tabno 3 file 1 block 25);

unload table ICOL$( BO# number, OBJ# number, COL# number, POS# number)
    cluster  C_OBJ#(BO#)
    storage ( tablespace 0 segobjno 2 tabno 4 file 1 block 25);

unload table LOB$( OBJ# number, COL# number, INTCOL# number,
    lobj# number, part# ignore, ind# number,
    ts# number, file# number, block# number, chunk number,
    pctversion$ ignore, flags ignore, property number)
    cluster  C_OBJ#(OBJ#)
    storage ( tablespace 0 segobjno 2 tabno 6 file 1 block 25);

unload table COLTYPE$( OBJ# number, COL# number, INTCOL# number,
    toid hexraw, version# ignore, packed number, intcols number,
    intcols#s ignore, flags number)
    cluster  C_OBJ#(OBJ#)
    storage ( tablespace 0 segobjno 2 tabno 7 file 1 block 25);

unload table TYPE$( TOID hexraw, VERSION# ignore,
    VERSION ignore, TVOID hexraw,
    TYPECODE number, PROPERTIES ignore, ATTRIBUTES number)
    cluster  C_TOID_VERSION#( TOID, VERSION#)
    storage ( tablespace 0 segobjno 181 tabno 1 file 1 block 1297);

unload table COLLECTION$( TOID hexraw, VERSION# ignore,
    COLL_TOID hexraw, COLL_VERSION# ignore,
    ELEM_TOID hexraw, ELEM_VERSION# ignore,
    SYNOBJ# ignore, PROPERTIES number,
    CHARSETID number, CHARSETFORM ignore,
    LENGTH number, PRECISION# number, SCALE number,
    UPPER_BOUND number)
    cluster  C_TOID_VERSION#( TOID, VERSION#)
    storage ( tablespace 0 segobjno 181 tabno 2 file 1 block 1297);

unload table ATTRIBUTE$( TOID hexraw, VERSION# ignore,
    NAME clean varchar2(30), 
    ATTRIBUTE# number, ATTR_TOID hexraw, ATTR_VERSION# ignore,
    SYNOBJ# ignore, PROPERTIES number, 
    CHARSETID number, CHARSETFORM ignore,
    LENGTH number, PRECISION# number, SCALE number) 
    cluster  C_TOID_VERSION#( TOID, VERSION#)
    storage ( tablespace 0 segobjno 181 tabno 3 file 1 block 1297);

unload table LOBFRAG$( fragobj# number, parentobj# number,
    tabfragobj# ignore, indfragobj# ignore, frag# number,
    fragtype$ ignore, ts# number, file# number, block# number,
    chunk number)
    storage ( tablespace 0 segobjno 299 file 1 block 2393);

unload table LOBCOMPPART$( partobj# number not null,
    lobj# number not null,
    tabpartobj# ignore, indpartobj# ignore,
    part# number not null)
    storage ( tablespace 0 segobjno 302 file 1 block 2425);

unload table UNDO$( US# number, NAME clean varchar2(30),
    USER# ignore, FILE# number, BLOCK# number,
    SCNBAS ignore, SCNWRP ignore,
    XACTSQN ignore, UNDOSQN ignore, INST# ignore,
    STATUS$ number, TS# number)
    storage ( tablespace 0 segobjno 15 file 1 block 105);

unload table TS$( TS# number, NAME clean varchar2(30),
    OWNER# ignore, ONLINE$ ignore, CONTENTS$ ignore,
    UNDOFILE# ignore, UNDOBLOCK# ignore,
    BLOCKSIZE number)
    cluster  C_TS#( TS#)
    storage ( tablespace 0 segobjno 6 tabno 2 file 1 block 57);

unload table PROPS$( NAME clean varchar2(30),
    VALUE$ clean varchar2(4000), COMMENT$ ignore)
    storage ( tablespace 0 segobjno 96 file 1 block 721);


REM restore the user settings
alter session set profile USER;
REM load the files into the cache
reload;














[oracle@vrh8 dul_dir]$ cat COL.ctl 
load data
CHARACTERSET US7ASCII
infile 'COL.dat'
insert
into table "COL$"
fields terminated by whitespace
(
  "OBJ#"                             CHAR(5) enclosed by X'22'       
 ,"COL#"                             CHAR(3) enclosed by X'22'       
 ,"SEGCOL#"                          CHAR(3) enclosed by X'22'       
 ,"NAME"                             CHAR(30) enclosed by X'22'      
 ,"TYPE#"                            CHAR(3) enclosed by X'22'       
 ,"LENGTH"                           CHAR(5) enclosed by X'22'       
 ,"PRECISION#"                       CHAR(3) enclosed by X'22'       
 ,"SCALE"                            CHAR(2) enclosed by X'22'       
 ,"INTCOL#"                          CHAR(3) enclosed by X'22'       
 ,"PROPERTY"                         CHAR(8) enclosed by X'22'       
 ,"CHARSETID"                        CHAR(4) enclosed by X'22'       
 ,"CHARSETFORM"                      CHAR(1) enclosed by X'22'       
)









[oracle@vrh8 dul_dir]$ head -20 COL.dat
"20" "1" "2" "OBJ#" "2" "22" "" "" "1" "0" "0" "0"
"20" "2" "1" "BO#" "2" "22" "" "" "2" "0" "0" "0"
"20" "3" "3" "COL#" "2" "22" "" "" "3" "0" "0" "0"
"20" "4" "4" "POS#" "2" "22" "" "" "4" "0" "0" "0"
"20" "5" "5" "SEGCOL#" "2" "22" "" "" "5" "0" "0" "0"
"20" "6" "6" "SEGCOLLENGTH" "2" "22" "" "" "6" "0" "0" "0"
"20" "7" "7" "OFFSET" "2" "22" "" "" "7" "0" "0" "0"
"20" "8" "8" "INTCOL#" "2" "22" "" "" "8" "0" "0" "0"
"20" "9" "9" "SPARE1" "2" "22" "" "" "9" "0" "0" "0"
"20" "10" "10" "SPARE2" "2" "22" "" "" "10" "0" "0" "0"
"20" "11" "11" "SPARE3" "2" "22" "" "" "11" "0" "0" "0"
"20" "12" "12" "SPARE4" "1" "1000" "" "" "12" "0" "873" "1"
"20" "13" "13" "SPARE5" "1" "1000" "" "" "13" "0" "873" "1"
"20" "14" "14" "SPARE6" "12" "7" "" "" "14" "0" "0" "0"
"28" "1" "1" "OWNER#" "2" "22" "" "" "1" "0" "0" "0"
"28" "2" "2" "NAME" "1" "30" "" "" "2" "0" "873" "1"
"28" "3" "3" "CON#" "2" "22" "" "" "3" "0" "0" "0"
"28" "4" "4" "SPARE1" "2" "22" "" "" "4" "0" "0" "0"
"28" "5" "5" "SPARE2" "2" "22" "" "" "5" "0" "0" "0"
"28" "6" "6" "SPARE3" "2" "22" "" "" "6" "0" "0" "0"





TAB.CTL


load data
CHARACTERSET US7ASCII
infile 'TAB.dat'
insert
into table "TAB$"
fields terminated by whitespace
(
  "OBJ#"                             CHAR(5) enclosed by X'22'
 ,"DATAOBJ#"                         CHAR(5) enclosed by X'22'
 ,"TS#"                              CHAR(1) enclosed by X'22'
 ,"FILE#"                            CHAR(1) enclosed by X'22'
 ,"BLOCK#"                           CHAR(6) enclosed by X'22'
 ,"BOBJ#"                            CHAR(5) enclosed by X'22'
 ,"TAB#"                             CHAR(2) enclosed by X'22'
 ,"COLS"                             CHAR(3) enclosed by X'22'
 ,"CLUCOLS"                          CHAR(1) enclosed by X'22'
 ,"KERNELCOLS"                       CHAR(3) enclosed by X'22'
 ,"PROPERTY"                         CHAR(10) enclosed by X'22'
)		 
	
	
	
	TAB.dat
	
	"20" "2" "0" "1" "25" "2" "4" "14" "1" "14" "1024"
"28" "28" "0" "1" "169" "" "" "9" "" "9" "0"
"15" "15" "0" "1" "105" "" "" "22" "" "22" "0"
"25" "25" "0" "1" "145" "" "" "3" "" "3" "0"
"17" "17" "0" "1" "113" "" "" "14" "" "14" "0"
"13" "8" "0" "1" "73" "8" "1" "7" "3" "7" "1024"
"19" "2" "0" "1" "25" "2" "3" "34" "1" "34" "1024"
"14" "8" "0" "1" "73" "8" "2" "20" "3" "20" "1024"
"21" "2" "0" "1" "25" "2" "5" "24" "1" "24" "1024"
"5" "2" "0" "1" "25" "2" "2" "26" "1" "26" "1024"
"23" "23" "0" "1" "129" "" "" "6" "" "6" "0"
"16" "6" "0" "1" "57" "6" "2" "32" "1" "32" "1024"
"56" "56" "0" "1" "377" "" "" "3" "" "3" "0"
"12" "6" "0" "1" "57" "6" "1" "4" "1" "4" "1024"
"32" "29" "0" "1" "177" "29" "2" "11" "1" "11" "1024"
"22" "10" "0" "1" "89" "10" "1" "25" "1" "25" "1024"
"18" "18" "0" "1" "121" "" "" "21" "" "21" "0"
"4" "2" "0" "1" "25" "2" "1" "37" "1" "37" "1024"
"31" "29" "0" "1" "177" "29" "1" "21" "1" "21" "1024"
"57" "57" "0" "1" "417" "" "" "8" "" "8" "536870912"
"58" "58" "0" "1" "425" "" "" "6" "" "6" "536870912"
"61" "10" "0" "1" "89" "10" "2" "8" "1" "8" "1024"
"62" "62" "0" "1" "449" "" "" "4" "" "4" "536870912"
"63" "63" "0" "1" "457" "" "" "8" "" "8" "536870912"
	

	
	DUL> 
  2  unload maclean.maclog;
. unloading table                    MACLOG       2 rows unloaded


Unloading a single table:

[oracle@vrh8 dul_dir]$ cat MACLEAN_MACLOG.ctl 
load data
CHARACTERSET AL32UTF8
infile 'MACLEAN_MACLOG.dat' "fix 81"
insert
continueif this (80) = X'2B0A'
into table "MACLEAN"."MACLOG"
fields terminated by whitespace
(
  "T1"                               CHAR(1) enclosed by X'7C'       
)


[oracle@vrh8 dul_dir]$ cat MACLEAN_MACLOG.dat 
|1|                                                                             
|1|      


insert into maclean.maclog(t2) values('刘相兵');

insert into maclean.maclog(t2) values('刘相兵');
insert into maclean.maclog(t2) values('刘相兵');

Testing LOBs:

  1* select * from maclean.maclog
SQL> /

        T1 T2
---------- --------------------
           ���������շ�����ڿη��
           ��

           刘相兵
           刘相兵
           刘相兵
           刘相兵

		   
		   SQL> alter system checkpoint;

System altered.


  3  unload maclean.maclog;
. unloading (index organized) table     LOB010b882b       0 rows unloaded
Preparing lob metadata from lob index
Reading LOB010b882b.dat 0 entries loaded and sorted 0 entries
. unloading table                    MACLOG       5 rows unloaded
DUL> [oracle@vrh8 dul_dir]$ 







[oracle@vrh8 dul_dir]$ cat MACLEAN_MACLOG.ctl
load data
CHARACTERSET AL32UTF8
infile 'MACLEAN_MACLOG.dat' "fix 81"
insert
continueif this (80) = X'2B0A'
into table "MACLEAN"."MACLOG"
fields terminated by whitespace
(
  "T1"                               CHAR(1) enclosed by X'7C'       
 ,"T2"                               CHAR(60) enclosed by X'7C'      
)



[oracle@vrh8 dul_dir]$ cat MACLEAN_MACLOG.dat 
|| |���������շ�����ڿη����|               
|| |刘相兵|                                                                  
|| |刘相兵|                                                                  
|| |刘相兵|                                                                  
|| |刘相兵| 


With the data dictionary available, DUL supports CLOBs in multiple character sets.
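To load the unloaded rows back into a target database, the generated control file is simply handed to SQL*Loader. A hedged example only; the connect string and log file name below are placeholders, not taken from the original session:

sqlldr maclean/password@target control=MACLEAN_MACLOG.ctl log=MACLEAN_MACLOG.log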









[oracle@vrh8 dul_dir]$ rm -rf *.ctl
[oracle@vrh8 dul_dir]$ rm -rf *.dat


==========================================================================================》


Next, the scenario in which the SYSTEM tablespace is not available.

DUL: Warning: Recreating file "dul.log"
Found db_id = 2696593743
Found db_name = G10R25
DUL> scan database;
Scanning tablespace 0, data file 1 ...
  1204 segment header and 84386 data blocks
  tablespace 0, data file 1: 97279 blocks scanned
Scanning tablespace 1, data file 2 ...

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 151009

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 199569

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 231593

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 259809

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 284097

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 330193

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 411913

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 419801

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 433857

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 447913

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 453097

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 455617

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 481225

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 527257

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 577033

DUL: Warning: Unsupported block type 37
DUL: Error: While processing ts# 1 file# 2 block# 624553
  0 segment header and 0 data blocks
  tablespace 1, data file 2: 673279 blocks scanned
Scanning tablespace 2, data file 3 ...
  3202 segment header and 42323 data blocks
  tablespace 2, data file 3: 65279 blocks scanned
Scanning tablespace 4, data file 4 ...

DUL: Error: Calculated block checksum, 0x0031,is non zero
DUL: Error: checksum value stored in block is 0x039c
DUL: Error: checksum value stored in block should be 0x03ad
DUL: Error: Ignoring block that has checksum mismatch
DUL: Error: While processing ts# 4 file# 4 block# 699

DUL: Error: Calculated block checksum, 0xdf8d,is non zero
DUL: Error: checksum value stored in block is 0xbc8b
DUL: Error: checksum value stored in block should be 0x6306
DUL: Error: Ignoring block that has checksum mismatch
DUL: Error: While processing ts# 4 file# 4 block# 742245
  26442 segment header and 570400 data blocks
  tablespace 4, data file 4: 766239 blocks scanned
Scanning tablespace 6, data file 5 ...
  425 segment header and 5710 data blocks
  tablespace 6, data file 5: 12799 blocks scanned
Scanning tablespace 7, data file 6 ...
  7 segment header and 6047 data blocks
  tablespace 7, data file 6: 6399 blocks scanned
Scanning tablespace 8, data file 7 ...
  0 segment header and 0 data blocks
  tablespace 8, data file 7: 1279 blocks scanned
Scanning tablespace 8, data file 8 ...
  0 segment header and 0 data blocks
  tablespace 8, data file 8: 639 blocks scanned
Scanning tablespace 8, data file 9 ...
  0 segment header and 0 data blocks
  tablespace 8, data file 9: 639 blocks scanned
Scanning tablespace 8, data file 10 ...
  0 segment header and 0 data blocks
  tablespace 8, data file 10: 639 blocks scanned
Scanning tablespace 8, data file 11 ...
  0 segment header and 0 data blocks
  tablespace 8, data file 11: 639 blocks scanned
Scanning tablespace 8, data file 12 ...
  0 segment header and 0 data blocks
  tablespace 8, data file 12: 639 blocks scanned
Scanning tablespace 8, data file 13 ...
  0 segment header and 0 data blocks
  tablespace 8, data file 13: 639 blocks scanned
Reading EXT.dat
DUL: Warning: Increased the size of DC_EXTENTS from 10000 to 32768 entries
 30442 entries loaded and sorted 30442 entries
Reading SEG.dat 31280 entries loaded
Reading COMPATSEG.dat 0 entries loaded
Reading SCANNEDLOBPAGE.dat 6890 entries loaded and sorted 6890 entries


scan extents;
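When the dictionary really is gone, the usual continuation is to let DUL propose table layouts from the scan results and then unload by object number with an explicit column list. The lines below are only a sketch based on the DUL 10 User's and Configuration Guide linked earlier; exact keywords can differ between DUL builds, and the object number and column list are placeholders:

scan tables;                                     -- analyses SEG.dat/EXT.dat and guesses column types
unload table unknown.obj26442 ( c1 number, c2 varchar2(100) )
    storage ( objno 26442 );                     -- placeholder object number and columns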


That is as far as we will take this for now.

Oracle Data UnLoader (DUL) Update

If you cannot recover the data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638

E-mail: service@parnassusdata.com

 

I first learned to use DUL about 10 years ago in an in-class workshop conducted by Jim Stone. While
the primary purpose and use of Bernard van Duijnen's DUL data extraction utility have not changed, a recent
customer incident had me review its capabilities and what has changed specifically over the last few
years. Others in ACS who don't use it often should be aware of the following:
Documentation: The technical support and download website was updated to

https://stbeehive.oracle.com/teamcollab/overview/DUL.

This should be your first stop for technical information. The User's and Configuration Guide, while still very primitive, is now easily downloadable as a Word document and is updated to reflect the current release of DUL. Wiki pages now exist, organized by topic, to extend the User's Guide. Additionally, there is a Forum created to collect War Stories and Help.
The long-time distribution list helpdul_nl@oracle.com is still available for real-time help.
Configuration: To run DUL, two configuration files are needed: init.dul and control.dul. While init.dul
specifies the platform-specific parameters, control.dul specifies the location of the datafiles for DUL to
extract from. In the past, control.dul required a file number, the relative file number and the fully
qualified datafile name. This meant asking the customer to run a SQL statement with the database in
mount-only mode, which often delayed the engagement and in many cases was not even possible.
Now, if the database is version 10 or greater, DUL can read the file number and relative file number directly
from the datafile header (assuming they aren't corrupted), making that specification unnecessary in
control.dul. There are also options now for specifying Automatic Storage Management (ASM) disks as input.
See the Configuration Guide for details; a minimal pair of configuration files is sketched below.
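As an illustration only, a sketch based on the DUL User's and Configuration Guide referenced earlier; the parameter values and datafile paths are placeholders and must be set for the actual platform and database:

init.dul:
    osd_big_endian_flag = false          # little-endian platform such as Linux x86
    osd_dba_file_bits   = 10             # platform specific, see the guide
    db_block_size       = 8192
    compatible          = 10

control.dul (for 10g and later just the datafile names; DUL reads ts# and file# from the headers):
    /u01/oradata/G10R25/system01.dbf
    /u01/oradata/G10R25/users01.dbf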
Database Features: The current version of DUL is 10.2.4.27 as of this writing. While DUL will run against
an Oracle 11g database, there are some 11g features or data types that are not currently supported:
11g secure file lobs, label security, encryption, ASM on Exadata. For a full list, see the above web URL.
With the above in mind, each DUL engagement will have its own challenges. No two are necessarily
alike. It's important to set customer expectations up front and set limits on what is defined as success.
Performing a DUL extraction used to be a fixed-price, high-cost service that required an onsite visit. Some Support Analysts still
reference it that way in SRs. However, it is now a Time and Materials, mostly remote service billed at
standard rates, which makes it more likely that a customer will be willing to move ahead.
To perform the service remotely (via OWC or WebEx), the customer must be sent the roughly 600 KB DUL
executable. As this is an internal-only utility, the legal requirement is that the customer must delete the
executable from their systems once the engagement is completed. While this may be difficult to enforce, the DUL
executable will expire after some time and put out a message saying "A new version of DUL is required".
To be prudent, always download the latest version from the website just before engaging. As Bernard is
often quoted, "Life is DUL without it!"

[Oracle ASM Data Recovery] Why V$ASM_DISK HEADER_STATUS shows PROVISIONED

When a user mounts a diskgroup that was previously usable, the following errors appear in the ASM alert log:

 

If you cannot resolve it yourself, the ASKMACLEAN professional ORACLE database recovery team can help you recover!

 

SQL> ALTER DISKGROUP ALL MOUNT
Tue Jul 19 09:31:09 2005
Loaded ASM Library - Generic Linux, version 1.0.0 library for asmlib interface
Tue Jul 19 09:31:09 2005
NOTE: cache registered group DBFILE_GRP number=1 incarn=0xc3fd9b7d
NOTE: cache registered group FLASHBACK_GRP number=2 incarn=0xc40d9b7e
NOTE: cache dismounting group 1/0xC3FD9B7D (DBFILE_GRP)
NOTE: dbwr not being msg'd to dismount
ERROR: diskgroup DBFILE_GRP was not mounted
NOTE: cache dismounting group 2/0xC40D9B7E (FLASHBACK_GRP)
NOTE: dbwr not being msg'd to dismount
ERROR: diskgroup FLASHBACK_GRP was not mounted

[oracle@vrh8 ~]$ oerr ora 15032
15032, 00000, "not all alterations performed"
// *Cause:  At least one ALTER DISKGROUP action failed.
// *Action: Check the other messages issued along with this summary error.
//
[oracle@vrh8 ~]$ oerr ora 15063
15063, 00000, "ASM discovered an insufficient number of disks for diskgroup \"%s\""
// *Cause:  ASM was unable to find a sufficient number of disks belonging to the
//          diskgroup to continue the operation.
// *Action: Check that the disks in the diskgroup are present and functioning, 
//          that the owner of the ORACLE binary has read/write permission to 
//          the disks, and that the ASM_DISKSTRING initialization parameter 
//          has been set correctly.  Verify that ASM discovers the appropriate 
//          disks by querying V$ASM_DISK from the ASM instance.
//

 

 

 

Three errors stand out:
ORA-15032: not all alterations performed
ORA-15063: diskgroup "FLASHBACK_GRP" lacks quorum of 2 PST disks; 0 found
ORA-15063: diskgroup "DBFILE_GRP" lacks quorum of 2 PST disks; 0 found

 

 

Checking V$ASM_DISK shows HEADER_STATUS as PROVISIONED, even though this disk was never labelled by asmlib:

 

 

SQL> select path, MOUNT_STATUS, HEADER_STATUS, MODE_STATUS, STATE from v$asm_disk;
PATH MOUNT_S HEADER_STATUS MODE_ST STATE
------------- ------- ------------- ------- --------
/dev/raw/raw1 CLOSED PROVISIONED ONLINE NORMAL

 

 

 

The usual cause of this problem is a hardware fault or a storage firmware upgrade.

 

Use kfed to inspect the disk header:

kfed read /dev/raw/raw1
kfbh.type: 1 ; 0x002: KFBTYP_DISKHEAD
kfdhdb.driver.provstr: ORCLDISKASM1 ; 0x000: length=12
kfdhdb.grptyp: 1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts: 3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname: ASM1 ; 0x028: length=4
kfdhdb.grpname: DBFILE_GRP ; 0x048: length=10
kfdhdb.fgname: ASM1 ; 0x068: length=4
kfdhdb.capname: ; 0x088: length=0

 

 

Note kfdhdb.hdrsts: 3 ; 0x027: KFDHDR_MEMBER. KFDHDR_MEMBER means the header status is actually MEMBER, while V$ASM_DISK.HEADER_STATUS reports PROVISIONED; the two do not match.

 

In the disk header, kfdhdb.hdrsts records the status of the disk. The table below describes several of the possible values:

 

kfdhdb.hdrsts   Description
MEMBER          The disk belongs to the current diskgroup
FORMER          The disk previously belonged to a diskgroup that has since been dropped
CANDIDATE       With raw devices, a new disk that can be used by a diskgroup
PROVISIONED     With asmlib, a new disk that can be used by a diskgroup

 

 

If kfed read finds the status is 0x027: KFDHDR_MEMBER, then V$ASM_DISK should not be showing PROVISIONED.

We have found that an incorrect checksum can cause V$ASM_DISK.HEADER_STATUS to show PROVISIONED.

For example, a hardware fault can leave most of the header looking normal with only the checksum wrong. In that case the problem can be fixed with kfed; kfed, together with amdu and adhu, is deservedly counted among the three essential ASM tools.
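When the damage really is limited to the header checksum, one hedged approach is to dump the header to a text file with kfed and write it straight back, since kfed recomputes the checksum on write; on 11g and later builds, kfed repair can also restore the header from its backup copy. The device path is taken from the example above; verify the options against the kfed build in use before touching anything:

kfed read  /dev/raw/raw1 text=raw1_hdr.txt     # dump the header block to a text file
kfed merge /dev/raw/raw1 text=raw1_hdr.txt     # write it back; the checksum is recalculated
kfed repair /dev/raw/raw1                      # 11g+ only: rebuild the header from its backup copy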

 

 

 

 

[Oracle Data Recovery] A case study of ORA-00704: bootstrap process failure with ORA-39700: database must be opened with UPGRADE option

If you cannot resolve it yourself, the Parnassusdata professional ORACLE database recovery team can help you recover!

Parnassusdata professional database recovery team

Service hotline: 13764045638    QQ: 47079569    Email: service@parnassusdata.com

$ oerr ora 704
00704, 00000, "bootstrap process failure"
// *Cause:  Failure in processing bootstrap data - see accompanying error.
// *Action: Contact your customer support representative.

$ oerr ora 39700
39700, 00000, "database must be opened with UPGRADE option"
// *Cause:  A normal database open was attempted, but the database has not 
//          been upgraded to the current server version.
// *Action: Use the UPGRADE option when opening the database to run 
//          catupgrd.sql (for database upgrade), or to run catalog.sql 
//          and catproc.sql (after initial database creation).

Recovery of Online Redo Log: Thread 1 Group 3 Seq 9 Reading mem 0
  Mem# 0: D:\ORACLE\PRODUCT\10.2.0\ORADATA\orcl\REDO03.LOG
Tue Dec 10 09:33:56 2013
Completed redo application
Tue Dec 10 09:33:56 2013
Completed crash recovery at
 Thread 1: logseq 9, block 5, scn 10226857380
 0 data blocks read, 0 data blocks written, 2 redo blocks read
Tue Dec 10 09:33:56 2013
Thread 1 advanced to log sequence 10 (thread open)
Thread 1 opened at log sequence 10
  Current log# 1 seq# 10 mem# 0: D:\ORACLE\PRODUCT\10.2.0\ORADATA\orcl\REDO01.LOG
Successful open of redo thread 1
Tue Dec 10 09:33:57 2013
SMON: enabling cache recovery
Tue Dec 10 09:33:57 2013
Errors in file d:\oracle\product\10.2.0\admin\orcl\udump\orcl_ora_1596.trc:
ORA-00704: bootstrap process failure
ORA-39700: database must be opened with UPGRADE option

Tue Dec 10 09:33:57 2013
Error 704 happened during db open, shutting down database
USER: terminating instance due to error 704
Tue Dec 10 09:33:57 2013
Errors in file d:\oracle\product\10.2.0\admin\orcl\bdump\orcl_mman_4060.trc:
ORA-00704: bootstrap process failure

Tue Dec 10 09:33:57 2013
Errors in file d:\oracle\product\10.2.0\admin\orcl\bdump\orcl_dbw0_1800.trc:
ORA-00704: bootstrap process failure

ORA-00704: bootstrap process failure
ORA-39700: database must be opened with UPGRADE option

The following is observed in the sqlplus session

SQL> startup;
ORACLE instance started.

Total System Global Area 1954160640 bytes
Fixed Size                  2227752 bytes
Variable Size            1325400536 bytes
Database Buffers          620756992 bytes
Redo Buffers                5775360 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00704: bootstrap process failure
ORA-39700: database must be opened with UPGRADE option
Process ID: 22861
Session ID: 1705 Serial number: 5

The ORA-00704 and ORA-39700 errors above generally appear when a data dictionary upgrade has gone wrong: either an ORACLE binary of the wrong version was used, or the data dictionary itself has a serious problem.

 

The usual recommendation is to point the environment at the correct ORACLE binaries and rerun the dictionary upgrade scripts, such as catupgrd.sql, as sketched below; if that still does not resolve it, the data dictionary may need to be patched manually.
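A typical sequence for rerunning the dictionary upgrade from the correct ORACLE_HOME, shown as a generic sketch for 10g/11g (later releases replace the direct catupgrd.sql run with the catctl.pl/dbupgrade wrapper):

SQL> shutdown immediate
SQL> startup upgrade
SQL> @?/rdbms/admin/catupgrd.sql
SQL> shutdown immediate
SQL> startup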

 

 

 

[Oracle Data Recovery] Diagnosing data block corruption / corrupt blocks

If you cannot resolve it yourself, the Parnassusdata professional ORACLE database recovery team can help you recover!

Parnassusdata professional database recovery team

Service hotline: 13764045638   QQ: 47079569    Email: service@parnassusdata.com

 

Block corruption in ORACLE takes many forms, but the symptoms generally fall into the following categories:

  • ORA-01578 errors
  • ORA-600 [61xx] errors
  • ORA-600 [3339] or ORA-600 [3398]
  • ORA-600 [2130], ORA-600 [2845], ORA-600 [4147] errors, and so on
  • SELECT queries returning corrupted data

 

There are a few standard first steps when tackling this kind of ORACLE block corruption diagnosis:

1. If the database is still open, determine the datafile number and block number of the corrupt block and map them to the specific object (which may be a table or an index). Using the values reported by the ORA-1578 or ORA-600 error, run the following SQL to locate it (a usage example follows the query):

 

SELECT tablespace_name, segment_type, owner, segment_name
FROM dba_extents
WHERE file_id = &fileid
and &blockid between block_id AND block_id + blocks - 1;
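For example, taking file 4 / block 699 from the checksum errors in the scan output earlier (or whatever file# / block# an ORA-1578 reports), the query is simply run with those values; the owner and segment names returned will of course depend on the database at hand:

SQL> SELECT tablespace_name, segment_type, owner, segment_name
  2  FROM dba_extents
  3  WHERE file_id = 4
  4  and 699 between block_id AND block_id + blocks - 1;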

 

2. Depending on the SEGMENT_TYPE obtained in the previous step, the following segment types can simply be rebuilt:

  • an index
  • a table whose data can be obtained again, or which can otherwise be rebuilt
  • a rollback segment, except the SYSTEM rollback segment
  • a sort segment
  • a temporary table

 

 

3. If the segment is none of the types listed in step 2, then pay attention to the following:

  • whether the database is in archivelog mode
  • whether there is a backup of the table's data, including export/sqlldr files
  • whether the table has an index built on NOT NULL columns
  • if such an index exists, whether it is UNIQUE

 

4. Has this database had corrupt blocks before? An experienced DBA can usually get a rough picture from alert.log. If similar problems have occurred in the past, refer to the follow-up recommendations later in this section.

 

5. If the user is running in archivelog mode, advise them to keep a copy of the archived redo and online logs for later diagnosis. If not, ask the user to back up all the online logs.

 

6. Where circumstances allow, set events 10210, 10211 and 10212 to catch the source of the error (a sketch follows). If the on-site engineer suspects the problem is not caused by ORACLE itself, dump the problem block and analyse it together with the OS, storage and volume-manager logs. If in-memory corruption is suspected, consider _db_block_cache_protect; note that not every platform supports _db_block_cache_protect, and it costs a fair amount of performance.
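One possible way to set the three block-checking events mentioned above, shown as a sketch (the scope, session versus system, and the level should be chosen case by case):

SQL> alter system set events '10210 trace name context forever, level 10';   -- verify data blocks
SQL> alter system set events '10211 trace name context forever, level 10';   -- verify index blocks
SQL> alter system set events '10212 trace name context forever, level 10';   -- verify cluster blocks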

 

7. In some cases it is necessary to ask the user to enable archivelog mode, so that if the problem happens again an effective recovery is still possible.

 

Evidence that must be collected

 

1. The ORACLE trace and alert files: these are the starting point for diagnosing this class of problem, and they should also be checked for other data blocks reported as corrupt.

2. A dump of the bad block taken at the OS level, for example:

Unix: dd if=badfile.dbf count=5 bs=2048 skip=75
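The bs, skip and count values depend on the block size and the block number of the corrupt block. For instance, to capture an 8 KB block 699 together with its neighbours (the path and numbers are placeholders; dumping a block on either side also sidesteps any off-by-one in the offset convention):

dd if=/u01/oradata/G10R25/users01.dbf of=block699.dmp bs=8192 skip=698 count=3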

 

 

Follow-up recommendations

 

1. When analysing the trace files or redo log dumps, it is necessary to manage the user's expectations and make the following clear:

  • we are helping to determine the cause, not deciding how to repair the corrupt blocks
  • we are studying the evidence, but the evidence may not allow a definitive conclusion

 

 

2. Sometimes the block is corrupted in memory, for example with ORA-600 [3398]. To verify such cases you can:

  • analyze table X validate structure cascade;
  • alter system flush buffer_cache;
  • dump the block at the OS level and analyse it

 

 

Follow-up actions

 

1. Look for the underlying pattern, for example:

  • all the corruption occurs on one particular raw device, disk or controller
  • a bad block appears every 4 blocks
  • the block itself is fine but appears in the wrong location
  • part of the block is healthy but the rest is wrong

 

2. Rebuild the table while bypassing the corrupt blocks (see the sketch after this list):

Run a full-table-scan CTAS with event 10231 set at level 10

Construct ROWID ranges to avoid reading the corrupt blocks; see: [Data Recovery] Using constructed ROWIDs to bypass ORA-1578, ORA-8103, ORA-1410 and other logical/physical corruption when there is no backup
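A minimal sketch of the event-10231 approach (the table names are placeholders; from 9i onwards DBMS_REPAIR.SKIP_CORRUPT_BLOCKS is the documented alternative to the event):

SQL> alter session set events '10231 trace name context forever, level 10';
SQL> create table maclean.maclog_rescue as select * from maclean.maclog;
SQL> alter session set events '10231 trace name context off';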

 

3. Enable events 10210, 10211 and 10212 and update the data block to pin down the details of the corruption further; also consider event 10231.

 

Other tools

 

Other optional tools include dul, oranum, orapatch and bbed; all of these are ORACLE internal tools.

 

 

 

[Oracle Data Recovery] Analysis of the ORA-00600 [4000] error

If you cannot resolve it yourself, the Parnassusdata professional ORACLE database recovery team can help you recover!

Parnassusdata professional database recovery team

Service hotline: 13764045638 QQ: 47079569 Email: service@parnassusdata.com

 

ORA-00600 [4000] is an internal error from the transaction undo layer of the Oracle kernel. It normally carries one argument, arg[a], which is the undo segment number (USN).

In early releases this error could be raised by a bug when transportable tablespaces were used and DML was run against a transported table; see document 1371820.8.

From 9i onwards, hitting ORA-00600 [4000] usually means a storage/OS power failure or fault has corrupted an Oracle undo segment; it is commonly seen when the database is opened after the instance was not shut down cleanly.

The following is a list of bugs associated with ORA-00600 [4000]:

 

NB Bug Fixed Description
16761566 11.2.0.4, 12.1.0.2, 12.2.0.0 Instance fails to start with ORA-600 [4000] [usn#]
13910190 11.2.0.3.BP15, 11.2.0.4, 12.1.0.1 ORA-600 [4000] from plugged in tablespace in Exadata
14741727 11.2.0.2.9, 11.2.0.2.BP19, 11.2.0.3.BP12, 11.2.0.3.BP13, 12.1.0.1 Fixes for bug 12326708 and 14624146 can cause problems – backout fix
+ 10425010 11.2.0.3, 12.1.0.1 Stale data blocks may be returned by Exadata FlashCache
* 9145541 11.1.0.7.4, 11.2.0.1.2, 11.2.0.2, 12.1.0.1 OERI[25027]/OERI[4097]/OERI[4000]/ORA-1555 in plugged datafile after CREATE CONTROLFILE in 11g
12353983 ORA-600 [4000] with XA in RAC
7687856 11.2.0.1 ORA-600 [4000] from DML on transported ASSM tablespace
2917441 11.1.0.6 OERI [4000] during startup
3115733 9.2.0.5, 10.1.0.2 OERI[4000] / index corruption can occur during index coalesce
2959556 9.2.0.5, 10.1.0.2 STARTUP after an ORA-701 fails with OERI[4000]
1371820 8.1.7.4, 9.0.1.4, 9.2.0.1 OERI:4506 / OERI:4000 possible against transported tablespace
+ 434596 7.3.4.2, 8.0.3.0 ORA-600[4000] from altering storage of BOOTSTRAP$

 

Common ways of repairing ORA-00600 [4000] include adjusting the SCN with the ADJUST_SCN event or the _MINIMUM_GIGA_SCN parameter, using other hidden parameters, or manually editing the undo segment/ITL with BBED (a heavily hedged sketch follows).
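Purely as an illustration of the undocumented approaches named above, and absolutely not something to run on a production database without a recovery specialist: the exact event level, the point at which the event can be fired, and the availability of the hidden parameter all depend on the version.

SQL> -- one commonly cited form of the ADJUST_SCN event
SQL> alter session set events 'IMMEDIATE trace name ADJUST_SCN level 1';

-- or, in the pfile, raise the minimum SCN via the hidden parameter
_minimum_giga_scn = 10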

 

If you cannot resolve it yourself, the ASKMACLEAN professional database recovery team can help you recover!

Bug 16761566 – INSTANCE FAILED TO START WITH ORA-600 [4000] [USN#]

Note that running exec dbms_space_admin.tablespace_fix_segment_extblks('SYSTEM'); against the SYSTEM tablespace may unexpectedly lead to:

 

 

ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00704: bootstrap process failure
ORA-00600: internal error code, arguments: [4000], [170], [], [], [], [], [], [], [], [], [], []

 

So the recommendation is: never run DBMS_SPACE_ADMIN.TABLESPACE_FIX_SEGMENT_EXTBLKS against the SYSTEM tablespace.

Repairing this kind of corruption generally requires point-in-time recovery (PITR), as sketched below; if there is no backup at all, the only practical option left is to manually patch the segment header of the bootstrap$ object.
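A generic RMAN point-in-time recovery sketch (the target time is only a placeholder; this presumes a usable backup plus the archived logs up to that point):

RMAN> run {
        set until time "to_date('2013-12-10 09:00:00','yyyy-mm-dd hh24:mi:ss')";
        restore database;
        recover database;
      }
RMAN> sql 'alter database open resetlogs';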

 

 

 

[Oracle Backup] Making RMAN back up archived logs without deleting them and without backing them up twice

In an 11.2.0.3 database environment, archived logs are backed up with RMAN, and the goal is:
(1) back up the archived logs every day, without deleting them after the backup
(2) once an archived log has been backed up successfully, RMAN automatically skips it the next time a backup runs

 

This requirement can be met with the backup archivelog all not backed up; syntax. With this command an archived log is only backed up if it satisfies the not backed up xx times condition; otherwise, even if the archive is still on DISK and has not been deleted, it is not backed up again. This avoids duplicate backups without having to delete the archived files on disk after every backup.

 

RMAN> backup archivelog all not backed up;

Starting backup at 30-NOV-13
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=31 recid=88 stamp=832886696
input archive log thread=1 sequence=32 recid=89 stamp=832886698
input archive log thread=1 sequence=33 recid=90 stamp=832886701
input archive log thread=1 sequence=34 recid=91 stamp=832886705
input archive log thread=1 sequence=35 recid=92 stamp=832886706
input archive log thread=1 sequence=36 recid=93 stamp=832886707
input archive log thread=1 sequence=37 recid=94 stamp=832886709
input archive log thread=1 sequence=38 recid=95 stamp=832886710
input archive log thread=1 sequence=39 recid=96 stamp=832886717
channel ORA_DISK_1: starting piece 1 at 30-NOV-13
channel ORA_DISK_1: finished piece 1 at 30-NOV-13
piece handle=/s01/flash_recovery_area/G10R25/backupset/2013_11_30/o1_mf_annnn_TAG20131130T212517_99mssy9g_.bkp tag=TAG20131130T212517 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 30-NOV-13

Starting Control File and SPFILE Autobackup at 30-NOV-13
piece handle=/s01/flash_recovery_area/G10R25/autobackup/2013_11_30/o1_mf_s_832886719_99msszd5_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 30-NOV-13

RMAN> backup archivelog all not backed up;

Starting backup at 30-NOV-13
current log archived
using channel ORA_DISK_1
skipping archive log file /s01/arch/1_31_831398352.dbf; already backed on 30-NOV-13
skipping archive log file /s01/arch/1_32_831398352.dbf; already backed on 30-NOV-13
skipping archive log file /s01/arch/1_33_831398352.dbf; already backed on 30-NOV-13
skipping archive log file /s01/arch/1_34_831398352.dbf; already backed on 30-NOV-13
skipping archive log file /s01/arch/1_35_831398352.dbf; already backed on 30-NOV-13
skipping archive log file /s01/arch/1_36_831398352.dbf; already backed on 30-NOV-13
skipping archive log file /s01/arch/1_37_831398352.dbf; already backed on 30-NOV-13
skipping archive log file /s01/arch/1_38_831398352.dbf; already backed on 30-NOV-13
skipping archive log file /s01/arch/1_39_831398352.dbf; already backed on 30-NOV-13
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=40 recid=97 stamp=832886724
channel ORA_DISK_1: starting piece 1 at 30-NOV-13
channel ORA_DISK_1: finished piece 1 at 30-NOV-13
piece handle=/s01/flash_recovery_area/G10R25/backupset/2013_11_30/o1_mf_annnn_TAG20131130T212524_99mst5k2_.bkp tag=TAG20131130T212524 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 30-NOV-13

Starting Control File and SPFILE Autobackup at 30-NOV-13
piece handle=/s01/flash_recovery_area/G10R25/autobackup/2013_11_30/o1_mf_s_832886726_99mst6n1_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 30-NOV-13

 

 

You can also specify how many existing backups an archive must have before it is skipped. For example, to skip only archives that have already been backed up two or more times, use backup archivelog all not backed up 2 times;

 

 

 

RMAN> backup archivelog all not backed up 2 times;

Starting backup at 30-NOV-13
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=31 recid=88 stamp=832886696
input archive log thread=1 sequence=32 recid=89 stamp=832886698
input archive log thread=1 sequence=33 recid=90 stamp=832886701
input archive log thread=1 sequence=34 recid=91 stamp=832886705
input archive log thread=1 sequence=35 recid=92 stamp=832886706
input archive log thread=1 sequence=36 recid=93 stamp=832886707
input archive log thread=1 sequence=37 recid=94 stamp=832886709
input archive log thread=1 sequence=38 recid=95 stamp=832886710
input archive log thread=1 sequence=39 recid=96 stamp=832886717
input archive log thread=1 sequence=40 recid=97 stamp=832886724
input archive log thread=1 sequence=41 recid=98 stamp=832886806
channel ORA_DISK_1: starting piece 1 at 30-NOV-13
channel ORA_DISK_1: finished piece 1 at 30-NOV-13
piece handle=/s01/flash_recovery_area/G10R25/backupset/2013_11_30/o1_mf_annnn_TAG20131130T212646_99mswr0o_.bkp tag=TAG20131130T212646 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 30-NOV-13

Starting Control File and SPFILE Autobackup at 30-NOV-13
piece handle=/s01/flash_recovery_area/G10R25/autobackup/2013_11_30/o1_mf_s_832886809_99msws3r_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 30-NOV-13

RMAN>  backup archivelog all not backed up 2 times;

Starting backup at 30-NOV-13
current log archived
using channel ORA_DISK_1
skipping archive log file /s01/arch/1_31_831398352.dbf; already backed up 2 time(s)
skipping archive log file /s01/arch/1_32_831398352.dbf; already backed up 2 time(s)
skipping archive log file /s01/arch/1_33_831398352.dbf; already backed up 2 time(s)
skipping archive log file /s01/arch/1_34_831398352.dbf; already backed up 2 time(s)
skipping archive log file /s01/arch/1_35_831398352.dbf; already backed up 2 time(s)
skipping archive log file /s01/arch/1_36_831398352.dbf; already backed up 2 time(s)
skipping archive log file /s01/arch/1_37_831398352.dbf; already backed up 2 time(s)
skipping archive log file /s01/arch/1_38_831398352.dbf; already backed up 2 time(s)
skipping archive log file /s01/arch/1_39_831398352.dbf; already backed up 2 time(s)
skipping archive log file /s01/arch/1_40_831398352.dbf; already backed up 2 time(s)
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=41 recid=98 stamp=832886806
input archive log thread=1 sequence=42 recid=99 stamp=832886861
channel ORA_DISK_1: starting piece 1 at 30-NOV-13
channel ORA_DISK_1: finished piece 1 at 30-NOV-13
piece handle=/s01/flash_recovery_area/G10R25/backupset/2013_11_30/o1_mf_annnn_TAG20131130T212741_99msygh3_.bkp tag=TAG20131130T212741 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 30-NOV-13

Starting Control File and SPFILE Autobackup at 30-NOV-13
piece handle=/s01/flash_recovery_area/G10R25/autobackup/2013_11_30/o1_mf_s_832886863_99msyhl3_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 30-NOV-13
