Oracle DUL: Data Recovery Software Dedicated to ORACLE

ParnassusData Recovery Manager (PRM for short) is enterprise-grade Oracle data disaster recovery software. Rather than running SQL against a live Oracle instance, it rescues target data by extracting it directly from the data files of Oracle 9i, 10g, 11g, and 12c databases. PRM is developed in Java, which makes it safe: after downloading and unzipping it runs as-is with no installation, so it is extremely convenient and easy to use.

PRM uses a graphical interface (Figure 1), so there is no need to learn a new scripting language in advance: anyone who understands the low-level structure of an Oracle database can recover data simply by following the Recovery Wizard.

 

[Figure 1: PRM graphical interface (prm-japan-dul1)]

 

You may still be wondering: "Isn't restoring from backups taken with RMAN, Oracle's long-established Recovery Manager, enough? Why would anyone need to buy PRM?"

 

As you know, data volumes in today's ever-growing enterprise IT systems increase by the day. Guaranteeing data integrity matters more than anything, yet in this era of explosive data growth Oracle DBAs face many challenges: existing disk volumes are too small to hold the full data set, restores from tape backups take longer than the expected mean time to repair, and so on.

 

The maxim "for a database, nothing matters more than backups" is engraved in every DBA's heart. Real-world environments, however, vary enormously. In the author's experience, enterprise database environments run short of backup space, newly purchased storage devices cannot be delivered on short notice, or a backup that was dutifully taken turns out to be unusable at restore time; what happens in the field is truly unpredictable.

 

PRM, the ParnassusData database recovery manager, was developed to deal with exactly these common real-world recovery problems. Built on an understanding of ORACLE's internal data structures, it can handle many difficult cases even when no backup exists at all: a lost SYSTEM tablespace, ORACLE data dictionary tables damaged by mistaken operations, or a database that cannot start because a power failure left the data dictionary inconsistent. It can also recover from human errors such as mistakenly truncating or deleting business tables.

 

PRM is not limited to Oracle experts: even someone with only a few days of exposure to Oracle databases can use it freely. This is thanks to PRM's exceptionally easy installation and its graphical, interactive interface. Engineers performing a recovery need no specialist database knowledge and no new scripting language, let alone knowledge of the database's underlying storage structures; a few mouse clicks are enough to recover data with room to spare.

 

Compared with DUL, the traditional recovery tool, PRM has strengths DUL cannot match. DUL is Oracle's original internal recovery tool, and every use of it must go through Oracle's internal process; in general only the few customers who have purchased Oracle on-site technical support can begin recovery work, assisted by an engineer dispatched by Oracle. PRM changes the reality that only a handful of specialists can perform database recovery work, and by greatly shortening the time from the moment a database fails to the completion of recovery, it substantially reduces an enterprise's recovery costs.

 

Recovery with PRM comes in two modes. In the traditional extraction mode, data is fully extracted from the data files, written to flat text files, and then loaded into a database with a tool such as SQLLDR. The traditional approach is easy to understand, but it has the drawback of requiring twice the space of the existing data volume: the space occupied by the flat text files plus the space needed to load that text data back into a database. It also tends to take twice the time, because the original data must first be extracted from the files before it can be loaded into the new database.
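As a concrete illustration of the traditional mode, a flat file produced by the extraction can be loaded with SQL*Loader roughly as follows. This is a minimal sketch under assumed names: the files SCOTT_EMP.dat/SCOTT_EMP.ctl and the EMP column list are hypothetical, and the control file PRM actually generates may differ in detail.

    -- SCOTT_EMP.ctl (hypothetical control file)
    LOAD DATA
    INFILE 'SCOTT_EMP.dat'
    INTO TABLE SCOTT.EMP
    FIELDS TERMINATED BY WHITESPACE OPTIONALLY ENCLOSED BY '"'
    (EMPNO, ENAME, JOB, MGR, HIREDATE DATE "DD-MON-YYYY", SAL, COMM, DEPTNO)

    $ sqlldr scott/tiger control=SCOTT_EMP.ctl log=SCOTT_EMP.log

Loading this way writes the data twice, once as text and once into the target database, which is exactly the extra space and time cost described above.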

 

The other mode is PRM's original DataBridge mode: PRM extracts the data directly and loads it into a new or any other usable database. This avoids staging the data on disk, saving both time and space compared with the traditional method.

 

Oracle ASM automatic storage management is now adopted by more and more enterprises, and it offers many advantages over traditional file systems, such as higher performance, cluster support, and easier administration. For ordinary users, however, ASM's storage structures are quite hard to understand, and once the internal data structures of an ASM disk group are damaged, the disk group can no longer be MOUNTed; the users' precious data is locked inside ASM. In such cases the only remedy used to be calling an Oracle specialist familiar with ASM internals on site to resolve the problem by hand, and Oracle's on-site technical support requires a large budget.

 

For that reason, PRM's developers (former senior Oracle engineers), drawing on their broad knowledge of Oracle ASM's internal data structures, added ASM data recovery features to PRM.

 

The ASM data recovery features PRM currently supports are as follows:

1.  Even if a disk group cannot be MOUNTed normally, PRM can read the usable metadata on the ASM disks and use that metadata to copy out the ASM files in the disk group.

2.  Even if a disk group cannot be MOUNTed normally, PRM can read the data files stored in ASM and extract their data directly. Both the traditional extraction mode and the DataBridge mode are supported for the extraction.

 

 

Oracle DUL Data Unloader Recovery Tool: Information Summary

Oracle DUL is Oracle's internal database recovery tool, developed by Bernard van Duijnen of Oracle Support in the Netherlands:

  • DUL is not an Oracle product
  • DUL is not a product supported by Oracle
  • DUL is strictly restricted to internal use by Oracle's after-sales Support organization
  • Abroad, use of DUL requires Oracle's internal approval; you must first have purchased Oracle's standard PS support before DUL can even be used for you, otherwise you are not eligible at all
  • One reason DUL is so tightly controlled is that it incorporates portions of Oracle source code, so it must be strictly controlled

 

From roughly DUL 9 onward, Bernard van Duijnen added a software time lock to DUL in order to limit outside use: he periodically compiles DUL for the various platforms (DUL is written in C) and uploads the builds to Oracle's internal DUL workspace (a space based on stbeehive), which Oracle Support staff can download from after logging in over the internal VPN. In other words, if bernard.van.duijnen publishes a build on October 1 with a 30-day time lock, that build essentially stops working by November 1. DUL does not simply read the OS clock, so changing the OS time does not help.

Note that since bernard.van.duijnen does not provide DUL for the HP-UX platform, there is no HP-UX build of DUL.

Also, early Oracle DUL versions basically cannot be used against current 10g, 11g, or 12c databases, because they are too old. In the United States the use of DUL is strictly controlled; within China it is essentially only Oracle ACS (Advanced Customer Services) that uses it for customers, and purchasing ORACLE ACS on-site service is still quite expensive.

The attachment below is an Oracle ACS document introducing its DUL service (on-site service from the original vendor is of course rather costly, and it also requires that the customer already pays for PS standard support each year; otherwise the ACS on-site service cannot even be purchased):

https://www.askmac.cn/wp-content/uploads/2014/01/DUL.pdf

 

 

The English user manual for DUL 10:

DUL User’s and Configuration Guide V10.2.4.27

https://www.askmac.cn/wp-content/uploads/2014/01/DUL-Users-and-Configuration-Guide-V10.2.4.27.pdf

 

Download links for DUL 10 follow; because of the time lock, they expire periodically.

DUL for the Linux platform (superseded by PRM-DUL)

DUL for the Windows platform (superseded by PRM-DUL)

 

 

DUL can extract data from severely damaged databases. It scans Oracle data files directly, recognizes segment header blocks, walks the extent information, and reads the actual row data from it. DUL can then generate SQLLDR-style load files or EXP-format DMP files.

If the SYSTEM tablespace data files are still present, DUL reads the Oracle data dictionary. Otherwise DUL reads the rows directly and uses internal heuristics to determine each column's type and length.

DUL can handle essentially all common row types, including regular rows, migrated rows, chained rows, multiple extents, and cluster tables, with no extra manual intervention. Cross-platform extraction also works. DUL extracts data directly from Oracle data files without needing an Oracle instance. It performs dirty reads, assuming every transaction has been committed. It does not check whether media recovery is needed, and even damaged data blocks can be read. Both DMT and LMT tablespaces are supported. Because of the dirty reads, it is generally recommended that the application validate the data after a DUL recovery.

As for compatibility, DUL can process data files copied from different operating systems. Most database structures are supported: chained rows, migrated rows, hash/index clusters, LONG, RAW, ROWID, DATE, NUMBER, multiple free lists, high water mark, NULLs, and so on. DUL is compatible with ORACLE 6, 7, 8, 9, 10g, 11g, and 12c.

ParnassusData (the company Maclean works for) has developed PRM-DUL, a product of the same kind as DUL. On top of DUL it adds a graphical GUI and DataBridge (data need not land on disk as SQLLDR files; it is transferred straight to a target database, like a DBLINK), among other features. And because PRM-DUL is written in JAVA, it runs on all platforms, including HP-UX.

The free edition of PRM-DUL can be downloaded here:

http://parnassusdata.com/sites/default/files/ParnassusData_PRMForOracle_3206.zip

PRM-DUL user manual: http://www.parnassusdata.com/sites/default/files/ParnassusData%20Recovery%20Manager%20For%20Oracle%20Database%E7%94%A8%E6%88%B7%E6%89%8B%E5%86%8C%20v0.3.pdf

 

By default the free edition of PRM-DUL extracts at most ten thousand rows per table. If your database is small enough that no table exceeds ten thousand rows, you can simply use the free PRM-DUL. If your database is larger and the data is truly important, consider purchasing the Enterprise Edition of PRM-DUL: one license covers one database and costs 7,500 RMB (including 17% VAT).

PRM-DUL also offers some free licenses:

Several PRM-DUL Enterprise Edition license keys are openly available free of charge.

If your Oracle database recovery case still cannot be solved after using DUL, you can consider recovery through a service engagement:

ParnassusData currently provides Oracle recovery for almost every scenario, including: database cannot open; tables mistakenly DROPped, TRUNCATEd, or DELETEd; ASM Diskgroup cannot MOUNT; and more.

 

 

PRM-DUL is developed in JAVA, which guarantees that PRM runs across platforms: whether on Unix platforms such as AIX, Solaris, and HPUX, on Linux platforms such as Redhat, Oracle Linux, and SUSE, or on Windows, PRM runs directly.

 

Operating system platforms supported by PRM-DUL:

 

Platform Name Supported
AIX POWER Yes
Solaris Sparc Yes
Solaris X86 Yes
Linux X86 Yes
Linux X86-64 Yes
HPUX Yes
MacOS Yes

 

 

Database versions currently supported by PRM-DUL

 

ORACLE DATABASE VERSION Supported
Oracle 7 Yes
Oracle 8 Yes
Oracle 8i Yes
Oracle 9i Yes
Oracle 10g Yes
Oracle 11g Yes
Oracle 12c Yes

 

 

Languages currently supported by PRM-DUL:

 

 

Language Character Set Corresponding Encoding
Chinese (Simplified/Traditional) ZHS16GBK GBK
Chinese (Simplified/Traditional) ZHS16DBCS CP935
Chinese (Simplified/Traditional) ZHT16BIG5 BIG5
Chinese (Simplified/Traditional) ZHT16DBCS CP937
Chinese (Simplified/Traditional) ZHT16HKSCS CP950
Chinese (Simplified/Traditional) ZHS16CGB231280 GB2312
Chinese (Simplified/Traditional) ZHS32GB18030 GB18030
Japanese JA16SJIS SJIS
Japanese JA16EUC EUC_JP
Japanese JA16DBCS CP939
Korean KO16MSWIN949 MS949
Korean KO16KSC5601 EUC_KR
Korean KO16DBCS CP933
French WE8MSWIN1252 CP1252
French WE8ISO8859P15 ISO8859_15
French WE8PC850 CP850
French WE8EBCDIC1148 CP1148
French WE8ISO8859P1 ISO8859_1
French WE8PC863 CP863
French WE8EBCDIC1047 CP1047
French WE8EBCDIC1147 CP1147
German WE8MSWIN1252 CP1252
German WE8ISO8859P15 ISO8859_15
German WE8PC850 CP850
German WE8EBCDIC1141 CP1141
German WE8ISO8859P1 ISO8859_1
German WE8EBCDIC1148 CP1148
Italian WE8MSWIN1252 CP1252
Italian WE8ISO8859P15 ISO8859_15
Italian WE8PC850 CP850
Italian WE8EBCDIC1144 CP1144
Thai TH8TISASCII CP874
Thai TH8TISEBCDIC TIS620
Arabic AR8MSWIN1256 CP1256
Arabic AR8ISO8859P6 ISO8859_6
Arabic AR8ADOS720 CP864
Spanish WE8MSWIN1252 CP1252
Spanish WE8ISO8859P1 ISO8859_1
Spanish WE8PC850 CP850
Spanish WE8EBCDIC1047 CP1047
Portuguese WE8MSWIN1252 CP1252
Portuguese WE8ISO8859P1 ISO8859_1
Portuguese WE8PC850 CP850
Portuguese WE8EBCDIC1047 CP1047
Portuguese WE8ISO8859P15 ISO8859_15
Portuguese WE8PC860 CP860

 

 

Table storage types supported by PRM-DUL:

 

Table Storage Type Supported
Cluster tables YES
Index-organized tables, partitioned or not YES
Regular heap tables, partitioned or not YES
Regular heap tables with basic compression YES (Future)
Regular heap tables with advanced compression NO
Regular heap tables with hybrid columnar compression NO
Regular heap tables with encryption NO
Tables with virtual columns NO
Chained rows, migrated rows YES

 

 

Note: for virtual columns and 11g optimized default columns, the data extraction itself may succeed, but the corresponding columns will be lost. Both are new features introduced after 11g and are rarely used.

 

 

Data types supported by PRM-DUL

 

數據類型 Supported
BFILE No
Binary XML No
BINARY_DOUBLE Yes
BINARY_FLOAT Yes
BLOB Yes
CHAR Yes
CLOB and NCLOB Yes
Collections (including VARRAYS and nested tables) No
Date Yes
INTERVAL DAY TO SECOND Yes
INTERVAL YEAR TO MONTH Yes
LOBs stored as SecureFiles Future
LONG Yes
LONG RAW Yes
Multimedia data types (including Spatial, Image, and Oracle Text) No
NCHAR Yes
Number Yes
NVARCHAR2 Yes
RAW Yes
ROWID, UROWID Yes
TIMESTAMP Yes
TIMESTAMP WITH LOCAL TIMEZONE Yes
TIMESTAMP WITH TIMEZONE Yes
User-defined types No
VARCHAR2 and VARCHAR Yes
XMLType stored as CLOB No
XMLType stored as Object Relational No

 

 

PRM-DUL support for ASM

 

 

Feature Supported
Extract data directly from ASM without copying it to a file system YES
Copy data files out of ASM YES
Repair ASM metadata YES
Graphical display of ASM internals Future

 

 

 

The following is the user manual for the ORACLE DUL tool:

 

DUL’s PRINCIPLES and FEATURE LIST

STANDALONE C-PROGRAM

DUL is a standalone C program that directly retrieves rows from tables in data files. The Oracle RDBMS software is NOT used at all. DUL does dirty reads; it assumes that every transaction is committed. Nor does it check or require that media recovery has been done.

LAST RESORT

DUL is intended to retrieve data that cannot be retrieved otherwise. It is NOT an alternative for EXP, SQL*Plus etc. It is meant to be a last resort, not for normal production usage.

Before you use DUL you must be aware that the rdbms has many hidden features to force a bad database open. Undocumented init.ora parameters and events can be used to skip roll forward, to disable rollback, disable certain SMON actions, advance the database scn and more.

DATABASE CORRUPT – BLOCKS OK

The database can be corrupted, but an individual data block used must be 100% correct. During all unloading, checks are made to ensure that blocks are not corrupted and belong to the correct segment. If a bad block is encountered during a scan, an error message is printed in the loader file and to standard output, and unloading continues with the next row or block.

ROWS in CLUSTERS/TABLES/INDEXES

DUL can and will only unload index/table/cluster data. It will NOT dump triggers, stored procedures nor create sql scripts for tables or views. (But the data dictionary tables describing them can be unloaded). The data will be unloaded in a format suitable for SQL*Loader or IMP. A matching control file for SQL*Loader is generated as well.

DUL can unload indices and index organized tables. An index unload is useful to determine how many rows a table should have or to identify the missing rows.

CROSS PLATFORM UNLOADING

Cross-platform unloading is supported. The database can be copied from a different operating system than the DUL-host. (Databases/systems done so far: Sequent/ptx, Vax Vms, Alpha Vms, MVS, HP9000/8xx, IBM AIX, SCO Unix, Alpha OSF/1, Intel Windows NT).

The configuration parameters within “init.dul” will have to be modified to match those of the original platform and O/S rather than the platform from which the unload is being done.

ROBUST

DUL will not dump, spin or hang no matter how badly corrupted the database is.

(NEARLY) ALL ORACLE FEATURES SUPPORTED

Full support for all database constructs: row chaining, row migration, hash/index clusters, longs, raws, rowids, dates, numbers, multiple free list groups, segment high water mark, NULLS, trailing NULL columns, and unlimited extents, new block layout of Oracle8, partitioned tables.

Later additions are lobs, compressed indexes, 9ir2 compressed tables. Varrays and ADTs (user defined objects) are partly supported in sql*loader mode.

ASM is fully supported, files can be extracted from an asm disk group. No mounted ASM instance is used, the disks are accessed directly. Non default asm allocation unit sizes are supported.

Data can be recovered from export dump files with the unexp command suite. Some initial work has been done for unpump to support data pump files.

SUPPORTED RDBMS VERSIONS

DUL should work with all versions starting with oracle 6. DUL has been tested with versions from 6.0.26 up to 10.2. Even the old block header layout (pre 6.0.27.2) is supported.

MULTI BYTE SUPPORT

DUL itself is essentially a single-byte application. The command parser does not understand multi-byte characters, but it is possible to unload any multi-byte database, and there is a workaround for every known caveat.

DUL can optionally convert to UTF8. This is for NCLOBS that are stored in UTF16.

RESTRICTIONS

MLSLABELS

Multi Level Security Labels of trusted oracle are not supported.

(LONG) RAW

DUL can unload (long) raws. Nowadays there is a suitable format in SQL*Loader to preserve all long raws, so long raws and blobs can be unloaded in both modes.

ORACLE8 OBJECT OPTION AND LOBS

Nested tables are not yet supported; if they are needed let me know and it will be added. Varrays and ADTs are supported, including those stored as a kernel lob. CLOBs and NCLOBs are supported both in SQL*Loader mode and in exp mode. BLOBs are best handled in exp mode; the hex format generated in SQL*Loader mode is currently not loaded correctly.

PORTABLE

DUL can be ported to any operating system with an ANSI-C compiler. DUL has been ported to many UNIX variants, VMS and WindowsNT. Currently all builds are done using gcc and a cross compiler environment on Linux.

RDBMS INTERNALS

A good knowledge of Oracle RDBMS internals is a prerequisite to be able to use DUL successfully. For instance the Data Server Internals (DSI) courses give a good foundation. There is even a module dedicated to DUL.

 

SETTING UP and USING DUL

CONFIGURATION FILES

There are two configuration files for DUL. “init.dul” contains all configuration parameters. (size of caches, details of header layout, oracle block size, output file format) In the control file, “control.dul”, the database data file names and the asm disks can be specified.

DATA DICTIONARY AVAILABLE

The Oracle data dictionary is available if the data files which make up the SYSTEM tablespace are available and usable. The numbers Oracle assigned to these files and the names you have given them (which do not have to be the original names Oracle knew) must be included in the "control.dul" file. You also need to include the file numbers and names of any files from other tablespaces from which you eventually wish to unload TABLEs and their data. Leaving those files out will not affect the data dictionary unload step, but it will affect later TABLE unloading.
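For instance, a minimal control.dul for this situation might look like the sketch below. The paths and numbers are hypothetical; as the examples later in this manual show, each line gives the tablespace number, relative file number, and file name for Oracle8 and higher:

    # SYSTEM tablespace data file (needed for the data dictionary)
    0 1 /u01/oradata/ORCL/system01.dbf
    # data files of the tablespaces whose tables will be unloaded
    4 4 /u01/oradata/ORCL/users01.dbf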

USING DUL WHEN USER$, OBJ$, TAB$ and COL$ CAN BE  UNLOADED

Steps to follow:

  1. Configure DUL for the target database. This means creating a correct init.dul and control.dul. The SYSTEM tablespace's data file numbers and names must be included in the control.dul file, along with any data files for tablespaces from which you wish to unload TABLEs and their data. For Oracle8 and higher the tablespace number and the relative file number must be specified for each datafile.
  2. Use the " BOOTSTRAP; " command to prepare for unloading. The bootstrap process will find the compatibility segment, find and unload the bootstrap$ table, and then unload the data dictionary. The old "dul dictv7.ddl" style scripts are no longer needed.
  3. Unload the tables for which data files have been included within the "control.dul" file. Use one of the following commands:
    • "UNLOAD TABLE [owner.]table ;" (do not forget the semicolon)
      • This will unload the one table definition and the table's data.
    • "UNLOAD USER user name ;"
      • This unloads all tables and data for the specified user.
    • "UNLOAD DATABASE ;"
      • This unloads all of the database tables available (except for user SYS).

NO DATA DICTIONARY AVAILABLE

If data files are not available for the SYSTEM tablespace, the unload can still continue but USER, TABLE and COLUMN names will not be known. Identifying the tables can be an overwhelming task, but it can be (and has been) done. You need in-depth knowledge about your application and the application tables. Column types can be guessed by DUL, but table and column names are lost. Any old SYSTEM tablespace from the same database, even weeks old, can be of great help! Most of the information that DUL uses does not change (only the dataobj# changes, during truncate or index rebuild).

USING DUL WITHOUT SYSTEM TABLESPACE

Steps to follow:

  1. configure DUL for the target database. This means creating a correct init.dul and control.dul. (See Port specific parameters ). In this case control.dul file will need the numbers and names of datafiles from which TABLEs and data will be unloaded but it does not require the SYSTEM TableSpace’s information.
  2. SCAN DATABASE; : scan the database, build extent and segment map
  3. SCAN TABLES; or SCAN EXTENTS; : gather row statistics
  4. Identify the lost tables from the output of step 3.
  5. UNLOAD the identified tables.
AUTOMATED SEARCH

To ease the hunt for the lost tables: the scanned statistical information in seen_tab.dat and seen_col.dat can be loaded into a fresh database. If you recreate the tables (hopefully the create table scripts are still available), the structure information of a "lost" table can be matched to the scanned information of the "seen" tables with two SQL*Plus scripts (fill.sql and getlost.sql).
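The idea behind the matching can be sketched in SQL along these lines. This is only an illustration of the approach, not the actual fill.sql/getlost.sql shipped with DUL; the staging table SEEN_TAB and its columns OBJ_ID and COL_COUNT are hypothetical names for the seen_tab.dat data loaded into the fresh database:

    REM hypothetical sketch of the matching step automated by fill.sql/getlost.sql
    SELECT s.obj_id, u.table_name
      FROM seen_tab s, user_tables u
     WHERE s.col_count = ( SELECT COUNT(*)
                             FROM user_tab_columns c
                            WHERE c.table_name = u.table_name );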

HINTS AND PITFALLS
  • Names are not really relevant for DUL, only for the person who must load the data. But the unloaded data does not have any value, if you do not know from which table it came.
  • The guessed column types can be wrong. Even though the algorithm is conservative and decides UNKNOWN if not sure.
  • Trailing NULL columns are not stored in the database. So if the last columns only contain NULLs then the scanner will NOT find them. (During unload trailing NULL columns are handled correctly.)
  • When a table is dropped, the description is removed from the data dictionary only. The data blocks are not overwritten unless they are reused for a new segment. So the scanner software can see a table that has been dropped.
  • Tables without rows will go unnoticed.
  • Newer objects have a higher object id than older objects. If a table is recreated, or if there is a test and a production version of the same table, the object id can be used to decide.

DDL (DUL Description Language) UNLOAD STATEMENT OVERVIEW

DUL uses an SQL like command interface. There are DDL statements to unload extents, tables, users or the entire database. Data dictionary information required can be specified in the ddl statements or taken from the previously unloaded data dictionary. The following three statements will unload the DEPT table. The most common form is if the data dictionary and the extent map are available:

         UNLOAD TABLE scott.dept;

All relevant information can be specified in the statement as well:

        REM Columns with type in the correct order
        REM The segment header location in the storage clause
        UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
               STORAGE( EXTENTS ( FILE 1 BLOCK 1205 ));

Oracle version 6:

        REM version 6 data blocks have segment header location in each block
        ALTER SESSION SET USE_SCANNED_EXTENT_MAP = TRUE;
        UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
               STORAGE( EXTENTS ( FILE 1 BLOCK 1205 ));

Oracle7:

        REM Oracle7 data blocks have object id in each block

        ALTER SESSION SET USE_SCANNED_EXTENT_MAP = TRUE;
        UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
               STORAGE( OBJNO 1501 );

DUL’S OUTPUT FORMAT.

Only complete good rows are written to the output file. For this each row is buffered. The size of the buffer can be changed with the init.dul parameter BUFFER. No speed is gained with a high BUFFER value; it should just be big enough to hold a complete row. Incomplete or bad rows are not written out. The FILE_SIZE_IN_MB init.dul parameter can be used to split the output (at a proper boundary) into multiple files; each file can be loaded individually.
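For example, an init.dul fragment that uses a 64 KB row buffer and splits dump files at roughly 1 GB could read as follows (the values are arbitrary illustrations; both parameters are described in the parameter list later in this manual):

    BUFFER = 65536           # must be big enough to hold one complete row
    FILE_SIZE_IN_MB = 1024   # split output into individually loadable pieces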

There are three different modes of output format.

  • Export mode
  • SQL*Loader mode: stream data files
  • SQL*Loader mode: Fixed physical record data files

EXPORT MODE

The generated file is completely different from a table mode export generated by EXP! The file is the minimal format that IMP can load. For each table a separate IMP-loadable file will be generated. It is a single table dump file. It contains a header, an insert table statement and the table data. Table grants, storage clauses, or triggers will not be included. A minimal create table statement is included (no storage clause, just column names and types without precision). The character set indication in the generated file header is V6 style; it is set to mean an ASCII-based character set.

To enable export mode, set the init.dul parameter EXPORT_MODE to TRUE.

As the generated pseudo dump file does not contain character set information, set NLS_LANG to match that of the original database. In export mode no character set conversion is done.
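A typical load of such a pseudo dump file might look as follows. This is a sketch under assumptions: the dump file name SCOTT_EMP.dmp is hypothetical, and the original database character set is taken to be WE8ISO8859P1, as in the sample session later in this manual:

    $ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
    $ imp scott/tiger file=SCOTT_EMP.dmp full=y ignore=y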

SQL*LOADER MODES

The data in the output file is either not converted at all, or everything is converted to UTF8 if LDR_OUTPUT_IN_UTF8 is set. This setting is required in mixed character set environments, as the contents of a data file must have a single character set.

When loading the data you probably need to set NLS_LANG to match that of the original database to prevent unwanted character set conversion.

For both SQL*Loader output formats the columns will be space separated and enclosed in double quotes. Any double quote in the data will be doubled. SQL*Loader recognizes this and will load only one. The character used to enclose the columns can be changed from double quote to any character you like with the init.dul parameter LDR_ENCLOSE_CHAR.

There are two styles of physical record organization:

Stream Mode

Nothing special is done in stream mode; a newline is printed after each record. This is a compact format that can be used if the data does not contain newline characters. To enable stream mode, set LDR_PHYS_REC_SIZE = 0 in init.dul.

Fixed Physical Records

This mode is essential if the data can contain newlines. One logical record, one complete row, can be composed of multiple physical records. The default record length is 81; this fits nicely on the screen of a VT220. The physical record size can be specified with LDR_PHYS_REC_SIZE in init.dul.
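The choice between the two record styles is a single init.dul setting, for example:

    # stream mode: a newline after each record (data must not contain newlines)
    LDR_PHYS_REC_SIZE = 0
    # or fixed physical records of 81 bytes, safe for data containing newlines
    # LDR_PHYS_REC_SIZE = 81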

OUTPUT FILE NAMES

The file names generated are: owner name_table name.ext. The extension is ".dmp" for IMP-loadable files; ".dat" and ".ctl" are used for the SQL*Loader data file and control file. To prevent variable substitution and other unwanted side effects, strange characters are stripped (only alphanumerics and '_' are allowed).

If the FILE parameter is set the generated names will be FILEnnn.ext. This is a workaround for file systems that do not support long enough file names (old Windows with the 8.3 filename format).
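For example, unloading SCOTT's EMP table in SQL*Loader mode yields names like these (the second pair assumes the FILE parameter is set to a base name):

    SCOTT_EMP.dat  SCOTT_EMP.ctl      # default owner_table naming
    FILE001.dat    FILE001.ctl        # with the FILE parameter set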

SOME DUL INTERNALS

REQUIRED INFORMATION

To unload table data from a database block the following information must be known:

  1. Column/Cluster Information: The number and type of the columns. For char or varchar columns the maximum length as well. The number of cluster columns and the table number in the cluster. This information can be supplied in the unload statement or it can be taken from the previously unloaded USER$, OBJ$, TAB$ and COL$.
  2. Segment/Extent information: When unloading a table the extent table in the data segment header block is used to locate all data blocks. The location of this segment header block (file number and block number) is taken from the data dictionary or can be specified in the unload statement. If the segment header is not correct/available then another method must be used. DUL can build its own extent map by scanning the whole database. (in a separate run of DUL with the scan database statement.)
BINARY HEADERS

C-Structs in block headers are not copied directly, they are retrieved with specialized functions. All offsets of structure members are programmed into DUL. This approach makes it possible to cross-unload. (Unload an MVS created data file on an HP) Apart from byte order only four layout types have been found so far.

  1. Vax VMS and Netware : No alignment padding between structure members.
  2. Korean Ticom Unix machines : 16 bit alignment of structure members.
  3. MS/DOS 16 bit alignment and 16 bit wordsize.
  4. Rest of the world (Including Alpha VMS) structure member alignment on member size.
MACHINE DEPENDENCIES

Machine dependencies (of the database) are configurable with parameters:

  • Order of bytes in a word (big/little endian).
  • Number of bits for the low part of the FILE# in a DBA (Block Address).
  • Alignment of members in a C-struct.
  • Number of blocks or bytes before the oracle file header block.
  • Size of a word used in the segment header structure.
UNLOADING THE DATA DICTIONARY

DUL can use the data dictionary of the database to be unloaded if the files for it exist and are uncorrupted. For the data dictionary to be used, internal tables must be unloaded first to external files: (USER$, OBJ$, TAB$ and COL$). The bootstrap command will find and unload the required tables.

DDL ( DUL DESCRIPTION LANGUAGE ) SPECIFICATION

[ ALTER SESSION ] SET init.dul parameter =  value ;
     Most parameters can be changed on the fly. 

BOOTSTRAP [LOCATE | GENERATE | COMPLETE
           | UNLOAD   Bootstrap$ segment header block address ];
     Bootstraps the data dictionary. Default is COMPLETE.
     LOCATE finds and unloads the bootstrap$ table.
     GENERATE builds a ddl file based on information in the cache.
     COMPLETE is in fact LOCATE, followed by GENERATE (two times)

COMMIT;
     Writes the changed block to the data file.

CREATE BLOCK INDEX  index_name  ON  device ;

A block index contains the addresses of valid oracle blocks found in a corrupt file system. It is useful to merge multiple disk images or to unload from corrupted file systems. This is only useful in extreme file system corruption scenarios.
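For example (the index name and device are hypothetical):

    CREATE BLOCK INDEX salvage1 ON /dev/sdb1;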

DESCRIBE  owner_name  . table_name ;

DUMP [ TABLESPACE  tablespace_no ]
     [ FILE  file_no  ]
     [ BLOCK  block_no  ]
     [ LEVEL  level_no  ] ;
     Not a complete blockdump, mainly used for debugging.
     The block address is remembered.

EXTRACT  asm file name  to  output file name  ;
     Copies any ASM file from a disk group to the file system.
     (there was a problem with online redologs this needs more testing)

MERGE block_index INTO [  segment  ];

The merge command uses the information in the index file to locate possible data blocks; it looks for a combination of file numbers and object id. Each candidate block is compared to the current block in the datafile; if the current block is bad, or has an older scn, the candidate will be written into the datafile. This is only useful in extreme file system corruption scenarios.

REM any_text_you_like_till_End_Of_Line : comment
REM  NOT allowed inside ddl statements. ( To avoid a two layer lexical scan).

ROLLBACK; # Cancels the UPDATE statements.

SHOW     DBA  dba ;                # dba -> file_no block_no calculator
	   | DBA  rfile_no block_no ;  # file_no block_no -> dba calculator
	   | SIZES ;                   # show some size of important structs
	   | PARAMETER;                # shows the values of all parameters
	   | LOBINFO;                  # lob indexes found with SCAN DATABASE
       | DATAFILES;                # summary of configured datafiles
       | ASM DISKS;                # summary of configured asm disks
       | ASM FILES;                # summary of configured datafiles on asm
       | ASM FILE  cid      # extent information for asm file

UNEXP [TABLE] [  owner  . ]  table name 
       (  column list  ) [ DIRECT ]
       DUMP FILE  dump file name
       FROM  begin offset  [ UNTIL  end offset  ]
       [ MINIMUM  minimal number of columns  COLUMNS ] ;

       To unload data from a corrupted exp dump file. No special setup
       or configuration is required, just the compatible parameter.
       The start offset should be where a row actually begins.

UNPUMP
       To unload data from a corrupted expdp (datapump) dump file.
       This is still work in progress; the basic commands work
       but are rather complex to use. Contact me if this is needed.

UNLOAD DATABASE;

UNLOAD USER user_name;

UNLOAD [TABLE]  [  schema_name . ]  table_name 
              [ PARTITION(  partition_name ) ]
              [ SUBPARTITION(  sub_partition_name ) ]
              [ (  column_definitions ) ]
              [  cluster_clause  ]
              [  storage_clause  ] ;

UNLOAD EXTENT  table_name 
              [ (  column_definitions  ) ]
              [ TABLESPACE  tablespace_no  ] 
              FILE  extent_start_file_number
              BLOCK extent_start_block_number 
              BLOCKS  extent_size_in oracle_blocks ;

UNLOAD LOB SEGMENT FOR [  schema_name . ]  table_name   [ (  column name  ) ] ;

UNLOAD LOB SEGMENT STORAGE ( SEGOBJNO data obj#) ;

UPDATE [ block_address ] SET UB1|UB2|UB4 @ offset_in_block = new_value ;
UPDATE [ block_address ] SET  block element name  = new_value ;
        Now and then we can repair something.
	Patches the current block and dumps it.
	You can issue multiple UPDATE commands.
	Block is not written yet, use COMMIT to write. 

storage_clause ::=
        STORAGE ( storage_specification  [ more_storage_specs ] )

storage_specification ::=
        OBJNO object_id_number 
|       TABNO cluster_table_number
|       SEGOBJNO cluster/data_object_number       /* v7/v8 style data block id */
|       FILE  data_segment_header_file_number     /* v6 style data block id */
        BLOCK data_segment_header_block_number )   
|       any_normal_storage_specification_but_silently_ignored

SCAN DATABASE;

Scans all blocks of all data files. Two or three files are generated:

  1. SEG.dat information of found segment headers (index/cluster/table): (object id, file number, and block number).
  2. EXT.dat information of contiguous table/cluster data blocks. (object id(V7), file and block number of segment header (V6), file number and block number of first block, number of blocks, number of tables)
  3. SCANNEDLOBPAGE.dat information for each lob datablock, this file (optional, only if init.dul:SCAN_DATABASE_SCANS_LOB_SEGMENTS=TRUE) can possibly be huge. Also the required memory size can be problematic. The purpose is twofold: 1: to possibly work around corrupt lob indexes during unload table. 2: unload lob segments (for deleted lobs or lob segments without lob index or parent table) Meaning of the fields in SCANNEDLOBPAGE.dat: (segobj#, lobid, fat_page_no, version( wrap, base), ts#, file#, block#)
SCAN DUMP FILE  dump file name
        [ FROM  begin offset  ]
        [ UNTIL  end offset  ];

        Scans an export dump file to provide the
        create/insert statements and the offsets in the dump file.

SCAN LOB SEGMENT    storage clause ;
SCAN LOB SEGMENT FOR  table name  [.  column name] ;
        Scans the lob segment to produce LOBPAGE.dat information,
        but then for this segment only. Probably quicker and
        smaller. For partitioned objects use scan database.

SCAN TABLES;
        Uses SEG.dat and EXT.dat as input.
        Scans all tables in all data segments (a header block and at least one
        matching extent with at least 1 table).

SCAN EXTENTS;
        Uses SEG.dat and EXT.dat as input.
        All extents for which no corresponding segment header has been found.
        (Only useful if a tablespace is not complete, or a segment header
        is corrupt).

EXIT QUIT and EOF all cause DUL to terminate.

DDL ( DUL DESCRIPTION LANGUAGE ) DESCRIPTION

Rules for UNLOAD EXTENT and UNLOAD TABLE:

Extent Map

UNLOAD TABLE requires an extent map. In 99.99% of the cases the extent map in the segment header is available. In the rare 0.01% where the segment header is lost, an extent map can be built with the scan database command. This self-built extent map will ONLY be used during an unload if the parameter USE_SCANNED_EXTENT_MAP is set to TRUE.

All data blocks have some ID of the segment they belong to. But there is a fundamental difference between V6 and V7. Data blocks created by Oracle version 6 have the address of the segment header block. Data blocks created by Oracle7 have the segment object id in the header.

Column Specification

The column definitions must be specified in the order the columns are stored in the segment, that is, ordered by col$.segcol#. This is not necessarily the same order as the columns were specified in the create table statement. Cluster columns are moved to the front, longs to the end. Columns added to the table with the alter table command are always stored last.

Unloading a single extent

UNLOAD EXTENT can be used to unload one or more adjacent blocks. The extent to be unloaded must be specified with the STORAGE clause. To specify a single extent use: STORAGE ( EXTENTS( FILE fno BLOCK bno BLOCKS #blocks) ), where FILE and BLOCK specify the first block and BLOCKS the size of the extent.
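A concrete, hypothetical invocation that unloads a 16-block extent of a lost DEPT segment starting at file 4, block 1205 would be:

    UNLOAD EXTENT dept ( deptno NUMBER, dname CHAR, loc CHAR )
           STORAGE ( EXTENTS( FILE 4 BLOCK 1205 BLOCKS 16 ) );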

DUL specific column types

There are two extra DUL specific data types:

  1. IGNORE: the column will be skipped as if it was not there at all.
  2. UNKNOWN: a heuristic guess will be made for each column.

In SQL*Loader mode there are even more DUL specific data types:

  1. HEXRAW: column is HEX dumped.
  2. LOBINFO: show some information from LOB locators .
  3. BINARY NUMBER: Machine word as used in a LOB index.
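Put together, a hypothetical unload statement using these DUL-specific types might read (OBJNO 1501 reuses the object id from the Oracle7 example earlier; the table and column names are placeholders):

    UNLOAD TABLE scott.t1 ( c1 NUMBER, c2 IGNORE, c3 UNKNOWN, c4 HEXRAW )
           STORAGE ( OBJNO 1501 );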

Identifying USER$, OBJ$, TAB$ and COL$

DUL uses the same bootstrap procedure as the rdbms. That is, it uses the root dba from the system datafile header to locate the bootstrap$ table. Depending on the version, this root dba is either the location of the compatibility segment containing the bootstrap$ address or, for the newer versions, the address of the bootstrap$ table itself. The bootstrap$ table is unloaded and its contents are parsed to find the first four tables (USER$, OBJ$, TAB$ and COL$). The other tables are unloaded based on information in these first four.

DESCRIPTION OF SCAN COMMANDS

SCAN TABLES and SCAN EXTENTS scan for the same information and produce similar output. ALL columns of ALL rows are inspected. For each column the following statistics are gathered:

  • How often the column is seen in a data block.
  • The maximum internal column length.
  • How often the column IS NULL.
  • How often the column consists of at least 75% printable ascii.
  • How often the column consists of 100% printable ascii.
  • How often the column is a valid oracle number.
  • How often the column is a nice number. (not many leading or trailing zeros)
  • How often the column is a valid date.
  • How often the column is a possible valid rowid.

These statistics are combined and a column type is suggested. Using this suggestion five rows are unloaded to show the result. These statistics are dumped to two files (seen_tab.dat and seen_col.dat). There are SQL*Loader and SQL*Plus scripts available to automate a part of the identification process. (Currently known as the getlost option).

DESCRIBE

There is a describe command. It will show the dictionary information for the table, available in DUL’s dictionary cache.

DUL STARTUP SEQUENCE

During startup DUL goes through the following steps:

  • the parameter file “init.dul” is processed.
  • the DUL control file (default “control.dul”) is scanned.
  • Try to load dumps of the USER$, OBJ$, TAB$ and COL$ if available into DUL’s data dictionary cache.
  • Try to load seg.dat and col.dat.
  • Accept DDL-statements or run the DDL script specified as first arg.

DUL parameters to be specified in init.dul:

ALLOW_TRAILER_MISMATCH
BOOLEAN
Strongly discouraged to use; will seldom produce more rows. Use only if you fully understand what it means and why you want it. Skips the check for the correct block trailer. The blocks failing this test are split or corrupt. But it saves you the trouble of patching some blocks.
ALLOW_DBA_MISMATCH
BOOLEAN
Strongly discouraged to use; will seldom produce more rows. Use only if you fully understand what it means and why you want it. Skips the check for the correct block address. The blocks failing this test are probably corrupt. But it saves you the trouble of patching some blocks.
ALLOW_OTHER_OBJNO
BOOLEAN
If your dictionary is older than your datafiles then the data object id’s can differ for truncated tables. With this parameter set to true it will issue a warning but use the value from segment header. All other blocks are fully checked. This is for special cases only.
ASCII2EBCDIC
BOOLEAN
Must (var)char fields be translated from EBCDIC to ASCII? (For unloading an MVS database on an ASCII host.)
BUFFER
NUMBER (bytes)
Row output buffer size used in both export and SQL*Loader mode. Each row is first stored in this buffer. Only complete rows without errors are written to the output file.
COMPATIBLE
NUMBER
Database version; valid values are 6, 7, 8 or 9. This parameter must be specified.
CONTROL_FILE
TEXT
Name of the DUL control file (default: “control.dul”).
DB_BLOCK_SIZE
NUMBER
Oracle block size in bytes (Maximum 32 K)
DC_COLUMNS
NUMBER
DC_OBJECTS
NUMBER
DC_TABLES
NUMBER
DC_USERS
NUMBER
Sizes of dul dictionary caches. If one of these is too low the cache will be automatically resized.
EXPORT_MODE
BOOLEAN
EXPort like output mode or SQL*Loader format
FILE
TEXT
Base for (dump or data) file name generation. Use this on 8.3 DOS like file systems
FILE_SIZE_IN_MB
NUMBER (Megabytes)
Maximum dump file size. Dump files are split into multiple parts. Each file has a complete header and can be loaded individually.
LDR_ENCLOSE_CHAR
TEXT
The character to enclose fields in SQL*Loader mode.
LDR_PHYS_REC_SIZE
NUMBER
Physical record size for the generated loader datafile.
LDR_PHYS_REC_SIZE = 0 No fixed records, each record is terminated with a newline.
LDR_PHYS_REC_SIZE > 2: Fixed record size.
MAX_OPEN_FILES
Maximum # of database files that are concurrently kept open at the OS level.
OSD_BIG_ENDIAN_FLAG
Byte order in machine word. Big Endian is also known as MSB first. DUL sets the default according to the machine it is running on. For an explanation of why this is called Big Endian, you should read Gulliver's Travels.
OSD_DBA_FILE_BITS
File Number Size in DBA in bits. Or to be more precise the size of the low order part of the file number.
OSD_FILE_LEADER_SIZE
bytes/blocks added before the real oracle file header block
OSD_C_STRUCT_ALIGNMENT
C Structure member alignment (0,16 or 32). The default of 32 is correct for most ports.
OSD_WORD_SIZE
Size of a machine word always 32, except for MS/DOS(16)
PARSE_HEX_ESCAPES
Boolean default FALSE
Use \\xhh hex escape sequences in strings while parsing. If set to true then strange characters can be specified using escape sequences. This feature is also for specifying multi-byte characters.
USE_SCANNED_EXTENT_MAP
BOOLEAN
Use the scanned extent map in ext.dat when unloading a table. The normal algorithm uses the extent map in the segment header. This parameter is only useful if some segment headers are missing or incorrect.
WARN_RECREATE_FILES
BOOLEAN (TRUE)
Set to FALSE to suppress the warning message if an existing file is overwritten.
WRITABLE_DATAFILES
BOOLEAN (FALSE)
Normal use of DUL will only read the database files. However the UPDATE and the SCAN RAW DEVICE will write as well. The parameter is there to prevent accidental damage.

SAMPLE init.dul :

# sample init.dul configuration parameters
# these must be big enough for the database in question
# the cache must hold all entries from the dollar tables.
dc_columns = 200000
dc_tables = 10000
dc_objects = 10000
dc_users = 40

# OS specific parameters
osd_big_endian_flag = false
osd_dba_file_bits = 10
osd_c_struct_alignment = 32
osd_file_leader_size = 1

# database parameters
db_block_size = 8k

# loader format definitions
LDR_ENCLOSE_CHAR = "
LDR_PHYS_REC_SIZE = 81

Configuring the port dependent parameters

Starting from rdbms version 10G, osd parameters are easy to configure. Typically all parameters can be used at their defaults. The only one that might need attention is osd_big_endian_flag, when doing a cross-platform unload where the original database platform differs from the current machine. If osd_big_endian_flag is set incorrectly, it is detected at startup during file header inspection.

Collection of known Parameters

For pre-10G databases there is a list of known parameters in the osd wiki page (list of osd, Operating System Dependent, parameters) for almost every platform. If your platform is not in the list you can use the suggestions below to determine the parameters. (And then please inform me so I can add them to the list.)

osd_big_endian_flag

big endian or little endian (byte order in machine words): HP, SUN and mainframes are generally big endian: OSD_BIG_ENDIAN_FLAG = TRUE. DEC and Intel platforms are little endian: OSD_BIG_ENDIAN_FLAG = FALSE. The default is correct for the platform where DUL is running on.

There is no standard trick for this, the following might work on a unix system:

  echo dul | od -x
  If the output is like:
     0000000 6475 6c0a
     0000004
  You are on a big endian machine (OSD_BIG_ENDIAN_FLAG=TRUE).

  If you see:
     0000000 7564 0a6c
     0000004
  This is a little endian machine (OSD_BIG_ENDIAN_FLAG=FALSE).
osd_dba_file_bits

The number of bits in a dba used for the low order part of the file number. Perform the following query:

  SQL> select dump(chartorowid('0.0.1')) from dual;
  
  Typ=69 Len=6: 8,0,0,0,0,0    ->       osd_dba_file_bits =  5 (SCO)
  Typ=69 Len=6: 4,0,0,0,0,0    ->       osd_dba_file_bits =  6 (Sequent , HP)
  Typ=69 Len=6: 1,0,0,0,0,0    ->       osd_dba_file_bits =  8 (NCR,AIX)
  Typ=69 Len=6: 0,16,0,0,0,0   ->       osd_dba_file_bits = 12 (MVS)
  Typ=69 Len=10: 0,0,0,0,0,64,0,0,0,0      osd_dba_file_bits = 10 (Oracle8)       
OSD_C_STRUCT_ALIGNMENT

Structure layout in data file headers. 0: no padding between members in a C-struct (VAX/VMS only). 16: some Korean Ticom machines and MS/DOS. 32: structure members are member-size aligned (all others, including ALPHA/VMS). Check with the following query:

SELECT * FROM v$type_size
WHERE type IN ( 'KCBH', 'KTNO', 'KCBH', 'KTBBH', 'KTBIT', 'KDBH'
              , 'KTECT', 'KTETB', 'KTSHC') ;

In general osd_c_struct_alignment = 32 and the following output is expected:

K        KTNO     TABLE NUMBER IN CLUSTER                   1
KCB      KCBH     BLOCK COMMON HEADER                      20
KTB      KTBIT    TRANSACTION VARIABLE HEADER              24
KTB      KTBBH    TRANSACTION FIXED HEADER                 48
KDB      KDBH     DATA HEADER                              14
KTE      KTECT    EXTENT CONTROL                           44
KTE      KTETB    EXTENT TABLE                              8
KTS      KTSHC    SEGMENT HEADER                            8

8 rows selected.

For VAX/VMS and Netware ONLY osd_c_struct_alignment = 0 and this output is expected:

COMPONEN TYPE     DESCRIPTION                      SIZE
-------- -------- -------------------------------- ----------
K        KTNO     TABLE NUMBER IN CLUSTER                   1
KCB      KCBH     BLOCK COMMON HEADER                      20
KTB      KTBIT    TRANSACTION VARIABLE HEADER              23
KTB      KTBBH    TRANSACTION FIXED HEADER                 42
KDB      KDBH     DATA HEADER                              14
KTE      KTECT    EXTENT CONTROL                           39
KTE      KTETB    EXTENT TABLE                              8
KTS      KTSHC    SEGMENT HEADER                            7

8 rows selected.

If there is a different list this will require some major hacking and sniffing and possibly a major change to DUL. (Email Bernard.van.Duijnen@oracle.com)

osd_file_leader_size

Number of blocks/bytes before the oracle file header. Unix datafiles have an extra leading block (file size, block size, magic number). A large number (> 100) is interpreted as a byte offset; a small number is a number of oracle blocks.

Unix    :       osd_file_leader_size = 1
Vms     :       osd_file_leader_size = 0
Desktop :       osd_file_leader_size = 1 (or 512 for old personal oracle)
Others  :       Unknown ( Use Andre Bakker's famous PATCH utility to find out)
                An Oracle7 file header block starts with the pattern 0X0B010000.

You can add an additional byte offset in control.dul in the optional third field (for instance for AIX or DEC UNIX data files on raw device)

Control file syntax specification

A control file (default name "control.dul") is used to specify asm disks, block indexes and the data file names. The format of the control file has been extended.

Currently there are three types of specifications in the DUL control file, each entry on a separate line. The asm disks must precede the asm files.

  control_file_line ::= asm_disk_spec | file_piece_spec | block_index_spec

If COMPATIBLE is 10 or higher you can also specify asm disks. It is generally sufficient to specify the device name. All properties are automatically retrieved by header inspection. The full syntax is only needed when header inspection is impossible, that is, for disks with corrupt headers. The syntax is:

DISK  device name [  disk group options  ]

 disk group option  ::= GROUP  disk group name 
                           | DISK_NO  disk number in group 
                           | F1B1  File1 Block1 location

A block index is a way to access oracle blocks on corrupt file systems. In general a corrupt file system is not wiped out; it is not empty. Due to the specific layout of oracle blocks it is possible to detect data blocks and store their locations in the block index. See also the create block index command. A block_index_name is a normal identifier; it is used to construct a unique file name.

BLOCK INDEX  block_index_name

Each entry can contain a part of a datafile. The smallest unit is a single data block. This way it is possible to split datafiles that are too big for DUL into parts, where each part is smaller than 2GB.

In general it is sufficient to specify the file name. Even for a single block. If compatible is 10 or higher the file numbers and the tablespace numbers will be read from the file header.

If the specified details are different from the file header DUL will give a warning but use your specification. This is to be able to unload files with a corrupted header block. For debugging it is possible to dump the file header.

The optional extra leader offset is an extra byte offset that will be added to all lseek() operations for that datafile. This makes it possible to skip over the extra 4k block for some AIX raw devices, or the extra 64K on Tru64 raw devices.

  file_piece_spec ::= 
         [ [ tablespace_no ] relative_file_number]data_file_name
         [ optional extra leader offset ]
         [ startblock block_no ]
         [ endblock block_no ]

Examples

   # AIX version 7 example with one file on raw device
   1 /usr/oracle/dbs/system.dbf
   8 /dev/rdsk/data.dbf 4096
   # Oracle8 example with a datafile split in multiple parts, each part smaller than 2GB
   0  1 /fs1/oradata/PMS/system.dbf
   1  2 /tmp/huge_file_part1 startblock 1 endblock 1000000
   1  2 /tmp/huge_file_part2 startblock 1000001 endblock 2000000
   1  2 /mnt3/huge_file_part3 startblock 2000001 endblock 2550000
# ASM disks for two disk groups
disk /media/maxtor/asm/dgn1
disk /media/maxtor/asm/dgn2
disk /media/maxtor/asm/dgn3
disk /media/maxtor/asm/dgn4
disk /media/maxtor/asm/dgodd

# system datafile in the first asm disk group
+DGN/db102/datafile/system.257.621616979

# users datafile in a different disk group
+DGODD/db102/datafile/users.257.621616683

# a so called big file tablespace, use 1024 for the file#
 8 1024 /home/oracle/v102/dbs/bigfilets

# Or let DUL find out itself from the header
 /home/oracle/v102/dbs/bigfilets

# one tablespace with a different block size
/home/oracle/v102/dbs/ts16k.dbf block_size 16k

# or let DUL find out by header inspection
/home/oracle/v102/dbs/ts16k.dbf

Sample unload session: data dictionary usable for DUL

    1. create a suitable “init.dul”
    2. create a control.dul
    sqlplus /nolog
    connect / as sysdba
    startup mount
    set trimspool on pagesize 0 linesize 256 feedback off
    column name format a200
    spool control.dul
    select ts#, rfile#, name from v$datafile;
    exit
edit the result
(the query above is for Oracle8 and higher; for Oracle7 only the file names are needed:
   select name from v$datafile; )
    3. start DUL and bootstrap;
$ dul

Data UnLoader 10.2.1.16 - Oracle Internal Only - on Thu Jun 28 11:37:24 2007
with 64-bit io functions

Copyright (c) 1994 2007 Bernard van Duijnen All rights reserved.

 Strictly Oracle Internal use Only
DUL> bootstrap;
Probing file = 1, block = 377
. unloading table                BOOTSTRAP$      57 rows unloaded
DUL: Warning: Dictionary cache DC_BOOTSTRAP is empty
Reading BOOTSTRAP.dat 57 entries loaded
Parsing Bootstrap$ contents
DUL: Warning: Recreating file "dict.ddl"
Generating dict.ddl for version 10
 OBJ$: segobjno 18, file 1
 TAB$: segobjno 2, tabno 1, file 1
 COL$: segobjno 2, tabno 5, file 1
 USER$: segobjno 10, tabno 1, file 1
Running generated file "@dict.ddl" to unload the dictionary tables
. unloading table                      OBJ$   52275 rows unloaded
. unloading table                      TAB$    1943 rows unloaded
. unloading table                      COL$   59310 rows unloaded
. unloading table                     USER$      70 rows unloaded
Reading USER.dat 70 entries loaded
Reading OBJ.dat
 52275 entries loaded and sorted 52275 entries
Reading TAB.dat 1943 entries loaded
Reading COL.dat 59310 entries loaded and sorted 59310 entries
Reading BOOTSTRAP.dat 57 entries loaded
...
Some more messages for all the other TABLES
...
Database character set is WE8ISO8859P1
Database national character set is AL16UTF16
DUL> unload user SCOTT;
About to unload SCOTT's tables ...
. unloading table                       EMP      14 rows unloaded

Example unload session: data dictionary UNUSABLE for DUL

    1. create a suitable “init.dul” (See config guide)
    2. create a control.dul See above
    3. scan the database for segment headers and extents:
$ dul
UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:10:16 1995
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
DUL> scan database;
data file 1 20480 blocks scanned
data file 4 7680 blocks scanned
data file 5 512 blocks scanned
DUL>quit
    4. Restart DUL and scan the found tables for column statistics; this creates a huge amount of output:
echo scan tables \; | dul > scan.out&

[ many lines here]


Object id 1601 table number 0
UNLOAD TABLE T1601_0 ( C1 NUMBER, C2 UNKNOWN, C3 UNKNOWN, C4 NUMBER, C5 DATE
        , C6 NUMBER, C7 NUMBER, C8 NUMBER )
    STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));

Colno  Seen MaxIntSz Null% C75% C100 Num% NiNu% Dat% Rid%
    1    14        3    0%   0%   0% 100% 100%   0%   0%
    2    14        6    0% 100% 100% 100%  14%   0%  21%
    3    14        9    0% 100% 100% 100%  14%   0%   0%
    4    14        3    7%   0%   0% 100% 100%   0%   0%
    5    14        7    0%   0%   0%   0%   0% 100%   0%
    6    14        3    0%   0%   0% 100% 100%   0%   0%
    7    14        2   71%   0%   0% 100% 100%   0%   0%
    8    14        2    0%   0%   0% 100% 100%   0%   0%

"7369" "SMITH" "CLERK" "7902" "17-DEC-1980 AD 00:00:00" "800" "" "20"

"7499" "-0.000025253223" "SALESMAN" "7698" "20-FEB-1981 AD 00:00:00" "1600" "30+

0" "30"

"7521" "WARD" "SALESMAN" "7698" "22-FEB-1981 AD 00:00:00" "1250" "500" "30"

"7566" "JONES" "MANAGER" "7839" "02-APR-1981 AD 00:00:00" "2975" "" "20"

"7654" "MARTIN" "SALESMAN" "7698" "28-SEP-1981 AD 00:00:00" "1250" "1400" "30"

[ many more lines here ]

This looks familiar; use the above information and your knowledge of the emp table to compose:

UNLOAD TABLE emp ( empno number, ename char, job char, mgr number,
        hiredate date, sal number, comm number, deptno number)
 STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
    5. use this statement to unload emp:
$ dul
UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:46:33 1995
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Loaded 350 segments
Loaded 204 extents
Extent map sorted
DUL> UNLOAD TABLE emp ( empno number, ename char, job char, mgr number,
DUL 2> hiredate date, sal number, comm number, deptno number)
DUL 3> STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
. unloading table                       EMP      14 rows unloaded
DUL>quit
 

Example unload session: Incorrect init.dul Parameters

WRONG osd_dba_file_bits size

This can generate output similar to that below. Normally this should not happen, since you should create a demo database and check this parameter via the query documented in the DUL html pages.

The mismatch in DBA’s is only in the file number (first number in brackets) part. The second number, the block number, is correct.

Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:40:33 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
DUL: Warning: Block[1][2] DBA in block mismatch [4][2]
DUL: Warning: Bad cache layer header file#=1, block#=2

DUL: Warning: Block[1][3] DBA in block mismatch [4][3]
DUL: Warning: Bad cache layer header file#=1, block#=3

...........and etc..........

WRONG osd_file_leader_size

This may create output similar to below, but many other flavours are possible. In this case we are a fixed number of blocks off. The file number is correct, and the difference in the block numbers is constant:

Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:44:23 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.

DUL: Warning: Block[1][2] DBA in block mismatch [1][3]
DUL: Warning: Bad cache layer header file#=1, block#=2

DUL: Warning: Block[1][3] DBA in block mismatch [1][4]
DUL: Warning: Bad cache layer header file#=1, block#=3

...........and etc..........

WRONG osd_c_struct_alignment

This may generate output similar to the following:

Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:46:10 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
. unloading table OBJ$

DUL: Warning: file# 0 is out of range
DUL: Warning: Cannot read data block file#=0, block# = 262145
OS error 2: No such file or directory

DUL: Warning: file# 0 is out of range
DUL: Warning: Cannot read data block file#=0, block# = 262146
OS error 2: No such file or directory

...........and etc..........

WRONG db_block_size

The following output was generated when db_block_size was set too small. The correct value was 4096 but it was set to 2048. Normally the value for this parameter should be taken from the Oracle instance's init.ora file, in which case it will be set correctly.

Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Thu Sep 4 12:38:25 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
DUL: Warning: Block[1][2] DBA in block mismatch [513][1159680]
DUL: Warning: File=1, block 2: illegal block version 2
DUL: Warning: Block[1][2] Illegal block type[0]
DUL: Warning: Bad cache layer header file#=1, block#=2

DUL: Warning: Block[1][4] DBA in block mismatch [1][2]
DUL: Warning: File[1]Block[4]INCSEQ mismatch[90268!=0]
DUL: Warning: Bad cache layer header file#=1, block#=4

DUL: Warning: Block[1][6] DBA in block mismatch [1][3]
DUL: Warning: File[1]Block[6]INCSEQ mismatch[139591710!=86360346]
DUL: Warning: Bad cache layer header file#=1, block#=6

...........and etc..........

QUOTE MISSING

If you get the following error it is caused by the data dictionary tables "USER$, OBJ$, TAB$ and COL$" not being correctly generated. To fix this error, simply delete all the .dat and .ctl files created by dictv6.ddl or dictv7.ddl and restart.

Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:49:30 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
DUL: Error: Quote missing

Salvaging data from corrupt EXP dump files – UNEXP Tutorial

If you do not know anything about the structure of an EXP dump file this can be difficult. Here is a quick explanation. Apart from the file header, a dump file has MARKERS that identify the various sections. In each table section there will be SQL statements. The most interesting part is the create table statement, followed by the insert into table statement. The insert statement is directly followed by the bind information (number of columns, and for each column its type and bind length and a little more). Then come the actual columns. Each column is preceded by a two-byte length, followed by the actual column data. Several tricks are possible for longer columns. The end of the column data is marked by the special length marker 0xFFFF. There is no marker for the beginning of a row. Resynching after a corruption is trial and error. Corruptions are generally not immediately detectable. The format is slightly different for DIRECT export, so you will have to use the DIRECT option for DIRECT exports. The offset to be specified is the beginning of a row. In general the first one is directly behind the bind array, but for optimal flexibility you can start anywhere in the row data.

The first step is to scan the dump file to find the offsets and the sql statements. Each output line starts with the offset where the item is found.

DUL>  scan dump file expdat.dmp;
0: CSET: 1 (US7ASCII)                # Character set info from the header
3: SEAL EXPORT:V10.02.01             # the Seal - the exp version tag
20: DBA SYSTEM                       # exp done as SYSTEM
8461: CONNECT SCOTT                  # section for user SCOTT
8475: TABLE "EMP"
                                     # complete create table statement
8487: CREATE TABLE "EMP" ("EMPNO" NUMBER(4, 0), "ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9), "MGR" NUMBER(4, 0), "HIREDATE" DATE, "SAL" NUMBER(7, 2),
"COMM" NUMBER(7, 2), "DEPTNO" NUMBER(2, 0))  PCTFREE 10 PCTUSED 40
INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS" LOGGING NOCOMPRESS

                                     # Insert statement
8829: INSERT INTO "EMP" ("EMPNO", "ENAME", "JOB", "MGR", "HIREDATE",
"SAL", "COMM", "DEPTNO") VALUES (:1, :2, :3, :4, :5, :6, :7, :8)

                                     # BIND information
8957: BIND information for 8 columns
 col[  1] type 2 max length 22
 col[  2] type 1 max length 10 cset 31 (WE8ISO8859P1) form 1
 col[  3] type 1 max length 9 cset 31 (WE8ISO8859P1) form 1
 col[  4] type 2 max length 22
 col[  5] type 12 max length 7
 col[  6] type 2 max length 22
 col[  7] type 2 max length 22
 col[  8] type 2 max length 22
Conventional export                  # Conventional means NOT DIRECT

9003: start of table data            # Here begins the first row

Now build an unexp statement from the create table statement, the direct/conventional information, and the start of the column data.

UNEXP TABLE "EMP" ("EMPNO" NUMBER(4, 0), "ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9), "MGR" NUMBER(4, 0), "HIREDATE" DATE, "SAL" NUMBER(7, 2),
"COMM" NUMBER(7, 2), "DEPTNO" NUMBER(2, 0))
dump file expdat.dmp from 9003;

Unloaded 14 rows, end of table marker at 9670 # so we have our famous 14 rows

This builds the normal SQL*Loader file and a matching control file. One extra column is added in the output file; it reflects the status of the row. A 'P' means the row is Partial (some columns missing), 'R' means Resynch (the first row after a resynchronisation), and 'O' means Overlap (the previous row had errors, but the new row partly overlaps it).

Table Of Contents
~~~~~~~~~~~~~~~~~
1. Introduction
2. Using DUL
2.1 Create an appropriate init.dul file
2.2 Create the control.dul file
2.3 Unload the object information
2.4 Invoke DUL
2.5 Rebuild the database
3. How to rebuild object definitions that are stored in the data dictionary?
4. How to unload data when the segment header block is corrupted?
5. How to unload data when the file header block is corrupted?
6. How to unload data without the system tablespace?
7. Appendix A: Where to find the executables?
8. References

1. Introduction
~~~~~~~~~~~~~~~

This document explains how to use DUL rather than giving a full explanation of
Bernard's Data UnLoader capabilities.

This document is for internal use only and should not be given to customers at
any time; DUL should always be used by, or under the supervision of, a support
analyst.

DUL (Data UnLoader) is intended to retrieve data from the Oracle Database that
cannot be retrieved otherwise. It is not an alternative to the export utility
or SQL*Loader. The database may be corrupted, but an individual data block used
must be 100% correct. During all unloading, checks are made to ensure that
blocks are not corrupted and belong to the correct segment. If a corrupted
block is detected by DUL, an error message is printed in the loader file and to
standard output, but this does not terminate the unloading of the next row or
block.

2. Using DUL
~~~~~~~~~~~~

First you must retrieve the necessary information about the objects that exist
in the database; these statistics will be loaded into the DUL dictionary to
unload the database objects.

This information is retrieved from the USER$, OBJ$, TAB$ and COL$ tables that
were created at database creation time. They can be unloaded because the object
numbers of these tables are fixed, due to the rigid nature of sql.bsq. DUL can
find the information in the system tablespace; therefore the system tablespace
datafile(s) must be included in the control file. If these datafiles are not
present, see chapter 6.

2.1 Create an appropriate “init.dul” file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

REM Platform specific parameters (NT)
REM A List of parameters for the most common platforms can be obtained from
REM
osd_big_endian_flag=false
osd_dba_file_bits=10
osd_c_struct_alignment=32
osd_file_leader_size=1
osd_word_size = 32

REM Sizes of dul dictionary caches. If one of these is too low startup will
REM fail.
dc_columns=2000000
dc_tables=10000
dc_objects=1000000
dc_users=400
dc_segments=100000

REM Location and filename of the control file, default value is control.dul
REM in the current directory
control_file = D:\Dul\control_orcl.dul

REM Database blocksize, can be found in the init<SID>.ora file or can be
REM retrieved by doing "show parameter db_block_size" in server manager
REM (svrmgr23/30/l). Change this parameter to whatever the block size of
REM the crashed database is.
db_block_size=4096

REM Can/must be specified when the data must be in export/import format.
REM This will create a file suitable for use by the Oracle import utility,
REM although the generated file is completely different from a table mode
REM export generated by the EXP utility. It is a single table dump file
REM with only a create table structure statement and the table data.
REM Grants, storage clauses and triggers are not included in this dump file !
export_mode=true

REM The compatible parameter must be specified and can be either 6, 7 or 8
compatible=8

REM This parameter is optional and can be specified on platforms that do
REM not support long file names (e.g. 8.3 DOS) or when the file format that
REM DUL uses “owner_name.table_name.ext” is not acceptable. The dump files
REM will be something like dump001.ext, dump002.ext, etc in this case.
file = dump

A complete list can be obtained in the html section "DUL Parameters", although
this init.dul file will work in most cases and contains all the parameters
needed to successfully complete the unloading.

2.2 Create the “control.dul” file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A good knowledge of the logical tablespace and physical datafile
structure is needed, or you can run the following queries when the database
is mounted:

Oracle 6, 7
———–
> connect internal
> spool control.DUL
> select * from v$dbfile;
> spool off

Oracle 8
——–
> connect internal
> spool control.DUL
> select ts#, rfile#, name from v$datafile;
> spool off

Edit the spool file and change, if needed, the datafile locations, and strip
out unnecessary information like table headers, feedback lines, etc.
A sample control file looks something like this:

REM Oracle7 control file
1 D:\DUL\DATAFILE\SYS1ORCL.DBF
3 D:\DUL\DATAFILE\DAT1ORCL.DBF
7 D:\DUL\DATAFILE\USR1ORCL.DBF

REM Oracle8 control file
0 1 D:\DUL\DATAFILE\SYS1ORCL.DBF
1 2 D:\DUL\DATAFILE\USR1ORCL.DBF
1 3 D:\DUL\DATAFILE\USR2ORCL.DBF
2 4 D:\DUL\DATAFILE\DAT1ORCL.DBF

Note: Each entry can contain a part of a datafile. This can be useful when
you need to split datafiles that are too big for DUL, so that each
part is smaller than, for example, 2GB. For example:

REM Oracle8 with a datafile split into multiple parts, each part is
REM smaller than 1GB !
0 1 D:\DUL\DATAFILE\SYS1ORCL.DBF
1 2 D:\DUL\DATAFILE\USR1ORCL.DBF startblock 1 endblock 1000000
1 2 D:\DUL\DATAFILE\USR1ORCL.DBF startblock 1000001 endblock 2000000
1 2 D:\DUL\DATAFILE\USR1ORCL.DBF startblock 2000001 endblock 2550000

2.3 Unload the object information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the DUL utility with the appropriate ddl (Dul Description Language)
script. There are 3 scripts available to unload the USER$, OBJ$, TAB$ and
COL$ tables according to the database version.

Oracle6 :> dul8.exe dictv6.ddl
Oracle7 :> dul8.exe dictv7.ddl
Oracle8 :> dul8.exe dictv8.ddl

Data UnLoader: Release 8.0.5.3.0 – Internal Use Only – on Tue Jun 22 22:19:
Copyright (c) 1994/1999 Bernard van Duijnen All rights reserved.

Parameter altered
Session altered.
Parameter altered
Session altered.
Parameter altered
Session altered.
Parameter altered
Session altered.
. unloading table OBJ$ 2271 rows unloaded
. unloading table TAB$ 245 rows unloaded
. unloading table COL$ 10489 rows unloaded
. unloading table USER$ 22 rows unloaded
. unloading table TABPART$ 0 rows unloaded
. unloading table IND$ 274 rows unloaded
. unloading table ICOL$ 514 rows unloaded
. unloading table LOB$ 13 rows unloaded

Life is DUL without it

This will unload the data of the USER$, OBJ$, TAB$ and COL$ data dictionary
tables into SQL*Loader files; these cannot be turned into dump files in the
import format. The parameter export_mode = false is hardcoded into the
ddl scripts and cannot be changed to "true", since this would cause DUL
to fail with the error:

. unloading table OBJ$
DUL: Error: Column “DATAOBJ#” actual size(2) greater than length in column
definition(1)
………….etc……………

2.4 Invoke DUL
~~~~~~~~~~~~~~

Start DUL in interactive mode, or prepare a script that contains all the
ddl commands to unload the necessary data from the database. I will
describe in this document the most used commands; this is not a complete list
of the possible parameters that can be specified. A complete list can be found
in the section "DDL Description".

DUL> unload database;
=> this will unload all tables in the entire database (including SYS's tables)

DUL> unload user <username>;
=> this will unload all the tables owned by that particular user.

DUL> unload table <username.table_name>;
=> this will unload the specified table owned by that username

DUL> describe <owner_name.table_name>;
=> will show the table columns and their relative pointers into the
datafile(s), for the table owned by the specified user.

DUL> scan database;
=> Scans all blocks of all data files.
Two files are generated:
1: seg.dat information of found segment headers (index/cluster/table)
(object id, file number, and block number).
2: ext.dat information of contiguous table/cluster data blocks.
(object id(V7), file and block number of segment header (V6),
file number and block number of first block,
number of blocks, number of tables)

DUL> scan tables;
=> Uses seg.dat and ext.dat as input.
Scans all tables in all data segments (a header block and at least one
matching extent with at least 1 table).

2.5 Rebuild the database
~~~~~~~~~~~~~~~~~~~~~~~~

Create the new database and use import or SQL*Loader to restore the data
retrieved by DUL. Note that when you unloaded only the data, the table
structures, indexes, grants, PL/SQL and triggers will no longer exist in
the new database. To obtain an exact copy of the database as before, you
will need to rerun your creation scripts for the tables, indexes, PL/SQL,
etc.

If you don't have these scripts, then you will need to perform the steps
described in section 3 of this document.

3. How to rebuild object definitions that are stored in the data dictionary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You may want to rebuild PL/SQL (packages, procedures, functions or triggers),
grants, indexes, constraints or storage clauses (the old table structure) with
DUL. This can be done but is a little bit tricky. You need to unload the
relevant data dictionary tables using DUL and then load these tables into a
healthy database, making sure to use a user other than SYS or SYSTEM. Loading
the data dictionary tables of the crashed database into the healthy database's
dictionary could corrupt the healthy database as well.

Detailed explanation to retrieve, for example, PL/SQL packages / procedures /
functions from a corrupted database:

1) Follow the steps explained in the “Using DUL” section and unload the data
dictionary table “source$”

2) Create a new user into a healthy database and specify the desired default
and temporary tablespace.

3) Grant connect, resource, imp_full_database to the new user.

4) Import/load the table "source$" into the newly created schema:

e.g.: imp80 userid=newuser/passw file=d:\dul\scott_emp.dmp
log=d:\dul\impemp.txt full=y

5) You can now query the table <newuser.source$> to rebuild the PL/SQL
procedures/functions from the corrupted database. Scripts that generate such
PL/SQL creation scripts can be found on WebIv.
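
For example, a query along the following lines (a sketch; the object number
1234 and the schema name NEWUSER are illustrative) reassembles the source text
of one object from the imported SOURCE$ copy, one line per row:

SET HEADING OFF PAGESIZE 0
SELECT source
  FROM newuser.source$
 WHERE obj# = 1234          -- OBJ# as found in the unloaded OBJ$ data
 ORDER BY line;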

The same steps can be followed to recreate indexes, constraints and storage
parameters, or to regrant privileges to the appropriate users. Please note that
you always need to use a script of some kind that can recreate the objects and
that covers all the features of the crashed database's version. For example: if
the crashed database is version 7.3.4 and you have several bitmap indexes, a
script that supports only version 7.3.2 or earlier will not be able to recreate
the bitmap indexes successfully!

4. How to unload data when the segment header block is corrupted
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When DUL can't retrieve data block information in the normal way, it can scan
the database to create its own segment/extent map. Scanning the database is
necessary before the data can be unloaded from the datafiles.
(To illustrate this example I copied an empty block on top of the segment
header block.)

1) Create an appropriate "init.dul" (see 2.1) and "control.dul" (see 2.2) file.

2) Unload the table. This will fail and indicate that there is a corruption in
the segment header block:

DUL> unload table scott.emp;
. unloading table EMP
DUL: Warning: Block is never used, block type is zero
DUL: Error: While checking tablespace 6 file 10 block 2
DUL: Error: While processing block ts#=6, file#=10, block#=2
DUL: Error: Could not read/parse segment header
0 rows unloaded

3) Run the scan database command:

DUL> scan database;
tablespace 0, data file 1: 10239 blocks scanned
tablespace 6, data file 10: 2559 blocks scanned

4) Indicate to DUL that it should use its own generated extent map rather than
the segment header information.

DUL> alter session set use_scanned_extent_map = true;
Parameter altered
Session altered.
DUL> unload table scott.emp;
. unloading table EMP 14 rows unloaded

5. How to unload data when the datafile header block is corrupted
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A corruption in the datafile header block is always reported at the moment you
open the database. This differs from a segment header block corruption (see
point 4), where the database can be opened successfully and the corruption is
reported only at the moment you query the table. DUL has no problem recovering
from such situations, although there are other ways of recovering, such as
patching the datafile header block.

The error you will receive looks something like :
ORACLE instance started.
Total System Global Area 11739136 bytes
Fixed Size 49152 bytes
Variable Size 7421952 bytes
Database Buffers 4194304 bytes
Redo Buffers 73728 bytes
Database mounted.
ORA-01122: database file 10 failed verification check
ORA-01110: data file 10: ‘D:\DATA\TRGT\DATAFILES\JUR1TRGT.DBF’
ORA-01251: Unknown File Header Version read for file number 10

6. How to unload data without the system tablespace
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the datafiles of the system tablespace are not available, the unload can
still continue, but the object information can't be retrieved from the data
dictionary tables USER$, OBJ$, TAB$ and COL$. So owner names, table names and
column names will not be loaded into the DUL dictionary. Identifying the tables
can be an overwhelming task, and a good knowledge of the RDBMS internals is
needed here. First of all you need a good knowledge of your application and its
tables. Column types can be guessed by DUL, but table and column names will be
lost.

Any old system tablespace from the same database (it may be weeks old) can be a
great help!

1) Create the "init.dul" file and the "control.dul" file as explained in the
above steps 1 and 2. In this case the control file will contain all the
datafiles from which you want to restore, but it doesn't require the system
tablespace information.

2) Then invoke DUL and type the following command:

DUL> scan database;
data file 6 1280 blocks scanned

This will build the extent and segment map. The DUL command interpreter will
probably be terminated as well.

3) Reinvoke the DUL command interpreter and do the following:

Data UnLoader: Release 8.0.5.3.0 – Internal Use Only – on Tue Aug 03 13:33:

Copyright (c) 1994/1999 Oracle Corporation, The Netherlands. All rights res
Loaded 4 segments
Loaded 2 extents
Extent map sorted
DUL> alter session set use_scanned_extent_map = true;
DUL> scan tables; (or scan extents;)

Scanning tables with segment header

Oid 1078 fno 6 bno 2 table number 0

UNLOAD TABLE T_O1078 ( C1 NUMBER, C2 UNKNOWN, C3 UNKNOWN )
STORAGE ( TABNO 0 EXTENTS( FILE 6 BLOCK 2));
Colno Seen MaxIntSz Null% C75% C100 Num% NiNu% Dat% Rid%
1 4 2 0% 0% 0% 100% 100% 0% 0%
2 4 10 0% 100% 100% 100% 0% 0% 0%
3 4 8 0% 100% 100% 100% 0% 0% 50%
“10” “ACCOUNTING” “NEW YORK”
“20” “RESEARCH” “DALLAS”
“30” “SALES” “CHICAGO”
“40” “OPERATIONS” “BOSTON”

Oid 1080 fno 6 bno 12 table number 0

UNLOAD TABLE T_O1080 ( C1 NUMBER, C2 UNKNOWN, C3 UNKNOWN, C4 NUMBER,
C5 DATE, C6 NUMBER, C7 NUMBER, C8 NUMBER )
STORAGE ( TABNO 0 EXTENTS( FILE 6 BLOCK 12));
Colno Seen MaxIntSz Null% C75% C100 Num% NiNu% Dat% Rid%
1 14 3 0% 0% 0% 100% 100% 0% 0%
2 14 6 0% 100% 100% 100% 0% 0% 21%
3 14 9 0% 100% 100% 100% 0% 0% 0%
4 14 3 7% 0% 0% 100% 100% 0% 0%
5 14 7 0% 0% 0% 0% 0% 100% 0%
6 14 3 0% 0% 0% 100% 100% 0% 0%
7 14 2 71% 0% 0% 100% 100% 0% 0%
8 14 2 0% 0% 0% 100% 100% 0% 0%
“7369” “SMITH” “CLERK” “7902” “17-DEC-1980 AD 00:00:00″ “800” “” “20”
“7499” “ALLEN” “SALESMAN” “7698” “20-FEB-1981 AD 00:00:00″ “1600” “300” “30”
“7521” “WARD” “SALESMAN” “7698” “22-FEB-1981 AD 00:00:00″ “1250” “500” “30”
“7566” “JONES” “MANAGER” “7839” “02-APR-1981 AD 00:00:00″ “2975” “” “20”
“7654” “MARTIN” “SALESMAN” “7698” “28-SEP-1981 AD 00:00:00″ “1250” “1400” “30”

Note: it might be best to redirect the output to a logfile, since
commands like "scan tables" can produce a lot of output.
On Windows NT you can use the following command:
C:\> dul8 > c:\temp\scan_tables.txt
scan tables;
exit;

4) Identify the lost tables from the output of step 3; if you look carefully at
the output above, you will notice that the unload syntax is already given,
but the table name will be of the format T_O<objectno> and the column
names will be of the format C<no>; the datatypes will not be an exact match
of the original datatypes.

Look especially for strings like "Oid 1078 fno 6 bno 2 table number 0", where:
oid = object id, used to unload the object
fno = (data)file number
bno = block number

5) Unload the identified tables with the “unload table” command :

DUL> unload table dept (deptno number(2), dname varchar2(14),
loc varchar2(13)) storage (OBJNO 1078)
Unloading extent(s) of table DEPT 4 rows.

Oracle Database 11.2.0.1 certification on AIX 7.1

Oracle Database 11.2.0.1 is certified on the AIX 7.1 operating system.

However, the following three APARs (IBM OS patches) must be installed on AIX 7.1:

  • IZ87216
  • IZ87564
  • IZ89165

 


 

 

Oracle Database 11.2.0.1.0 with IBM AIX on POWER Systems (64-bit) 7.1

Product: For general information relating to certification for the Oracle Database product, including virtualization, interoperability, binary compatibility, general release and patch set information, see Core Database Certification Information (Doc ID 1306539.1).

Platform: For details about certification of all Oracle Database releases on IBM AIX on Power, click here.

ACFS requires a minimum OS level of AIX 6.1 TL4 SP2; AIX 7.1 is not supported at this time. For more details please review the Oracle documentation.

Oracle Database products are tested and certified on the AIX 5, AIX 6 and AIX 7 operating systems. Customers must install the following APARs (IBM OS patches) on AIX 7.1:

  • IZ87216
  • IZ87564
  • IZ89165

For further details on minimum software versions and patch requirements, refer to 282036.1

Certification: For details specific to the certification of Oracle Database Release 11.2 on IBM AIX on Power, click here.

The following notes apply to release 11.2.0.1.0 of Oracle Database:

ACFS: Oracle Cloud File System (ACFS) certification details are listed under the “Oracle Cloud File System” product

 

Resolving an Oracle Database That Cannot Be Opened

The situations that leave an Oracle database unable to open fall roughly into these categories:

  • Improper parameter settings
  • Corrupted control files
  • Corrupted redo log files
  • Corrupted datafile headers
  • A corrupted data dictionary
  • Corrupted UNDO
  • Problems encountered while SMON rolls back transactions

If you cannot resolve it yourself, the ParnassusData professional Oracle database repair team can help you recover!

ParnassusData professional database repair team

Service hotline: 13764045638   QQ: 47079569   Email: service@parnassusdata.com

 

Different errors, such as ORA-00600/ORA-07445, call for different countermeasures:

 

Data block corruption in Oracle arises in many different ways, but its symptoms generally fall into the following categories:

  • ORA-01578 errors
  • ORA-600[61xx] errors
  • ORA-600[3339] or ORA-600[3398]
  • ORA-600[2130], ORA-600[2845], ORA-600[4147] errors, and so on
  • SELECT queries returning corrupted data

 

Diagnosing this kind of Oracle data block corruption follows a few standard steps:

1. If the database is still open, determine the datafile number and block number of the corrupted block and locate the specific object (it may be a table or an index). Using the variables reported by the ORA-1578 or ORA-600 error, run the following SQL to locate it:

 

SELECT tablespace_name, segment_type, owner, segment_name
FROM dba_extents
WHERE file_id = &fileid
and &blockid between block_id AND block_id + blocks - 1;

 

2. What to do next depends on the SEGMENT_TYPE obtained in the previous step. The following segment types can be rebuilt:

  • indexes
  • tables whose data can be re-obtained, or tables that can be rebuilt
  • rollback segments, other than the SYSTEM rollback segment
  • sort segments
  • temporary tables

 

 

3. If the segment is none of the types listed in step 2, pay attention to the following:

  • Is the database in archivelog mode?
  • Is there backup data for the table, including export/sqlldr output?
  • Does the table have an index on NOT NULL columns?
  • If such an index exists, is it UNIQUE?

 

4. Has this database had block corruption before? An experienced DBA can get a rough picture from the alert.log; if this kind of problem has occurred before, refer to the follow-up recommendations below.

 

5. If the database is running in archivelog mode, advise keeping a copy of the archived redo and the online logs for later diagnosis. If it is not, ask the user to back up all the online logs.

 

6. Where circumstances allow, set events 10210, 10211 and 10212 to capture the source of the error. If the onsite engineer suspects the problem is not caused by Oracle itself, dump the problematic data blocks and analyze them together with the OS, storage and volume manager logs. If in-memory corruption is suspected, consider _db_block_cache_protect; note that not every platform supports _db_block_cache_protect, and it costs a fair amount of performance.

 

7. In some cases it is necessary to ask the user to enable archivelog mode, so that if the problem recurs later the database is not left without an effective recovery path.

 

Evidence that must be collected

 

1. The Oracle trace and alert files. These are the starting point for diagnosing this class of problem; also check whether they report corruption in any other data blocks.

2. Dump the bad data block from the OS level:

Unix: dd if=badfile.dbf count=5 bs=2048 skip=75

 

 

Follow-up recommendations

 

1. While analyzing the traces or redo log dumps, it is necessary to manage the user's expectations and convey the following:

  • We are helping to determine the cause, not deciding how to repair these corrupt blocks
  • We are examining the evidence, but it will not necessarily support a conclusive answer

 

 

2. Sometimes a data block is corrupted in memory, for example ORA-600[3398]. To verify such cases you can:

  • analyze table X validate structure cascade;
  • alter system flush buffer_cache;
  • dump the data block from the OS level and analyze it

 

 

Follow-up measures

 

1. Look for the underlying pattern, for example:

  • all the corruption occurs only on one raw device, disk or controller
  • a bad block appears every 4 blocks
  • the block itself is intact but appears in the wrong location
  • part of the block is healthy, but the rest is incorrect

 

2. Rebuild the table while bypassing the corrupted blocks:

Use event 10231 at level 10 to run a CTAS with a full table scan, as sketched below.

Construct ROWIDs to avoid touching the corrupted blocks; see "[Data Recovery] Using constructed ROWIDs to bypass ORA-1578, ORA-8103, ORA-1410 and other logical/physical bad-block problems without a backup".
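
A minimal sketch of the event-10231 CTAS approach (the table names T_CORRUPT
and T_SALVAGED are illustrative; event 10231 makes full table scans skip
blocks that fail the block checks):

ALTER SESSION SET EVENTS '10231 trace name context forever, level 10';
CREATE TABLE t_salvaged AS SELECT * FROM t_corrupt;  -- full-scan CTAS skips the bad blocks
ALTER SESSION SET EVENTS '10231 trace name context off';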

 

3. Enable events 10210, 10211 and 10212 and update the data blocks to pin down the details of the corruption further, and consider using event 10231.

 

Other tools

 

Other tools that may be used include dul, oranum, orapatch, bbed and so on; these are all Oracle internal tools.

A MySQL Guide for Database Administrators and Developers

Author: Diana Gray, Principal Curriculum Product Manager, Oracle University

Article URL: https://www.askmac.cn/archives/mysql-ocp-z2.html

 

 

MySQL is the most trusted and relied-upon open source database platform on the market today. Nine of the world's ten most popular, highest-traffic websites run on MySQL, whose strength lies in its breadth of use across heterogeneous platforms and application stacks.

MySQL is known for high performance, reliability and ease of use. Now comprehensively refined and improved, MySQL 5.6 helps innovative database administrators and developers build and deploy the next generation of web, embedded and Cloud/SaaS/DaaS applications on the latest generation of development frameworks and hardware platforms.

Building next-generation web applications and services

Without question, MySQL 5.6 is a more capable MySQL. It improves every functional area of the database kernel, including:

  • An improved InnoDB storage engine, for greater transaction-processing capability
  • An improved Optimizer, for better query execution times and diagnostics
  • Better application availability through online DDL/schema changes
  • Greater developer flexibility through NoSQL access via the Memcached API to InnoDB
  • Improved replication, for higher performance and self-healing distributed deployments
  • Improved security, for worry-free application deployment
  • An improved Performance Schema, for better performance measurement

Newly released MySQL courses and certification exams

The "MySQL for Database Administrators" and "MySQL for Developers" training courses align with the content of the newly released GA-version "MySQL 5.6 Developer" and "MySQL 5.6 Database Administrator" OCP certification exams.

Taking hands-on training and passing the certification exam fully validates your professional knowledge.

Prove your expertise with MySQL, the world's most popular open source database

Article URL: https://www.askmac.cn/archives/mysql-ocp-z1.html

 

 

Oracle Certification has released new MySQL 5.6 certification exams: MySQL 5.6 Developer (1Z0-882) and MySQL 5.6 Database Administrator (1Z0-883).

MySQL 5.6 improves nearly every area of the product, including performance and scalability, InnoDB, the Optimizer, Replication and the Performance Schema. The release also adds NoSQL access to InnoDB, greatly increasing developer flexibility.

The Oracle Certified Professional, MySQL 5.6 Developer (OCP) and Oracle Certified Professional, MySQL 5.6 Database Administrator (OCP) certifications are designed for professionals with the comprehensive skills and tool knowledge to work proficiently with the MySQL database server and to write applications against it. Earning the certification shows that you have mastered the world's most popular open source database, and helps you stand out from the competition.

Training is not a prerequisite for these certifications, but Oracle University's training resources can help you prepare much more thoroughly for the exams.

  • The MySQL for Developers course is strongly recommended preparation for the Developer exam. It covers all the knowledge, skills and tools needed to write applications against the MySQL database server.
  • The MySQL for Database Administrators course is strongly recommended preparation for the DBA certification exam. It covers using all of MySQL's features to get the most out of web, cloud and embedded applications.

Register now for exam 1Z0-882 or 1Z0-883 at pearsonvue.com/oracle

Quick links:

Oracle Optimizer Hint Classification by Release

Optimizer mode hints:
  ALL_ROWS (9iR1–11gR2), FIRST_ROWS(n) (9iR1–11gR2), CHOOSE (9iR1–9iR2),
  RULE (9iR1–10gR1), OPTIMIZER_FEATURES_ENABLE (10gR2 onward)

Access path hints:
  FULL, CLUSTER, HASH, INDEX, NO_INDEX, INDEX_ASC, INDEX_COMBINE, INDEX_JOIN,
  INDEX_DESC, INDEX_FFS (all 9iR1–11gR2)
  ROWID, AND_EQUAL (9iR1–9iR2 only)
  NO_INDEX_FFS, INDEX_SS, NO_INDEX_SS, INDEX_SS_ASC, INDEX_SS_DESC (10gR1 onward)

Query transformation hints:
  USE_CONCAT, NO_EXPAND, REWRITE, MERGE, NO_MERGE, STAR_TRANSFORMATION, FACT,
  NO_FACT (all 9iR1–11gR2)
  NOREWRITE (9iR1–9iR2; renamed NO_REWRITE from 10gR1)
  EXPAND_GSET_TO_UNION (9iR2)
  NO_QUERY_TRANSFORMATION, NO_STAR_TRANSFORMATION, UNNEST, NO_UNNEST
  (10gR1 onward; UNNEST/NO_UNNEST are listed under "other hints" in 9i)

Join order hints:
  ORDERED (9iR1–11gR2)
  STAR (9iR1–9iR2)
  LEADING (10gR1 onward; in 9i listed under join operation hints)

Join operation hints:
  USE_NL, USE_MERGE, USE_HASH (all 9iR1–11gR2)
  NO_USE_NL, USE_NL_WITH_INDEX, NO_USE_MERGE, NO_USE_HASH (10gR1 onward)
  DRIVING_SITE, LEADING, HASH_AJ/MERGE_AJ/NL_AJ, HASH_SJ/MERGE_SJ/NL_SJ
  (9iR1–9iR2; from 10gR1 DRIVING_SITE is listed under "other hints" and
  LEADING under "join order hints")

Special hints:
  CHANGE_DUPKEY_ERROR_INDEX, IGNORE_ROW_ON_DUPKEY_INDEX, RETRY_ON_ROW_CHANGE
  (11gR1 onward)

Parallel execution hints:
  PARALLEL, PQ_DISTRIBUTE, PARALLEL_INDEX (all 9iR1–11gR2)
  NOPARALLEL (9iR1–9iR2; renamed NO_PARALLEL from 10gR1)
  NOPARALLEL_INDEX (9iR1–9iR2; renamed NO_PARALLEL_INDEX from 10gR1)

Other hints:
  APPEND, NOAPPEND, CACHE, NOCACHE, PUSH_PRED, NO_PUSH_PRED, PUSH_SUBQ,
  CURSOR_SHARING_EXACT (all 9iR1–11gR2)
  NO_PUSH_SUBQ, DYNAMIC_SAMPLING (9iR2 onward)
  UNNEST, NO_UNNEST (9iR1–9iR2 here; from 10gR1 under query transformation hints)
  ORDERED_PREDICATES (9iR1–9iR2)
  QB_NAME, DRIVING_SITE (10gR1 onward)
  SPREAD_MIN_ANALYSIS (10gR1; renamed MODEL_MIN_ANALYSIS from 10gR2)
  APPEND_VALUES (11gR2)

PRM-DUL success story: helping a government agency in Beijing recover an Oracle database on a Windows server with a failed hard disk

The database was version 11.2.0.1. A mechanical disk failure had left a large number of bad sectors, so the Oracle instance could not start and open the database: OPEN aborted with errors such as ORA-01115, ORA-01110, ORA-27070, OSD-04006 and O/S-Error:

 

ORA-01115: IO error reading block from file  (block # )
ORA-01110: data file 3: 'ORCL\UNDOTBS01.DBF'
ORA-27070: async read/write failed
OSD-04006: ReadFile() failure, unable to read from file
O/S-Error: (OS 23) Data error (cyclic redundancy check).

 

After loading all the datafiles directly in PRM-DUL's dictionary mode, the unreadable blocks on the bad sectors were simply bypassed, the data dictionary was loaded successfully, and the data in the user schemas was recovered.

 


 

 

If you cannot resolve it yourself, the ParnassusData professional Oracle database repair team can help you recover!

ParnassusData professional database repair team

Service hotline: 13764045638   QQ: 47079569   Email: service@parnassusdata.com

Oracle PRM is ParnassusData's independently developed Oracle database disaster recovery software, featuring a fully graphical interface and a simple, efficient workflow.

You are welcome to download and use Oracle PRM. Download: http://parnassusdata.com/sites/default/files/ParnassusData_PRMForOracle_3206.zip

PRM user manual: http://www.parnassusdata.com/sites/default/files/ParnassusData%20Recovery%20Manager%20For%20Oracle%20Database%E7%94%A8%E6%88%B7%E6%89%8B%E5%86%8C%20v0.3.pdf

 

Dr. Tsushima's Performance Lectures

Part 5: The Optimizer and Gathering Optimizer Statistics

This time I will discuss how to gather optimizer statistics. (Information gathered with Statspack and the like is different in kind, so here I will use "optimizer statistics" for the information used to generate optimal execution plans.) Recently I have been hearing the following question about optimizer statistics, so let me address it.

 

"Now that only the CBO remains, we have no choice but to gather optimizer statistics, and with many objects the gathering time is nothing to sneeze at. To shorten it we reduced the sample size and the gathering frequency, and as a result some execution plans are no longer optimal, which is a real headache. Please tell me how optimizer statistics should be gathered."

Since there is no method of which one can say "just do this and all will be well", I suspect many of you are struggling with the same thing. I have organized my thoughts on it here in the hope that they will serve as a reference.

 

First, what exactly is the optimizer? The CBO (cost-based optimizer) computes costs from optimizer statistics and executes the plan with the lowest cost. (By contrast, the RBO (rule-based optimizer) decides the execution plan directly from the SQL text. Earlier releases supported both, but now only the CBO is recommended.)

Optimizer statistics fall into the following kinds:

  1. Table statistics (number of rows, number of blocks, average row length, etc.)
  2. Column statistics (cardinality, number of NULLs, maximum and minimum values, histograms, etc. Without a histogram, the data distribution is assumed to be uniform between the minimum and maximum values.)
  3. Index statistics (number of leaf blocks, height of the tree, clustering factor and other measures of index effectiveness)
  4. System statistics (CPU performance, I/O performance, etc.)

 

From these, the CBO weighs its options (full scan or index scan, in what order and with which join methods, and so on) so as to minimize cost, that is, to access as few rows as possible. If the statistics are flawed, it cannot produce a good plan (for example, with a skewed data distribution like column A in the figure, it cannot tell whether the data is uniform unless a histogram has been gathered on the skewed part). Optimizer statistics are thus critically important information for performance.

 

[Figure: data distribution]

Columns A and B of the table above each have their own index. If the SQL statements 1 and 2 (shown in the original figure) are executed, then judging by selectivity, statement 1 is more efficient using the index on B and statement 2 is more efficient using the index on A. But if A's data distribution is assumed to be uniform, the index on A may be chosen for statement 1 as well.

 

So how should optimizer statistics be gathered? Basically, follow the guidelines described next. Because gathering can sometimes make performance worse, I will come back to that case later.

Use the DBMS_STATS package

The ANALYZE command remains only for backward compatibility, so use the DBMS_STATS package (it gathers accurate statistics efficiently, and the statistics history is managed automatically).
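
As a minimal sketch (the schema and table names SCOTT.EMP are illustrative),
gathering table statistics with DBMS_STATS looks like this:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- the 10g default
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);                        -- gather index statistics too
END;
/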

Choose a suitable gathering frequency for each object

Decide the gathering frequency from how much each object changes, and therefore how inaccurate its optimizer statistics become. At the same time, take care not to waste effort gathering statistics on static data. Since that alone is hard to act on, the usual approach is as follows.

For tables holding temporary or highly volatile data (tables from which large numbers of rows are frequently deleted, or which are truncated), rely on dynamic sampling: delete the statistics and then lock them, so that dynamic sampling is performed every time. With the following SQL we can confirm which objects have their statistics locked.

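The original screenshots are lost; the following is a sketch of the
delete-then-lock approach and of a query that lists the objects whose
statistics are locked (the table name WORK_QUEUE is illustrative):

EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => 'SCOTT', tabname => 'WORK_QUEUE');
EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SCOTT', tabname => 'WORK_QUEUE');

-- Which tables have locked statistics?
SELECT owner, table_name, stattype_locked
  FROM dba_tab_statistics
 WHERE stattype_locked IS NOT NULL;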

Use an appropriate sample size for table statistics

Gathering table statistics requires sort processing, which on a large table takes a very long time. We therefore set a sample size for tables (usually a 5% sample is sufficient).

From Oracle Database 10g the DBMS_STATS default is estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE (an estimated sample size that depends on the number of rows in the object), so I see no problem here. (The sample size used when statistics were gathered can be confirmed with SQL like the following.)

 

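The original screenshot is lost; a query along these lines (the owner name is
illustrative) shows the sample size that was actually used:

SELECT table_name, num_rows, sample_size,
       ROUND(sample_size / num_rows * 100, 1) AS pct_sampled
  FROM dba_tab_statistics
 WHERE owner = 'SCOTT'
   AND num_rows > 0;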

Gather complete statistics for indexes

Because an index is already sorted (no sort processing is needed), its statistics can be gathered completely (100%).

From Oracle Database 10g, optimizer statistics are gathered automatically when an index is created or rebuilt (in Oracle9i this was done by specifying the COMPUTE STATISTICS option at create or rebuild time), so there is no need to gather them again afterwards.

Here too, the Oracle Database 10g default for DBMS_STATS is cascade => DBMS_STATS.AUTO_CASCADE (Oracle decides whether index statistics need to be gathered and what the best sample size is), so I see no problem. Index optimizer statistics can be confirmed with SQL like the following.

 

 

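The original screenshot is lost; a query along these lines (owner name
illustrative) shows the index statistics:

SELECT index_name, blevel, leaf_blocks, clustering_factor,
       num_rows, sample_size, last_analyzed
  FROM dba_ind_statistics
 WHERE owner = 'SCOTT';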

Use histograms for columns whose data is skewed

A SQL statement whose WHERE clause filters on a column with a skewed data distribution may not get the optimal plan (as in the previous lecture, where searches for the family names Suzuki and Hayashi returned very different row counts and therefore could have different optimal plans; the actual data distribution has to be gathered).

The usual DBMS_STATS setting, method_opt => 'FOR ALL COLUMNS SIZE AUTO' (the default from Oracle Database 10gR2), should be fine. The number of histogram buckets can be confirmed with SQL like the following.
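
The original screenshot is lost; a query along these lines (owner and table
names illustrative) shows the histogram type and bucket count per column:

SELECT column_name, num_distinct, num_buckets, histogram
  FROM dba_tab_col_statistics
 WHERE owner = 'SCOTT'
   AND table_name = 'EMP';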

 

When a SQL statement's WHERE clause filters on several columns together, the per-column statistics can be misleading; depending on the situation, check the statistics on the column combination (see the multi-column statistics below).

In addition, Oracle Database 10g and later add new features that make statistics easier to work with. I introduce them below; please put them to good use.

 

  • (1) Automatic optimizer statistics gathering

From Oracle Database 10g, automatic optimizer statistics gathering is enabled by default. It automatically gathers statistics for:

objects whose optimizer statistics have not yet been gathered, and

objects whose optimizer statistics are stale (10% or more of the rows have changed).

The predefined windows gather statistics between 22:00 and 2:00 the next morning on Monday through Friday, and between 6:00 and 2:00 the next morning on Saturday and Sunday (in Oracle Database 10g the weekend window runs from Saturday 0:00 to Sunday 23:59:59); adjust the times to suit your system. The following SQL shows the automatic maintenance task windows, execution dates and status.

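The original screenshot is lost; on 11g, queries along these lines show the
maintenance windows and the recent runs of the automatic statistics job:

SELECT window_name, repeat_interval, duration
  FROM dba_scheduler_windows;

SELECT client_name, window_name, job_start_time, job_status
  FROM dba_autotask_job_history
 WHERE client_name = 'auto optimizer stats collection';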

 

At that time the dbms_stats.gather_database_stats_job_proc procedure is run (it is nearly the same as running dbms_stats.gather_database_stats(options => 'GATHER AUTO')). Because it uses the default degree of parallelism specified by each table's PARALLEL clause, adjust the target table's default parallelism if you want to shorten gathering time on very large tables. To check or change these defaults, use the get_prefs and set_*_prefs procedures (up to Oracle Database 10g, dbms_stats.get_param and dbms_stats.set_param were used). The SQL below is an example of changing and confirming them.

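The original screenshot is lost; a sketch of checking and changing the defaults
with the 11g preference API (schema and table names illustrative):

-- Current default degree of parallelism used when gathering statistics:
SELECT DBMS_STATS.GET_PREFS('DEGREE') FROM dual;

-- Change it for one table, then confirm:
EXEC DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'EMP', 'DEGREE', '4');
SELECT DBMS_STATS.GET_PREFS('DEGREE', 'SCOTT', 'EMP') FROM dual;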

 

(2) Dynamic sampling (initialization parameter OPTIMIZER_DYNAMIC_SAMPLING)

If optimizer statistics are missing, the statistics needed to build the execution plan are sampled automatically when the SQL statement is executed. (Unlike statistics stored in the dictionary, the sampled results are re-gathered once they are aged out of the shared pool, so if many SQL statements access the table, gather its statistics manually.)

By default from Oracle Database 10g (OPTIMIZER_DYNAMIC_SAMPLING = 2), every table without optimizer statistics is sampled dynamically (from 10g the number of blocks accessed for sampling is 64), and this is usually fine. (Complex SQL statements, however, may need a non-default setting; I will cover that separately.)

If you want a table to be dynamically sampled every time, delete its optimizer statistics and then lock them, as sketched earlier; then, as long as OPTIMIZER_DYNAMIC_SAMPLING is 2 or higher, the table is always sampled dynamically.


 

(3) Multi-column statistics

When several columns of a WHERE clause are used together as a condition, per-column statistics can be wrong. For example, with a composite index on c1 and c2 as in the SQL shown in the original figure, the choice between an index scan and a full scan cannot always be judged correctly from each column's statistics in isolation; unless the data distribution is determined by the combination of the two columns, the estimate will be wrong (you must estimate how many rows match the pair of values).

 

 

For such cases, gather the multi-column statistics (statistics on a column group) available from Oracle Database 11g (in Oracle Database 10g and earlier you had to compensate with hints). For example, if the c1 and c2 columns in the WHERE clause above are specified together, they can be gathered as shown below.

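The original screenshot is lost; a sketch of gathering a column-group statistic
on (c1, c2) with the 11g extended-statistics API (schema and table names
illustrative):

-- Create the column-group extension and gather statistics on it:
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SCOTT', 'TAB01', '(C1,C2)') FROM dual;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'TAB01',
    method_opt => 'FOR COLUMNS (C1,C2) SIZE AUTO');
END;
/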

 

 

  • When performance degrades

Optimizer statistics and the CBO are not perfect. (Statistics cannot always be completely accurate, and even with perfect statistics the CBO may not choose the best plan.) Gathering optimizer statistics can therefore sometimes make performance worse; when that happens, take the following countermeasures.

 


 

Gather more accurate optimizer statistics (increase the sample size and the number of histogram buckets)

If performance improves after gathering with a complete (100%) sample, keep gathering that way. (If the time this takes is a problem, reduce the sample size little by little, adjusting it so that the execution plans are not affected.)

Work around the problem with hints

Compare the execution plan with the one from before the regression and insert hints where they differ. For example, to use the index ix_tab01_c1, insert an INDEX hint as shown below (the part in red in the original figure).

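The original figure is lost; a sketch of such a hint (table, column and bind
names illustrative):

SELECT /*+ INDEX(t ix_tab01_c1) */ *
  FROM tab01 t
 WHERE c1 = :val;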

 

Sudden performance regressions like this during operation are troublesome, and I often hear that people simply do not want execution plans to change. Let me finish with that topic.

 

  • When you do not want execution plans to change

Because the CBO is in use, gathering optimizer statistics can change execution plans. If you absolutely do not want them to change, the following approaches can keep them fixed:

Hints (when the SQL text cannot be modified and hints cannot be embedded, use Plan Stability)

Plan Stability (stored outlines that fix an execution plan: a saved set of hints, created in advance, that is applied to the SQL statement at execution time)

Note that if the table's data distribution changes, a fixed plan can become inefficient, so use these only while the trend of the data distribution does not change (for example, index access on column c1 may be efficient now but inefficient after the distribution changes). If the data distribution does change, confirm that the current execution plan is still acceptable.

Because of such cases, Oracle Database 11g provides SQL Plan Management (SPM): it manages a history of execution plans and adopts only the most suitable one.

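A sketch of enabling SPM on 11g (the second parameter is TRUE by default):

-- Capture plans automatically and let the optimizer use accepted baselines:
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;
ALTER SYSTEM SET optimizer_use_sql_plan_baselines = TRUE;

-- Review the stored plan baselines:
SELECT sql_handle, plan_name, enabled, accepted
  FROM dba_sql_plan_baselines;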

 

  • Summary

This time we looked at gathering optimizer statistics from various angles. Grasping all of your data correctly is very difficult, so it is essential to test thoroughly in advance and then handle the exceptional cases individually.

If you have other questions, please feel free to ask; I look forward to them.

See you next time.

Oracle Database Technical Exchange and Knowledge Transfer

1. Database and SQL Language Fundamentals — 2 days, junior developers. Entry-level course covering database fundamentals and basic SQL, with an introduction to SQL application tuning principles, common performance problems and case studies.
2. PL/SQL Language Fundamentals — 2 days, junior and intermediate developers. PL/SQL overview, variable declarations, writing common SQL statements, conditional and loop constructs, composite data types, cursors and exception handling, plus PL/SQL coding experience and cases.
3. Advanced PL/SQL — 3 days, senior developers. Systematic coverage of advanced PL/SQL development: stored procedures, functions and packages, Oracle's main built-in packages, dynamic SQL, large object management, plus advanced usage experience and cases.
4. Database Logical Design and Modeling — 2 days, database designers, developers and DBAs. Database normalization theory; using SQL Developer Data Modeler with case studies to walk through data flow diagram and entity-relationship (E-R) design and raise design skills.
5. Database Physical Design — 2 days, database designers, developers and DBAs. The full Oracle physical design process: tablespaces, tables and indexes; partitioning and ASM designs suited to very large data volumes; 10g/11g new features; several real cases.
6. DBA Fundamentals — 3 days, junior and intermediate DBAs. Oracle architecture and day-to-day operations: object management, backup/recovery, basic security, network management, performance monitoring and analysis, and basic use of AWR, ADDM and related tools.
7. Advanced DBA — 3 days, senior DBAs. With many hands-on cases, advanced tuning at both the system and SQL levels, partitioning solutions, fault diagnosis methods and practical tips, and Oracle Maximum Availability Architecture (MAA) solutions.
8. Advanced SQL Application Tuning — 3 days, intermediate and senior DBAs and developers. Based on Oracle's tuning methodology with many cases: indexing strategy, optimizer internals, and applying tuning techniques to OLTP and OLAP systems.
9. Oracle Partitioning — 2 days, intermediate and senior DBAs and developers. Partitioned tables and indexes, applying partitioning, 11g partitioning new features, implementation experience and case studies.
10. Advanced Fault Diagnosis — 2 days, senior DBAs. Oracle diagnostic methodology; common tools (RDA, AWR, ADDM, ASH, OSW, database dump, hang analysis, diagcollect, CHM); handling corrupt blocks, database hangs and ORA-4031 memory errors; RAC diagnosis; many real cases.
11. Advanced Oracle 11g Application Tuning — 3 days, intermediate and senior DBAs and developers. 11g tuning features: advanced optimizer operations, automatic SQL tuning, SQL Tuning Advisor, SQL Access Advisor, SQL Plan Management, hint techniques, and many real tuning cases.
12. Oracle RAC Principles and Implementation — 3 days, senior DBAs. RAC architecture, installation, high-availability testing, operations experience, performance analysis and tuning; ASM principles and implementation; 11g RAC/ASM new features and cases.
13. Oracle OEM — 3 days, senior DBAs. OEM's three-tier EM Repository / Grid Console / Agent architecture, basic database management, online diagnostics and tuning packs, report design, and OEM implementation cases.
14. Oracle Database Security — 3 days, senior DBAs. Oracle security architecture; encryption, Oracle DB Vault, Oracle DB Mask, Oracle Audit Vault and other typical technologies, with implementation cases.
15. Data Warehouse Design and Modeling — 3 days, data warehouse designers, developers and administrators. Business and logical model design, multidimensional models, physical models, capacity planning, typical ETL techniques, summary management, and implementation cases.
16. Oracle Disaster Recovery — 2 days, senior DBAs. Data Guard architecture; implementing and operating physical and logical Data Guard; Data Guard tuning and 11g new features; several implementation cases.
17. 11g New Features — 3 days, database designers, developers and DBAs. Automatic SQL tuning, SQL Plan, Database Replay, SPA, new partitioning, SecureFiles, SQL Result Cache, RMAN new features, new security technology, and 11g implementation cases.
18. 12c New Features — 2 days, database designers, developers and DBAs. CDB/PDB concepts, architecture and administration; new information lifecycle management such as Heat Map and ADO; new data archiving techniques.
19. Oracle Advanced Topics — 3 days, database designers, developers and DBAs. 11g new features, 11g upgrade solutions, performance tuning, partitioning and Oracle disaster recovery solutions, with implementation cases.
20. Exadata — 2 days, senior DBAs. Exadata principles and architecture; installation and configuration; Exadata Database Machine and Exadata Storage Server operations and common diagnostics; backup and recovery; migration to Exadata; implementation cases.
21. Oracle TimesTen In-Memory Database — 2 days, senior DBAs. TimesTen fundamentals, administration, concurrency control and performance tuning, replication, high availability, and implementation cases.
22. Oracle Cloud Computing and Data Integration — 2 days, senior DBAs. Oracle's overall cloud strategy, database cloud platform (DBaaS), Oracle VM, cloud disaster recovery and security, and the main data integration and migration techniques, with cases.
 

 
