Does Oracle Goldengate support Parallel DML?

GoldenGate's basic working principle is to mine the redo logs to capture data changes in the database. However, if we use parallel DML to insert data, the direct-path inserts generate only a small amount of redo. Can OGG's log mining still correctly capture the data changes produced by such parallel DML? Let's test it:

SQL> select le.leseq "Current log sequence No",
  2         100 * cp.cpodr_bno / le.lesiz "Percent Full",
  3         (cpodr_bno - 1) * 512 "Current Offset",
  4         le.lesiz * 512 - cpodr_bno * 512 "Left space"
  5    from x$kcccp cp, x$kccle le
  6   where LE.leseq = CP.cpodr_seq
  7     and bitand(le.leflg, 24) = 8;

Current log sequence No Percent Full Current Offset Left space
----------------------- ------------ -------------- ----------
                    177   78.5112305       82324480   22532608

/* The query above shows the actual redo write position:
    "Current Offset" is the position the current logfile has been written to,
    and "Left space" is the space remaining in the current logfile.
    82324480 (Current Offset) + 22532608 (Left space) + 512 (redo header)
      = logfile size = le.lesiz * redo block size
*/
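The relationship above can be checked with a quick sketch (plain Python; the 512-byte redo block size matches the constant used in the x$ query, and `logfile_size` is a hypothetical helper, not anything Oracle provides):

```python
REDO_BLOCK = 512  # redo block size assumed by the x$kcccp/x$kccle query above

def logfile_size(current_offset, left_space, header=REDO_BLOCK):
    """Reconstruct the online logfile size from the queried values."""
    return current_offset + left_space + header

# values from the sequence 177 query output
print(logfile_size(82324480, 22532608))  # 104857600 bytes = 100MB
```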

SQL> alter system switch logfile;
System altered.

SQL> select le.leseq "Current log sequence No",
  2         100 * cp.cpodr_bno / le.lesiz "Percent Full",
  3         (cpodr_bno - 1) * 512 "Current Offset",
  4         le.lesiz * 512 - cpodr_bno * 512 "Left space"
  5    from x$kcccp cp, x$kccle le
  6   where LE.leseq = CP.cpodr_seq
  7     and bitand(le.leflg, 24) = 8;

Current log sequence No Percent Full Current Offset Left space
----------------------- ------------ -------------- ----------
                    179   .002441406           2048  104855040

/* The starting position is logfile sequence No 179 */

SQL> select le.leseq "Current log sequence No",
  2         100 * cp.cpodr_bno / le.lesiz "Percent Full",
  3         (cpodr_bno - 1) * 512 "Current Offset",
  4         le.lesiz * 512 - cpodr_bno * 512 "Left space"
  5    from x$kcccp cp, x$kccle le
  6   where LE.leseq = CP.cpodr_seq
  7     and bitand(le.leflg, 24) = 8;

Current log sequence No Percent Full Current Offset Left space
----------------------- ------------ -------------- ----------
                    180    58.277832       61108224   43748864

/* The conventional (non-parallel) DML insert generated 104855040 + 61108224 = ~158MB of redo */
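The redo generated across a log switch can be sketched as follows (a hypothetical helper reusing the 100MB logfile size derived earlier; any logs completely filled between the two samples would count in full):

```python
MB = 1024 * 1024

def redo_between(first_left_space, full_logs, last_offset, logfile_size=100 * MB):
    """Redo written between two samples of the x$ query:
    space consumed in the first log, plus any logs filled completely
    in between, plus the offset reached in the last log."""
    return first_left_space + full_logs * logfile_size + last_offset

# conventional insert: seq 179 (2048 bytes in) -> seq 180 (offset 61108224)
redo = redo_between(104855040, 0, 61108224)
print(round(redo / MB))  # ~158 MB
```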


SQL> truncate table tv;
Table truncated.

SQL> select le.leseq "Current log sequence No",
  2         100 * cp.cpodr_bno / le.lesiz "Percent Full",
  3         (cpodr_bno - 1) * 512 "Current Offset",
  4         le.lesiz * 512 - cpodr_bno * 512 "Left space"
  5    from x$kcccp cp, x$kccle le
  6   where LE.leseq = CP.cpodr_seq
  7     and bitand(le.leflg, 24) = 8;

Current log sequence No Percent Full Current Offset Left space
----------------------- ------------ -------------- ----------
                    180   60.6469727       63592448   41264640

/* The starting position is offset 63592448 in logfile sequence No 180 */

SQL> alter session enable parallel dml;
Session altered.

/* Enable parallel DML at the session level */

SQL> set autotrace on;
SQL> insert /*+ parallel(tv,4) */ into tv select * from sample;

3640772 rows created.
Execution Plan
----------------------------------------------------------
ERROR:
ORA-12838: cannot read/modify an object after modifying it in parallel
SP2-0612: Error generating AUTOTRACE EXPLAIN report
Statistics
----------------------------------------------------------
        111  recursive calls
       1168  db block gets
      17850  consistent gets
      17745  physical reads
      97944  redo size
        815  bytes sent via SQL*Net to client
        750  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
    3640772  rows processed

/* The autotrace statistics show the parallel insert generated only 97944 bytes of redo */

SQL> commit;
Commit complete.

SQL> select le.leseq "Current log sequence No",
  2         100 * cp.cpodr_bno / le.lesiz "Percent Full",
  3         (cpodr_bno - 1) * 512 "Current Offset",
  4         le.lesiz * 512 - cpodr_bno * 512 "Left space"
  5    from x$kcccp cp, x$kccle le
  6   where LE.leseq = CP.cpodr_seq
  7     and bitand(le.leflg, 24) = 8;

Current log sequence No Percent Full Current Offset Left space
----------------------- ------------ -------------- ----------
                    182   10.4882813       10997248   93859840

/* In fact the log switched from sequence 180 to 182; the actual redo generated is roughly 41264640 + 104857600 + 10997248 = ~150MB */

/* In other words, the redo figure autotrace reports for a parallel DML statement is far below
    what is actually written to the logs. Since the full redo stream is really there,
    the extract is able to capture all of the data changes caused by the parallel DML
*/

GGSCI (rh2.oracle.com) 59> stats load2
Sending STATS request to EXTRACT LOAD2 ...

Start of Statistics at 2010-12-16 20:17:35.

Output to /s01/new/ze:

Extracting from CLINIC.TV to CLINIC.TV:

*** Total statistics since 2010-12-16 20:17:24 ***
        Total inserts                           923555.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        923555.00

*** Daily statistics since 2010-12-16 20:17:24 ***
        Total inserts                           923555.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        923555.00

*** Hourly statistics since 2010-12-16 20:17:24 ***
        Total inserts                           923555.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        923555.00

*** Latest statistics since 2010-12-16 20:17:24 ***
        Total inserts                           923555.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        923555.00

End of Statistics.


GGSCI (rh2.oracle.com) 60> !
stats load2

Sending STATS request to EXTRACT LOAD2 ...

Start of Statistics at 2010-12-16 20:17:37.

Output to /s01/new/ze:

Extracting from CLINIC.TV to CLINIC.TV:

*** Total statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1090336.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1090336.00

*** Daily statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1090336.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1090336.00

*** Hourly statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1090336.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1090336.00

*** Latest statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1090336.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1090336.00

End of Statistics.


GGSCI (rh2.oracle.com) 61> !
stats load2

Sending STATS request to EXTRACT LOAD2 ...

Start of Statistics at 2010-12-16 20:17:39.

Output to /s01/new/ze:

Extracting from CLINIC.TV to CLINIC.TV:

*** Total statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1249284.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1249284.00

*** Daily statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1249284.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1249284.00

*** Hourly statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1249284.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1249284.00

*** Latest statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1249284.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1249284.00

End of Statistics.

/* The Total inserts count in the extract statistics keeps increasing, showing that
    the extract correctly captured all of the direct-path inserts triggered by the parallel INSERT
*/

Does Oracle Goldengate support Parallel DML?
The answer is clearly: Yes.

Error accessing PRODUCT_USER_PROFILE?

A customer emailed to report that after creating a new user, sqlplus shows the following warning when logging in as that user:

Error accessing PRODUCT_USER_PROFILE
Warning:  Product user profile information not loaded!
You may need to run PUPBLD.SQL as SYSTEM

PRODUCT_USER_PROFILE is a table that SQL*Plus uses to disable specific commands for specific users; it is normally created in the SYSTEM schema during database creation. Evidently this customer's database was created manually, and the pupbld.sql script that builds the table was skipped in the final script-running phase. The script lives in the $ORACLE_HOME/sqlplus/admin directory:

[maclean@rh2 admin]$ pwd
/s01/10gdb/sqlplus/admin

[maclean@rh2 admin]$ cat pupbld.sql
--
-- Copyright (c) Oracle Corporation 1988, 2003.  All Rights Reserved.
--
-- NAME
--   pupbld.sql
--
-- DESCRIPTION
--   Script to install the SQL*Plus PRODUCT_USER_PROFILE tables.  These
--   tables allow SQL*Plus to disable commands per user.  The tables
--   are used only by SQL*Plus and do not affect other client tools
--   that access the database.  Refer to the SQL*Plus manual for table
--   usage information.
--
--   This script should be run on every database that SQL*Plus connects
--   to, even if the tables are not used to restrict commands.

-- USAGE
--   sqlplus system/ @pupbld
--
--   Connect as SYSTEM before running this script

-- If PRODUCT_USER_PROFILE exists, use its values and drop it

DROP SYNONYM PRODUCT_USER_PROFILE;

CREATE TABLE SQLPLUS_PRODUCT_PROFILE AS
  SELECT PRODUCT, USERID, ATTRIBUTE, SCOPE, NUMERIC_VALUE, CHAR_VALUE,
  DATE_VALUE FROM PRODUCT_USER_PROFILE;

DROP TABLE PRODUCT_USER_PROFILE;
ALTER TABLE SQLPLUS_PRODUCT_PROFILE ADD (LONG_VALUE LONG);

-- Create SQLPLUS_PRODUCT_PROFILE from scratch

CREATE TABLE SQLPLUS_PRODUCT_PROFILE
(
  PRODUCT        VARCHAR2 (30) NOT NULL,
  USERID         VARCHAR2 (30),
  ATTRIBUTE      VARCHAR2 (240),
  SCOPE          VARCHAR2 (240),
  NUMERIC_VALUE  DECIMAL (15,2),
  CHAR_VALUE     VARCHAR2 (240),
  DATE_VALUE     DATE,
  LONG_VALUE     LONG
);

-- Remove SQL*Plus V3 name for sqlplus_product_profile

DROP TABLE PRODUCT_PROFILE;

-- Create the view PRODUCT_PRIVS and grant access to that

DROP VIEW PRODUCT_PRIVS;
CREATE VIEW PRODUCT_PRIVS AS
  SELECT PRODUCT, USERID, ATTRIBUTE, SCOPE,
         NUMERIC_VALUE, CHAR_VALUE, DATE_VALUE, LONG_VALUE
  FROM SQLPLUS_PRODUCT_PROFILE
  WHERE USERID = 'PUBLIC' OR USER LIKE USERID;

GRANT SELECT ON PRODUCT_PRIVS TO PUBLIC;
DROP PUBLIC SYNONYM PRODUCT_PROFILE;
CREATE PUBLIC SYNONYM PRODUCT_PROFILE FOR SYSTEM.PRODUCT_PRIVS;
DROP SYNONYM PRODUCT_USER_PROFILE;
CREATE SYNONYM PRODUCT_USER_PROFILE FOR SYSTEM.SQLPLUS_PRODUCT_PROFILE;
DROP PUBLIC SYNONYM PRODUCT_USER_PROFILE;
CREATE PUBLIC SYNONYM PRODUCT_USER_PROFILE FOR SYSTEM.PRODUCT_PRIVS;

-- End of pupbld.sql
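Note the predicate in the PRODUCT_PRIVS view above: `USERID = 'PUBLIC' OR USER LIKE USERID`. Because the match runs through LIKE, the USERID column may hold SQL wildcard patterns that cover groups of users. A rough Python sketch of that filtering (the `sql_like` and `visible_rows` helpers are hypothetical illustrations, not part of pupbld.sql; requires Python 3.7+ where `re.escape` leaves `%` and `_` unescaped):

```python
import re

def sql_like(value, pattern):
    """Approximate SQL LIKE: % matches any run of characters, _ matches one."""
    regex = re.escape(pattern).replace('%', '.*').replace('_', '.')
    return re.fullmatch(regex, value) is not None

def visible_rows(rows, current_user):
    """Rows of SQLPLUS_PRODUCT_PROFILE a given user would see via PRODUCT_PRIVS."""
    return [r for r in rows
            if r['userid'] == 'PUBLIC' or sql_like(current_user, r['userid'])]

rows = [{'userid': 'PUBLIC', 'attribute': 'HOST'},
        {'userid': 'SCOTT',  'attribute': 'INSERT'},
        {'userid': 'OPS$%',  'attribute': 'DELETE'}]
print([r['attribute'] for r in visible_rows(rows, 'OPS$MAC')])  # ['HOST', 'DELETE']
```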

/* Next, in a database where pupbld.sql has already been run, we drop all the PRODUCT_USER_PROFILE-related objects */

SQL> DROP TABLE SQLPLUS_PRODUCT_PROFILE;
DROP VIEW  PRODUCT_PRIVS;
DROP PUBLIC SYNONYM PRODUCT_PROFILE;
DROP SYNONYM PRODUCT_USER_PROFILE;
DROP PUBLIC SYNONYM PRODUCT_USER_PROFILE;
Table dropped.

SQL>
View dropped.

SQL>
Synonym dropped.

SQL>
Synonym dropped.

SQL>
Synonym dropped.

SQL> create user test identified by test;

User created.

SQL> grant connect to test;

Grant succeeded.

SQL> conn test/test
Error accessing PRODUCT_USER_PROFILE
Warning:  Product user profile information not loaded!
You may need to run PUPBLD.SQL as SYSTEM
Connected.

SQL> conn system/password
Connected.
SQL> @pupbld
...........................
SQL> conn test/test
Connected.

/* After re-running the pupbld.sql script, logins return to normal */

If a new database raises messages such as "Error accessing PRODUCT_USER_PROFILE", it is very likely that the required pupbld.sql script was never run while the database was being created manually. It is worth listing the scripts that should be run after manually creating a database:

  1. @?/rdbms/admin/catalog.sql    -- creates the data dictionary and public synonyms for many of its views, and grants PUBLIC access to the synonyms
  2. @?/rdbms/admin/catproc.sql    -- runs all scripts required for or used with PL/SQL
  3. @?/rdbms/admin/catblock.sql   -- creates views that can dynamically display lock dependency graphs
  4. @?/rdbms/admin/dbmspool.sql   -- creates the dbms_shared_pool package
  5. @?/rdbms/admin/owminst.plb    -- installs Oracle Workspace Manager; optional, depending on requirements
  6. @?/rdbms/admin/catoctk.sql    -- creates the Oracle Cryptographic Toolkit package; optional, depending on requirements

All of the scripts above must be run as SYSDBA, while the following scripts are run as the SYSTEM user:

  1. @?/sqlplus/admin/pupbld.sql                  -- installs the SQL*Plus PRODUCT_USER_PROFILE tables
  2. @?/sqlplus/admin/help/hlpbld.sql helpus.sql  -- creates the SQL*Plus HELP system

Oracle Solaris 11 Express released

Building on Solaris 10, the Oracle Solaris 11 Express operating system further strengthens a wide range of features. Solaris 11 Express aims to deliver the best UNIX experience for mission-critical enterprise environments (its counterpart, Oracle Enterprise Linux, aims to deliver the best Linux experience). For example, the new network-based package management tools can greatly reduce system downtime and provide a complete, secure system upgrade path; the built-in network virtualization and delegated administration offer unprecedented flexibility for application consolidation; and Solaris continues to provide the industry's highest level of system security. Oracle claims Solaris 11 Express is the most exciting release of the Solaris platform to date.
Oracle Solaris 11 Express has been fully tested on a wide variety of SPARC and x86 hardware, supplied either by Oracle or by third-party hardware vendors. Support for Oracle's own Exadata Database Machine and Exalogic cloud is also coming to Solaris 11 Express soon.

The Solaris 11 Express installation media can now be downloaded from Oracle OTN. What will Oracle's own UNIX operating system look like?

Goldengate can’t extract data from compressed table

OGG currently does not support extract from compressed tables (available since 10g); if the extract mines DML records belonging to a compressed table, it abends:

SQL> alter table sales compress 2;
Table altered.

SQL>  update sales set  AMOUNT_SOLD= AMOUNT_SOLD +1 where rownum<200;
199 rows updated.

SQL> commit;
Commit complete.

extract report:
***********************************************************************
**                     Run Time Messages                             **
***********************************************************************
2010-12-12 23:08:44  INFO    OGG-01517  Position of first record processed Sequence 51, 
RBA 12853264, SCN 0.53443895, 2010-12-12 11:08:42 PM.
TABLE resolved (entry CLINIC.SALES):
  Table CLINIC.SALES;

Source Context :
  SourceModule            : [ggdb.ora.dbx]
  SourceID                : [/home/ecloud/workspace/Build_FBO_OpenSys_r11.1.1.0.0_078_[34093]/perforce/src/gglib/ggdbora/dbxoci.c]
  SourceFunction          : [get_object_attributes]
  SourceLine              : [5122]
  ThreadBacktrace         : [20] elements
                          : [/home/maclean/gg/extract(CMessageContext::AddThreadContext()+0x26) [0x7f6596]]
                          : [/home/maclean/gg/extract(CMessageFactory::CreateMessage(
CSourceContext*, unsigned int, ...)+0x7b2) [0x7ed032]]
                          : [/home/maclean/gg/extract(_MSG_ERR_ORACLE_COMPRESSED_TABLE
(CSourceContext*, DBString<777> const&, CMessageFactory::MessageDisposition)+0x92) [0x79c842]]
                          : [/home/maclean/gg/extract(gl_get_odbc_table_def(file_def*, short, char*)+0x6a4) [0x6a7ee4]]
                          : [/home/maclean/gg/extract [0x54e03e]]
                          : [/home/maclean/gg/extract [0x54f078]]
                          : [/home/maclean/gg/extract(get_map_entry(char*, int, __wc*, int)+0x841) [0x550cf1]]
                          : [/home/maclean/gg/extract [0x62ec62]]
                          : [/home/maclean/gg/extract(WILDCARD_check_table(char const*, char const*,
 int, unsigned int*, int, unsigned int, DBString<777>*, int)+0x320) [0x630560]]
                          : [/home/maclean/gg/extract(REDOORA_lookup_table(file_def**, log_context_t*, int)+0x1398) [0x6ce678]]
                          : [/home/maclean/gg/extract [0x70b81c]]
                          : [/home/maclean/gg/extract(REDOORAOUT_get_committed_record(objectpool*, 
cache_object*, log_context_t*, chkpt_context_t*, short, int, int, int*, char*, int*, int*, int*, char*, 
short*, int*, char*)+0x449) [0x713619]]
                          : [/home/maclean/gg/extract [0x6fc77e]]
                          : [/home/maclean/gg/extract(REDOORAOPS_process_ops
(subrec_info_t*, objectpool*, objectpool*, cache_object**, log_context_t*, redo_thread_t*, 
chkpt_context_t*, long, int, int, int, char*, int*, int*, char*, short*, int*, char*)+0x5fa) [0x700bca]]
                          : [/home/maclean/gg/extract(REDO_read(char*, int*, long*, char*, short*, int, char*)+0xb6a) [0x6ca6fa]]
                          : [/home/maclean/gg/extract(READ_EXTRACT_RECORD(chkpt_context_t*, 
short, char*, __std_rec_hdr*, int*, int*, long*, short, short, char*, short*, char*, short*, 
long*, char*)+0x344) [0x4f0824]]
                          : [/home/maclean/gg/extract [0x56ab17]]
                          : [/home/maclean/gg/extract(main+0xfaa) [0x56fc3a]]
                          : [/lib64/libc.so.6(__libc_start_main+0xf4) [0x39f1c1d994]]
                          : [/home/maclean/gg/extract(__gxx_personality_v0+0x1f2) [0x4e5ada]]

2010-12-12 23:10:11  ERROR   
OGG-01433  Failed to validate table CLINIC.SALES. 
The table is compressed and extract will not be able to extract data from Oracle logs.

Online data migration with Goldengate

A major selling point of Goldengate is online data migration: the whole process requires neither shutting down the database nor even stopping the application. <How to create a GoldenGate uni-directional target database in a production database zero downtime> outlines the steps involved in a zero-downtime data migration, including:

  1. Complete the initial OGG configuration on the source and target hosts
  2. Create the extract and exttrail
  3. Create the pump
  4. Start the local extract and pump
  5. Create the replicat with the HANDLECOLLISIONS parameter, but do not start it
  6. Perform the initial load of the data
  7. Start the replicat
  8. Once the replicat has caught up with the extract, remove the HANDLECOLLISIONS parameter from the replicat

接下来我们通过以下实例来实践OGG的在线数据迁移:

SQL> conn clinic/clinic
Connected.

/* clinic is the application user */

SQL> create table tv (t1 int primary key,t2 int,t3 varchar2(30));
Table created.

/* Create this test table on both the source and the target; for now we do not consider DDL synchronization */

SQL> create sequence seqt1 start with 1 increment by 1;
Sequence created.

declare
  rnd number(9,2);
begin
   for i in 1..100000 loop
     insert into tv values(seqt1.nextval,i*dbms_random.value,'MACLEAN IS TESTING');
     commit;
   end loop;
end;
/
PL/SQL procedure successfully completed.

/* Randomly generate 100,000 rows on the source */

begin
   loop
     delete from tv where rownum=1;
     commit;
     insert into tv values(seqt1.nextval,200000*dbms_random.value,'MACLEAN IS UPDATING');
     commit;
     insert into tv values(seqt1.nextval,300000*dbms_random.value,'MACLEAN IS UPDATING');
     commit;
     update tv set t2=t2+10 where rownum=1;
     commit;
     dbms_lock.sleep(1);
   end loop;
end;
/

/* The anonymous block above produces a simple stream of data changes:
    every second it deletes/updates/inserts some rows of the TV table.
    We run it in several windows at once to simulate an online workload
*/

That completes the test setup; now we formally configure OGG FOR ZERO DOWNTIME MIGRATION:

GGSCI (rh2.oracle.com) 3> dblogin userid maclean
Password:
Successfully logged into database.

GGSCI (rh2.oracle.com) 5> add trandata clinic.tv
Logging of supplemental redo data enabled for table CLINIC.TV.

/* Use the add trandata command to enable supplemental logging for the TV table */

GGSCI (rh2.oracle.com) 6> add extract load2,tranlog,begin now
EXTRACT added.

GGSCI (rh2.oracle.com) 7> add rmttrail /s01/new/ze,megabytes 200,extract load2
RMTTRAIL added.

GGSCI (rh2.oracle.com) 10> view params load2

extract load2
userid maclean, password AACAAAAAAAAAAAHANCUEFHPAVCYDNJVD,encryptkey default
RMTHOST rh3.oracle.com, MGRPORT 7809
RMTTRAIL /s01/new/ze
Table clinic.tv;

/* To keep the whole process simple, we do not use a pump here */

GGSCI (rh2.oracle.com) 9> start load2
Sending START request to MANAGER ...
EXTRACT LOAD2 starting

GGSCI (rh2.oracle.com) 11> stats load2
Sending STATS request to EXTRACT LOAD2 ...
Start of Statistics at 2010-12-11 20:00:44.
Output to /s01/new/ze:
Extracting from CLINIC.TV to CLINIC.TV:
*** Total statistics since 2010-12-11 20:00:32 ***
        Total inserts                              370.00
        Total updates                              185.00
        Total deletes                              185.00
        Total discards                               0.00
        Total operations                           740.00
*** Daily statistics since 2010-12-11 20:00:32 ***
        Total inserts                              370.00
        Total updates                              185.00
        Total deletes                              185.00
        Total discards                               0.00
        Total operations                           740.00
*** Hourly statistics since 2010-12-11 20:00:32 ***
        Total inserts                              370.00
        Total updates                              185.00
        Total deletes                              185.00
        Total discards                               0.00
        Total operations                           740.00
*** Latest statistics since 2010-12-11 20:00:32 ***
        Total inserts                              370.00
        Total updates                              185.00
        Total deletes                              185.00
        Total discards                               0.00
        Total operations                           740.00
End of Statistics.

/* That completes the extract configuration and startup; next we configure the replicat on the target */

GGSCI (rh3.oracle.com) 39> add replicat rep2,begin now,exttrail /s01/new/ze,checkpointtable maclean.checkpoint
REPLICAT added.

GGSCI (rh3.oracle.com) 42> view params rep2
replicat rep2
userid maclean,password maclean
ASSUMETARGETDEFS
discardfile /s01/discard/clinic.log,append,megabytes 10
HANDLECOLLISIONS
map clinic.tv, target clinic.tv;

/* Note that the replicat is given the HANDLECOLLISIONS parameter
    to handle possible duplicate records and missed records;
    see the Reference guide for the parameter's exact definition
*/
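Conceptually, HANDLECOLLISIONS resolves the overlap between the initial load and the change stream already being captured. A simplified Python sketch of the collision rules (hypothetical illustration only; a dict stands in for the target table, keyed on the primary key):

```python
def apply_with_collisions(table, op, key, row=None):
    """Apply one replicated operation the way HANDLECOLLISIONS would:
    duplicate-key inserts overwrite the existing row, and updates or
    deletes against a missing row are silently discarded."""
    if op == 'insert':
        table[key] = row            # duplicate key -> overwrite instead of abend
    elif op == 'update':
        if key in table:            # missing row -> discard the operation
            table[key].update(row)
    elif op == 'delete':
        table.pop(key, None)        # missing row -> discard the operation
    return table

tv = {1: {'t2': 10}}
apply_with_collisions(tv, 'insert', 1, {'t2': 99})  # collides with loaded row
apply_with_collisions(tv, 'delete', 5)              # row already gone
print(tv)  # {1: {'t2': 99}}
```

Once the replicat has caught up, the parameter is removed so that genuine collisions surface as errors again.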

/* Note: after configuring the replicat, do not start it until the initial load is complete */

/* Next, configure the extract and replicat used for the initial load */

GGSCI (rh2.oracle.com) 14> add extract einit,sourceistable
EXTRACT added.

/* Create the extract with the sourceistable option */

GGSCI (rh2.oracle.com) 17> view params einit

extract einit
SOURCEISTABLE
userid maclean, password maclean
RMTHOST rh3.oracle.com, MGRPORT 7809
RMTFILE /s01/new/clinictv
Table clinic.tv;

/* Then invoke this extract from the command line */
[maclean@rh2 gg]$ pwd
/home/maclean/gg

[maclean@rh2 gg]$ extract paramfile dirprm/einit.prm
2010-12-11 20:03:06  INFO    OGG-01017  Wildcard resolution set to IMMEDIATE because SOURCEISTABLE is used.
***********************************************************************
                 Oracle GoldenGate Capture for Oracle
                     Version 11.1.1.0.0 Build 078
   Linux, x64, 64bit (optimized), Oracle 10 on Jul 28 2010 15:39:00
 Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.
                    Starting at 2010-12-11 20:03:06
***********************************************************************
Operating System Version:
Linux
Version #1 SMP Mon Mar 29 22:10:29 EDT 2010, Release 2.6.18-194.el5
Node: rh2.oracle.com
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited
Process id: 21399
Description:
***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
extract einit
SOURCEISTABLE

2010-12-11 20:03:06  INFO    OGG-01017  Wildcard resolution set to IMMEDIATE because SOURCEISTABLE is used.
userid maclean, password *******
RMTHOST rh3.oracle.com, MGRPORT 7809
RMTFILE /s01/new/clinictv
Table clinic.tv;
Using the following key columns for source table CLINIC.TV: T1.
CACHEMGR virtual memory values (may have been adjusted)
CACHEBUFFERSIZE:                         64K
CACHESIZE:                                8G
CACHEBUFFERSIZE (soft max):               4M
CACHEPAGEOUTSIZE (normal):                4M
PROCESS VM AVAIL FROM OS (min):          16G
CACHESIZEMAX (strict force to disk):  13.99G

Database Version:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

Database Language and Character Set:
NLS_LANG environment variable specified has invalid format, default value will be used.
NLS_LANG environment variable not set, using default value AMERICAN_AMERICA.US7ASCII.
NLS_LANGUAGE     = "AMERICAN"
NLS_TERRITORY    = "AMERICA"
NLS_CHARACTERSET = "WE8ISO8859P1"
Warning: your NLS_LANG setting does not match database server language setting.
Please refer to user manual for more information.
2010-12-11 20:03:06  INFO    OGG-01478  Output file /s01/new/clinictv is using format RELEASE 10.4/11.1.
2010-12-11 20:03:11  INFO    OGG-01226  Socket buffer size set to 27985 (flush size 27985).
Processing table CLINIC.TV
***********************************************************************
*                   ** Run Time Statistics **                         *
***********************************************************************
Report at 2010-12-11 20:03:12 (activity since 2010-12-11 20:03:06)
Output to /s01/new/clinictv:
From Table CLINIC.TV:
       #                   inserts:    100780
       #                   updates:         0
       #                   deletes:         0
       #                  discards:         0

/* The extract run above generated the export file at the path specified by RMTFILE on the target */

[maclean@rh3 new]$ pwd
/s01/new

[maclean@rh3 new]$ file clinictv
clinictv: data

[maclean@rh3 new]$ strings clinictv |head -20
uri:rh2:oracle:com:6
/s01/new/clinictv7
Linux1
rh2.oracle.com2
2.6.18-194.el53
##1 SMP Mon Mar 29 22:10:29 EDT 20104
x86_642
CLINICA2
clinica3
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
EINIT1
Version 11.1.1.0.0 Build 0784
CLINIC.TV
MACLEAN IS TESTINGZ
CLINIC.TV
MACLEAN IS TESTINGZ

/* Next, configure the replicat used for the initial load on the target, to import the EXTFILE created above */

GGSCI (rh3.oracle.com) 12> add replicat rinit,specialrun
REPLICAT added.

GGSCI (rh3.oracle.com) 17> view params rinit

replicat rinit
SPECIALRUN
userid maclean,password maclean
EXTFILE /s01/new/clinictv
ASSUMETARGETDEFS
map clinic.tv, target clinic.tv;

/* Then invoke this replicat from the command line */

[maclean@rh3 gg]$ replicat paramfile dirprm/rinit.prm 

***********************************************************************
                 Oracle GoldenGate Delivery for Oracle
                     Version 11.1.1.0.0 Build 078
   Linux, x64, 64bit (optimized), Oracle 10 on Jul 28 2010 15:58:11

Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.

                    Starting at 2010-12-11 20:06:14
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Mon Mar 29 22:22:00 EDT 2010, Release 2.6.18-194.el5xen
Node: rh3.oracle.com
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited

Process id: 18554

Description: 

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
replicat rinit
SPECIALRUN
userid maclean,password *******
EXTFILE /s01/new/clinictv
ASSUMETARGETDEFS
map clinic.tv, target clinic.tv;

CACHEMGR virtual memory values (may have been adjusted)
CACHEBUFFERSIZE:                         64K
CACHESIZE:                              512M
CACHEBUFFERSIZE (soft max):               4M
CACHEPAGEOUTSIZE (normal):                4M
PROCESS VM AVAIL FROM OS (min):           1G
CACHESIZEMAX (strict force to disk):    881M

Database Version:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

Database Language and Character Set:
NLS_LANG environment variable specified has invalid format, default value will be used.
NLS_LANG environment variable not set, using default value AMERICAN_AMERICA.US7ASCII.
NLS_LANGUAGE     = "AMERICAN"
NLS_TERRITORY    = "AMERICA"
NLS_CHARACTERSET = "WE8ISO8859P1" 

Warning: your NLS_LANG setting does not match database server language setting.
Please refer to user manual for more information.

***********************************************************************
**                     Run Time Messages                             **
***********************************************************************

Opened trail file /s01/new/clinictv at 2010-12-11 20:06:14

MAP resolved (entry CLINIC.TV):
  map CLINIC.TV, target clinic.tv;
Using following columns in default map by name:
  T1, T2, T3

Using the following key columns for target table CLINIC.TV: T1.

***********************************************************************
*                   ** Run Time Statistics **                         *
***********************************************************************

Last record for the last committed transaction is the following:
___________________________________________________________________
Trail name :  /s01/new/clinictv
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    52 (x0034)    IO Time    : 2010-12-11 20:03:10.969955
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :          0       AuditPos   : 0
Continued  :     N  (x00)     RecCount   :     1  (x01)

2010-12-11 20:03:10.969955 Insert             Len    52 RBA 11244940
Name: CLINIC.TV
___________________________________________________________________

Reading /s01/new/clinictv, current RBA 11245052, 100780 records

Report at 2010-12-11 20:06:54 (activity since 2010-12-11 20:06:14)

From Table CLINIC.TV to CLINIC.TV:
       #                   inserts:    100780
       #                   updates:         0
       #                   deletes:         0
       #                  discards:         0

Last log location read:
     FILE:      /s01/new/clinictv
     RBA:       11245052
     TIMESTAMP: 2010-12-11 20:03:10.969955
     EOF:       NO
     READERR:   400

SQL> select count(*) from tv;
  COUNT(*)
----------
    100780

/* That completes the initial-load work; now we start the replicat used for ongoing synchronization */
GGSCI (rh3.oracle.com) 3> start rep2
Sending START request to MANAGER ...
REPLICAT REP2 starting

GGSCI (rh3.oracle.com) 7> lag rep2
Sending GETLAG request to REPLICAT REP2 ...
Last record lag: 4 seconds.
At EOF, no more records to process.

GGSCI (rh3.oracle.com) 9> stats rep2

Sending STATS request to REPLICAT REP2 ...

Start of Statistics at 2010-12-11 20:10:15.

Replicating from CLINIC.TV to CLINIC.TV:

*** Total statistics since 2010-12-11 20:09:54 ***
        Total inserts                              204.00
        Total updates                              102.00
        Total deletes                              102.00
        Total discards                               0.00
        Total operations                           408.00

*** Daily statistics since 2010-12-11 20:09:54 ***
        Total inserts                              204.00
        Total updates                              102.00
        Total deletes                              102.00
        Total discards                               0.00
        Total operations                           408.00

*** Hourly statistics since 2010-12-11 20:09:54 ***
        Total inserts                              204.00
        Total updates                              102.00
        Total deletes                              102.00
        Total discards                               0.00
        Total operations                           408.00

*** Latest statistics since 2010-12-11 20:09:54 ***
        Total inserts                              204.00
        Total updates                              102.00
        Total deletes                              102.00
        Total discards                               0.00
        Total operations                           408.00
End of Statistics.

/* Once the replicat has caught up with the extract, the HANDLECOLLISIONS parameter should be removed */

GGSCI (rh3.oracle.com) 11> view params rep2
replicat rep2
userid maclean,password maclean
ASSUMETARGETDEFS
discardfile /s01/discard/clinic.log,append,megabytes 10
map clinic.tv, target clinic.tv;

GGSCI (rh3.oracle.com) 12> stop rep2
Sending STOP request to REPLICAT REP2 ...
Request processed.

GGSCI (rh3.oracle.com) 13> start rep2
Sending START request to MANAGER ...
REPLICAT REP2 starting
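The parameter file above no longer contains HANDLECOLLISIONS, so the stop/start picks up the change. As an alternative, the setting can also be turned off on the running process without a restart; a sketch of the GGSCI command (assuming the group name rep2):

```
GGSCI> SEND REPLICAT rep2, NOHANDLECOLLISIONS
```

Note that SEND only affects the running process; the parameter file still has to be edited so the change survives the next restart.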

Next we stop the simulated online workload and compare the data. First, create the group, compare pair, and related configuration in GoldenGate Veridata:


Then generate and run the job:

The job result screen:

The comparison report confirming the data is in sync:

***********************************************************************
                  Oracle GoldenGate Veridata Server
                           Version 3.0.0.6
                              Build 002

Copyright (C) 2004, 2009, Oracle and/or its affiliates. All rights reserved.

                   Starting at 2010-12-11 20:15:58

***********************************************************************
Unexpected Windows major version
Process id: 6340, Thread id : 6392
Group: clinic (1) Compare Pairs
Performing 1 comparisons ...
        Compare pair: TV=TV
2010-12-11 20:15:58. Submitted (1 of 1) Compare Pair (TV=TV) RunId (runid=(1007, 1, 1)).
2010-12-11 20:16:00. Completed in-sync. Compare Pair (TV=TV). (0) Remaining

               Compare Pairs with Errors: 0
          Compare Pairs with Out-Of-Sync: 0
       Compare Pairs without Out-Of-Sync: 1
                 Compare Pairs Cancelled: 0

Group terminated normally.

Next we run some SQL on the target to modify data manually, deliberately putting the two sides out of sync:

begin
  loop
    delete from tv where rownum = 1;
    commit;
    insert into tv values (seqt1.nextval, 200000 * dbms_random.value, 'MACLEAN IS UPDATING');
    commit;
    insert into tv values (seqt1.nextval, 300000 * dbms_random.value, 'MACLEAN IS UPDATING');
    commit;
    update tv set t2 = t2 + 10 where rownum = 1;
    commit;
    dbms_lock.sleep(1);
  end loop;
end;
/

/* Start the simulated application on the source */

/* Insert one extra record on the target */

SQL> insert into tv values(99993242,1213,'OUT OF SYNC');
1 row created.

SQL> commit;
Commit complete.

Running the relevant GoldenGate Veridata job again, detailed out-of-sync record information appears in the job's log:

Profile: newp

General:
***********************************************************************
     Out-Of-Sync Output Format: binary
          Maximum Size of Each
  Out-Of-Sync XML Chunk (Rows): 500
    Output in-sync rows to OOS
                          file: false
Output in-sync after in-flight
              rows to OOS file: true
 Report in-sync rows to report
                          file: false
Report in-sync after in-flight
           rows to report file: false
          Sorting Method:
               Sort Data Using: database
     Maximum Memory Usage (MB): 50
  Number Of Concurrent Threads: 4
   Temporary Storage Directory
               for Source Data:
   Temporary Storage Directory
               for Target Data:
      NSort Memory Retry Limit: 0
NSort Memory Retry Wait Interval: 0

Initial Compare (General):
***********************************************************************
     Max Concurrent Comparison
                       Threads: 4
Terminate when Maximum Records
                   Out-Of-Sync: 100000
     Output Out-Of-Sync Record
        Details to Report File: false
      Update Report file Every
                     (seconds): 0
      Update Report file Every
                       Threads: 0
Terminate when Maximum Records
                   Out-Of-Sync: 100000
     Output Out-Of-Sync Record
        Details to Report File: false
      Update Report file Every
                     (seconds): 0
      Update Report file Every
                     (records): 0
    Limit Number of Input Rows: 0
     Delta processsing enabled: false

Initial Compare (Event Reporting):
***********************************************************************
             Generate Messages: both
 Generate Warning Messages For
        Out-Of-Sync Rows After
                 (differences): 9999

Initial Compare (Agent):
***********************************************************************
 Use Static Listening Port For
     For Agent During Row Hash
                     On Source: 0
 Use Static Listening Port For
     For Agent During Row Hash
                     On Target: 0

Initial Compare (NonStop Process):
***********************************************************************
  Source Process Name Starting
                          With:
     Source Process CPU Number: -1
       Source Process Priority: 0
  Target Process Name Starting
                          With:
     Target Process CPU Number: -1
       Target Process Priority: 0

Confirm-Out-Of-Sync (General):
***********************************************************************
   Perform Confirm Out-Of-Sync
                          Step: true
         Run Concurrently With
               Initial Compare: true
     Delay Confirm-Out-Of-Sync
                  By (seconds): 0
Terminate when Maximum Records
                   Out-Of-Sync: 100000
     Output Out-Of-Sync Record
        Details to Report File: true
            Update Report file
               Every (seconds): 0
            Update Report file
               Every (records): 0

Confirm-Out-Of-Sync (Event Reporting):
***********************************************************************
             Generate Messages: both
 Generate Warning Messages For
          For Out-Of-Sync Rows
           After (differences): 9999

Confirm-Out-Of-Sync (Agent):
***********************************************************************
 Use Static Listening Port For
         Agent During Row Hash
                     On Source: 0
 Use Static Listening Port For
         Agent During Row Hash
                     On Target: 0

Confirm-Out-Of-Sync (NonStop Process):
***********************************************************************
  Source Process Name Starting
                          With:
     Source Process CPU Number: -1
       Source Process Priority: 0
  Target Process Name Starting
                          With:
     Target Process CPU Number: -1
       Target Process Priority: 0

Starting  Veriagt session for source rowhash at 2010-12-11 20:32:25.

Initial Compare Source Agent Information:
                       ORACLE_HOME : /s01/10gdb
       NLS_LANG (from environment) :
                          ORA_SDTZ :
              NLS_LANG (from DBMS) : AMERICAN_AMERICA.WE8ISO8859P1
              NLS_LENGTH_SEMANTICS : BYTE
                   SESSIONTIMEZONE : -05:00
                        Process ID : 21473
                  Operating System : Linux
                      Architecture : x86_64
                       Report File : /home/maclean/veridata/agent/dirver/report/maclean_PC_4150_000010110000000100000001___source_rh.rpt
                        Trace File : /home/maclean/veridata/agent/dirver/trace/maclean_PC_4150_000010110000000100000001___source_rh.trc

Starting  Veriagt session for target rowhash at 2010-12-11 20:32:26.

Initial Compare Target Agent Information:
                       ORACLE_HOME : /s01/db_1
       NLS_LANG (from environment) :
                          ORA_SDTZ :
              NLS_LANG (from DBMS) : AMERICAN_AMERICA.WE8ISO8859P1
              NLS_LENGTH_SEMANTICS : BYTE
                   SESSIONTIMEZONE : +08:00
                        Process ID : 18639
                  Operating System : Linux
                      Architecture : x86_64
                       Report File : /home/maclean/veridata/agent/dirver/report/maclean_PC_4150_000010110000000100000001___target_rh.rpt
                        Trace File : /home/maclean/veridata/agent/dirver/trace/maclean_PC_4150_000010110000000100000001___target_rh.trc

Processing first rowhash block from source at 2010-12-11 20:32:26.
Processing first rowhash block from target at 2010-12-11 20:32:27.
Comparing first row at 2010-12-11 20:32:27.
Starting  Veriagt session for source COOS at 2010-12-11 20:32:27.

Performance Statistics for source Rowhash at 2010-12-11 20:32:27.

                     rows: 103959
          duration (secs): 00:00:02
                 rows/sec: 51979.50
                row bytes: 4345147
            row bytes/sec: 2172573
                bytes/row: 41
             rh bytes/row: 23
             rows skipped: 0
           blocks skipped: 0
           hash comp rate: 0.55
          total comp rate: 0.23
        pct time fetching: 15.44
         pct time waiting: 5.91
     time until first row: 00:00:00
                 ipc msgs: 12
                ipc bytes: 2809974
                bytes/msg: 234164
     compressed bytes/msg: 99262
                bytes/sec: 1404987
     compressed bytes/sec: 595572
    msg compression ratio: 0.42

*** Summarizing Initial Row Comparison Step ***

                          Time: 2010-12-11 20:32:27.
                  Elapsed Time: 00:00:00
         Comparisons performed: 103961
               Rows per second: 169593
              Rows out-of-sync: 11
                       inserts: 9
                       updates: 0
                       deletes: 2

Performance Statistics for target Rowhash at 2010-12-11 20:32:27.

                     rows: 103952
          duration (secs): 00:00:01
                 rows/sec: 103952.00
                row bytes: 4344826
            row bytes/sec: 4344826
                bytes/row: 41
             rh bytes/row: 23
             rows skipped: 0
           blocks skipped: 0
           hash comp rate: 0.55
          total comp rate: 0.23
        pct time fetching: 20.51
         pct time waiting: 17.15
     time until first row: 00:00:00
                 ipc msgs: 12
                ipc bytes: 2809779
                bytes/msg: 234148
     compressed bytes/msg: 99256
                bytes/sec: 2809779
     compressed bytes/sec: 1191076
    msg compression ratio: 0.42

COOS source Agent Information:
                       ORACLE_HOME : /s01/10gdb
       NLS_LANG (from environment) :
                          ORA_SDTZ :
              NLS_LANG (from DBMS) : AMERICAN_AMERICA.WE8ISO8859P1
              NLS_LENGTH_SEMANTICS : BYTE
                   SESSIONTIMEZONE : -05:00
                        Process ID : 21475
                  Operating System : Linux
                      Architecture : x86_64
                       Report File : /home/maclean/veridata/agent/dirver/report/maclean_PC_4150_000010110000000100000001___source_coos.rpt
                        Trace File : /home/maclean/veridata/agent/dirver/trace/maclean_PC_4150_000010110000000100000001___source_coos.trc

Starting  Veriagt session for target COOS at 2010-12-11 20:32:28.

COOS target Agent Information:
                       ORACLE_HOME : /s01/db_1
       NLS_LANG (from environment) :
                          ORA_SDTZ :
              NLS_LANG (from DBMS) : AMERICAN_AMERICA.WE8ISO8859P1
              NLS_LENGTH_SEMANTICS : BYTE
                   SESSIONTIMEZONE : +08:00
                        Process ID : 18641
                  Operating System : Linux
                      Architecture : x86_64
                       Report File : /home/maclean/veridata/agent/dirver/report/maclean_PC_4150_000010110000000100000001___target_coos.rpt
                        Trace File : /home/maclean/veridata/agent/dirver/trace/maclean_PC_4150_000010110000000100000001___target_coos.trc

Persistent out of sync row (optype: insert)

  source row values:

                  T1  K     0     6  107913
           GGROWHASH        1     8  0x0E8563AC22AA9C0F
                  T2  H     2     5  55512
                  T3  H     3    19  MACLEAN IS UPDATING

Persistent out of sync row (optype: insert)

  source row values:

                  T1  K     0     6  107914
           GGROWHASH        1     8  0xAAB002BF5C1B3233
                  T2  H     2     6  255834
                  T3  H     3    19  MACLEAN IS UPDATING

Persistent out of sync row (optype: insert)

  source row values:

                  T1  K     0     6  107917
           GGROWHASH        1     8  0xC0723BE324DC032D
                  T2  H     2     6  197710
                  T3  H     3    19  MACLEAN IS UPDATING

Persistent out of sync row (optype: insert)

  source row values:

                  T1  K     0     6  107918
           GGROWHASH        1     8  0x8A63DA5F06C3A12C
                  T2  H     2     6  194628
                  T3  H     3    19  MACLEAN IS UPDATING

Persistent out of sync row (optype: delete)

  target row values:

                  T1  K     0     8  99993242
           GGROWHASH        1     8  0x958B6304CEC4BAD3
                  T2  H     2     4  1213
                  T3  H     3    11  OUT OF SYNC

*** Summarizing Confirm Out-of-Sync Step ***

                          Time: 2010-12-11 20:32:29.
                  Elapsed Time: 00:00:01
                Rows confirmed: 11
               Rows per second: 11

*** Summarizing Persistently Out-of-Sync Rows ***

              Rows out-of-sync: 5
                       inserts: 4
                       updates: 0
                       deletes: 1

*** Summarizing In-Sync-after-In-Flight Rows ***

  Rows in-sync-after-in-flight: 6
                       inserts: 5
                       updates: 0
                       deletes: 1

*** Summarizing Still-Changing Rows ***

                Rows in-flight: 0
                       inserts: 0
                       updates: 0
                       deletes: 0

Comparison terminated normally.

Configuring GoldenGate DDL Replication (3)

In Configuring GoldenGate DDL Replication (2) we tested the replication of various DDL statements in a GoldenGate environment. By default, however, when the replicat fails to apply a DDL statement it does not write the details of the failure to the replicat report, ggserr.log, or other log files, so when we see a DDL replication error we have no way of knowing why it failed. Going back to the official documentation, I found the DDLOPTIONS parameter:

Use the DDLOPTIONS parameter to configure aspects of DDL processing other than filtering
and string substitution. You can use multiple DDLOPTIONS statements, but using one is
recommended. If using multiple DDLOPTIONS statements, make each of them unique so that
one does not override the other. Multiple DDLOPTIONS statements are executed in the order
listed.

This parameter has many options; the one we need here is REPORT:

Valid for Extract and Replicat (Oracle and Teradata).
Controls whether or not expanded DDL processing
information is written to the report file. The default of
NOREPORT reports basic DDL statistics. REPORT adds the
parameters being used and a step-by-step history of the
operations that were processed

Adding "DDLOPTIONS REPORT" to the replicat's parameter file makes the replicat record the details of any problem it encounters while replicating DDL to that replicat's report file, which helps in finding the root cause of a DDL replication failure.

GGSCI (rh3.oracle.com) 9> view params rep1

replicat rep1
userid maclean,password maclean
ASSUMETARGETDEFS
discardfile /s01/discard/rep1.log,append,megabytes 10
-- Support DDL here
DDL INCLUDE ALL
DDLERROR DEFAULT IGNORE RETRYOP MAXRETRIES 3 RETRYDELAY 5
DDLOPTIONS REPORT
map sender.* , target receiver.*;

GGSCI (rh3.oracle.com) 10> stop replicat rep1

Sending STOP request to REPLICAT REP1 ...
Request processed.

GGSCI (rh3.oracle.com) 11> start replicat rep1
Sending START request to MANAGER ...
REPLICAT REP1 starting

Create an external table on the source:
SQL> CREATE TABLE extgg
  2  ORGANIZATION EXTERNAL
  3  (TYPE oracle_datapump
  4  DEFAULT DIRECTORY dgg
  5  LOCATION ('extgg.dat'))
  6  AS
  7  select * from dba_objects;

Table created.

Checking the replicat's report afterwards reveals the detailed SQL statement and other information about the failed DDL replication:

GGSCI (rh3.oracle.com) 12> view report rep1
***********************************************************************
                 Oracle GoldenGate Delivery for Oracle
                     Version 11.1.1.0.0 Build 078
   Linux, x64, 64bit (optimized), Oracle 10 on Jul 28 2010 15:58:11

Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.

                    Starting at 2010-12-10 05:54:32
***********************************************************************
...............
2010-12-09 09:10:13  INFO    OGG-01408  Restoring current schema for DDL operation to [MACLEAN].
2010-12-09 09:10:13  INFO    OGG-00482  DDL found, operation [CREATE
GLOBAL TEMPORARY TABLE "SENDER"."SYS_TEMP_0FD9D6609_2B3F4ED" ("OWNER" VARCHAR2(30),
"OBJECT_NAME" VARCHAR2(128),"SUBOBJECT_NAME" VARCHAR2(30),"OBJECT_ID"
NUMBER,"DATA_OBJECT_ID" NUMBER,"OBJECT_TYPE" VARCHAR2(19),"CREATED" DATE,"LAST_DDL_TIME" DATE,
"TIMESTAMP" VARCHA R2(19),"STATUS" VARCHAR2(7),"TEMPORARY" VARCHAR2(1),
"GENERATED" VARCHAR2(1),"SECONDARY" VARCHAR2(1) ) IN_MEMORY_METADATA
CURSOR_SPECIFIC_SEGMENT STORAGE (OBJNO 4254950
921 ) NOPARALLEL  (size 456)].

2010-12-09 09:10:13  INFO    OGG-00489  DDL is of mapped scope, after mapping new operation
[CREATE GLOBAL TEMPORARY TABLE "RECEIVER"."SYS_TEMP_0FD9D6609_2B3F4ED"
("OWNER" VARCHAR2(30),"OBJECT_NAME" VARCHAR2(128),"SUBOBJECT_NAME"
VARCHAR2(30),"OBJECT_ID" NUMBER,"DATA_OBJECT_ID" NUMBER,"OBJECT_TYPE"
VARCHAR2(19),"CREATED" DATE,"LAST_DDL_TIME" DATE,"TIMESTAMP" VARCHAR2(19),
"STATUS" VARCHAR2(7),"TEMPORARY" VARCHAR2(1),
"GENERATED" VARCHAR2(1),"SECONDARY" VARCHAR2(1) ) IN_MEMORY_METADATA CURSOR_SPECIF
IC_SEGMENT STORAGE (OBJNO 4254950921 ) NOPARALLEL  (size 458)].

2010-12-09 09:10:13  INFO    OGG-00487  DDL operation included [INCLUDE ALL], optype [CREATE],
objtype [TABLE], objowner [RECEIVER], objname [SYS_TEMP_0FD9D6609_2B3F4E
D].
2010-12-09 09:10:13  INFO    OGG-01407  Setting current schema for DDL operation to [SYS].
2010-12-09 09:10:13  INFO    OGG-00484  Executing DDL operation.
2010-12-09 09:10:18  INFO    OGG-00495  DDL error ignored for next retry: error code
[DEFAULT],
filter [include all (default)], error text [Error code [900],
ORA-00900: invalid SQL statement, SQL CREATE GLOBAL TEMPORARY TABLE "RECEIVER"."SYS_TEMP_0FD9D6609_2B3F4ED"
("OWNER" VARCHAR2(30),"OBJECT_NAME" VARCHAR2(128),"SUBOBJECT_NAME" V
ARCHAR2(30),"OBJECT_ID" NUMBER,"DATA_OBJECT_ID" NUMBER,"OBJECT], retry [1].

2010-12-09 09:10:18  INFO    OGG-01407  Setting current schema for DDL operation to [SYS].
2010-12-09 09:10:18  INFO    OGG-00484  Executing DDL operation trying again due to RETRYOP parameter.
2010-12-09 09:10:23  INFO    OGG-00495  DDL error ignored for next retry: error code [DEFAULT],
filter [include all (default)], error text
[Error code [900], ORA-00900
: invalid SQL statement, SQL CREATE GLOBAL TEMPORARY TABLE "RECEIVER"."SYS_TEMP_0FD9D6609_2B3F4ED"
("OWNER" VARCHAR2(30),"OBJECT_NAME" VARCHAR2(128),"SUBOBJECT_NAME" V
ARCHAR2(30),"OBJECT_ID" NUMBER,"DATA_OBJECT_ID" NUMBER,"OBJECT], retry [2].
2010-12-09 09:10:23  INFO    OGG-01407  Setting current schema for DDL operation to [SYS].

Another caveat: CREATE TABLE AS SELECT statements should be avoided entirely, because in a GoldenGate replication environment a CTAS is still applied as DDL on the target, which leaves the source and target with inconsistent data. For example:

/* On the source, the dba_objects view holds 52046 rows */
SQL> select count(*) from dba_objects;
  COUNT(*)
----------
     52046

SQL> create table ddlproblem as select * from dba_objects;
Table created.

SQL> select count(*) from ddlproblem;

  COUNT(*)
----------
     52047

And the replicated result on the target:
SQL> desc ddlproblem
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(30)
 OBJECT_NAME                                        VARCHAR2(128)
 SUBOBJECT_NAME                                     VARCHAR2(30)
 OBJECT_ID                                          NUMBER
 DATA_OBJECT_ID                                     NUMBER
 OBJECT_TYPE                                        VARCHAR2(19)
 CREATED                                            DATE
 LAST_DDL_TIME                                      DATE
 TIMESTAMP                                          VARCHAR2(19)
 STATUS                                             VARCHAR2(7)
 TEMPORARY                                          VARCHAR2(1)
 GENERATED                                          VARCHAR2(1)
 SECONDARY                                          VARCHAR2(1)

SQL> select count(*) from ddlproblem;

  COUNT(*)
----------
     51008

/* The replicat simply re-executes the "create table as select" DDL on the
    target, but the object being selected from does not hold the same data
    on both sides, so a CTAS inevitably leaves the two ends inconsistent
*/
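To keep both sides consistent, one common workaround is to split the CTAS into an empty-table DDL plus a separate INSERT ... SELECT, so that the rows themselves flow through as replicated DML. A sketch, reusing the ddlproblem example from above:

```sql
-- Create the table empty: only this DDL is replicated as DDL
CREATE TABLE ddlproblem AS SELECT * FROM dba_objects WHERE 1 = 0;

-- Populate it with ordinary DML: these inserts are captured from the redo
-- and replicated row by row, so source and target end up identical
INSERT INTO ddlproblem SELECT * FROM dba_objects;
COMMIT;
```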

Fractured block found during backing up datafile

On a 10.2.0.3 system running on AIX, the following entries appeared in the alert log during an RMAN backup:

======================= alert log record ============================
Hex dump of (file 35, block 1087687) in trace file /oracle/product/10.2.0/admin/MS/udump/ms_ora_103548.trc
Corrupt block relative dba: 0x08d098c7 (file 35, block 1087687)
Fractured block found during backing up datafile
Data in bad block:
 type: 6 format: 2 rdba: 0x08d098c7
 last change scn: 0x0006.44443e06 seq: 0x1 flg: 0x06
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x6d910601
 check value in block header: 0xc0b0
 computed block checksum: 0x4286
Reread of blocknum=1087687, file=/dev/vx/rdsk/oradgMS/lv_ms_DB31. found valid data

=========== trace information for process 103548 ========================
Corrupt block relative dba: 0x08d098c7 (file 35, block 1087687)
Fractured block found during backing up datafile
Data in bad block:
 type: 6 format: 2 rdba: 0x08d098c7
 last change scn: 0x0006.44443e06 seq: 0x1 flg: 0x06
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x6d910601
 check value in block header: 0xc0b0
 computed block checksum: 0x4286
Reread of blocknum=1087687, file=/dev/vx/rdsk/oradgMS/lv_ms_DB31. found valid data
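As an aside, the relative dba printed above can be decoded by hand: in Oracle's 32-bit dba format the upper 10 bits are the relative file number and the lower 22 bits the block number. A minimal Python sketch (the bit split is the standard layout; the helper name is made up):

```python
def decode_rdba(rdba: int) -> tuple:
    """Split a 32-bit relative dba into (relative file#, block#):
    upper 10 bits = file number, lower 22 bits = block number."""
    return rdba >> 22, rdba & ((1 << 22) - 1)

# 0x08d098c7 from the alert log above decodes to file 35, block 1087687
print(decode_rdba(0x08D098C7))  # (35, 1087687)
```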

The block's type is 6. The block types that already exist in Oracle 9.2 are listed below:

Type Description
1 KTU UNDO HEADER
2 KTU UNDO BLOCK
3 KTT SAVE UNDO HEADER
4 KTT SAVE UNDO BLOCK
5 DATA SEGMENT HEADER
6 trans data
7 Unknown
8 Unknown
9 Unknown
10 DATA SEGMENT FREE LIST BLOCK
11 Unknown
12 DATA SEGMENT HEADER WITH FREE LIST BLOCKS
13 Compatibility segment
14 KTU UNDO HEADER W/UNLIMITED EXTENTS
15 KTT SAVE UNDO HEADER W/UNLIMITED EXTENTS
16 DATA SEGMENT HEADER – UNLIMITED
17 DATA SEGMENT HEADER WITH FREE LIST BLKS – UNLIMITED
18 EXTENT MAP BLOCK
19 Unknown
20 Unknown
21 Unknown
22 DATA SEGMENT FREE LIST BLOCK WITH FREE BLOCK COUNT
23 BITMAPPED DATA SEGMENT HEADER
24 BITMAPPED DATA SEGMENT FREELIST
25 BITMAP INDEX BLOCK
26 BITMAP BLOCK
27 LOB BLOCK
28 KTU BITMAP UNDO HEADER – LIMITED EXTENTS
29 KTFB Bitmapped File Space Header
30 KTFB Bitmapped File Space Bitmap
31 TEMP INDEX BLOCK
32 FIRST LEVEL BITMAP BLOCK
33 SECOND LEVEL BITMAP BLOCK
34 THIRD LEVEL BITMAP BLOCK
35 PAGETABLE SEGMENT HEADER
36 PAGETABLE EXTENT MAP BLOCK
37 EXTENT MAP BLOCK OF SYSTEM MANAGED UNDO SEGMENT
38 KTU SMU HEADER BLOCK
39 Unknown
40 PAGETABLE MANAGED LOB BLOCK
41 Unknown
42 Unknown
43 Unknown
44 Unknown
45 Unknown
46 Unknown
47 Unknown

Both table and index blocks can be type 6 (trans data). In other words, RMAN read this block during the backup and found it fractured on the first read, but "fractured" does not mean the block is truly corrupted. The alert log shows that after the first read found the block fractured, RMAN reread it and found valid data, so the messages above do not mean the listed block actually contains errors. Most likely the datafile holding the block was under intense I/O during the backup: when RMAN read the block, the storage may have been writing to it at that very moment, so the first read saw it as fractured; the subsequent reread showed the "fracture" was gone, and the "Corrupt block" message was merely a false alarm. To be safe, the affected table or index can be checked further with ANALYZE .. VALIDATE to confirm there are no bad blocks.

Such spurious "Corrupt block" reports are most likely caused by very active I/O against individual datafiles during the RMAN backup (frequent DML, for example), so it is advisable to schedule RMAN backups during periods of low disk activity.
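To run the suggested check, the block from the alert log can first be mapped back to its owning segment; a hedged SQL sketch using the file and block numbers from this case (the `<owner>.<segment_name>` placeholders are to be filled in from the first query's result):

```sql
-- Find the segment that owns file 35, block 1087687
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = 35
   AND 1087687 BETWEEN block_id AND block_id + blocks - 1;

-- Then, if it is a table, validate it together with its indexes
ANALYZE TABLE <owner>.<segment_name> VALIDATE STRUCTURE CASCADE;
```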

References:

Fractured Block Messages in Alert.log During RMAN Backup of Datafile
    * fact: Oracle Server - Enterprise Edition 8
    * fact: Oracle Server - Enterprise Edition 9
    * fact: Recovery Manager (RMAN)
    * symptom: Fractured block found during backing up datafile
    * symptom: Reread of blocknum found some corrupt data
    * symptom: Analyze table validate structure cascade returns no errors
    * change: NOTE ROLE: The messages are of the form Reread of blocknum=36256, 
       file=/pdscdata/pdsclive/data1/dispatch_data_large2. dbf. 
found same corrupt data *** Corrupt block relative dba: 0xfc008dc0 (file 63, block 36288) 
Fractured block found during backing up datafile Data in bad block - 
type: 0 format: 0 rdba: 0x00000000 last change scn: 0x0000.00000000 seq: 0x0 flg: 0x00 consistency 
value in tail: 0x53494e53 check value in block header: 0x0, block checksum disabled 
spare1: 0x0, spare2: 0x0, spare3: 0x0
    * cause: RMAN backups of datafile are being performed while the datafile is involved in heavy I/O. 
RMAN reads Oracle blocks from disk. If it finds that the block is fractured, which means it is being actively used, 
it performs a reread of the block. If that fails again then the block is assumed to be corrupt. 
By identifying the object that these blocks belong to by following Handling Oracle Block Corruptions in 
Oracle7/8/8i and performing an analyze .. validate structure cascade on the object involved you can 
confirm that the object is not corrupt. 

fix:
Run the backups when the tablespace has less I/O activity.
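In addition, RMAN records any block it ultimately judged corrupt in v$database_block_corruption, so after the backup one can confirm that nothing was actually flagged; a sketch:

```sql
-- An empty result after the backup means RMAN found no persistent corruption
SELECT file#, block#, blocks, corruption_type
  FROM v$database_block_corruption;
```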

Configuring GoldenGate DDL Replication (2)

Next we will test the replication of a wider range of DDL statements. The ones I can think of include:

  • CREATE USER
  • CREATE TABLESPACE
  • CREATE PROCEDURE
  • CREATE PROFILE
  • CREATE DIRECTORY
  • CREATE EXTERNAL TABLE, and so on

But first we need to modify the extract and replicat parameter files configured in Configuring GoldenGate DDL Replication (1):

GGSCI (rh2.oracle.com) 2> view params load1

extract load1
userid maclean, password AACAAAAAAAAAAAHANCUEFHPAVCYDNJVD,encryptkey default
RMTHOST rh3.oracle.com, MGRPORT 7809
RMTTRAIL /s01/rmt/ma
DDL INCLUDE ALL
Table sender.*;

/* On the source, change DDL INCLUDE MAPPED to DDL INCLUDE ALL.
    Unlike MAPPED, INCLUDE ALL covers all DDL statements:
    MAPPED applies INCLUDE or EXCLUDE to DDL operations that are of
    MAPPED scope. MAPPED filtering is performed before filtering that is
    specified with other DDL parameter options.
    ALL applies INCLUDE or EXCLUDE to DDL operations of all scopes.
 */
GGSCI (rh2.oracle.com) 4> stop extract load1
Sending STOP request to EXTRACT LOAD1 ...
Request processed.

GGSCI (rh2.oracle.com) 5> start extract load1
Sending START request to MANAGER ...
EXTRACT LOAD1 starting


GGSCI (rh3.oracle.com) 3> view params rep1

replicat rep1
userid maclean,password maclean
ASSUMETARGETDEFS 
discardfile /s01/discard/rep1.log,append,megabytes 10
-- Support DDL here 
DDL INCLUDE ALL
DDLERROR DEFAULT IGNORE RETRYOP MAXRETRIES 3 RETRYDELAY 5
map sender.* , target receiver.*;

/* The target also needs its original DDL entry expanded to DDL INCLUDE ALL,
    plus DDLERROR DEFAULT IGNORE RETRYOP MAXRETRIES 3 RETRYDELAY 5 */

/* The DDLERROR parameter specifies how DDL replication errors are handled:
    here, retry 3 times (MAXRETRIES) at 5-second intervals (RETRYDELAY),
    and if the DDL still fails, ignore it
*/
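The retry semantics of these settings can be illustrated with a small sketch: one initial attempt plus up to MAXRETRIES retries, then the error is ignored. This is only an illustration of the documented behavior, not GoldenGate's actual implementation:

```python
import time

def apply_ddl(execute, max_retries=3, retry_delay=5, sleep=time.sleep):
    """Mimic DDLERROR DEFAULT IGNORE RETRYOP MAXRETRIES 3 RETRYDELAY 5:
    try once, retry up to max_retries times with retry_delay seconds
    between attempts, and ignore the error if every attempt fails."""
    for attempt in range(1, max_retries + 2):
        try:
            execute()
            return "applied", attempt
        except Exception:
            if attempt <= max_retries:
                sleep(retry_delay)
    return "ignored", max_retries + 1

# A DDL that always fails ends up attempted 4 times in total (1 + MAXRETRIES)
def bad_ddl():
    raise RuntimeError("ORA-00900: invalid SQL statement")

print(apply_ddl(bad_ddl, sleep=lambda s: None))  # ('ignored', 4)
```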

GGSCI (rh3.oracle.com) 4> stop replicat rep1

Sending STOP request to REPLICAT REP1 ...
Request processed.

GGSCI (rh3.oracle.com) 5> start replicat rep1

Sending START request to MANAGER ...
REPLICAT REP1 starting

Now let's test how GoldenGate replicates each kind of DDL statement:

SQL> create user guser identified by guser;
User created.

On the source, ggs_ddl_trace.log under $ORACLE_BASE/admin/$SID/udump records the CREATE USER DDL statement:
 Start of log for DDL sequence [137], v[ $Id: //depot/releases/OpenSys/v11.1.1.0/redist/SQLScripts
/ddl_setup.sql#2 $ ] trace level [0], owner schema of DDL package [MACLEAN], objtype [USER] name [.GUSER]
SESS 4294967295-2010-12-07 22:21:06 : DDLTRACE1 : Before Trigger: point in execution = [1.0], objtype [USER] name [.GUSER]
SESS 4294967295-2010-12-07 22:21:06 : DDL : DDL operation [create user guser identified by ***** ]
, sequence [137], DDL type [CREATE] USER, real object type [USER], va
lidity [], object ID [], object [.GUSER], real object [.GUSER], base object schema [], base object name [], logged as [SYS]
SESS 4294967295-2010-12-07 22:21:06 : DDL : Start SCN found [45484306]
End of log for DDL sequence [137], no DDL history metadata recorded for this DDL operation

Replication result on the target:
SQL>  select username from dba_users where created>=all(select created from dba_users);
USERNAME
------------------------------
GUSER

/* The CREATE USER statement was replicated successfully */

SQL> show parameter db_create_file_dest 

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /s01/10gdb/oradata

SQL> create tablespace gtb datafile size 10M;
Tablespace created.

SQL> show parameter db_create_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /s01/oradata

SQL> select file_name from dba_data_files where tablespace_name='GTB';

FILE_NAME
--------------------------------------------------------------------------------
/s01/oradata/CLINICB/datafile/o1_mf_gtb_6hxjpstg_.dbf

/* With 10g Oracle-managed datafile creation enabled on both source and
    target, the CREATE TABLESPACE DDL statement was replicated successfully
*/


/* What happens to the CREATE TABLE statement if we create a table on the
    source in a tablespace that does not exist on the target?
*/
SQL> conn sender/sender
Connected.
SQL> create table rbf (t1 int) tablespace bf;
Table created.

The target replicat report shows the following four attempts; the last three should correspond to the MAXRETRIES we defined earlier:
2010-12-08 07:37:43  INFO    OGG-01407  Setting current schema for DDL operation to [RECEIVER].
2010-12-08 07:37:48  INFO    OGG-01407  Setting current schema for DDL operation to [RECEIVER].
2010-12-08 07:37:53  INFO    OGG-01407  Setting current schema for DDL operation to [RECEIVER].
2010-12-08 07:37:58  INFO    OGG-01407  Setting current schema for DDL operation to [RECEIVER].
2010-12-08 07:37:58  INFO    OGG-01408  Restoring current schema for DDL operation to [MACLEAN].

GGSCI (rh3.oracle.com) 21>  stats replicat rep1

Sending STATS request to REPLICAT REP1 ...

No active replication maps
DDL replication statistics:

*** Total statistics since replicat started     ***
        Operations                                   4.00
        Mapped operations                            2.00
        Unmapped operations                          0.00
        Other operations                             2.00
        Excluded operations                          0.00
        Errors                                       8.00
        Retried errors                               6.00
        Discarded errors                             0.00
        Ignored errors                               2.00

/* Although the stats replicat output shows the Errors and Retried errors
    counts, neither the replicat report nor ggserr.log records the details
    of the failed DDL replication -- time to open an SR!
*/

/* Create the following simple stored procedure on the source */
SQL> conn sender/sender
SQL> create or replace procedure progg as 
  2  i  int;
  3  begin
  4  select t1 into i from tddl where rownum=1;
  5  dbms_output.put_line(i);
  6  end;
  7  /
Procedure created.

SQL> conn receiver/receiver
Connected.

SQL> select t1 from tddl;

        T1
----------
         1
         2

SQL> set serveroutput on;
SQL> exec progg;
1

PL/SQL procedure successfully completed.

/* Because the procedure's dependencies exist on the target, it compiled
    successfully after replication; otherwise the replication still succeeds
    but the object is left INVALID */
SQL> drop procedure progg;
Procedure dropped.
SQL> drop table tddl;
Table dropped.

After recreating the procedure on the source:

SQL> col object_name for a30 
SQL> select object_name,status  from dba_objects where object_name='PROGG';

OBJECT_NAME                    STATUS
------------------------------ -------
PROGG                          INVALID
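Once the missing dependency (the tddl table here) is recreated on the target, the INVALID procedure can be revalidated manually; a sketch:

```sql
-- Recompile the replicated procedure after its dependencies exist again
ALTER PROCEDURE progg COMPILE;

SELECT object_name, status FROM dba_objects WHERE object_name = 'PROGG';
```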

/* Create a profile on the source */
SQL> create profile  gg_profile limit sessions_per_user  2;               
Profile created.

SQL> select RESOURCE_NAME,LIMIT from dba_profiles where profile='GG_PROFILE' and limit!='DEFAULT';

RESOURCE_NAME                    LIMIT
-------------------------------- ----------------------------------------
SESSIONS_PER_USER                2

/* The profile was replicated to the target successfully */

/* Create a directory object on the source */
SQL> create directory dgg as '/s01/dump';
Directory created.

SQL> col directory_name for a3
SQL> col directory_path for a20
SQL> select directory_name,directory_path from dba_directories where directory_name='DGG';

DIR DIRECTORY_PATH
--- --------------------
DGG /s01/dump
/* Provided the /s01/dump directory exists on the file systems of both sides, the CREATE DIRECTORY DDL replicates successfully */

/* Create an external table of type oracle_datapump on the source side */
SQL> CREATE TABLE extgg
  2  ORGANIZATION EXTERNAL
  3  (TYPE oracle_datapump
  4  DEFAULT DIRECTORY dgg
  5  LOCATION ('extgg.dat'))
  6  AS
  7  select * from dba_objects;

Table created.

Multiple retry records appear in the target-side replicat:
2010-12-08 08:13:09  INFO    OGG-01407  Setting current schema for DDL operation to [RECEIVER].
2010-12-08 08:13:14  INFO    OGG-01407  Setting current schema for DDL operation to [RECEIVER].
2010-12-08 08:13:19  INFO    OGG-01407  Setting current schema for DDL operation to [RECEIVER].
2010-12-08 08:13:24  INFO    OGG-01407  Setting current schema for DDL operation to [RECEIVER].
2010-12-08 08:13:24  INFO    OGG-01408  Restoring current schema for DDL operation to [MACLEAN].

/* Meanwhile the DDL Errors count increases */
GGSCI (rh3.oracle.com) 34> stats replicat rep1
Sending STATS request to REPLICAT REP1 ...
No active replication maps
DDL replication statistics:
*** Total statistics since replicat started     ***
        Operations                                  11.00
        Mapped operations                            6.00
        Unmapped operations                          0.00
        Other operations                             5.00
        Excluded operations                          0.00
        Errors                                      33.00
        Retried errors                              24.00
        Discarded errors                             0.00
        Ignored errors                               9.00

/* The DDL statement creating the external table clearly failed to replicate; since no relevant log can be found at the moment, further analysis is not possible */

How to create a GoldenGate uni-directional target database from a production database with zero downtime

Problem Description: We have a production database and need to configure GoldenGate uni-directional replication in real time without downtime. Is there a best practice, white paper, or knowledge document that would help us in designing the transition? We can start with a pair of databases in a Data Guard physical replication, or start with a brand-new target database.

Generic steps provided by Metalink:

Implementing Oracle GoldenGate


0) Install OGG on the source and target systems
1) Establish what tables are to be replicated from the source and what tables they will be mapped to on the target.
2) Run defgen against the source tables and ftp the created defs file (call it defs.def for example) to the
dirdef subdirectory on the target install.
3) Create an ongoing capture extract to capture changes to the archive / redo logs.
4) Add the extract via ggsci
5) add the exttrail via ggsci
6) Create a datapump to read the local trail and write a remote trail to the target machine.
7) Start the local capture and datapump
8) Create an ongoing replicat. Do not start it.
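
A minimal sketch of steps 2 through 8 as shell/GGSCI commands; all process names, trail prefixes, and file paths here (ext1, pump1, rep1, ./dirdat/aa, ...) are illustrative placeholders, not from the note:

```
-- step 2: generate source table definitions, then ftp defs.def to the target dirdef/
$ defgen paramfile dirprm/defgen.prm

-- on the source system
GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW           -- steps 3-4: ongoing redo capture
GGSCI> ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1         -- step 5: local trail
GGSCI> ADD EXTRACT pump1, EXTTRAILSOURCE ./dirdat/aa  -- step 6: datapump reads local trail
GGSCI> ADD RMTTRAIL ./dirdat/bb, EXTRACT pump1        --          and writes a remote trail
GGSCI> START EXTRACT ext1                             -- step 7
GGSCI> START EXTRACT pump1

-- on the target system
GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/bb        -- step 8: create, but do not start yet
```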


9) Initial Load the target database
This can be done with native or local utilities
or with GoldenGate as follows

Begin Initial Load:
Much of what you will do depends on having enough resources and space.
Divide up your tables so that you can create several (say 10) extracts and their datapumps.
Create a sourceisfile extract for each grouping of tables. They can be run in parallel if you have the horsepower.
Create an initial load replicat for each extract/datapump thread.
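
One such initial-load pair might be sketched with parameter files like these; user IDs, host, file names, and the table list are assumptions for illustration (the extract parameter file uses SOURCEISTABLE, and the process is run in the "sourceisfile" fashion from the command line):

```
-- initext1.prm : initial-load extract writing a file-based trail
SOURCEISTABLE
USERID sender, PASSWORD xxxx
RMTHOST targethost, MGRPORT 7809
RMTFILE ./dirdat/i1, MAXFILES 10, MEGABYTES 2000
TABLE sender.tab1;
TABLE sender.tab2;

-- initrep1.prm : matching initial-load replicat
SPECIALRUN
USERID receiver, PASSWORD xxxx
EXTFILE ./dirdat/i1
SOURCEDEFS ./dirdef/defs.def
MAP sender.*, TARGET receiver.*;
END RUNTIME
```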


There are 3 ways I recommend to do the initial load with OGG.
First, do it in the normal prescribed manner, writing to the target side over TCP;
or
second, write to local disks and ftp the written trails to the target side.
Run the initial load replicats against the trail files. The ftp method is faster.
Or, if you have disks connected to both source and target databases, there is an even faster method.
This method is especially useful for very large databases.


Begin initial load method
Have your sourceisfile extracts write their trails to shared disks.
You can start the initial load replicats while their trails are still being written. This reduces overall time.
You are loading the target at the same time you are unloading the source.
When all initial load replicats are finished, they can be stopped.
End Initial Load




Start the ongoing replicat(s) with HANDLECOLLISIONS turned on. When the replicats catch up, stop them and
remove HANDLECOLLISIONS, and restart them.
You should be migrated.
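
The catch-up step above amounts to toggling one parameter in the ongoing replicat's parameter file (rep1 and the schema names are assumed):

```
-- rep1.prm while the replicat is catching up after the initial load
REPLICAT rep1
USERID receiver, PASSWORD xxxx
HANDLECOLLISIONS                 -- resolve duplicate/missing-row collisions from the load window
MAP sender.*, TARGET receiver.*;

-- once the replicat has caught up with the source:
--   GGSCI> STOP REPLICAT rep1
--   remove HANDLECOLLISIONS (or add NOHANDLECOLLISIONS) in rep1.prm
--   GGSCI> START REPLICAT rep1
```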
You can divide up your tables into as many extracts as you have memory and CPU power to do so.
If you use the ftp method, you will need at least 1.3 times the amount of data in your database on the
source and on the target for intermediate trails.
If you use the shared disk method, and start the initial load replicats while extraction is going on, you can
get by with a lot less space.

N.B.
Creating initial load trails for this size of data requires a special technique.
Use the options megabytes 2000, maxtrails 900 for the extfile. Make the trail file name two characters.
This will cause the initial load trails to look just like normal trails.
Your initial load replicats will look just like regular replicats because they will be reading a trail of files, not a single file.
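
As a sketch, the N.B. corresponds to an EXTFILE clause like the one below; "maxtrails" is quoted as in the note, while current documentation spells the rollover option MAXFILES, so verify against your OGG release:

```
-- a two-character file name makes the rollover files numbered like a normal trail
-- (aa000000, aa000001, ...), so regular replicats can read them
EXTFILE ./dirdat/aa, MEGABYTES 2000, MAXFILES 900
```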

11g New Feature: Note raised when explain plan for create index

This is a handy little feature: in 11g R2, when you run explain plan for a CREATE INDEX statement, Oracle now reports the estimated index size in a Note:

SQL> set linesize 200 pagesize 1400;
SQL> explain plan for create index ind_t23 on maclean(t2,t3);
Explained.

SQL> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
Plan hash value: 2510282917
----------------------------------------------------------------------------------
| Id  | Operation              | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | CREATE INDEX STATEMENT |         |  1000K|  9765K|  1267   (1)| 00:00:16 |
|   1 |  INDEX BUILD NON UNIQUE| IND_T23 |       |       |            |          |
|   2 |   SORT CREATE INDEX    |         |  1000K|  9765K|            |          |
|   3 |    TABLE ACCESS FULL   | MACLEAN |  1000K|  9765K|   760   (2)| 00:00:10 |
----------------------------------------------------------------------------------

Note
-----
   - estimated index size: 24M bytes

14 rows selected.

SQL> create index ind_t23 on maclean(t2,t3);
Index created.

SQL> select bytes/1024/1024 from dba_segments where segment_name='IND_T23';
BYTES/1024/1024
---------------
             23

SQL> analyze index ind_t23 validate structure;
Index analyzed.

SQL> select btree_space/1024/1024,used_space/1024/1024 from index_stats;
BTREE_SPACE/1024/1024 USED_SPACE/1024/1024
--------------------- --------------------
           22.3849487           20.0912952

/* The estimated index size given by explain plan is quite close to the actual space occupied by the index */
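
As a quick arithmetic check of how close the optimizer's estimate came, using the figures from the output above:

```python
# Figures taken from the transcript above
estimated_mb   = 24.0         # "estimated index size: 24M bytes" in the plan Note
segment_mb     = 23.0         # dba_segments.bytes/1024/1024 for IND_T23
btree_space_mb = 22.3849487   # index_stats.btree_space/1024/1024

# Relative error of the estimate against each measured figure
err_segment = (estimated_mb - segment_mb) / segment_mb
err_btree   = (estimated_mb - btree_space_mb) / btree_space_mb

print(f"estimate vs allocated segment: {err_segment:+.1%}")  # about +4.3%
print(f"estimate vs btree space:       {err_btree:+.1%}")    # about +7.2%
```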
