Does Oracle GoldenGate support Parallel DML?

GoldenGate works by mining the redo logs to capture data changes in the database. If we use parallel DML to insert data, however, the direct-path inserts involved may generate only a small amount of redo. Can OGG's log mining still correctly capture the data changes produced by such parallel DML? Let's test it:

SQL> select le.leseq "Current log sequence No",
  2         100 * cp.cpodr_bno / le.lesiz "Percent Full",
  3         (cpodr_bno - 1) * 512 "Current Offset",
  4         le.lesiz * 512 - cpodr_bno * 512 "Left space"
  5    from x$kcccp cp, x$kccle le
  6   where LE.leseq = CP.cpodr_seq
  7     and bitand(le.leflg, 24) = 8;

Current log sequence No Percent Full Current Offset Left space
----------------------- ------------ -------------- ----------
                    177   78.5112305       82324480   22532608

/* The query above shows the actual redo write position:
    Current Offset is how far into the current log file the writer has gotten,
    and Left space is how much room remains in it.
82324480(Current Offset)+22532608(Left space)+512(redo header)=logfile size=le.lesiz* redo block size
*/
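The identity in the comment can be checked mechanically. A minimal sketch in Python, assuming a 512-byte redo block size and back-computing le.lesiz and cpodr_bno from the figures shown (both are assumptions derived from the output, not values read from x$):

```python
# Verify: Current Offset + Left space + 512 (redo header) == logfile size == le.lesiz * 512
REDO_BLOCK = 512
lesiz = 104857600 // REDO_BLOCK          # log size in redo blocks, assuming 100MB logs
cpodr_bno = 82324480 // REDO_BLOCK + 1   # current redo block number, from the offset shown

current_offset = (cpodr_bno - 1) * REDO_BLOCK
left_space = lesiz * REDO_BLOCK - cpodr_bno * REDO_BLOCK

print(current_offset)                             # 82324480
print(left_space)                                 # 22532608
print(current_offset + left_space + REDO_BLOCK)   # 104857600, the logfile size
```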

SQL> alter system switch logfile;
System altered.

SQL> select le.leseq "Current log sequence No",
  2         100 * cp.cpodr_bno / le.lesiz "Percent Full",
  3         (cpodr_bno - 1) * 512 "Current Offset",
  4         le.lesiz * 512 - cpodr_bno * 512 "Left space"
  5    from x$kcccp cp, x$kccle le
  6   where LE.leseq = CP.cpodr_seq
  7     and bitand(le.leflg, 24) = 8;

Current log sequence No Percent Full Current Offset Left space
----------------------- ------------ -------------- ----------
                    179   .002441406           2048  104855040

/* Starting position: log file No 179 */

SQL> select le.leseq "Current log sequence No",
  2         100 * cp.cpodr_bno / le.lesiz "Percent Full",
  3         (cpodr_bno - 1) * 512 "Current Offset",
  4         le.lesiz * 512 - cpodr_bno * 512 "Left space"
  5    from x$kcccp cp, x$kccle le
  6   where LE.leseq = CP.cpodr_seq
  7     and bitand(le.leflg, 24) = 8;

Current log sequence No Percent Full Current Offset Left space
----------------------- ------------ -------------- ----------
                    180    58.277832       61108224   43748864

/* A plain, non-parallel DML insert generated 104855040+61108224, roughly 158MB, of redo */


SQL> truncate table tv;
Table truncated.

SQL> select le.leseq "Current log sequence No",
  2         100 * cp.cpodr_bno / le.lesiz "Percent Full",
  3         (cpodr_bno - 1) * 512 "Current Offset",
  4         le.lesiz * 512 - cpodr_bno * 512 "Left space"
  5    from x$kcccp cp, x$kccle le
  6   where LE.leseq = CP.cpodr_seq
  7     and bitand(le.leflg, 24) = 8;

Current log sequence No Percent Full Current Offset Left space
----------------------- ------------ -------------- ----------
                    180   60.6469727       63592448   41264640

/* Starting position: offset 63592448 in log file No 180 */

SQL> alter session enable parallel dml;
Session altered.

/* Enable parallel DML at the session level */

SQL> set autotrace on;
SQL> insert /*+ parallel(tv,4) */ into tv select * from sample;

3640772 rows created.
Execution Plan
----------------------------------------------------------
ERROR:
ORA-12838: cannot read/modify an object after modifying it in parallel
SP2-0612: Error generating AUTOTRACE EXPLAIN report
Statistics
----------------------------------------------------------
        111  recursive calls
       1168  db block gets
      17850  consistent gets
      17745  physical reads
      97944  redo size
        815  bytes sent via SQL*Net to client
        750  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
    3640772  rows processed

/* The autotrace statistics show the parallel insert generating only 97944 bytes of redo */

SQL> commit;
Commit complete.

SQL> select le.leseq "Current log sequence No",
  2         100 * cp.cpodr_bno / le.lesiz "Percent Full",
  3         (cpodr_bno - 1) * 512 "Current Offset",
  4         le.lesiz * 512 - cpodr_bno * 512 "Left space"
  5    from x$kcccp cp, x$kccle le
  6   where LE.leseq = CP.cpodr_seq
  7     and bitand(le.leflg, 24) = 8;

Current log sequence No Percent Full Current Offset Left space
----------------------- ------------ -------------- ----------
                    182   10.4882813       10997248   93859840

/* In fact the log switched from 180 to 182; the actual redo generated is roughly 41264640+104857600+10997248, about 150MB */
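The arithmetic behind that estimate: the unconsumed tail of log 180, all of log 181, and the bytes already written into log 182. A quick sketch, assuming 100MB online logs (inferred from the Left space figures, not queried directly):

```python
# Redo generated between the two snapshots, spanning two log switches.
LOG_SIZE = 104857600       # 100MB online redo logs (assumption from the figures above)
rest_of_180 = 41264640     # "Left space" of log 180 at the previous snapshot
into_182 = 10997248        # "Current Offset" in log 182 now

total = rest_of_180 + LOG_SIZE + into_182
print(total)                              # 157119488 bytes
print(round(total / (1024 * 1024), 1))    # 149.8, roughly 150MB
```

Roughly 150MB of real redo, against the 97944 bytes autotrace reported.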

/* In other words, autotrace reports far less redo for a parallel DML statement than is actually generated
    (the redo written by the parallel slave sessions is not counted in the coordinator session's statistics).
    The full redo stream is still written, which is what allows extract to capture every data change caused by parallel DML
*/

GGSCI (rh2.oracle.com) 59> stats load2
Sending STATS request to EXTRACT LOAD2 ...

Start of Statistics at 2010-12-16 20:17:35.

Output to /s01/new/ze:

Extracting from CLINIC.TV to CLINIC.TV:

*** Total statistics since 2010-12-16 20:17:24 ***
        Total inserts                           923555.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        923555.00

*** Daily statistics since 2010-12-16 20:17:24 ***
        Total inserts                           923555.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        923555.00

*** Hourly statistics since 2010-12-16 20:17:24 ***
        Total inserts                           923555.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        923555.00

*** Latest statistics since 2010-12-16 20:17:24 ***
        Total inserts                           923555.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        923555.00

End of Statistics.


GGSCI (rh2.oracle.com) 60> !
stats load2

Sending STATS request to EXTRACT LOAD2 ...

Start of Statistics at 2010-12-16 20:17:37.

Output to /s01/new/ze:

Extracting from CLINIC.TV to CLINIC.TV:

*** Total statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1090336.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1090336.00

*** Daily statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1090336.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1090336.00

*** Hourly statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1090336.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1090336.00

*** Latest statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1090336.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1090336.00

End of Statistics.


GGSCI (rh2.oracle.com) 61> !
stats load2

Sending STATS request to EXTRACT LOAD2 ...

Start of Statistics at 2010-12-16 20:17:39.

Output to /s01/new/ze:

Extracting from CLINIC.TV to CLINIC.TV:

*** Total statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1249284.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1249284.00

*** Daily statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1249284.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1249284.00

*** Hourly statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1249284.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1249284.00

*** Latest statistics since 2010-12-16 20:17:24 ***
        Total inserts                           1249284.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        1249284.00

End of Statistics.

/* Total inserts in the extract statistics keeps climbing, showing that extract correctly captured
    all the direct-path inserts performed by the parallel INSERT
*/

Does Oracle GoldenGate support Parallel DML?
The conclusion is clearly: Yes.

GoldenGate can't extract data from compressed tables

OGG currently does not support extract for the compressed tables introduced in 10g; if extract mines a DML record against a compressed table, it abends:

SQL> alter table sales compress 2;
Table altered.

SQL>  update sales set  AMOUNT_SOLD= AMOUNT_SOLD +1 where rownum<200;
199 rows updated.

SQL> commit;
Commit complete.

extract report:
***********************************************************************
**                     Run Time Messages                             **
***********************************************************************
2010-12-12 23:08:44  INFO    OGG-01517  Position of first record processed Sequence 51, 
RBA 12853264, SCN 0.53443895, 2010-12-12 11:08:42 PM.
TABLE resolved (entry CLINIC.SALES):
  Table CLINIC.SALES;

Source Context :
  SourceModule            : [ggdb.ora.dbx]
  SourceID                : [/home/ecloud/workspace/Build_FBO_OpenSys_r11.1.1.0.0_078_[34093]/perforce/src/gglib/ggdbora/dbxoci.c]
  SourceFunction          : [get_object_attributes]
  SourceLine              : [5122]
  ThreadBacktrace         : [20] elements
                          : [/home/maclean/gg/extract(CMessageContext::AddThreadContext()+0x26) [0x7f6596]]
                          : [/home/maclean/gg/extract(CMessageFactory::CreateMessage(
CSourceContext*, unsigned int, ...)+0x7b2) [0x7ed032]]
                          : [/home/maclean/gg/extract(_MSG_ERR_ORACLE_COMPRESSED_TABLE
(CSourceContext*, DBString<777> const&, CMessageFactory::MessageDisposition)+0x92) [0x79c842]]
                          : [/home/maclean/gg/extract(gl_get_odbc_table_def(file_def*, short, char*)+0x6a4) [0x6a7ee4]]
                          : [/home/maclean/gg/extract [0x54e03e]]
                          : [/home/maclean/gg/extract [0x54f078]]
                          : [/home/maclean/gg/extract(get_map_entry(char*, int, __wc*, int)+0x841) [0x550cf1]]
                          : [/home/maclean/gg/extract [0x62ec62]]
                          : [/home/maclean/gg/extract(WILDCARD_check_table(char const*, char const*,
 int, unsigned int*, int, unsigned int, DBString<777>*, int)+0x320) [0x630560]]
                          : [/home/maclean/gg/extract(REDOORA_lookup_table(file_def**, log_context_t*, int)+0x1398) [0x6ce678]]
                          : [/home/maclean/gg/extract [0x70b81c]]
                          : [/home/maclean/gg/extract(REDOORAOUT_get_committed_record(objectpool*, 
cache_object*, log_context_t*, chkpt_context_t*, short, int, int, int*, char*, int*, int*, int*, char*, 
short*, int*, char*)+0x449) [0x713619]]
                          : [/home/maclean/gg/extract [0x6fc77e]]
                          : [/home/maclean/gg/extract(REDOORAOPS_process_ops
(subrec_info_t*, objectpool*, objectpool*, cache_object**, log_context_t*, redo_thread_t*, 
chkpt_context_t*, long, int, int, int, char*, int*, int*, char*, short*, int*, char*)+0x5fa) [0x700bca]]
                          : [/home/maclean/gg/extract(REDO_read(char*, int*, long*, char*, short*, int, char*)+0xb6a) [0x6ca6fa]]
                          : [/home/maclean/gg/extract(READ_EXTRACT_RECORD(chkpt_context_t*, 
short, char*, __std_rec_hdr*, int*, int*, long*, short, short, char*, short*, char*, short*, 
long*, char*)+0x344) [0x4f0824]]
                          : [/home/maclean/gg/extract [0x56ab17]]
                          : [/home/maclean/gg/extract(main+0xfaa) [0x56fc3a]]
                          : [/lib64/libc.so.6(__libc_start_main+0xf4) [0x39f1c1d994]]
                          : [/home/maclean/gg/extract(__gxx_personality_v0+0x1f2) [0x4e5ada]]

2010-12-12 23:10:11  ERROR   
OGG-01433  Failed to validate table CLINIC.SALES. 
The table is compressed and extract will not be able to extract data from Oracle logs.

GoldenGate one-way large-transaction replication performance test

This is one-way GoldenGate replication; to keep things simple I configured no data pump, just the source-side extract and the target-side replicat. In practice GG handles large transactions noticeably better than Streams, which shows nothing but SPILL MESSAGES when it runs into one:

/* Source-side configuration */
GGSCI (rh2.oracle.com) 1> view params mgr

PORT 7809


GGSCI (rh2.oracle.com) 2> view params load1


--extract group--
extract load1
--connection to database--
userid ggate, password ggate
--hostname and port for trail--
rmthost rh3.oracle.com, mgrport 7809
--path and name for trail--
rmttrail /s01/gg
ddl include mapped objname sender.*
table sender.*;

GGSCI (rh2.oracle.com) 4> stats extract load1

Sending STATS request to EXTRACT LOAD1 ...

Start of Statistics at 2010-11-29 17:44:41.

DDL replication statistics (for all trails):

*** Total statistics since extract started     ***
        Operations                                   1.00
        Mapped operations                            1.00
        Unmapped operations                          0.00
        Other operations                             0.00
        Excluded operations                          0.00

Output to /s01/gg:

Extracting from GGATE.GGS_MARKER to GGATE.GGS_MARKER:

*** Total statistics since 2010-11-29 16:22:25 ***

        No database operations have been performed.

*** Daily statistics since 2010-11-29 16:22:25 ***

        No database operations have been performed.

*** Hourly statistics since 2010-11-29 17:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2010-11-29 16:22:25 ***

        No database operations have been performed.

Extracting from SENDER.ABC to SENDER.ABC:

*** Total statistics since 2010-11-29 16:22:25 ***
        Total inserts                           2621440.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        2621440.00

*** Daily statistics since 2010-11-29 16:22:25 ***
        Total inserts                           2621440.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        2621440.00

*** Hourly statistics since 2010-11-29 17:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2010-11-29 16:22:25 ***
        Total inserts                           2621440.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        2621440.00

End of Statistics.

GGSCI (rh2.oracle.com) 5> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     LOAD1       00:00:00      00:00:08    

/* Target-side configuration */

GGSCI (rh3.oracle.com) 1> view params mgr

PORT 7809
USERID ggate, PASSWORD ggate
PURGEOLDEXTRACTS /s01/gg


GGSCI (rh3.oracle.com) 2> view params rep1

--Replicat group --
replicat rep1
--source and target definitions
ASSUMETARGETDEFS
--target database login --
userid ggate, password ggate
--file for discarded transactions --
discardfile /s01/discard.txt, append, megabytes 10
--ddl support
DDL
--Specify table mapping ---
map sender.*, target receiver.*;

GGSCI (rh3.oracle.com) 3> stats replicat rep1

Sending STATS request to REPLICAT REP1 ...

Start of Statistics at 2010-11-30 02:44:16.

DDL replication statistics:

*** Total statistics since replicat started     ***
        Operations                                   1.00
        Mapped operations                            1.00
        Unmapped operations                          0.00
        Other operations                             0.00
        Excluded operations                          0.00
        Errors                                       0.00
        Retried errors                               0.00
        Discarded errors                             0.00
        Ignored errors                               0.00

Replicating from SENDER.ABC to RECEIVER.ABC:

*** Total statistics since 2010-11-30 01:21:09 ***
        Total inserts                           2621440.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        2621440.00

*** Daily statistics since 2010-11-30 01:21:09 ***
        Total inserts                           2621440.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        2621440.00

*** Hourly statistics since 2010-11-30 02:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2010-11-30 01:21:09 ***
        Total inserts                           2621440.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                        2621440.00

End of Statistics.

GGSCI (rh3.oracle.com) 4> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING                                           
REPLICAT    RUNNING     REP1        00:00:00      00:00:01   

/* Next, generate a large transaction against the SENDER schema on the source database and watch the data synchronize on the target side */

SQL> delete sender.abc;
2621440 rows deleted.

SQL> commit;
Commit complete.
/* Until the commit completes, the EXTRACT process does not capture the transaction from the logs */

GoldenGate's ggserr.log records:
2010-11-29 17:49:49  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000029.
2010-11-29 17:49:50  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000030.
2010-11-29 17:49:51  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000031.
2010-11-29 17:49:53  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000032.
2010-11-29 17:49:54  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000033.
2010-11-29 17:49:55  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000034.
2010-11-29 17:49:56  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000035.
2010-11-29 17:49:57  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000036.
2010-11-29 17:49:58  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000037.
2010-11-29 17:49:59  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000038.
2010-11-29 17:50:00  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000039.
2010-11-29 17:50:01  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000040.
2010-11-29 17:50:03  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000041.
2010-11-29 17:50:04  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000042.
2010-11-29 17:50:05  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000043.
2010-11-29 17:50:06  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000044.
2010-11-29 17:50:07  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000045.
2010-11-29 17:50:08  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000046.
2010-11-29 17:50:09  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000047.
2010-11-29 17:50:10  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000048.
2010-11-29 17:50:11  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000049.
2010-11-29 17:50:13  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000050.
2010-11-29 17:50:14  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000051.
2010-11-29 17:50:15  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000052.
2010-11-29 17:50:16  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000053.
2010-11-29 17:50:17  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, load1.prm:  Rolling over remote file /s01/gg000054.

/* 26 trail files were generated */
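The file count follows from the sequence numbers in the log excerpt: gg000029 through gg000054. A trivial check, assuming each trail file is capped at about 10MB (consistent with the ls output shown further below):

```python
# Trail volume produced by the 2.6M-row delete.
first_seq, last_seq = 29, 54
n_files = last_seq - first_seq + 1
print(n_files)          # 26 trail files
print(n_files * 10)     # roughly 260 MB of trail data
```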

/* Meanwhile, the target side began applying the trail files */
REP1.rpt, the text report of the rep1 replicat process, shows:
Switching to next trail file /s01/gg000029 at 2010-11-30 02:47:54 due to EOF, with current RBA 9999949
Opened trail file /s01/gg000029 at 2010-11-30 02:47:54

Switching to next trail file /s01/gg000030 at 2010-11-30 02:49:53 due to EOF, with current RBA 9999925
Opened trail file /s01/gg000030 at 2010-11-30 02:49:53

GGSCI (rh3.oracle.com) 6> stats replicat rep1

Sending STATS request to REPLICAT REP1 ...

Start of Statistics at 2010-11-30 02:52:20.

DDL replication statistics:

*** Total statistics since replicat started     ***
        Operations                                   1.00
        Mapped operations                            1.00
        Unmapped operations                          0.00
        Other operations                             0.00
        Excluded operations                          0.00
        Errors                                       0.00
        Retried errors                               0.00
        Discarded errors                             0.00
        Ignored errors                               0.00

Replicating from SENDER.ABC to RECEIVER.ABC:

*** Total statistics since 2010-11-30 01:21:09 ***
        Total inserts                           2621440.00
        Total updates                                0.00
        Total deletes                           208150.00
        Total discards                               0.00
        Total operations                        2829590.00

*** Daily statistics since 2010-11-30 01:21:09 ***
        Total inserts                           2621440.00
        Total updates                                0.00
        Total deletes                           208150.00
        Total discards                               0.00
        Total operations                        2829590.00

*** Hourly statistics since 2010-11-30 02:00:00 ***
        Total inserts                                0.00
        Total updates                                0.00
        Total deletes                           208150.00
        Total discards                               0.00
        Total operations                        208150.00

*** Latest statistics since 2010-11-30 01:21:09 ***
        Total inserts                           2621440.00
        Total updates                                0.00
        Total deletes                           208150.00
        Total discards                               0.00
        Total operations                        2829590.00

End of Statistics.


GGSCI (rh3.oracle.com) 7> !
stats replicat rep1

Sending STATS request to REPLICAT REP1 ...

Start of Statistics at 2010-11-30 02:52:26.

DDL replication statistics:

*** Total statistics since replicat started     ***
        Operations                                   1.00
        Mapped operations                            1.00
        Unmapped operations                          0.00
        Other operations                             0.00
        Excluded operations                          0.00
        Errors                                       0.00
        Retried errors                               0.00
        Discarded errors                             0.00
        Ignored errors                               0.00

Replicating from SENDER.ABC to RECEIVER.ABC:

*** Total statistics since 2010-11-30 01:21:09 ***
        Total inserts                           2621440.00
        Total updates                                0.00
        Total deletes                           210767.00
        Total discards                               0.00
        Total operations                        2832207.00

*** Daily statistics since 2010-11-30 01:21:09 ***
        Total inserts                           2621440.00
        Total updates                                0.00
        Total deletes                           210767.00
        Total discards                               0.00
        Total operations                        2832207.00

*** Hourly statistics since 2010-11-30 02:00:00 ***
        Total inserts                                0.00
        Total updates                                0.00
        Total deletes                           210767.00
        Total discards                               0.00
        Total operations                        210767.00

*** Latest statistics since 2010-11-30 01:21:09 ***
        Total inserts                           2621440.00
        Total updates                                0.00
        Total deletes                           210767.00
        Total discards                               0.00
        Total operations                        2832207.00

End of Statistics.


/* The replicat statistics above show that it is applying the delete operations */

Switching to next trail file /s01/gg000031 at 2010-11-30 02:53:26 due to EOF, with current RBA 9999925
Opened trail file /s01/gg000031 at 2010-11-30 02:53:26

Switching to next trail file /s01/gg000032 at 2010-11-30 02:58:28 due to EOF, with current RBA 9999925
Opened trail file /s01/gg000032 at 2010-11-30 02:58:28

[maclean@rh3 s01]$ ls -lh gg00003[1-2]
-rw-rw-rw- 1 maclean oinstall 9.6M Nov 30 02:47 gg000031
-rw-rw-rw- 1 maclean oinstall 9.6M Nov 30 02:47 gg000032

/* From the report excerpts above we can infer that on a PC like this one (dual-core, 2.2GHz), GG needs nearly 5 minutes to apply a single 9.6MB trail file; admittedly a rough estimate */


/* The process arguments show that the replicat and extract processes are essentially $GGATE/replicat and $GGATE/extract invoked with the corresponding parameter files */
[maclean@rh3 ~]$ ps -ef|grep repl
maclean   7817  7476  3 01:18 ?        00:03:29 
/home/maclean/gg/replicat PARAMFILE /home/maclean/gg/dirprm/rep1.prm 
REPORTFILE /home/maclean/gg/dirrpt/REP1.rpt 
PROCESSID REP1 USESUBDIRS

[maclean@rh2 dirrpt]$ ps -ef|grep ex
maclean    544 32432  1 16:21 ?        00:01:12 
/home/maclean/gg/extract PARAMFILE /home/maclean/gg/dirprm/load1.prm 
REPORTFILE /home/maclean/gg/dirrpt/LOAD1.rpt 
PROCESSID LOAD1 USESUBDIRS

Migrated from my 2007 blog: datafile block extract lab

Yesterday, on the Linux server at home, I tried writing a small C program to extract data blocks; the system calls involved are nothing more than standard file reads and writes.

Reading the block header went smoothly enough. Most of the block types were unfamiliar to me, simply for lack of exposure: I had only ever seen tables and indexes, never IOTs or clusters. In fairness, very few sites in China use such advanced features. I joined an Oracle mailing list and found that people abroad study the database layer tirelessly, and have both the skill and the nerve to adopt any technique that gives them an edge; that spirit is admirable.

After the block header come the transaction (ITL) slots which, like the row directory, vary in number. Row data is stored backwards, filled from the tail of the block toward the head, and that much is presumably the same in every DBMS today. Much of Oracle's subtlety lies in how these ITL slots, together with the undo segments, implement read consistency. On the other hand, Oracle carries an enormous burden to deliver its performance; no wonder latches resort to assembly instructions, which in turn makes porting to new platforms harder.

At the end of the row directory sit the absolute offsets of each row: offsets = sizeof(head) + phead->itc * ITL_SIZE, and the relative address is pri[j] + offsets.

With that done, extracting multiple rows still produced ragged output. After several hours of going back and forth, I suddenly realized the format I had been consulting was for 9i; no wonder the rows never lined up.

Looking back, Oracle's file format holds essentially no mystery today, but in shared pool management, the SQL optimizer, and overall architecture Oracle remains the undisputed leader. Surpassing it is extraordinarily hard, and even a successful challenger would not find the market easy.

The data block format was worked out by predecessors one byte at a time; their dedication deserves respect.

Offset 0 1 2 3 4 5 6 7 8 9 a b c d e f
00014000 06 A2 00 00 0A 00 40 01 0E 89 43 00 00 00 05 02
type frmt spare1/2_kcbh rdba scn seq flg
1 : 20 bytes
type: 0x06=trans data defined in kcb.h
frmt: 0x02 in 8i~9i; in 10.1.0: 2k: 0x62, 4k: 0x82, 8k: 0xa2, 16k: 0xc2 (logfile: 0x22, 512 bytes)
spare1/2_kcbh: ub1 spare1_kcbh, this field is no longer used (old inc#, now always 0);
ub1 spare2_kcbh, this field is no longer used (old ts#, now always 0)
rdba: 0x0140000a; in binary, the top 10 bits are the file id and the bottom 22 bits the block id,
so a tablespace can hold up to 1023 datafiles and each datafile up to 4M blocks.
With a 10g bigfile tablespace the whole field is the block id and there is no file id.
Verified on 9.2.0: a tablespace can have 1023 datafiles, and one object can span all 1023 of them.
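The 10/22 bit split described above can be sketched in a few lines of Python:

```python
# Decode the rdba above: top 10 bits = relative file#, bottom 22 bits = block#.
rdba = 0x0140000A
file_no = rdba >> 22
block_no = rdba & ((1 << 22) - 1)
print(file_no, block_no)                            # 5 10
# 22 bits of block# allow 4M blocks per file; 10 bits allow up to 1023 files
print((1 << 22) // (1024 * 1024), (1 << 10) - 1)    # 4 1023
```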
scn: 0x0000.0043890e
seq: A sequence number incremented for each change to a block at the same SCN;
a new SCN is allocated if the sequence number wraps.
If a single SCN changes more than 254 rows in one block, a new SCN is allocated to the transaction;
an operation such as "delete from table_name" can easily touch more than 254 rows of the same block.
The affected row count (max 254) is encoded as 0x01 through 0xfe;
a value of 0xff marks the block as corrupt, raising ORA-01578
Sequence number:
SEQ -> 0 /* non-logged changes - do not advance seq# */
SEQ -> (UB1MAXVAL-1) /* maximum possible sequence number */
SEQ -> (UB1MAXVAL) /* seq# indicating the block is corrupt, equal to 0xFF: soft corrupt */
0xff : When present it indicates that the block has been marked as corrupt by Oracle, either by the db_block_checking functionality or the equivalent events (10210 for data blocks, 10211 for index blocks, and 10212 for cluster blocks) when making a database change, or by the DBMS_REPAIR.FIX_CORRUPT_BLOCKS procedure, or by PMON after an unsuccessful online block recovery attempt while recovering a failed process, or by RMAN during a BACKUP, COPY or VALIDATE command with the CHECK LOGICAL option. Logical corruptions are normally due to either recovery through a NOLOGGING operation, or an Oracle software bug.
flg: as defined in kcbh.h
#define KCBHFNEW 0x01 /* new block – zeroed data area */
#define KCBHFDLC 0x02 /* Delayed Logging Change advance SCN/seq */
#define KCBHFCKV 0x04 /* ChecK Value saved-block xor’s to zero */
#define KCBHFTMP 0x08 /* Temporary block */
These flag bits can be combined: a value of 6, for example, means both 0x02 and 0x04 are set.
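Since the flag byte is a plain bitmask, decomposing a value takes only a line or two; the KCBHF* names are copied from the #defines above:

```python
# Decompose a kcbh flag byte into its named bits.
FLAGS = {0x01: "KCBHFNEW", 0x02: "KCBHFDLC", 0x04: "KCBHFCKV", 0x08: "KCBHFTMP"}
flg = 6
set_bits = [name for bit, name in FLAGS.items() if flg & bit]
print(set_bits)    # ['KCBHFDLC', 'KCBHFCKV']
```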
Block structure as defined in kcbh.h:
struct kcbh
{
  ub1   type_kcbh;     /* Block type */
  ub1   frmt_kcbh;     /* #define KCBH_FRMT8 2 */
  ub1   spare1_kcbh;
  ub1   spare2_kcbh;
  krdba rdba_kcbh;     /* relative DBA */
  ub4   bas_kcbh;      /* base of SCN */
  ub2   wrp_kcbh;      /* wrap of SCN */
  ub1   seq_kcbh;      /* sequence # of changes at same scn */
  ub1   flg_kcbh;
  ub2   chkval_kcbh;
};
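The struct maps directly onto the first 16 bytes of the dump at offset 0x14000. A little-endian sketch in Python (little-endian byte order is an assumption that matches this platform's dump):

```python
import struct

# First 16 bytes of the block dump at offset 0x14000, copied from the hex rows above.
raw = bytes.fromhex("06A200000A0040010E89430000000502")
typ, frmt, sp1, sp2, rdba, bas, wrp, seq, flg = struct.unpack("<BBBBIIHBB", raw)

print(hex(typ), hex(frmt))            # 0x6 0xa2  (trans data, 8k block format)
print(hex(rdba))                      # 0x140000a
print(f"scn 0x{wrp:04x}.{bas:08x}")   # scn 0x0000.0043890e
print(seq, hex(flg))                  # 5 0x2
```

The decoded rdba, SCN base, seq and flg agree with the annotated dump above.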
00014010 00 00 00 00 01 00 17 00 54 D2 00 00 0A 89 43 00
chkval spare3_kcbh typ ? seg/obj csc
spare3_kcbh : ub2 spare3_kcbh
2 : 24 bytes (44 bytes so far)
typ : 1 = DATA, 2 = INDEX
Changing it to 3 on 10.1.0 raised ORA-600 [2032] followed by ORA-27101: shared memory realm does not exist.
Oracle decides an object's type from obj$, not from this typ field,
so if you flip this flag in one of a table's blocks, the data can still be queried,
but dumping the block fails with ORA-00600: internal error code, arguments: [4555], [0], [], [], [], [], [], []
where the [0] in the error is the typ value.
On 10g, after changing it, updating the block's data and committing worked, but rollback raised an error.
? : I have seen other values here, but editing this byte shows no visible change in a block dump.
seg/obj: 0xd254
csc : 0x00.43890a The SCN at which the last full cleanout was performed on the block
00014020 00 00 E8 1F 02 00 03 00 00 00 00 00 04 00 0C 00
csc ? itc ? flg fsl fnx xid
3 : 24 bytes per ITL (with 2 ITLs, 92 bytes in total so far)
? : I have seen other values here, but changes made with a hex editor do not show up in the dump file.
itc : number of ITL entries; the maximum is 255, and exceeding it raises ORA-02207
ORA-00060 and ORA-00054 may be caused by having no space left to allocate an ITL entry, or by contention on the entries
In 8i the INITRANS default is 1; in 9.2.0 the default is 2
flg indicates that the block is on a freelist. Otherwise the flag is –
Under 9i ASSM this value is E
ixora says it occupies 2 bytes, but my experiments below differ somewhat from that result
What I observed is: Object id on Block? Y flg: O ver: 0x01
The three items above are represented by the same single byte

flg: O ver: 0x01 Object id on Block? Y
From my observation of dump files, flg, ver, and "Object id on Block"
share this one byte; the pattern can be seen from the cases below
Byte value flg ver Object id on Block?
0x00 – 0x00 N
0x01 0 0x00 N
0x02 – 0x01 Y
0x03 0 0x01 Y
0x04 – 0x02 Y
0x05 0 0x02 Y
0x06 – 0x03 Y
0x07 0 0x03 Y
0x08 – 0x04 N
0x09 0 0x04 N
0x0a – 0x05 Y
0x0b 0 0x05 Y
0x0c – 0x06 Y
0x0d 0 0x06 Y
0x0e – 0x07 Y
0x0f 0 0x07 Y
0x10 … the cycle above repeats. This has changed in 9i with the arrival of ASSM

fsl : Index to the first slot on the ITL freelist. ITL TX freelist slot
fnx : address of the next block on the freelist; null if this block is not on a freelist. Example with data: fnx: 0x1000029
00014030 50 18 00 00 96 14 80 00 B9 07 01 00 01 20 00 00
xid uba Lck Flag Scn/Fsc
xid : Transaction ID (UndoSeg.Slot.Wrap)
The values can be queried with: select XIDUSN, XIDSLOT, XIDSQN from v$transaction;
This is comprised of the rollback segment number (2 bytes), the slot number
in the transaction table of that rollback segment (2 bytes), and the number
of times use of that transaction table has wrapped (4 bytes).
uba : Undo address (UndoDBA.SeqNo.RecordNo)
The location of the undo for the most recent change to this block by this transaction. This is comprised of the DBA of the rollback segment block (4 bytes), the sequence number (2 bytes), and the record number for the change in that undo block (1 byte), plus 1 unused byte.
Lck Flag: Lck is the number of rows locked by this transaction; the data of the next byte is also used here
2 in binary is 0010, which matches the –U- shown in the dump file exactly
flag 1 nibble
C = Committed; U = Commit Upper Bound; T = Active at CSC; B = Rollback of this UBA gives before image of the ITL.
—- = transaction is active, or committed pending cleanout
C— = transaction has been committed and locks cleaned out
-B– = this undo record contains the undo for this ITL entry
–U- = transaction committed (maybe long ago); SCN is an upper bound
—T = transaction was still active at block cleanout SCN
Lck 3 nibbles
The number of row-level locks held in the block by this transaction.
Scn/Fsc : If the transaction has been cleaned out, this is the commit SCN or an upper bound thereof. Otherwise the leading two bytes contain the free space credit for the transaction – that is, the number of bytes freed in the block by the transaction
Scn = SCN of commited TX; Fsc = Free space credit (bytes)
00014040 0E 89 43 00 00 00 00 00 00 00 00 00 00 00 00 00
Scn/Fsc 2nd ITL, unused here
00014050 00 00 00 00 00 00 00 00 00 00 00 00 00 01 01 00
2nd ITL, unused here flag ntab nrow
4 : 14 bytes. The data area starts at this flag position, which is also the base address for the row offsets below
flag : N=pctfree hit(clusters), F=don't put on free list
K=flushable cluster keys. There are other flags as well: A …
ntab : number of tables with data in this block; for a cluster this can be greater than 1
nrow : number of rows in the block
00014060 FF FF 14 00 9B 1F 83 1F 83 1F 00 00 01 00 9B 1F
frre fsbo fseo avsp tosp offs nrow row offs
frre : First free row index entry. -1=you have to add one.
fsbo : Free Space Begin offset. The start, just past the row directory, of the space available for data;
it can also be seen as the space occupied from the "flag" at the start of this area to the last "row offs"
fseo : Free Space End offset (in 9.2.0 it participates in the db_block_checking free-space calculation)
On a select, Oracle does not locate rows simply by offset; this value also participates in locating rows
avsp : Available space in the block (pctfree and pctused) ORA-01578
tosp : Total available space when all TXs commit (in 9.2.0 it participates in db_block_checking)
offs : offset; a meaningful value can be seen when using clusters
nrow : number of rows for this table
row offs : starting offset of this row relative to the data area; after delete & commit it is 0xffff
00015FF0 00 00 00 00 00 00 00 2C 01 01 01 61 05 06 0E 89
fb lb cc length data block tail
5 : user data
6 : 4 bytes block tail
fb : K = Cluster Key (Flags may change meaning if this is set to show HASH cluster)
C = Cluster table member
H = Head piece of row
D = Deleted row
F = First data piece
L = Last data piece
P = First column continues from previous piece
N = Last column continues in next piece
lb : corresponds to the lck in the ITL above; indicates whether this row is locked
cc : number of columns; one byte can only represent 255 columns, beyond which the row is chained
length : length of this column's data
0xfa ( 250 bytes ); in fact 0xfb, 0xfc, 0xfd also mean 250 bytes
0xfe fb 00 ( 0xfb 00 represents 251 bytes; 0xfe indicates the length exceeds 250 bytes)
0xff represents a NULL number; this is Oracle's representation of NULL, which is why NULL sorts highest
When a column's data exceeds 250 bytes, 3 bytes are used to hold the column length. Even for a LONG column,
its data within a single block cannot exceed 64K, so 3 bytes is always enough to represent the length; anything longer becomes a chained row.
data : 'a'
block tail : changing any of the last 4 bytes of this block is guaranteed to raise ORA-01578
1st byte : matches the seq at the block head
2nd byte : matches the type at the block head
3rd and 4th bytes : match the last 2 bytes of the SCN at the block head; for a control file this is the control seq
10.1.0~lgone@ONE.LG.OK> create table a(v varchar2(4000)) TABLESPACE t;

Table created.

10.1.0~lgone@ONE.LG.OK> insert into a values('a');

1 row created.

Start dump data blocks tsn: 17 file#: 5 minblk 10 maxblk 10
buffer tsn: 17 rdba: 0x0140000a (5/10)
   //// buffer tsn:
         the tablespace number of the datafile. This is only information recorded in the dump file;
         the block itself does not store the tablespace number.

scn: 0x0000.0043890e seq: 0x05 flg: 0x02 tail: 0x890e0605
frmt: 0x02 chkval: 0x0000 type: 0x06=trans data
Block header dump:  0x0140000a
 Object id on Block? Y
 seg/obj: 0xd254  csc: 0x00.43890a  itc: 2  flg: O  typ: 1 - DATA
     fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0004.00c.00001850  0x00801496.07b9.01  --U-    1  fsc 0x0000.0043890e
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000

data_block_dump,data header at 0x87e125c
   ////  data_block_dump,data header at 0x87e125c
         This block was not dumped directly from the data buffer; this shows the starting address of the block's data area at dump time,
         i.e. where the section below begins.
         
===============        ////  tsiz:  hsiz:  pbl:  bdba: none of these are stored in the datafile
tsiz: 0x1fa0           //// Total data area size
                     for an 8k block: 8192-20(block header)-24(transaction header)-24*2(two ITL entries)-4(block tail)=8096(0x1fa0)
hsiz: 0x14             //// Data header size: 14-byte data header + 4-byte table directory (ntab=1) + 2-byte row directory (nrow=1) = 20 bytes (0x14)
pbl: 0x087e125c        //// Pointer to buffer holding the block
bdba: 0x0140000a
     76543210

flag=--------
ntab=1
nrow=1
frre=-1
fsbo=0x14
fseo=0x1f9b
avsp=0x1f83
tosp=0x1f83
0xe:pti[0]  nrow=1  offs=0
0x12:pri[0] offs=0x1f9b
block_row_dump:
tab 0, row 0, @0x1f9b
tl: 5 fb: --H-FL-- lb: 0x1  cc: 1
col  0: [ 1]  61
end_of_block_dump
End dump data blocks tsn: 17 file#: 5 minblk 10 maxblk 10

When a block is corrupted, the following can also be reported:
    ORA-600 (4519) Cache layer block type is incorrect
    ORA-600 (4393) Check for Type for Segment header with free list
    ORA-600 (4136) Check Rollback segment block
    ORA-600 (4154) Check Rollback segment block

    Ora-600[kcbzpb_1],[d],[kind],[chk] gets signaled when the block got corrupted in memory.
    The only way it should be bad is if a stray store into memory destroyed the header or tail.
    d = blocknumber, kind= kind of corruption detected,chk = checksum flag

    ora-600[3398] and ora-600[3339]
    ora-600[3398] is not in oracle 8.
    ora-600[3398] means it failed a verification check before writing back to disk,  so it must
        be an in-memory corruption.
    ora-600[3339] comes with ora-1578 and means either disk corruption or in memory corruption after read.
    ora-600 [3339] has been removed from 7.2+
    From 7.2+  ora-600 [3398] has become ora-600 [3374] with some checks added.

Binary (raw hex) storage format:
               ALTER SESSION SET EVENTS '10289 trace name context forever, level 1';
               ALTER SESSION SET EVENTS '10289 trace name context off';

Know Oracle Date and Time Functions

Oracle9i provides extended date and time support across different time zones with the help of new datetime data types and functions. To understand the working of these data types and functions, it is necessary to be familiar with the concept of time zones.

This topic group introduces you to the concepts of time such as Coordinated Universal Time, time zones, and daylight saving time.

Objectives

After completing this topic group, you should be able to:

Calculate the date and time for any time zone region using time zone offsets.

Time Zones

The hours of the day are measured by the turning of the earth. The time of day at any particular moment depends on where you are.

The earth is divided into twenty four time zones, one for each hour of the day. The time along the prime meridian in Greenwich, England is known as Coordinated Universal Time, or UTC (formerly known as Greenwich Mean Time, or GMT ). UTC is the time standard against which all other time zones are referenced.

Note: The following topics discuss prime meridian and UTC in more detail.

Coordinated Universal Time


Since time began, the time flow on earth has been ruled by the apparent position of the sun in the sky.

In the past, when methods of transportation made even short travels last for several days, no one, except astronomers, understood that solar time at any given moment is different from place to place.

Around the 1800s with the development of faster modes of transportation and a need for accurate time references for sea navigation, Greenwich mean time (GMT), which later became known as Coordinated Universal Time (UTC), was introduced.

The earth's surface is divided into 24 adjacent, equal, and equatorially perpendicular zones, called time zones. Each time zone is delimited by 2 meridians. UTC is the time standard against which all other time zones in the world are referenced.

UTC is measured with astronomical techniques at the Greenwich astronomical observatory in England.

Daylight Saving Time

“Just as sunflowers turn their heads to catch every sunbeam, there is a simple way to get more from the sun.”

Purpose of Daylight Saving Time


The main purpose of daylight saving time (called Summer Time in many places around the world) is to make better use of daylight. By switching clocks an hour forward in summer, we can save a lot of energy and enjoy sunny summer evenings. Today approximately 70 countries use daylight saving time.

When Is Daylight Saving Time Observed Around the World?

Country              Begin Daylight Saving Time              Back to Standard Time
US; Mexico; Canada   2:00 a.m. on the first Sunday of April  2:00 a.m. on the last Sunday of October
European Union       1:00 a.m. on the last Sunday of March   2:00 a.m. on the last Sunday of October

Equatorial and tropical countries from the lower latitudes do not observe daylight saving time. Because the daylight hours are similar during every season, there is no advantage to moving clocks forward during the summer.

How Is This Information Relevant To Time Zones?

The world is divided into 24 time zones and UTC is the time standard against which all other time zones in the world are referenced. When daylight saving time comes into effect in certain countries, the time zone offset for that country is adjusted to accommodate the change in time.

For example: The standard time zone offset for Geneva, Switzerland is UTC +01:00 hour. But when daylight saving time comes into effect the time zone offset changes to UTC +02:00 hours. The time zone offset changes to UTC +01:00 hour again, on the last Sunday in October, when the daylight saving time comes to an end.

Summary

The key learning points in this topic group included:

Coordinated Universal Time:
UTC is the time standard against which all other time zones in the world are referenced.

UTC Conversion:
To convert UTC to local time, you add or subtract hours from it. For regions
west of the zero meridian to the international date line (which includes all of North
America), hours are subtracted from UTC to convert to local time.

Daylight Saving Time:
Daylight saving time is used to make better use of daylight hours by switching clocks an hour forward in summer.

All this information is necessary to understand how the Oracle9i server provides support for time zones in its multi-geography applications.

The next topic group “Database Time Zone Versus Session Time Zone” discusses the difference between Database Time Zone and Session Time Zone.

Database Time Zone Versus Session Time Zone


Database Time Zone
Database time zone refers to the time zone in which the database is located.

Session Time Zone
Session time zone refers to the user’s time zone, from where he or she has logged on to the database.

Global Corporation is a finance company with offices around the world. The company head office is located in Barcelona (time zone : +01 hours). The company database is located in New York (time zone : -05 hours). Miguel from Sydney (time zone : +10 hours) has established a connection to the database.

DBTIMEZONE

The DBTIMEZONE function returns the value of the database time zone. The default database time zone is the same as the operating system's time zone.

The return type is a time zone offset (a character type in the format '[+|-]TZH:TZM') or a time zone region name, depending on how the user specified the database time zone value in the most recent CREATE DATABASE or ALTER DATABASE statement.


You can set the database’s default time zone by specifying the SET TIME_ZONE clause of the CREATE DATABASE statement. If omitted, the default database time zone is the operating system time zone.


SESSIONTIMEZONE

The SESSIONTIMEZONE function returns the value of the session's time zone.

The return type is a time zone offset (a character type in the format '[+|-]TZH:TZM') or a time zone region name, depending on how the user specified the session time zone value in the most recent ALTER SESSION statement.
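Both functions can be checked with a simple query; the values returned depend on how the database and session were configured:

```sql
SELECT DBTIMEZONE, SESSIONTIMEZONE FROM DUAL;
```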

Altering the Session Time Zone

How can I change the session time zone?

The session time zone for a session can be changed with an ALTER SESSION command.

Syntax

ALTER SESSION
SET TIME_ZONE = '[+|-]hh:mm';

 

The key learning points in this topic group included:

Database Time Zone:
Database time zone refers to the time zone in which the database is located. You can use the DBTIMEZONE function to query the value of the database time zone.

Session Time Zone:
Session time zone refers to the time zone from which the user has logged on to the database. You can use the SESSIONTIMEZONE function to query the value of the session time zone.

TIMESTAMP

The TIMESTAMP data type is an extension of the DATE data type.

It stores the year, month, and day of the DATE data type; the hour, minute, and second values; as well as the fractional second value.

Format

TIMESTAMP [(fractional_seconds_precision)]

The fractional_seconds_precision is used to specify the number of digits in the fractional part of the SECOND datetime field and can be a number in the range 0 to 9. The default is 6.

Grand Prix Qualifying Run

The line-up position for the Formula 1 Grand Prix is determined by the results of the qualifying run. Because the difference between the finishing times of the various drivers is very close, the finishing time of each driver is measured in fractional seconds. To store this kind of information, you can use the new TIMESTAMP data type.
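A minimal sketch of this scenario (the table and column names are hypothetical):

```sql
-- Store qualifying times to millisecond precision
CREATE TABLE lap_times (
  driver VARCHAR2(30),
  finish TIMESTAMP(3)
);

INSERT INTO lap_times
VALUES ('Driver A', TIMESTAMP '2002-06-30 14:01:02.347');
```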

TIMESTAMP WITH TIME ZONE

TIMESTAMP WITH TIME ZONE is a variant of the TIMESTAMP data type that includes a time zone displacement in its value.

Format

TIMESTAMP[(fractional_seconds_precision)] WITH TIME ZONE

Earthquake Monitoring Station

Earthquake monitoring stations around the world record the details of tremors detected in their respective regions. The date and time of the occurrence of these tremors are stored, along with the time zone displacement, using the new TIMESTAMP WITH TIME ZONE data type. This helps people who analyze the information from locations around the world obtain an accurate perspective of the time when the event occurred.
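A sketch of such a table (names are hypothetical); the time zone displacement is kept as part of the stored value:

```sql
CREATE TABLE tremors (
  region   VARCHAR2(30),
  detected TIMESTAMP(2) WITH TIME ZONE
);

INSERT INTO tremors
VALUES ('Region X', TIMESTAMP '2001-05-21 08:15:30.25 -07:00');
```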

TIMESTAMP WITH LOCAL TIME ZONE

TIMESTAMP WITH LOCAL TIME ZONE is another variant of the TIMESTAMP data type. This data type also includes a time zone displacement.

Format

TIMESTAMP[(fractional_seconds_precision)] WITH LOCAL TIME ZONE

The TIMESTAMP WITH LOCAL TIME ZONE datatype differs from TIMESTAMP WITH TIME ZONE in that when you insert a value into a database column, the time zone displacement is used to convert the value to the database time zone.

Example

When a New York client inserts TIMESTAMP '1998-1-23 6:00:00 -5:00' into a TIMESTAMP WITH LOCAL TIME ZONE column in the San Francisco database, the inserted data is stored in San Francisco as the binary value 1998-1-23 3:00:00.

The time zone displacement is not stored in the database column. When you retrieve the value, Oracle returns it in your local session time zone.

When the New York client selects that inserted data from the San Francisco database, the value displayed in New York is '1998-1-23 6:00:00'. A San Francisco client, selecting the same data, gets the value '1998-1-23 3:00:00'.
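The behaviour can be sketched as follows (the table name is hypothetical); what each SELECT displays depends on that session's TIME_ZONE setting:

```sql
CREATE TABLE logins (logged TIMESTAMP WITH LOCAL TIME ZONE);

-- Inserted from the New York session (UTC-5); stored normalized to the database time zone
INSERT INTO logins VALUES (TIMESTAMP '1998-01-23 06:00:00 -05:00');

-- Each client sees the value converted to its own session time zone
SELECT logged FROM logins;
```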

New Year Celebration Broadcast

A television company is planning a live broadcast of New Year celebrations across the globe. To schedule a broadcast of the various events from across the globe, they use an application that stores the broadcast time using the TIMESTAMP WITH LOCAL TIME ZONE data type. Reporters located in different time zones can easily query to find out when to start and end their broadcasts, the output of which will be in their respective time zones.

TIMESTAMP:
With the new TIMESTAMP data type you can store the year, month, and day of the DATE data type; hour, minute, and second values; as well as the fractional second value.

TIMESTAMP WITH TIME ZONE:
The TIMESTAMP WITH TIME ZONE data type is a variant of the TIMESTAMP data type, that includes a time zone displacement in its value.

TIMESTAMP WITH LOCAL TIME ZONE:
The data stored in a column of type TIMESTAMP WITH LOCAL TIME ZONE is converted and normalized to the database time zone. Whenever a user queries the column data, Oracle returns the data in the user’s local session time zone.

TZ_OFFSET

Richard, a marketing executive, travels frequently to cities across the globe. He carries his laptop while travelling and updates the database located at the head office in San Francisco with information about his activities at the end of each day.

Since Richard is using a laptop for his work, he needs to update the session time zone every time he visits a new city.

Richard uses the TZ_OFFSET function to find the time zone offset for that city.

Syntax

SELECT TZ_OFFSET('Canada/Pacific') FROM DUAL;

Note: For a listing of valid time zone name values, you can query the V$TIMEZONE_NAMES dynamic performance view.

ALTER SESSION Command

After Richard finds the time zone offset for the city he is visiting, he alters his session time zone using the ALTER SESSION command.

ALTER SESSION
SET TIME_ZONE = '-08:00';

Richard then uses any of the following functions to view the current date and time in the session time zone.

CURRENT_DATE
CURRENT_TIMESTAMP
LOCALTIMESTAMP

Note: The following pages contain a detailed explanation of the functions listed above.

CURRENT_DATE

The CURRENT_DATE function returns the current date in the session's time zone. The return value is a date in the Gregorian calendar. (The ALTER SESSION command can be used to set the date format to 'DD-MON-YYYY HH24:MI:SS'.)

The CURRENT_DATE function is sensitive to the session time zone.

When Richard alters his session time zone to the time zone of the city that he is visiting, the output of the CURRENT_DATE function changes.

Example

Before the Session Time Zone is Altered

After the Session Time Zone is Altered

Observe in the output that the value of CURRENT_DATE changes when the TIME_ZONE parameter value is changed to -08:00.

Note: The SYSDATE remains the same irrespective of the change in the TIME_ZONE.
SYSDATE is not sensitive to the session’s time zone.
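The contrast can be reproduced with a few statements; the exact values returned depend on when and where this is run:

```sql
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';

SELECT SYSDATE, CURRENT_DATE FROM DUAL;

ALTER SESSION SET TIME_ZONE = '-08:00';

-- CURRENT_DATE now reflects -08:00; SYSDATE is unchanged
SELECT SYSDATE, CURRENT_DATE FROM DUAL;
```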

CURRENT_TIMESTAMP

The CURRENT_TIMESTAMP function returns the current date and time in the session time zone, as a value of the TIMESTAMP WITH TIME ZONE data type.

The time zone displacement reflects the local time zone of the SQL session.

Format

CURRENT_TIMESTAMP [(precision)]

Where precision is an optional argument that specifies the fractional second precision of the time value returned.

LOCALTIMESTAMP

 

The LOCALTIMESTAMP function returns the current date and time in the session time zone in a value of TIMESTAMP data type.

The difference between this function and the CURRENT_TIMESTAMP function is that LOCALTIMESTAMP returns a TIMESTAMP value, whereas CURRENT_TIMESTAMP returns a TIMESTAMP WITH TIME ZONE value.

Format

LOCALTIMESTAMP [(TIMESTAMP_precision)]

Where TIMESTAMP_precision is an optional argument that specifies the fractional second precision of the TIMESTAMP value returned.
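The difference between the two functions is easy to see side by side; CURRENT_TIMESTAMP carries the session's time zone displacement, while LOCALTIMESTAMP does not:

```sql
SELECT CURRENT_TIMESTAMP(3), LOCALTIMESTAMP(3) FROM DUAL;
```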

EXTRACT

So far you have learned how Richard can alter his session date and view the current date and time in the session time zone.

Now observe how Richard can query a specified datetime field from a datetime or interval value expression using the EXTRACT function.

Format

SELECT EXTRACT ({ YEAR | MONTH | DAY | HOUR | MINUTE | SECOND | TIMEZONE_HOUR | TIMEZONE_MINUTE | TIMEZONE_REGION | TIMEZONE_ABBR }
FROM { datetime_value_expression | interval_value_expression });

Using the EXTRACT function, Richard can extract any of the components mentioned in the preceding syntax.

Example

Richard can query the time zone displacement for the current session as follows:

SELECT EXTRACT(TIMEZONE_HOUR FROM CURRENT_TIMESTAMP) "Hour",                         
EXTRACT(TIMEZONE_MINUTE FROM CURRENT_TIMESTAMP) "Minute" FROM DUAL;

Datetime Functions: Conversion

Now examine some additional functions that help convert a CHAR value to a TIMESTAMP value, a TIMESTAMP value to a TIMESTAMP WITH TIME ZONE value, and so on.

The functions are:

TO_TIMESTAMP
TO_TIMESTAMP_TZ
FROM_TZ

TO_TIMESTAMP
The TO_TIMESTAMP function converts a string of CHAR, VARCHAR2, NCHAR, or NVARCHAR2 data type to a value of TIMESTAMP data type.

Format

TO_TIMESTAMP(char, [fmt], ['nlsparam'])

The optional fmt specifies the format of char. If you omit fmt, the string must be in the default format of the TIMESTAMP data type.

The optional nlsparam specifies the language in which month and day names and abbreviations are returned. If you omit nlsparam, this function uses the default date language for your session.

Example

SELECT TO_TIMESTAMP('2000-12-01 11:00:00',
'YYYY-MM-DD HH:MI:SS')
FROM DUAL;

TO_TIMESTAMP_TZ

The TO_TIMESTAMP_TZ function converts a string of CHAR, VARCHAR2, NCHAR, or NVARCHAR2 data type to a value of TIMESTAMP WITH TIME ZONE data type.

Format

TO_TIMESTAMP_TZ
(char, [fmt], ['nlsparam'])

The optional fmt specifies the format of char. If you omit fmt, the string must be in the default format of the TIMESTAMP data type.

The optional nlsparam specifies the language in which month and day names and abbreviations are returned. If you omit nlsparam, this function uses the default date language for your session.

Example

SELECT TO_TIMESTAMP_TZ('2000-12-01 11:00:00 -08:00',
'YYYY-MM-DD HH:MI:SS TZH:TZM')
FROM DUAL;

Note: The TO_TIMESTAMP_TZ function does not convert character strings to TIMESTAMP WITH LOCAL TIME ZONE.

FROM_TZ

The FROM_TZ function converts a timestamp value to a TIMESTAMP WITH TIME ZONE value.

Format

FROM_TZ(timestamp_value, time_zone_value)

time_zone_value can be a character string in the 'TZH:TZM' format, or a character expression that returns a string in TZR (time zone region) format with an optional TZD (daylight-saving-aware time zone abbreviation) format.

Example Using the Format TZH:TZM

SELECT FROM_TZ(TIMESTAMP '2000-12-01 11:00:00',
'-8:00') "FROM_TZ"
FROM DUAL;

Example Using TZR

SELECT FROM_TZ(TIMESTAMP '2000-12-01 11:00:00', 'AUSTRALIA/NORTH') "FROM_TZ"
FROM DUAL;

INTERVAL Data Type

The INTERVAL data type is used to represent the precise difference between two datetime values.

The two INTERVAL data types introduced in Oracle9i are:

INTERVAL YEAR TO MONTH
INTERVAL DAY TO SECOND

Usage of the INTERVAL Datatype

The INTERVAL data type can be used to set a reminder for a time in the future or check whether a certain period of time has elapsed since a particular date.
For example: You can use it to record the time between the start and end of a race.

INTERVAL YEAR TO MONTH

You can use the INTERVAL YEAR TO MONTH data type to store and manipulate intervals of years and months.

Format

INTERVAL YEAR[(precision)] TO MONTH

Where precision specifies the number of digits in the years field.

You cannot use a symbolic constant or variable to specify the precision; you must use an integer literal in the range 0-4. The default value is 2.

Automated Generation of Expiration Date

The packaging department of Home Food Products Ltd has decided to automate the generation of the expiration date details of its products.
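A sketch of how such expiration dates could be derived with INTERVAL YEAR TO MONTH (the table and column names are hypothetical):

```sql
-- Shelf life of 1 year 6 months
CREATE TABLE products (
  name       VARCHAR2(30),
  made_on    DATE,
  shelf_life INTERVAL YEAR(2) TO MONTH
);

INSERT INTO products
VALUES ('Item A', DATE '2002-01-15', INTERVAL '1-6' YEAR TO MONTH);

SELECT name, made_on + shelf_life AS expires_on FROM products;
```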

INTERVAL DAY TO SECOND

INTERVAL DAY TO SECOND stores a period of time in terms of days, hours, minutes, and seconds.

Format

INTERVAL DAY[(day_precision)]
TO SECOND[(fractional_seconds_precision)]

Where day_precision is the number of digits in the DAY datetime field. Accepted values are 0 to 9. The default is 2.

Fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values are 0 to 9. The default is 6.

Automated Generation of the Arrival Time

The Railway Enquiry department wants to automate the generation of the arrival time for all of its trains.
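A sketch with INTERVAL DAY TO SECOND (names are hypothetical): the arrival time is simply the departure time plus the journey duration:

```sql
CREATE TABLE journeys (
  train    VARCHAR2(20),
  departs  TIMESTAMP,
  duration INTERVAL DAY(1) TO SECOND(0)
);

INSERT INTO journeys
VALUES ('Express 1', TIMESTAMP '2002-03-01 08:30:00', INTERVAL '0 05:45:00' DAY TO SECOND);

SELECT train, departs + duration AS arrives FROM journeys;
```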

You have just learned about the new INTERVAL data types introduced with the Oracle9i server.

INTERVAL YEAR TO MONTH:

The data type INTERVAL YEAR TO MONTH is used to store and manipulate intervals of years and months.

TO_YMINTERVAL function:

The TO_YMINTERVAL function converts a character string of CHAR, VARCHAR2, NCHAR, or NVARCHAR2 data type to an INTERVAL YEAR TO MONTH type, where CHAR is the character string to be converted.

INTERVAL DAY TO SECOND:

The INTERVAL DAY TO SECOND data type stores a period of time in terms of days, hours, minutes, and seconds.

TO_DSINTERVAL function:

The TO_DSINTERVAL function converts a character string of CHAR, VARCHAR2, NCHAR, or NVARCHAR2 data type to an INTERVAL DAY TO SECOND data type.
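Both conversion functions can be tried directly:

```sql
SELECT TO_YMINTERVAL('01-06') FROM DUAL;      -- 1 year, 6 months
SELECT TO_DSINTERVAL('2 10:30:00') FROM DUAL; -- 2 days, 10 hours, 30 minutes
```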

 
