[Repost] TSM 5.3 Installation Guide

2. Installation Steps

2.1 Install the TSM server

Create the /usr/tivoli/tsm file system, about 1 GB in size.

Insert the installation media and run smit installp.

Install the following filesets:

tivoli.tsm.server.aix5.rte64

tivoli.tsm.server.com

tivoli.tsm.server.webcon

 

2.2 Install the TSM backup-archive client

Insert the AIX client installation media and run smit installp.

Install the following filesets:

tivoli.tsm.client.api.64bit

tivoli.tsm.client.api.jfs2

tivoli.tsm.client.ba.jfs2.base

tivoli.tsm.client.ba.jfs2.common

tivoli.tsm.client.ba.jfs2.image

tivoli.tsm.client.ba.jfs2.nas

tivoli.tsm.client.ba.jfs2.web

 

2.3 Install Oracle TDP

Insert the Tivoli Data Protection for Oracle installation media and run smit installp.

Install the following filesets:

 

tivoli.tsm.client.oracle.aix51.64bit

tivoli.tsm.client.oracle.tools.aix51.64bit

2.4 Install the Storage Agent (SAN agent)

Insert the Storage Agent installation media and run smit installp.

Install the following filesets:

tivoli.tsm.StorageAgent.rte

tivoli.tsm.devices.aix5.rte

3. Configuration Steps

3.1 Configure the TSM server

1. Set environment variables

Add the following to the .profile on the TSM server:
export DSMSERV_DIR=/usr/tivoli/tsm/server/bin
export DSMSERV_CONFIG=/usr/tivoli/tsm/server/bin/dsmserv.opt
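These two lines can be exercised in an ordinary shell. A minimal sketch, with a file under /tmp standing in for the real .profile:

```shell
# Sketch: the two server variables, written to a profile file and verified
# by sourcing it. /tmp/tsm_profile_demo stands in for the real .profile.
profile=/tmp/tsm_profile_demo

cat > "$profile" <<'EOF'
export DSMSERV_DIR=/usr/tivoli/tsm/server/bin
export DSMSERV_CONFIG=/usr/tivoli/tsm/server/bin/dsmserv.opt
EOF

. "$profile"
echo "$DSMSERV_DIR"      # prints /usr/tivoli/tsm/server/bin
echo "$DSMSERV_CONFIG"   # prints /usr/tivoli/tsm/server/bin/dsmserv.opt
```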
2. Register the license: register lic file=* number=20

Audit the license: audit lic

3. Define the library and its path

define library lib3582 libtype=SCSI SERial=AUTODetect  AUTOLabel=no  SHARED=yes RESETDrives=Yes

define path  tsm  lib3582 srctype=server  desttype=library device=/dev/smc0 online=yes

4. Define the drives and drive paths

define drive lib3582 drv0 online=yes element=256

define drive lib3582 drv1 online=yes element=257

define path  tsm drv0 srctype=server desttype=drive library=lib3582 device=/dev/rmt0 online=yes

define path  tsm drv1 srctype=server desttype=drive library=lib3582 device=/dev/rmt1 online=yes

5. Define the device class

define devclass lto2dev library=lib3582 devtype=lto

6. Label and check in the library volumes

label libvolume lib3582 search=yes checkin=scratch LABELSource=Barcode OVERWRITE=yes VOLRange=zy9580,zy9599

7. Define and extend the TSM database volume

define dbvolume /usr/tivoli/tsm/server/db1.dsm  f=500

extend db 500

8. Define and extend the TSM log volume

define logvolume /usr/tivoli/tsm/server/log1.dsm  f=256

extend log 500

9. Define the storage pools

define stgpool P5_FS_POOL lto2dev maxscratch=0

define volume P5_FS_POOL zy9582

define volume P5_FS_POOL zy9583

define stgpool P5_ORA_POOL lto2dev maxscratch=0

define volume P5_ORA_POOL zy9595

define volume P5_ORA_POOL zy9596

define volume P5_ORA_POOL zy9597

define volume P5_ORA_POOL zy9598

define volume P5_ORA_POOL zy9599

10. Define the policy domains

 

define domain P5_FS_DOMAIN

define policyset P5_FS_DOMAIN P5_FS_SET

define mgmtclass  P5_FS_DOMAIN P5_FS_SET P5_FS_CLASS

define copygroup  P5_FS_DOMAIN P5_FS_SET P5_FS_CLASS standard type=backup destination=P5_FS_POOL

assign defmgmtclass P5_FS_DOMAIN  P5_FS_SET P5_FS_CLASS

activate policyset P5_FS_DOMAIN P5_FS_SET

 

define domain P5_ORA_DOMAIN

define policyset P5_ORA_DOMAIN P5_ORA_SET

define mgmtclass  P5_ORA_DOMAIN P5_ORA_SET P5_ORA_CLASS

define copygroup  P5_ORA_DOMAIN P5_ORA_SET P5_ORA_CLASS standard type=backup destination=P5_ORA_POOL VERExists=1 VERDeleted=0 RETExtra=0 RETOnly=0

assign defmgmtclass P5_ORA_DOMAIN  P5_ORA_SET P5_ORA_CLASS

activate policyset P5_ORA_DOMAIN P5_ORA_SET

11. Register the client nodes

register node P570_FS p570_fs domain=P5_FS_DOMAIN backdelete=yes archdelete=yes passexp=0

register node P550_FS p550_fs domain=P5_FS_DOMAIN backdelete=yes archdelete=yes passexp=0

register node P570_TDPO p570_tdpo domain=P5_ORA_DOMAIN backdelete=yes archdelete=yes passexp=0

register node P550_TDPO p550_tdpo domain=P5_ORA_DOMAIN backdelete=yes archdelete=yes passexp=0

12. Define administrative schedules

 

DEFine SCHedule bkupdb Type=Administrative CMD='backup db type=full devclass=lto2dev volumenames=zy9580 scratch=no' ACTIVE=yes STARTTime=12:00:00

DEFine SCHedule delvolhist Type=Administrative CMD='DELete VOLHistory TODate=today-1 type=dbbackup' ACTIVE=Yes STARTTime=11:50:00

 

13. Start the TSM server

Change to /usr/tivoli/tsm/server/bin and run:

nohup dsmserv &

 

3.2 Configure the TSM client

1. Set environment variables

Add the following to the .profile on the TSM client:
export DSM_DIR=/opt/tivoli/tsm/client/ba/bin
export DSM_CONFIG=/opt/tivoli/tsm/client/ba/bin/dsm.opt
export DSM_LOG=/opt/tivoli/tsm/client/ba/bin

export DSMI_CONFIG=/opt/tivoli/tsm/client/api/bin64/dsm.opt

export DSMI_DIR=/opt/tivoli/tsm/client/api/bin64

export DSMI_LOG=/opt/tivoli/tsm/client/api/bin64

 

 

2. Edit the dsm.opt and dsm.sys files under /usr/tivoli/tsm/client/ba/bin as follows:

dsm.opt
(contents shown as a screenshot in the original; not preserved)

dsm.sys
(contents shown as a screenshot in the original; not preserved)

 
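The original post showed these file contents only as screenshots, which were lost. As a rough sketch only, a minimal dsm.opt/dsm.sys pair for this environment might look like the following; the server name, TCP address, and node name are assumptions pieced together from the commands elsewhere in this document:

```
* dsm.opt (hypothetical sketch)
SERVERNAME  TSM

* dsm.sys (hypothetical sketch)
SERVERNAME          TSM
   COMMMETHOD       TCPIP
   TCPPORT          1500
   TCPSERVERADDRESS 172.16.1.21
   NODENAME         P570_FS
   PASSWORDACCESS   GENERATE
```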

3. Run dsmc to start the command-line client and verify that it connects to the TSM server.

3.3 Configure Oracle TDP

1. Edit tdpo.opt as follows:
(contents shown as a screenshot in the original; not preserved)

2. Edit the dsm.opt and dsm.sys files under /usr/tivoli/tsm/client/api/bin64 as follows:

dsm.opt
(contents shown as a screenshot in the original; not preserved)

dsm.sys
(contents shown as a screenshot in the original; not preserved)

 

3. Generate the password file

tdpoconf password -tdpo_optfile=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt

4. Link libobk.a

ln -s /usr/lib/libobk.a $ORACLE_HOME/lib64/libobk.a

Restart the database after creating the link.

5. Use sbttest to verify that TDP and Oracle are linked correctly

$ORACLE_HOME/bin/sbttest test

6. Set up the catalog database

6.1 Create a dedicated tablespace, catalog_tbs, for the recovery catalog.

6.2 Create the recovery catalog owner

CREATE USER rman IDENTIFIED BY rman DEFAULT TABLESPACE catalog_tbs;

6.3 Grant roles to the rman user

GRANT connect,resource,recovery_catalog_owner TO rman;

6.4 Connect to the recovery catalog with RMAN

rman catalog=rman/rman@rman9i

6.5 Create the catalog at the RMAN prompt

create catalog tablespace catalog_tbs;

6.6 Connect to both the target database and the recovery catalog with RMAN

export ORACLE_SID=nc31

rman target / catalog=rman/rman@rman9i

6.7 Register the target database in the recovery catalog

RMAN> register database;

 

7. If sbttest fails, perform these steps

7.1 Add "errorlogname /oracle/mytdpo.log" to the dsm.sys file under DSMI_DIR.

7.2 Rerun the tdpoconf password command to regenerate the password file.

7.3 Change to the /oracle directory and confirm that mytdpo.log was created.

7.4 Fix the permissions and ownership of mytdpo.log:

chmod 666 /oracle/mytdpo.log

chown oracle:dba /oracle/mytdpo.log
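Step 7.4 can be sketched as follows; /tmp stands in for /oracle, and the chown is shown commented out because the oracle user and dba group exist only on the database host:

```shell
# Sketch of step 7.4: create the TDP error log and open its permissions.
# /tmp stands in for /oracle; run the chown as root on the real host.
log=/tmp/mytdpo_demo.log

touch "$log"
chmod 666 "$log"
# chown oracle:dba "$log"    # on the real database host only

find "$log" -perm 666        # prints the path only if the mode is exactly 666
```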

3.4 Configure the Storage Agent (SAN agent)

1. Edit /usr/tivoli/tsm/StorageAgent/bin/dsmsta.opt and add the line:

  DEVCONFIG   devconfig.out

2. Run the following command:

  dsmsta setstorageserver myname=storagnt mypassword=storagnt myhladdress=172.16.1.12 servername=tsm serverpassword=tsm hladdress=172.16.1.21 lladdress=1500

  This command writes server definition lines to devconfig.out (shown as a screenshot in the original; not preserved),

  and adds the following line to dsmsta.opt:

  SERVERNAME    TSM

3. On the TSM server, issue the following commands:

set servername TSM

set serverpassword tsm

set serverhladdress 172.16.1.21

set serverlladdress 1500

set crossdefine on

Afterwards, use q status to check the result.

4. On the TSM server, define a server entry for the storage agent

define server storagnt serverpassword=storagnt hladdress=172.16.1.12 lladdress=1500 validateprotocol=all

 

5. On the TSM server, define drive paths with storagnt as the source server

define path storagnt drv0 srctype=server desttype=drive library=lib3582 device=/dev/rmt0

define path storagnt drv1 srctype=server desttype=drive library=lib3582 device=/dev/rmt1

 

6. To enable LAN-free data movement, add the following to the client's dsm.sys file:

   LANFREECOMMMETHOD TCPIP

   LANFREECOMMMETHOD SHAREDMEM

   LANFREETCPPORT 1500

   LANFREESHMPORT 1510

   enablelanfree      yes

 

7. Start the storage agent

nohup dsmsta &

4. TSM Policy Domain and Storage Pool Configuration

4.1 Policy domain summary

Policy Domain Name   Policy Set   Management Class   Default (Y/N)   Backup Copy Group   Archive Copy Group   Storage Pool
P5_FS_DOMAIN         P5_FS_SET    P5_FS_CLASS        Y               standard            standard             P5_FS_POOL
P5_ORA_DOMAIN        P5_ORA_SET   P5_ORA_CLASS       Y               standard            standard             P5_ORA_POOL

 

4.2 Policy domain details

Policy domain: P5_FS_DOMAIN
Backup copy group
Type           BACKUP      The copy group type is backup.
DESTINATION    P5_FS_POOL  The destination storage pool (see the storage pool table).
FREQUENCY      0           A file may be backed up again immediately after its last backup.
VEREXISTS      2           Keep at most 2 backup versions of a file that still exists on the client file system.
VERDELETED     1           Keep at most 1 backup version of a file that no longer exists on the client file system.
RETEXTRA       30          Delete expired backup versions after 30 days.
RETONLY        60          Keep the last version of a file deleted from the client for 60 days.
MODE           Absolute    Back up files regardless of whether they changed since the last backup.
SERIALIZATION  Static      Back up only files that are not being modified, and try only once.

 

Policy domain: P5_ORA_DOMAIN
Backup copy group
Type           BACKUP       The copy group type is backup.
DESTINATION    P5_ORA_POOL  The destination storage pool (see the storage pool table).
FREQUENCY      0            A file may be backed up again immediately after its last backup.
VEREXISTS      1            Keep at most 1 backup version of a file that still exists on the client file system.
VERDELETED     0            Keep no backup versions of a file that no longer exists on the client file system.
RETEXTRA       0            Delete expired backup versions immediately.
RETONLY        0            Do not retain the last version of a file deleted from the client.
MODE           Absolute     Back up files regardless of whether they changed since the last backup.
SERIALIZATION  Static       Back up only files that are not being modified, and try only once.

4.3 Node configuration

Node name   Password    Policy domain   Note
P550_FS     p550_fs     P5_FS_DOMAIN    p550 file system backup
P550_TDPO   p550_tdpo   P5_ORA_DOMAIN   p550 Oracle backup
P570_FS     p570_fs     P5_FS_DOMAIN    p570 file system backup
P570_TDPO   p570_tdpo   P5_ORA_DOMAIN   p570 Oracle backup

 

4.4 Storage pool configuration

Pool name     Volumes
P5_FS_POOL    ZY9582, ZY9583
P5_ORA_POOL   ZY9595, ZY9596, ZY9597, ZY9598, ZY9599

 

 

5. Oracle Backup Strategy, Schedules, and Backup Scripts

5.1 Backup strategy and schedule

Level 0 incremental backup every Sunday at 1:00 AM; level 1 incremental every Thursday at 1:00 AM; level 2 incremental every Monday, Tuesday, Wednesday, Friday, and Saturday at 1:00 AM. Archived logs are backed up every 4 hours.

Archived logs older than 15 days are deleted daily at 5:00 AM; database backups older than 15 days are deleted daily at 6:00 AM.

 

View the oracle user's schedule with crontab -l:

0 1 * * 0 /oracle/rmanscript/incr0.sh &

0 1 * * 4 /oracle/rmanscript/incr1.sh &

0 1 * * 1,2,3,5,6 /oracle/rmanscript/incr2.sh &

0 5 * * * /oracle/rmanscript/dele_arch.sh &

0 6 * * * /oracle/rmanscript/dele_tape.sh &

0 4,8,12,16,20 * * * /oracle/rmanscript/arch.sh &
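As a sanity check, the cron fields (minute, hour, day of month, month, day of week) of the level 0 entry can be pulled apart in the shell; the entries are copied from the listing above:

```shell
# Sanity-check sketch: cron fields are minute, hour, day-of-month, month,
# day-of-week, command. Pull apart the level 0 backup entry.
crontab_lines='0 1 * * 0 /oracle/rmanscript/incr0.sh &
0 1 * * 4 /oracle/rmanscript/incr1.sh &
0 1 * * 1,2,3,5,6 /oracle/rmanscript/incr2.sh &'

set -f                                # keep the * fields from globbing
set -- $(printf '%s\n' "$crontab_lines" | grep incr0)
echo "hour=$2 dow=$5"                 # prints hour=1 dow=0 (1 AM, Sunday)
```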

5.2 Backup scripts

5.2.1 incr0.sh - level 0 incremental backup

. /oracle/.profile

target='target /'
rcvcat='catalog rman/rman@rman9i'

# Get the current time for constructing a fairly unique filename:

time=`date '+%m%d%H'`

# Construct filenames using $time for uniqueness:

cmdfile=/oracle/rmanscript/log/incr0_$time.rcv
msglog=/oracle/rmanscript/log/incr0_$time.log

cat <<EOF >$cmdfile
run {
    allocate channel sbt1 type 'SBT_TAPE';
    resync catalog;

    sql 'alter system archive log current';

    backup
      filesperset 10
      format 'arch_%s_%p'
      (archivelog like '/oradata/archivelog/%' delete input channel sbt1);

    backup
      format 'cf_%s_%p'
      (current controlfile);

    backup
      incremental level 0
      filesperset 10
      tag 'incr_level_0'
      format 'incr0_%u_%p_%c'
      database;

    sql 'alter system archive log current';

    backup
      filesperset 10
      format 'arch_%s_%p'
      (archivelog like '/oradata/archivelog/%' delete input channel sbt1);

    release channel sbt1;
}
EOF

rman $target $rcvcat cmdfile $cmdfile msglog $msglog

exit
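The mechanism all of these scripts share - build a timestamped RMAN command file with a heredoc, then hand it to rman - can be sketched in isolation (rman itself is not invoked here, and /tmp stands in for /oracle/rmanscript/log):

```shell
# Sketch of the shared mechanism: build a timestamped RMAN command file with
# a heredoc so each run gets its own .rcv file. rman is not invoked here.
time=$(date '+%m%d%H')
cmdfile=/tmp/incr0_demo_$time.rcv

cat <<EOF >"$cmdfile"
run {
    allocate channel sbt1 type 'SBT_TAPE';
    resync catalog;
    release channel sbt1;
}
EOF

grep -c sbt1 "$cmdfile"   # prints 2: the allocate and release lines
```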

5.2.2 incr1.sh - level 1 incremental backup

. /oracle/.profile

target='target /'
rcvcat='catalog rman/rman@rman9i'

# Get the current time for constructing a fairly unique filename:

time=`date '+%m%d%H'`

# Construct filenames using $time for uniqueness:

cmdfile=/oracle/rmanscript/log/incr1_$time.rcv
msglog=/oracle/rmanscript/log/incr1_$time.log

cat <<EOF >$cmdfile
run {
    allocate channel sbt1 type 'SBT_TAPE';
    resync catalog;

    sql 'alter system archive log current';

    backup
      filesperset 10
      format 'arch_%s_%p'
      (archivelog like '/oradata/archivelog/%' delete input channel sbt1);

    backup
      format 'cf_%s_%p'
      (current controlfile);

    backup
      incremental level 1
      filesperset 10
      tag 'incr_level_1'
      format 'incr1_%u_%p_%c'
      database;

    sql 'alter system archive log current';

    backup
      filesperset 10
      format 'arch_%s_%p'
      (archivelog like '/oradata/archivelog/%' delete input channel sbt1);

    release channel sbt1;
}
EOF

rman $target $rcvcat cmdfile $cmdfile msglog $msglog

exit

 

5.2.3 incr2.sh - level 2 incremental backup

. /oracle/.profile

target='target /'
rcvcat='catalog rman/rman@rman9i'

# Get the current time for constructing a fairly unique filename:

time=`date '+%m%d%H'`

# Construct filenames using $time for uniqueness:

cmdfile=/oracle/rmanscript/log/incr2_$time.rcv
msglog=/oracle/rmanscript/log/incr2_$time.log

cat <<EOF >$cmdfile
run {
    allocate channel sbt1 type 'SBT_TAPE';
    resync catalog;

    sql 'alter system archive log current';

    backup
      filesperset 10
      format 'arch_%s_%p'
      (archivelog like '/oradata/archivelog/%' delete input channel sbt1);

    backup
      format 'cf_%s_%p'
      (current controlfile);

    backup
      incremental level 2
      filesperset 10
      tag 'incr_level_2'
      format 'incr2_%u_%p_%c'
      database;

    sql 'alter system archive log current';

    backup
      filesperset 10
      format 'arch_%s_%p'
      (archivelog like '/oradata/archivelog/%' delete input channel sbt1);

    release channel sbt1;
}
EOF

rman $target $rcvcat cmdfile $cmdfile msglog $msglog

exit

5.2.4 arch.sh - archived log backup

. /oracle/.profile

target='target /'
rcvcat='catalog rman/rman@rman9i'

# Get the current time for constructing a fairly unique filename:

time=`date '+%m%d%H'`

# Construct filenames using $time for uniqueness:

cmdfile=/oracle/rmanscript/log/arch_$time.rcv
msglog=/oracle/rmanscript/log/arch_$time.log

cat <<EOF >$cmdfile
run {
    allocate channel sbt1 type 'SBT_TAPE';

    resync catalog;

    sql 'alter system archive log current';

    backup
      filesperset 10
      format 'arch_%s_%p'
      (archivelog like '/oradata/archivelog/%' delete input channel sbt1);

    release channel sbt1;
}
EOF

rman $target $rcvcat cmdfile $cmdfile msglog $msglog

exit

5.2.5 dele_arch.sh - periodically delete archived log backups

# Initialize default connect string variables:

. /oracle/.profile
target='target /'
rcvcat='catalog rman/rman@rman9i'

time=`date '+%m%d%H'`

# Initialize filenames:

obsolete_tape=/oracle/rmanscript/log/arch_tape_$time.log   # list of obsolete tape backups
cmdfile=/oracle/rmanscript/log/delete_arch$time.cmd        # the generated RMAN command script
delete_log=/oracle/rmanscript/log/deleted_arch_$time.log   # execution log from generated script

# Get a list of obsolete tape files:

rman msgno $target $rcvcat msglog $obsolete_tape << EOF > /dev/null
list backup of archivelog until time 'sysdate-15';
exit;
EOF

# sed action: turn each AVAILABLE backup piece line into a delete command

del_bpiece="-e /AVAILABLE/ s/\(.* \)\(.*\)$/change backuppiece '\2' delete;/"

echo "allocate channel for delete type 'sbt_tape';" > $cmdfile

grep AVAILABLE $obsolete_tape | sed "$del_bpiece" >> $cmdfile

echo "release channel;" >> $cmdfile

#
# Execute the RMAN command file to delete the obsolete files:
#
rman msgno $target $rcvcat msglog $delete_log cmdfile $cmdfile

exit
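The sed transform at the heart of this script can be tried on its own; the sample line below is a stand-in for one AVAILABLE line of real `list backup` output, whose last space-separated token is the backup piece name:

```shell
# The sed transform from dele_arch.sh in isolation: an AVAILABLE line of
# `list backup` output becomes a `change backuppiece ... delete;` command.
del_bpiece="-e /AVAILABLE/ s/\(.* \)\(.*\)$/change backuppiece '\2' delete;/"

sample='  12  B  A  AVAILABLE  SBT_TAPE  arch_45_1'
out=$(printf '%s\n' "$sample" | sed "$del_bpiece")
echo "$out"   # prints: change backuppiece 'arch_45_1' delete;
```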

      

5.2.6 dele_tape.sh - periodically delete database backups

# Initialize default connect string variables:

. /oracle/.profile
target='target /'
rcvcat='catalog rman/rman@rman9i'

time=`date '+%m%d%H'`

# Initialize filenames:

obsolete_tape=/oracle/rmanscript/log/ob_tape_$time.log   # list of obsolete tape backups
cmdfile=/oracle/rmanscript/log/delete$time.cmd           # the generated RMAN command script
delete_log=/oracle/rmanscript/log/deleted_$time.log      # execution log from generated script

# Get a list of obsolete tape files:

rman msgno $target $rcvcat msglog $obsolete_tape << EOF > /dev/null
report obsolete redundancy=1 until time 'sysdate-15' device type 'sbt_tape';
#report obsolete device type 'sbt_tape';
exit;
EOF

# sed action: turn each RMAN-06285 backup piece line into a delete command

del_bpiece="-e /RMAN-06285/ s/\(.* \)\(.*\)$/change backuppiece '\2' delete;/"

echo "allocate channel for delete type 'sbt_tape';" > $cmdfile

grep RMAN-06285 $obsolete_tape | sed "$del_bpiece" >> $cmdfile

echo "release channel;" >> $cmdfile

#
# Execute the RMAN command file to delete the obsolete files:
#
rman msgno $target $rcvcat msglog $delete_log cmdfile $cmdfile

exit

6. FAQ

1. How to start and stop the TSM server

Method 1:

Start: change to /usr/tivoli/tsm/server/bin

and run dsmserv

Stop: at the dsmserv TSM:TSM> prompt,

enter halt

Method 2:

Start: change to /usr/tivoli/tsm/server/bin

and run nohup dsmserv &

Stop: find the dsmserv process ID with ps -ef | grep dsm,

then kill that process ID

Use tail -f nohup.out to view the log.

When the machine reboots, the TSM server is started automatically

by its entry in

/etc/inittab.

 

2. How to start the administrative command-line interface

At the system prompt, enter:

dsmadmc

When prompted for the user id, enter admin;

when prompted for the password, enter admin.

This opens the TSM administrative command line.

To change the admin password, enter:

update admin admin newpassword

 

3. How to start the web administration interface

Open Internet Explorer and go to http://172.16.1.21:8041/ibm/console

User name: iscadmin

Password: password

If the page does not load,

change to /opt/IBM/ISC/PortalServer/bin and run startISC.sh ISC_Portal

 

4. How to back up file systems

At the system prompt, enter commands such as:

Back up /usr and its subdirectories:  dsmc sel /usr/ -subdir=yes

Back up the single file /etc/hosts:   dsmc sel /etc/hosts

 

 

  

5. How to inspect backups

As the oracle user, connect to RMAN:

rman target / catalog rman/rman@rman9i

5.1 Summarize the available backups

RMAN> list backup summary;

In the output, B means backup,

F means full,

A means archive log,

0 and 1 mean incremental backup level 0 or 1.

 

5.2 List backups by file type

RMAN> list backup by file;

Lists backups grouped as data file backups, archived log backups, control file backups, and server parameter file backups.

5.3 List backups in detail

RMAN> list backup;

5.4 List tablespace and data file backups

list backup of tablespace and list backup of datafile produce output similar to list backup, e.g.:

RMAN> list backup of tablespace user_tbs;

RMAN> list backup of datafile 3;

5.5 List archived log backups

RMAN> list archivelog all;            (summary)

RMAN> list backup of archivelog all;  (details)

5.6 List control file and server parameter file backups

RMAN> list backup of controlfile;

RMAN> list backup of spfile;

 

6. How to recover from a database failure

If OEM is installed and configured, its recovery wizard can be used.

The manual steps are:

1 startup mount

2 restore database

3 recover database

4 alter database open resetlogs
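The four steps above can be written into an RMAN command file the same way restore.sh does it; this sketch only generates the file (rman is not invoked, and /tmp stands in for the script directory):

```shell
# Sketch: write the four recovery steps into an RMAN command file, as
# restore.sh does. rman is not invoked; /tmp stands in for the real path.
cmdfile=/tmp/restore_demo.rcv

cat <<EOF >"$cmdfile"
startup mount;
run {
    allocate channel sbt1 type 'sbt_tape';
    restore database;
    recover database;
    release channel sbt1;
}
alter database open resetlogs;
EOF

grep -c ';' "$cmdfile"   # prints 6: each RMAN command ends with a semicolon
```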

 

7. How to check the RMAN backup logs and verify that a backup succeeded

The RMAN backup logs are kept under /oracle/rmanscript/log,

named by backup level and time:

incr0_110915.log is the level 0 incremental backup taken at 15:00 on November 9th.

Check the log:

if it ends with

Recovery Manager complete.

the backup at that level succeeded.

If the log contains the following:

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

there was an error that must be corrected.
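The check described above can be scripted; the sample log below is fabricated to show the success case:

```shell
# Sketch of the log check: success means the last line is exactly
# "Recovery Manager complete."; an RMAN-00569 line means an error stack.
log=/tmp/incr0_demo.log

cat <<'EOF' >"$log"
connected to target database
Starting backup at 09-NOV-05
Finished backup at 09-NOV-05

Recovery Manager complete.
EOF

if grep -q 'RMAN-00569' "$log"; then
    status=FAILED
elif [ "$(tail -1 "$log")" = "Recovery Manager complete." ]; then
    status=OK
else
    status=UNKNOWN
fi
echo "$status"   # prints OK
```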

 

7. Recovery

When the database fails and must be recovered, run the restore.sh script:

su - oracle

cd /oracle/rmanscript

./restore.sh

When the restore finishes, check the restore log to see whether it succeeded.

The logs are under /oracle/rmanscript/log.

Sample log:

Recovery Manager: Release 9.2.0.6.0 - 64bit Production

 

Copyright (c) 1995, 2002, Oracle Corporation.  All rights reserved.

 

connected to target database (not started)

connected to recovery catalog database

 

RMAN> startup nomount;

2> run

3> {

4> allocate channel sbt1 type 'sbt_tape';

5> restore controlfile;

6> alter database mount;

7> restore database;

8> recover database;

9> alter database open resetlogs;

10> release channel sbt1;

11> }

12>

Oracle instance started

 

Total System Global Area     320308816 bytes

 

Fixed Size                      742992 bytes

Variable Size                285212672 bytes

Database Buffers              33554432 bytes

Redo Buffers                    798720 bytes

 

allocated channel: sbt1

channel sbt1: sid=13 devtype=SBT_TAPE

channel sbt1: Tivoli Data Protection for Oracle: version 5.2.0.0

 

Starting restore at 10-NOV-05

 

channel sbt1: starting datafile backupset restore

channel sbt1: restoring controlfile

output filename=/oradata/oratest/control01.ctl

channel sbt1: restored backup piece 1

piece handle=oratestfull_03h3dd2p_1_1 tag=ORATEST_FULL params=NULL

channel sbt1: restore complete

replicating controlfile

input filename=/oradata/oratest/control01.ctl

output filename=/oradata/oratest/control02.ctl

output filename=/oradata/oratest/control03.ctl

Finished restore at 10-NOV-05

 

database mounted

 

Starting restore at 10-NOV-05

 

channel sbt1: starting datafile backupset restore

channel sbt1: specifying datafile(s) to restore from backup set

restoring datafile 00001 to /oradata/oratest/system01.dbf

restoring datafile 00003 to /oradata/oratest/cwmlite01.dbf

restoring datafile 00004 to /oradata/oratest/drsys01.dbf

restoring datafile 00008 to /oradata/oratest/tools01.dbf

restoring datafile 00011 to /oradata/oratest/NNC_DATA01.dbf

restoring datafile 00013 to /oradata/oratest/NNC_DATA03.dbf

restoring datafile 00015 to /oradata/oratest/NNC_INDEX02.dbf

channel sbt1: restored backup piece 1

piece handle=oratestfull_03h3dd2p_1_1 tag=ORATEST_FULL params=NULL

channel sbt1: restore complete

channel sbt1: starting datafile backupset restore

channel sbt1: specifying datafile(s) to restore from backup set

restoring datafile 00002 to /oradata/oratest/undotbs01.dbf

restoring datafile 00005 to /oradata/oratest/example01.dbf

restoring datafile 00006 to /oradata/oratest/indx01.dbf

restoring datafile 00007 to /oradata/oratest/odm01.dbf

restoring datafile 00009 to /oradata/oratest/users01.dbf

restoring datafile 00010 to /oradata/oratest/xdb01.dbf

restoring datafile 00012 to /oradata/oratest/NNC_DATA02.dbf

restoring datafile 00014 to /oradata/oratest/NNC_INDEX01.dbf

restoring datafile 00016 to /oradata/oratest/NNC_INDEX03.dbf

channel sbt1: restored backup piece 1

piece handle=oratestfull_04h3dd3s_1_1 tag=ORATEST_FULL params=NULL

channel sbt1: restore complete

Finished restore at 10-NOV-05

 

Starting recover at 10-NOV-05

 

starting media recovery

 

archive log thread 1 sequence 4 is already on disk as file /oradata/oratest/redo03.log

archive log thread 1 sequence 5 is already on disk as file /oradata/oratest/redo01.log

archive log filename=/oradata/oratest/redo03.log thread=1 sequence=4

archive log filename=/oradata/oratest/redo01.log thread=1 sequence=5

media recovery complete

Finished recover at 10-NOV-05

 

database opened

new incarnation of database registered in recovery catalog

starting full resync of recovery catalog

full resync complete

 

released channel: sbt1

 

Recovery Manager complete.

Appendix: Restore Test

Create a new database, test, and import data into it from NC31.

1. Back up the database before the restore test

. /oracle/.profile

target='target sys/sys@test'
rcvcat='catalog rman/rman@rman9i'

# Get the current time for constructing a fairly unique filename:

time=`date '+%m%d%H'`

# Construct filenames using $time for uniqueness:

cmdfile=/oracle/rmanscript/log/test_$time.rcv
msglog=/oracle/rmanscript/log/test_$time.log

cat <<EOF >$cmdfile
run {
    allocate channel sbt1 type 'SBT_TAPE';
    resync catalog;

    sql 'alter system archive log current';

    backup
      filesperset 10
      format 'testarch_%s_%p'
      (archivelog like '/oradata/testarchivelog/%' delete input channel sbt1);

    backup
      format 'testcf_%s_%p'
      (current controlfile);

    backup
      full
      filesperset 10
      tag 'test_full'
      format 'testfull_%u_%p_%c'
      database;

    sql 'alter system archive log current';

    backup
      filesperset 10
      format 'testarch_%s_%p'
      (archivelog like '/oradata/testarchivelog/%' delete input channel sbt1);

    release channel sbt1;
}
EOF

rman $target $rcvcat cmdfile $cmdfile msglog $msglog

exit

 

2. Verify the data before the backup

sqlplus sys/sys@test as sysdba

SQL> select count(*) from test.OM_JOB;

  COUNT(*)
----------
      1759

3. Simulate a database failure

shutdown immediate;   (shut down the database)

Rename all of the data files and control files.

 

 

4. Run the restore script

# before restore, you must shut down the database
target='target sys/sys@test'
rcvcat='catalog rman/rman@rman9i'
time=`date '+%m%d%H'`
cmdfile=/oracle/rmanscript/log/restore_$time.rcv
msglog=/oracle/rmanscript/log/restore_$time.log
cat <<EOF >$cmdfile
startup nomount;
run
{
allocate channel sbt1 type 'sbt_tape';
restore controlfile;
alter database mount;
restore database;
recover database;
alter database open resetlogs;
release channel sbt1;
}
EOF
rman $target $rcvcat cmdfile $cmdfile msglog $msglog
exit

5. Verify the data after the restore

sqlplus sys/sys@test as sysdba

SQL> select count(*) from test.OM_JOB;

  COUNT(*)
----------
      1759

 

A Case of ORA-600 [17003]

A 10.2.0.4 system on AIX hit an ORA-600 [17003] error while running the catupgrd.sql script. The detailed log follows:

 

ALTER TYPE lcr$_row_record ADD MEMBER FUNCTION
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [17003], [0x70000008E6DA8C8], [1],
[1], [], [], [], []

1. Provide a list of invalid objects taken before upgrade was initially attempted.

spool invalid_pre.lst
select substr(owner,1,12) owner,
substr(object_name,1,30) object,
substr(object_type,1,30) type, status from
dba_objects where status <> 'VALID';
spool off

2. Provide output of select obj# ,name from obj$ where name in ('LCR$_ROW_LIST','LCR$_ROW_RECORD');

3. Provide an output of:
select D_OBJ#, do.object_name, do.object_type dtyp, do.status dsta,
D_TIMESTAMP, ORDER#, P_OBJ#, po.object_name, po.object_type ptyp, po.status
psta, P_TIMESTAMP from dependency$ d, DBA_OBJECTS do, DBA_OBJECTS po
where D_OBJ# = do.object_ID and P_OBJ#= po.object_ID
and do.object_ID in (select object_id from dba_objects where
OBJECT_NAME='LCR$_ROW_LIST')
/

4. Run the same SQL as in step 3 in another database of the same version - 10.1.0.4

select D_OBJ#, do.object_name, do.object_type dtyp, do.status dsta,
D_TIMESTAMP, ORDER#, P_OBJ#, po.object_name, po.object_type ptyp, po.status
psta, P_TIMESTAMP from dependency$ d, DBA_OBJECTS do, DBA_OBJECTS po
where D_OBJ# = do.object_ID and P_OBJ#= po.object_ID
and do.object_ID in (select object_id from dba_objects where
OBJECT_NAME='AQ$_REG_INFO')
/

Compare and verify that this object has as many parent entries in dependency$ as there
are in a fresh database of the same version. If not, alter/compile this object,
and use the same query above to verify that the parent rows are created.

5. To verify the timestamp discrepancy, run this SQL:

spool &spoolfilename
set pagesize 10000
column d_name format a20
column p_name format a20
select do.obj# d_obj,do.name d_name, do.type# d_type,
po.obj# p_obj,po.name p_name,
to_char(p_timestamp,'DD-MON-YYYY HH24:MI:SS') "P_Timestamp",
to_char(po.stime ,'DD-MON-YYYY HH24:MI:SS') "STIME",
decode(sign(po.stime-p_timestamp),0,'SAME','*DIFFER*') X
from sys.obj$ do, sys.dependency$ d, sys.obj$ po
where P_OBJ#=po.obj#(+)
and D_OBJ#=do.obj#
and do.status=1 /*dependent is valid*/
and po.status=1 /*parent is valid*/
and po.stime!=p_timestamp /*parent timestamp not match*/
order by 2,1;

SQL> select obj# ,name from obj$ where name in ('LCR$_ROW_LIST','LCR$_ROW_RECORD');

      OBJ# NAME
---------- ------------------------------
      5738 LCR$_ROW_RECORD
      5776 LCR$_ROW_RECORD
    256054 LCR$_ROW_LIST

SQL> select D_OBJ#, do.object_name, do.object_type dtyp, do.status dsta,
D_TIMESTAMP, ORDER#, P_OBJ#, po.object_name, po.object_type ptyp, po.status
2 3 psta, P_TIMESTAMP from dependency$ d, DBA_OBJECTS do, DBA_OBJECTS po
4 where D_OBJ# = do.object_ID and P_OBJ#= po.object_ID
5 and do.object_ID in (select object_id from dba_objects where
6 OBJECT_NAME='LCR$_ROW_LIST')
7 /

no rows selected

SQL> spool timestamp
SQL> set pagesize 10000
column d_name format a20
column p_name format a20
select do.obj# d_obj,do.name d_name, do.type# d_type,
po.obj# p_obj,po.name p_name,
SQL> to_char(p_timestamp,'DD-MON-YYYY HH24:MI:SS') "P_Timestamp",
to_char(po.stime ,'DD-MON-YYYY HH24:MI:SS') "STIME",
decode(sign(po.stime-p_timestamp),0,'SAME','*DIFFER*') X
from sys.obj$ do, sys.dependency$ d, sys.obj$ po
where P_OBJ#=po.obj#(+)
and D_OBJ#=do.obj#
SQL> and do.status=1 /*dependent is valid*/
2 3 4 5 6 7 8 and po.status=1 /*parent is valid*/
9 10 11 and po.stime!=p_timestamp /*parent timestamp not match*/
12 order by 2,1;

no rows selected

SQL> spool off

Errors in file ora_594098.trc:
ORA-07445: exception encountered: core dump [kgghstfel+0074] [SIGSEGV]
[Address not mapped to object] [0x204000000208] [] []

1. Install 10.2.0.1, install 10.2.0.4
2. run utlu102i.sql, check output.
3. change Oracle Home 10.1 to 10.2
4. startup upgrade and run catupgrd.sql

SQL> set lines 200
SQL> col comp_name format a50
SQL> select comp_name,version,status from dba_registry;

COMP_NAME                                          VERSION                        STATUS
-------------------------------------------------- ------------------------------ ---------------------------------
Oracle XML Database 10.2.0.4.0 VALID
Oracle Enterprise Manager 10.2.0.4.0 VALID
Oracle Data Mining 10.2.0.4.0 VALID
OLAP Catalog 10.2.0.4.0 VALID
Oracle Text 10.2.0.4.0 VALID
Spatial 10.2.0.4.0 VALID
Oracle interMedia 10.2.0.4.0 VALID
Oracle Database Catalog Views 10.2.0.4.0 VALID
Oracle Database Packages and Types 10.2.0.4.0 INVALID
Oracle Real Application Clusters 10.2.0.4.0 VALID
JServer JAVA Virtual Machine 10.2.0.4.0 VALID
Oracle XDK 10.2.0.4.0 VALID
Oracle Database Java Packages 10.2.0.4.0 VALID
OLAP Analytic Workspace 10.2.0.4.0 VALID
Oracle OLAP API 10.2.0.4.0 VALID

If the above query shows that everything is valid and is in the proper version,
it means that we would need to correct the problem with lcr$_row_record before upgrade.

I see that one component is invalid, 'Oracle Database Packages and Types'. To resolve this, please perform the following:

SQL> shutdown immediate
SQL> startup upgrade
SQL> @?/rdbms/admin/catproc.sql
SQL> @?/rdbms/admin/utlrp.sql
SQL> shutdown immediate
SQL> startup
SQL> set lines 200
SQL> col comp_name format a50
SQL> select comp_name,version,status from dba_registry;

Recompiling also hit ORA-00600: internal error code, arguments: [17003]:

SQL> alter type LCR$_ROW_RECORD compile;
alter type LCR$_ROW_RECORD compile
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [17003], [0x70000007C3A0378], [1],
[1], [], [], [], []

select D_OBJ#, do.object_name, do.object_type dtyp, do.status dsta,
D_TIMESTAMP, ORDER#, P_OBJ#, po.object_name, po.object_type ptyp, po.status psta, P_TIMESTAMP
from dependency$ d, DBA_OBJECTS do, DBA_OBJECTS po
where D_OBJ# = do.object_ID and P_OBJ#= po.object_ID and do.object_ID =
(select object_id from dba_objects where OBJECT_NAME='LCR$_ROW_RECORD');

SQL> select D_OBJ#, do.object_name, do.object_type dtyp, do.status dsta,
2 D_TIMESTAMP, ORDER#, P_OBJ#, po.object_name, po.object_type ptyp, po.status psta, P_TIMESTAMP
3 from dependency$ d, DBA_OBJECTS do, DBA_OBJECTS po
4 where D_OBJ# = do.object_ID and P_OBJ#= po.object_ID and do.object_ID =
5 (select object_id from dba_objects where OBJECT_NAME='LCR$_ROW_RECORD');

(select object_id from dba_objects where OBJECT_NAME='LCR$_ROW_RECORD')
*
ERROR at line 5:
ORA-01427: single-row subquery returns more than one row

SQL> set lines 200
SQL> col comp_name format a50
SQL> select comp_name,version,status from dba_registry;

SQL> col object_name format a20
SQL> select D_OBJ#, do.object_name, do.object_type dtyp, do.status dsta,
D_TIMESTAMP, ORDER#, P_OBJ#, po.object_name, po.object_type ptyp, po.status psta, P_TIMESTAMP
from dependency$ d, DBA_OBJECTS do, DBA_OBJECTS po
where D_OBJ# = do.object_ID and P_OBJ#= po.object_ID and do.object_ID in
(select object_id from dba_objects where OBJECT_NAME='LCR$_ROW_RECORD');

 

MOS Bug Info:

 

Hdr: 6611530 10.2.0.2.0 RDBMS 10.2.0.2.0 AQ PRODID-5 PORTID-197 ORA-600
Abstract: ORA-600 17003

PROBLEM:
--------
Customer was trying to apply the 10.2.0.3 patchset and kept getting ora-600
17003 errors running catproc.  They tested the running of catalog, catproc
and utlrp in the 10.2.0.2 database version before upgrade and got the same
error.  The errors are reproducing trying to compile at least these 2
objects:
ALTER PACKAGE "SYS"."DBMS_AQADM_SYS" COMPILE BODY REUSE SETTINGS
ALTER TYPE "SYS"."LCR$_ROW_RECORD" COMPILE SPECIFICATION REUSE SETTINGS

This issue started in SR 6535356.994 and since the issue reproduces before
upgrade it was determined that this isn't an upgrade issue.

DIAGNOSTIC ANALYSIS:
--------------------
Have reviewed alert.log and trace file.
Had customer run hcheck - output will be uploaded.

In the trace files, I couldn't find the handle for the second argument of the
ora-600 17003 error, so can not determine the object.

WORKAROUND:
-----------
none known

RELATED BUGS:
-------------
Looks exactly like Bug 5857558.  Was going to try the generic fix in this bug
but cannot determine the object because cannot find the handle in the trace
files.

REPRODUCIBILITY:
----------------
It is reproducible every time they run catproc.  catalog runs fine.

TEST CASE:
----------
none

STACK TRACE:
------------
ksedst
ksedmp
ksfdmp
kgeriv
kgeasi
kglget
kglgob
kgldpo0
kgldpo
kgldon
pl_don
ptgxtn
ptg_nd
phdbte
phncrr_check_remote_refs
phncee_check_extra_errors
phnr_resolve
ph2exp
ph2ext
ph2osa
ph2of1
ph2exp
ph2ext
ph2osa
ph2of1

SUPPORTING INFORMATION:
-----------------------

24 HOUR CONTACT INFORMATION FOR P1 BUGS:
----------------------------------------

DIAL-IN INFORMATION:
--------------------

IMPACT DATE:
------------

The query returned no rows:
SQL> select D_OBJ#, do.object_name, do.object_type dtyp, do.status dsta,
D_TIMESTAMP, ORDER#, P_OBJ#, po.object_name, po.object_type ptyp, po.status
psta, P_TIMESTAMP from dependency$ d, DBA_OBJECTS do, DBA_OBJECTS po where
D_OBJ# = do.object_ID and P_OBJ#= po.object_ID and do.object_ID = (select
object_id from dba_objects where OBJECT_NAME='AQ$_SRVNTFN_MESSAGE');
  2    3    4    5
/

no rows selected

Therefore, I have asked him to send up the files needed to set-up the system
tablespace in-house.  Do you need any other tablespaces?

Customer did the following:

1.    Recovered the database back to 10.2.0.2 before we ran catalog and catproc.
2. Performed the steps for dictionary fix as following ( he had tested this
out ? please see dev.log with the test he did before this ):
(1) update obj$ set status=5 where obj#=(select object_id from dba_objects
where OBJECT_NAME=’AQ$_SRVNTFN_MESSAGE’);
     commit;
     select obj#, name, status from obj$ where obj# = (select object_id from
dba_objects where OBJECT_NAME=’AQ$_SRVNTFN_MESSAGE’);
.
(2) FLUSH the shared pool(or bounce the DB), to reflect this changed status
in cache as well.
.
(3) alter type AQ$_SRVNTFN_MESSAGE compile;
.
(4) Check required dependency$ rows are recreated.
     select D_OBJ#, do.object_name, do.object_type dtyp, do.status dsta,
D_TIMESTAMP, ORDER#, P_OBJ#, po.object_name, po.object_type ptyp, po.status
psta, P_TIMESTAMP from dependency$ d, DBA_OBJECTS do, DBA_OBJECTS po where
D_OBJ# = do.object_ID and P_OBJ#= po.object_ID and do.object_ID = (select
object_id from dba_objects where OBJECT_NAME='AQ$_SRVNTFN_MESSAGE');
.
(5) ALTER PACKAGE "SYS"."DBMS_AQADM_SYS" COMPILE BODY REUSE SETTINGS;
     select obj#, name, status from obj$ where obj# = (select object_id from
dba_objects where OBJECT_NAME='DBMS_AQADM_SYS' and OBJECT_TYPE in ('PACKAGE
BODY'));
3. Executed catalog
4. Executed catproc and encountered Ora-600 errors:
    First error:
       UPDATE SYS.AQ_SRVNTFN_TABLE tab
           *
ERROR at line 1:
ORA-81: address range [0x60000000000DBDB0, 0x60000000000DBDB4) is not
readable
ORA-600: internal error code, arguments: [kksfbc-reparse-infinite-loop],
[0x9FFFFFFFBE9CED88], [], [], [], [], [], []

     Second error:
Warning: Type created with compilation errors.

Errors for TYPE LCR$_ROW_RECORD:

LINE/COL ERROR
——– —————————————————————–
0/0      ORA-81: address range [0x60000000000DBD80, 0x60000000000DBD84)
         is not readable
         ORA-600: internal error code, arguments: [17003],
         [0xC0000004F1DD89D0], [1], [1], [], [], [], []

Invalids after utlrp:

SYS                STREAMS$_EVALUATION_CONTEXT      EVALUATION CONTEXT
INVALID
                   DBMS_STREAMS                     PACKAGE            
INVALID
                   DBMS_STREAMS_DATAPUMP            PACKAGE BODY       
INVALID
                   DBMS_STREAMS                     PACKAGE BODY       
INVALID
                   DBMS_LOGREP_IMP                  PACKAGE BODY       
INVALID
                   LCR$_ROW_RECORD                  TYPE               
INVALID
                   LCR$_ROW_RECORD                  TYPE BODY          
INVALID
                   AQ$_AQ_SRVNTFN_TABLE_F           VIEW               
INVALID
                   AQ$AQ_SRVNTFN_TABLE              VIEW               
INVALID

I then asked him to raise the following parameters, which did no good (I
didn't think they would, but wanted to try):
shared_pool_size greater than 800M
large_pool_size greater than 250M
java_pool_size greater than 250M
db_cache_size greater than 500M
compatible set to 10.2.0.0
pga_aggregate_target greater than 1000M

Then had him do the following:
SQL> select distinct o.name, o.stime, d.p_timestamp
from obj$ o, dependency$ d
where o.stime != d.p_timestamp and o.type#=13 and o.obj#=d.p_obj# ;

  2    3  
NAME                           STIME     P_TIMESTA
—————————— ——— ———
AQ$_SRVNTFN_MESSAGE            08-NOV-07 11-NOV-06

SQL> startup restrict
SQL> conn / as sysdba

SQL> update dependency$ d set p_timestamp = (select stime from obj$ p where
d.p_obj#=p.obj#)
           where d.p_obj# in (select obj# from obj$ where type#=13) and
d.p_timestamp != (select stime
           from obj$ o where d.p_obj#=o.obj#);

ENSURE via the previous statement that only 1 row is updated.

--------- This returned 3 rows, therefore he rolled back, and this is where we
are now.

/upload/bug6611530 directory:
Dev.log - gives the outputs after the dictionary patch was done
alert_oasc1[1].log.11082007.txt - the alert.log after the dictionary patch
was done and the migration was then attempted
oasc1_ora_3475.trc - the first trace file in the series

You can see in the trace file the following 2 ora errors:
ORA-600: internal error code, arguments: [kksfbc-reparse-infinite-loop],
[0x9FFFFFFFBE9CED88], [], [], [], [], [], []
AND
ORA-600: internal error code, arguments: [17003], [0xC0000004F1DD89D0],
[1], [1], [], [], [], []
—- where I still cannot find the handle in the trace file.

The ora-600 kksfbc-reparse-infinite-loop occurred after the data dictionary
patching, so I included it here for clarity.  Will work on that one after this
is fixed (if it can be).

Here are the results of the requested query.  Since this object has both an
OBJECT_TYPE of TYPE and TYPE BODY, two queries were run:

SQL> select D_OBJ#, do.object_name, do.object_type dtyp, do.status dsta,
  2  D_TIMESTAMP, ORDER#, P_OBJ#, po.object_name, po.object_type ptyp,
po.status psta,
  3   P_TIMESTAMP from dependency$ d, DBA_OBJECTS do, DBA_OBJECTS po
  4  where D_OBJ# = do.object_ID and P_OBJ#= po.object_ID and do.object_ID =
  5  (select object_id from dba_objects where OBJECT_NAME='LCR$_ROW_RECORD'
and OBJECT_TYPE = 'TYPE');

    D_OBJ#
———-
OBJECT_NAME
——————————————————————————

DTYP                DSTA    D_TIMESTA     ORDER#     P_OBJ#
——————- ——- ——— ———- ———-
OBJECT_NAME
——————————————————————————

PTYP                PSTA    P_TIMESTA
——————- ——- ———
   1462820
LCR$_ROW_RECORD
TYPE                INVALID 09-NOV-07          2    3320201
LCR$_ROW_LIST
TYPE                VALID   11-NOV-06

   1462820
LCR$_ROW_RECORD
TYPE                INVALID 09-NOV-07          1        309
STANDARD
PACKAGE             VALID   18-APR-03

   1462820
LCR$_ROW_RECORD
TYPE                INVALID 09-NOV-07          0    1462817
LCR_ROW_LIB
LIBRARY             VALID   15-NOV-03

SQL>
SQL> select D_OBJ#, do.object_name, do.object_type dtyp, do.status dsta,
  2  D_TIMESTAMP, ORDER#, P_OBJ#, po.object_name, po.object_type ptyp,
po.status psta,
  3   P_TIMESTAMP from dependency$ d, DBA_OBJECTS do, DBA_OBJECTS po
  4  where D_OBJ# = do.object_ID and P_OBJ#= po.object_ID and do.object_ID =
  5  (select object_id from dba_objects where OBJECT_NAME='LCR$_ROW_RECORD'
and OBJECT_TYPE = 'TYPE BODY');

    D_OBJ#
———-
OBJECT_NAME
——————————————————————————

DTYP                DSTA    D_TIMESTA     ORDER#     P_OBJ#
——————- ——- ——— ———- ———-
OBJECT_NAME
——————————————————————————

PTYP                PSTA    P_TIMESTA
——————- ——- ———
   1462858
LCR$_ROW_RECORD
TYPE BODY           INVALID 09-NOV-07          0    1462820
LCR$_ROW_RECORD
TYPE                INVALID 09-NOV-07

 

 

You can try re-running the catproc.sql script in startup upgrade mode to resolve the invalid components (catproc.sql will try to validate the registry, and that should resolve the issue).

A waypoint in my career

In the last month of the 2000s, three years of studying Oracle finally paid off. After a month of endless waiting and constant second-guessing, the congratulation letter finally arrived:

Dear Xiang Bing,

Congratulations! on the successful completion of the Oracle DBA 10g Certified Master practicum.

You are now a member of an elite group of Oracle professionals.  You will receive your Oracle DBA 10g Certified Master fulfillment kit that includes a congratulations letter, OCM certificate, OCM ID card, and denim OCM shirt to the mailing address mentioned in your Oracle DBA 10g OCM Hands-on course requirement Form.

OCM, Hello World!

The 2010s are the decade of us, the post-80s generation!!

Using the Oracle LogMiner Log-Mining Feature

 

If you cannot resolve it yourself, the ParnassusData professional ORACLE database recovery team can help you recover!

ParnassusData professional database recovery team

Service hotline: 13764045638   QQ: 47079569   Email: service@parnassusdata.com

[gview file=”https://www.askmac.cn/wp-content/uploads/2014/11/less11_Flashback.pdf”]

Using Oracle Flashback Technology

 


 

 

[gview file=”https://www.askmac.cn/wp-content/uploads/2014/11/less11_Flashback.pdf”]

A Java Program to Import Text Files and Generate XLS Files

Recently at work I had to import data from a text file and generate an XLS file. It didn't seem hard: a while back I had written a Java program that ran on a server, pulling data from an Oracle database and generating XLS files. So I took that program, made a few changes, and had it working quickly. Since this tool is operated by users on the client side, I wrote a very simple GUI with Swing.

During actual testing, however, I hit a big problem: memory. The original program first loaded all the data to be imported into a Vector. On the server, 300,000+ rows were handled easily, but on Windows XP it failed (my machine has 512MB of RAM). My test file had 100,000 rows; after loading them into the Vector, writing the data out to an XLS file through the POI library overflowed the JVM heap. I manually raised the JVM's maximum heap size with -Xmx, but that still wasn't enough; only after reducing the test data to 25,000 rows did the program generate the XLS file successfully. So the final design became: first write the imported text data into temp files, each with the column names on the first line and at most 25,000 data rows from the second line on; each temp file then produces one XLS file. Testing showed the throughput is still quite good. Java is actually quite efficient nowadays, unless you are in an environment with very strict timing requirements.

For the release, since the tool is for users with no professional IT background, I packaged it into a single exe with exe4j for convenience. Users only need JRE 5.0 or later. (I haven't tested on 6.0, but I believe it is fine.)

Usage: click the Select button and choose the file to import. The generated XLS files are placed in the same directory as the imported file; if any data is skipped, a log file is created in the program's directory.

Data file format:
The first line should be the header, i.e. the table's column names; the remaining lines are the data itself, with fields separated by |. See the downloadable test sample.
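The chunking strategy described above can be sketched in a few lines. Note this is Python rather than the tool's actual Java, and `split_into_chunks` is an illustrative name, not code from the tool:

```python
def split_into_chunks(lines, max_rows=25000):
    """Split pipe-delimited text data into chunks of at most max_rows
    data rows. The first input line is the header; it is repeated at the
    top of every chunk so each chunk can be written out as one XLS file
    without ever holding the whole data set in memory."""
    header, data = lines[0], lines[1:]
    return [[header] + data[i:i + max_rows]
            for i in range(0, len(data), max_rows)]

# 6 data rows with a chunk size of 2 -> 3 chunks, each carrying the header
rows = ["id|name"] + [f"{i}|user{i}" for i in range(6)]
chunks = split_into_chunks(rows, max_rows=2)
```

Each chunk would then be handed to POI to produce one XLS file.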

Software download:
TextConverter.zip

Test data download:
test2.txt

Software UI

Script:Generate A DDL Script For A Table

The following script generates the DDL statements to create a table, using DBMS_METADATA.GET_DDL:

-- How to use ddl.sql:
-- Run ddl.sql from SQL*Plus.
-- Log in to SQL*Plus as the apps user or a DBA user.
-- Start ddl.sql; it will prompt for the table_name and table_owner you're looking for.
-- It will generate tablename_ddl.txt.

set long 1000000
SET PAGESIZE 3000
set lines 200
SET HEADING OFF
SET VERIFY OFF
SET FEEDBACK OFF
set echo on
set timing off
set wrap On

ACCEPT TABLE_NAME CHAR PROMPT 'Enter Table Name : '
ACCEPT TABLE_OWNER CHAR PROMPT 'Enter Table Owner : '

select DBMS_METADATA.GET_DDL('TABLE',OBJECT_NAME,OWNER)
FROM Dba_objects
where owner = UPPER('&TABLE_OWNER') and object_name = UPPER('&TABLE_NAME')
and object_type = 'TABLE'
union all
select dbms_metadata.GET_DEPENDENT_DDL ('COMMENT', TABLE_NAME, OWNER )
FROM (select table_name,owner
from Dba_col_comments
where owner = UPPER('&TABLE_OWNER')
and table_name = UPPER('&TABLE_NAME')
and comments is not null
union
select table_name,owner
from sys.Dba_TAB_comments
where owner = UPPER('&TABLE_OWNER')
and table_name = UPPER('&TABLE_NAME')
and comments is not null)
union all
select DBMS_METADATA.GET_DEPENDENT_DDL('INDEX',TABLE_NAME, TABLE_OWNER)
FROM (select table_name,table_owner
FROM Dba_indexes
where table_owner = UPPER('&TABLE_OWNER')
and table_name = UPPER('&TABLE_NAME')
and index_name not in (select constraint_name
from sys.Dba_constraints c
where c.table_name = UPPER('&TABLE_NAME')
and c.constraint_type = 'P' )
and rownum = 1)
union all
select dbms_metadata.GET_DDL ('TRIGGER', trigger_name ,owner )
from Dba_triggers
where table_owner = UPPER('&TABLE_OWNER')
and table_name = UPPER('&TABLE_NAME')
.
SET CONCAT +
spool &TABLE_NAME+_ddl.txt
/
spool off

Sample output:

SQL> @ddl
SQL> set timing off
SQL> set wrap On
SQL>
SQL> ACCEPT TABLE_NAME CHAR PROMPT 'Enter Table Name : '
Enter Table Name : TAB$
SQL> ACCEPT TABLE_OWNER CHAR PROMPT 'Enter Table Owner : '
Enter Table Owner : SYS
SQL>
SQL> select DBMS_METADATA.GET_DDL('TABLE',OBJECT_NAME,OWNER)
  2  FROM Dba_objects
  3  where owner = UPPER('&TABLE_OWNER') and object_name = UPPER('&TABLE_NAME')
  4  and object_type = 'TABLE'
  5  union all
  6  select dbms_metadata.GET_DEPENDENT_DDL ('COMMENT', TABLE_NAME, OWNER )
  7  FROM (select table_name,owner
  8  from Dba_col_comments
  9  where owner = UPPER('&TABLE_OWNER')
 10  and table_name = UPPER('&TABLE_NAME')
 11  and comments is not null
 12  union
 13  select table_name,owner
 14  from sys.Dba_TAB_comments
 15  where owner = UPPER('&TABLE_OWNER')
 16  and table_name = UPPER('&TABLE_NAME')
 17  and comments is not null)
 18  union all
 19  select DBMS_METADATA.GET_DEPENDENT_DDL('INDEX',TABLE_NAME, TABLE_OWNER)
 20  FROM (select table_name,table_owner
 21  FROM Dba_indexes
 22  where table_owner = UPPER('&TABLE_OWNER')
 23  and table_name = UPPER('&TABLE_NAME')
 24  and index_name not in (select constraint_name
 25  from sys.Dba_constraints
 26  where table_name = table_name
 27  and constraint_type = 'P' )
 28  and rownum = 1)
 29  union all
 30  select dbms_metadata.GET_DDL ('TRIGGER', trigger_name ,owner )
 31  from Dba_triggers
 32  where table_owner = UPPER('&TABLE_OWNER')
 33  and table_name = UPPER('&TABLE_NAME')
 34  .
SQL> SET CONCAT +
SQL> spool &TABLE_NAME+_ddl.txt
SP2-0332: Cannot create spool file.
SQL> /


  CREATE TABLE "SYS"."TAB$"
   (    "OBJ#" NUMBER NOT NULL ENABLE,
        "DATAOBJ#" NUMBER,
        "TS#" NUMBER NOT NULL ENABLE,
        "FILE#" NUMBER NOT NULL ENABLE,
        "BLOCK#" NUMBER NOT NULL ENABLE,
        "BOBJ#" NUMBER,
        "TAB#" NUMBER,
        "COLS" NUMBER NOT NULL ENABLE,
        "CLUCOLS" NUMBER,
        "PCTFREE$" NUMBER NOT NULL ENABLE,
        "PCTUSED$" NUMBER NOT NULL ENABLE,
        "INITRANS" NUMBER NOT NULL ENABLE,
        "MAXTRANS" NUMBER NOT NULL ENABLE,
        "FLAGS" NUMBER NOT NULL ENABLE,
        "AUDIT$" VARCHAR2(38) NOT NULL ENABLE,
        "ROWCNT" NUMBER,
        "BLKCNT" NUMBER,
        "EMPCNT" NUMBER,
        "AVGSPC" NUMBER,
        "CHNCNT" NUMBER,
        "AVGRLN" NUMBER,
        "AVGSPC_FLB" NUMBER,
        "FLBCNT" NUMBER,
        "ANALYZETIME" DATE,
        "SAMPLESIZE" NUMBER,
        "DEGREE" NUMBER,
        "INSTANCES" NUMBER,
        "INTCOLS" NUMBER NOT NULL ENABLE,
        "KERNELCOLS" NUMBER NOT NULL ENABLE,
        "PROPERTY" NUMBER NOT NULL ENABLE,
        "TRIGFLAG" NUMBER,
        "SPARE1" NUMBER,
        "SPARE2" NUMBER,
        "SPARE3" NUMBER,
        "SPARE4" VARCHAR2(1000),
        "SPARE5" VARCHAR2(1000),
        "SPARE6" DATE
   ) CLUSTER "SYS"."C_OBJ#" ("OBJ#")



  CREATE INDEX "SYS"."I_TAB1" ON "SYS"."TAB$" ("BOBJ#")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "SYSTEM"

Differences Between Optimizer Modes (ALL_ROWS & FIRST_ROWS_N)

The FIRST_ROWS optimizer mode aims to retrieve the first row of the result set as quickly as possible. It is most effective when an OLTP user is retrieving a single record, but it is not the ideal choice for batch-intensive environments, where a query typically retrieves many rows. The FIRST_ROWS hint generally forces the use of certain indexes that might not be used under the default ALL_ROWS mode. FIRST_ROWS is ignored for UPDATE and DELETE statements, because all rows matched by those DML operations will be updated or deleted anyway. Likewise, for grouping statements (GROUP BY, DISTINCT, INTERSECT, MINUS, and UNION), FIRST_ROWS is replaced by ALL_ROWS, because grouping must retrieve all rows. When the statement contains an ORDER BY clause and an index scan can do the actual sorting, the optimizer avoids the extra sort. When an index scan is available on the inner table, the optimizer prefers NESTED LOOPS over a SORT MERGE join.

10g also provides FIRST_ROWS_N, a variant of FIRST_ROWS that specifies how many rows should be returned fastest; n can be 1, 10, 100, or 1000. This newer FIRST_ROWS_N approach is fully cost-based and is sensitive to the value of n: if n is small, the optimizer tends to produce plans with nested loops and index lookups; if n is large, it may produce plans built from hash joins and full table scans (similar to ALL_ROWS). FIRST_ROWS differs from FIRST_ROWS_N in that it retains some rule-based code, whereas FIRST_ROWS_N computes costs entirely from statistics, as the Oracle documentation describes:

The ALL_ROWS optimizer mode directs the query to retrieve all rows as quickly as possible (best throughput). It is the better choice for environments with heavy batch reporting.

During an actual SQL hard parse under FIRST_ROWS_N, the optimizer first computes each candidate plan's cost the same way ALL_ROWS would, and then substitutes the N rows we asked for in place of the full candidate cardinality (CARD) in the cost computation, yielding the FIRST_ROWS_N plan cost.
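The effect of that substitution can be sketched with a toy cost model; the formulas and constants below are illustrative assumptions only, not Oracle's actual costing:

```python
# Toy cost model: substituting a small candidate cardinality (as FIRST_ROWS_N
# does) shrinks the cost of row-at-a-time plans far more than scan-based ones.

def nl_cost(outer_card, inner_probe_cost):
    # Nested loops: one probe of the inner access path per outer row.
    return outer_card * inner_probe_cost

def hash_cost(outer_scan_cost, inner_scan_cost, build_cost=1.0):
    # Hash join: roughly one full scan of each input plus a build cost.
    return outer_scan_cost + inner_scan_cost + build_cost

FULL_SCAN, INDEX_PROBE, TOTAL_ROWS = 35.0, 2.0, 9622

all_rows_nl = nl_cost(TOTAL_ROWS, INDEX_PROBE)  # costed with the full CARD
all_rows_ha = hash_cost(FULL_SCAN, FULL_SCAN)   # hash join wins easily

first_rows_nl = nl_cost(11, INDEX_PROBE)        # CARD replaced by N + 1 = 11
```

In the real traces, the hash join still wins under FIRST_ROWS_10 (9.85 vs. 24.02), but the gap collapses compared with ALL_ROWS; with a small enough n and a selective enough index, nested loops take over.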

create table test as select * from dba_objects;

create table testa as select * from test;

alter session set events '10053 trace name context forever, level 1';    -- use event 10053 to trace the cost computation

alter session set optimizer_mode=all_rows;

select test.owner from test,testa where test.object_id=testa.object_id;

alter session set events '10053 trace name context off';

Below, from the ALL_ROWS trace, is the selection of the best join method:

NL Join

Outer table: Card: 9622.00  Cost: 35.37  Resp: 35.37  Degree: 1  Bytes: 7

Inner table: TESTA  Alias: TESTA

Access Path: TableScan

NL Join:  Cost: 318924.52  Resp: 318924.52  Degree: 0

Cost_io: 315358.00  Cost_cpu: 27736509932

Resp_io: 315358.00  Resp_cpu: 27736509932

Access Path: index (index (FFS))

Index: INDA_ID

resc_io: 5.69  resc_cpu: 1304190

ix_sel: 0.0000e+00  ix_sel_with_filters: 1

Inner table: TESTA  Alias: TESTA

Access Path: index (FFS)

NL Join:  Cost: 56375.98  Resp: 56375.98  Degree: 0

Cost_io: 54762.00  Cost_cpu: 12551800804

Resp_io: 54762.00  Resp_cpu: 12551800804

Access Path: index (AllEqJoinGuess)

Index: INDA_ID

resc_io: 1.00  resc_cpu: 8171

ix_sel: 1.0393e-04  ix_sel_with_filters: 1.0393e-04

NL Join: Cost: 9667.48  Resp: 9667.48  Degree: 1

Cost_io: 9657.00  Cost_cpu: 81507910

Resp_io: 9657.00  Resp_cpu: 81507910

Best NL cost: 9667.48

resc: 9667.48 resc_io: 9657.00 resc_cpu: 81507910

resp: 9667.48 resp_io: 9657.00 resp_cpu: 81507910

Join Card:  9622.00 = outer (9622.00) * inner (9622.00) * sel (1.0393e-04)

Join Card – Rounded: 9622 Computed: 9622.00

SM Join

Outer table:

resc: 35.37  card 9622.00  bytes: 7  deg: 1  resp: 35.37

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      17 Row size:           14 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         10

Total IO sort cost: 27      Total CPU sort cost: 13931876

Total Temp space used: 254000

SM join: Resc: 203.62  Resp: 203.62  [multiMatchCost=0.00]

HA Join

Outer table:

resc: 35.37  card 9622.00  bytes: 7  deg: 1  resp: 35.37

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

Cost per ptn: 0.81  #ptns: 1

hash_area: 124 (max=3072)   Hash join: Resc: 43.35  Resp: 43.35  [multiMatchCost=0.00]

HA Join (swap)

Outer table:

resc: 7.17  card 9622.00  bytes: 3  deg: 1  resp: 7.17

Inner table: TEST  Alias: TEST

resc: 35.37  card: 9622.00  bytes: 7  deg: 1  resp: 35.37

using dmeth: 2  #groups: 1

Cost per ptn: 0.81  #ptns: 1

hash_area: 124 (max=3072)   Hash join: Resc: 43.35  Resp: 43.35  [multiMatchCost=0.00]

HA cost: 43.35

resc: 43.35 resc_io: 42.00 resc_cpu: 10480460

resp: 43.35 resp_io: 42.00 resp_cpu: 10480460

Best:: JoinMethod: Hash

Cost: 43.35  Degree: 1  Resp: 43.35  Card: 9622.00  Bytes: 10

***********************

Best so far: Table#: 0  cost: 35.3706  card: 9622.0000  bytes: 67354

Table#: 1  cost: 43.3476  card: 9622.0000  bytes: 96220

Note that both join inputs have a candidate cardinality of 9622 rows, and the actual result set is also 9622 rows.

Now let's look at the trace under FIRST_ROWS_10:

alter session set events '10053 trace name context forever, level 1';

alter session set optimizer_mode=first_rows_10;

select test.owner from test,testa where test.object_id=testa.object_id;

alter session set events '10053 trace name context off';

Now joining: TEST[TEST]#0

***************

NL Join

Outer table: Card: 11.00  Cost: 2.00  Resp: 2.00  Degree: 1  Bytes: 3

Inner table: TEST  Alias: TEST

Access Path: TableScan

NL Join:  Cost: 368.08  Resp: 368.08  Degree: 0

Cost_io: 364.00  Cost_cpu: 31713898

Resp_io: 364.00  Resp_cpu: 31713898

Access Path: index (AllEqJoinGuess)

Index: IND_ID

resc_io: 2.00  resc_cpu: 15503

ix_sel: 1.0393e-04  ix_sel_with_filters: 1.0393e-04

NL Join (ordered): Cost: 24.02  Resp: 24.02  Degree: 1

Cost_io: 24.00  Cost_cpu: 178973

Resp_io: 24.00  Resp_cpu: 178973

Best NL cost: 24.02

resc: 24.02 resc_io: 24.00 resc_cpu: 178973

resp: 24.02 resp_io: 24.00 resp_cpu: 178973

Join Card:  11.00 = outer (11.00) * inner (9622.00) * sel (1.0393e-04)

Join Card – Rounded: 11 Computed: 11.00

SM Join

Outer table:

resc: 7.17  card 9622.00  bytes: 3  deg: 1  resp: 7.17

Inner table: TEST  Alias: TEST

resc: 35.37  card: 9622.00  bytes: 7  deg: 1  resp: 35.37

using dmeth: 2  #groups: 1

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      22 Row size:           18 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         14

Total IO sort cost: 36      Total CPU sort cost: 14055006

Total Temp space used: 320000

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      17 Row size:           14 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         10

Total IO sort cost: 27      Total CPU sort cost: 13931876

Total Temp space used: 254000

SM join: Resc: 109.14  Resp: 109.14  [multiMatchCost=0.00]

SM cost: 109.14

resc: 109.14 resc_io: 105.00 resc_cpu: 32173386

resp: 109.14 resp_io: 105.00 resp_cpu: 32173386

SM Join (with index on outer)

Access Path: index (FullScan)

Index: IND_ID

resc_io: 167.00  resc_cpu: 5134300

ix_sel: 1  ix_sel_with_filters: 1

Cost: 167.66  Resp: 167.66  Degree: 1

Outer table:

resc: 167.66  card 11.00  bytes: 7  deg: 1  resp: 167.66

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      17 Row size:           14 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         10

Total IO sort cost: 27      Total CPU sort cost: 13931876

Total Temp space used: 254000

SM join: Resc: 203.62  Resp: 203.62  [multiMatchCost=0.00]

HA Join

Outer table:

resc: 35.37  card 9622.00  bytes: 7  deg: 1  resp: 35.37

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

Cost per ptn: 0.81  #ptns: 1

hash_area: 124 (max=3072)   Hash join: Resc: 43.35  Resp: 43.35  [multiMatchCost=0.00]

HA Join (swap)

Outer table:

resc: 7.17  card 9622.00  bytes: 3  deg: 1  resp: 7.17

Inner table: TEST  Alias: TEST

resc: 2.00  card: 11.00  bytes: 7  deg: 1  resp: 2.00

using dmeth: 2  #groups: 1

Cost per ptn: 0.69  #ptns: 1

hash_area: 124 (max=3072)   Hash join: Resc: 9.85  Resp: 9.85  [multiMatchCost=0.00]

HA cost: 9.85

resc: 9.85 resc_io: 9.00 resc_cpu: 6646477

resp: 9.85 resp_io: 9.00 resp_cpu: 6646477

First K Rows: copy A one plan, tab=TESTA

Best:: JoinMethod: Hash

Cost: 9.85  Degree: 1  Resp: 9.85  Card: 9622.00  Bytes: 17

***********************

Best so far: Table#: 0  cost: 2.0012  card: 11.0000  bytes: 77

Table#: 1  cost: 9.8546  card: 9622.0000  bytes: 163574

Note that this computation substitutes 11 (10+1), the number of rows the user wants returned first, as the candidate cardinality (Card) of a join input; with the candidate result sets capped this way, the corresponding costs all drop.

Below is the FIRST_ROWS_1000 case:

alter session set events '10053 trace name context forever, level 1';

alter session set optimizer_mode=first_rows_1000;

select test.owner from test,testa where test.object_id=testa.object_id;

alter session set events '10053 trace name context off';

NL Join

Outer table: Card: 1000.00  Cost: 5.04  Resp: 5.04  Degree: 1  Bytes: 7

Inner table: TESTA  Alias: TESTA

Access Path: TableScan

NL Join:  Cost: 33147.66  Resp: 33147.66  Degree: 0

Cost_io: 32777.00  Cost_cpu: 2882616819

Resp_io: 32777.00  Resp_cpu: 2882616819

Access Path: index (index (FFS))

Index: INDA_ID

resc_io: 5.69  resc_cpu: 1304190

ix_sel: 0.0000e+00  ix_sel_with_filters: 1

Inner table: TESTA  Alias: TESTA

Access Path: index (FFS)

NL Join:  Cost: 5861.74  Resp: 5861.74  Degree: 0

Cost_io: 5694.00  Cost_cpu: 1304492819

Resp_io: 5694.00  Resp_cpu: 1304492819

Access Path: index (AllEqJoinGuess)

Index: INDA_ID

resc_io: 1.00  resc_cpu: 8171

ix_sel: 1.0393e-04  ix_sel_with_filters: 1.0393e-04

NL Join: Cost: 1006.09  Resp: 1006.09  Degree: 1

Cost_io: 1005.00  Cost_cpu: 8474019

Resp_io: 1005.00  Resp_cpu: 8474019

Best NL cost: 1006.09

resc: 1006.09 resc_io: 1005.00 resc_cpu: 8474019

resp: 1006.09 resp_io: 1005.00 resp_cpu: 8474019

Join Card:  1000.00 = outer (1000.00) * inner (9622.00) * sel (1.0393e-04)

Join Card – Rounded: 1000 Computed: 1000.00

SM Join

Outer table:

resc: 35.37  card 9622.00  bytes: 7  deg: 1  resp: 35.37

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      22 Row size:           18 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         14

Total IO sort cost: 36      Total CPU sort cost: 14055006

Total Temp space used: 320000

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      17 Row size:           14 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         10

Total IO sort cost: 27      Total CPU sort cost: 13931876

Total Temp space used: 254000

SM join: Resc: 109.14  Resp: 109.14  [multiMatchCost=0.00]

SM cost: 109.14

resc: 109.14 resc_io: 105.00 resc_cpu: 32173386

resp: 109.14 resp_io: 105.00 resp_cpu: 32173386

SM Join (with index on outer)

Access Path: index (FullScan)

Index: IND_ID

resc_io: 167.00  resc_cpu: 5134300

ix_sel: 1  ix_sel_with_filters: 1

Cost: 167.66  Resp: 167.66  Degree: 1

Outer table:

resc: 167.66  card 1000.00  bytes: 7  deg: 1  resp: 167.66

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

SORT resource      Sort statistics

Sort width:          70 Area size:      131072 Max Area size:    12582912

Degree:               1

Blocks to Sort:      17 Row size:           14 Total Rows:           9622

Initial runs:         2 Merge passes:        1 IO Cost / pass:         10

Total IO sort cost: 27      Total CPU sort cost: 13931876

Total Temp space used: 254000

SM join: Resc: 203.62  Resp: 203.62  [multiMatchCost=0.00]

HA Join

Outer table:

resc: 35.37  card 9622.00  bytes: 7  deg: 1  resp: 35.37

Inner table: TESTA  Alias: TESTA

resc: 7.17  card: 9622.00  bytes: 3  deg: 1  resp: 7.17

using dmeth: 2  #groups: 1

Cost per ptn: 0.81  #ptns: 1

hash_area: 124 (max=3072)   Hash join: Resc: 43.35  Resp: 43.35  [multiMatchCost=0.00]

HA Join (swap)

Outer table:

resc: 7.17  card 9622.00  bytes: 3  deg: 1  resp: 7.17

Inner table: TEST  Alias: TEST

resc: 5.04  card: 1000.00  bytes: 7  deg: 1  resp: 5.04

using dmeth: 2  #groups: 1

Cost per ptn: 0.70  #ptns: 1

hash_area: 124 (max=3072)   Hash join: Resc: 12.91  Resp: 12.91  [multiMatchCost=0.00]

HA cost: 12.91

resc: 12.91 resc_io: 12.00 resc_cpu: 7038524

resp: 12.91 resp_io: 12.00 resp_cpu: 7038524

First K Rows: copy A one plan, tab=TESTA

Best:: JoinMethod: Hash

Cost: 12.91  Degree: 1  Resp: 12.91  Card: 9622.00  Bytes: 17

***********************

Best so far: Table#: 0  cost: 5.0389  card: 1000.0000  bytes: 7000

Table#: 1  cost: 12.9051  card: 9622.0000  bytes: 163574

Note that here 1000 is substituted as the candidate row count of one of the join inputs.
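The Join Card lines in all three traces follow the same formula, and the arithmetic can be checked directly (the selectivity 1.0393e-04 is taken from the traces; this snippet only verifies the multiplication):

```python
# Join Card = outer cardinality * inner cardinality * join selectivity,
# as printed in the 10053 traces above.
SEL = 1.0393e-04  # join selectivity from the traces

def join_card(outer, inner, sel=SEL):
    return round(outer * inner * sel)

all_rows      = join_card(9622, 9622)  # ALL_ROWS: full candidate sets
first_rows_10 = join_card(11, 9622)    # FIRST_ROWS_10: outer capped at 10 + 1
first_rows_1k = join_card(1000, 9622)  # FIRST_ROWS_1000: outer capped at 1000
```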

MOS hosts the well-known paper "MIGRATING TO THE COST-BASED OPTIMIZER", which details the differences between the RBO and the CBO:
[gview file=”http://askmac.cn/wp-content/uploads/resource/40178_rbo_rip.doc”]

Know More about Oracle PGA Memory

Question :

 

1. How can we know the available PGA and temporary tablespace before we issue a huge operation?
2. Can we estimate the PGA and temporary tablespace needed for a huge operation?
3. As we know, there is a limit for a user process. Can we set it to unlimited, and how?

 

We say:

 

PROPOSED SOLUTION(S)
====================
ANSWER
======
Note 223730.1  Automatic PGA Memory Management in 9i and 10g

This document briefly describes how Oracle 9i manages PGA work areas, how to
tune them, and some common issues and common misunderstandings.

Automatic PGA Memory Management
-------------------------------
***Checked for relevance on 07-Jan-2011***

The Process Global Area, often known as the Program Global Area (PGA), resides
in the private memory of the server process. It contains global variables, data
structures, and control information for a server process. An example of such
information is the runtime area of a cursor: each time a cursor is executed, a
new runtime area is created for that cursor in the PGA memory region of the
server process executing that cursor.

The performance of complex long-running queries, typical in a DSS environment,
depends to a large extent on the memory available in the Program Global Area
(PGA), which is also called the work area.

The size of a work area can be controlled and tuned. Generally, bigger work areas
can significantly improve the performance of a particular operator at the cost of
higher memory consumption. Ideally, the size of a work area is big enough that it
can accommodate the input data and auxiliary memory structures allocated by its
associated SQL operator. This is known as the optimal size of a work area (e.g.
a memory sort). When the size of the work area is smaller than optimal
(e.g. a disk sort), the response time increases, because an extra pass is performed
over part of the input data. This is known as the one-pass size of the work area.
Under the one-pass threshold, when the size of a work area is far too small compared
to the input data size, multiple passes over the input data are needed. This could
dramatically increase the response time of the operator. This is known as the multi-pass
size of the work area.
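The optimal / one-pass / multi-pass distinction can be sketched as follows; the square-root threshold here is the classic external-sort rule of thumb, not Oracle's exact algorithm:

```python
def workarea_mode(input_mb, workarea_mb):
    """Classify a sort work area under a simplified model:
    optimal    - the whole input fits in the work area (in-memory sort);
    one-pass   - one extra pass over the input suffices (roughly
                 workarea >= sqrt(input) for an external sort);
    multi-pass - anything smaller; response time degrades sharply."""
    if workarea_mb >= input_mb:
        return "optimal"
    if workarea_mb >= input_mb ** 0.5:
        return "one-pass"
    return "multi-pass"
```

For a 100MB sort input: a 128MB work area is optimal, 20MB gives a one-pass disk sort, and 5MB forces multiple passes.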

In Oracle8i, administrators sized the PGA by carefully adjusting a number of
initialization parameters, such as SORT_AREA_SIZE, HASH_AREA_SIZE,
BITMAP_MERGE_AREA_SIZE, and CREATE_BITMAP_AREA_SIZE.

Starting with Oracle9i, an option is provided to completely automate the
management of PGA memory.  Administrators merely need to specify the
maximum amount of PGA memory available to an instance using a newly
introduced initialization parameter PGA_AGGREGATE_TARGET.

The database server automatically distributes this memory among various
active queries in an intelligent manner so as to ensure maximum performance
benefits and the most efficient utilization of memory. Furthermore, Oracle9i
and newer releases can adapt themselves to a changing workload, thus utilizing
resources efficiently regardless of the load on the system. The amount of
the PGA memory available to an instance can be changed dynamically by
altering the value of the PGA_AGGREGATE_TARGET parameter making it possible
to add to and remove PGA memory from an active instance online. Since the
database engine itself is better equipped to determine SQL execution memory
requirements, database administrators should use this feature and not try
to tune the PGA manually. This should translate to better throughput for
a large number of users on the system as well as improved response time for
queries.

The automatic SQL execution memory management feature is enabled by setting the
parameter WORKAREA_SIZE_POLICY to AUTO and by specifying a size of
PGA_AGGREGATE_TARGET in the initialization file. These two parameters can also be
set dynamically using the ALTER SYSTEM command. In the absence of either of these
parameters, the database will revert to manual PGA management mode. In Oracle9i
Release 2, an advisory for PGA_AGGREGATE_TARGET was introduced. Just like in Buffer
Cache Advisory, the PGA Advisory will suggest the appropriate size for PGA memory
and thus make PGA tuning an even simpler task.

Version specific notes:

Until 9iR2, PGA_AGGREGATE_TARGET parameter controls the sizing of workareas for
all dedicated server connections, but it has no effect on shared servers (aka
MTS) connections and the *_AREA_SIZE parameters will take precedence in this
case.

In 10g, PGA_AGGREGATE_TARGET controls workareas allocated by both dedicated and
shared connections.

As of 11g, Automatic Memory Management (AMM) expands to managing both SGA and
PGA memory.   Under memory pressure for PGA memory, SGA memory will be
re-allocated for use by a process to accommodate workarea needs.  On the
flip-side, if PGA memory is under allocated, memory can be added to the
auto-tuned components in the SGA beyond the original SGA configuration.
NOTE:   With AMM, setting an explicit value for PGA_AGGREGATE_TARGET will
act as a minimum setting that AMM will not shrink below.  See note:443746.1
for more information.

How To Tune PGA_AGGREGATE_TARGET
———————————-

The first question when setting this parameter is: what is the best value for
it?

To determine the appropriate setting for the PGA_AGGREGATE_TARGET parameter,
we recommend the following steps:

1- Make a first estimate for PGA_AGGREGATE_TARGET based on the following rule:

- For OLTP systems

PGA_AGGREGATE_TARGET = (total physical memory * 80%) * 20%

- For DSS systems

PGA_AGGREGATE_TARGET = (total physical memory * 80%) * 50%

So for example, if we have an Oracle instance configured on a system with 16G
of physical memory, then the suggested PGA_AGGREGATE_TARGET value to start
with is (16G * 80%) * 20% ~= 2.5G for an OLTP system, and
(16G * 80%) * 50% ~= 6.5G for a DSS system.

In the above equation, we assume that 20% of the memory will be used by the
OS. In an OLTP system, 20% of the remaining memory is used for
PGA_AGGREGATE_TARGET, and the rest goes to the Oracle SGA and to non-Oracle
process memory. So make sure that you have
enough memory for your SGA and also for non-Oracle processes.
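The rule of thumb above reduces to a one-line calculation (the 80%/20%/50% factors come straight from the note; treat the result only as a first estimate to refine later with the advisory views):

```python
def pga_target_gb(physical_gb, workload):
    """First-cut PGA_AGGREGATE_TARGET per the note's rule of thumb:
    leave 20% of RAM to the OS, then take 20% (OLTP) or 50% (DSS)
    of what remains."""
    usable = physical_gb * 0.80
    return usable * {"OLTP": 0.20, "DSS": 0.50}[workload]

oltp = pga_target_gb(16, "OLTP")  # 2.56 GB (~2.5G in the note)
dss  = pga_target_gb(16, "DSS")   # 6.4 GB  (~6.5G in the note)
```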

2- A second step in tuning PGA_AGGREGATE_TARGET is to monitor performance
using the available PGA statistics and see whether PGA_AGGREGATE_TARGET is
undersized or oversized. Several dynamic performance views are available for
this purpose:

– V$PGASTAT

This view provides instance-level statistics on the PGA memory usage and
the automatic PGA memory manager. For example:

SELECT * FROM V$PGASTAT;

NAME                                               VALUE
--------------------------------------------------------
aggregate PGA target parameter                     524288000 bytes
aggregate PGA auto target                          463435776 bytes
global memory bound                                25600 bytes
total PGA inuse                                    9353216 bytes
total PGA allocated                                73516032 bytes
maximum PGA allocated                              698371072 bytes
total PGA used for auto workareas                  0 bytes
maximum PGA used for auto workareas                560744448 bytes
total PGA used for manual workareas                0 bytes
maximum PGA used for manual workareas              0 bytes
over allocation count                              0 bytes
total bytes processed                              4.0072E+10 bytes
total extra bytes read/written                     3.1517E+10 bytes
cache hit percentage                               55.97 percent

Main statistics to look at:

(a) aggregate PGA auto target: This gives the amount of PGA memory Oracle can
use for work areas running in automatic mode. This part of memory represents the
tunable part of PGA memory, i.e. memory allocated for memory-intensive SQL
operators such as sorts, hash joins, group-bys, bitmap merges and bitmap index
creates. This part of memory can be shrunk or expanded depending on the system
load. Other parts of PGA memory are known as untunable, i.e. they require a size
that cannot be negotiated (e.g. context information for each session, for each
open/active cursor, PL/SQL or Java memory).

So, the aggregate PGA auto target should not be small compared to the value of
PGA_AGGREGATE_TARGET. You must ensure that enough PGA memory is left for work areas
running in automatic mode.

(b) total PGA used for auto workareas: This gives the actual tunable PGA memory
used by the system. The 'maximum PGA used for auto workareas' statistic gives
the maximum value reached by the previous statistic since instance startup.

(c) total PGA inuse: This gives the total PGA memory currently in use. The
detail of this value can be found in the PGA_USED_MEM column of the V$PROCESS
view.

Oracle92, 10g, 11g:
(d) over allocation count: Over-allocating PGA memory can happen if the value of
PGA_AGGREGATE_TARGET is too small to accommodate the untunable PGA memory part
plus the minimum memory required to execute the work area workload. When this
happens, Oracle cannot honor the initialization parameter PGA_AGGREGATE_TARGET,
and extra PGA memory needs to be allocated. The over allocation count is the
number of times the system was detected in this state since database startup.
This count should ideally be equal to zero.

Oracle92,10g, 11g:
(e) cache hit percentage: This metric is computed by Oracle to reflect the
performance of the PGA memory component. It is cumulative from instance
start-up. A value of 100% means that all work areas executed by the system
since instance start-up have used an optimal amount of PGA memory. This is,
of course, ideal but rarely happens except maybe for pure OLTP systems. In
reality, some work areas run one-pass or even multi-pass, depending on the
overall size of the PGA memory. When a work area cannot run optimally, one or
more extra passes are performed over the input data. This reduces the cache
hit percentage in proportion to the size of the input data and the number of
extra passes performed. This value is computed from the "total bytes processed"
and "total extra bytes read/written" statistics available in the same view using
the following formula:

                      total bytes processed * 100
PGA Cache Hit Ratio = -------------------------------------------------------
                      total bytes processed + total extra bytes read/written
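Plugging the sample V$PGASTAT numbers into this formula reproduces the reported value. The helper name below is ours, not an Oracle API; the two inputs are the "total bytes processed" and "total extra bytes read/written" rows from the listing above.

```python
def pga_cache_hit_pct(total_bytes_processed, extra_bytes_rw):
    # formula above: processed * 100 / (processed + extra read/written)
    return total_bytes_processed * 100.0 / (total_bytes_processed + extra_bytes_rw)

# Values from the sample V$PGASTAT listing:
pct = pga_cache_hit_pct(4.0072e10, 3.1517e10)
print(pct)  # ~55.97, matching the "cache hit percentage" row
```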

– V$SQL_WORKAREA_HISTOGRAM (Oracle92, 10g, 11g)

This view shows the number of work areas executed with optimal memory size, one-
pass memory size, and multi-pass memory size since instance start-up. Statistics
in this view are subdivided into buckets that are defined by the optimal memory
requirement of the work area. Each bucket is identified by a range of optimal
memory requirements specified by the values of the columns LOW_OPTIMAL_SIZE and
HIGH_OPTIMAL_SIZE.

Example :

The following query shows statistics for all nonempty buckets.

SELECT LOW_OPTIMAL_SIZE/1024 low_kb,(HIGH_OPTIMAL_SIZE+1)/1024 high_kb,
optimal_executions, onepass_executions, multipasses_executions
FROM   v$sql_workarea_histogram
WHERE  total_executions != 0;

The result of the query might look like the following:

LOW_KB HIGH_KB OPTIMAL_EXECUTIONS ONEPASS_EXECUTIONS MULTIPASSES_EXECUTIONS
------ ------- ------------------ ------------------ ----------------------
8      16      156255             0                   0
16     32      150                0                   0
32     64      89                 0                   0
64     128     13                 0                   0
128    256     60                 0                   0
256    512     8                  0                   0
512    1024    657                0                   0
1024   2048    551                16                  0
2048   4096    538                26                  0
4096   8192    243                28                  0
8192   16384   137                35                  0
16384  32768   45                 107                 0
32768  65536   0                  153                 0
65536  131072  0                  73                  0
131072 262144  0                  44                  0
262144 524288  0                  22                  0

The query result shows that, in the 1024 KB to 2048 KB bucket, 551 work areas used
an optimal amount of memory, while 16 ran in one-pass mode and none ran in
multi-pass mode. It also shows that all work areas under 1 MB were able to run in
optimal mode.

You can also use V$SQL_WORKAREA_HISTOGRAM to find the percentage of times work
areas were executed in optimal, one-pass, or multi-pass mode since start-up.

Example :

SELECT optimal_count, round(optimal_count*100/total, 2) optimal_perc,
onepass_count, round(onepass_count*100/total, 2) onepass_perc,
multipass_count, round(multipass_count*100/total, 2) multipass_perc
FROM
(SELECT decode(sum(total_executions), 0, 1, sum(total_executions)) total,
sum(OPTIMAL_EXECUTIONS) optimal_count,
sum(ONEPASS_EXECUTIONS) onepass_count,
sum(MULTIPASSES_EXECUTIONS) multipass_count
FROM   v$sql_workarea_histogram
WHERE  low_optimal_size > 64*1024);   -- for 64 K optimal size
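The percentage arithmetic this query performs can be checked offline against the earlier sample histogram. A minimal sketch, with the bucket counts transcribed from buckets whose LOW_OPTIMAL_SIZE exceeds 64 KB (variable names are ours):

```python
rows = [  # (optimal, onepass, multipass) per bucket with LOW_OPTIMAL_SIZE > 64 KB
    (60, 0, 0), (8, 0, 0), (657, 0, 0), (551, 16, 0), (538, 26, 0),
    (243, 28, 0), (137, 35, 0), (45, 107, 0), (0, 153, 0), (0, 73, 0),
    (0, 44, 0), (0, 22, 0),
]
optimal = sum(r[0] for r in rows)
onepass = sum(r[1] for r in rows)
multipass = sum(r[2] for r in rows)
total = (optimal + onepass + multipass) or 1  # the DECODE(..., 0, 1, ...) guard
optimal_perc = round(optimal * 100 / total, 2)
onepass_perc = round(onepass * 100 / total, 2)
print(optimal_perc, onepass_perc)  # roughly 82% optimal, 18% one-pass
```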

– V$SQL_WORKAREA_ACTIVE

This view can be used to display the work areas that are active (or executing)
in the instance. Small active sorts (under 64 KB) are excluded from the view.
Use this view to precisely monitor the size of all active work areas and to
determine if these active work areas spill to a temporary segment.

Example :

SELECT to_number(decode(SID, 65535, NULL, SID)) sid,
operation_type OPERATION,trunc(EXPECTED_SIZE/1024) ESIZE,
trunc(ACTUAL_MEM_USED/1024) MEM, trunc(MAX_MEM_USED/1024) “MAX MEM”,
NUMBER_PASSES PASS, trunc(TEMPSEG_SIZE/1024) TSIZE
FROM V$SQL_WORKAREA_ACTIVE
ORDER BY 1,2;

SID OPERATION         ESIZE     MEM       MAX MEM    PASS TSIZE
--- ----------------- --------- --------- --------- ----- -------
8   GROUP BY (SORT)   315       280       904         0
8   HASH-JOIN         2995      2377      2430        1   20000
9   GROUP BY (SORT)   34300     22688     22688       0
11  HASH-JOIN         18044     54482     54482       0
12  HASH-JOIN         18044     11406     21406       1   120000

This output shows that session 12 (column SID) is running a hash-join having its
work area running in one-pass mode (PASS column). This work area is currently
using 11406 KB of memory (MEM column) and has used, in the past, up to 21406 KB
of PGA memory (MAX MEM column). It has also spilled to a temporary segment of
size 120000 KB. Finally, the column ESIZE indicates the maximum amount of memory
that the PGA memory manager expects this hash-join to use. This maximum is dynamically
computed by the PGA memory manager according to workload.

When a work area is deallocated—that is, when the execution of its associated SQL
operator is complete—the work area is automatically removed from the
V$SQL_WORKAREA_ACTIVE view.

– Note: a related support note lists some other queries we use to monitor SQL
execution memory.

3- The third and last step is tuning PGA_AGGREGATE_TARGET. In Oracle 9i
Release 2 there are two new views that help with this task:

– V$PGA_TARGET_ADVICE
– V$PGA_TARGET_ADVICE_HISTOGRAM

By examining these two views, you will be able to determine how key PGA statistics
will be impacted if you change the value of PGA_AGGREGATE_TARGET.

To enable automatic generation of PGA advice performance views, make sure the
following parameters are set:

– PGA_AGGREGATE_TARGET
– STATISTICS_LEVEL. Set this to TYPICAL (the default) or ALL; setting this
parameter to BASIC turns off generation of PGA performance advice views.

The content of these PGA advice performance views is reset at instance start-up
or when PGA_AGGREGATE_TARGET is altered.   NOTE: PGA_AGGREGATE_TARGET can
change automatically over time starting with 11g as part of the Automatic
Memory Management enhancements available at 11g.   See note:443746.1 for more
details.

V$PGA_TARGET_ADVICE view predicts how the statistics cache hit percentage and
over allocation count in V$PGASTAT will be impacted if you change the value of
the initialization parameter PGA_AGGREGATE_TARGET.

The following select statement can be used to find this information

SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
ESTD_OVERALLOC_COUNT
FROM   v$pga_target_advice;

The output of this query might look like the following:

TARGET_MB  CACHE_HIT_PERC ESTD_OVERALLOC_COUNT
---------- -------------- --------------------
63         23             367
125        24             30
250        30             3
375        39             0
500        58             0
600        59             0
700        59             0
800        60             0
900        60             0
1000       61             0
1500       67             0
2000       76             0
3000       83             0
4000       85             0

From the above results we should set the PGA_AGGREGATE_TARGET parameter to a
value where we avoid any over-allocation, so the lowest PGA_AGGREGATE_TARGET
value we can set is 375 MB (where ESTD_OVERALLOC_COUNT is 0).

After eliminating over-allocations, the goal is to maximize the PGA cache hit
percentage, based on your response-time requirement and memory constraints.
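The selection rule just described — take the smallest advised target with a zero ESTD_OVERALLOC_COUNT — can be sketched as follows, using the sample V$PGA_TARGET_ADVICE output above (the tuple layout is ours):

```python
advice = [  # (target_mb, est_cache_hit_pct, est_overalloc_count) from the sample
    (63, 23, 367), (125, 24, 30), (250, 30, 3), (375, 39, 0),
    (500, 58, 0), (600, 59, 0), (700, 59, 0), (800, 60, 0),
    (900, 60, 0), (1000, 61, 0), (1500, 67, 0), (2000, 76, 0),
    (3000, 83, 0), (4000, 85, 0),
]
# Smallest target that predicts no over-allocation:
min_viable_mb = min(t for t, _, over in advice if over == 0)
print(min_viable_mb)  # 375, the floor identified in the text
```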

V$PGA_TARGET_ADVICE_HISTOGRAM view predicts how the statistics displayed
by the performance view V$SQL_WORKAREA_HISTOGRAM will be impacted if you
change the value of the initialization parameter PGA_AGGREGATE_TARGET. You can
use the dynamic view V$PGA_TARGET_ADVICE_HISTOGRAM to view detailed
information on the predicted number of optimal, one-pass and multi-pass work
area executions for the set of PGA_AGGREGATE_TARGET values you use for the
prediction.

Common issues
-------------

1- When we set PGA_AGGREGATE_TARGET and set WORKAREA_SIZE_POLICY to AUTO,
the *_AREA_SIZE parameters are automatically ignored and Oracle automatically
computes values for them.

2- In Oracle 8i and earlier, PGA memory was static: once a process started and
allocated memory for its PGA, it would not release that memory back to the OS
until the process exited or died. The OS, under heavy memory pressure, could
decide to page out unused memory pages belonging to a process PGA to swap
space.

In Oracle 9i, under automatic PGA memory management, Oracle can deallocate
memory from the PGA of a process that is no longer using it so that another
process can use it; it can also adjust the sizes of the different work areas to
accommodate the current workload and the amount of memory available.

3- Using the automatic PGA memory management feature helps limit the resources
used by Oracle processes, and uses them more efficiently.

4- Using automatic PGA memory management also helps reduce the possibility of
ORA-4030 errors, unless an OS limit is hit, because work area sizes are
controlled and adjusted automatically based first on the PGA_AGGREGATE_TARGET
parameter and then on the current workload.

5- If the column ESTD_OVERALLOC_COUNT in the V$PGA_TARGET_ADVICE view is
nonzero, it indicates that PGA_AGGREGATE_TARGET is too small to meet even the
minimum PGA memory needs. If PGA_AGGREGATE_TARGET is set within the
over-allocation zone, the memory manager will over-allocate memory and the
actual PGA memory consumed will exceed the limit you set. It is therefore
meaningless to set PGA_AGGREGATE_TARGET to a value in that zone.

6- Some customers reported that SQL*Loader in Oracle 9i is slower than
SQL*Loader in Oracle 8i; an example of this is bug: which was closed as not a
bug. Using PGA_AGGREGATE_TARGET alleviated the problem.

7- PGA_AGGREGATE_TARGET is not supported on VMS; for more information please
refer to note: "Oracle9i Release Notes Release 1 (9.0.1) for Alpha OpenVMS".
ORA-3113 is returned on instance startup when the init.ora parameter
PGA_AGGREGATE_TARGET is set.

8- Setting PGA_AGGREGATE_TARGET in 9.0.1 on HP-UX 11.0 may panic the OS. For
more information please refer to note: "ALERT HP-UX Patch Levels Advised" and
Bug:2122307.

Known Bugs
----------

– bug:
Details: Attempting to set pga_aggregate_target over 4000Gb should error with
ORA-4032 but no error is signalled.
Fixed-Releases: 9.2.0.1.0

– Bug:2122307 HP System crash when setting PGA_AGGREGATE_TARGET to 10M or more
in Oracle 9.0.1.

This is basically an OS problem that causes the crash. The system call pattern
used by automatic PGA management causes HP-UX to try to extend a fixed region,
which leads to memory allocation failures.

To resolve the bug, both this patch and PHKL_25188 (or later) must be installed.

 

As the DBA you need to get confirmation from your operating system administrator that the amount of memory reported as being in use by a process includes or does not include shared memory. If shared memory is included in the value displayed by the operating system utility, then the shared pool size must be deducted from that value to know how much private memory the process is actually using.

See note 174555.1 “UNIX  Determining the Size of an Oracle Process”.

If an RDBMS user process is using more private memory than expected, then the DBA has three options:

– Do nothing
– Monitor the RDBMS user session to find out what SQL statements are being performed, or were being performed, by that session. The SQL*Trace functionality of the database would normally be used when you cannot find out directly from the end user what they were doing when memory usage was higher than expected, or what they are doing right now.
– Kill that RDBMS user session.

PGA_AGGREGATE_TARGET does not set a hard limit on PGA usage. It is only a target value used to dynamically size the process work areas. It also does not limit other areas of the PGA, which are allowed to grow beyond this target.

There are certain areas of the PGA that cannot be controlled by initialization parameters. Such areas include PL/SQL memory collections such as PL/SQL tables and varrays.

Depending on the program code and the amount of data being handled, these areas can grow very large (up to the 20 GB internal limit in 10g) and can consume large amounts of memory. This growth can be controlled by good programming practices. As an example, use the LIMIT clause with BULK COLLECT.
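The idea behind LIMIT with BULK COLLECT is to process rows in fixed-size batches so that the peak size of any in-memory collection stays bounded. A language-neutral sketch of that pattern (batched() is our own helper, not a database API):

```python
def batched(iterable, limit):
    """Yield lists of at most `limit` items, mimicking BULK COLLECT ... LIMIT."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == limit:
            yield batch
            batch = []
    if batch:               # final partial batch
        yield batch

# Peak memory is bounded by `limit` items regardless of the source size:
sizes = [len(b) for b in batched(range(10), 4)]
print(sizes)  # [4, 4, 2]
```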

Additionally, programming mistakes can also lead to excessive memory usage.

You can take steps to control the size of a process. However, from within the database framework you cannot place a hard limit on the size of a process by setting any initialization parameters or database configuration.

You can limit the size of a process from the OS side by setting kernel limits or user shell limits, but this leads to ORA-4030 errors and causes transaction rollback.

As noted in bug 7279150, “… this is not a hard limit and that we will exceed it when it is undersized and the workload increases rapidly, such as when they start the workload for their testing or when they spawn a new set of sessions from their application servers.”

 

During the execution of SQL statements, server processes consume memory for various operations. Part of this memory is assigned to work areas for operations such as sorts and hash joins. Ideally, a work area should be large enough to support the SQL operation being performed. This size is known as the optimal size of a work area. When a work area is smaller than its optimal size, an extra pass is performed over part of the input data. This is known as the one-pass size of the work area. When the size of a work area is far too small compared to the input data size, multiple passes over the input data are needed. This is known as the multi-pass size of the work area. Operations performed in one-pass and multi-pass work area sizes increase response times, sometimes dramatically in the case of the latter.

You can set the size of the various work areas with individual initialization parameters but the same amount of memory is allocated to each process. So if your parameters are set to avoid any multi-pass operations, processes will be assigned this much memory even for operations that could run in a much smaller optimal size.

With Oracle9i, you can use the PGA_AGGREGATE_TARGET initialization parameter to assign memory that is shared by the server processes active in the instance and is automatically reallocated to the processes that currently need it. In the initial release of Oracle9i, additional information was added to various dynamic performance tables to help you monitor whether you had set a sufficiently high value for the PGA target. However, these statistics do not provide a lot of detail nor give you any guidance for setting a more appropriate value when it is under- or over-sized.

Additional statistics are available in Oracle9i Database Release 2 to help you monitor and tune the PGA_AGGREGATE_TARGET parameter. Some of these statistics are provided through new views and others through modified views. In this section of the lesson, you can find more details about managing your PGA memory with the views listed above.

 

The new statistics to help you monitor the performance of the PGA memory component for a particular value of PGA_AGGREGATE_TARGET are based on two concepts: work areas groups and a computed PGA cache hit percentage value.

By grouping work areas based on their optimal memory requirement, statistics can be collected for each group based on the number of optimal, one-pass, and multi-pass executions for each of these work area groups. With this finer granularity of work area statistics than previously available, you can more accurately predict how much memory is required across the instance to maximize the number of optimal executions.

The PGA cache hit percentage summarizes statistics on work area executions into a single measure of PGA performance for a given value of PGA_AGGREGATE_TARGET. The PGA cache hit percentage is derived from the number of work areas that run optimal, the number of passes for the non-optimal work areas, and the sizes of the work areas.

The new view, v$sql_workarea_histogram, enables you to study the nature of the work area workload over a chosen time period. The work areas are split into 33 groups based on their optimal memory requirements. Each group is defined by the lower bound on its optimal memory requirement, displayed in the low_optimal_size column, and its upper bound, displayed in the high_optimal_size column. For each group, the view accumulates the number of executions using optimal, one-pass, and multi-pass memory since instance start-up. These are displayed in the optimal_executions, onepass_executions, and multipasses_executions columns respectively. The total_executions column contains the sum of the other three execution counts.

To determine if you have set a good value for the PGA target size, query the v$sql_workarea_histogram view using a statement such as the following, which converts the low_optimal_size and high_optimal_size column values to kilobytes:

SQL> SELECT low_optimal_size/1024                          AS low_kb,
2     (high_optimal_size+1)/1024                         AS high_kb,
3     ROUND(100*optimal_executions/total_executions)     AS optimal,
4     ROUND(100*onepass_executions/total_executions)     AS onepass,
5     ROUND(100*multipasses_executions/total_executions) AS multipass
6  FROM v$sql_workarea_histogram
7  WHERE total_executions != 0
8  ORDER by low_kb;

Of course, as with any monitoring effort using dynamic views, you should issue the query at the beginning and at the end of a typical work period and use the differences between the two results to determine what activity occurred during that period.

Although it would be ideal for all work areas to execute in the optimal size, this goal is typically achieved by over-allocating memory to the PGA. If you graph the results of your query as a histogram, you can quickly identify the location in the graph where work groups begin to use one-pass, or even multi-pass, sizes. In the example shown, the former occurs at a work area size of about 200 KB and the latter at about 100 MB. Based on your knowledge of the type of work being done and the current level of performance, you can decide if this is acceptable or if the PGA_AGGREGATE_TARGET parameter value should be changed.

 

If a work area runs in one-pass or multi-pass mode, extra bytes will be processed since one or more extra passes over the input data will be performed. A new statistic, PGA cache hit percentage, condenses into one global numerical metric the relative performance of the PGA memory component. The PGA cache hit percentage is the ratio of the number of bytes that would need to be processed if all work areas ran in optimal mode to the total bytes actually processed. The higher the PGA cache hit percentage, the better the performance of PGA memory and hence of the system.

The PGA cache hit percentage statistic is stored in the v$pgastat view and can be retrieved from the row with the value cache hit percentage in the name column. A sample query is shown above.

The v$sql_workarea and v$sql_workarea_active views have been modified in Oracle9i Database Release 2 to reflect information on the temporary segment an operator (work area) uses.

The v$sort_usage view is renamed to v$tempseg_usage to reflect that information about all temporary segments, not only sort segments, is recorded in this view. In the current release, the old view name is being maintained for backward compatibility.

 

If you monitor the PGA space consumption with the various views provided for that purpose (v$sql_workarea_histogram, v$pgastat, v$sql_workarea, and v$sql_workarea_active), you may decide that you need to reset your PGA_AGGREGATE_TARGET initialization parameter value to make better use of your available memory. In some cases, you may want to reduce the memory allocated; in other cases, you may realize that you should increase the value. To help you determine by what factor you should change the parameter value, you can use two new views, provided in Oracle9i Database Release 2, that provide PGA sizing advice. These views, v$pga_target_advice and v$pga_target_advice_histogram, accumulate advice statistics to help you tune the PGA_AGGREGATE_TARGET value. The views are only populated if PGA_AGGREGATE_TARGET is set to a non-zero value that enables auto management of the PGA. Also, the initialization parameter STATISTICS_LEVEL must be set to TYPICAL or ALL. Further, the view contents are reset when you shut down the instance or when you change the value of PGA_AGGREGATE_TARGET dynamically.

Rows in these two views correspond to setting the PGA_AGGREGATE_TARGET value to 0.125, 0.25, 0.5, 0.75, 1, 1.2, 1.4, 1.6, 1.8, 2, 3, 4, 6, and 8 times its current value. However, if these values are either less than 10MB or more than 256GB, they will not be included in the output.
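The candidate-row generation described above can be sketched as follows. The function name is ours; the 14 multiplication factors and the 10 MB / 256 GB cutoffs come from the text.

```python
FACTORS = [0.125, 0.25, 0.5, 0.75, 1, 1.2, 1.4, 1.6, 1.8, 2, 3, 4, 6, 8]

def advice_candidates_mb(current_target_mb):
    # keep only candidates between 10 MB and 256 GB, per the text
    return [current_target_mb * f for f in FACTORS
            if 10 <= current_target_mb * f <= 256 * 1024]

cands = advice_candidates_mb(500)
print(cands)  # 62.5, 125.0, ... up to 4000.0 -- fourteen rows, as in the sample
```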

To begin monitoring and tuning the PGA target size, you should issue a query against the v$pga_target_advice view similar to:

SQL> SELECT
2    ROUND(pga_target_for_estimate/1024/1024) AS target_mb,
3    estd_pga_cache_hit_percentage    AS cache_hit_percent,
4    estd_overalloc_count
5  FROM v$pga_target_advice
6  ORDER BY target_mb;

For output that is easier to read and use, this query converts the value of the pga_target_for_estimate column from bytes to megabytes. As mentioned earlier, you should issue this query at the beginning and at the end of a typical work period and use the differences between the two results to obtain advice related to the work performed during that period.

The output from the query on v$pga_target_advice contains up to 14 rows that contain estimated statistics based on the multiples of the current PGA_AGGREGATE_TARGET value.

Assuming that your query produced the output shown above with the PGA_AGGREGATE_TARGET value set to 500MB, follow these steps to evaluate the results:

Step 1: Identify the first row with a value of zero in the estd_overalloc_count column. The rows above this one are for PGA_AGGREGATE_TARGET values (shown in the target_mb column) that are too small for the minimum PGA memory needs. In this case, this is the fifth row, which contains data for the current PGA_AGGREGATE_TARGET value, 500MB. Had the target_mb column value in the first row with a zero estd_overalloc_count been larger than the current setting, you should increase the PGA_AGGREGATE_TARGET parameter to at least this size.

Step 2: Examine the rows with PGA values larger than the minimum required to find the first pair of adjacent rows with values in the cache_hit_percent column that are within a few percentage points of each other. These rows indicate where, were you to graph the values, you would see an inflection point (sometimes referred to as a “knee”) in the curve. The optimal value for the PGA_AGGREGATE_TARGET parameter is at this inflection point, represented by the target_mb column value in the first of these two rows. Based on the above results, you should set the PGA_AGGREGATE_TARGET to 3000MB if you have sufficient memory. If you have even more memory available, you should assign it to some other use, such as one of the SGA components, rather than increasing the PGA target size.
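One way to formalize the step-2 "inflection point" rule is to pick the smallest target whose estimated cache hit percentage is already within a few points of the best achievable value. This formalization and the 3-point threshold are our assumptions, not part of the note; the data is the sample advice output with the over-allocated rows removed.

```python
def knee_target(rows, threshold=3):
    """rows: (target_mb, est_cache_hit_pct) pairs with zero over-allocation.
    Return the smallest target within `threshold` points of the best hit
    percentage -- our interpretation of the inflection-point rule."""
    best = max(hit for _, hit in rows)
    return min(t for t, hit in rows if best - hit <= threshold)

advice = [(375, 39), (500, 58), (600, 59), (700, 59), (800, 60),
          (900, 60), (1000, 61), (1500, 67), (2000, 76),
          (3000, 83), (4000, 85)]
print(knee_target(advice))  # 3000, matching the value chosen in the text
```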

 

How to install Oracle OMS (Grid Control) on Windows Vista/2008/7

For exam-preparation reasons I wanted to install Oracle OMS on my local Windows machine for practice. The Grid Control version used in the OCM exam is 10.2.0.1, which only supports Windows systems with internal version number 5; on Vista/2008/7 (version 6 or 6.1) the installation runs into unsolvable problems, most notably that the database bundled with 10.2.0.1 is version 10.1.0.4, which cannot create a database on any Windows platform with version number 6 or above. The Oracle OCM exam requires you to use DBCA to build a clean customized database, with only the necessary views and procedures created (catalog.sql and catproc.sql run) and with the Label Security component (others such as Spatial, OLAP and Oracle JVM are not needed), as shown in the figure:

(Figure: labelsecurity — the Label Security component selected in DBCA)

Oracle Grid Control 10.2.0.5 has been released, and 10.2.0.4 already supports the newest Windows operating systems, but only as patch-install releases with no directly installable full-install version; a direct installation of 10.2.0.1 fails with an error during the OMS configuration stage. We therefore need a different installation method, as described in document ID: 763072.1.

Installing On Windows 2008/Vista

The following sections discuss installing Enterprise Manager 10g Grid Control on Windows 2008/Vista.

Installing Enterprise Manager 10g Grid Control Using A New Database On Windows 2008/Vista

Enterprise Manager 10g Grid Control will not support the New Database install option on Windows 2008/Vista as the Database 10.1.0.4 (which is the default Database for Grid Control) is not supported for Windows 2008/Vista.

Installing Enterprise Manager 10g Grid Control Using an Existing Database On Windows 2008/Vista

To install Enterprise Manager 10g Grid Control using an existing database on Windows 2008/Vista, follow these steps:

  1. Download the fixed OUI 10204 from ARU Microsoft Windows (32-bit) (Bug 6640752) and extract it to a designated location, for example, C:\OUI_Shiphome
  2. Copy Enterprise Manager 10g Grid Control 10.2.0.2.0 base shiphome (Disk1) into a location, for example, C:\EMGC_102020\shiphome
  3. Install the database that supports Vista/2008, such as 10.2.0.3.0/10.2.0.4. Make the following changes in the response file:

    C:\EMGC_102020\shiphome\Disk1\install\response\emnoseed.rsp

    1. Give the absolute path of products.xml for the FROM_LOCATION parameter.

      FROM_LOCATION="C:\EMGC_102020\shiphome\Disk1/oms/Disk1/stage/products.xml"

    2. Give the OMS Oracle Home location. For example:

      ORACLE_HOME=c:\emnoseed\oms10g

    3. Give the Oracle Home name. For example:

      ORACLE_HOME_NAME=oms10g12

    4. Give the Agent Oracle Home location. For example:

      s_agentHome=c:\emnoseed\agent10g

    5. Give the Agent Home Name. For example:

      s_agentHomeName=agent10g12

    Note:

    Ensure that the Oracle Home location and name are not identical to any previous Homes.

  4. Invoke the setup.exe for Install in the following way:

    C:\OUI_Shiphome\cd\Disk1\install\setup.exe -ignoreDiskLabel -responseFile C:\EMGC_102020\shiphome\Disk1\install\response\emnoseed.rsp

  5. If the Agent is not coming up, and if the value of agentTZRegion in emd.properties is GMT then do the following:
    1. Reset TimeZone on Agent box to GMT.
    2. Execute the following command: <Oracle home of Agent>\BIN>emctl resetTZ agent
  6. Patch OMS and Agent to version 10.2.0.5.

Installing An Additional Management Service On Windows 2008/Vista

Follow the below instructions to install an additional Management Service on Windows 2008/Vista.

  1. Download the fixed OUI 10204 from ARU Microsoft Windows (32-bit) (Bug 6640752) and extract it to a designated location, for example, C:\OUI_Shiphome
  2. Copy Enterprise Manager 10g Grid Control 10.2.0.2.0 base shiphome (Disk1) into a location, for example, C:\EMGC_102020\shiphome
  3. Make the following changes in the response file:

    C:\EMGC_102020\shiphome\Disk1\install\response\oms.rsp

    1. Supply the absolute path of products.xml for FROM_LOCATION parameter.

      FROM_LOCATION="C:\EMGC_102020\shiphome/Disk1/oms/Disk1/stage/products.xml"

    2. Supply the OMS Oracle Home location. For example: ORACLE_HOME=C:\OH102020\oms10g
    3. Specify the Oracle Home name. For example, ORACLE_HOME_NAME=oms10g20
    4. Supply the Agent Oracle Home location. For example, s_agentHome=C:\OH102020\agent10g
    5. Supply the Agent Home Name. For example, s_agentHomeName=agent10g20

    Note:

    Ensure that the Oracle Home location and name are not the same as any previous homes.

  4. Invoke the setup.exe for Install in the following way:

    C:\OUI_Shiphome\cd\Disk1\install\setup.exe -ignoreDiskLabel -responseFile

    C:\EMGC_102020\shiphome\Disk1\install\response\oms.rsp

  5. If the Agent does not come up, and if the value of agentTZRegion in emd.properties is GMT, then do the following:
    1. Reset TimeZone on the Agent installation box to GMT
    2. Execute the following command: <Oracle home of Agent>\BIN>emctl resetTZ agent
  6. Patch OMS and Agent to version 10.2.0.5.

Installing An Additional Management Agent On Windows 2008/Vista

Follow the below instructions to install an additional Management Agent on Windows 2008/Vista.

  1. Download the fixed OUI 10204 from ARU Microsoft Windows (32-bit) (Bug 6640752) and extract it to a designated location, for example, C:\OUI_Shiphome
  2. Copy Enterprise Manager 10g Grid Control 10.2.0.2.0 base shiphome (Disk1) into a location, for example, C:\EMGC_102020\shiphome
  3. Make the following changes in the response file:

    C:\EMGC_102020\shiphome\Disk1\install\response\agent.rsp

    1. Supply the absolute path of products.xml for FROM_LOCATION parameter.

      FROM_LOCATION="C:\EMGC_102020\shiphome\Disk1/oms/Disk1/stage/products.xml"

    2. Supply the Agent Oracle Home location. For example, ORACLE_HOME=c:\agt\agent10g
    3. Supply the Oracle Home name. For example, ORACLE_HOME_NAME=agent10g13

    Note:

    Ensure that Oracle Home location and name are not the same as any previous homes.

  4. Invoke the setup.exe for Install in the following way:

    C:\OUI_Shiphome\cd\Disk1\install\setup.exe -ignoreDiskLabel -responseFile

    C:\EMGC_102020\shiphome\Disk1\install\response\agent.rsp

  5. If the Agent is not coming up, and if the value of agentTZRegion in emd.properties is GMT then do the following:
    1. Reset TimeZone on the Agent installation box to GMT.
    2. Execute the following command: <Oracle home of Agent>\BIN>emctl resetTZ agent

  6. Patch the Agent to version 10.2.0.5.
