Initial buffer sizes: read 1024K, overflow 832K, change 805K

#ifdef KCRFR_DEBUG
    ksdwrf("Initial buffer sizes: read %luK, overflow %luK, change %luK\n",
           (unsigned long)fx->krrxread_sz/1024,
           (unsigned long)fx->krrxovf_sz/1024,
           (unsigned long)kcocv_max_read_size/1024);                    /*NF*/

 

 

void kcrfrxini
kcrfrxini: Initialize redo read context. Allocate buffers for read, LWN/record overflow and change vector moves.

When kcrfrxini initializes the redo read context it allocates these buffers and, via ksdwrf, prints "Initial buffer sizes: read 1024K, overflow 832K, change 805K".

Exporting statistics to another table in Oracle


exec dbms_stats.create_stat_table('&OWNER','MY_STATS_TAB');

exec dbms_stats.export_table_stats('&OWNER','&SOURCE_TABNAME',NULL,'MY_STATS_TAB');

exec dbms_stats.import_table_stats('&OWNER','&TARGET_TABNAME', null, 'MY_STATS_TAB');

Verify:
select table_name, num_rows from dba_tables where table_name in ('&SOURCE_TABNAME' ,'&TARGET_TABNAME' );
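As a concrete illustration, a run with hypothetical names (schema SCOTT, source table EMP_SRC, target table EMP_TGT, statistics table MY_STATS_TAB) might look like the following; dbms_stats.drop_stat_table removes the staging table once the copy is done:

exec dbms_stats.create_stat_table('SCOTT','MY_STATS_TAB');
exec dbms_stats.export_table_stats('SCOTT','EMP_SRC',NULL,'MY_STATS_TAB');
exec dbms_stats.import_table_stats('SCOTT','EMP_TGT',NULL,'MY_STATS_TAB');
select table_name, num_rows, last_analyzed from dba_tables where table_name in ('EMP_SRC','EMP_TGT');
exec dbms_stats.drop_stat_table('SCOTT','MY_STATS_TAB');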

 

RMAN statement for registering archived logs

One way of registering a logfile is via the following:

SQL> alter database register logfile
'/var/arch/arch_1_101.arc';

This is a straightforward way of registering a logfile by giving its full path. However, if you have a very large number of logfiles to register, it does not scale all that well. Thankfully, there is an RMAN command that registers all logfiles found under a directory:

rman> catalog start with '/var/arch';

This can save you a lot of time if you need to register a large number of files.
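If only a handful of files need registering from within RMAN itself, CATALOG also accepts an explicit archived log name (reusing the example path above):

rman> catalog archivelog '/var/arch/arch_1_101.arc';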

The resmgr:internal state change wait

resmgr:internal state change ==> the resource manager plan that applies to the session is in the process of being changed.

Hang or similar symptoms when changing resource manager plans during heavy workload activity. One or more sessions block on a wait for "resmgr:internal state change".
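A quick way to see which sessions are currently stuck on this event is a look at V$SESSION (a minimal sketch using standard dynamic views):

select sid, serial#, event, seconds_in_wait
  from v$session
 where event = 'resmgr:internal state change';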

drop index partition ORA-14076

SQL> select partition_name from dba_ind_partitions where index_name='LOCAL_ONE';

PARTITION_NAME
---------------------------------------------------------------------------------------------

EMPLOYEES_PART1
EMPLOYEES_PART2
EMPLOYEES_PART3

SQL> alter index LOCAL_ONE drop partition EMPLOYEES_PART1;
alter index LOCAL_ONE drop partition EMPLOYEES_PART1
            *
ERROR at line 1:
ORA-14076: submitted alter index partition/subpartition operation is not valid for local partitioned index

You cannot explicitly drop a partition from a local index. Local index partitions are dropped only when you drop the corresponding partition from the underlying table. If the goal is simply to disable the index partition, mark it UNUSABLE instead.

Example:

-- alter table make all local partition indexes unusable
ALTER TABLE &v_table_name
MODIFY PARTITION &v_part_name
UNUSABLE LOCAL INDEXES;

This sets the status to UNUSABLE in DBA_IND_PARTITIONS for all local index partitions equipartitioned with &v_part_name.

ALTER SESSION SET skip_unusable_indexes = true;
 
-- load data in the corresponding partitions
INSERT ... VALUES ...
 
-- alter table make partition local index usable
ALTER TABLE &v_table_name
MODIFY PARTITION &v_part_name
REBUILD UNUSABLE LOCAL INDEXES;

The status of the index partitions for &v_part_name will now be USABLE again in DBA_IND_PARTITIONS.
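To confirm the partition status before and after the rebuild, the local index partitions of the table can be checked directly (a sketch using the same substitution variables):

SELECT ip.index_name, ip.partition_name, ip.status
FROM   dba_ind_partitions ip
WHERE  ip.index_name IN (SELECT index_name
                         FROM   dba_part_indexes
                         WHERE  table_name = UPPER('&v_table_name'))
ORDER  BY ip.index_name, ip.partition_name;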

 

clssnmvDiskCheck: Aborting, 0 of 1 configured voting disks available, need 1

The errors reported in cssd.log are as follows:
2013-09-25 08:46:03.739: [    CSSD][2834](:CSSNM00018:)clssnmvDiskCheck: Aborting, 0 of 1 configured voting disks available, need 1
2013-09-25 08:46:03.749: [    CSSD][2834]###################################
2013-09-25 08:46:03.749: [    CSSD][2834]clssscExit: CSSD aborting from thread clssnmvDiskPingMonitorThread
2013-09-25 08:46:03.749: [    CSSD][2834]###################################
2013-09-25 08:46:03.749: [    CSSD][2834](:CSSSC00012:)clssscExit: A fatal error occurred and the CSS daemon is terminating abnormally
2013-09-25 08:46:03.753: [    CSSD][2834]
 
 
This error closely matches Bug 13869978.
 
Bug 13869978 is only triggered when there is a single voting disk. When an ASM diskgroup is used to store the voting files:
external redundancy => 1 voting file
normal   redundancy => 3 voting files
high     redundancy => 5 voting files
With external redundancy there is only one voting file, so the bug can be triggered.
A patch for this bug is currently available for 11.2.0.3.3 on AIX.
Recommendation:
Consider storing the voting files in a normal redundancy diskgroup, or apply patch 13869978.
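To check the current voting file configuration, and to move it to a normal redundancy diskgroup, something along these lines can be run from the Grid Infrastructure home (the diskgroup name +OCRVOTE is a placeholder):

crsctl query css votedisk
crsctl replace votedisk +OCRVOTE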
 
 

Bug 13869978  OCSSD reports that the voting file is offline without reporting the reason

 

Affects:

Product (Component): Oracle Server (PCW)
Range of versions believed to be affected: Versions BELOW 12.1
Versions confirmed as being affected:
  • 11.2.0.3
  • 11.2.0.2
Platforms affected: Generic (all / most platforms affected)

Fixed:

This issue is fixed in
  • 12.1.0.1 (Base Release)
  • 11.2.0.4 (Future Patch Set)
  • 11.2.0.3.4 Grid Infrastructure Patch Set Update (GI PSU)
  • 11.2.0.3 Patch 11 on Windows Platforms

Symptoms:

Related To:

  • (None Specified)
  • Cluster Ready Services / Parallel Server Management

Description

When there is a single voting file, CSSD reports the file offline, but there is no I/O error or hang condition prior to taking the voting file offline:

[    CSSD][29](:CSSNM00018:)clssnmvDiskCheck: Aborting, 0 of 1 configured voting disks available, need 1
[    CSSD][29]###################################
[    CSSD][29]clssscExit: CSSD aborting from thread clssnmvDiskPingMonitorThread
[    CSSD][29]###################################
[    CSSD][29](:CSSSC00012:)clssscExit: A fatal error occurred and the CSS daemon is terminating abnormally

Rediscovery Notes:

If there is no I/O error or I/O hang that caused the voting file to be set offline, then we may be hitting this bug.

Workaround
Use more than 1 voting file.

BUG 13931044 – ORA-600 [13009], [5000], [1], [17]

In Bug 13931044 (ORA-600 [13009], [5000], [1], [17]) the related statement is:
SELECT * FROM PART_BRANCH_PC_BATCH_VIEW2 WHERE PAB_BRA_BRANCH_CODE = :B1 FOR UPDATE
 
That statement also has no nested loop in its plan, yet the problem was worked around with "_nlj_batching_enabled"=false, so it is still worth trying "_nlj_batching_enabled"=false as a workaround first.
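A minimal sketch of applying that workaround at the session or instance level (the hidden parameter takes 0/false; as with any underscore parameter, test it first and ideally set it only under Oracle Support guidance):

alter session set "_nlj_batching_enabled" = 0;
-- or instance-wide:
alter system set "_nlj_batching_enabled" = 0 scope=both;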
DIAGNOSTIC ANALYSIS
===================
The same batch job fails on several databases but does not reproduce at
will. The databases are not clones of each other but are running the same
batch job.
The problem started after the database was upgraded from 10.2 to 11.2 via
datapump.
Underlying tables have been analyzed without problems.
The patch for bug 13117043 was applied but the problem continued.
Running with "_nlj_batching_enabled"=0 avoids the error, but testing is still
underway to see if this can be used as a temporary workaround.
The query plan doesn't change if "_connect_by_use_union_all"=old_plan_mode
is set.
WORKAROUND?
===========
Yes
TECHNICAL IMPACT
================
Customer runs the same batch on multiple databases which includes select for
update statement – the batch fails on some of the databases
SELECT * FROM PART_BRANCH_PC_BATCH_VIEW2 WHERE PAB_BRA_BRANCH_CODE = :B1 FOR
UPDATE
RELATED ISSUES (bugs, forums, RFAs)
===================================
Select For Update Fails With Ora-600 [13009] Error On A Table With A
Recently Added Column (Doc ID 863779.1) –> solved in 11.2 and CT doesn’t
want to recreate the tables involved in the view called in the failing SQL
 

 

Flashing back a database to a restore point in Oracle

CREATE RESTORE POINT before_clean GUARANTEE FLASHBACK DATABASE;
 
==> Make sure the flashback recovery area has enough disk space.
 
On a standby database, remember to stop redo transport before performing the operation:
 
alter database recover managed standby database finish force;
 
alter database open;
 
The operation itself:
 
shutdown immediate;
startup mount;
 
flashback database to RESTORE POINT before_clean;
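After the flashback completes the database has to be opened with RESETLOGS, and the guaranteed restore point can be dropped once it is no longer needed; the restore point and its space usage can be checked in V$RESTORE_POINT beforehand (a minimal sketch):

select name, guarantee_flashback_database, storage_size from v$restore_point;
 
alter database open resetlogs;
 
drop restore point before_clean;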

Deliberately creating physical block corruption in Oracle for testing purposes

SQL> create table maclean_corrupt tablespace users as select * from dba_tables;
 
Table created.
 
SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from maclean_corrupt where rownum<10;
 
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
------------------------------------ ------------------------------------
                              382307                                    6
                              382307                                    6
                              382307                                    6
                              382307                                    6
                              382307                                    6
                              382307                                    6
                              382307                                    6
                              382307                                    6
                              382307                                    6
 
9 rows selected.
 
 
 
SQL> alter system checkpoint;
 
System altered.
 
 
C:\Users\xiangbli>rman target /
 
Recovery Manager: Release 12.1.0.1.0 - Production on Sun Sep 22 09:23:50 2013
 
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
 
connected to target database: MACLEAN (DBID=1694338843)
 
RMAN> recover datafile 6 block 382307 clear;
 
Starting recover at 22-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=254 device type=DISK
Finished recover at 22-SEP-13
 
 
 
SQL> alter system flush buffer_cache;
 
System altered.
 
SQL> select count(*) from maclean_corrupt;
select count(*) from maclean_corrupt
       *
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 6, block # 382307)
ORA-01110: data file 6: 'C:\APP\XIANGBLI\ORADATA\MACLEAN\USERS01.DBF'
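To confirm which blocks are now flagged as corrupt, and to repair them from a backup once testing is finished, RMAN validation and block media recovery can be used (a sketch; a valid backup of datafile 6 is assumed for the recovery step):

RMAN> validate datafile 6;
 
SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;
 
RMAN> recover datafile 6 block 382307;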

Oracle ASM ACFS installation failure

 

[client(10813644)]CRS-10001:29-Sep-13 14:07 ACFS-9200: Supported
[client(7798872)]CRS-10001:29-Sep-13 14:07 ACFS-9300: ADVM/ACFS distribution files found.
[client(7798880)]CRS-10001:29-Sep-13 14:07 ACFS-9312: Existing ADVM/ACFS installation detected.
[client(7798888)]CRS-10001:29-Sep-13 14:07 ACFS-9314: Removing previous ADVM/ACFS installation.
[client(7798896)]CRS-10001:29-Sep-13 14:07 ACFS-9361: Removing device 'acfsctl' failed with error code '5888'.
[client(7798898)]CRS-10001:29-Sep-13 14:07 ACFS-9307: Installing requested ADVM/ACFS software.
[client(7143672)]CRS-10001:29-Sep-13 14:07 ACFS-9308: Loading installed ADVM/ACFS drivers.
[client(7143678)]CRS-10001:29-Sep-13 14:07 ACFS-9154: Loading 'oracleadvm.ext' driver.
[client(7143430)]CRS-10001:29-Sep-13 14:07 ACFS-9154: Loading 'oracleacfs.ext' driver.
[client(7143438)]CRS-10001:29-Sep-13 14:07 ACFS-9109: oracleacfs.ext driver failed to load.
[client(7143440)]CRS-10001:29-Sep-13 14:07 ACFS-9310: ADVM/ACFS installation failed.
[client(7143442)]CRS-10001:29-Sep-13 14:07 ACFS-9311: not all components were detected after the installation.
 
 
 
The ACFS installation failure above closely matches the description in ACFS-9109: [oracleacfs.ext driver failed to load] Error During 11.2.0.3 RAC Grid Infrastructure Installation (On AIX) (Doc ID 1541476.1).
 
Please collect the output of ls -l /dev/ for further diagnosis.
 
 

1) A brand new 11.2.0.3 RAC Grid Infrastructure Installation (on AIX) reports the following errors during the "root.sh" script execution:

[client(7274562)]CRS-10001:23-Mar-13 01:14 ACFS-9314: Removing previous ADVM/ACFS installation.
[client(7274570)]CRS-10001:23-Mar-13 01:14 ACFS-9361: Removing device 'acfsctl' failed with error code '5888'.
[client(7274572)]CRS-10001:23-Mar-13 01:14 ACFS-9307: Installing requested ADVM/ACFS software.
[client(9568274)]CRS-10001:23-Mar-13 01:14 ACFS-9308: Loading installed ADVM/ACFS drivers.
[client(10551446)]CRS-10001:23-Mar-13 01:14 ACFS-9109: oracleacfs.ext driver failed to load.
[client(10551448)]CRS-10001:23-Mar-13 01:14 ACFS-9310: ADVM/ACFS installation failed.
[client(10551450)]CRS-10001:23-Mar-13 01:14 ACFS-9311: not all components were detected after the installation.
[client(7274634)]CRS-10001:23-Mar-13 01:21 ACFS-9300: ADVM/ACFS distribution files found.
[client(7274646)]CRS-10001:23-Mar-13 01:21 ACFS-9307: Installing requested ADVM/ACFS software.
[client(7274660)]CRS-10001:23-Mar-13 01:21 ACFS-9308: Loading installed ADVM/ACFS drivers.
[client(7274678)]CRS-10001:23-Mar-13 01:21 ACFS-9109: oracleacfs.ext driver failed to load.
[client(7274680)]CRS-10001:23-Mar-13 01:21 ACFS-9310: ADVM/ACFS installation failed.
[client(7274682)]CRS-10001:23-Mar-13 01:21 ACFS-9311: not all components were detected after the installation.

 
2) An attempt to manually install the ACFS/ADVM layer isolated the problem and reported the following errors (ACFS-11053):

asmhost1[/u01/grid/bin]#./acfsroot install
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
This may take several minutes. Please wait …
(udefacfsctl): ACFS-11053: failed to release major number for device ofsctl
(udefacfsctl): ACFS-11055: failed to remove device special file /dev/ofsctl, errno 2 (No such file or directory)
ACFS-9361: Removing device 'acfsctl' failed with error code '14848'.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
(cfgacfsctl): ACFS-11022: failed to configure device ofsctl, errno 22 (Invalid argument)
(cfgacfsctl): ACFS-11041: trying to clean up after encountering an error
ACFS-9109: oracleacfs.ext driver failed to load.
ACFS-9310: ADVM/ACFS installation failed.
ACFS-9311: not all components were detected after the installation.

 

CAUSE

 

The "ACFS-11053: failed to release major number for device ofsctl" error indicates a conflict between the "/dev/ofsctl" and "/dev/dlmadrv" devices, since both have the same major and minor numbers:

 

# cd  /dev

# ls -l dlmadrv  

crw-------    1 root     system       37,  0 Feb  1 14:34 dlmadrv  
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# ls -l ofsctl

crw-------    1 root     system       37,  0 Mar  1 10:15 ofsctl
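To spot this kind of clash quickly, the character devices in /dev can be filtered by major number; anything else reporting major 37 is a candidate conflict (a sketch in plain shell, matching the major number seen above):

# list all character devices in /dev that use major number 37
ls -l /dev | grep "^c" | awk '$5 == "37,"'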

