【ASM】The asmca GUI tool and /etc/oratab

After starting the 11g asmca GUI tool, you may hit a problem: it fails to recognize an ASM instance that is already running and prompts to ask whether the ASM instance should be started.

The asmca logs are located under $ORACLE_BASE/cfgtoollogs/asmca/*.

The log shows:

 

[main] [ 2013-09-26 15:53:29.906 GMT+08:00 ] [InventoryUtil.getOUIInvSession:336]  oracleHome is null. Leaving OUI properties to defaults
[Finalizer thread] [ 2013-09-26 15:53:29.912 GMT+08:00 ] [Util.finalize:126]  Util: finalized called for oracle.ops.mgmt.has.Util@32523252
[main] [ 2013-09-26 15:53:29.917 GMT+08:00 ] [InventoryUtil.getOUIInvSession:347]  setting OUI READ level to ACCESSLEVEL_READ_LOCKLESS
[main] [ 2013-09-26 15:53:29.917 GMT+08:00 ] [OracleHome.getVersion:957]  Current Version From Inventory: null
[main] [ 2013-09-26 15:53:29.918 GMT+08:00 ] [SQLPlusEngine.getCmmdParams:222]  m_home null
[main] [ 2013-09-26 15:53:29.918 GMT+08:00 ] [SQLPlusEngine.getCmmdParams:223]  version > 112 false
[main] [ 2013-09-26 15:53:29.918 GMT+08:00 ] [SQLEngine.getEnvParams:555]  Default NLS_LANG: AMERICAN_AMERICA.AL32UTF8
[main] [ 2013-09-26 15:53:29.918 GMT+08:00 ] [SQLEngine.getEnvParams:565]  NLS_LANG: AMERICAN_AMERICA.AL32UTF8
[main] [ 2013-09-26 15:53:29.919 GMT+08:00 ] [SQLEngine.initialize:325]  Execing SQLPLUS/SVRMGR process...
[main] [ 2013-09-26 15:53:29.920 GMT+08:00 ] [UsmcaLogger.logException:173]  SEVERE:method oracle.sysman.assistants.usmca.backend.USMInstance:checkAndStartupInstance
[main] [ 2013-09-26 15:53:29.920 GMT+08:00 ] [UsmcaLogger.logException:174]  There is an error in creating the following process:
null/bin/sqlplus -S /NOLOG 
The error is:
null/bin/sqlplus: not found
[main] [ 2013-09-26 15:53:29.920 GMT+08:00 ] [UsmcaLogger.logException:175]  java.io.IOException: There is an error in creating the following process:
null/bin/sqlplus -S /NOLOG 
The error is:
null/bin/sqlplus: not found

 

 

Both "oracleHome is null" and "null/bin/sqlplus: not found" are suspicious, pointing to a problem with the ORACLE_HOME/ORACLE_BASE/oraInventory information. It was then found that /etc was missing both oraInst.loc and oratab.

 

Recreating oratab and filling in the ASM instance name and ORACLE_HOME resolved the problem. Truly, no detail can be overlooked.
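For reference, a minimal sketch of the /etc/oratab entry that fixes this (the instance name and Grid home path below are assumptions; substitute your own):

# /etc/oratab format: <instance_name>:<ORACLE_HOME>:<Y|N>
# The trailing Y/N flag only controls whether dbstart/dbshut manage the entry at boot
+ASM:/u01/app/11.2.0/grid:N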

ASM Limits in Oracle Database 11g

ASM enforces the following limits:

  • 63 disk groups per storage system
  • 10,000 ASM disks per storage system
  • A maximum of 4 PB of storage per ASM disk
  • A maximum of 40 EB of storage per storage system
  • 1 million files per disk group
  • A maximum file size that depends on the redundancy type of the disk group: 140 PB for external redundancy (currently larger than any possible database file size), 42 PB for normal redundancy, and 15 PB for high redundancy.

Note: In Oracle Database 10g, the maximum ASM file size for external redundancy was 35 TB.

 

–Variable size extents

  • Extent size grows automatically with file size

–Benefits

  • Increases the maximum ASM file size
  • Reduces memory utilization in the SGA

–100% automatic

–Note: the RDBMS limits file size to 128 TB

 


Learn more about Oracle ASM md_backup and md_restore

The md_backup command creates a backup file containing metadata for one or more disk groups.
Volume and Oracle Automatic Storage Management Cluster File System (Oracle ACFS) file system
information is not backed up.

 

Synopsis
md_backup <backup_file> [-G <diskgroups,…>]
Description
The options for the md_backup command are described below.
backup_file – Specifies the backup file in which you want to store the metadata.
-G diskgroup – Specifies the name of the disk group to be backed up.
By default, all mounted disk groups are included in the backup file, which is saved in the current
working directory.
Examples
The first example shows the backup command run without the disk group option. It backs up all of the
mounted disk groups and creates the backup image in the /scratch/backup/alldgs20100422 file. The second
example creates a backup of the DATA disk group; the backup that this example creates is saved in the
/scratch/backup/data20100422 file.
ASMCMD [+] > md_backup /scratch/backup/alldgs20100422
Disk group metadata to be backed up: DATA
Disk group metadata to be backed up: FRA
Current alias directory path: ORCL/ONLINELOG
Current alias directory path: ORCL/PARAMETERFILE
Current alias directory path: ORCL
Current alias directory path: ASM
Current alias directory path: ASM/ASMPARAMETERFILE
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/TEMPFILE
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL/ARCHIVELOG/2009_07_13
Current alias directory path: ORCL/BACKUPSET/2009_07_14
Current alias directory path: ORCL/ARCHIVELOG/2009_07_14
Current alias directory path: ORCL
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/ARCHIVELOG
Current alias directory path: ORCL/BACKUPSET
Current alias directory path: ORCL/ONLINELOG
ASMCMD [+] > md_backup /scratch/backup/data20100422 -G DATA
Disk group metadata to be backed up: DATA
Current alias directory path: ASM/ASMPARAMETERFILE
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/TEMPFILE
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL/PARAMETERFILE
Current alias directory path: ASM
Current alias directory path: ORCL
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/ARCHIVELOG
Current alias directory path: ORCL/BACKUPSET
Current alias directory path: ORCL/ONLINELOG

 

Backing Up ASM Disk Group Metadata

Use the ASMCMD md_backup command to create a backup file containing metadata for one or more disk groups.
In the event of a loss of the ASM disk group, the backup file is used to reconstruct the disk group and its metadata.
Without the metadata backup file, the disk group must be manually re-created in the event of a loss.
Backing up all mounted disk groups

ASMCMD> md_backup /backup/asm_metadata

Backing up the DATA disk group:

ASMCMD> md_backup /backup/asm_metadata -G data

You can use the ASMCMD md_backup command to create a backup file of ASM disk group metadata. This backup file can be used to reconstruct the ASM disk group and its metadata if the disk group is lost. Without this metadata backup file, you must manually re-create the ASM disk group in the event of a loss of the disk group.
As shown in the first example in the slide, you can use the md_backup command to back up the metadata for all mounted groups. By using the -G option, you can name specific disk groups to be backed up.
If you do not specify a full path for the backup file, it is saved in the current working directory.

 

ASMCMD Extensions

  • ASMCMD has been extended with ASM metadata backup and restore capability. This makes it possible to re-create a previously existing ASM disk group with exactly the same templates and alias directory structure. Previously, if an ASM disk group was lost, RMAN could restore the lost files, but the disk group itself and any required user directories or templates had to be re-created by hand.
    ASM Metadata Backup and Restore (AMBR) operates in two modes:

- In backup mode, AMBR parses the ASM fixed tables and views to gather information about the existing disk and failure group configuration, templates, and alias directory structure, and then dumps this metadata to a text file.

- In restore mode, AMBR reads a previously generated file to rebuild the disk group and its metadata. Restore-mode behavior can be controlled to perform a full, nodg, or newdg restore; the three submodes differ in whether the disk group is created and whether its attributes are changed.

  • The lsdsk command lists ASM disk information. It can run in two modes:

- In connected mode, ASMCMD uses the V$ and GV$ views to retrieve disk information.

- In non-connected mode, ASMCMD scans disk headers to retrieve disk information, using an ASM disk string to restrict the set of disks scanned. Connected mode is always the preferred option.
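A hedged sketch of the two lsdsk modes (the disk group name and discovery pattern are assumptions; the -I flag forces the non-connected, header-scan mode):

ASMCMD> lsdsk -k -G DATA                       # connected mode: reads V$/GV$ views
ASMCMD> lsdsk -I '/dev/oracleasm/disks/*'      # non-connected mode: scans disk headers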

 

 

ASMCMD Extensions (continued)

  • The cp command copies files between ASM disk groups, on the local instance and on remote instances. One possible use:
    cp +DATA/ORCL/DATAFILE/TBSJFV.256.629730771 +DATA/ORCL/tbsjfv.bak
    The example above copies an existing file locally. A connect string can also be specified to copy the file to a remote ASM disk group. The format of the copied file is portable between little-endian and big-endian systems. The cp command can likewise copy an ASM file out to your operating system, for example:
    cp +DATA/ORCL/DATAFILE/TBSJFV.256.629730771 /home/oracle/tbsjfv.dbf
    And it can copy a file from your operating system into an ASM directory, for example:
    cp /home/oracle/tbsjfv.dbf +data/jfv
    To copy an ASM file from the local ASM instance to a remote one, use the following syntax:
    cp +DATA/orcl/datafile/tbsjfv.256.629989893 sys@edcdr12p1.+ASM2:+D2/jfv/tbsjfv.dbf

Note: For details on the syntax of each of these commands, see the Oracle Database Storage Administrator's Guide.

ASMCMD extensions

ASMCMD> md_backup -b jfv_backup_file -g data

Disk group to be backed up: DATA#

Current alias directory path: jfv

ASMCMD>

 

Accidentally dropping the disk group

 

ASMCMD> md_restore -b jfv_backup_file -t full -g data

Disk group to be restored: DATA#

ASMCMDAMBR-09358, Option -t newdg specified without any override options.

Current Diskgroup being restored: DATA

Diskgroup DATA created!

User Alias directory +DATA/jfv created!

ASMCMD>

 

Restoring disk group files with RMAN

 

ASMCMD Extensions: Example

This example shows how to back up ASM metadata with the md_backup command and how to restore it with the md_restore command.

The first statement specifies the -b and -g options of the command. These define the name of the generated file containing the backup information and the disk group to be backed up (jfv_backup_file and data, respectively, in the slide).

In step 2, assume a problem occurs in the DATA disk group, and the disk group is therefore dropped. Before the database files it contained can be restored, the disk group itself must be restored.

In step 3, the md_restore command starts the re-creation of the disk group and the restoration of its metadata. This step specifies the name of the backup file generated in step 1, the name of the disk group to restore, and the type of restore required. In this example, a full restore is performed because the disk group no longer exists.

Once the disk group has been re-created, its database files can be restored with a tool such as RMAN.
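As a hedged sketch of that final step (it assumes usable RMAN backups and a restored SPFILE/control file; this is illustrative, not the full restore procedure):

RMAN> restore database;
RMAN> recover database;
RMAN> alter database open;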

 

【Oracle ASM】Dropped ASM disks/disk groups may still be held by processes

In Oracle, it is generally assumed that once an ASM disk or disk group is dropped or dismounted, all related processes release the file descriptors (disk descriptors) they hold on those ASM disks. In real operations, however, it is common to find that processes still hold these disk resources after a drop disk/diskgroup.

 

The problem is mainly caused by a number of Oracle ASM bugs, including:

Bug 11666137  ASM dismounted disks are still held by background processes for long time

Bug 7225720 – ASM does not close open descriptors (Doc ID 7225720.8)

Bug:11785938 – ASM 11.2.0.2 IS NOT RELEASING FILE DESCRIPTORS AFTER DROP DISKGROUP

Although these bugs are all claimed to be fixed in 11.2.0.2, the problem can still be hit on 11.2.0.3.

See also the following document:

ASM 11.2.0.2 Is Not Releasing File Descriptors After Drop or Dismount Diskgroup. (Doc ID 1306574.1)

If the processes that fail to release the resources are foreground processes, the problem can be worked around by killing them; if they are critical background processes, the only option is to wait for them to release the disk descriptors on their own.
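To see which processes still hold the dropped disk, a minimal sketch (the ASMLib device path is an assumption for illustration):

# list processes holding open file descriptors on the dropped ASM disk
lsof /dev/oracleasm/disks/DISK007
# or identify the holders with fuser
fuser -v /dev/oracleasm/disks/DISK007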

An in-depth discussion of Oracle ASM

 

  • ASM was recently adopted in a very important production system
  • This time we want to share the advantages we learned from that system, in the hope that OJ staff will come to appreciate the strengths of ASM
  • ASM is a required component of Exadata

 

  • Dispel the common doubts and anxieties surrounding ASM

–So that everyone can recommend ASM to customers with more confidence

  • Understand ASM in depth
  • Simplify the system architecture by using ASM

 

From the questionnaire distributed in advance

 


 

ASM adoption worldwide

Storage used by RAC customers


 

 

Although ASM is said to be convenient…

The room was full of doubts and anxieties:

  • Won't the design work be a hassle?
  • Won't it make failures more likely?
  • Won't striping through ASM degrade single-instance access performance?
  • Compared with ASM's mirroring, isn't hardware RAID more dependable?
  • After repeated rebalancing and tablespace creation, won't the data end up unevenly distributed?
  • Wouldn't virtualized storage (thin provisioning) be easier to manage?
  • Wouldn't tiered storage with automatic tuning be more convenient to use?
  • Is it a bad idea to freely use storage copies for backup?
  • Are there quality problems?

 


 

What are the Best Practices?

  • Disk group design (granularity / number of groups / dependencies)
  • AU sizing (cases where 4 MB is effective)
  • Recommended value for asm_power_limit?
  • Considerations from the storage point of view
  • Combinations with third-party clusterware (non-RAC environments on 11g R2)
  • Do the design points increase?
  • What about the availability of the ASM instance itself?

What should be responsible for what?

  • Is combining ASM with storage-based backup recommended?

–Storage Copy? / RMAN?

–Differences, advantages, disadvantages, and points to watch

  • Where should redundancy live? (In hardware RAID, or in ASM?)
  • Is ASM's software RAID less trustworthy than hardware RAID?
  • Best practices for combining the two
  • Combining ASM with storage virtualization features (thin provisioning)

Would the current environment have to change in the following ways?

 

  • We would have to change the way we have worked until now
  • We would need to learn new procedures
  • A cluster file system would be better
  • We want to keep using RAW devices (and avoid releases that no longer support RAW)
  • When replacing disks, we cannot perform / do not know the ASM procedures (configuration, verification).

– (disk replacement staff / storage administrators / non-Oracle engineers)

  • "Storage that has to be designed, built, and operated solely for the database makes using ASM inefficient. Considered from the standpoint of the overall IT storage plan, it is better not to use ASM."

 

Quality

  • Many features were added in 11g R2; is the quality really all right?
  • Does the newest release have more bugs?

 

[On quality] Recommendations after adopting ASM

These may be obvious points, but…

 

  • Apply the recommended patches regularly

–Fixes for failures that corrupt an ASM diskgroup are included in the recommended patches, so apply them proactively

–If you intend to apply the recommended patches regularly, consider the following items:

  • How much redundancy is appropriate?
  • Consider whether a Data Guard switchover is needed

–If the patching schedule is not reconsidered, applying a patch can turn into a far bigger event than expected

  • Use a recent release

–The newer the release, the better the quality assurance and the better the handling of fatal failures.
For example, 11g keeps a copy of the ASM disk header and has a mechanism for taking a diskgroup dump.
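As a hedged illustration of that 11g header copy (the device path is an assumption; always take a dd backup of the disk before writing anything):

# inspect the current header, then restore it from the backup copy 11g maintains
kfed read /dev/oracleasm/disks/DISK007 | head
kfed repair /dev/oracleasm/disks/DISK007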

 

Performance

In the end, should redundancy be done in the storage array or in ASM?

  • How does ASM diskgroup redundancy affect performance (normal = two-way, high = three-way mirroring)?
  • Which performs better?

–Storage array / ASM

–Random reads / sequential reads

 

  • Matters related to ASMLib
  • The redundancy type of an ASM diskgroup cannot be changed afterwards
  • What kind of storage should users choose?

–If the database uses inexpensive storage, ASM is the better fit, but it is unlikely that an enterprise's entire storage estate consists of inexpensive hardware.

–When non-database data sits on inexpensive storage, can ACFS be used?

–Is ZFS an option?

 

  • No coordination with TTS (transportable tablespaces)
  • Does the implementation rule out split-mirror and replication techniques?
  • Can it be used in a stretched cluster (Extended RAC)?
  • What are the procedures when a physical failure occurs?
  • ASM is often misunderstood as adding a layer to every I/O (and therefore as a source of overhead).
  • ASM free space is hard to monitor; how does everyone handle this? (See the query sketch below.)
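For the free-space question above, a minimal sketch run from the ASM instance (any alert threshold is your own choice):

-- disk group capacity and free space as seen by ASM
SELECT name, state, type, total_mb, free_mb,
       ROUND(free_mb / total_mb * 100, 1) AS pct_free
  FROM v$asm_diskgroup;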

 

Summary

I hope Oracle as a whole develops a shared understanding of using ASM

  • To resolve the common doubts and anxieties, take the time to understand ASM properly
  • Adopting ASM often means changing how you work, but the simpler architecture brings many advantages

–Physical design, performance tuning, operations management

  • Accepting ASM = accepting Exadata
  • If doubts and anxieties remain, consult the Storage Initiative, the core technology department, or the presenters

 

【Maclean Liu Tech Share】An In-Depth Look at Oracle ASM (Part 1): Basic Concepts

This article is aimed at readers who are new to ASM and whose grasp of the basic concepts is still hazy.

 

Download link:

【Maclean Liu Tech Share】An In-Depth Look at Oracle ASM (Part 1): Basic Concepts

Oracle ASM CORRUPTED AT BLOCKS: ORA-15196: INVALID ASM BLOCK HEADER

If you cannot recover the data by yourself, ask Parnassusdata, the professional ORACLE database recovery team, for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 


1) The +ASM2 instance reported corrupted blocks during an add disk operation on
the DG01 diskgroup, and the diskgroup was therefore dismounted:
======================================================

Thu Oct 04 19:32:32 2012
SUCCESS: ALTER DISKGROUP DG01 ADD DISK 'ORCL:DISK024' SIZE 100821 M
,'ORCL:DISK025' SIZE 100821 M
NOTE: starting rebalance of group 3/0xf8194f61 (DG01) at power 5

Starting background process ARB0
NOTE: assigning ARB3 to group 3/0xf8194f61 (DG01)
NOTE: assigning ARB4 to group 3/0xf8194f61 (DG01)
Thu Oct 04 19:36:43 2012
WARNNING: cache read a corrupted block group=DG01 dsk=6 blk=40 from disk 6
NOTE: a corrupted block from group DG01 was dumped to
/orabase/diag/asm/+asm/+ASM2/trace/+ASM2_arb1_11708.trc
WARNNING: cache read(retry) a corrupted block group=DG01 dsk=6 blk=40 from
disk 6
ERROR: cache failed to read group=DG01 dsk=6 blk=40 from disk(s): 6 DISK007
ORA-15196: invalid ASM block header [kfc.c:23908] [check_kfbh] [2147483654]
[40] [2202114410 != 1169765350]
ORA-15196: invalid ASM block header [kfc.c:23908] [check_kfbh] [2147483654]
[40] [2202114410 != 1169765350]
System State dumped to trace file
/orabase/diag/asm/+asm/+ASM2/trace/+ASM2_arb1_11708.trc
NOTE: failed to create amdu dump with error -1
NOTE: cache initiating offline of disk 6 group DG01
NOTE: process 11708 initiating offline of disk 6.3916021716 (DISK007) with
mask 0x7e in group 3
Thu Oct 04 19:36:43 2012
WARNING: Disk DISK007 in mode 0x7f is now being offlined
NOTE: initiating PST update: grp = 3, dsk = 6/0xe969bfd4, mode = 0x15
kfdp_updateDsk(): 22
Thu Oct 04 19:36:43 2012
kfdp_updateDskBg(): 22
ERROR: too many offline disks in PST (grp 3)
WARNING: Disk DISK007 in mode 0x7f offline aborted
Thu Oct 04 19:36:43 2012
NOTE: active pin 0x0x6b834e68 found in ARB1
ERROR: ORA-15130 thrown in ARB1 for group number 3
Thu Oct 04 19:36:43 2012
ERROR: ORA-15130 thrown in ARB2 for group number 3
Errors in file /orabase/diag/asm/+asm/+ASM2/trace/+ASM2_arb1_11708.trc:
ORA-15130: diskgroup "DG01" is being dismounted
ORA-15066: offlining disk "DISK007" may result in a data loss
Errors in file /orabase/diag/asm/+asm/+ASM2/trace/+ASM2_arb2_11710.trc:
ORA-15130: diskgroup "DG01" is being dismounted.

2) Affected disk is:
======================================================
WARNNING: cache read a corrupted block group=DG01 dsk=6 blk=40 from disk 6
======================================================
NOTE: offline of disk(s) signalled ORA-15130
ORA-15130: diskgroup "DG01" is being dismounted
ORA-15066: offlining disk "DISK007" may result in a data loss

======================================================
=)> disk 6 of grp 3: DISK007 label:DISK007
======================================================

3) This is an 11.2.0.1.0 ASM RAC configuration.
4) This problem occurred at "Thu Oct 04 19:36:43 2012"

======================================================
Thu Oct 04 19:36:43 2012
WARNNING: cache read a corrupted block group=DG01 dsk=6 blk=40 from disk 6
NOTE: a corrupted block from group DG01 was dumped to
/orabase/diag/asm/+asm/+ASM2/trace/+ASM2_arb1_11708.trc
WARNNING: cache read(retry) a corrupted block group=DG01 dsk=6 blk=40 from
disk 6
ERROR: cache failed to read group=DG01 dsk=6 blk=40 from disk(s): 6 DISK007
ORA-15196: invalid ASM block header [kfc.c:23908] [check_kfbh] [2147483654]
[40] [2202114410 != 1169765350]
ORA-15196: invalid ASM block header [kfc.c:23908] [check_kfbh] [2147483654]
[40] [2202114410 != 1169765350]
======================================================

5) AMDU dump reports 2 AT blocks and 2 ASM metadata blocks as corrupted:
=====================================================

******************************* AMDU Settings
********************************
ORACLE_HOME = /oragridbase/product/11.2.0/grid
System name: Linux
Node name: ausu596a
Release: 2.6.18-194.17.1.el5
Version: #1 SMP Mon Sep 20 07:12:06 EDT 2010
Machine: x86_64
amdu run: 04-OCT-12 23:53:23
Endianess: 1
--------------------------------- Operations
---------------------------------
-dump DG01
------------------------------- Disk Selection
-------------------------------
-diskstring '/dev/oracleasm/disks/*'
------------------------------ Reading Control
-------------------------------
------------------------------- Output Control
-------------------------------
********************************* DISCOVERY
**********************************
---------------------------- SCANNING DISK N0023
-----------------------------
Disk N0023: '/dev/oracleasm/disks/DISK007'
AMDU-00209: Corrupt block found: Disk N0023 AU [0] block [40] type [0]
AMDU-00201: Disk N0023: '/dev/oracleasm/disks/DISK007'
AMDU-00209: Corrupt block found: Disk N0023 AU [0] block [41] type [0]
AMDU-00201: Disk N0023: '/dev/oracleasm/disks/DISK007'
AMDU-00209: Corrupt block found: Disk N0023 AU [0] block [40] type [3]
AMDU-00201: Disk N0023: '/dev/oracleasm/disks/DISK007'


** UNABLE TO SCAN AU 17024 THROUGH 17471 **
AMDU-00209: Corrupt block found: Disk N0023 AU [0] block [41] type [3]
AMDU-00201: Disk N0023: '/dev/oracleasm/disks/DISK007'
** UNABLE TO SCAN AU 17024 THROUGH 17471 **
Allocated AU's: 94948
Free AU's: 5873
AU's read for dump: 11
Block images saved: 1655
Map lines written: 11
Heartbeats seen: 0
Corrupt metadata blocks: 2
Corrupt AT blocks: 2
---------------------------- SCANNING DISK N0005
-----------------------------
------------------------- SUMMARY FOR DISKGROUP DG01
-------------------------
Allocated AU's: 2252039
Free AU's: 268486
AU's read for dump: 296
Block images saved: 40920
Map lines written: 296
Heartbeats seen: 0
Corrupt metadata blocks: 2
Corrupt AT blocks: 2
======================================================

6) Related bugs (closed as vendor issue):
======================================================
=)> Bug.13829821 (45) ORA-15196 [KFC.C 25210] [CHECK_KFBH] [2147483649]
[8] [2170839822 != 2170840087]
=)> Bug.13591322 (45) ORA-15196 AT BLOCK CORRUPTION
=)> Bug.12861891 (32) CORRUPTED BLOCK FOUND IN ASM_YB_FLASH DISK GROUP ON
RAC NODE
=)> Bug.10267691 (45) ORA-15196 INVALID ASM BLOCK HEADER [KFC.C 9195]
[HARD_KFBH] [2147483648] [1

o The customer hit a corruption on an AT block while trying to add disks to
DG01; the diskgroup was created with external redundancy. They collected amdu
and kfed output for the corrupted block, but did not get a dd copy of the blocks.

1) They confirmed that the I/O problems reported by the OS on the physical
disks are not associated with the ASM member disks or the affected ASM disk.

2) They explained that the dd dump provided so far was taken from the entire
disk, rather than from the partition created on it, which is what the
'/dev/oracleasm/disks/DISK007' ASMLIB disk actually uses.

[orcl tar7]$ kfed read dd.out
kfbh.endian: 0 ; 0x000: 0x00
kfbh.hard: 0 ; 0x001: 0x00
kfbh.type: 0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt: 0 ; 0x003: 0x00
kfbh.block.blk: 0 ; 0x004: blk=0
kfbh.block.obj: 0 ; 0x008: file=0
kfbh.check: 0 ; 0x00c: 0x00000000
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
2B272579F400 00000000 00000000 00000000 00000000 […………….]
Repeat 26 times
2B272579F5B0 00000000 00000000 00000000 01000000 […………….]
2B272579F5C0 FE830001 003FFFFF AFB60000 00000C4E [……?…..N…]
2B272579F5D0 00000000 00000000 00000000 00000000 […………….]
Repeat 1 times
2B272579F5F0 00000000 00000000 00000000 AA550000 […………..U.]
2B272579F600 00000000 00000000 00000000 00000000 […………….]
Repeat 223 times
KFED-00322: Invalid content encountered during block traversal:
[kfbtTraverseBlock][Invalid OSM block type][][0]

[orcl tar7]$ od -c dd.out | more
.
.
.
0077040 O R C L D I S K D I S K 0 0 7 \0
0077060 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0077100 \0 \0 \v 006 \0 001 003 D I S K 0 0 7 \0

It appears some operation has wiped out, i.e. 'written all zeros' over,
part of AT block 40 and all of block 41.
.
aunum=0 blknum=41 | more
kfbh.endian: 0 ; 0x000: 0x00
kfbh.hard: 0 ; 0x001: 0x00
kfbh.type: 0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt: 0 ; 0x003: 0x00
kfbh.block.blk: 0 ; 0x004: T=0 NUMB=0x0
kfbh.block.obj: 0 ; 0x008: TYPE=0x0 NUMB=0x0
kfbh.check: 0 ; 0x00c: 0x00000000
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
F5285200 00000000 00000000 00000000 00000000 […………….]
Repeat 31 times
.
.
offsets :00281f0: 001c 0000 8701 8000 e428 0000 3101 8000 ………(..1…
0028200: 0000 0000 0000 0000 0000 0000 0000 0000 …………….
0028210: 0000 0000 0000 0000 0000 0000 0000 0000 …………….
0028220: 0000 0000 0000 0000 0000 0000 0000 0000 …………….
0028230: 0000 0000 0000 0000 0000 0000 0000 0000 …………….
0028240: 0000 0000 0000 0000 0000 0000 0000 0000 …………….
0028250: 0000 0000 0000 0000 0000 0000 0000 0000 …………….
0028260: 0000 0000 0000 0000 0000 0000 0000 0000 …………….
0028270: 0000 0000 0000 0000 0000 0000 0000 0000 …………….

The ASM disk header is duplicated at locations 0x00007e10 and 0x00205e10.
ASM backs up the disk header for repairs, in case it is overwritten by
an external entity.

The corruption seen here is in allocation table blocks. In 12.1, ASM can
survive such a corruption provided the diskgroup compatibility is
advanced to 12.1.

【Oracle ASM Data Recovery】Analyzing the ORA-600 [kfcChkAio01] error

If the ASM instance crashes repeatedly and the following errors appear in the alert.log, this article is worth consulting:

 

NOTE: starting recovery of thread=1 ckpt=201.9904 group=2
NOTE: starting recovery of thread=2 ckpt=139.4186 group=2

Tue Dec 16 03:00:51 2008
Errors in file /u01/app/oracle/product/10.2.0/asm/admin/+ASM/udump/+asm2_ora_15305.trc:
ORA-00600: internal error code, arguments: [kfcChkAio01], [], [], [], [], [], [], []
ORA-15196: invalid ASM block header [kfc.c:5552] [endian_kfbh] [2079] [2147483648] [1 != 0]
Abort recovery for domain 2
NOTE: crash recovery signalled OER-600
ERROR: ORA-600 signalled during mount of diskgroup FLASH

This error causes the diskgroup to be dismounted and is generally caused by bug 7589862.

The stack call in the trace file can further confirm whether this is the problem:

kfcChkAio

The function kfxdrvMount is called when a diskgroup is mounted; it belongs to kfrcrv, the ASM recovery layer.

The main manifestation of the error is:

ORA-00600: internal error code, arguments: [kfcChkAio01], [], [], [], [], [], [], []
kfcChkAio01 indicates that an I/O operation failed because of an invalid block

ORA-15196: invalid ASM block header [kfc.c:5552] [endian_kfbh] [2079] [2147483648] [1 != 0]
The error above indicates an invalid block

Where:

  • endian_kfbh is the field in the block header being validated
  • 2079 is the ASM file number
  • 2147483648 is the ASM block number
  • 1 != 0: 1 was the value found in the field referenced by the first argument, while 0 was the expected value
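To inspect a suspect block yourself, a hedged kfed sketch (the device path, AU number, and block number are placeholders; the ASM file/block numbers from the error must first be mapped to an AU on the affected disk):

# dump one ASM metadata block; kfbh.type (KFBTYP_*) identifies the block kind
kfed read /dev/oracleasm/disks/DISK001 aun=0 blkn=0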

 

Fixing this problem requires manually patching the ASM metadata. If you are not familiar with ASM internal structures, it is advisable to have a professional do it.

 

 

If you cannot fix it yourself, you can ask the ASKMACLEAN professional ORACLE database repair team to recover it for you!

【Oracle ASM Data Recovery】Analyzing the ORA-15042: ASM disk is missing after add disk took place error

If, on 10.2.0.4 or later, a diskgroup is dismounted right after new disks are added to it, and attempts to mount the diskgroup fail with ORA-15042: ASM disk is missing after add disk took place, this post may help.

 

 

 

Tue Feb 12 17:33:59 2013
NOTE: X->S down convert bast on F1B3 bastCount=2
Wed Feb 13 04:06:38 2013 < ALTER DISKGROUP DG1 ADD DISK
  '/dev/mapper/t1_asm03p1',
  '/dev/mapper/t1_asm04p1',
  '/dev/mapper/t1_asm05p1',
  '/dev/mapper/t1_asm06p1'
  rebalance power 4 
Wed Feb 13 04:06:38 2013
NOTE: reconfiguration of group 1/0x53bffa1 (DG1), full=1
Wed Feb 13 04:06:39 2013
NOTE: initializing header on grp 1 disk DG1_0026
NOTE: initializing header on grp 1 disk DG1_0027
NOTE: initializing header on grp 1 disk DG1_0028
NOTE: initializing header on grp 1 disk DG1_0029
NOTE: cache opening disk 26 of grp 1: DG1_0026 path:/dev/mapper/t1_asm03p1
NOTE: cache opening disk 27 of grp 1: DG1_0027 path:/dev/mapper/t1_asm04p1
NOTE: cache opening disk 28 of grp 1: DG1_0028 path:/dev/mapper/t1_asm05p1
NOTE: cache opening disk 29 of grp 1: DG1_0029 path:/dev/mapper/t1_asm06p1
NOTE: PST update: grp = 1
NOTE: requesting all-instance disk validation for group=1
Wed Feb 13 04:06:39 2013
NOTE: disk validation pending for group 1/0x53bffa1 (DG1)
Wed Feb 13 04:06:40 2013
NOTE: requesting all-instance membership refresh for group=1
Wed Feb 13 04:06:40 2013
NOTE: membership refresh pending for group 1/0x53bffa1 (DG1)
SUCCESS: validated disks for 1/0x53bffa1 (DG1)
SUCCESS: refreshed membership for 1/0x53bffa1 (DG1)
Wed Feb 13 04:07:11 2013 < ALTER DISKGROUP DG1 ADD DISK
  '/dev/mapper/t1_asm03p1',
  '/dev/mapper/t1_asm04p1',
  '/dev/mapper/t1_asm05p1',
  '/dev/mapper/t1_asm06p1'
  rebalance power 4 
NOTE: cache closing disk 26 of grp 1: DG1_0026 path:/dev/mapper/t1_asm03p1
NOTE: cache closing disk 26 of grp 1: DG1_0026 path:/dev/mapper/t1_asm03p1
NOTE: cache closing disk 27 of grp 1: DG1_0027 path:/dev/mapper/t1_asm04p1
NOTE: cache closing disk 27 of grp 1: DG1_0027 path:/dev/mapper/t1_asm04p1
NOTE: cache closing disk 28 of grp 1: DG1_0028 path:/dev/mapper/t1_asm05p1
NOTE: cache closing disk 28 of grp 1: DG1_0028 path:/dev/mapper/t1_asm05p1
NOTE: cache closing disk 29 of grp 1: DG1_0029 path:/dev/mapper/t1_asm06p1
NOTE: cache closing disk 29 of grp 1: DG1_0029 path:/dev/mapper/t1_asm06p1
Wed Feb 13 04:09:36 2013
SQL> ALTER DISKGROUP DG1 ADD DISK
  '/dev/mapper/t1_asm03p1',
  '/dev/mapper/t1_asm04p1',
  '/dev/mapper/t1_asm05p1',
  '/dev/mapper/t1_asm06p1'
  rebalance power 4 
Wed Feb 13 04:09:36 2013
NOTE: reconfiguration of group 1/0x53bffa1 (DG1), full=1
Wed Feb 13 04:09:36 2013
NOTE: requesting all-instance membership refresh for group=1
Wed Feb 13 04:09:36 2013
NOTE: membership refresh pending for group 1/0x53bffa1 (DG1)
SUCCESS: validated disks for 1/0x53bffa1 (DG1)
NOTE: PST update: grp = 1, dsk = 26, mode = 0x4
NOTE: PST update: grp = 1, dsk = 27, mode = 0x4
NOTE: PST update: grp = 1, dsk = 28, mode = 0x4
NOTE: PST update: grp = 1, dsk = 29, mode = 0x4
Wed Feb 13 04:09:42 2013
ERROR: too many offline disks in PST (grp 1)
Wed Feb 13 04:09:42 2013
SUCCESS: refreshed membership for 1/0x53bffa1 (DG1)
ERROR: ORA-15040 thrown in RBAL for group number 1
Wed Feb 13 04:09:42 2013
Errors in file /opt/oracle/product/10.2.0/asm/admin/+ASM/bdump/+asm1_rbal_30556.trc:
ORA-15040: diskgroup is incomplete
ORA-15066: offlining disk "" may result in a data loss
ORA-15042: ASM disk "29" is missing
ORA-15042: ASM disk "28" is missing
ORA-15042: ASM disk "27" is missing
ORA-15042: ASM disk "26" is missing
Wed Feb 13 04:09:43 2013
ERROR: PST-initiated MANDATORY DISMOUNT of group DG1
 Received dirty detach msg from node 3 for dom 1
Wed Feb 13 04:09:43 2013
Dirty detach reconfiguration started (old inc 12, new inc 12)

 

 

At this point we need to analyze ASM's DISK DIRECTORY, PST, and DISK HEADER.
Let's take a look:

kfddde[4].entry.incarn: 2 ; 0x724: A=0 NUMM=0x1
kfddde[4].entry.hash: 0 ; 0x728: 0x00000000
kfddde[4].entry.refer.number: 0 ; 0x72c: 0x00000000
kfddde[4].entry.refer.incarn: 0 ; 0x730: A=0 NUMM=0x0
kfddde[4].dsknum: 28 ; 0x734: 0x001c
kfddde[4].state: 8 ; 0x736: KFDSTA_ADDING <<<===============================
kfddde[4].ub1spare: 0 ; 0x737: 0x00
kfddde[4].dskname: DG1_0028 ; 0x738: length=8
kfddde[4].fgname: DG1_0028 ; 0x758: length=8
kfddde[4].crestmp.hi: 32983460 ; 0x778: HOUR=0x4 DAYS=0xd MNTH=0x2 YEAR=0x7dd
kfddde[4].crestmp.lo: 443710464 ; 0x77c: USEC=0x0 MSEC=0x9f SECS=0x27 MINS=0x6
kfddde[4].failstmp.hi: 0 ; 0x780: HOUR=0x0 DAYS=0x0 MNTH=0x0 YEAR=0x0
kfddde[4].failstmp.lo: 0 ; 0x784: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfddde[4].timer: 0 ; 0x788: 0x00000000
kfddde[4].size: 307199 ; 0x78c: 0x0004afff

kfddde[3].entry.incarn: 2 ; 0x564: A=0 NUMM=0x1
kfddde[3].entry.hash: 0 ; 0x568: 0x00000000
kfddde[3].entry.refer.number: 0 ; 0x56c: 0x00000000
kfddde[3].entry.refer.incarn: 0 ; 0x570: A=0 NUMM=0x0
kfddde[3].dsknum: 27 ; 0x574: 0x001b
kfddde[3].state: 8 ; 0x576: KFDSTA_ADDING <<<===============================
kfddde[3].ub1spare: 0 ; 0x577: 0x00
kfddde[3].dskname: DG1_0027 ; 0x578: length=8
kfddde[3].fgname: DG1_0027 ; 0x598: length=8
kfddde[3].crestmp.hi: 32983460 ; 0x5b8: HOUR=0x4 DAYS=0xd MNTH=0x2 YEAR=0x7dd
kfddde[3].crestmp.lo: 443710464 ; 0x5bc: USEC=0x0 MSEC=0x9f SECS=0x27 MINS=0x6
kfddde[3].failstmp.hi: 0 ; 0x5c0: HOUR=0x0 DAYS=0x0 MNTH=0x0 YEAR=0x0
kfddde[3].failstmp.lo: 0 ; 0x5c4: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0


kfddde[5].entry.incarn: 2 ; 0x8e4: A=0 NUMM=0x1
kfddde[5].entry.hash: 0 ; 0x8e8: 0x00000000
kfddde[5].entry.refer.number: 0 ; 0x8ec: 0x00000000
kfddde[5].entry.refer.incarn: 0 ; 0x8f0: A=0 NUMM=0x0
kfddde[5].dsknum: 29 ; 0x8f4: 0x001d
kfddde[5].state: 8 ; 0x8f6: KFDSTA_ADDING <<<===============================
kfddde[5].ub1spare: 0 ; 0x8f7: 0x00
kfddde[5].dskname: DG1_0029 ; 0x8f8: length=8
kfddde[5].fgname: DG1_0029 ; 0x918: length=8
kfddde[5].crestmp.hi: 32983460 ; 0x938: HOUR=0x4 DAYS=0xd MNTH=0x2 YEAR=0x7dd
kfddde[5].crestmp.lo: 443710464 ; 0x93c: USEC=0x0 MSEC=0x9f SECS=0x27 MINS=0x6
kfddde[5].failstmp.hi: 0 ; 0x940: HOUR=0x0 DAYS=0x0 MNTH=0x0 YEAR=0x0
kfddde[5].failstmp.lo: 0 ; 0x944: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfddde[5].timer: 0 ; 0x948: 0x00000000

File_name :: dg1_3.kfed

/dev/mapper/t1_asm05p1
kfbh.endian: 0 ; 0x000: 0x00
kfbh.hard: 0 ; 0x001: 0x00
kfbh.type: 0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt: 0 ; 0x003: 0x00
kfbh.block.blk: 0 ; 0x004: T=0 NUMB=0x0
kfbh.block.obj: 0 ; 0x008: TYPE=0x0 NUMB=0x0
kfbh.check: 0 ; 0x00c: 0x00000000
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000

/dev/mapper/t1_asm06p1
kfbh.endian: 0 ; 0x000: 0x00
kfbh.hard: 0 ; 0x001: 0x00
kfbh.type: 0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt: 0 ; 0x003: 0x00
kfbh.block.blk: 0 ; 0x004: T=0 NUMB=0x0
kfbh.block.obj: 0 ; 0x008: TYPE=0x0 NUMB=0x0
kfbh.check: 0 ; 0x00c: 0x00000000
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000

 

 

The KFDSTA_ADDING status in the DISK DIRECTORY dumps above shows that the newly added disks were still in the process of being added, and rebalancing had not completed.
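The same disk states can be cross-checked from the ASM instance, complementing the kfed dumps; a minimal sketch:

-- discovered disks and their header/mode status as seen by the ASM instance
SELECT path, group_number, disk_number, header_status, mode_status, state
  FROM v$asm_disk
 ORDER BY group_number, disk_number;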

 

The following script reads the PST block of every disk:

 

vi kfed_pst.sh
-----
#!/bin/sh
# Dump the PST block (AU 1, block 2) of every ASM disk device in the
# current directory into /tmp/kfed_PST.out
rm -f /tmp/kfed_PST.out
for i in *
do
  echo "$i" >> /tmp/kfed_PST.out
  ./kfed read "$i" aun=1 blkn=2 >> /tmp/kfed_PST.out
done
-----

chmod u+x kfed_pst.sh

 

 

This problem has to be resolved by manually patching the ASM metadata; otherwise the diskgroup cannot be mounted again.

 

If you cannot fix it yourself, you can ask the ASKMACLEAN professional ORACLE database repair team to recover it for you!

【Oracle ASM】Analyzing the ORA-15196: invalid ASM block header [kfc.c:9194] [check_kfbh] error

The typical symptoms of this problem are as follows:

  1. It occurs in RAC environments on releases earlier than 11.2
  2. ORA-15196 appears in the alert.log when disks are added to or dropped from a mounted diskgroup
  3. The alert.log generally shows blk=2, i.e. a checksum error on the metadata block with block number 2

 

 

If you cannot fix it yourself, you can ask the Parnassusdata professional ORACLE database repair team to recover it for you!

Parnassusdata professional database repair team

Service hotline: 13764045638   QQ: 47079569   Email: service@parnassusdata.com

 

For example:

 

WARNING: cache read a corrupted block gn=27 dsk=3 blk=2 from disk 3
NOTE: a corrupted block was dumped to /oracle/product/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_551.trc
ERROR: cache failed to read gn=27 dsk=3 blk=2 from disk(s): 3
ORA-15196: invalid ASM block header [kfc.c:9194] [check_kfbh] [2147483651] [2] [2158748224 != 4194727149]
System State dumped to trace file /oracle/product/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_551.trc
NOTE: cache initiating offline of disk 3 group 27

 

  1. Here blk=2 generally denotes the allocation table
  2. If blk=# is not 2, the symptom may not match what this document describes

 

A hex dump shows the string 'etoV' in the third block; 'etoV' is simply 'Vote' with its bytes reversed (a voting-disk signature written on a little-endian system). For example:

 

dd if=/tmp/etoV.dd bs=4096 skip=2 count=1 | hexdump -C | grep etoV

1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 1.6e-05 seconds, 256 MB/s
00000600 65 74 6f 56 03 00 00 00 01 03 0b 01 00 00 00 00 |etoV............|

or:

$ dd if=/tmp/etoV.dd bs=4096 skip=2 count=1 | od -t xz | grep etoV
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 1.7e-05 seconds, 241 MB/s
0003000 566f7465 00000003 010b0301 00000000 >etoV............<

 

 

The likely cause is an OS-level disk path misconfiguration, typically after a system reboot: when CRS starts, it mistakes an ASM disk for a voting disk device and writes voting disk information to the wrong device.
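A hedged first check for this scenario (run as the clusterware owner) is to confirm which devices CRS currently treats as voting disks:

# list the voting disk locations known to CRS
crsctl query css votedisk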

 

If the diskgroup uses high or normal redundancy, the problem is easy to resolve; with EXTERNAL redundancy, however, a professional may need to manually patch the ASM disk to repair it.

 

If you cannot fix it yourself, you can ask the ASKMACLEAN professional ORACLE database repair team to recover it for you!

 

 
