[Repost] ASM metadata blocks

An ASM instance manages the metadata needed to make ASM files available to Oracle databases and other ASM clients. ASM metadata is stored in disk groups and organised in metadata structures. These metadata structures consist of one or more ASM metadata blocks. For example, the ASM disk header consists of a single ASM metadata block. Other structures, like the Partnership and Status Table, consist of exactly one allocation unit (AU). Some ASM metadata, like the File Directory, can span multiple AUs and does not have a predefined size; in fact, the File Directory grows as needed and is managed like any other ASM file.

ASM metadata block types

The following are the ASM metadata block types:

  • KFBTYP_DISKHEAD – The ASM disk header – the very first block in every ASM disk. A copy of this block will be in the second last Partnership and Status Table (PST) block (in ASM version 11.1.0.7 and later). The copy of this block will also be in the very first block in Allocation Unit 11, for disk groups with COMPATIBLE.ASM=12.1 or higher.
  • KFBTYP_FREESPC – The Free Space Table block.
  • KFBTYP_ALLOCTBL – The Allocation Table block.
  • KFBTYP_PST_META – The Partnership and Status Table (PST) block. The PST blocks 0 and 1 will be of this type.
  • KFBTYP_PST_DTA – The PST blocks with the actual PST data.
  • KFBTYP_PST_NONE – The PST block with no PST data. Remember that Allocation Unit 1 (AU1) on every disk is reserved for the PST, but only some disks will have the PST data.
  • KFBTYP_HBEAT – The heartbeat block, in the PST.
  • KFBTYP_FILEDIR – The File Directory block.
  • KFBTYP_INDIRECT – The Indirect File Directory block, containing a pointer to another file directory block.
  • KFBTYP_LISTHEAD – The Disk Directory block. The very first block in the ASM disk directory. The field kfdhdb.f1b1locn in the ASM disk header will point to the allocation unit whose block 0 will be of this type.
  • KFBTYP_DISKDIR – The rest of the blocks in the Disk Directory will be of this type.
  • KFBTYP_ACDC – The Active Change Directory (ACD) block. The very first block of the ACD will be of this type.
  • KFBTYP_CHNGDIR – The blocks with the actual ACD data.
  • KFBTYP_COD_BGO – The Continuing Operations Directory (COD) block for background operations data.
  • KFBTYP_COD_RBO – The COD block that marks the rollback operations data.
  • KFBTYP_COD_DATA – The COD block with the actual rollback operations data.
  • KFBTYP_TMPLTDIR – The Template Directory block.
  • KFBTYP_ALIASDIR – The Alias Directory block.
  • KFBTYP_SR – The Staleness Registry block.
  • KFBTYP_STALEDIR – The Staleness Directory block.
  • KFBTYP_VOLUMEDIR – The ADVM Volume Directory block.
  • KFBTYP_ATTRDIR – The Attributes Directory block.
  • KFBTYP_USERDIR – The User Directory block.
  • KFBTYP_GROUPDIR – The User Group Directory block.
  • KFBTYP_USEDSPC – The Disk Used Space Directory block.
  • KFBTYP_ASMSPFALS – The ASM spfile alias block.
  • KFBTYP_PASWDDIR – The ASM Password Directory block.
  • KFBTYP_INVALID – Not an ASM metadata block.

Note that KFBTYP_INVALID is not an actual block type stored in an ASM metadata block. Instead, ASM will return this if it encounters a block whose type is not one of the valid ASM metadata block types. For example, if the ASM disk header is corrupt, say zeroed out, ASM will report it as KFBTYP_INVALID. We will also see the same when reading such a block with the kfed tool.

ASM metadata block

The default ASM metadata block size is 4096 bytes. The block size is recorded in the ASM disk header field kfdhdb.blksize. Note that the ASM metadata block size has nothing to do with the database block size.
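A quick way to confirm this on a given disk (a minimal sketch, assuming an example ASM disk at /dev/sdc1) is to read the disk header with kfed and look at that field:

$ kfed read /dev/sdc1 | grep kfdhdb.blksize

On a disk group using the default metadata block size, kfdhdb.blksize should report 4096.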

ASM metadata block header

The first 32 bytes of an ASM metadata block contain the block header (not to be confused with the ASM disk header). The block header has the following information:

  • kfbh.endian – Platform endianness.
  • kfbh.hard – H.A.R.D. (Hardware Assisted Resilient Data) signature.
  • kfbh.type – Block type.
  • kfbh.datfmt – Block data format.
  • kfbh.block.blk – Location (block number).
  • kfbh.block.obj – The object this block belongs to (e.g. an ASM file number or disk number).
  • kfbh.check – Block checksum.
  • kfbh.fcn.base – Block change control number (base).
  • kfbh.fcn.wrap – Block change control number (wrap).

The FCN is the ASM equivalent of the database SCN.
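All of these fields appear at the top of every kfed dump. A quick sketch, again assuming an example ASM disk at /dev/sdc1, that filters just the block header lines of the disk header block:

$ kfed read /dev/sdc1 aun=0 blkn=0 | grep kfbh

The kfbh.type line should show KFBTYP_DISKHEAD for that block.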

The rest of the contents of an ASM metadata block will be specific to the block type. In other words, an ASM disk header block will have the disk header specific data – disk number, disk name, disk group name, etc. A file directory block will have the extent location data for a file, etc.

Conclusion

An ASM instance manages ASM metadata blocks. It creates them, updates them, calculates and updates the checksum on writes, reads and verifies the checksums on reads, exchanges the blocks with other instances, and so on. ASM metadata structures consist of one or more ASM metadata blocks. A tool like kfed can be used to read and modify ASM metadata blocks.

[Repost] Tell me about your ASM

When diagnosing ASM issues, it helps to know a bit about the setup – disk group names and types, the state of disks, ASM instance initialisation parameters and whether any rebalance operations are in progress. In those cases I usually ask for an HTML report, produced by running a SQL script against one of the ASM instances. This post is about that script, with comments about the output.

The script

First, here is the script, which may be saved as asm_report.sql:

spool /tmp/ASM_report.html
set markup html on
set echo off
set feedback off
set pages 10000
break on INST_ID on GROUP_NUMBER
prompt ASM report
select to_char(SYSDATE, 'DD-Mon-YYYY HH24:MI:SS') "Time" from dual;
prompt Version
select * from V$VERSION where BANNER like '%Database%' order by 1;
prompt Cluster wide operations
select * from GV$ASM_OPERATION order by 1;
prompt
prompt Disk groups, including the dismounted disk groups
select * from V$ASM_DISKGROUP order by 1, 2, 3;
prompt All disks, including the candidate disks
select GROUP_NUMBER, DISK_NUMBER, FAILGROUP, NAME, LABEL, PATH, MOUNT_STATUS, HEADER_STATUS, STATE, OS_MB, TOTAL_MB, FREE_MB, CREATE_DATE, MOUNT_DATE, SECTOR_SIZE, VOTING_FILE, FAILGROUP_TYPE
from V$ASM_DISK
where MODE_STATUS='ONLINE'
order by 1, 2;
prompt Offline disks
select GROUP_NUMBER, DISK_NUMBER, FAILGROUP, NAME, MOUNT_STATUS, HEADER_STATUS, STATE, REPAIR_TIMER
from V$ASM_DISK
where MODE_STATUS='OFFLINE'
order by 1, 2;
prompt Disk group attributes
select GROUP_NUMBER, NAME, VALUE from V$ASM_ATTRIBUTE where NAME not like 'template%' order by 1;
prompt Connected clients
select * from V$ASM_CLIENT order by 1, 2;
prompt Non-default ASM specific initialisation parameters, including the hidden ones
select KSPPINM "Parameter", KSPFTCTXVL "Value"
from X$KSPPI a, X$KSPPCV2 b
where a.INDX + 1 = KSPFTCTXPN and (KSPPINM like '%asm%' or KSPPINM like '%balance%' or KSPPINM like '%auto_manage%') and kspftctxdf = 'FALSE'
order by 1 desc;
prompt Memory, cluster and instance specific initialisation parameters
select NAME "Parameter", VALUE "Value", ISDEFAULT "Default"
from V$PARAMETER
where NAME like '%target%' or NAME like '%pool%' or NAME like 'cluster%' or NAME like 'instance%'
order by 1;
prompt Disk group imbalance
select g.NAME "Diskgroup",
100*(max((d.TOTAL_MB-d.FREE_MB + (128*g.ALLOCATION_UNIT_SIZE/1048576))/(d.TOTAL_MB + (128*g.ALLOCATION_UNIT_SIZE/1048576)))-min((d.TOTAL_MB-d.FREE_MB + (128*g.ALLOCATION_UNIT_SIZE/1048576))/(d.TOTAL_MB + (128*g.ALLOCATION_UNIT_SIZE/1048576))))/max((d.TOTAL_MB-d.FREE_MB + (128*g.ALLOCATION_UNIT_SIZE/1048576))/(d.TOTAL_MB + (128*g.ALLOCATION_UNIT_SIZE/1048576))) "Imbalance",
count(*) "Disk count",
g.TYPE "Type"
from V$ASM_DISK_STAT d , V$ASM_DISKGROUP_STAT g
where d.GROUP_NUMBER = g.GROUP_NUMBER and d.STATE = 'NORMAL' and d.MOUNT_STATUS = 'CACHED'
group by g.NAME, g.TYPE;
prompt End of ASM report
set markup html off
set echo on
set feedback on
exit

To produce the report, which will be saved as /tmp/ASM_report.html, run the following command as the OS user that owns the Grid Infrastructure home (usually grid or oracle), against an ASM instance (say +ASM1), like this:

$ sqlplus -S / as sysasm @asm_report.sql
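Since this is a local bequeath connection, the environment has to point at the Grid Infrastructure home and the ASM instance before running the script. A minimal sketch, assuming a Grid home of /u01/app/12.1.0/grid and an instance name of +ASM1 (both are just example values):

$ export ORACLE_HOME=/u01/app/12.1.0/grid
$ export ORACLE_SID=+ASM1
$ $ORACLE_HOME/bin/sqlplus -S / as sysasm @asm_report.sql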

To save the output in a different location or under a different name, just modify the spool command (line 1 in the script).

The report

The report first shows the time it was generated and the ASM version.

It then shows if there are any ASM operations in progress. In the example output I had here, a rebalance was running in ASM instance 1; the resync and rebalance phases had completed and compacting was the only outstanding operation.

Next we see the information about all disk groups, including the dismounted disk groups. This is then followed by the info about disks, again with the note that this includes the candidate disks.

I have separated the info about offline disks, as this may be of interest when dealing with disk issues.

Next are the disk group attributes, with the note that this will be displayed only for ASM version 11.1 and later, as we did not have the disk group attributes in earlier versions.

This is followed by the list of connected clients, usually database instances served by that ASM instance.

The section with ASM initialisation parameters includes hidden and some Exadata specific (_auto_manage) parameters.

I have also separated the memory, cluster and instance specific initialisation parameters as they are often of special interest.

The last section shows the disk group imbalance report.

Sample reports

Here is a sample report from an Exadata system: ASM_report_Exa.html.

And here is a sample report from a version 12c Oracle Restart system: ASM_report_12c.html.

Conclusion

While I use this report for a quick overview of an ASM setup, it can also serve as ‘backup’ information about your ASM configuration. You are welcome to modify the script to produce a report that suits your needs. Please let me know if you find any issues with the script or if you have suggestions for improvements.

Acknowledgments

The bulk of the script is based on My Oracle Support (MOS) Doc ID 470211.1, by Oracle Support engineer Esteban D. Bernal.

The imbalance SQL is based on the Reporting Disk Imbalances script from Oracle Press book Oracle Automatic Storage Management, Under-the-Hood & Practical Deployment Guide, by Nitin Vengurlekar, Murali Vallath and Rich Long.

[Repost] The ASM password directory

Password file authentication for Oracle Database or ASM can work for both local and remote connections. In Oracle version 12c, the password files can reside in an ASM disk group. The ASM metadata structure for managing the passwords is the ASM Password Directory – ASM metadata file number 13.

Note that the password files are accessible only after the disk group is mounted. One implication of this is that no remote SYSASM access to ASM and no remote SYSDBA access to database is possible, until the disk group with the password file is mounted.

The password file

The disk group based password file is managed with ASMCMD commands, the ORAPWD tool and SRVCTL commands. The password file can be created with ORAPWD and ASMCA (at the time ASM is configured). All other password file manipulations are performed with ASMCMD or SRVCTL commands.

The COMPATIBLE.ASM disk group attribute must be set to at least 12.1 for the disk group where the password file is to be located. The SYSASM privilege is required to manage the ASM password file and the SYSDBA privilege is required to manage the database password file.

Let’s create the ASM password file in disk group DATA.

First make sure the COMPATIBLE.ASM attribute is set to the minimum required value:

$ asmcmd lsattr -G DATA -l compatible.asm

Name            Value

compatible.asm  12.1.0.0.0

Create the ASM password file:

$ orapwd file='+DATA/orapwasm' asm=y

Enter password for SYS: *******

$

Get the ASM password file name:

$ asmcmd pwget --asm

+DATA/orapwasm

And finally, find the ASM password file location and fully qualified name:

$ asmcmd find +DATA "*" --type password

+DATA/ASM/PASSWORD/pwdasm.256.837972683

+DATA/orapwasm

From this we see that +DATA/orapwasm is an alias for the actual file, which lives in the special +[DISK GROUP NAME]/ASM/PASSWORD location.

The ASM password directory

The ASM metadata structure for managing the disk group based passwords is the ASM Password Directory – ASM metadata file number 13. Note that the password file is also managed by the ASM File Directory, like any other ASM based file. I guess this redundancy just highlights the importance of the password file.

Let’s locate the ASM Password Directory. As that is file number 13, we can look it up in the ASM File Directory. That means we first need to locate the ASM File Directory itself. The pointer to the first AU of the ASM File Directory is in the disk header of disk 0, in the field kfdhdb.f1b1locn:

First match the disk numbers to disk paths for disk group DATA:

$ asmcmd lsdsk -p -G DATA | cut -c12-21,78-88

Disk_Num  Path

0  /dev/sdc1

1  /dev/sdd1

2  /dev/sde1

3  /dev/sdf1

Now get the starting point of the File Directory:

$ kfed read /dev/sdc1 | grep f1b1locn

kfdhdb.f1b1locn:                     10 ; 0x0d4: 0x0000000a

This is telling us that the file directory starts at AU 10 on that disk. Now look up block 13 in AU 10 – that will be the directory entry for ASM file 13, i.e. the ASM Password Directory.

$ kfed read /dev/sdc1 aun=10 blkn=13 | egrep "au|disk" | head

kfffde[0].xptr.au:                   47 ; 0x4a0: 0x0000002f

kfffde[0].xptr.disk:                  2 ; 0x4a4: 0x0002

kfffde[1].xptr.au:                   45 ; 0x4a8: 0x0000002d

kfffde[1].xptr.disk:                  1 ; 0x4ac: 0x0001

kfffde[2].xptr.au:                   46 ; 0x4b0: 0x0000002e

kfffde[2].xptr.disk:                  3 ; 0x4b4: 0x0003

kfffde[3].xptr.au:           4294967295 ; 0x4b8: 0xffffffff

kfffde[3].xptr.disk:              65535 ; 0x4bc: 0xffff

The output is telling us that the ASM Password Directory is in AU 47 on disk 2 (with copies in AU 45 on disk 1, and AU 46 on disk 3). Note that the ASM Password Directory is triple mirrored, even in a normal redundancy disk group.
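The mirror copies can be checked the same way before moving on; a quick sketch, using the disk paths matched above (disk 1 is /dev/sdd1 and disk 3 is /dev/sdf1):

$ kfed read /dev/sdd1 aun=45 blkn=1 | grep kfbh.type
$ kfed read /dev/sdf1 aun=46 blkn=1 | grep kfbh.type

Both should report the same block type we are about to see on disk 2.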

Now look at AU 47 on disk 2:

$ kfed read /dev/sde1 aun=47 blkn=1 | more

kfbh.endian:                          1 ; 0x000: 0x01

kfbh.hard:                          130 ; 0x001: 0x82

kfbh.type:                           29 ; 0x002: KFBTYP_PASWDDIR

kfzpdb.block.incarn:                  3 ; 0x000: A=1 NUMM=0x1

kfzpdb.block.frlist.number:  4294967295 ; 0x004: 0xffffffff

kfzpdb.block.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0

kfzpdb.next.number:                  15 ; 0x00c: 0x0000000f

kfzpdb.next.incarn:                   3 ; 0x010: A=1 NUMM=0x1

kfzpdb.flags:                         0 ; 0x014: 0x00000000

kfzpdb.file:                        256 ; 0x018: 0x00000100

kfzpdb.finc:                  837972683 ; 0x01c: 0x31f272cb

kfzpdb.bcount:                       15 ; 0x020: 0x0000000f

kfzpdb.size:                        512 ; 0x024: 0x00000200

The ASM metadata block type is KFBTYP_PASWDDIR, i.e. the ASM Password Directory. We see that it points to ASM file number 256 (kfzpdb.file=256). From the earlier asmcmd find command we already know the actual password file is ASM file 256 in disk group DATA:

$ asmcmd ls -l +DATA/ASM/PASSWORD

Type      Redund  Striped  Time             Sys  Name

PASSWORD  HIGH    COARSE   JAN 27 18:00:00  Y    pwdasm.256.837972683

Again, note that the ASM password file is triple mirrored (Redund=HIGH) even in a normal redundancy disk group.

Conclusion

Starting with ASM version 12c we can store ASM and database password files in an ASM disk group. The ASM password file can be created at the time of Grid Infrastructure installation or later with the ORAPWD command. The disk group based password files are managed with ASMCMD, ORAPWD and SRVCTL commands.

[Repost] ACFS disk group rebalance

Starting with Oracle Database version 11.2, an ASM disk group can be used for hosting one or more cluster file systems. These are known as Oracle ASM Cluster File Systems or Oracle ACFS. This functionality is achieved by creating special volume files inside the ASM disk group, which are then exposed to the OS as block devices. The file systems are then created on those block devices.

 

This post is about the rebalancing, mirroring and extent management of the ACFS volume files.

 

The environment used for the examples:

* 64-bit Oracle Linux 5.4, in Oracle Virtual Box

* Oracle Restart and ASM version 11.2.0.3.0 – 64bit

* ASMLib/oracleasm version 2.1.7

 

Set up ACFS volumes

 

As this is an Oracle Restart environment (single instance), I have to load ADVM/ACFS drivers manually (as root user).

 

# acfsload start

ACFS-9391: Checking for existing ADVM/ACFS installation.

ACFS-9392: Validating ADVM/ACFS installation files for operating system.

ACFS-9393: Verifying ASM Administrator setup.

ACFS-9308: Loading installed ADVM/ACFS drivers.

ACFS-9154: Loading 'oracleoks.ko' driver.

ACFS-9154: Loading 'oracleadvm.ko' driver.

ACFS-9154: Loading 'oracleacfs.ko' driver.

ACFS-9327: Verifying ADVM/ACFS devices.

ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.

ACFS-9156: Detecting control device '/dev/ofsctl'.

ACFS-9322: completed

#

 

Create a disk group to hold ASM cluster file systems.

 

$ sqlplus / as sysasm

 

SQL> create diskgroup ACFS

disk 'ORCL:ASMDISK5', 'ORCL:ASMDISK6'

attribute 'COMPATIBLE.ASM' = '11.2', 'COMPATIBLE.ADVM' = '11.2';

 

Diskgroup created.

 

SQL>

 

While it is possible and supported to have a disk group that holds both database files and ACFS volume files, I recommend having a separate disk group for ACFS volumes. This provides role/function separation and potential performance benefits for database files.

 

Check the allocation unit (AU) sizes for all disk groups.

 

SQL> select group_number “Group#”, name “Name”, allocation_unit_size “AU size”

from v$asm_diskgroup_stat;

 

Group# Name        AU size

---------- -------- ----------

1 ACFS        1048576

2 DATA        1048576

 

SQL>

 

Note the default AU size (1MB) for both disk groups. I will refer to this later on, when I talk about the extent sizes for the volume files.

 

Create some volumes in disk group ACFS.

 

$ asmcmd volcreate -G ACFS -s 4G VOL1

$ asmcmd volcreate -G ACFS -s 2G VOL2

$ asmcmd volcreate -G ACFS -s 1G VOL3

 

Get the volume info.

 

$ asmcmd volinfo -a

Diskgroup Name: ACFS

 

Volume Name: VOL1

Volume Device: /dev/asm/vol1-142

State: ENABLED

Size (MB): 4096

Resize Unit (MB): 32

Redundancy: MIRROR

Stripe Columns: 4

Stripe Width (K): 128

Usage:

Mountpath:

 

Volume Name: VOL2

Volume Device: /dev/asm/vol2-142

State: ENABLED

Size (MB): 2048

Resize Unit (MB): 32

Redundancy: MIRROR

Stripe Columns: 4

Stripe Width (K): 128

Usage:

Mountpath:

 

Volume Name: VOL3

Volume Device: /dev/asm/vol3-142

State: ENABLED

Size (MB): 1024

Resize Unit (MB): 32

Redundancy: MIRROR

Stripe Columns: 4

Stripe Width (K): 128

Usage:

Mountpath:

 

$

 

Note that the volumes are automatically enabled after creation. On (server) restart we would need to manually load ADVM/ACFS drivers (acfsload start) and enable the volumes (asmcmd volenable -a).
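In other words, after a server restart the steps would be roughly the following – the first command as the root user, the second as the Grid Infrastructure owner (a sketch of what is described above, not an exhaustive startup procedure):

# acfsload start
$ asmcmd volenable -a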

 

ASM files for ACFS support

 

For each volume, the ASM creates a volume file. In a redundant disk group, each volume will also have a dirty region logging (DRL) file.

 

Get some info about our volume files.

 

SQL> select file_number “File#”, volume_name “Volume”, volume_device “Device”, size_mb “MB”, drl_file_number “DRL#”

from v$asm_volume;

 

File# Volume Device               MB DRL#

------ ------ ----------------- ----- ----

256 VOL1   /dev/asm/vol1-142  4096  257

259 VOL2   /dev/asm/vol2-142  2048  258

261 VOL3   /dev/asm/vol3-142  1024  260

 

SQL>

 

In addition to volume names, device names and sizes, this shows ASM file numbers 256, 259 and 261 for the volume devices, and ASM file numbers 257, 258 and 260 for the associated DRL files.

 

Volume file extents

 

Get the extent distribution info for one of the volume files.

 

SQL> select xnum_kffxp “Extent”, au_kffxp “AU”, disk_kffxp “Disk”

from x$kffxp

where group_kffxp=2 and number_kffxp=261

order by 1,2;

 

Extent         AU       Disk

---------- ---------- ----------

0       6256          0

0       6256          1

1       6264          0

1       6264          1

2       6272          1

2       6272          0

3       6280          0

3       6280          1

127       7272          0

127       7272          1

2147483648       6252          0

2147483648       6252          1

2147483648 4294967294      65534

 

259 rows selected.

 

SQL>

 

First thing to note is that each extent is mirrored, as the volume is in a normal redundancy disk group.

 

We also see that the volume file 261 has 128 extents. As the volume size is 1GB, that means each extent size is 8MB or 8 AUs. The point here is that the volume files have their own extent size, unlike the standard ASM files that inherit the (initial) extent size from the disk group AU size.
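A quick way to cross-check this (a sketch, reusing the same group and file numbers as in the query above) is to count the distinct extents for the volume file and divide the volume size by that count:

SQL> select count(distinct xnum_kffxp) "Extents"
from x$kffxp
where group_kffxp=2 and number_kffxp=261 and xnum_kffxp <> 2147483648;

With 128 extents for a 1 GB volume, each extent is 1024 MB / 128 = 8 MB, i.e. 8 allocation units with the 1 MB AU size.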

 

ASM based cluster file systems

 

We can now use the volumes to create ASM cluster file systems and let everyone use them (this needs to be done as root user, of course):

 

# mkdir /acfs1

# mkdir /acfs2

# mkdir /acfs3

 

# chmod 777 /acfs?

 

# /sbin/mkfs -t acfs /dev/asm/vol1-142

mkfs.acfs: version         = 11.2.0.3.0

mkfs.acfs: on-disk version = 39.0

mkfs.acfs: volume          = /dev/asm/vol1-142

mkfs.acfs: volume size     = 4294967296

mkfs.acfs: Format complete.

 

# /sbin/mkfs -t acfs /dev/asm/vol2-142

mkfs.acfs: version         = 11.2.0.3.0

mkfs.acfs: on-disk version = 39.0

mkfs.acfs: volume          = /dev/asm/vol2-142

mkfs.acfs: volume size     = 2147483648

mkfs.acfs: Format complete.

 

# /sbin/mkfs -t acfs /dev/asm/vol3-142

mkfs.acfs: version         = 11.2.0.3.0

mkfs.acfs: on-disk version = 39.0

mkfs.acfs: volume          = /dev/asm/vol3-142

mkfs.acfs: volume size     = 1073741824

mkfs.acfs: Format complete.

 

# mount -t acfs /dev/asm/vol1-142 /acfs1

# mount -t acfs /dev/asm/vol2-142 /acfs2

# mount -t acfs /dev/asm/vol3-142 /acfs3

 

# mount | grep acfs

/dev/asm/vol1-142 on /acfs1 type acfs (rw)

/dev/asm/vol2-142 on /acfs2 type acfs (rw)

/dev/asm/vol3-142 on /acfs3 type acfs (rw)

 

Copy some files into the new file systems.

 

$ cp diag/asm/+asm/+ASM/trace/* /acfs1

$ cp diag/rdbms/db/DB/trace/* /acfs1

$ cp oradata/DB/datafile/* /acfs1

 

$ cp diag/asm/+asm/+ASM/trace/* /acfs2

$ cp oradata/DB/datafile/* /acfs2

 

$ cp fra/DB/backupset/* /acfs3

 

Check the used space.

 

$ df -h /acfs?

Filesystem         Size  Used Avail Use% Mounted on

/dev/asm/vol1-142  4.0G  1.3G  2.8G  31% /acfs1

/dev/asm/vol2-142  2.0G  1.3G  797M  62% /acfs2

/dev/asm/vol3-142  1.0G  577M  448M  57% /acfs3

 

ACFS disk group rebalance

 

Let’s add one disk to the ACFS disk group and monitor the rebalance operation.

 

SQL> alter diskgroup ACFS add disk 'ORCL:ASMDISK4';

 

Diskgroup altered.

 

SQL>

 

Get the ARB0 PID from the ASM alert log.

 

$ tail alert_+ASM.log

Sat Feb 15 12:44:53 2014

SQL> alter diskgroup ACFS add disk 'ORCL:ASMDISK4'

NOTE: Assigning number (2,2) to disk (ORCL:ASMDISK4)

NOTE: starting rebalance of group 2/0x80486fe8 (ACFS) at power 1

SUCCESS: alter diskgroup ACFS add disk 'ORCL:ASMDISK4'

Starting background process ARB0

Sat Feb 15 12:45:00 2014

ARB0 started with pid=27, OS id=10767

 

And monitor the rebalance by tailing the ARB0 trace file.

 

$ tail -f ./+ASM_arb0_10767.trc

 

*** ACTION NAME:() 2014-02-15 12:45:00.151

ARB0 relocating file +ACFS.1.1 (2 entries)

ARB0 relocating file +ACFS.2.1 (1 entries)

ARB0 relocating file +ACFS.3.1 (42 entries)

ARB0 relocating file +ACFS.3.1 (1 entries)

ARB0 relocating file +ACFS.4.1 (2 entries)

ARB0 relocating file +ACFS.5.1 (1 entries)

ARB0 relocating file +ACFS.6.1 (1 entries)

ARB0 relocating file +ACFS.7.1 (1 entries)

ARB0 relocating file +ACFS.8.1 (1 entries)

ARB0 relocating file +ACFS.9.1 (1 entries)

ARB0 relocating file +ACFS.256.839587727 (120 entries)

 

*** 2014-02-15 12:46:58.905

ARB0 relocating file +ACFS.256.839587727 (117 entries)

ARB0 relocating file +ACFS.256.839587727 (1 entries)

ARB0 relocating file +ACFS.257.839587727 (17 entries)

ARB0 relocating file +ACFS.258.839590377 (17 entries)

 

*** 2014-02-15 12:47:50.744

ARB0 relocating file +ACFS.259.839590377 (119 entries)

ARB0 relocating file +ACFS.259.839590377 (1 entries)

ARB0 relocating file +ACFS.260.839590389 (17 entries)

ARB0 relocating file +ACFS.261.839590389 (60 entries)

ARB0 relocating file +ACFS.261.839590389 (1 entries)

 

We see that the rebalance is per ASM file. This is exactly the same behaviour as with database files – ASM performs the rebalance on a per file basis. The ASM metadata files (1-9) get rebalanced first. The ASM then rebalances the volume file 256, DRL file 257, and so on.

 

From this we see that the ASM rebalances volume files (and other ASM files), not the OS files in the associated file system(s).

 

Disk online operation in an ACFS disk group

 

When an ASM disk goes offline, the ASM creates the staleness registry and staleness directory, to track the extents that should be modified on the offline disk. Once the disk comes back online, the ASM uses that information to perform the fast mirror resync.

 

That functionality is not available to volume files in ASM version 11.2. Instead, to online the disk, ASM rebuilds the entire content of that disk. This is why the disk online performance for disk groups with volume files is inferior to that of disk groups with standard database files.

 

The fast mirror resync functionality for volume files is available in ASM version 12.1 and later.

 

Conclusion

 

ASM disk groups can be used to host general purpose cluster file systems. ASM does this by creating volume files inside the disk groups, which are then exposed to the operating system as block devices.

 

Existing ASM disk group mirroring functionality (normal and high redundancy) can be used to protect the user files at the file system level. ASM does this by mirroring extents for the volume files, in the same fashion as it does for any other ASM file. The volume files have their own extent sizes, unlike the standard database files that inherit the (initial) extent size from the disk group AU size.

The rebalance operation, in an ASM disk group that hosts ASM cluster file system volumes, is per volume file, not per individual user file stored in the associated file system(s).

ASM spfile in a disk group

Starting with ASM version 11.2, the ASM spfile can be stored in an ASM disk group. Indeed, during a new ASM installation, the Oracle Universal Installer (OUI) will place the ASM spfile in the disk group that gets created during the installation. This is true for both Oracle Restart (single instance environments) and Cluster installations. It should be noted that the first disk group created during the installation is the default spfile location, but not a requirement. The spfile can still be on a file system, in say $ORACLE_HOME/dbs directory.

 

New ASMCMD commands

 

To support this feature, new ASMCMD commands were introduced to back up, copy and move the ASM spfile. The commands are:

  • spbackup – backs up an ASM spfile to a backup file. The backup file is not a special file type and is not identified as an spfile.
  • spcopy – copies an ASM spfile from the source location to an spfile in the destination location.
  • spmove – moves an ASM spfile from source to destination and automatically updates the GPnP profile.

 

The SQL commands CREATE PFILE FROM SPFILE and CREATE SPFILE FROM PFILE are still valid for the ASM spfile stored in the disk group.
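As an illustration of the ASMCMD commands listed above, copying the ASM spfile to another disk group might look like this (a sketch only; the source registry file name is the one shown in the next section, and RECO is just an example destination disk group):

$ asmcmd spcopy +DATA/ASM/ASMPARAMETERFILE/registry.253.822856169 +RECO/spfileASM.ora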

 

ASM spfile in disk group DATA

 

In my environment, the ASM spfile is (somewhere) in the disk group DATA. Let’s find it:

 

$ asmcmd find --type ASMPARAMETERFILE +DATA "*"

+DATA/ASM/ASMPARAMETERFILE/REGISTRY.253.822856169

 

As we can see, the ASM spfile is in a special location and it has ASM file number 253. The ASM spfile stored in the disk group is a registry file, and will always be the ASM metadata file number 253.

 

Of course, we see the same thing from the sqlplus:

 

$ sqlplus / as sysasm

 

SQL> show parameter spfile

 

NAME   TYPE   VALUE

------ ------ -------------------------------------------------

spfile string +DATA/ASM/ASMPARAMETERFILE/registry.253.822856169

 

SQL>

 

Let’s make a backup of that ASM spfile.

 

$ asmcmd spbackup +DATA/ASM/ASMPARAMETERFILE/REGISTRY.253.822856169 /tmp/ASMspfile.backup

 

And check out the contents of the file:

 

$ strings /tmp/ASMspfile.backup

+ASM.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value

+ASM.asm_diskgroups='RECO','ACFS'#Manual Mount

*.asm_power_limit=1

*.large_pool_size=12M

*.remote_login_passwordfile='EXCLUSIVE'

 

As we can see, this is a copy of the ASM spfile, that includes the parameters and associated comments.

 

ASM spfile discovery

 

So, how can the ASM instance read the spfile on startup, if the spfile is in a disk group that is not mounted yet? Not only that – the ASM doesn’t really know which disk group has the spfile, or even if the spfile is in a disk group. And what is the value of the ASM discovery string?

 

The ASM Admin guide says this on the topic:

 

When an Oracle ASM instance searches for an initialization parameter file, the search order is:

  1. The location of the initialization parameter file specified in the Grid Plug and Play (GPnP) profile.
  2. If the location has not been set in the GPnP profile, then the search order changes to:
    1. SPFILE in the Oracle ASM instance home (e.g. $ORACLE_HOME/dbs/spfile+ASM.ora)
    2. PFILE in the Oracle ASM instance home

 

This does not tell us anything about the ASM discovery string, but at least it tells us about the spfile and the GPnP profile. It turns out the ASM discovery string is also in the GPnP profile. Here are the values from an Exadata environment:

 

$ gpnptool getpval -p=profile.xml -asm_dis -o-

o/*/*

$ gpnptool getpval -p=profile.xml -asm_spf -o-

+DBFS_DG/spfileASM.ora

 

There is no GPnP profile in a single instance setup, so this information is in the ASM resource (ora.asm), stored in the Oracle Local Registry (OLR). Here are the values from a single instance environment:

 

$ crsctl stat res ora.asm -p | egrep "ASM_DISKSTRING|SPFILE"

ASM_DISKSTRING=

SPFILE=+DATA/ASM/ASMPARAMETERFILE/registry.253.822856169

 

So far so good. Now the ASM knows where to look for ASM disks and where the spfile is. But the disk group is not mounted yet, as the ASM instance still hasn’t started up, so how can ASM read the spfile?

 

The trick is in the ASM disk headers. To support the ASM spfile in a disk group, two new fields were added to the ASM disk header:

  • kfdhdb.spfile – Allocation unit number of the ASM spfile.
  • kfdhdb.spfflg – ASM spfile flag. If this value is 1, the ASM spfile is on this disk in allocation unit kfdhdb.spfile.

 

As part of the disk discovery process, the ASM instance reads the disk headers and looks for the spfile information. Once it finds the disks that have the spfile, it can read the actual initialization parameters.

 

Let’s have a look at my disk group DATA. First check the disk group state and redundancy

 

$ asmcmd lsdg -g DATA | cut -c1-26

Inst_ID  State    Type

1  MOUNTED  NORMAL

 

The disk group is mounted and the redundancy is normal. This means the ASM spfile will be mirrored, so we should see two disks with the kfdhdb.spfile and kfdhdb.spfflg values set. Let’s have a look:

 

$ for disk in `asmcmd lsdsk -G DATA --suppressheader`

> do

> echo $disk

> kfed read $disk | grep spf

> done

/dev/sdc1

kfdhdb.spfile:                       46 ; 0x0f4: 0x0000002e

kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001

/dev/sdd1

kfdhdb.spfile:                     2212 ; 0x0f4: 0x000008a4

kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001

/dev/sde1

kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000

kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000

 

As we can see, two disks have the ASM spfile.

 

Let’s check the contents of the Allocation Unit 46 on disk /dev/sdc1:

 

$ dd if=/dev/sdc1 bs=1048576 skip=46 count=1 | strings

+ASM.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value

+ASM.asm_diskgroups='RECO','ACFS'#Manual Mount

*.asm_power_limit=1

*.large_pool_size=12M

*.remote_login_passwordfile='EXCLUSIVE'

1+0 records in

1+0 records out

1048576 bytes (1.0 MB) copied, 0.0352732 s, 29.7 MB/s

 

The AU 46 on disk /dev/sdc1 indeed contains the ASM spfile.

 

ASM spfile alias block

 

In addition to the new ASM disk header fields, there is a new metadata block type – KFBTYP_ASMSPFALS – that describes the ASM spfile alias. The ASM spfile alias block will be the last block in the ASM spfile.

 

Let’s have a look at the last block of the Allocation Unit 46:

 

$ kfed read /dev/sdc1 aun=46 blkn=255

kfbh.endian:                          1 ; 0x000: 0x01

kfbh.hard:                          130 ; 0x001: 0x82

kfbh.type:                           27 ; 0x002: KFBTYP_ASMSPFALS

kfbh.datfmt:                          1 ; 0x003: 0x01

kfbh.block.blk:                     255 ; 0x004: blk=255

kfbh.block.obj:                     253 ; 0x008: file=253

kfbh.check:                   806373865 ; 0x00c: 0x301049e9

kfbh.fcn.base:                        0 ; 0x010: 0x00000000

kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000

kfbh.spare1:                          0 ; 0x018: 0x00000000

kfbh.spare2:                          0 ; 0x01c: 0x00000000

kfspbals.incarn:              822856169 ; 0x000: 0x310bc9e9

kfspbals.blksz:                     512 ; 0x004: 0x00000200

kfspbals.size:                        3 ; 0x008: 0x0003

kfspbals.path.len:                    0 ; 0x00a: 0x0000

kfspbals.path.buf:                      ; 0x00c: length=0

 

There is not much in this metadata block. Most of the entries hold the block header info (fields kfbh.*). The actual ASM spfile alias data (fields kfspbals.*) has only a few entries. The spfile incarnation (822856169) is part of the file name (REGISTRY.253.822856169), the block size is 512 bytes and the file size is 3 blocks. The path info is empty, meaning I don’t actually have the ASM spfile alias.

 

Let’s create one. I will first create a pfile from the existing spfile and then create the spfile alias from that pfile.

 

$ sqlplus / as sysasm

 

SQL> create pfile='/tmp/pfile+ASM.ora' from spfile;

 

File created.

 

SQL> shutdown abort;

ASM instance shutdown

 

SQL> startup pfile='/tmp/pfile+ASM.ora';

ASM instance started

 

Total System Global Area 1135747072 bytes

Fixed Size                  2297344 bytes

Variable Size            1108283904 bytes

ASM Cache                  25165824 bytes

ASM diskgroups mounted

 

SQL> create spfile='+DATA/spfileASM.ora' from pfile='/tmp/pfile+ASM.ora';

 

File created.

 

SQL> exit

 

Looking for the ASM spfile again shows two entries:

 

$ asmcmd find --type ASMPARAMETERFILE +DATA "*"

+DATA/ASM/ASMPARAMETERFILE/REGISTRY.253.843597139

+DATA/spfileASM.ora

 

We now see the ASM spfile itself (REGISTRY.253.843597139) and its alias (spfileASM.ora). Having a closer look at spfileASM.ora confirms this is indeed the alias for the registry file:

 

$ asmcmd ls -l +DATA/spfileASM.ora

Type              Redund  Striped  Time             Sys  Name

ASMPARAMETERFILE  MIRROR  COARSE   MAR 30 20:00:00  N    spfileASM.ora => +DATA/ASM/ASMPARAMETERFILE/REGISTRY.253.843597139

 

Check the ASM spfile alias block now:

 

$ kfed read /dev/sdc1 aun=46 blkn=255

kfbh.endian:                          1 ; 0x000: 0x01

kfbh.hard:                          130 ; 0x001: 0x82

kfbh.type:                           27 ; 0x002: KFBTYP_ASMSPFALS

kfbh.datfmt:                          1 ; 0x003: 0x01

kfbh.block.blk:                     255 ; 0x004: blk=255

kfbh.block.obj:                     253 ; 0x008: file=253

kfbh.check:                  2065104480 ; 0x00c: 0x7b16fe60

kfbh.fcn.base:                        0 ; 0x010: 0x00000000

kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000

kfbh.spare1:                          0 ; 0x018: 0x00000000

kfbh.spare2:                          0 ; 0x01c: 0x00000000

kfspbals.incarn:              843597139 ; 0x000: 0x32484553

kfspbals.blksz:                     512 ; 0x004: 0x00000200

kfspbals.size:                        3 ; 0x008: 0x0003

kfspbals.path.len:                   13 ; 0x00a: 0x000d

kfspbals.path.buf:        spfileASM.ora ; 0x00c: length=13

 

Now we see that the alias file name appears in the ASM spfile alias block. Note the new incarnation number, as this is a new ASM spfile, created from the pfile.

 

Conclusion

 

Starting with ASM version 11.2, the ASM spfile can be stored in an ASM disk group. To support this feature, we now have new ASMCMD commands and, under the covers, we have new ASM metadata structures.

[Repost] ASM Disk Group Attributes

Disk group attributes were introduced in ASM version 11.1. They are bound to a disk group, rather than the ASM instance. Some attributes can be set only at the time the disk group is created, some only after the disk group is created, and some attributes can be set at any time, by altering the disk group.

 

This is the follow up on the ASM Attributes Directory post.

 

ACCESS_CONTROL.ENABLED

 

This attribute determines whether ASM File Access Control is enabled for a disk group. The value can be TRUE or FALSE (default).

 

If the attribute is set to TRUE, accessing ASM files is subject to access control. If FALSE, any user can access every file in the disk group. All other operations behave independently of this attribute.

 

This attribute can only be set when altering a disk group.

 

ACCESS_CONTROL.UMASK

 

This attribute determines which permissions are masked out on the creation of an ASM file for the owner, group, and others not in the user group. This attribute applies to all files in a disk group.

 

The values can be combinations of three digits {0|2|6} {0|2|6} {0|2|6}. The default is 066.

 

Setting to ‘0’ does not mask anything. Setting to ‘2’ masks out write permission. Setting to ‘6’ masks out both read and write permissions.

 

Before setting the ACCESS_CONTROL.UMASK disk group attribute, the ACCESS_CONTROL.ENABLED has to be set to TRUE.

 

This attribute can only be set when altering a disk group.
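A minimal sketch of enabling file access control on an existing disk group (assuming a disk group named DATA and a umask of 026, both example values):

SQL> alter diskgroup DATA set attribute 'access_control.enabled' = 'TRUE';
SQL> alter diskgroup DATA set attribute 'access_control.umask' = '026';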

 

AU_SIZE

 

The AU_SIZE attribute controls the allocation unit size and can only be set when creating the disk group.

 

It is worth spelling out that each disk group can have a different allocation unit size.
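Because AU_SIZE can only be set at creation time, it has to be specified in the CREATE DISKGROUP statement. A sketch, assuming an example disk path (/dev/sdg1) and a 4 MB allocation unit:

SQL> create diskgroup PLAY external redundancy
disk '/dev/sdg1'
attribute 'au_size' = '4M', 'compatible.asm' = '11.2';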

 

CELL.SMART_SCAN_CAPABLE [Exadata]

 

This attribute is applicable to Exadata when the disk group is created from the grid disks on the storage cells. It enables the smart scan functionality for the objects stored in that disk group.

 

COMPATIBLE.ASM

 

The value for the disk group COMPATIBLE.ASM attribute determines the minimum software version for an ASM instance that can use the disk group. This setting also affects the format of the ASM metadata structures.

 

The default value for the COMPATIBLE.ASM is 10.1, when using the CREATE DISKGROUP statement, the ASMCMD mkdg command and the Enterprise Manager Create Disk Group page.

 

When creating a disk group with the ASMCA, the default value is 11.2 in ASM version 11gR2 and 12.1 in ASM version 12c.

 

COMPATIBLE.RDBMS

 

The value for the COMPATIBLE.RDBMS attribute determines the minimum COMPATIBLE database initialization parameter setting for any database instance that is allowed to use the disk group.

 

Before advancing the COMPATIBLE.RDBMS attribute, ensure that the values for the COMPATIBLE initialization parameter for all databases that access the disk group are set to at least the value of the new setting for the COMPATIBLE.RDBMS.

 

COMPATIBLE.ADVM

 

The value for the COMPATIBLE.ADVM attribute determines whether the disk group can contain the ASM volumes. The value must be set to 11.2 or higher. Before setting this attribute, the COMPATIBLE.ASM value must be 11.2 or higher. Also, the ADVM volume drivers must be loaded in the supported environment.

 

By default, the value of the COMPATIBLE.ADVM attribute is empty until set.

 

CONTENT.CHECK [12c]

 

The CONTENT.CHECK attribute enables or disables content checking when performing a disk group rebalance. The attribute value can be TRUE or FALSE.

 

The content checking can include Hardware Assisted Resilient Data (HARD) checks on user data, validation of file types from the file directory against the block contents and the file directory information, and mirror side comparison.

 

When the attribute is set to TRUE, the logical content checking is enabled for all rebalance operations.

 

The content checking is also known as the disk scrubbing feature.

 

CONTENT.TYPE [11.2.0.3, Exadata]

 

The CONTENT.TYPE attribute identifies the disk group type that can be DATA, RECOVERY or SYSTEM. It determines the distance to the nearest partner disk/failgroup. The default value is DATA which specifies a distance of 1, the value of RECOVERY specifies a distance of 3 and the value of SYSTEM specifies a distance of 5.

 

The distance of 1 simply means that ASM considers all disks for partnership. The distance of 3 means that every 3rd disk will be considered for partnership and the distance of 5 means that every 5th disk will be considered for partnership.

 

The attribute can be specified when creating or altering a disk group. If the CONTENT.TYPE attribute is set or changed using the ALTER DISKGROUP, the new configuration does not take effect until a disk group rebalance is explicitly run.

 

The CONTENT.TYPE attribute is only valid for NORMAL and HIGH redundancy disk groups. The COMPATIBLE.ASM attribute must be set to 11.2.0.3 or higher to enable the CONTENT.TYPE attribute.
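A sketch of setting the attribute on an existing disk group and then running the rebalance that makes the new configuration effective (RECO is an example disk group name, and COMPATIBLE.ASM is assumed to already be at 11.2.0.3 or higher):

SQL> alter diskgroup RECO set attribute 'content.type' = 'recovery';
SQL> alter diskgroup RECO rebalance power 4;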

 

DISK_REPAIR_TIME

 

The value of the DISK_REPAIR_TIME attribute determines the amount of time ASM will keep the disk offline, before dropping it. This is relevant to the fast mirror resync feature for which the COMPATIBLE.ASM attribute must be set to 11.1 or higher.

 

This attribute can only be set when altering a disk group.
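A sketch of changing it with SQL (an ASMCMD example for the same attribute appears near the end of this post):

SQL> alter diskgroup DATA set attribute 'disk_repair_time' = '8.5h';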

 

FAILGROUP_REPAIR_TIME [12c]

 

The FAILGROUP_REPAIR_TIME attribute specifies a default repair time for the failure groups in the disk group. The failure group repair time is used if ASM determines that an entire failure group has failed. The default value is 24 hours. If there is a repair time specified for a disk, such as with the DROP AFTER clause of the ALTER DISKGROUP OFFLINE DISK statement, that disk repair time overrides the failure group repair time.

 

This attribute can only be set when altering a disk group and is only applicable to NORMAL and HIGH redundancy disk groups.

 

IDP.BOUNDARY and IDP.TYPE [Exadata]

 

These attributes are used to configure Exadata storage, and are relevant for the Intelligent Data Placement feature.

 

PHYS_META_REPLICATED [12c]

 

The PHYS_META_REPLICATED attribute tracks the replication status of a disk group. When the ASM compatibility of a disk group is advanced to 12.1 or higher, the physical metadata of each disk is replicated. This metadata includes the disk header, free space table blocks and allocation table blocks. The replication is performed online asynchronously. This attribute value is set to true by ASM if the physical metadata of every disk in the disk group has been replicated.

 

This attribute is only defined in a disk group with the COMPATIBLE.ASM set to 12.1 and higher. The attribute is read-only and is intended for information only – a user cannot set or change its value. The values are either TRUE or FALSE.

 

SECTOR_SIZE

 

The SECTOR_SIZE attribute specifies the sector size for disks in a disk group and can only be set when creating a disk group.

 

The values for the SECTOR_SIZE can be 512, 4096 or 4K (provided the disks support those values). The default value is platform dependent. The COMPATIBLE.ASM and COMPATIBLE.RDBMS attributes must be set to 11.2 or higher to set the sector size to a value other than the default value.

 

NOTE: ASM Cluster File System (ACFS) does not support 4 KB sector drives.

 

STORAGE.TYPE

 

The STORAGE.TYPE attribute specifies the type of the disks in the disk group. The possible values are EXADATA, PILLAR, ZFSAS and OTHER. If the attribute is set to EXADATA|PILLAR|ZFSAS then all disks in the disk group must be of that type. If the attribute is set to OTHER, any types of disks can be in the disk group.

 

If the STORAGE.TYPE disk group attribute is set to PILLAR or ZFSAS, the Hybrid Columnar Compression (HCC) functionality can be enabled for the objects in the disk group. Exadata already supports HCC.

 

NOTE: The ZFS storage must be provisioned through Direct NFS (dNFS) and the Pillar Axiom storage must be provisioned via the SCSI or the Fiber Channel interface.

 

To set the STORAGE.TYPE attribute, the COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group attributes must be set to 11.2.0.3 or higher. For maximum support with ZFS storage, set COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group attributes to 11.2.0.4 or higher.

 

The STORAGE.TYPE attribute can be set when creating or altering a disk group. The attribute cannot be set when clients are connected to the disk group. For example, the attribute cannot be set when an ADVM volume is enabled on the disk group.

 

The attribute is not visible in the V$ASM_ATTRIBUTE view or with the ASMCMD lsattr command until the attribute has been set.

 

THIN_PROVISIONED [12c]

 

The THIN_PROVISIONED attribute enables or disables the functionality to discard unused storage space after a disk group rebalance is completed. The attribute value can be TRUE or FALSE (default).

 

Storage vendor products that support thin provisioning have the capability to reuse the discarded storage space for a more efficient overall physical storage utilization.

 

APPLIANCE.MODE [11.2.0.4, Exadata]

 

The APPLIANCE.MODE attribute improves the disk rebalance completion time when dropping one or more ASM disks. This means that redundancy is restored faster after a (disk) failure. The attribute is automatically enabled when creating a new disk group in Exadata. Existing disk groups must explicitly set the attribute using the ALTER DISKGROUP command. This feature is also known as fixed partnering.

 

The attribute can only be enabled on disk groups that meet the following requirements:

 

  • The Oracle ASM disk group attribute COMPATIBLE.ASM is set to release 11.2.0.4 or later.
  • The CELL.SMART_SCAN_CAPABLE attribute is set to TRUE.
  • All disks in the disk group are the same type of disk, such as all hard disks or all flash disks.
  • All disks in the disk group are the same size.
  • All failure groups in the disk group have an equal number of disks.
  • No disk in the disk group is offline.

 

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3 running Oracle Database 11g Release 2 (11.2) release 11.2.0.4

 

NOTE: This feature is not available in Oracle Database version 12.1.0.1.

 

Hidden disk group attributes

 

Not all disk group attributes are documented. Here are some of the more interesting ones.

 

_REBALANCE_COMPACT

 

The _REBALANCE_COMPACT attribute is related to the compacting phase of the rebalance. The attribute value can be TRUE (default) or FALSE. Setting the attribute to FALSE, disables the compacting phase of the disk group rebalance.

 

_EXTENT_COUNTS

 

The _EXTENT_COUNTS attribute is related to the variable extent size feature; it determines the points at which the extent size is incremented.

 

The value of the attribute is "20000 20000 214748367", which means that the first 20000 extents will be 1 AU in size, the next 20000 extents will have the size determined by the second value of the _EXTENT_SIZES attribute, and the rest will have the size determined by the third value of the _EXTENT_SIZES attribute.

 

_EXTENT_SIZES

 

The _EXTENT_SIZES is the second attribute relevant to the variable extents size feature, and it determines the extent size increments – in the number of AUs.

 

In ASM version 11.1 the attribute value is "1 8 64". In ASM version 11.2 and later, the value of the _EXTENT_SIZES is "1 4 16".

 

V$ASM_ATTRIBUTE view and ASMCMD lsattr command

 

The disk group attributes can be queried via the V$ASM_ATTRIBUTE view and the ASMCMD lsattr command.

 

This is one way to list the attributes for disk group PLAY:

 

$ asmcmd lsattr -G PLAY -l

 

Name                    Value

access_control.enabled  FALSE

access_control.umask     066

au_size                  4194304

cell.smart_scan_capable FALSE

compatible.asm           11.2.0.0.0

compatible.rdbms         11.2.0.0.0

disk_repair_time         3.6h

sector_size              512

$

 

The disk group attributes can be modified via SQL ALTER DISKGROUP SET ATTRIBUTE, ASMCMD setattr command and the ASMCA. This is an example of using the ASMCMD setattr command to modify the DISK_REPAIR_TIME attribute for disk group PLAY:

 

$ asmcmd setattr -G PLAY disk_repair_time '4.5 H'

 

Check the new value:

 

$ asmcmd lsattr -G PLAY -l disk_repair_time

Name              Value

disk_repair_time  4.5 H

$

 

Conclusion

 

Disk group attributes, introduced in ASM version 11.1, are a great way to fine tune the disk group capabilities. Some of the attributes are Exadata specific (as marked) and some are available in certain versions only (as marked). Most disk group attributes are documented and accessible via the V$ASM_ATTRIBUTE view. Some of the undocumented attributes were also discussed and those should not be modified unless advised by Oracle Support.

[Repost] How to reconfigure Oracle Restart

Reposted from the asmsupportguy blog.

The other day I had to reconfigure an Oracle Restart 12c environment, and I couldn’t find my blog post on that. It turns out that I never published it here, as my MOS document on this topic was created back in 2010 when this blog didn’t exist!

 

The original document was written for Oracle Restart 11gR2, but it is still valid for 12c. Here it is.

 

Introduction

 

This document is about reconfiguring the Oracle Restart. One reason for such action might be the server rename. If the server was renamed and then rebooted, the ASM instance startup would fail with ORA-29701: unable to connect to Cluster Synchronization Service.

 

The solution is to reconfigure Oracle Restart as follows.

 

  1. Remove Oracle Restart configuration

 

This step should be performed as the privileged user (root).

 

# $GRID_HOME/crs/install/roothas.pl -deconfig -force

 

The expected result is “Successfully deconfigured Oracle Restart stack”.

 

  2. Reconfigure Oracle Restart

 

This step should also be performed as the privileged user (root).

 

# $GRID_HOME/crs/install/roothas.pl

 

The expected result is “Successfully configured Oracle Grid Infrastructure for a Standalone Server”

 

  3. Add ASM back to Oracle Restart configuration

 

This step should be performed as the Grid Infrastructure owner (grid user).

 

$ srvctl add asm

 

The expected result is no output, just a return to the operating system prompt.

 

  4. Start ASM instance

 

This step should be performed as the Grid Infrastructure owner (grid user).

 

$ srvctl start asm

 

That should start the ASM instance.

 

Note that at this time there will be no ASM initialization or server parameter file.

 

  5. Recreate ASM server parameter file (SPFILE)

 

This step should be performed as the Grid Infrastructure owner (grid user).

 

Create a temporary initialization parameter file (e.g. /tmp/init+ASM.ora) with content similar to this (of course, with your own disk group names):

 

asm_diskgroups='DATA','RECO'

large_pool_size=12M

remote_login_passwordfile='EXCLUSIVE'

 

Mount the disk group where the new server parameter file (SPFILE) will reside (e.g. DATA) and create the SPFILE:

 

$ sqlplus / as sysasm

 

SQL> alter diskgroup DATA mount;

 

Diskgroup altered.

 

SQL> create spfile='+DATA' from pfile='/tmp/init+ASM.ora';

 

File created.

 

SQL> show parameter spfile

 

NAME   TYPE   VALUE

------ ------ -------------------------------------------------

spfile string +DATA/asm/asmparameterfile/registry.253.707737977

 

  6. Restart HAS stack

 

This step should be performed as the Grid Infrastructure owner (grid user).

 

$ crsctl stop has

 

$ crsctl start has

 

  7. Add components back to Oracle Restart configuration

 

Add the database, the listener and other components, back into the Oracle Restart configuration.

 

7.1. Add database

 

This step should be performed as RDBMS owner (oracle user).

 

In 11gR2 the command is:

 

$ srvctl add database -d db_unique_name -o oracle_home

 

In 12c the command is:

 

$ srvctl add database -db db_unique_name -oraclehome oracle_home
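For example, with a hypothetical database unique name of orcl and an Oracle home of /u01/app/oracle/product/12.1.0/dbhome_1, the 12c command would look like this:

$ srvctl add database -db orcl -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1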

 

7.2. Add listener

 

This step should be performed as the Grid Infrastructure owner (grid user).

 

$ srvctl add listener

 

7.3. Add other components

 

For information on how to add back additional components in 11gR2 see:

Oracle® Database Administrator’s Guide 11g Release 2 (11.2)

Chapter 4 Configuring Automatic Restart of an Oracle Database

Section Configuring Oracle Restart

 

For 12c see:

Oracle® Database Administrator’s Guide 12c Release 1 (12.1)

Chapter 4 Configuring Automatic Restart of an Oracle Database

Section Configuring Oracle Restart

 

Conclusion

 

As noted earlier, I have published a MOS document on this: How to Reconfigure Oracle Restart – MOS Doc ID 986740.1.

[Repost] How to resize grid disks in Exadata

Reposted from the asmsupportguy blog.

This document explains how to resize the grid disks in Exadata (to make them larger), when there is free space in the cell disks. The free space can be anywhere on the cell disks. In other words, the grid disks can be built from and extended with the non-contiguous free space.

 

Typically, there is no free space in Exadata cell disks, in which case the MOS Doc ID 1465230.1 needs to be followed. But if there is free space in the cell disks, the procedure is much simpler and it can be accomplished with a single ASM rebalance operation.

 

This document has an example of performing this task on a quarter rack system (two database servers and three storage cells). With an Exadata with more storage cells, the only additional steps would be to resize the grid disks on additional storage cells.

 

Storage cells in the example are exacell01, exacell02 and exacell03, the disk group is DATA and the new grid disk size is 100000 MB.

 

Resize grid disks on storage cells

 

Log in as root to storage cell 1, and run the following command:

 

# cellcli -e alter griddisk  DATA_CD_00_exacell01, DATA_CD_01_exacell01, DATA_CD_02_exacell01, DATA_CD_03_exacell01, DATA_CD_04_exacell01, DATA_CD_05_exacell01, DATA_CD_06_exacell01, DATA_CD_07_exacell01, DATA_CD_08_exacell01, DATA_CD_09_exacell01, DATA_CD_10_exacell01, DATA_CD_11_exacell01 size=100000M;

 

Log in as root to storage cell 2, and run the following command:

 

# cellcli -e alter griddisk  DATA_CD_00_exacell02, DATA_CD_01_exacell02, DATA_CD_02_exacell02, DATA_CD_03_exacell02, DATA_CD_04_exacell02, DATA_CD_05_exacell02, DATA_CD_06_exacell02, DATA_CD_07_exacell02, DATA_CD_08_exacell02, DATA_CD_09_exacell02, DATA_CD_10_exacell02, DATA_CD_11_exacell02 size=100000M;

 

Log in as root to storage cell 3, and run the following command:

 

# cellcli -e alter griddisk  DATA_CD_00_exacell03, DATA_CD_01_exacell03, DATA_CD_02_exacell03, DATA_CD_03_exacell03, DATA_CD_04_exacell03, DATA_CD_05_exacell03, DATA_CD_06_exacell03, DATA_CD_07_exacell03, DATA_CD_08_exacell03, DATA_CD_09_exacell03, DATA_CD_10_exacell03, DATA_CD_11_exacell03 size=100000M;

 

As noted earlier, if you have a larger system, e.g. an Exadata half rack with 7 storage cells, resize the grid disks for disk group DATA on all other storage cells.

 

Resize ASM disks

 

Log in as the Grid Infrastructure owner to database server 1, and log in to ASM instance 1 as sysasm.

 

$ sqlplus / as sysasm

 

Resize all disks in disk group DATA, with the following command:

 

SQL> ALTER DISKGROUP DATA RESIZE ALL;

 

Note that there was no need to specify the new disk size, as ASM will get that from the grid disks. If you would like to speed up the rebalance, add REBALANCE POWER 32 to the above command.
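For example, the resize with a higher rebalance power would look like this:

SQL> ALTER DISKGROUP DATA RESIZE ALL REBALANCE POWER 32;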

 

The command will trigger the rebalance operation for disk group DATA.

 

Monitor the rebalance with the following command:

 

SQL> select * from gv$asm_operation;

 

Once the query returns “no rows selected”, the rebalance has completed and all disks in disk group DATA should show the new size.

 

Check that ASM sees the new size with the following command:

 

SQL> select name, total_mb from v$asm_disk_stat where name like 'DATA%';

 

The TOTAL_MB column should show 100000 (MB) for all disks in disk group DATA.

 

Conclusion

 

If there is free space on the Exadata cell disks, the disk group resize can be accomplished in two steps: a grid disk resize on all storage cells, followed by a disk resize in ASM. This requires only a single ASM rebalance operation.

 

I have published this on MOS as Doc ID 1684112.1.

【转】REQUIRED_MIRROR_FREE_MB

The REQUIRED_MIRROR_FREE_MB and the USABLE_FILE_MB are two very interesting columns in the V$ASM_DISKGROUP[_STAT] view. Oracle Support gets many questions about the meaning of those columns and how the values are calculated. I wanted to write about this, but I realised that I could not do it better than Harald van Breederode, so I asked him for permission to simply reference his write up. He agreed, so please have a look at his excellent post Demystifying ASM REQUIRED_MIRROR_FREE_MB and USABLE_FILE_MB.

 

How much space can I use?

 

Now that REQUIRED_MIRROR_FREE_MB and USABLE_FILE_MB have been explained, I would like to add that ASM does not prevent you from using all available space, which for file data means half of the total space in a normal redundancy disk group and one third in a high redundancy disk group. But if you do fill your disk group to the brim, there will be no room for files to grow or be added, and in the case of a disk failure there will be no room to restore the redundancy of some of the data until the failed disk is replaced and the rebalance completes.
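To put numbers on those limits, a query along these lines can help (a sketch only; raw_file_capacity_mb is just an illustrative alias for the redundancy-adjusted total mentioned above):

SQL> select name, type, total_mb, free_mb, usable_file_mb,
decode(type, 'NORMAL', total_mb/2, 'HIGH', total_mb/3, total_mb) raw_file_capacity_mb
from v$asm_diskgroup_stat;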

Exadata with ASM version 11gR2

 

In Exadata with ASM version 11.2 the REQUIRED_MIRROR_FREE_MB is reported as the size of the largest failgroup [1] in the disk group. To demonstrate, let’s look at an Exadata system with ASM version 11.2.0.4.

 

As in most Exadata installations, I have three disk groups.

 

[grid@exadb01 ~]$ sqlplus / as sysasm

 

SQL*Plus: Release 11.2.0.4.0 Production on [date]

 

SQL> select NAME, GROUP_NUMBER from v$asm_diskgroup_stat;

 

NAME      GROUP_NUMBER

--------- ------------

DATA                 1

DBFS_DG              2

RECO                 3

 

SQL>

 

For the purpose of this example, we will look at the disk group DBFS_DG. Normally there would be 10 disks per failgroup for disk group DBFS_DG; I have dropped a few disks to demonstrate that the REQUIRED_MIRROR_FREE_MB is reported as the size of the largest failgroup.

 

SQL> select FAILGROUP, count(NAME) "Disks", sum(TOTAL_MB) "MB"

from v$asm_disk_stat

where GROUP_NUMBER=2

group by FAILGROUP

order by 3;

 

FAILGROUP       Disks         MB

---------- ---------- ----------

EXACELL04           7     180096

EXACELL01           8     205824

EXACELL02           9     231552

EXACELL03          10     257280

 

SQL>

 

Note that the total space in the largest failgroup is 257280 MB.

 

Finally, we see that the REQUIRED_MIRROR_FREE_MB is reported as the size of the largest failgroup:

 

SQL> select NAME, TOTAL_MB, FREE_MB, REQUIRED_MIRROR_FREE_MB, USABLE_FILE_MB

from v$asm_diskgroup_stat

where GROUP_NUMBER=2;

 

NAME         TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB

---------- ---------- ---------- ----------------------- --------------

DBFS_DG        874752     801420                  257280         272070

 

SQL>

 

The ASM calculates the USABLE_FILE_MB using the following formula:

 

USABLE_FILE_MB = (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2

 

Which gives 272070 MB.
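As a quick check with the values reported above:

USABLE_FILE_MB = (801420 - 257280) / 2 = 544140 / 2 = 272070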

[1] In Exadata, all failgroups are typically of the same size

 

Exadata with ASM version 12cR1

 

In Exadata with ASM version 12cR1, the REQUIRED_MIRROR_FREE_MB is reported as the size of the largest disk [2] in the disk group.

 

Here is an example from an Exadata system with ASM version 12.1.0.2.0.

 

[grid@exadb03 ~]$ sqlplus / as sysasm

 

SQL*Plus: Release 12.1.0.2.0 Production on [date]

 

SQL> select NAME, GROUP_NUMBER from v$asm_diskgroup_stat;

 

NAME     GROUP_NUMBER

-------- ------------

DATA                1

DBFS_DG             2

RECO                3

 

SQL>

 

Again, I have failgroups of different sizes in the disk group DBFS_DG:

 

SQL> select FAILGROUP, count(NAME) "Disks", sum(TOTAL_MB) "MB"

from v$asm_disk_stat

where GROUP_NUMBER=2

group by FAILGROUP

order by 3;

 

FAILGROUP       Disks         MB

---------- ---------- ----------

EXACELL05           8     238592

EXACELL07           9     268416

EXACELL06          10     298240

 

SQL>

 

The total space in the largest failgroup is 298240 MB, but this time the REQUIRED_MIRROR_FREE_MB is reported as 29824 MB:

 

SQL> select NAME, TOTAL_MB, FREE_MB, REQUIRED_MIRROR_FREE_MB, USABLE_FILE_MB

from v$asm_diskgroup_stat

where GROUP_NUMBER=2;

 

NAME         TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB

---------- ---------- ---------- ----------------------- --------------

DBFS_DG        805248     781764                   29824         375970

 

SQL>

 

As we can see, that is the size of the largest disk in the disk group:

 

SQL> select max(TOTAL_MB) from v$asm_disk_stat where GROUP_NUMBER=2;

 

MAX(TOTAL_MB)

-------------

29824

 

SQL>

 

The USABLE_FILE_MB was calculated using the same formula:

 

USABLE_FILE_MB = (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2

 

Which gives 375970 MB.
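Again, as a quick check with the values reported above:

USABLE_FILE_MB = (781764 - 29824) / 2 = 751940 / 2 = 375970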

[2] In Exadata, all disks are typically of the same size

 

Conclusion

 

The REQUIRED_MIRROR_FREE_MB and USABLE_FILE_MB values are intended to assist DBAs and storage administrators with planning disk group capacity and redundancy. The values are reported, but not enforced, by ASM.

In Exadata with ASM version 12cR1, the REQUIRED_MIRROR_FREE_MB is reported as the size of the largest disk in the disk group. This is by design, reflecting field experience, which shows that it is individual disks that fail, not whole storage cells.

A method for manually recovering ASM files

Abstract
This invention provides a method for manually recovering ASM files, in the fields of computer system design and databases. A damaged or lost ASM disk header is reconstructed from the available ASM disk information, the PST information is then used to obtain details of the related disks, and finally the alias metadata is used to locate the data files on the ASM disks, so that the data files themselves can be obtained. By manually extracting files from damaged ASM disks, data can be recovered even when no backup exists.
Claims (5)
1. A method for manually recovering ASM files, characterised in that the method involves the ASM DISK HEADER (1), PST (2), FILE DIRECTORY (3), DISK DIRECTORY (4) and ALIAS DIRECTORY (5): the PST data is read, the ASM header information is reconstructed from fields such as kfdhdb.dskname, kfdhdb.grpname, kfdhdb.fgname and kfdhdb.dsksize, the ASM metadata is read to obtain the names of the data files stored on the ASM disks, and AMDU is then used to extract the corresponding files from the disks.
2. The method according to claim 1, characterised in that the ASM DISK HEADER holds information specific to the ASM disk, such as the ASM disk name, group name, AU size, and the creation and mount times; the ASM disk header can be read with the kfed tool provided by Oracle and is recorded in the first block of the first AU.
3. The method according to claim 1, characterised in that the PST tracks disk group membership; the second AU of each disk is reserved for the PST contents, and the last PST block is used for the disk group heartbeat, which prevents different clusters from mounting the same disk group at the same time.
4. The method according to claim 1, characterised in that the DISK DIRECTORY records the ASM disk information for the disk group, including the disk size, disk creation time, mount time and other information.
5. The method according to claim 1, characterised in that the ALIAS DIRECTORY records ASM alias information, including the disk on which a file resides, the file name and its alias, the file's directory name and other information.
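As claims 2 and 3 note, the disk header and the PST live at fixed locations and can be inspected with kfed. A minimal sketch (the device path /dev/mapper/asmdisk01 is just an assumed example): on a disk that holds a copy of the PST, allocation unit 1, block 0 should report a kfbh.type of KFBTYP_PST_META.

$ kfed read /dev/mapper/asmdisk01 aun=1 blkn=0 | grep kfbh.type

Note that kfed assumes the default allocation unit size, so for a disk group with a larger AU size the AU size must also be passed to kfed.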
Description

A method for manually recovering ASM files

Technical field

[0001] This invention relates to the fields of computer system design and databases, and specifically to a method for manually recovering ASM files.

Background

[0002] ASM stands for Automatic Storage Management, a feature introduced in Oracle 10g. It is a volume manager provided by Oracle to replace the LVM supplied by the operating system, and it supports both single-instance and RAC (multi-instance) configurations. It brings great convenience to Oracle database administrators: ASM manages disk groups automatically and provides data redundancy and optimisation. For administrators of large enterprise databases in particular, it frees them from the tedious day-to-day task of managing hundreds or thousands of data files, so that they can focus on more important work.

[0003] Before Oracle 10g, managing the hundreds or thousands of data files of a large database was dull, low-value work for the database administrator, who had to be familiar with the operating system LVM and take care of disk planning, LV striping and other system-level tasks. Automatic Storage Management greatly reduces this workload: the database administrator only needs to manage a handful of disk groups. A disk group is a logical unit managed by ASM and is made up of a set of disk devices. A disk group can be defined as the default disk group for a database, and Oracle then manages the storage automatically, including creating and deleting data files. Oracle automatically associates these files with the appropriate database objects, so when managing those objects we only need to supply the object name, instead of detailed file names as before.

[0004] ASM provides much of what storage technologies such as RAID and LVM (logical volume management) offer. Like those technologies, ASM lets you create a single disk group from a set of independent disks, which balances I/O across the disk group. ASM also implements striping and mirroring to improve I/O performance and data reliability. Unlike RAID or LVM, ASM implements striping and mirroring at the file level, which gives users a great deal of flexibility: different files within the same disk group can be given different storage attributes and therefore stored in different ways.

[0005] Because the ASM disk header contains critical ASM information, loss of the header data due to human error or a hardware problem will prevent the Oracle database from starting, and may in turn lead to the loss of the data on the ASM disks.

Summary of the invention

[0006] The core of this invention is to read the ASM metadata to obtain the names of the data files stored on the ASM disks and then use AMDU to extract the corresponding files from the disks, providing a means of data recovery when the ASM disk group cannot be mounted. The method reads the relevant information from the ASM DISK HEADER, including kfdhdb.driver.provstr, kfdhdb.dskname, kfdhdb.blksize, kfdhdb.ausize, kfdhdb.fstlocn, kfdhdb.f1b1locn and other fields, and uses these fields to obtain the basic information about the environment: the disks, the disk group, the AU size and so on. It then obtains the key ALIAS DIRECTORY metadata kfffde[0].xptr.au and uses it to get the details of the data file aliases, from which the database instance name and the names of the system, control and redo files are derived; AMDU is then used to extract the corresponding files. In this way the data files can be retrieved from the ASM disks, and the data recovered, even after the ASM instance is down.
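As a rough illustration of the first step in [0006], the header fields listed there can be read with kfed (a minimal sketch; the device path /dev/mapper/asmdisk01 is just an assumed example):

$ kfed read /dev/mapper/asmdisk01 | egrep 'dskname|grpname|blksize|ausize|fstlocn|f1b1locn'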

[0007] The benefit of this invention is that, by reading the PST data, reconstructing the ASM header information from fields such as kfdhdb.dskname, kfdhdb.grpname, kfdhdb.fgname and kfdhdb.dsksize, and then using the key ALIAS DIRECTORY metadata kfffde[0].xptr.au to obtain the details of the data file aliases, the data on the ASM disks can be extracted, reducing the data loss that would otherwise result from having no backup.

[0008] This method for manually recovering ASM files has the advantages described above: it can reduce data loss when no backup is available and thus protects enterprise data.

Brief description of the drawing

[0009] Figure 1 is the logical structure diagram of the invention.

[0010] Implementation

With reference to the drawing, the implementation of this structure is described below, taking the manual recovery of ASM files as an example.

[0011] As described in the summary of the invention, the logical structure of the method consists of the ASM DISK HEADER (1), PST (2), FILE DIRECTORY (3), DISK DIRECTORY (4) and ALIAS DIRECTORY (5). The PST data is read, the ASM header information is reconstructed from fields such as kfdhdb.dskname, kfdhdb.grpname, kfdhdb.fgname and kfdhdb.dsksize, the ASM metadata is read to obtain the names of the data files stored on the ASM disks, and AMDU is then used to extract the corresponding files from the disks, providing a means of data recovery when the ASM disk group cannot be mounted.
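As a rough illustration of the extraction step, AMDU can be pointed at the candidate disks and asked to extract an ASM file by disk group name and ASM file number (a sketch only; the disk string and the file number 256 are assumed examples, and in practice the file number would be obtained from the metadata as described above):

$ amdu -diskstring '/dev/mapper/asm*' -extract DATA.256

AMDU writes its report and any extracted files into a new amdu_<timestamp> subdirectory of the current directory.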
