Saturday, 20 February 2016

Loss of OCR and Voting Disk in Oracle 11gR2, with No Backups of the OCR or Voting Disk

In this article, I walk through a scenario in which both the OCR and the voting disk are lost and there are no backups available to restore from.
Environment:
Database Version: 11.2.0.3
2-node RAC: 10gnode1, 10gnode2
Database : PRIMT
Instance PRIMT1 on 10gnode1
Instance PRIMT2 on 10gnode2
ORA_CRS_HOME: /u01/app/11.2.0/grid3 (grid infrastructure home)
Below are the details of the database instances running on nodes 10gnode1 and 10gnode2.
10gnode1:
[oracle@10gnode1 ~]$ ps -ef | grep pmon
oracle    4159     1  0 18:15 ?        00:00:00 asm_pmon_+ASM1
oracle    4616     1  0 18:16 ?        00:00:00 ora_pmon_primt1
oracle    5069  5030  0 18:28 pts/0    00:00:00 grep pmon
10gnode2:
[oracle@10gnode2 ~]$
[oracle@10gnode2 ~]$ ps -ef | grep pmon
oracle    4248     1  0 18:14 ?        00:00:00 asm_pmon_+ASM2
oracle    4811     1  0 18:15 ?        00:00:00 ora_pmon_primt2
oracle    5822  5793  0 18:28 pts/0    00:00:00 grep pmon
CRS Status:
[oracle@10gnode1 ~]$ cd $ORA_CRS_HOME/bin
[oracle@10gnode1 bin]$
[oracle@10gnode1 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[oracle@10gnode1 bin]$
Details of the registered resources:
[oracle@10gnode1 bin]$ ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
ora.DATA.dg
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
ora.FRA.dg
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
ora.LISTENER.lsnr
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
ora.asm
               ONLINE  ONLINE       10gnode1                 Started
               ONLINE  ONLINE       10gnode2                 Started
ora.eons
               ONLINE  OFFLINE      10gnode1
               ONLINE  OFFLINE      10gnode2
ora.gsd
               OFFLINE OFFLINE      10gnode1
               OFFLINE OFFLINE      10gnode2
ora.net1.network
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
ora.ons
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
ora.registry.acfs
               ONLINE  OFFLINE      10gnode1
               ONLINE  ONLINE       10gnode2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.10gnode1.vip
      1        ONLINE  ONLINE       10gnode1
ora.10gnode2.vip
      1        ONLINE  ONLINE       10gnode2
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       10gnode2
ora.oc4j
      1        OFFLINE OFFLINE
ora.primt.db
      1        ONLINE  ONLINE       10gnode1                 Open
      2        ONLINE  ONLINE       10gnode2                 Open
ora.primt.primt_appl.svc
      1        ONLINE  ONLINE       10gnode1
ora.scan1.vip
      1        ONLINE  ONLINE       10gnode2
[oracle@10gnode1 bin]$
Now let’s check the OCR and voting disk details.
Voting Disk Details:
[oracle@10gnode1 bin]$ ./crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   823a909459594fcebfeab0588e3f9db2 (/dev/oracleasm/disks/DSK1) [CRS]
Located 1 voting disk(s).
[oracle@10gnode1 bin]$
OCR details:
[oracle@10gnode1 bin]$ ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3284
         Available space (kbytes) :     258836
         ID                       : 1842344055
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user
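As a side note, the "Logical corruption check bypassed" message above appears because ocrcheck was run as the oracle user; when run as root it also performs the logical corruption check. A quick illustration, using the same Grid home as above (output not shown):

[root@10gnode1 ~]# /u01/app/11.2.0/grid3/bin/ocrcheck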
Let's check whether any backup of the OCR is available.
[oracle@10gnode1 ~]$ cd $ORA_CRS_HOME/bin
[oracle@10gnode1 bin]$ ./ocrconfig -showbackup
PROT-24: Auto backups for the Oracle Cluster Registry are not available

10gnode2     2014/11/14 18:33:54     /u01/app/11.2.0/grid3/cdata/node-scan/backup_20141114_183354.ocr
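The PROT-24 message above means no automatic backups are available; all that exists is a single manual backup on 10gnode2, which we will be deleting shortly. In a healthy cluster, a manual backup and a logical export can be taken at any time as root, which is the cheapest insurance against exactly this scenario (the export path below is just an example):

[root@10gnode1 ~]# /u01/app/11.2.0/grid3/bin/ocrconfig -manualbackup
[root@10gnode1 ~]# /u01/app/11.2.0/grid3/bin/ocrconfig -export /home/oracle/ocr_export.dmp
[root@10gnode1 ~]# /u01/app/11.2.0/grid3/bin/ocrconfig -showbackup manual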
Now, let me collect the details of the ASM diskgroup hosting the OCR and voting disk.
[oracle@10gnode1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[oracle@10gnode1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Fri Nov 14 18:36:27 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
CRS                            MOUNTED
DATA                           MOUNTED
FRA                            MOUNTED

SQL>  select a.name,a.path,a.group_number from v$asm_disk a,v$asm_diskgroup b where a.group_number=b.group_number;

NAME                           PATH                                               GROUP_NUMBER
------------------------------ -------------------------------------------------- ------------
FRA_0001                       /dev/oracleasm/disks/DSK5                                     3
FRA_0000                       /dev/oracleasm/disks/DSK4                                     3
DATA_0001                      /dev/oracleasm/disks/DSK3                                     2
DATA_0000                      /dev/oracleasm/disks/DSK2                                     2
CRS_0000                       /dev/oracleasm/disks/DSK1                                     1
The disk hosting the OCR and voting disk is "/dev/oracleasm/disks/DSK1". Let me collect its details and then corrupt it to simulate the loss of the voting disk and OCR.
We cannot simply delete the OCR file from within ASM, since it is currently in use by CRS. Any attempt to delete it throws the following error:
ASMCMD> rm -rf +crs/node-scan/ocrfile/REGISTRY.255.854106767
ORA-15032: not all alterations performed
ORA-15028: ASM file '+crs/node-scan/ocrfile/REGISTRY.255.854106767' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
From the output below, it is clear that the OS-level device behind the CRS ASM disk is "/dev/sdc1".
[root@10gnode1 ~]# /etc/init.d/oracleasm querydisk -p DSK1
ASM disk name contains an invalid character: "-"
Disk "DSK1" is a valid ASM disk on device [8, 33]
[root@10gnode1 ~]#
[root@10gnode1 ~]# ls -lrt /dev/sd* | grep 33
brw-r----- 1 root disk 8, 33 Nov 14 18:12 /dev/sdc1
[root@10gnode1 ~]#
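An equivalent cross-check is to list the ASMLib device node itself; the major/minor numbers in its listing should match the [8, 33] reported by querydisk above:

[root@10gnode1 ~]# ls -l /dev/oracleasm/disks/DSK1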
So instead, let me corrupt the underlying disk directly at the OS level.
[root@10gnode1 ~]# dd if=/dev/zero of=/dev/sdc1 bs=8192
654645+0 records in
654644+0 records out
5362850304 bytes (5.4 GB) copied, 51.6696 seconds, 104 MB/s
Done. With its storage wiped out, CRS will now go down on its own.
[oracle@10gnode1 ~]$ cd $ORA_CRS_HOME/bin
[oracle@10gnode1 bin]$ ./ocrcheck
PROT-602: Failed to retrieve data from the cluster registry
PROC-26: Error while accessing the physical storage

[root@10gnode1 ~]# cd /u01/app/11.2.0/grid3/bin
[root@10gnode1 bin]# ./crsctl stop crs
CRS-2796: The command may not proceed when Cluster Ready Services is not running
CRS-4687: Shutdown command has completed with errors.
CRS-4000: Command Stop failed, or completed with errors.
Let's check for the crsd.bin process. From the output below, it is clear that the CRSD process is gone.
[root@10gnode1 bin]# ps -ef | grep d.bin
root      3394     1  0 18:12 ?        00:00:11 /u01/app/11.2.0/grid3/bin/ohasd.bin reboot
oracle    3832     1  0 18:13 ?        00:00:00 /u01/app/11.2.0/grid3/bin/mdnsd.bin
oracle    3843     1  0 18:13 ?        00:00:01 /u01/app/11.2.0/grid3/bin/gpnpd.bin
oracle    3856     1  0 18:13 ?        00:00:07 /u01/app/11.2.0/grid3/bin/gipcd.bin
root      3869     1  0 18:13 ?        00:00:13 /u01/app/11.2.0/grid3/bin/osysmond.bin
oracle    5831     1  0 19:05 ?        00:00:00 /u01/app/11.2.0/grid3/bin/evmd.bin
oracle    5889     1  0 19:05 ?        00:00:00 /u01/app/11.2.0/grid3/bin/ocssd.bin
root      5939  5891  0 19:06 pts/0    00:00:00 grep d.bin
Confirming the same on 10gnode2 as well.
[oracle@10gnode2 node-scan]$ ps -ef | grep d.bin
root      3385     1  0 18:13 ?        00:00:13 /u01/app/11.2.0/grid3/bin/ohasd.bin reboot
oracle    3824     1  0 18:13 ?        00:00:00 /u01/app/11.2.0/grid3/bin/mdnsd.bin
oracle    3835     1  0 18:13 ?        00:00:03 /u01/app/11.2.0/grid3/bin/gpnpd.bin
oracle    3847     1  0 18:13 ?        00:00:11 /u01/app/11.2.0/grid3/bin/gipcd.bin
root      3861     1  0 18:13 ?        00:00:18 /u01/app/11.2.0/grid3/bin/osysmond.bin
oracle    7215  5793  0 19:27 pts/0    00:00:00 grep d.bin
oracle   23920     1  0 19:25 ?        00:00:00 /u01/app/11.2.0/grid3/bin/ocssd.bin
[oracle@10gnode2 node-scan]$
OK. Now let me forcibly stop the remaining clusterware processes.
10gnode1:
[root@10gnode1 bin]# ./crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '10gnode1'
CRS-2673: Attempting to stop 'ora.mdnsd' on '10gnode1'
CRS-2677: Stop of 'ora.mdnsd' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on '10gnode1'
CRS-2677: Stop of 'ora.crf' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on '10gnode1'
CRS-2677: Stop of 'ora.gipcd' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on '10gnode1'
CRS-2677: Stop of 'ora.gpnpd' on '10gnode1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '10gnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
10gnode2:
[root@10gnode2 ~]# cd /u01/app/11.2.0/grid3/bin
[root@10gnode2 bin]# ./crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '10gnode2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on '10gnode2'
CRS-2673: Attempting to stop 'ora.mdnsd' on '10gnode2'
CRS-2677: Stop of 'ora.drivers.acfs' on '10gnode2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on '10gnode2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on '10gnode2'
CRS-2677: Stop of 'ora.crf' on '10gnode2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on '10gnode2'
CRS-2677: Stop of 'ora.gipcd' on '10gnode2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on '10gnode2'
CRS-2677: Stop of 'ora.gpnpd' on '10gnode2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '10gnode2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
To complete the scenario, let me remove the one backup that we have.
[root@10gnode2 ~]# ls -lrt /u01/app/11.2.0/grid3/cdata/node-scan/backup_20141114_183354.ocr
-rw------- 1 root root 7610368 Nov 14 18:33 /u01/app/11.2.0/grid3/cdata/node-scan/backup_20141114_183354.ocr
[root@10gnode2 ~]#
[root@10gnode2 ~]# rm -rf /u01/app/11.2.0/grid3/cdata/node-scan/backup_20141114_183354.ocr
[root@10gnode2 ~]#
All done. Now let's recreate the disk at the ASMLib level so it can be used by ASM again.
[root@10gnode1 bin]# /etc/init.d/oracleasm deletedisk DSK1
Removing ASM disk "DSK1":                                  [  OK  ]
[root@10gnode1 bin]#
[root@10gnode1 bin]# /etc/init.d/oracleasm createdisk DSK1 /dev/sdc1
Marking disk "/dev/sdc1" as an ASM disk:                   [  OK  ]
[root@10gnode1 bin]#
[root@10gnode1 bin]#
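The disk was deleted and re-stamped only from 10gnode1; the other node can rescan ASMLib so it picks up the freshly stamped disk header (assuming the default oracleasm setup):

[root@10gnode2 ~]# /etc/init.d/oracleasm scandisks
[root@10gnode2 ~]# /etc/init.d/oracleasm listdisks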
If a backup of the OCR were available, we could simply recreate the diskgroup and restore the OCR and voting disk from that backup, roughly as sketched below. But in our case (no backup), we have to deconfigure the clusterware and rerun root.sh to reconfigure it.
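For reference, the backup-based recovery path is far less invasive: start the stack on one node in exclusive mode without CRSD, recreate the CRS diskgroup, restore the OCR from the backup, replace the voting disk, and then restart the full stack. A rough sketch of those commands (run as root, except the SQL step as sysasm; not what we do here, since our backup is gone):

crsctl start crs -excl -nocrs
SQL> create diskgroup CRS external redundancy disk '/dev/oracleasm/disks/DSK1' attribute 'compatible.asm'='11.2';
ocrconfig -restore /u01/app/11.2.0/grid3/cdata/node-scan/backup_20141114_183354.ocr
crsctl replace votedisk +CRS
crsctl stop crs -f
crsctl start crs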
Deconfiguring the cluster is done with the "rootcrs.pl" script located under the Grid home's crs/install directory (here, /u01/app/11.2.0/grid3/crs/install/).
[root@10gnode1 ~]# cd /u01/app/11.2.0/grid3/crs/install/
[root@10gnode1 install]# ls -lrt rootcrs*
-rwxr-xr-x 1 root oinstall 35639 Oct 12  2012 rootcrs.pl
[root@10gnode1 install]#
[root@10gnode1 install]#
[root@10gnode1 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd

CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Successfully deconfigured Oracle clusterware stack on this node
Repeat the same deconfiguration step on 10gnode2 as well.
10gnode2:
[root@10gnode2 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd

CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Successfully deconfigured Oracle clusterware stack on this node
[root@10gnode2 install]#
Now, let me run the root.sh script on both nodes, starting with 10gnode1.
[root@10gnode1 ~]# cd /u01/app/11.2.0/grid3/
[root@10gnode1 grid3]# ./root.sh
Check /u01/app/11.2.0/grid3/install/root_10gnode1.mydomain_2014-11-14_21-53-08.log for the output of root script
[root@10gnode1 grid3]# 
Here is the outcome of the above execution on 10gnode1.
[oracle@10gnode1 ~]$ cat /u01/app/11.2.0/grid3/install/root_10gnode1.mydomain_2014-11-14_21-53-08.log

Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid3
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid3/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on '10gnode1'
CRS-2676: Start of 'ora.mdnsd' on '10gnode1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on '10gnode1'
CRS-2676: Start of 'ora.gpnpd' on '10gnode1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on '10gnode1'
CRS-2672: Attempting to start 'ora.gipcd' on '10gnode1'
CRS-2676: Start of 'ora.cssdmonitor' on '10gnode1' succeeded
CRS-2676: Start of 'ora.gipcd' on '10gnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on '10gnode1'
CRS-2672: Attempting to start 'ora.diskmon' on '10gnode1'
CRS-2676: Start of 'ora.diskmon' on '10gnode1' succeeded
CRS-2676: Start of 'ora.cssd' on '10gnode1' succeeded

ASM created and started successfully.

Disk Group CRS created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Successful addition of voting disk 067b68c8ffc34fd9bf3bb529b7737ecb.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   067b68c8ffc34fd9bf3bb529b7737ecb (/dev/oracleasm/disks/DSK1) [CRS]
Located 1 voting disk(s).
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[oracle@10gnode1 ~]$
From the above log, it is clear that the CRS diskgroup has been created on the disk '/dev/oracleasm/disks/DSK1' and that the OCR and voting disk have been created inside it.
For the ASM diskgroup to be created on the right disk, and for the OCR and voting disk to land inside it, the correct values must be present in the crsconfig_params file, which lives under the Grid home's crs/install directory; in my case that is /u01/app/11.2.0/grid3/crs/install/. A sample of the relevant entries is shown below.
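For illustration, these are the kind of entries in crsconfig_params that drive the diskgroup creation during root.sh. The parameter names are those typically found in an 11.2 crsconfig_params and the values reflect this environment, so treat them as an example and verify against your own copy before editing anything:

CLUSTER_NAME=node-scan
ASM_DISK_GROUP=CRS
ASM_DISCOVERY_STRING=/dev/oracleasm/disks
ASM_DISKS=/dev/oracleasm/disks/DSK1
ASM_REDUNDANCY=EXTERNAL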
Once the script has executed, let's verify that everything is fine on node 10gnode1.
10gnode1:
[oracle@10gnode1 ~]$ ps -ef | grep d.bin
root      7714     1  0 21:53 ?        00:00:04 /u01/app/11.2.0/grid3/bin/ohasd.bin reboot
oracle    9876     1  0 21:56 ?        00:00:00 /u01/app/11.2.0/grid3/bin/mdnsd.bin
oracle    9890     1  0 21:56 ?        00:00:00 /u01/app/11.2.0/grid3/bin/gpnpd.bin
oracle    9909     1  0 21:56 ?        00:00:01 /u01/app/11.2.0/grid3/bin/gipcd.bin
oracle    9944     1  0 21:56 ?        00:00:02 /u01/app/11.2.0/grid3/bin/ocssd.bin
root     10028     1  0 21:56 ?        00:00:00 /u01/app/11.2.0/grid3/bin/octssd.bin
root     10049     1  0 21:56 ?        00:00:02 /u01/app/11.2.0/grid3/bin/osysmond.bin
root     10193     1  1 21:57 ?        00:00:06 /u01/app/11.2.0/grid3/bin/crsd.bin reboot
oracle   10210     1  0 21:57 ?        00:00:00 /u01/app/11.2.0/grid3/bin/evmd.bin
oracle   11421  4263  0 22:02 pts/1    00:00:00 grep d.bin
[oracle@10gnode1 ~]$ cd /u01/app/11.2.0/grid3/bin
[oracle@10gnode1 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[oracle@10gnode1 bin]$ ps -ef | grep pmon
oracle   10133     1  0 21:57 ?        00:00:00 asm_pmon_+ASM1
oracle   11477  4263  0 22:03 pts/1    00:00:00 grep pmon
It can be seen that all is fine on node 10gnode1. Now, let's proceed with node 10gnode2 by running the root.sh script there.
10gnode2:
[root@10gnode2 ~]# cd /u01/app/11.2.0/grid3
[root@10gnode2 grid3]# ./root.sh
Check /u01/app/11.2.0/grid3/install/root_10gnode2.mydomain_2014-11-14_22-05-58.log for the output of root script
[root@10gnode2 grid3]#
Here is the outcome of the above execution on 10gnode2.
[oracle@10gnode2 ~]$ cat /u01/app/11.2.0/grid3/install/root_10gnode2.mydomain_2014-11-14_22-05-58.log

Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid3
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid3/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node 10gnode1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[oracle@10gnode2 ~]$
[oracle@10gnode2 ~]$ ps -ef | grep d.bin
root      5121     1  0 22:06 ?        00:00:05 /u01/app/11.2.0/grid3/bin/ohasd.bin reboot
oracle    6547     1  0 22:08 ?        00:00:00 /u01/app/11.2.0/grid3/bin/mdnsd.bin
oracle    6561     1  0 22:08 ?        00:00:00 /u01/app/11.2.0/grid3/bin/gpnpd.bin
oracle    6580     1  0 22:08 ?        00:00:01 /u01/app/11.2.0/grid3/bin/gipcd.bin
oracle    6629     1  0 22:08 ?        00:00:04 /u01/app/11.2.0/grid3/bin/ocssd.bin
root      6695     1  0 22:08 ?        00:00:01 /u01/app/11.2.0/grid3/bin/octssd.bin
root      6717     1  0 22:08 ?        00:00:02 /u01/app/11.2.0/grid3/bin/osysmond.bin
root      6899     1  1 22:09 ?        00:00:05 /u01/app/11.2.0/grid3/bin/crsd.bin reboot
oracle    6915     1  0 22:09 ?        00:00:01 /u01/app/11.2.0/grid3/bin/evmd.bin
oracle    8136  7522  0 22:16 pts/2    00:00:00 grep d.bin
[oracle@10gnode2 ~]$ cd /u01/app/11.2.0/grid3/bin/
[oracle@10gnode2 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[oracle@10gnode2 bin]$ ps -ef | grep pmon
oracle    6841     1  0 22:09 ?        00:00:00 asm_pmon_+ASM2
oracle    8807  7522  0 22:22 pts/2    00:00:00 grep pmon
It's clear that all is fine on 10gnode2 as well. Let me now mount the remaining diskgroups in the ASM instance and start the database instances.
Note that the loss of the voting disk and OCR has no impact on the other diskgroups, so there is no need to recreate or otherwise touch the DATA and FRA diskgroups.
[oracle@10gnode1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[oracle@10gnode1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Fri Nov 14 22:22:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
CRS                            MOUNTED
FRA                            DISMOUNTED
DATA                           DISMOUNTED
Let me mount the DATA and FRA diskgroups.
SQL> alter diskgroup FRA mount;

Diskgroup altered.

SQL> alter diskgroup DATA mount;

Diskgroup altered.

SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
CRS                            MOUNTED
FRA                            MOUNTED
DATA                           MOUNTED
All done. Let’s check the same on the other node.
10gnode2:
[oracle@10gnode2 ~]$ . oraenv
ORACLE_SID = [+ASM2] ?
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@10gnode2 ~]$
[oracle@10gnode2 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Fri Nov 14 22:24:28 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
CRS                            MOUNTED
FRA                            MOUNTED
DATA                           MOUNTED
Everything is fine. Let me check the same using SRVCTL.
[oracle@10gnode1 ~]$ srvctl status asm
ASM is running on 10gnode1,10gnode2
Now it's time to add the resources back to the cluster. First let me add the database, and then its instances.
[oracle@10gnode1 bin]$ srvctl status database -d primt -v -f
PRCD-1120 : The resource for database primt could not be found.
PRCR-1001 : Resource ora.primt.db does not exist
As expected, the database is not yet registered. Let me register it.
[oracle@10gnode1 bin]$ srvctl add database -d primt -o /u01/app/oracle/product/11.2.0/db3
[oracle@10gnode1 bin]$ srvctl config database -d primt
Database unique name: primt
Database name:
Oracle home: /u01/app/oracle/product/11.2.0/db3
Oracle user: oracle
Spfile:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: primt
Database instances: primt1,primt2
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[oracle@10gnode1 bin]$
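Notice that the Spfile and Disk Groups attributes are empty in the configuration above. They can be filled in with "srvctl modify database" if you want the resource metadata to be complete; the spfile path below is only an assumption based on typical OMF naming, so substitute your actual value:

[oracle@10gnode1 bin]$ srvctl modify database -d primt -p +DATA/primt/spfileprimt.ora -a "DATA,FRA"
[oracle@10gnode1 bin]$ srvctl config database -d primt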
[oracle@10gnode1 bin]$ srvctl status database -d primt
Database is not running.
Now it's time to add the instances.
[oracle@10gnode1 bin]$ srvctl add instance -i primt1 -d primt -n 10gnode1
[oracle@10gnode1 bin]$ srvctl add instance -i primt2 -d primt -n 10gnode2
Start the database using "srvctl start database -d primt" and check its status.
[oracle@10gnode1 bin]$ srvctl status database -d primt -v -f
Instance primt1 is running on node 10gnode1. Instance status: Open.
Instance primt2 is running on node 10gnode2. Instance status: Open.
If any services were configured for the database, those too need to be added back using the "srvctl add service" command, for example as shown below.
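In this setup, the original "crsctl stat res -t" output showed a service resource ora.primt.primt_appl.svc running on 10gnode1, so it would be recreated along these lines. The preferred/available instance split is my assumption, since the original service definition is no longer visible anywhere:

[oracle@10gnode1 bin]$ srvctl add service -d primt -s primt_appl -r primt1 -a primt2
[oracle@10gnode1 bin]$ srvctl start service -d primt -s primt_appl
[oracle@10gnode1 bin]$ srvctl status service -d primt -s primt_appl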
