Saturday, 20 February 2016

Add Node to Oracle RAC 11gR2 (11.2.0.3)

Here I am going to post the steps needed to add a node to an existing two-node RAC 11.2.0.3 infrastructure on Oracle VirtualBox.

This is an existing two-node RAC infrastructure configured on OEL 6.5 on Oracle VirtualBox.

I have used a backup to create the additional node, rac3.

To create the new RAC node, I imported the appliance that was backed up earlier, at the time of the original RAC installation.



Click on File->Import Appliance

If you don't have a backup, you need to configure a fresh VM and install OEL 6.5 with all the prerequisite steps mentioned in my previous post.



Select the path of the .ova backup file that was taken earlier.



Change the name to rac3 and scroll down to select the location where you want to save the imported VM.







The rac3 VM will be created once the appliance is imported.





To add the ASM disks, click on Storage.



Click on the icon shown in the screenshot below.



Click on Add Hard Disk



And select “choose existing disk”



Browse to ASM disk 1, select it, and click OK.



Repeat this step for the remaining ASM disks.
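If you prefer the command line, the same attachments can be scripted from the host with VBoxManage instead of clicking through the GUI. A rough sketch, assuming the VM is named rac3, a storage controller named "SATA", and four shareable ASM disks; the disk paths, count, and port numbers here are placeholders for your own setup:

```shell
# Hypothetical sketch: attach the existing shared ASM disks to rac3 from the
# host shell. VM name, controller name, ports, and disk paths are placeholders.
VM=rac3
for i in 1 2 3 4; do
  VBoxManage storageattach "$VM" \
    --storagectl "SATA" \
    --port "$i" --device 0 \
    --type hdd --mtype shareable \
    --medium "/path/to/asm_disk${i}.vdi"
done
```

The disks must have been created (or marked) as shareable for all three VMs to open them concurrently.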



Start the RAC3 VM





The hostname has to be changed; change it to rac3.
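On OEL 6 the persistent hostname lives in /etc/sysconfig/network, so the change amounts to the following (run as root; rac3.localdomain is assumed as the fully qualified name):

```shell
# Persist the new hostname (OEL 6 style) and apply it to the running session
sed -i 's/^HOSTNAME=.*/HOSTNAME=rac3.localdomain/' /etc/sysconfig/network
hostname rac3.localdomain
```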





Save and exit.

Now we need to configure the virtual NICs with the correct IPs for the public and private networks.

Follow the navigation as below



Change the IP address from 192.168.56.71 to 192.168.56.73; this is for the public interface.



Now change the private IP as well.



Add the entries to the /etc/hosts file as below.
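For reference, the entries look like the following, following the addressing pattern used for rac1 and rac2 elsewhere in this post; rac3-vip's address (.83) is an assumed value shown only as an illustration, so use whatever your environment dictates:

```shell
# Append the rac3 entries to the hosts file on every node.
# Addresses follow the rac1/rac2 pattern in this post; rac3-vip's .83 is an
# assumed value. HOSTS_FILE defaults to /etc/hosts.
HOSTS_FILE=${HOSTS_FILE:-/etc/hosts}
cat >> "$HOSTS_FILE" <<'EOF'
# rac3 - public, VIP (assumed .83), private
192.168.56.73   rac3.localdomain        rac3
192.168.56.83   rac3-vip.localdomain    rac3-vip
192.168.10.3    rac3-priv.localdomain   rac3-priv
EOF
```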



Install the oracleasmlib RPM and configure ASM.
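On the new node this boils down to installing the package, configuring the driver, and re-scanning the shared disks that were already stamped from rac1. A hedged sketch (run as root on rac3; the oracleasmlib version shown is only an example):

```shell
# Sketch (run as root on rac3); the oracleasmlib version is an example
rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm
/usr/sbin/oracleasm configure -i    # user: oracle, group: oinstall, start on boot: y
/usr/sbin/oracleasm init            # load the kernel driver
/usr/sbin/oracleasm scandisks       # pick up the disks stamped from rac1
/usr/sbin/oracleasm listdisks       # should show the existing ASM disk labels
```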





In the above screenshot I mistakenly provided the group as dba; it should be oinstall.





Also install the missing RPMs.



Also configure SSH between all three nodes.

SSH user equivalence is already set up between the existing two nodes.
Manual User Equivalence (Key-Based Authentication) Configuration

Assuming we have a two node cluster (rac1.localdomain, rac2.localdomain), log in as the “oracle” user and perform the following tasks on each node.
su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa    # Accept the default settings.


The RSA public key is written to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file.

Log in as the "oracle" user on rac1.localdomain, generate an "authorized_keys" file and copy it to rac2.localdomain using the following commands.
su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys rac2.localdomain:/home/oracle/.ssh/


Next, log in as the “oracle” user on rac2.localdomain and perform the following commands.
su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys rac1.localdomain:/home/oracle/.ssh/


The “authorized_keys” file on both servers now contains the public keys generated on all nodes.

To enable SSH user equivalency on the cluster member nodes issue the following commands on each node.

ssh rac1 date
ssh rac2 date
ssh rac1.localdomain date
ssh rac2.localdomain date
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add

Now we need to extend this SSH configuration to cover all three nodes for the oracle user.
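The extension follows the same pattern as the two-node setup above: generate a key on rac3, merge it into the shared authorized_keys, and redistribute the combined file. A sketch, run as the oracle user:

```shell
# Extend user equivalence to rac3 (sketch; run as the oracle user)

# 1. On rac3: create a key pair
mkdir -p ~/.ssh && chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa          # accept the defaults

# 2. On rac1: append rac3's public key to the shared authorized_keys
cd ~/.ssh
ssh rac3 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys

# 3. Still on rac1: redistribute the combined file to rac2 and rac3
scp authorized_keys rac2:/home/oracle/.ssh/
scp authorized_keys rac3:/home/oracle/.ssh/

# 4. Verify from every node; each should print the date with no prompt
for n in rac1 rac2 rac3; do ssh "$n" date; done
```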

Copy the hosts file from rac3 to the other two servers, rac1 and rac2.

Copy the nslookup file from /usr/bin on rac1 to rac3.

Now run the pre-node-addition verification from rac1:

cluvfy stage -pre nodeadd -n rac3 -fixup -verbose

Performing pre-checks for node addition
Checking node reachability...
Check: Node reachability from node "rac1"
Destination Node Reachable? ------------------------------------ ------------------------ rac3 yes
Result: Node reachability check passed from node "rac1"
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Status ------------------------------------ ------------------------ rac3 passed
Result: User equivalence check passed for user "oracle"
Checking node connectivity...
Checking hosts config file...
Node Name Status ------------------------------------ ------------------------ rac1 passed rac2 passed rac3 passed
Verification of the hosts config file successful
Interface information for node "rac1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.56.71 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth0 192.168.56.81 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth0 192.168.56.91 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth1 192.168.10.1 192.168.10.0 0.0.0.0 192.168.1.1 08:00:27:3D:56:40 1500 eth1 169.254.149.126 169.254.0.0 0.0.0.0 192.168.1.1 08:00:27:3D:56:40 1500 eth2 192.168.1.8 192.168.1.0 0.0.0.0 192.168.1.1 08:00:27:94:63:1F 1500
Interface information for node "rac2"
Name IP Address Subnet Gateway Def.
Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.56.72 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth0 192.168.56.92 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth0 192.168.56.93 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth0 192.168.56.82 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth1 192.168.10.2 192.168.10.0 0.0.0.0 192.168.1.1 08:00:27:3D:56:40 1500 eth1 169.254.184.13 169.254.0.0 0.0.0.0 192.168.1.1 08:00:27:3D:56:40 1500 eth2 192.168.1.7 192.168.1.0 0.0.0.0 192.168.1.1 08:00:27:94:63:1F 1500 Interface information for node "rac3" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.56.73 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth1 192.168.10.3 192.168.10.0 0.0.0.0 192.168.1.1 08:00:27:3D:56:40 1500 eth2 192.168.1.9 192.168.1.0 0.0.0.0 192.168.1.1 08:00:27:94:63:1F 1500 Check: Node connectivity for interface "eth0" Source Destination Connected? 
------------------------------ ------------------------------ ---------------- rac1[192.168.56.71] rac1[192.168.56.81] yes rac1[192.168.56.71] rac1[192.168.56.91] yes rac1[192.168.56.71] rac2[192.168.56.72] yes rac1[192.168.56.71] rac2[192.168.56.92] yes rac1[192.168.56.71] rac2[192.168.56.93] yes rac1[192.168.56.71] rac2[192.168.56.82] yes rac1[192.168.56.71] rac3[192.168.56.73] yes rac1[192.168.56.81] rac1[192.168.56.91] yes rac1[192.168.56.81] rac2[192.168.56.72] yes rac1[192.168.56.81] rac2[192.168.56.92] yes rac1[192.168.56.81] rac2[192.168.56.93] yes rac1[192.168.56.81] rac2[192.168.56.82] yes rac1[192.168.56.81] rac3[192.168.56.73] yes rac1[192.168.56.91] rac2[192.168.56.72] yes rac1[192.168.56.91] rac2[192.168.56.92] yes rac1[192.168.56.91] rac2[192.168.56.93] yes rac1[192.168.56.91] rac2[192.168.56.82] yes rac1[192.168.56.91] rac3[192.168.56.73] yes rac2[192.168.56.72] rac2[192.168.56.92] yes rac2[192.168.56.72] rac2[192.168.56.93] yes rac2[192.168.56.72] rac2[192.168.56.82] yes rac2[192.168.56.72] rac3[192.168.56.73] yes rac2[192.168.56.92] rac2[192.168.56.93] yes rac2[192.168.56.92] rac2[192.168.56.82] yes rac2[192.168.56.92] rac3[192.168.56.73] yes rac2[192.168.56.93] rac2[192.168.56.82] yes rac2[192.168.56.93] rac3[192.168.56.73] yes rac2[192.168.56.82] rac3[192.168.56.73] yes Result: Node connectivity passed for interface "eth0" Check: TCP connectivity of subnet "192.168.56.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- rac1:192.168.56.71 rac1:192.168.56.81 passed rac1:192.168.56.71 rac1:192.168.56.91 passed rac1:192.168.56.71 rac2:192.168.56.72 passed rac1:192.168.56.71 rac2:192.168.56.92 passed rac1:192.168.56.71 rac2:192.168.56.93 passed rac1:192.168.56.71 rac2:192.168.56.82 passed rac1:192.168.56.71 rac3:192.168.56.73 passed Result: TCP connectivity check passed for subnet "192.168.56.0" Checking subnet mask consistency... 
Subnet mask consistency check passed for subnet "192.168.56.0". Subnet mask consistency check passed. Result: Node connectivity check passed Checking multicast communication... Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed. Check of multicast communication passed. Checking CRS integrity... Clusterware version consistency passed The Oracle Clusterware is healthy on node "rac1" The Oracle Clusterware is healthy on node "rac2" CRS integrity check passed Checking shared resources... Checking CRS home location... "/u01/app/11.2.0/grid" is shared Result: Shared resources check for node addition passed Checking node connectivity... Checking hosts config file... Node Name Status ------------------------------------ ------------------------ rac1 passed rac2 passed rac3 passed Verification of the hosts config file successful Interface information for node "rac1" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.56.71 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth0 192.168.56.81 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth0 192.168.56.91 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth1 192.168.10.1 192.168.10.0 0.0.0.0 192.168.1.1 08:00:27:3D:56:40 1500 eth1 169.254.149.126 169.254.0.0 0.0.0.0 192.168.1.1 08:00:27:3D:56:40 1500 eth2 192.168.1.8 192.168.1.0 0.0.0.0 192.168.1.1 08:00:27:94:63:1F 1500 Interface information for node "rac2" Name IP Address Subnet Gateway Def. 
Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.56.72 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth0 192.168.56.92 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth0 192.168.56.93 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth0 192.168.56.82 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth1 192.168.10.2 192.168.10.0 0.0.0.0 192.168.1.1 08:00:27:3D:56:40 1500 eth1 169.254.184.13 169.254.0.0 0.0.0.0 192.168.1.1 08:00:27:3D:56:40 1500 eth2 192.168.1.7 192.168.1.0 0.0.0.0 192.168.1.1 08:00:27:94:63:1F 1500 Interface information for node "rac3" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.56.73 192.168.56.0 0.0.0.0 192.168.1.1 08:00:27:CF:4E:CA 1500 eth1 192.168.10.3 192.168.10.0 0.0.0.0 192.168.1.1 08:00:27:3D:56:40 1500 eth2 192.168.1.9 192.168.1.0 0.0.0.0 192.168.1.1 08:00:27:94:63:1F 1500 Check: Node connectivity for interface "eth0" Source Destination Connected? 
------------------------------ ------------------------------ ---------------- rac1[192.168.56.71] rac1[192.168.56.81] yes rac1[192.168.56.71] rac1[192.168.56.91] yes rac1[192.168.56.71] rac2[192.168.56.72] yes rac1[192.168.56.71] rac2[192.168.56.92] yes rac1[192.168.56.71] rac2[192.168.56.93] yes rac1[192.168.56.71] rac2[192.168.56.82] yes rac1[192.168.56.71] rac3[192.168.56.73] yes rac1[192.168.56.81] rac1[192.168.56.91] yes rac1[192.168.56.81] rac2[192.168.56.72] yes rac1[192.168.56.81] rac2[192.168.56.92] yes rac1[192.168.56.81] rac2[192.168.56.93] yes rac1[192.168.56.81] rac2[192.168.56.82] yes rac1[192.168.56.81] rac3[192.168.56.73] yes rac1[192.168.56.91] rac2[192.168.56.72] yes rac1[192.168.56.91] rac2[192.168.56.92] yes rac1[192.168.56.91] rac2[192.168.56.93] yes rac1[192.168.56.91] rac2[192.168.56.82] yes rac1[192.168.56.91] rac3[192.168.56.73] yes rac2[192.168.56.72] rac2[192.168.56.92] yes rac2[192.168.56.72] rac2[192.168.56.93] yes rac2[192.168.56.72] rac2[192.168.56.82] yes rac2[192.168.56.72] rac3[192.168.56.73] yes rac2[192.168.56.92] rac2[192.168.56.93] yes rac2[192.168.56.92] rac2[192.168.56.82] yes rac2[192.168.56.92] rac3[192.168.56.73] yes rac2[192.168.56.93] rac2[192.168.56.82] yes rac2[192.168.56.93] rac3[192.168.56.73] yes rac2[192.168.56.82] rac3[192.168.56.73] yes Result: Node connectivity passed for interface "eth0" Check: TCP connectivity of subnet "192.168.56.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- rac1:192.168.56.71 rac1:192.168.56.81 passed rac1:192.168.56.71 rac1:192.168.56.91 passed rac1:192.168.56.71 rac2:192.168.56.72 passed rac1:192.168.56.71 rac2:192.168.56.92 passed rac1:192.168.56.71 rac2:192.168.56.93 passed rac1:192.168.56.71 rac2:192.168.56.82 passed rac1:192.168.56.71 rac3:192.168.56.73 passed Result: TCP connectivity check passed for subnet "192.168.56.0" Check: Node connectivity for interface "eth1" Source Destination Connected? 
------------------------------ ------------------------------ ---------------- rac1[192.168.10.1] rac2[192.168.10.2] yes rac1[192.168.10.1] rac3[192.168.10.3] yes rac2[192.168.10.2] rac3[192.168.10.3] yes Result: Node connectivity passed for interface "eth1" Check: TCP connectivity of subnet "192.168.10.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- rac1:192.168.10.1 rac2:192.168.10.2 passed rac1:192.168.10.1 rac3:192.168.10.3 passed Result: TCP connectivity check passed for subnet "192.168.10.0" Checking subnet mask consistency... Subnet mask consistency check passed for subnet "192.168.56.0". Subnet mask consistency check passed for subnet "192.168.10.0". Subnet mask consistency check passed. Result: Node connectivity check passed Checking multicast communication... Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed. Checking subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0" passed. Check of multicast communication passed. 
Check: Total memory Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 2.9401GB (3082932.0KB) 1.5GB (1572864.0KB) passed rac3 2.9401GB (3082932.0KB) 1.5GB (1572864.0KB) passed Result: Total memory check passed Check: Available memory Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 1.8159GB (1904084.0KB) 50MB (51200.0KB) passed rac3 2.7753GB (2910084.0KB) 50MB (51200.0KB) passed Result: Available memory check passed Check: Swap space Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 2.9531GB (3096572.0KB) 2.9401GB (3082932.0KB) passed rac3 2.9531GB (3096572.0KB) 2.9401GB (3082932.0KB) passed Result: Swap space check passed Check: Free disk space for "rac1:/u01/app/11.2.0/grid,rac1:/tmp" Path Node Name Mount point Available Required Status ---------------- ------------ ------------ ------------ ------------ ------------ /u01/app/11.2.0/grid rac1 / 25.9531GB 7.5GB passed /tmp rac1 / 25.9531GB 7.5GB passed Result: Free disk space check passed for "rac1:/u01/app/11.2.0/grid,rac1:/tmp" Check: Free disk space for "rac3:/u01/app/11.2.0/grid,rac3:/tmp" Path Node Name Mount point Available Required Status ---------------- ------------ ------------ ------------ ------------ ------------ /u01/app/11.2.0/grid rac3 / 35.8145GB 7.5GB passed /tmp rac3 / 35.8145GB 7.5GB passed Result: Free disk space check passed for "rac3:/u01/app/11.2.0/grid,rac3:/tmp" Check: User existence for "oracle" Node Name Status Comment ------------ ------------------------ ------------------------ rac1 passed exists(1100) rac3 passed exists(1100) Checking for multiple users with UID value 1100 Result: Check for multiple users with UID value 1100 passed Result: User existence check passed for "oracle" Check: Run level Node Name run level Required Status ------------ ------------------------ 
------------------------ ---------- rac1 5 3,5 passed rac3 5 3,5 passed Result: Run level check passed Check: Hard limits for "maximum open file descriptors" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- rac1 hard 65536 65536 passed rac3 hard 65536 65536 passed Result: Hard limits check passed for "maximum open file descriptors" Check: Soft limits for "maximum open file descriptors" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- rac1 soft 4096 1024 passed rac3 soft 4096 1024 passed Result: Soft limits check passed for "maximum open file descriptors" Check: Hard limits for "maximum user processes" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- rac1 hard 16384 16384 passed rac3 hard 16384 16384 passed Result: Hard limits check passed for "maximum user processes" Check: Soft limits for "maximum user processes" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- rac1 soft 2047 2047 passed rac3 soft 2047 2047 passed Result: Soft limits check passed for "maximum user processes" Check: System architecture Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 x86_64 x86_64 passed rac3 x86_64 x86_64 passed Result: System architecture check passed Check: Kernel version Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 3.8.13-16.3.1.el6uek.x86_64 2.6.32 passed rac3 3.8.13-16.3.1.el6uek.x86_64 2.6.32 passed Result: Kernel version check passed Check: Kernel parameter for "semmsl" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 250 250 250 passed rac3 250 250 250 passed Result: Kernel parameter check 
passed for "semmsl" Check: Kernel parameter for "semmns" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 32000 32000 32000 passed rac3 32000 32000 32000 passed Result: Kernel parameter check passed for "semmns" Check: Kernel parameter for "semopm" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 100 100 100 passed rac3 100 100 100 passed Result: Kernel parameter check passed for "semopm" Check: Kernel parameter for "semmni" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 128 128 128 passed rac3 128 128 128 passed Result: Kernel parameter check passed for "semmni" Check: Kernel parameter for "shmmax" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 68719476736 68719476736 1578461184 passed rac3 68719476736 68719476736 1578461184 passed Result: Kernel parameter check passed for "shmmax" Check: Kernel parameter for "shmmni" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 4096 4096 4096 passed rac3 4096 4096 4096 passed Result: Kernel parameter check passed for "shmmni" Check: Kernel parameter for "shmall" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 4294967296 4294967296 2097152 passed rac3 4294967296 4294967296 2097152 passed Result: Kernel parameter check passed for "shmall" Check: Kernel parameter for "file-max" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 6815744 6815744 6815744 passed rac3 6815744 6815744 6815744 passed Result: 
Kernel parameter check passed for "file-max" Check: Kernel parameter for "ip_local_port_range" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed rac3 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed Result: Kernel parameter check passed for "ip_local_port_range" Check: Kernel parameter for "rmem_default" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 262144 262144 262144 passed rac3 262144 262144 262144 passed Result: Kernel parameter check passed for "rmem_default" Check: Kernel parameter for "rmem_max" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 4194304 4194304 4194304 passed rac3 4194304 4194304 4194304 passed Result: Kernel parameter check passed for "rmem_max" Check: Kernel parameter for "wmem_default" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 262144 262144 262144 passed rac3 262144 262144 262144 passed Result: Kernel parameter check passed for "wmem_default" Check: Kernel parameter for "wmem_max" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 1048586 1048586 1048576 passed rac3 1048586 1048586 1048576 passed Result: Kernel parameter check passed for "wmem_max" Check: Kernel parameter for "aio-max-nr" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ rac1 1048576 1048576 1048576 passed rac3 1048576 1048576 1048576 passed Result: Kernel parameter check passed for "aio-max-nr" Check: Package existence for 
"binutils" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2 passed rac3 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2 passed Result: Package existence check passed for "binutils" Check: Package existence for "compat-libcap1" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 compat-libcap1-1.10-1 compat-libcap1-1.10 passed rac3 compat-libcap1-1.10-1 compat-libcap1-1.10 passed Result: Package existence check passed for "compat-libcap1" Check: Package existence for "compat-libstdc++-33(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed rac3 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed Result: Package existence check passed for "compat-libstdc++-33(x86_64)" Check: Package existence for "libgcc(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 libgcc(x86_64)-4.4.7-4.el6 libgcc(x86_64)-4.4.4 passed rac3 libgcc(x86_64)-4.4.7-4.el6 libgcc(x86_64)-4.4.4 passed Result: Package existence check passed for "libgcc(x86_64)" Check: Package existence for "libstdc++(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 libstdc++(x86_64)-4.4.7-4.el6 libstdc++(x86_64)-4.4.4 passed rac3 libstdc++(x86_64)-4.4.7-4.el6 libstdc++(x86_64)-4.4.4 passed Result: Package existence check passed for "libstdc++(x86_64)" Check: Package existence for "libstdc++-devel(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 libstdc++-devel(x86_64)-4.4.7-4.el6 libstdc++-devel(x86_64)-4.4.4 passed rac3 
libstdc++-devel(x86_64)-4.4.7-4.el6 libstdc++-devel(x86_64)-4.4.4 passed Result: Package existence check passed for "libstdc++-devel(x86_64)" Check: Package existence for "sysstat" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 sysstat-9.0.4-22.el6 sysstat-9.0.4 passed rac3 sysstat-9.0.4-22.el6 sysstat-9.0.4 passed Result: Package existence check passed for "sysstat" Check: Package existence for "gcc" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 gcc-4.4.7-4.el6 gcc-4.4.4 passed rac3 gcc-4.4.7-4.el6 gcc-4.4.4 passed Result: Package existence check passed for "gcc" Check: Package existence for "gcc-c++" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 gcc-c++-4.4.7-4.el6 gcc-c++-4.4.4 passed rac3 gcc-c++-4.4.7-4.el6 gcc-c++-4.4.4 passed Result: Package existence check passed for "gcc-c++" Check: Package existence for "ksh" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 ksh-20120801-10.el6 ksh-20100621 passed rac3 ksh-20120801-10.el6 ksh-20100621 passed Result: Package existence check passed for "ksh" Check: Package existence for "make" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 make-3.81-20.el6 make-3.81 passed rac3 make-3.81-20.el6 make-3.81 passed Result: Package existence check passed for "make" Check: Package existence for "glibc(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 glibc(x86_64)-2.12-1.132.el6 glibc(x86_64)-2.12 passed rac3 glibc(x86_64)-2.12-1.132.el6 glibc(x86_64)-2.12 passed Result: Package existence check passed for "glibc(x86_64)" Check: Package existence for "glibc-devel(x86_64)" Node Name Available Required Status ------------ 
------------------------ ------------------------ ---------- rac1 glibc-devel(x86_64)-2.12-1.132.el6 glibc-devel(x86_64)-2.12 passed rac3 glibc-devel(x86_64)-2.12-1.132.el6 glibc-devel(x86_64)-2.12 passed Result: Package existence check passed for "glibc-devel(x86_64)" Check: Package existence for "libaio(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed rac3 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed Result: Package existence check passed for "libaio(x86_64)" Check: Package existence for "libaio-devel(x86_64)" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- rac1 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed rac3 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed Result: Package existence check passed for "libaio-devel(x86_64)" Checking for multiple users with UID value 0 Result: Check for multiple users with UID value 0 passed Check: Current group ID Result: Current group ID check passed Starting check for consistency of primary group of root user Node Name Status ------------------------------------ ------------------------ rac1 passed rac3 passed Check for consistency of root user's primary group passed Checking OCR integrity... OCR integrity check passed Checking Oracle Cluster Voting Disk configuration... Oracle Cluster Voting Disk configuration check passed Check: Time zone consistency Result: Time zone consistency check passed Starting Clock synchronization checks using Network Time Protocol(NTP)... NTP Configuration file check started... The NTP configuration file "/etc/ntp.conf" is available on all nodes NTP Configuration file check passed Checking daemon liveness... Check: Liveness for "ntpd" Node Name Running? 
------------------------------------ ------------------------ rac1 no rac3 yes
Result: Liveness check failed for "ntpd"
PRVF-5508 : NTP configuration file is present on at least one node on which NTP daemon or service is not running.
Result: Clock synchronization check using Network Time Protocol(NTP) failed
Checking to make sure user "oracle" is not in "root" group
Node Name Status Comment ------------ ------------------------ ------------------------ rac1 passed does not exist rac3 passed does not exist
Result: User "oracle" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "rac1"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
Node Name Status ------------------------------------ ------------------------ rac1 failed rac3 failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac3
File "/etc/resolv.conf" is not consistent across nodes
Pre-check for node addition was unsuccessful on all the nodes.
[root@rac1 Desktop]#

The above errors (the ntpd liveness failure and the DNS response-time check) can be ignored in this VirtualBox test environment.


[oracle@rac1 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[oracle@rac1 bin]$ export ORACLE_HOME=/u01/app/11.2.0/grid
[oracle@rac1 bin]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac3-priv}"


Run the root scripts on the new rac3 node as directed.

First run /u01/app/oraInventory/orainstRoot.sh

And then root.sh

Execute root.sh:

[root@rac3 grid]# pwd
/u01/app/11.2.0/grid
[root@rac3 grid]# ./root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Grep for the services owned by the oracle user on the rac3 node to confirm the stack came up.

Now we will work on extending the Oracle Database software to the rac3 node.
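Before touching the database home, cluster membership can also be confirmed from any node with the standard clusterware tools; a quick sketch:

```shell
# Sketch: confirm rac3 joined the cluster (run from any node)
export ORACLE_HOME=/u01/app/11.2.0/grid
$ORACLE_HOME/bin/olsnodes -n -s             # node list, numbers, status
$ORACLE_HOME/bin/crsctl check cluster -all  # CRS/CSS/EVM on every node
$ORACLE_HOME/bin/crsctl stat res -t         # resource state across nodes
$ORACLE_HOME/bin/cluvfy stage -post nodeadd -n rac3
```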


From an existing node, i.e. rac1, as the database software owner, run the following command to extend the Oracle Database software to the new node rac3:

[root@rac1 Desktop]# su - oracle
[oracle@rac1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1/
[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"

Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "rac1"
Checking user equivalence...
User equivalence check passed for user "oracle"

WARNING: Node "rac3" already appears to be part of cluster

Pre-check for node addition was successful.
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3013 MB Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes rac2,rac3 are available
............................................................... 100% Done.
......
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/oracle/product/11.2.0/dbhome_1
   New Nodes
Space Requirements
   New Nodes
      rac3
         /: Required 4.09GB : Available 30.24GB
Installed Products
   Product Names
      Oracle Database 11g 11.2.0.3.0
      Sun JDK 1.5.0.30.03
      Installer SDK Component 11.2.0.3.0
      Oracle One-Off Patch Installer 11.2.0.1.7
      Oracle Universal Installer 11.2.0.3.0
      Oracle USM Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Oracle DBCA Deconfiguration 11.2.0.3.0
      Oracle RAC Deconfiguration 11.2.0.3.0
      Oracle Database Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Client 10.3.2.1.0
      Oracle Configuration Manager 10.3.5.0.1
      Oracle ODBC Driverfor Instant Client 11.2.0.3.0
      LDAP Required Support Files 11.2.0.3.0
      SSL Required Support Files for InstantClient 11.2.0.3.0
      Bali Share 1.1.18.0.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      Oracle Real Application Testing 11.2.0.3.0
      Oracle Database Vault J2EE Application 11.2.0.3.0
      Oracle Label Security 11.2.0.3.0
      Oracle Data Mining RDBMS Files 11.2.0.3.0
      Oracle OLAP RDBMS Files 11.2.0.3.0
      Oracle OLAP API 11.2.0.3.0
      Platform Required Support Files 11.2.0.3.0
      Oracle Database Vault option 11.2.0.3.0
      Oracle RAC Required Support Files-HAS 11.2.0.3.0
      SQL*Plus Required Support Files 11.2.0.3.0
      Oracle Display Fonts 9.0.2.0.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle JDBC Server Support Package 11.2.0.3.0
      Oracle SQL Developer 11.2.0.3.0
      Oracle Application Express 11.2.0.3.0
      XDK Required Support Files 11.2.0.3.0
      RDBMS Required Support Files for Instant Client 11.2.0.3.0
      SQLJ Runtime 11.2.0.3.0
      Database Workspace Manager 11.2.0.3.0
      RDBMS Required Support Files Runtime 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Exadata Storage Server 11.2.0.1.0
      Provisioning Advisor Framework 10.2.0.4.3
      Enterprise Manager Database Plugin -- Repository Support 11.2.0.3.0
      Enterprise Manager Repository Core Files 10.2.0.4.4
      Enterprise Manager Database Plugin -- Agent Support 11.2.0.3.0
      Enterprise Manager Grid Control Core Files 10.2.0.4.4
      Enterprise Manager Common Core Files 10.2.0.4.4
      Enterprise Manager Agent Core Files 10.2.0.4.4
      RDBMS Required Support Files 11.2.0.3.0
      regexp 2.1.9.0.0
      Agent Required Support Files 10.2.0.4.3
      Oracle 11g Warehouse Builder Required Files 11.2.0.3.0
      Oracle Notification Service (eONS) 11.2.0.3.0
      Oracle Text Required Support Files 11.2.0.3.0
      Parser Generator Required Support Files 11.2.0.3.0
      Oracle Database 11g Multimedia Files 11.2.0.3.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
      Oracle Multimedia Annotator 11.2.0.3.0
      Oracle JDBC/OCI Instant Client 11.2.0.3.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
      Precompiler Required Support Files 11.2.0.3.0
      Oracle Core Required Support Files 11.2.0.3.0
      Sample Schema Data 11.2.0.3.0
      Oracle Starter Database 11.2.0.3.0
      Oracle Message Gateway Common Files 11.2.0.3.0
      Oracle XML Query 11.2.0.3.0
      XML Parser for Oracle JVM 11.2.0.3.0
      Oracle Help For Java 4.2.9.0.0
      Installation Plugin Files 11.2.0.3.0
      Enterprise Manager Common Files 10.2.0.4.3
      Expat libraries 2.0.1.0.1
      Deinstallation Tool 11.2.0.3.0
      Oracle Quality of Service Management (Client) 11.2.0.3.0
      Perl Modules 5.10.0.0.1
      JAccelerator (COMPANION) 11.2.0.3.0
      Oracle Containers for Java 11.2.0.3.0
      Perl Interpreter 5.10.0.0.2
      Oracle Net Required Support Files 11.2.0.3.0
      Secure Socket Layer 11.2.0.3.0
      Oracle Universal Connection Pool 11.2.0.3.0
      Oracle JDBC/THIN Interfaces 11.2.0.3.0
      Oracle Multimedia Client Option 11.2.0.3.0
      Oracle Java Client 11.2.0.3.0
      Character Set Migration Utility 11.2.0.3.0
      Oracle Code Editor 1.2.1.0.0I
      PL/SQL Embedded Gateway 11.2.0.3.0
      OLAP SQL Scripts 11.2.0.3.0
      Database SQL Scripts 11.2.0.3.0
      Oracle Locale Builder 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      SQL*Plus Files for Instant Client 11.2.0.3.0
      Required Support Files 11.2.0.3.0
      Oracle Database User Interface 2.2.13.0.0
      Oracle ODBC Driver 11.2.0.3.0
      Oracle Notification Service 11.2.0.3.0
      XML Parser for Java 11.2.0.3.0
      Oracle Security Developer Tools 11.2.0.3.0
      Oracle Wallet Manager 11.2.0.3.0
      Cluster Verification Utility Common Files 11.2.0.3.0
      Oracle Clusterware RDBMS Files 11.2.0.3.0
      Oracle UIX 2.2.24.6.0
      Enterprise Manager plugin Common Files 11.2.0.3.0
      HAS Common Files 11.2.0.3.0
      Precompiler Common Files 11.2.0.3.0
      Installation Common Files 11.2.0.3.0
      Oracle Help for the Web 2.0.14.0.0
      Oracle LDAP administration 11.2.0.3.0
      Buildtools Common Files 11.2.0.3.0
      Assistant Common Files 11.2.0.3.0
      Oracle Recovery Manager 11.2.0.3.0
      PL/SQL 11.2.0.3.0
      Generic Connectivity Common Files 11.2.0.3.0
      Oracle Database Gateway for ODBC 11.2.0.3.0
      Oracle Programmer 11.2.0.3.0
      Oracle Database Utilities 11.2.0.3.0
      Enterprise Manager Agent 10.2.0.4.3
      SQL*Plus 11.2.0.3.0
      Oracle Netca Client 11.2.0.3.0
      Oracle Multimedia Locator 11.2.0.3.0
      Oracle Call Interface (OCI) 11.2.0.3.0
      Oracle Multimedia 11.2.0.3.0
      Oracle Net 11.2.0.3.0
      Oracle XML Development Kit 11.2.0.3.0
      Database Configuration and Upgrade Assistants 11.2.0.3.0
      Oracle JVM 11.2.0.3.0
      Oracle Advanced Security 11.2.0.3.0
      Oracle Internet Directory Client 11.2.0.3.0
      Oracle Enterprise Manager Console DB 11.2.0.3.0
      HAS Files for DB 11.2.0.3.0
      Oracle Net Listener 11.2.0.3.0
      Oracle Text 11.2.0.3.0
      Oracle Net Services 11.2.0.3.0
      Oracle Database 11g 11.2.0.3.0
      Oracle OLAP 11.2.0.3.0
      Oracle Spatial 11.2.0.3.0
      Oracle Partitioning 11.2.0.3.0
      Enterprise Edition Options 11.2.0.3.0
-----------------------------------------------------------------------------

Instantiating scripts for add node (Tuesday, December 24, 2013 2:22:26 PM IST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Tuesday, December 24, 2013 2:22:40 PM IST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Tuesday, December 24, 2013 2:36:10 PM IST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11.2.0/dbhome_1/root.sh #On nodes rac3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/oracle/product/11.2.0/dbhome_1 was successful.
Please check '/tmp/silentInstall.log' for more details.


Now run root.sh on rac3 node
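After root.sh completes on rac3, the Cluster Verification Utility shipped with the Grid home can validate the node addition end to end. A sketch (run from any existing node as the Grid Infrastructure owner; the Grid home path matches the one used earlier in this post):

```shell
# Post-check for the node-addition stage: verifies connectivity,
# user equivalence, shared storage access, and clusterware integrity
# for the newly added node rac3
/u01/app/11.2.0/grid/bin/cluvfy stage -post nodeadd -n rac3 -verbose
```

Any failed checks reported here should be fixed before adding a database instance on the node.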



Add Instance to Clustered Database
A database instance will be established on the new node. Specifically, an instance named "orcl3" will be added to "orcl", a pre-existing clustered database.
Satisfy Node Instance Dependencies
Satisfy all node instance dependencies, such as the password file, init.ora parameters, and the oratab entry.

From the new node "rac3", run the following commands to create the password file, "init.ora" file, and "oratab" entry for the new instance:

[root@rac3 Desktop]# su - oracle
[oracle@rac3 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@rac3 ~]$ cd $ORACLE_HOME/dbs
[oracle@rac3 dbs]$ ls
hc_orcl1.dat  init.ora  initorcl1.ora  orapworcl1
[oracle@rac3 dbs]$ mv initorcl1.ora initorcl3.ora
[oracle@rac3 dbs]$ mv orapworcl1 orapworcl3
[oracle@rac3 dbs]$ echo "orcl3:$ORACLE_HOME:N" >> /etc/oratab
[oracle@rac3 dbs]$


From a node with an existing instance of "orcl", issue the following commands to create the needed public redo log thread, undo tablespace, and spfile entries for the new instance.

From RAC1 node


[oracle@rac1 bin]$ ps -ef | grep smon


root 2876 1 4 12:39 ? 00:05:59 /u01/app/11.2.0/grid/bin/osysmond.bin


oracle 3137 1 0 12:40 ? 00:00:00 asm_smon_+ASM1


oracle 3495 1 0 12:41 ? 00:00:03 ora_smon_orcl1


oracle 11417 10077 0 14:50 pts/2 00:00:00 grep smon


[oracle@rac1 bin]$ export ORACLE_SID=orcl1


[oracle@rac1 bin]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1


[oracle@rac1 bin]$ export PATH=$ORACLE_HOME/bin:$PATH


[oracle@rac1 bin]$ sqlplus '/ as sysdba'


SQL*Plus: Release 11.2.0.3.0 Production on Tue Dec 24 14:51:32 2013


Copyright (c) 1982, 2011, Oracle. All rights reserved.


Connected to:


Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production


With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,


Data Mining and Real Application Testing options


SQL>
SQL> alter database add logfile thread 3 group 5 ('+DATA') size 50M, group 6 ('+DATA') size 50M;

Database altered.

SQL> alter database enable public thread 3;

Database altered.

SQL> create undo tablespace undotbs3 datafile '+DATA' size 200M autoextend on;

Tablespace created.

SQL> alter system set undo_tablespace='undotbs3' scope=spfile sid='orcl3';

System altered.

SQL> alter system set instance_number=3 scope=spfile sid='orcl3';

System altered.

SQL> alter system set cluster_database_instances=3 scope=spfile sid='*';

System altered.
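To confirm these changes took effect before moving on, the new thread and undo tablespace can be queried from any open instance. A sketch (run as the oracle user on rac1, with ORACLE_SID and ORACLE_HOME set as above):

```shell
# Verify that redo thread 3 exists and is publicly enabled,
# and that the undotbs3 tablespace was created
sqlplus -s / as sysdba <<'EOF'
select thread#, status, enabled from v$thread;
select tablespace_name, status from dba_tablespaces
 where tablespace_name like 'UNDO%';
EOF
```

Thread 3 should show ENABLED as PUBLIC; it will report CLOSED until the orcl3 instance is started.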


Update Oracle Cluster Registry (OCR)
The OCR will be updated to account for a new instance, "orcl3", being added to the "orcl" cluster database. Add the "orcl3" instance to the "orcl" database and verify.
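A sketch of the srvctl commands that register the new instance in the OCR and verify the result (assuming the database name "orcl" and the Oracle home used throughout this post):

```shell
# Register the orcl3 instance, hosted on node rac3, with the
# orcl cluster database resource in the OCR
srvctl add instance -d orcl -i orcl3 -n rac3

# Verify: the configuration should now list instances on rac1, rac2, rac3
srvctl config database -d orcl
```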



Start the Instance
Now that all the prerequisites have been satisfied and the OCR updated, the "orcl3" instance will be started. Start the newly created instance, "orcl3", and verify.
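The start and status checks can be done with srvctl from any node. A sketch (same assumed database name "orcl" as above):

```shell
# Start only the new instance on rac3
srvctl start instance -d orcl -i orcl3

# All three instances should now report as running on their nodes
srvctl status database -d orcl
```

The gv$instance query below confirms the same thing from inside the database.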

SQL> col host_name format a16
SQL> set line 300
SQL> select INSTANCE_NAME,HOST_NAME,VERSION,STARTUP_TIME,STATUS,ACTIVE_STATE,INSTANCE_ROLE,DATABASE_STATUS from gv$INSTANCE;

INSTANCE_NAME    HOST_NAME        VERSION           STARTUP_TIME        STATUS       ACTIVE_ST INSTANCE_ROLE      DATABASE_STATUS
---------------- ---------------- ----------------- ------------------- ------------ --------- ------------------ -----------------
orcl3            rac3.localdomain 11.2.0.3.0        2013-01-05 11:23:16 OPEN         NORMAL    PRIMARY_INSTANCE   ACTIVE
orcl1            rac1.localdomain 11.2.0.3.0        2013-01-05 09:53:24 OPEN         NORMAL    PRIMARY_INSTANCE   ACTIVE
orcl2            rac2.localdomain 11.2.0.3.0        2013-01-04 17:34:40 OPEN         NORMAL    PRIMARY_INSTANCE   ACTIVE
