In this post, I demonstrate the steps involved in upgrading the RAC Clusterware (Grid Infrastructure) from version 11.2.0.1 to 11.2.0.3. Before launching the installer, unset the Oracle environment variables in the current session on the first node:
[oracle@10gnode1 ~]$ unset ORA_CRS_HOME
[oracle@10gnode1 ~]$ unset ORACLE_HOME
[oracle@10gnode1 ~]$ unset ORACLE_BASE
Run runInstaller from the unzipped 11.2.0.3 grid software directory on 10gnode1 (a short launch sketch follows the list below) and work through the installer screens:
1. Select "Skip Software Updates" and click Next.
2. Select "Upgrade Grid Infrastructure or Oracle Automatic Storage Management".
3. Select the preferred language.
4. Select the nodes on which the GI needs to be upgraded.
5. Select the appropriate OS groups.
6. Select the software location (home path) where the new 11.2.0.3 GI needs to be installed.
7. Review the prerequisite checks and fix any failed checks. If there are no failures, proceed with the installation.
8. Run the rootupgrade.sh script on each node, one at a time, as the root user.
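A minimal sketch of launching the installer is shown below; the staging directory is an assumption for this environment, so substitute the path where the 11.2.0.3 grid media was actually unzipped:

[oracle@10gnode1 ~]$ cd /u01/stage/11203/grid        # assumed location of the unzipped 11.2.0.3 grid software
[oracle@10gnode1 grid]$ ./runInstaller               # launches OUI; walk through the screens listed above

At the end of the interview, OUI prompts for the root script; the rootupgrade.sh output from the first node is shown below.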
[root@10gnode1 ~]# /u01/app/11.2.0/grid3/rootupgrade.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/grid3
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: n
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it?(y/n)
[n]: n
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file:
/u01/app/11.2.0/grid3/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
ASM upgrade has started on first node.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '10gnode1'
CRS-2673: Attempting to stop 'ora.crsd' on '10gnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on '10gnode1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on '10gnode1'
CRS-2673: Attempting to stop 'ora.srprim.newsrv.svc' on '10gnode1'
CRS-2677: Stop of 'ora.registry.acfs' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.srprim.newsrv.svc' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on '10gnode1'
CRS-2673: Attempting to stop 'ora.srprim.db' on '10gnode1'
CRS-2677: Stop of 'ora.srprim.db' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on '10gnode1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on '10gnode1'
CRS-2677: Stop of 'ora.DATA.dg' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on '10gnode1'
CRS-2677: Stop of 'ora.asm' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.10gnode1.vip' on '10gnode1'
CRS-2677: Stop of 'ora.10gnode1.vip' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.eons' on '10gnode1'
CRS-2673: Attempting to stop 'ora.ons' on '10gnode1'
CRS-2677: Stop of 'ora.ons' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on '10gnode1'
CRS-2677: Stop of 'ora.net1.network' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.eons' on '10gnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on '10gnode1' has completed
CRS-2677: Stop of 'ora.crsd' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on '10gnode1'
CRS-2673: Attempting to stop 'ora.asm' on '10gnode1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on '10gnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on '10gnode1'
CRS-2673: Attempting to stop 'ora.evmd' on '10gnode1'
CRS-2673: Attempting to stop 'ora.mdnsd' on '10gnode1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on '10gnode1'
CRS-2677: Stop of 'ora.cssdmonitor' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.asm' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on '10gnode1'
CRS-2677: Stop of 'ora.cssd' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on '10gnode1'
CRS-2673: Attempting to stop 'ora.gipcd' on '10gnode1'
CRS-2677: Stop of 'ora.gipcd' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.diskmon' on '10gnode1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '10gnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
OLR initialization - successful
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@10gnode1 ~]#
Once rootupgrade.sh completes on 10gnode1, run the same script on 10gnode2; this upgrades ASM as well.
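For reference, the call on the second node looks like this (the grid home path is assumed to match the first node):

[root@10gnode2 ~]# /u01/app/11.2.0/grid3/rootupgrade.sh

With rootupgrade.sh finished on both nodes, verify the cluster stack from the new grid home: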
[oracle@10gnode1 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[oracle@10gnode1 bin]$ ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
ora.FRA.dg
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
ora.LISTENER.lsnr
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
ora.asm
               ONLINE  ONLINE       10gnode1                 Started
               ONLINE  ONLINE       10gnode2                 Started
ora.eons
               ONLINE  OFFLINE      10gnode1
               ONLINE  OFFLINE      10gnode2
ora.gsd
               OFFLINE OFFLINE      10gnode1
               OFFLINE OFFLINE      10gnode2
ora.net1.network
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
ora.ons
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
ora.registry.acfs
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.10gnode1.vip
      1        ONLINE  ONLINE       10gnode1
ora.10gnode2.vip
      1        ONLINE  ONLINE       10gnode2
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       10gnode2
ora.oc4j
      1        OFFLINE OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       10gnode2
ora.srprim.db
      1        ONLINE  ONLINE       10gnode1                 Open
      2        ONLINE  ONLINE       10gnode2                 Open
ora.srprim.newsrv.svc
      1        ONLINE  ONLINE       10gnode1
      2        ONLINE  ONLINE       10gnode2
[oracle@10gnode1 upgrd11201_to_11203]$ cd /u01/app/11.2.0/grid3/bin/
[oracle@10gnode1 bin]$ ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
[oracle@10gnode1 bin]$
[oracle@10gnode1 bin]$ ./crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
[oracle@10gnode1 bin]$
[oracle@10gnode1 bin]$ ./crsctl query crs softwareversion
Oracle Clusterware version on node [10gnode1] is [11.2.0.3.0]
[oracle@10gnode1 bin]$ srvctl status database -d srprim
[oracle@10gnode1 bin]$ srvctl status database -d srprim -v -f
Instance srprim1 is running on node 10gnode1 with online services newsrv
Instance srprim2 is running on node 10gnode2 with online services newsrv
[oracle@10gnode1 bin]$
[oracle@10gnode1 bin]$ cd
[oracle@10gnode1 ~]$
[oracle@10gnode1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid3 is /u01/app/oracle
[oracle@10gnode1 ~]$
[oracle@10gnode1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.3.0 Production on Mon Dec 30 10:40:09 2013
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Real Application Clusters and Automatic Storage Management options
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL> select status,instance_name from v$instance;
STATUS INSTANCE_NAME
-------- -------------------
STARTED +ASM1
SQL> exit
[oracle@10gnode1 bin]$ srvctl status database -d srprim
[oracle@10gnode1 bin]$ srvctl status database -d srprim -v -f
Instance srprim1 is running on node 10gnode1 with online services newsrv
Instance srprim2 is running on node 10gnode2 with online services newsrv
[oracle@10gnode1 ~]$ . oraenv
ORACLE_SID = [srprim1] ?
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 is /u01/app/oracle
[oracle@10gnode1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon Dec 30 10:41:42 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> select status,instance_name,host_name from gv$instance;
STATUS INSTANCE_NAME HOST_NAME
------ ------------- ---------------------------------------------
OPEN srprim1 10gnode1.mydomain
OPEN srprim2 10gnode2.mydomain
SQL>
SQL> select * from v$version;
BANNER
------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production