Installing Oracle RAC 11.2.0.3 on OEL 6.3 and VirtualBox 4.2 with GNS

Linux and VirtualBox Installation

Check the following link for Linux/VirtualBox installation details: http://www.oracle-base.com/articles/11g/oracle-db-11gr2-rac-installation-on-oracle-linux-6-using-virtualbox.php

  • Install VirtualBox Guest Additions
  • Install package : # yum install oracle-rdbms-server-11gR2-preinstall
  • Update the installation: # yum update
  • Install Wireshark:  # yum install wireshark     # yum install wireshark-gnome
  • Install ASMlib
  • Install cluvfy as user grid – download here and extract files under user grid
  • Extract the grid software to folder grid and install the rpm from folder grid/rpm:
# cd /media/sf_kits/Oracle/11.2.0.4/grid/rpm
# rpm -iv cvuqdisk-1.0.9-1.rpm
Preparing packages for installation...
Using default group oinstall to install package
cvuqdisk-1.0.9-1
  • Verify the current OS status by running: $ ./bin/cluvfy stage -pre crsinst -n grac41

 

Check OS setting
Install X11 applications like xclock
# yum install xorg-x11-apps

Turn off and disable the firewall IPTables and disable SELinux
# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
# chkconfig iptables off
# chkconfig --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off

Disable SELinux. Open the config file and change the SELINUX variable from enforcing to disabled.
# vim /etc/selinux/config
 # This file controls the state of SELinux on the system.
 # SELINUX= can take one of these three values:
 #     enforcing - SELinux security policy is enforced.
 #     permissive - SELinux prints warnings instead of enforcing.
 #     disabled - No SELinux policy is loaded.
 SELINUX=disabled
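The same edit can be scripted so it is repeatable across all cluster nodes; a minimal sketch (the helper name and the overridable path argument are my own, so the change can be tried on a copy first):

```shell
# Flip SELINUX=... to disabled in an SELinux config file.
# Pass a path for a dry run; defaults to the real config file.
disable_selinux() {
    conf="${1:-/etc/selinux/config}"
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$conf"
}
```

Note that a reboot (or setenforce 0 for the running system) is still needed before the change takes effect.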

DNS Setup including BIND, NTP, DHCP in a LAN on a separate VirtualBox VM

Even if you are using DNS, Oracle recommends listing the public IP, VIP, and private addresses for each node in the hosts file on each node.

Domain:         example.com       Name Server: ns1.example.com            192.168.1.50
RAC Sub-Domain: grid.example.com  Name Server: gns.example.com            192.168.1.55
DHCP Server:    ns1.example.com
NTP  Server:    ns1.example.com
DHCP addresses: 192.168.1.100 ... 192.168.1.254

Configure DNS:
Identity     Home Node    Host Node                          Given Name                      Type        Address        Address Assigned By     Resolved By
GNS VIP        None        Selected by Oracle Clusterware    gns.example.com                 Virtual     192.168.1.55   Net administrator       DNS + GNS
Node 1 Public  Node 1      grac1                             grac1.example.com               Public      192.168.1.61   Fixed                   DNS
Node 1 VIP     Node 1      Selected by Oracle Clusterware    grac1-vip.grid.example.com      Virtual     Dynamic        DHCP                    GNS
Node 1 Private Node 1      grac1int                          grac1int.example.com            Private     192.168.2.71   Fixed                   DNS
Node 2 Public  Node 2      grac2                             grac2.example.com               Public      192.168.1.62   Fixed                   DNS
Node 2 VIP     Node 2      Selected by Oracle Clusterware    grac2-vip.grid.example.com      Virtual     Dynamic        DHCP                    GNS
Node 2 Private Node 2      grac2int                          grac2int.example.com            Private     192.168.2.72   Fixed                   DNS
SCAN VIP 1     none        Selected by Oracle Clusterware    GRACE2-scan.grid.example.com    Virtual     Dynamic        DHCP                    GNS
SCAN VIP 2     none        Selected by Oracle Clusterware    GRACE2-scan.grid.example.com    Virtual     Dynamic        DHCP                    GNS
SCAN VIP 3     none        Selected by Oracle Clusterware    GRACE2-scan.grid.example.com    Virtual     Dynamic        DHCP                    GNS

 

Note: the cluster node VIPs and SCAN addresses are obtained via DHCP; if GNS is up, all DHCP addresses should be resolvable with nslookup. If you have problems with zone delegation, add your GNS name server to /etc/resolv.conf.
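When checking reverse lookups against the address plan above, it helps to compute the expected in-addr.arpa names; a small sketch (the reverse_name helper is illustrative, not part of any Oracle tooling):

```shell
# Print the in-addr.arpa name for an IPv4 address, as used in the
# reverse zones below, e.g.:
#   reverse_name 192.168.2.71  ->  71.2.168.192.in-addr.arpa
reverse_name() {
    echo "$1" | awk -F. '{ printf "%s.%s.%s.%s.in-addr.arpa\n", $4, $3, $2, $1 }'
}
```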

Install BIND and DHCP – make sure the following rpms are installed:

dhcp-common-4.1.1-34.P1.0.1.el6.x86_64

bind-9.8.2-0.17.rc1.0.2.el6_4.4.x86_64

bind-libs-9.8.2-0.17.rc1.0.2.el6_4.4.x86_64

bind-utils-9.8.2-0.17.rc1.0.2.el6_4.4.x86_64
Install the BIND packages:

# rpm -Uvh bind-9.8.2-0.17.rc1.0.2.el6_4.4.x86_64.rpm bind-libs-9.8.2-0.17.rc1.0.2.el6_4.4.x86_64.rpm bind-utils-9.8.2-0.17.rc1.0.2.el6_4.4.x86_64.rpm

For a detailed description of using zone delegations, check the following link:

Configure DNS:

-> named.conf
options {
    listen-on port 53 {  192.168.1.50; 127.0.0.1; };
    directory     "/var/named";
    dump-file     "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query     {  any; };
    allow-recursion     {  any; };
    recursion yes;
    dnssec-enable no;
    dnssec-validation no;

};
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
zone "." IN {
    type hint;
    file "named.ca";
};
zone    "1.168.192.in-addr.arpa" IN { // Reverse zone
    type master;
    file "192.168.1.db";
        allow-transfer { any; };
    allow-update { none; };
};
zone    "2.168.192.in-addr.arpa" IN { // Reverse zone
    type master;
    file "192.168.2.db";
        allow-transfer { any; };
    allow-update { none; };
};
zone  "example.com" IN {
      type master;
       notify no;
       file "example.com.db";
};

-> Forward zone: example.com.db 
;
; see http://www.zytrax.com/books/dns/ch9/delegate.html 
; 
$TTL 1H         ; Time to live
$ORIGIN example.com.
@       IN      SOA     ns1.example.com.  hostmaster.example.com.  (
                        2009011202      ; serial (todays date + todays serial #)
                        3H              ; refresh 3 hours
                        1H              ; retry 1 hour
                        1W              ; expire 1 week
                        1D )            ; minimum 24 hour
;
        IN          A         192.168.1.50
        IN          NS        ns1.example.com. ; name server for example.com
ns1     IN          A         192.168.1.50
grac1   IN          A         192.168.1.61
grac2   IN          A         192.168.1.62
grac3   IN          A         192.168.1.63
;
$ORIGIN grid.example.com.
@       IN          NS        gns.grid.example.com. ; NS  grid.example.com
        IN          NS        ns1.example.com.      ; NS example.com
gns     IN          A         192.168.1.55 ; glue record

-> Reverse zone:  192.168.1.db 
$TTL 1H
@       IN      SOA     ns1.example.com.  hostmaster.example.com.  (
                        2009011201      ; serial (todays date + todays serial #)
                        3H              ; refresh 3 hours
                        1H              ; retry 1 hour
                        1W              ; expire 1 week
                        1D )            ; minimum 24 hour
; 
              NS        ns1.example.com.
              NS        gns.grid.example.com.
50            PTR       ns1.example.com.
55            PTR       gns.grid.example.com. ; reverse mapping for GNS
61            PTR       grac1.example.com.
62            PTR       grac2.example.com.
63            PTR       grac3.example.com.

-> Reverse zone:  192.168.2.db 
$TTL 1H
@       IN      SOA     ns1.example.com. hostmaster.example.com.  (
                        2009011201      ; serial (todays date + todays serial #)
                        3H              ; refresh 3 hours
                        1H              ; retry 1 hour
                        1W              ; expire 1 week
                        1D )            ; minimum 24 hour
; 
             NS        ns1.example.com.
71           PTR       grac1int.example.com.
72           PTR       grac2int.example.com.
73           PTR       grac3int.example.com.

->/etc/resolv.conf
# Generated by NetworkManager
search example.com
nameserver 192.168.1.50

Verify DNS (Note: the commands below were executed with a running GNS - i.e. GRID was already installed)
Check the current GNS status
#   /u01/app/11203/grid/bin/srvctl config gns -a -l
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5353 to connect to mDNS
GNS status: OK
Domain served by GNS: grid3.example.com
GNS version: 11.2.0.3.0
GNS VIP network: ora.net1.network
Name            Type Value
grac3-scan      A    192.168.1.220
grac3-scan      A    192.168.1.221
grac3-scan      A    192.168.1.222
grac3-scan1-vip A    192.168.1.220
grac3-scan2-vip A    192.168.1.221
grac3-scan3-vip A    192.168.1.222
grac31-vip      A    192.168.1.219
grac32-vip      A    192.168.1.224
grac33-vip      A    192.168.1.226


$ nslookup grac1.example.com
Name:    grac1.example.com
Address: 192.168.1.61
$ nslookup grac1int.example.com
Name:    grac1int.example.com
Address: 192.168.2.71
$ nslookup 192.168.2.71
71.2.168.192.in-addr.arpa    name = grac1int.example.com.
$ nslookup 192.168.2.72
72.2.168.192.in-addr.arpa    name = grac2int.example.com.
$ nslookup 192.168.2.73
73.2.168.192.in-addr.arpa    name = grac3int.example.com.
$ nslookup 192.168.1.61
61.1.168.192.in-addr.arpa    name = grac1.example.com.
$ nslookup 192.168.1.62
62.1.168.192.in-addr.arpa    name = grac2.example.com.
$ nslookup 192.168.1.63
63.1.168.192.in-addr.arpa    name = grac3.example.com.
$ nslookup grac1-vip.grid.example.com
Non-authoritative answer:
Name:    grac1-vip.grid.example.com
Address: 192.168.1.107
$ nslookup grac2-vip.grid.example.com
Non-authoritative answer:
Name:    grac2-vip.grid.example.com
Address: 192.168.1.112
$ nslookup GRACE2-scan.grid.example.com
Non-authoritative answer:
Name:    GRACE2-scan.grid.example.com
Address: 192.168.1.108
Name:    GRACE2-scan.grid.example.com
Address: 192.168.1.110
Name:    GRACE2-scan.grid.example.com
Address: 192.168.1.109
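The individual lookups above can be batched into a single pass over all planned names; a sketch (verify_names is a hypothetical helper, and the resolver command is a parameter so the loop can be exercised without a live DNS):

```shell
# Run a resolver (e.g. nslookup or host) against each name, print failures.
verify_names() {
    resolver="$1"; shift
    rc=0
    for name in "$@"; do
        if ! "$resolver" "$name" >/dev/null 2>&1; then
            echo "FAILED: $name"
            rc=1
        fi
    done
    return $rc
}
# Example against the live setup:
#   verify_names nslookup grac1.example.com grac2.example.com grac1int.example.com
```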

Use dig against the DNS name server - the DNS name server should use zone delegation
$ dig @192.168.1.50 GRACE2-scan.grid.example.com
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6 <<>> @192.168.1.50 GRACE2-scan.grid.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64626
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 2, ADDITIONAL: 1
;; QUESTION SECTION:
;GRACE2-scan.grid.example.com.    IN    A
;; ANSWER SECTION:
GRACE2-scan.grid.example.com. 1    IN    A    192.168.1.108
GRACE2-scan.grid.example.com. 1    IN    A    192.168.1.109
GRACE2-scan.grid.example.com. 1    IN    A    192.168.1.110
;; AUTHORITY SECTION:
grid.example.com.    3600    IN    NS    ns1.example.com.
grid.example.com.    3600    IN    NS    gns.grid.example.com.
;; ADDITIONAL SECTION:
ns1.example.com.    3600    IN    A    192.168.1.50
;; Query time: 0 msec
;; SERVER: 192.168.1.50#53(192.168.1.50)
;; WHEN: Sun Jul 28 13:50:26 2013
;; MSG SIZE  rcvd: 146

Use dig against GNS name server 
$ dig @192.168.1.55 GRACE2-scan.grid.example.com
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6 <<>> @192.168.1.55 GRACE2-scan.grid.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32138
;; flags: qr aa; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;GRACE2-scan.grid.example.com.    IN    A
;; ANSWER SECTION:
GRACE2-scan.grid.example.com. 120 IN    A    192.168.1.108
GRACE2-scan.grid.example.com. 120 IN    A    192.168.1.109
GRACE2-scan.grid.example.com. 120 IN    A    192.168.1.110
;; AUTHORITY SECTION:
grid.example.com.    10800    IN    SOA    GRACE2-gns-vip.grid.example.com. GRACE2-gns-vip.grid.example.com. 3173463 10800 10800 30 120
;; ADDITIONAL SECTION:
GRACE2-gns-vip.grid.example.com. 10800 IN A    192.168.1.55
;; Query time: 15 msec
;; SERVER: 192.168.1.55#53(192.168.1.55)
;; WHEN: Sun Jul 28 13:50:26 2013
;; MSG SIZE  rcvd: 161

Start the DNS server

# service named restart

Starting named:                                            [  OK  ]

Ensure the DNS service restarts on reboot

# chkconfig named on

# chkconfig --list named

named              0:off    1:off    2:on    3:on    4:on    5:on    6:off

Display all records for zone example.com with dig 

 

$ dig example.com AXFR
$ dig @192.168.1.55 grid.example.com AXFR
$ dig GRACE2-scan.grid.example.com

 

Configure DHCP server 

  • dhclient recreates /etc/resolv.conf. After testing dhclient, run $ service network restart so that /etc/resolv.conf is consistent on all cluster nodes.

 

Verify that you don't use a DHCP server from a bridged network
- Note: if VirtualBox bridged network devices use the same network address as the local router,
  the VirtualBox DHCP server is used (you can, of course, disable it):
  M:\VM> vboxmanage list bridgedifs
   Name:            Realtek PCIe GBE Family Controller
   GUID:            7e0af9ff-ea37-4e63-b2e5-5128c60ab300
   DHCP:            Enabled
   IPAddress:       192.168.1.4
   NetworkMask:     255.255.255.0

M:\VM\GRAC_OEL64_11203>ipconfig
   Windows IP Configuration
   Ethernet adapter LAN connection:
   Connection-specific DNS Suffix  . : speedport.ip
   Link-local IPv6 Address . . . . . : fe80::c52f:f681:bb0b:c358%11
   IPv4 Address  . . . . . . . . . . : 192.168.1.4
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.1.1

Solution: Use Internal Network devices instead of Bridged Network devices for the VirtualBox network setup


-> /etc/sysconfig/dhcpd
 # Command line options here
 DHCPDARGS="eth0"

-> /etc/dhcp/dhcpd.conf ( don't use option domain-name, as this will create a new resolv.conf )
 ddns-update-style interim;
 ignore client-updates;
 subnet 192.168.1.0 netmask 255.255.255.0 {
 option routers                  192.168.1.1;                    # Default gateway to be used by DHCP clients
 option subnet-mask              255.255.255.0;                  # Default subnet mask to be used by DHCP clients.
 option ip-forwarding            off;                            # Do not forward DHCP requests.
 option broadcast-address        192.168.1.255;                  # Default broadcast address to be used by DHCP client.
#  option domain-name              "grid.example.com"; 
 option domain-name-servers      192.168.1.50;                   # IP address of the DNS server. In this document it will be oralab1
 option time-offset              -19000;                           # Central Standard Time
 option ntp-servers              0.pool.ntp.org;                   # Default NTP server to be used by DHCP clients
 range                           192.168.1.100 192.168.1.254;    # Range of IP addresses that can be issued to DHCP client
 default-lease-time              21600;                            # Amount of time in seconds that a client may keep the IP address
 max-lease-time                  43200;
 }
 # service dhcpd restart
 # chkconfig dhcpd on

Test on all cluster instances:
 # dhclient eth0
 Check /var/log/messages
 #  tail -f /var/log/messages
 Jul  8 12:46:09 gns dhclient[3909]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 7 (xid=0x6fb12d80)
 Jul  8 12:46:09 gns dhcpd: DHCPDISCOVER from 08:00:27:e6:71:54 via eth0
 Jul  8 12:46:10 gns dhcpd: 0.pool.ntp.org: temporary name server failure
 Jul  8 12:46:10 gns dhcpd: DHCPOFFER on 192.168.1.100 to 08:00:27:e6:71:54 via eth0
 Jul  8 12:46:10 gns dhclient[3909]: DHCPOFFER from 192.168.1.50
 Jul  8 12:46:10 gns dhclient[3909]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x6fb12d80)
 Jul  8 12:46:10 gns dhcpd: DHCPREQUEST for 192.168.1.100 (192.168.1.50) from 08:00:27:e6:71:54 via eth0
 Jul  8 12:46:10 gns dhcpd: DHCPACK on 192.168.1.100 to 08:00:27:e6:71:54 via eth0
 Jul  8 12:46:10 gns dhclient[3909]: DHCPACK from 192.168.1.50 (xid=0x6fb12d80)
 Jul  8 12:46:12 gns avahi-daemon[1407]: Registering new address record for 192.168.1.100 on eth0.IPv4.
 Jul  8 12:46:12 gns NET[3962]: /sbin/dhclient-script : updated /etc/resolv.conf
 Jul  8 12:46:12 gns dhclient[3909]: bound to 192.168.1.100 -- renewal in 9071 seconds.
 Jul  8 12:46:13 gns ntpd[2051]: Listening on interface #6 eth0, 192.168.1.100#123 Enabled
  • Verify that the right DHCP server is in use ( at least check the bound and renewal values )
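To confirm which DHCP server actually answered (and not, say, the VirtualBox one), the DHCPACK lines in the log can be parsed; a sketch (ack_server is an illustrative helper of my own):

```shell
# Print the server address from dhclient DHCPACK syslog lines on stdin.
ack_server() {
    sed -n 's/.*DHCPACK from \([0-9.]*\).*/\1/p'
}
# Example: grep 'dhclient.*DHCPACK' /var/log/messages | ack_server
# Should print 192.168.1.50 (our DNS/DHCP VM), not a VirtualBox DHCP address.
```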

NTP Setup – Server: gns.example.com

# cat /etc/ntp.conf
 restrict default nomodify notrap noquery
 restrict 127.0.0.1
 # -- CLIENT NETWORK -------
 restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
 # --- OUR TIMESERVERS -----  can't reach NTP servers - build my own server
 #server 0.pool.ntp.org iburst
 #server 1.pool.ntp.org iburst
 server 127.127.1.0
 # --- NTP MULTICASTCLIENT ---
 # --- GENERAL CONFIGURATION ---
 # Undisciplined Local Clock.
 fudge   127.127.1.0 stratum 9
 # Drift file.
 driftfile /var/lib/ntp/drift
 broadcastdelay  0.008
 # Keys file.
 keys /etc/ntp/keys
 # chkconfig ntpd on
 # ntpq -p
 remote           refid      st t when poll reach   delay   offset  jitter
 ==============================================================================
 *LOCAL(0)        .LOCL.           9 l   11   64  377    0.000    0.000   0.000
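Note that cluvfy's clock synchronization check expects ntpd to run with the slewing option (-x) on all RAC nodes; on OEL 6 this is set in /etc/sysconfig/ntpd (a sketch of the relevant line):

```shell
# /etc/sysconfig/ntpd -- slew the clock (-x) instead of stepping it,
# as expected by Oracle Clusterware's time synchronization check.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
```

Restart ntpd afterwards: # service ntpd restart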

NTP Setup - Clients: grac1.example.com, grac2.example.com,  ...
 # cat /etc/ntp.conf
 restrict default nomodify notrap noquery
 restrict 127.0.0.1
 # -- CLIENT NETWORK -------
 # --- OUR TIMESERVERS -----
 # 192.168.1.2 is the address for my timeserver,
 # use the address of your own, instead:
 server 192.168.1.50
 server  127.127.1.0
 # --- NTP MULTICASTCLIENT ---
 # --- GENERAL CONFIGURATION ---
 # Undisciplined Local Clock.
 fudge   127.127.1.0 stratum 12
 # Drift file.
 driftfile /var/lib/ntp/drift
 broadcastdelay  0.008
 # Keys file.
 keys /etc/ntp/keys
 # ntpq -p
 remote           refid      st t when poll reach   delay   offset  jitter
 ==============================================================================
 gns.example.com LOCAL(0)        10 u   22   64    1    2.065  -11.015   0.000
 LOCAL(0)        .LOCL.          12 l   21   64    1    0.000    0.000   0.000

Add to  our /etc/rc.local
#
service ntpd stop
ntpdate -u 192.168.1.50 
service ntpd start

 

Verify GNS setup with cluvfy:

$ ./bin/cluvfy comp gns -precrsinst -domain grid.example.com -vip 192.168.2.100 -verbose -n grac1,grac2
 Verifying GNS integrity
 Checking GNS integrity...
 Checking if the GNS subdomain name is valid...
 The GNS subdomain name "grid.example.com" is a valid domain name
 Checking if the GNS VIP is a valid address...
 GNS VIP "192.168.2.100" resolves to a valid IP address
 Checking the status of GNS VIP...
 GNS integrity check passed
 Verification of GNS integrity was successful.

 

Setup User Accounts

NOTE: Oracle recommends different users for the installation of the Grid Infrastructure (GI) and the Oracle RDBMS home. The GI will be installed in a separate Oracle base, owned by user 'grid'. After the grid install the GI home will be owned by root, and inaccessible to unauthorized users.

Create OS groups using the command below. Enter these commands as the 'root' user:
  #/usr/sbin/groupadd -g 501 oinstall
  #/usr/sbin/groupadd -g 502 dba
  #/usr/sbin/groupadd -g 504 asmadmin
  #/usr/sbin/groupadd -g 506 asmdba
  #/usr/sbin/groupadd -g 507 asmoper

Create the users that will own the Oracle software using the commands:
  #/usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
  #/usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle
  $ id
  uid=501(grid) gid=54321(oinstall) groups=54321(oinstall),504(asmadmin),506(asmdba),507(asmoper)
  $ id
  uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),501(vboxsf),506(adba),54322(dba)

For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:
  if ( $USER = "oracle" || $USER = "grid" ) then
  limit maxproc 16384
  limit descriptors 65536
  endif

Modify  /etc/security/limits.conf
  # oracle-rdbms-server-11gR2-preinstall setting for nofile soft limit is 1024
  oracle   soft   nofile    1024
  grid   soft   nofile    1024
  # oracle-rdbms-server-11gR2-preinstall setting for nofile hard limit is 65536
  oracle   hard   nofile    65536
  grid   hard   nofile    65536
  # oracle-rdbms-server-11gR2-preinstall setting for nproc soft limit is 2047
  oracle   soft   nproc    2047
  grid     soft   nproc    2047
  # oracle-rdbms-server-11gR2-preinstall setting for nproc hard limit is 16384
  oracle   hard   nproc    16384
  grid     hard   nproc    16384
  # oracle-rdbms-server-11gR2-preinstall setting for stack soft limit is 10240KB
  oracle   soft   stack    10240
  grid     soft   stack    10240
  # oracle-rdbms-server-11gR2-preinstall setting for stack hard limit is 32768KB
  oracle   hard   stack    32768
  grid     hard   stack    32768

Create Directories:
 - Have a separate ORACLE_BASE for both GRID and RDBMS install !
Create the Oracle Inventory Directory ( pre-create it, or the 11.2.0.3 installer will create it itself ) 
To create the Oracle Inventory directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oraInventory
  # chown -R grid:oinstall /u01/app/oraInventory

Creating the Oracle Grid Infrastructure Home Directory
To create the Grid Infrastructure home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/grid
  # chown -R grid:oinstall /u01/app/grid
  # chmod -R 775 /u01/app/grid
  # mkdir -p /u01/app/11203/grid
  # chown -R grid:oinstall /u01/app/11203/grid
  # chmod -R 775 /u01/app/11203/grid

Creating the Oracle Base Directory
  To create the Oracle Base directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle
  # chown -R oracle:oinstall /u01/app/oracle
  # chmod -R 775 /u01/app/oracle

Creating the Oracle RDBMS Home Directory
  To create the Oracle RDBMS Home directory, enter the following commands as the root user:
  # mkdir -p /u01/app/oracle/product/11203/racdb
  # chown -R oracle:oinstall /u01/app/oracle/product/11203/racdb
  # chmod -R 775 /u01/app/oracle/product/11203/racdb
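The four directory blocks above can be collapsed into one script that runs identically on every node; a sketch (make_oracle_dirs is my own wrapper, and the root argument exists only so the layout can be tried outside /u01):

```shell
# Create the inventory, grid home, and RDBMS home trees in one pass.
make_oracle_dirs() {
    root="${1:-/u01}"
    mkdir -p "$root/app/oraInventory" \
             "$root/app/grid" \
             "$root/app/11203/grid" \
             "$root/app/oracle/product/11203/racdb"
    chown -R grid:oinstall   "$root/app/oraInventory" "$root/app/grid" "$root/app/11203/grid"
    chown -R oracle:oinstall "$root/app/oracle"
    chmod -R 775 "$root/app/grid" "$root/app/11203/grid" "$root/app/oracle"
}
# On a cluster node: make_oracle_dirs
```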

Add "divider=10" to /boot/grub/grub.conf
Finally, add "divider=10" to the boot parameters in grub.conf to improve VM performance. 
This is often recommended as a way to reduce host CPU utilization when a VM is idle, but 
it also improves overall guest performance. When I tried my first run-through of this 
process without this parameter enabled, the cluster configuration script bogged down 
terribly, and failed midway through creating the database.
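The grub.conf edit can also be done with sed on each node; a sketch (add_divider is an illustrative helper, written to be idempotent so re-runs don't duplicate the flag):

```shell
# Append divider=10 to every kernel line in grub.conf that lacks it.
add_divider() {
    conf="${1:-/boot/grub/grub.conf}"
    sed -i '/^[[:space:]]*kernel/{/divider=10/!s/$/ divider=10/;}' "$conf"
}
```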

Verify Initial Virtualbox Image using cluvfy
  Install cluvfy as the grid owner (grid) in ~/cluvfy112

Check the minimum system requirements for our first VirtualBox image by running cluvfy:
$ ./bin/cluvfy comp sys -p crs -n grac1
Verifying system requirement 
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "grac1:/u01/app/11203/grid,grac1:/tmp"
Check for multiple users with UID value 501 passed 
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value 0 passed 
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Time zone consistency check passed
Verification of system requirement was successful.

 

 Setup ASM disks
Create ASM disks
  Note : Create all ASM disks on my SSD device ( C:\VM\GRACE2\ASM ) 
  Create 6 ASM disks : 
    3 disks with 5 Gbyte each   
    3 disks with 2 Gbyte each   
D:\VM>set_it
D:\VM>set path="d:\Program Files\Oracle\VirtualBox";D:\Windows\system32;D:\Windows;D:\Windows\System32\Wbem;D:\Windows\System32\WindowsPowerShell\v1.0\;D:\Program Files (x86)\IDM Computer Solutions\UltraEdit\

D:\VM>VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm1_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 7c9711c7-14e9-4bc4-8390-3e7dbb2ad130
D:\VM>VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm2_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 5c801291-7083-4030-9221-cfab1460f527
D:\VM>VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm3_5G.vdi --size 5120 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 28b0e0b4-c9ae-474e-b339-d742a10bb120
D:\VM>VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm1_2G.vdi --size 2048 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: acc2b925-fa58-4d5f-966f-1c9cac014d1b
D:\VM>VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm2_2G.vdi --size 2048 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: a93f5fd8-bb10-4421-af07-3dfe4fc0d740
D:\VM>VBoxManage createhd --filename C:\VM\GRACE2\ASM\asm3_2G.vdi --size 2048 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 89c0f4cd-569e-4a30-9b6e-5ce3044fcde5
D:\VM>dir  C:\VM\GRACE2\ASM\*
 Volume in drive C has no label.
 Volume Serial Number is 20BF-FC17
 Directory of C:\VM\GRACE2\ASM
13.07.2013  13:00     2.147.495.936 asm1_2G.vdi
13.07.2013  12:56     5.368.733.696 asm1_5G.vdi
13.07.2013  13:00     2.147.495.936 asm2_2G.vdi
13.07.2013  12:57     5.368.733.696 asm2_5G.vdi
13.07.2013  13:00     2.147.495.936 asm3_2G.vdi
13.07.2013  12:59     5.368.733.696 asm3_5G.vdi
Attach the disks to VM grac1:
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 1  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm1_5G.vdi
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 2  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm2_5G.vdi
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 3  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm3_5G.vdi
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 4  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm1_2G.vdi
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 5  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm2_2G.vdi
D:\VM>VBoxManage storageattach grac1 --storagectl "SATA" --port 6  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm3_2G.vdi

Change the disk type to shareable:
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm1_5G.vdi --type shareable
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm2_5G.vdi --type shareable
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm3_5G.vdi --type shareable
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm1_2G.vdi --type shareable
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm2_2G.vdi --type shareable
D:\VM>VBoxManage modifyhd C:\VM\GRACE2\ASM\asm3_2G.vdi --type shareable
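The twelve storageattach/modifyhd calls follow a strict pattern, so they can be generated in a loop; a sketch (gen_disk_cmds is illustrative and only prints the commands, so it can be inspected first and then piped to a shell on the host where VBoxManage lives):

```shell
# Emit the attach + shareable commands for the six ASM disks.
gen_disk_cmds() {
    asm_dir="$1"   # e.g. C:/VM/GRACE2/ASM
    port=1
    for disk in asm1_5G asm2_5G asm3_5G asm1_2G asm2_2G asm3_2G; do
        echo "VBoxManage storageattach grac1 --storagectl SATA --port $port --device 0 --type hdd --medium $asm_dir/$disk.vdi"
        echo "VBoxManage modifyhd $asm_dir/$disk.vdi --type shareable"
        port=$((port + 1))
    done
}
# Dry run:  gen_disk_cmds C:/VM/GRACE2/ASM
# Execute:  gen_disk_cmds C:/VM/GRACE2/ASM | sh
```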
Reboot and format disks
 # ls /dev/sd*
/dev/sda   /dev/sda2  /dev/sdb  /dev/sdd  /dev/sdf
/dev/sda1  /dev/sda3  /dev/sdc  /dev/sde  /dev/sdg
# fdisk /dev/sdb
  Command (m for help): n
  Command action
   e   extended
   p   primary partition (1-4)
  p 
  Partition number (1-4): 1
  First sector (2048-10485759, default 2048): 
  Using default value 2048
  Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): 
  Using default value 10485759
  Command (m for help): w
  The partition table has been altered!
  In each case, the sequence of answers is "n", "p", "1", "Return", "Return" and "w".
  Repeat steps for : /dev/sdb -> /dev/sdg
#  ls /dev/sd*
/dev/sda   /dev/sda3  /dev/sdc   /dev/sdd1  /dev/sdf   /dev/sdg1
/dev/sda1  /dev/sdb   /dev/sdc1  /dev/sde   /dev/sdf1
/dev/sda2  /dev/sdb1  /dev/sdd   /dev/sde1  /dev/sdg
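The interactive fdisk dialogue ("n", "p", "1", Return, Return, "w") can also be fed non-interactively to partition all six disks in one go; a sketch (partition_disks is an illustrative helper and destructive, so double-check the device names first):

```shell
# Feed the documented answer sequence to fdisk for each device.
partition_disks() {
    for dev in "$@"; do
        printf 'n\np\n1\n\n\nw\n' | fdisk "$dev"
    done
}
# Usage: partition_disks /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
```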

 

Configure ASMLib and Disks

# /usr/sbin/oracleasm configure -i

#  /etc/init.d/oracleasm createdisk data1 /dev/sdb1
Marking disk "data1" as an ASM disk:                       [  OK  ]
#  /etc/init.d/oracleasm createdisk data2 /dev/sdc1
Marking disk "data2" as an ASM disk:                       [  OK  ]
# /etc/init.d/oracleasm createdisk data3 /dev/sdd1
Marking disk "data3" as an ASM disk:                       [  OK  ]
#  /etc/init.d/oracleasm createdisk ocr1 /dev/sde1
Marking disk "ocr1" as an ASM disk:                        [  OK  ]
# /etc/init.d/oracleasm createdisk ocr2  /dev/sdf1
Marking disk "ocr2" as an ASM disk:                        [  OK  ]
[root@grac1 Desktop]#  /etc/init.d/oracleasm createdisk ocr3 /dev/sdg1
Marking disk "ocr3" as an ASM disk:                        [  OK  ]

# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
OCR1
OCR2
OCR3

# ls -l /dev/oracleasm/disks
total 0
brw-rw---- 1 grid asmadmin 8, 17 Jul 13 16:32 DATA1
brw-rw---- 1 grid asmadmin 8, 33 Jul 13 16:32 DATA2
brw-rw---- 1 grid asmadmin 8, 49 Jul 13 16:33 DATA3
brw-rw---- 1 grid asmadmin 8, 65 Jul 13 16:33 OCR1
brw-rw---- 1 grid asmadmin 8, 81 Jul 13 16:33 OCR2
brw-rw---- 1 grid asmadmin 8, 97 Jul 13 16:33 OCR3

#  /etc/init.d/oracleasm status 
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[root@grac1 Desktop]# /etc/init.d/oracleasm listdisks
DATA1
DATA2
DATA3
OCR1
OCR2
OCR3

# /etc/init.d/oracleasm querydisk -d DATA1
Disk "DATA1" is a valid ASM disk on device [8, 17]
# /etc/init.d/oracleasm querydisk -d DATA2
Disk "DATA2" is a valid ASM disk on device [8, 33]
# /etc/init.d/oracleasm querydisk -d DATA3
Disk "DATA3" is a valid ASM disk on device [8, 49]
# /etc/init.d/oracleasm querydisk -d OCR1
Disk "OCR1" is a valid ASM disk on device [8, 65]
# /etc/init.d/oracleasm querydisk -d OCR2
# /etc/init.d/oracleasm querydisk -d OCR3
Disk "OCR3" is a valid ASM disk on device [8, 97]
# /etc/init.d/oracleasm  scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
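The [8, N] device pairs reported by querydisk map straight back to the partitions stamped above: major 8 is the Linux SCSI disk driver, and each disk occupies 16 minor numbers (the whole disk plus up to 15 partitions). A small sketch of the arithmetic (minor_to_dev is a made-up helper for illustration):

```shell
# Translate an sd-driver minor number (major 8) back to its /dev name.
# Each SCSI disk spans 16 minors: 16*d is the whole disk sd<letter>,
# 16*d+1 .. 16*d+15 are its partitions.
minor_to_dev() {
  local minor=$1
  local disk=$(( minor / 16 ))
  local part=$(( minor % 16 ))
  local letters=abcdefghijklmnopqrstuvwxyz   # covers sda..sdz only
  local name="sd${letters:$disk:1}"
  [ "$part" -ne 0 ] && name="$name$part"
  echo "/dev/$name"
}

minor_to_dev 17   # DATA1 -> /dev/sdb1
minor_to_dev 97   # OCR3  -> /dev/sdg1
```

This makes it easy to cross-check the querydisk output against the createdisk commands above.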

 

Clone VirtualBox Image
Shut down the grac1 VM and manually clone the "grac1.vdi" disk using the following commands on the host server.
D:\VM> set_it
D:\VM> md D:\VM\GNS_RACE2\grac2

D:\VM> VBoxManage clonehd D:\VM\GNS_RACE2\grac1\grac1.vdi d:\VM\GNS_RACE2\grac2\grac2.vdi
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VDI'. UUID: 0d626e95-9354-4f65-8fc0-e40ba44e1
Create a new VM named grac2 using the cloned disk grac2.vdi.

Attach disk to VM: grac2
D:\VM>VBoxManage storageattach grac2 --storagectl "SATA" --port 1  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm1_5G.vdi
D:\VM>VBoxManage storageattach grac2 --storagectl "SATA" --port 2  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm2_5G.vdi
D:\VM>VBoxManage storageattach grac2 --storagectl "SATA" --port 3  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm3_5G.vdi
D:\VM>VBoxManage storageattach grac2 --storagectl "SATA" --port 4  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm1_2G.vdi
D:\VM>VBoxManage storageattach grac2 --storagectl "SATA" --port 5  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm2_2G.vdi
D:\VM>VBoxManage storageattach grac2 --storagectl "SATA" --port 6  --device 0 --type hdd --medium C:\VM\GRACE2\ASM\asm3_2G.vdi 
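The six storageattach calls differ only in port number and disk name, so they can be generated with a loop. A sketch in bash (the original session uses the Windows cmd shell); gen_attach_cmds is a made-up helper that only prints the commands, so you can review them before running:

```shell
# Emit the VBoxManage storageattach commands for grac2's shared ASM disks.
# Review the output, then run each line (or pipe the output to a shell).
gen_attach_cmds() {
  local port=1 vdi
  for vdi in asm1_5G asm2_5G asm3_5G asm1_2G asm2_2G asm3_2G; do
    echo "VBoxManage storageattach grac2 --storagectl SATA --port $port" \
         "--device 0 --type hdd --medium C:\\VM\\GRACE2\\ASM\\$vdi.vdi"
    port=$((port + 1))
  done
}
gen_attach_cmds
```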
Start the "grac2" virtual machine by clicking the "Start" button on the toolbar. Ignore any network errors during the startup.
Log in to the "grac2" virtual machine as the "root" user so we can reconfigure the network settings to match the following.
    hostname: grac2.example.com
    IP Address eth0: 192.168.1.62 (public address)
    Default Gateway eth0: 192.168.1.1 (public address)
    IP Address eth1: 192.168.2.102 (private address)
    Default Gateway eth1: none
Amend the hostname in the "/etc/sysconfig/network" file.
    NETWORKING=yes
    HOSTNAME=grac2.example.com 
Check the MAC address of each of the available network connections. Don't worry that they are listed as "eth2" and "eth3". These are dynamically created connections because the MAC address of the "eth0" and "eth1" connections is incorrect.

# ifconfig -a | grep eth
eth2      Link encap:Ethernet  HWaddr 08:00:27:1F:2E:33  
eth3      Link encap:Ethernet  HWaddr 08:00:27:8E:6D:24  
Edit the "/etc/sysconfig/network-scripts/ifcfg-eth0", amending only the IPADDR and HWADDR settings as follows and deleting the UUID entry. Note, the HWADDR value comes from the "eth2" interface displayed above.
    IPADDR=192.168.1.62
    HWADDR=08:00:27:1F:2E:33 
Edit the "/etc/sysconfig/network-scripts/ifcfg-eth1", amending only the IPADDR and HWADDR settings as follows and deleting the UUID entry. Note, the HWADDR value comes from the "eth3" interface displayed above.
    HWADDR=08:00:27:8E:6D:24
    IPADDR=192.168.2.102
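The same three edits (new IPADDR, new HWADDR, drop the stale UUID line) apply to both files, so they can be scripted. A minimal sketch, assuming the ifcfg layout shown above; fix_ifcfg is a made-up helper, and you should test it on a copy of the file first:

```shell
# Patch an ifcfg file in place: set IPADDR and HWADDR, remove the old UUID.
# Usage: fix_ifcfg <file> <new-ip> <new-mac>
fix_ifcfg() {
  sed -i -e "s/^IPADDR=.*/IPADDR=$2/" \
         -e "s/^HWADDR=.*/HWADDR=$3/" \
         -e '/^UUID=/d' "$1"
}

# For this walkthrough's grac2 values:
#   fix_ifcfg /etc/sysconfig/network-scripts/ifcfg-eth0 192.168.1.62  08:00:27:1F:2E:33
#   fix_ifcfg /etc/sysconfig/network-scripts/ifcfg-eth1 192.168.2.102 08:00:27:8E:6D:24
```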
Update .login for the grid user to set the local ASM SID:
 setenv ORACLE_SID +ASM2
Remove udev rules:
# rm  /etc/udev/rules.d/70-persistent-net.rules
# reboot
Verify network devices ( use graphical tool if needed for changes )
# ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:1F:2E:33  
          inet addr:192.168.1.62  Bcast:192.168.1.255  Mask:255.255.255.0
..
eth1      Link encap:Ethernet  HWaddr 08:00:27:8E:6D:24  
          inet addr:192.168.2.102  Bcast:192.168.2.255  Mask:255.255.255.0 
..

Check NTP
$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 gns.example.com LOCAL(0)        10 u   30   64    1    0.462  2233.72   0.000
 LOCAL(0)        .LOCL.          12 l   29   64    1    0.000    0.000   0.000
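Besides ntpq -p, the cluvfy run later checks that ntpd was started with the slewing option (-x). On OEL 6 that flag lives in the OPTIONS line of /etc/sysconfig/ntpd; a small sketch of a check (has_slew_option is a made-up helper):

```shell
# Succeed if an /etc/sysconfig/ntpd-style file starts ntpd with -x (slewing).
has_slew_option() {
  grep -q '^OPTIONS=.*-x' "$1"
}

# Typical usage on each cluster node:
#   has_slew_option /etc/sysconfig/ntpd || echo "add -x to OPTIONS and restart ntpd"
```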

Check DHCP
$ grep -i dhcp /var/log/messages
Jul 15 19:12:21 grac1 NetworkManager[1528]: <info> Activation (eth2) Beginning DHCPv4 transaction
Jul 15 19:12:21 grac1 NetworkManager[1528]: <info> Activation (eth2) DHCPv4 will time out in 45 seconds
Jul 15 19:12:21 grac1 NetworkManager[1528]: <info> Activation (eth3) Beginning DHCPv4 transaction
Jul 15 19:12:21 grac1 NetworkManager[1528]: <info> Activation (eth3) DHCPv4 will time out in 45 seconds
Jul 15 19:12:21 grac1 dhclient[1547]: Internet Systems Consortium DHCP Client 4.1.1-P1
Jul 15 19:12:21 grac1 dhclient[1547]: For info, please visit https://www.isc.org/software/dhcp/
Jul 15 19:12:21 grac1 dhclient[1537]: Internet Systems Consortium DHCP Client 4.1.1-P1
Jul 15 19:12:21 grac1 dhclient[1537]: For info, please visit https://www.isc.org/software/dhcp/
Jul 15 19:12:21 grac1 NetworkManager[1528]: <info> (eth2): DHCPv4 state changed nbi -> preinit
Jul 15 19:12:21 grac1 NetworkManager[1528]: <info> (eth3): DHCPv4 state changed nbi -> preinit
Jul 15 19:12:22 grac1 dhclient[1537]: DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 4 (xid=0x5ddfdccc)
Jul 15 19:12:23 grac1 dhclient[1547]: DHCPDISCOVER on eth3 to 255.255.255.255 port 67 interval 5 (xid=0x5c751799)
Jul 15 19:12:26 grac1 dhclient[1537]: DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 11 (xid=0x5ddfdccc)
Jul 15 19:12:28 grac1 dhclient[1547]: DHCPDISCOVER on eth3 to 255.255.255.255 port 67 interval 11 (xid=0x5c751799)
Jul 15 19:12:32 grac1 dhclient[1537]: DHCPOFFER from 192.168.1.50
Jul 15 19:12:32 grac1 dhclient[1537]: DHCPREQUEST on eth2 to 255.255.255.255 port 67 (xid=0x5ddfdccc)
Jul 15 19:12:32 grac1 dhclient[1537]: DHCPACK from 192.168.1.50 (xid=0x5ddfdccc)
Jul 15 19:12:32 grac1 NetworkManager[1528]: <info> (eth2): DHCPv4 state changed preinit -> bound
Jul 15 19:12:33 grac1 dhclient[1547]: DHCPOFFER from 192.168.1.50
Jul 15 19:12:33 grac1 dhclient[1547]: DHCPREQUEST on eth3 to 255.255.255.255 port 67 (xid=0x5c751799)
Jul 15 19:12:33 grac1 dhclient[1547]: DHCPACK from 192.168.1.50 (xid=0x5c751799)
Jul 15 19:12:33 grac1 NetworkManager[1528]: <info> (eth3): DHCPv4 state changed preinit -> bound
Jul 15 19:27:53 grac2 NetworkManager[1617]: <info> Activation (eth2) Beginning DHCPv4 transaction
Jul 15 19:27:53 grac2 NetworkManager[1617]: <info> Activation (eth2) DHCPv4 will time out in 45 seconds
Jul 15 19:27:53 grac2 dhclient[1637]: Internet Systems Consortium DHCP Client 4.1.1-P1
Jul 15 19:27:53 grac2 dhclient[1637]: For info, please visit https://www.isc.org/software/dhcp/
Jul 15 19:27:53 grac2 dhclient[1637]: DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 4 (xid=0x44e12e9)
Jul 15 19:27:53 grac2 NetworkManager[1617]: <info> (eth2): DHCPv4 state changed nbi -> preinit
Jul 15 19:27:57 grac2 dhclient[1637]: DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 10 (xid=0x44e12e9)
Jul 15 19:28:03 grac2 dhclient[1637]: DHCPOFFER from 192.168.1.50
Jul 15 19:28:03 grac2 dhclient[1637]: DHCPREQUEST on eth2 to 255.255.255.255 port 67 (xid=0x44e12e9)
Jul 15 19:28:03 grac2 dhclient[1637]: DHCPACK from 192.168.1.50 (xid=0x44e12e9)
Jul 15 19:28:03 grac2 NetworkManager[1617]: <info> (eth2): DHCPv4 state changed preinit -> bound
Jul 15 19:32:52 grac2 NetworkManager[1690]: <info> Activation (eth0) Beginning DHCPv4 transaction
Jul 15 19:32:52 grac2 NetworkManager[1690]: <info> Activation (eth0) DHCPv4 will time out in 45 seconds
Jul 15 19:32:52 grac2 dhclient[1703]: Internet Systems Consortium DHCP Client 4.1.1-P1
Jul 15 19:32:52 grac2 dhclient[1703]: For info, please visit https://www.isc.org/software/dhcp/
Jul 15 19:32:52 grac2 dhclient[1703]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 6 (xid=0x6781ea4f)
Jul 15 19:32:52 grac2 NetworkManager[1690]: <info> (eth0): DHCPv4 state changed nbi -> preinit
Jul 15 19:32:58 grac2 dhclient[1703]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 12 (xid=0x6781ea4f)
Jul 15 19:33:02 grac2 dhclient[1703]: DHCPOFFER from 192.168.1.50
Jul 15 19:33:02 grac2 dhclient[1703]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x6781ea4f)
Jul 15 19:33:02 grac2 dhclient[1703]: DHCPACK from 192.168.1.50 (xid=0x6781ea4f)
Jul 15 19:33:02 grac2 NetworkManager[1690]: <info> (eth0): DHCPv4 state changed preinit -> bound
Jul 15 19:37:56 grac2 NetworkManager[1690]: <info> (eth0): canceled DHCP transaction, DHCP client pid 1703
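The interesting lines in that flood are the DHCPACKs. A one-liner to summarise which node got a lease from which DHCP server (ack_summary is a made-up helper; the field positions assume the standard syslog layout shown above):

```shell
# Summarise DHCPACK lines from a syslog stream as "<node> <dhcp-server>".
# In "Jul 15 19:12:32 grac1 dhclient[1537]: DHCPACK from 192.168.1.50 (xid=...)"
# field $4 is the hostname and $8 is the DHCP server address.
ack_summary() {
  awk '/DHCPACK from/ {print $4, $8}' | sort -u
}

# e.g.: ack_summary < /var/log/messages
```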
Rerun cluvfy for the 2nd node and test GNS connectivity:

Verify GNS: 
$ ./bin/cluvfy comp gns -precrsinst -domain oracle-gns.example.com -vip 192.168.2.72 -verbose -n grac2
Verifying GNS integrity 
Checking GNS integrity...
Checking if the GNS subdomain name is valid...
The GNS subdomain name "oracle-gns.example.com" is a valid domain name
Checking if the GNS VIP is a valid address...
GNS VIP "192.168.2.72" resolves to a valid IP address
Checking the status of GNS VIP...
GNS integrity check passed
Verification of GNS integrity was successful. 

Verify CRS prerequisites for both nodes using the newly created ASM disks and the asmadmin group 
$ ./bin/cluvfy stage -pre crsinst -n grac1,grac2 -asm -asmgrp asmadmin -asmdev /dev/oracleasm/disks/DATA1,/dev/oracleasm/disks/DATA2,/dev/oracleasm/disks/DATA3
Performing pre-checks for cluster services setup 
Checking node reachability...
Node reachability check passed from node "grac1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "192.168.1.0" with node(s) grac2,grac1
TCP connectivity check passed for subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.2.0" with node(s) grac2,grac1
TCP connectivity check passed for subnet "192.168.2.0"
Node connectivity passed for subnet "169.254.0.0" with node(s) grac2,grac1
TCP connectivity check passed for subnet "169.254.0.0"
Interfaces found on subnet "169.254.0.0" that are likely candidates for VIP are:
grac2 eth1:169.254.86.205
grac1 eth1:169.254.168.215
Interfaces found on subnet "192.168.2.0" that are likely candidates for a private interconnect are:
grac2 eth1:192.168.2.102
grac1 eth1:192.168.2.101
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed for subnet "169.254.0.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "grac2:/u01/app/11203/grid,grac2:/tmp"
Free disk space check passed for "grac1:/u01/app/11203/grid,grac1:/tmp"
Check for multiple users with UID value 501 passed 
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Group existence check passed for "asmadmin"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Membership check for user "grid" in group "asmadmin" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value passed 
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Package existence check passed for "cvuqdisk"
Checking Devices for ASM...
Checking for shared devices...
  Device                                Device Type             
  ------------------------------------  ------------------------
  /dev/oracleasm/disks/DATA3            Disk                    
  /dev/oracleasm/disks/DATA2            Disk                    
  /dev/oracleasm/disks/DATA1            Disk                    
Checking consistency of device owner across all nodes...
Consistency check of device owner for "/dev/oracleasm/disks/DATA3" PASSED
Consistency check of device owner for "/dev/oracleasm/disks/DATA1" PASSED
Consistency check of device owner for "/dev/oracleasm/disks/DATA2" PASSED
Checking consistency of device group across all nodes...
Consistency check of device group for "/dev/oracleasm/disks/DATA3" PASSED
Consistency check of device group for "/dev/oracleasm/disks/DATA1" PASSED
Consistency check of device group for "/dev/oracleasm/disks/DATA2" PASSED
Checking consistency of device permissions across all nodes...
Consistency check of device permissions for "/dev/oracleasm/disks/DATA3" PASSED
Consistency check of device permissions for "/dev/oracleasm/disks/DATA1" PASSED
Consistency check of device permissions for "/dev/oracleasm/disks/DATA2" PASSED
Checking consistency of device size across all nodes...
Consistency check of device size for "/dev/oracleasm/disks/DATA3" PASSED
Consistency check of device size for "/dev/oracleasm/disks/DATA1" PASSED
Consistency check of device size for "/dev/oracleasm/disks/DATA2" PASSED
UDev attributes check for ASM Disks started...
ERROR: 
PRVF-9802 : Attempt to get udev info from node "grac2" failed
ERROR: 
PRVF-9802 : Attempt to get udev info from node "grac1" failed
UDev attributes check failed for ASM Disks 
Devices check for ASM passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: grac2,grac1
File "/etc/resolv.conf" is not consistent across nodes
Time zone consistency check passed
Pre-check for cluster services setup was unsuccessful on all the nodes. 
Ignore PRVF-9802 and PRVF-5636. For details check the following link.

 

Install Clusterware Software
As user root 
# xhost +
    access control disabled, clients can connect from any host
As user grid
$  xclock      ( Testing X connection )
$ cd /KITS/Oracle/11.2.0.3/Linux_64/grid   ( your grid staging area )
$ ./runInstaller  
--> Important : Select Installation type : Advanced Installation
Cluster name   GRACE2  
Scan name:     GRACE2-scan.grid.example.com
Scan port:     1521
Configure GNS
GNS sub domain:  grid.example.com
GNS VIP address: 192.168.1.55
   ( This address shouldn't be in use:   # ping 192.168.1.55 should fail ) 
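That "ping should fail" pre-check can be scripted; a sketch (check_vip_free is a made-up helper, and the address must NOT answer pings for it to be usable as the GNS VIP):

```shell
# Succeed only if the candidate GNS VIP is not answering pings.
check_vip_free() {
  if ping -c 2 -W 1 "$1" >/dev/null 2>&1; then
    echo "$1 is already in use -- pick another GNS VIP"
    return 1
  fi
  echo "$1 looks free"
}

# e.g.: check_vip_free 192.168.1.55
```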
  Hostname:  grac1.example.com     Virtual hostname: AUTO
  Hostname:  grac2.example.com     Virtual hostname: AUTO 
Test and configure SSH connectivity 
Configure ASM disk string: /dev/oracleasm/disks/*
ASM password: sys 
Don't use IPMI
Don't change groups
ORACLE_BASE: /u01/app/grid
Software Location : /u01/app/11203/grid
--> Check OUI Prerequisites Check 
  -> Ignore the well-known PRVF-5636, PRVF-9802 errors/warnings ( see the former cluvfy reports ) 
Install software and run the related root.sh scripts

Run on grac1:  /u01/app/11203/grid/root.sh
Performing root user operation for Oracle 11g 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11203/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11203/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac1'
CRS-2676: Start of 'ora.mdnsd' on 'grac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac1'
CRS-2676: Start of 'ora.gpnpd' on 'grac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'grac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'grac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac1'
CRS-2676: Start of 'ora.diskmon' on 'grac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'grac1' succeeded
ASM created and started successfully.
Disk Group DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 3ee007b399cc4f59bfa0fc80ff3fa9ff.
Successful addition of voting disk 7a73147a81dc4f71bfc8757343aee181.
Successful addition of voting disk 25fcfbdb854a4f49bf0addd0fa32d0a2.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   3ee007b399cc4f59bfa0fc80ff3fa9ff (/dev/oracleasm/disks/DATA1) [DATA]
 2. ONLINE   7a73147a81dc4f71bfc8757343aee181 (/dev/oracleasm/disks/DATA2) [DATA]
 3. ONLINE   25fcfbdb854a4f49bf0addd0fa32d0a2 (/dev/oracleasm/disks/DATA3) [DATA]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'grac1'
CRS-2676: Start of 'ora.asm' on 'grac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'grac1'
CRS-2676: Start of 'ora.DATA.dg' on 'grac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run on grac2:  /u01/app/11203/grid/root.sh
# /u01/app/11203/grid/root.sh
Performing root user operation for Oracle 11g 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11203/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11203/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node grac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run cluvfy and crsctl to verify the Oracle Grid installation
$ ./bin/cluvfy stage -post crsinst -n grac1,grac2 -verbose
Performing post-checks for cluster services setup 
Checking node reachability...
Check: Node reachability from node "grac1"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  grac2                                 yes                     
  grac1                                 yes                     
Result: Node reachability check passed from node "grac1"
Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac2                                 passed                  
  grac1                                 passed                  
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac2                                 passed                  
  grac1                                 passed                  
Verification of the hosts config file successful
Interface information for node "grac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.62    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:1F:2E:33 1500  
 eth0   192.168.1.112   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:1F:2E:33 1500  
 eth0   192.168.1.108   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:1F:2E:33 1500  
 eth1   192.168.2.102   192.168.2.0     0.0.0.0         192.168.1.1     08:00:27:8E:6D:24 1500  
 eth1   169.254.86.205  169.254.0.0     0.0.0.0         192.168.1.1     08:00:27:8E:6D:24 1500  
Interface information for node "grac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.61    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:6E:17:DB 1500  
 eth0   192.168.1.55    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:6E:17:DB 1500  
 eth0   192.168.1.110   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:6E:17:DB 1500  
 eth0   192.168.1.109   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:6E:17:DB 1500  
 eth0   192.168.1.107   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:6E:17:DB 1500  
 eth1   192.168.2.101   192.168.2.0     0.0.0.0         192.168.1.1     08:00:27:F5:31:22 1500  
 eth1   169.254.168.215 169.254.0.0     0.0.0.0         192.168.1.1     08:00:27:F5:31:22 1500  
Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  grac2[192.168.1.62]             grac2[192.168.1.112]            yes             
  grac2[192.168.1.62]             grac2[192.168.1.108]            yes             
  grac2[192.168.1.62]             grac1[192.168.1.61]             yes             
  grac2[192.168.1.62]             grac1[192.168.1.55]             yes             
  grac2[192.168.1.62]             grac1[192.168.1.110]            yes             
  grac2[192.168.1.62]             grac1[192.168.1.109]            yes             
  grac2[192.168.1.62]             grac1[192.168.1.107]            yes             
  grac2[192.168.1.112]            grac2[192.168.1.108]            yes             
  grac2[192.168.1.112]            grac1[192.168.1.61]             yes             
  grac2[192.168.1.112]            grac1[192.168.1.55]             yes             
  grac2[192.168.1.112]            grac1[192.168.1.110]            yes             
  grac2[192.168.1.112]            grac1[192.168.1.109]            yes             
  grac2[192.168.1.112]            grac1[192.168.1.107]            yes             
  grac2[192.168.1.108]            grac1[192.168.1.61]             yes             
  grac2[192.168.1.108]            grac1[192.168.1.55]             yes             
  grac2[192.168.1.108]            grac1[192.168.1.110]            yes             
  grac2[192.168.1.108]            grac1[192.168.1.109]            yes             
  grac2[192.168.1.108]            grac1[192.168.1.107]            yes             
  grac1[192.168.1.61]             grac1[192.168.1.55]             yes             
  grac1[192.168.1.61]             grac1[192.168.1.110]            yes             
  grac1[192.168.1.61]             grac1[192.168.1.109]            yes             
  grac1[192.168.1.61]             grac1[192.168.1.107]            yes             
  grac1[192.168.1.55]             grac1[192.168.1.110]            yes             
  grac1[192.168.1.55]             grac1[192.168.1.109]            yes             
  grac1[192.168.1.55]             grac1[192.168.1.107]            yes             
  grac1[192.168.1.110]            grac1[192.168.1.109]            yes             
  grac1[192.168.1.110]            grac1[192.168.1.107]            yes             
  grac1[192.168.1.109]            grac1[192.168.1.107]            yes             
Result: Node connectivity passed for interface "eth0"
Check: TCP connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  grac1:192.168.1.61              grac2:192.168.1.62              passed          
  grac1:192.168.1.61              grac2:192.168.1.112             passed          
  grac1:192.168.1.61              grac2:192.168.1.108             passed          
  grac1:192.168.1.61              grac1:192.168.1.55              passed          
  grac1:192.168.1.61              grac1:192.168.1.110             passed          
  grac1:192.168.1.61              grac1:192.168.1.109             passed          
  grac1:192.168.1.61              grac1:192.168.1.107             passed          
Result: TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  grac2[192.168.2.102]            grac1[192.168.2.101]            yes             
Result: Node connectivity passed for interface "eth1"
Check: TCP connectivity of subnet "192.168.2.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  grac1:192.168.2.101             grac2:192.168.2.102             passed          
Result: TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Check: Time zone consistency 
Result: Time zone consistency check passed
Checking Oracle Cluster Voting Disk configuration...
ASM Running check passed. ASM is running on all specified nodes
Oracle Cluster Voting Disk configuration check passed
Checking Cluster manager integrity... 
Checking CSS daemon...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac2                                 running                 
  grac1                                 running                 
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
UDev attributes check for OCR locations started...
Result: UDev attributes check passed for OCR locations 
UDev attributes check for Voting Disk locations started...
Result: UDev attributes check passed for Voting Disk locations 
Check default user file creation mask
  Node Name     Available                 Required                  Comment   
  ------------  ------------------------  ------------------------  ----------
  grac2         22                        0022                      passed    
  grac1         22                        0022                      passed    
Result: Default user file creation mask check passed
Checking cluster integrity...
  Node Name                           
  ------------------------------------
  grac1                               
  grac2                               
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
ASM Running check passed. ASM is running on all specified nodes
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+DATA" available on all the nodes
NOTE: 
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "grac2"
The Oracle Clusterware is healthy on node "grac1"
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  grac2         yes                       yes                       passed    
  grac1         yes                       yes                       passed    
VIP node application check passed
Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  grac2         yes                       yes                       passed    
  grac1         yes                       yes                       passed    
NETWORK node application check passed
Checking existence of GSD node application (optional)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  grac2         no                        no                        exists    
  grac1         no                        no                        exists    
GSD node application is offline on nodes "grac2,grac1"
Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  grac2         no                        yes                       passed    
  grac1         no                        yes                       passed    
ONS node application check passed
Checking Single Client Access Name (SCAN)...
  SCAN Name         Node          Running?      ListenerName  Port          Running?    
  ----------------  ------------  ------------  ------------  ------------  ------------
  GRACE2-scan.grid.example.com  grac2         true          LISTENER_SCAN1  1521          true        
  GRACE2-scan.grid.example.com  grac1         true          LISTENER_SCAN2  1521          true        
  GRACE2-scan.grid.example.com  grac1         true          LISTENER_SCAN3  1521          true        
Checking TCP connectivity to SCAN Listeners...
  Node          ListenerName              TCP connectivity?       
  ------------  ------------------------  ------------------------
  grac1         LISTENER_SCAN1            yes                     
  grac1         LISTENER_SCAN2            yes                     
  grac1         LISTENER_SCAN3            yes                     
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "GRACE2-scan.grid.example.com"...
  SCAN Name     IP Address                Status                    Comment   
  ------------  ------------------------  ------------------------  ----------
  GRACE2-scan.grid.example.com  192.168.1.110             passed                              
  GRACE2-scan.grid.example.com  192.168.1.109             passed                              
  GRACE2-scan.grid.example.com  192.168.1.108             passed                              
Verification of SCAN VIP and Listener setup passed
Checking OLR integrity...
Checking OLR config file...
OLR config file check successful
Checking OLR file attributes...
OLR file check successful
WARNING: 
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
OLR integrity check passed
Checking GNS integrity...
Checking if the GNS subdomain name is valid...
The GNS subdomain name "grid.example.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.1.0" match with the GNS VIP "192.168.1.0"
Checking if the GNS VIP is a valid address...
GNS VIP "192.168.1.55" resolves to a valid IP address
Checking the status of GNS VIP...
Checking if FDQN names for domain "grid.example.com" are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
Checking status of GNS resource...
  Node          Running?                  Enabled?                
  ------------  ------------------------  ------------------------
  grac2         no                        yes                     
  grac1         yes                       yes                     
GNS resource configuration check passed
Checking status of GNS VIP resource...
  Node          Running?                  Enabled?                
  ------------  ------------------------  ------------------------
  grac2         no                        yes                     
  grac1         yes                       yes                     
GNS VIP resource configuration check passed.
GNS integrity check passed
Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  grac2         passed                    does not exist          
  grac1         passed                    does not exist          
Result: User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status                  
  ------------------------------------  ------------------------
  grac2                                 passed                  
  grac1                                 passed                  
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
  Node Name                             State                   
  ------------------------------------  ------------------------
  grac2                                 Observer                
  grac1                                 Observer                
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
  Node Name                             Running?                
  ------------------------------------  ------------------------
  grac2                                 yes                     
  grac1                                 yes                     
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
  Node Name                             Slewing Option Set?     
  ------------------------------------  ------------------------
  grac2                                 yes                     
  grac1                                 yes                     
Result: 
NTP daemon slewing option check passed
Checking NTP daemon's boot time configuration, in file "/etc/sysconfig/ntpd", for slewing option "-x"
Check: NTP daemon's boot time configuration
  Node Name                             Slewing Option Set?     
  ------------------------------------  ------------------------
  grac2                                 yes                     
  grac1                                 yes                     
Result: 
NTP daemon's boot time configuration check for slewing option passed
Checking whether NTP daemon or service is using UDP port 123 on all nodes
Check for NTP daemon or service using UDP port 123
  Node Name                             Port Open?              
  ------------------------------------  ------------------------
  grac2                                 yes                     
  grac1                                 yes                     
NTP common Time Server Check started...
NTP Time Server ".LOCL." is common to all nodes on which the NTP daemon is running
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Checking on nodes "[grac2, grac1]"... 
Check: Clock time offset from NTP Time Server
Time Server: .LOCL. 
Time Offset Limit: 1000.0 msecs
  Node Name     Time Offset               Status                  
  ------------  ------------------------  ------------------------
  grac2         0.0                       passed                  
  grac1         0.0                       passed                  
Time Server ".LOCL." has time offsets that are within permissible limits for nodes "[grac2, grac1]". 
Clock time offset check passed
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Post-check for cluster services setup was successful. 
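cluvfy verified above that ntpd runs with the slewing option "-x" both on the command line and in "/etc/sysconfig/ntpd". Between cluvfy runs, the boot-time file can be re-checked standalone; this is a minimal sketch, assuming the usual OEL 6 `OPTIONS=` line format:

```shell
# check_slew FILE - report whether an ntpd sysconfig file sets the "-x"
# slewing option (cluvfy expects it in /etc/sysconfig/ntpd on each node)
check_slew() {
    if grep -q '^OPTIONS=.*-x' "$1" 2>/dev/null; then
        echo "slewing option set"
    else
        echo "slewing option missing"
    fi
}
```

For example, run `check_slew /etc/sysconfig/ntpd` on every node; without "-x", the CTSS/NTP check above would fail.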

Checking CRS status after installation
$ my_crs_stat
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.DATA.dg                    ONLINE     ONLINE          grac1         
ora.DATA.dg                    ONLINE     ONLINE          grac2         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac1         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac2         
ora.asm                        ONLINE     ONLINE          grac1        Started 
ora.asm                        ONLINE     ONLINE          grac2        Started 
ora.gsd                        OFFLINE    OFFLINE         grac1         
ora.gsd                        OFFLINE    OFFLINE         grac2         
ora.net1.network               ONLINE     ONLINE          grac1         
ora.net1.network               ONLINE     ONLINE          grac2         
ora.ons                        ONLINE     ONLINE          grac1         
ora.ons                        ONLINE     ONLINE          grac2         
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac2         
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac1         
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac1         
ora.cvu                        ONLINE     ONLINE          grac1         
ora.gns                        ONLINE     ONLINE          grac1         
ora.gns.vip                    ONLINE     ONLINE          grac1         
ora.grac1.vip                  ONLINE     ONLINE          grac1         
ora.grac2.vip                  ONLINE     ONLINE          grac2         
ora.oc4j                       ONLINE     ONLINE          grac1         
ora.scan1.vip                  ONLINE     ONLINE          grac2         
ora.scan2.vip                  ONLINE     ONLINE          grac1         
ora.scan3.vip                  ONLINE     ONLINE          grac1                              

Grid post installation - ologgerd process consumes high CPU time
  It has been noticed that after a while the ologgerd process can consume excessive CPU resources. 
  ologgerd is part of the Oracle Cluster Health Monitor and is used by Oracle Support to troubleshoot RAC problems. 
  You can check this by starting top (sometimes we see up to 60% WA states):
  top - 15:02:38 up 15 min,  6 users,  load average: 3.70, 2.54, 1.78
    Tasks: 215 total,   2 running, 213 sleeping,   0 stopped,   0 zombie
    Cpu(s):  3.6%us,  8.9%sy,  0.0%ni, 55.4%id, 31.4%wa,  0.0%hi,  0.8%si,  0.0%st
    Mem:   3234376k total,  2512568k used,   721808k free,   108508k buffers
    Swap:  3227644k total,        0k used,  3227644k free,  1221196k cached
      PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
     5602 root      RT   0  501m 145m  60m S 48.1  4.6   0:31.29 ologgerd    
If the ologgerd process is consuming a lot of CPU, you can stop it by executing:
# crsctl stop resource ora.crf -init
  Now top looks good: idle CPU time increases from 55% to 95%.
  hrac1: 
    top - 15:07:56 up 20 min,  6 users,  load average: 2.57, 3.33, 2.41
    Tasks: 212 total,   1 running, 211 sleeping,   0 stopped,   0 zombie
    Cpu(s):  1.3%us,  4.2%sy,  0.0%ni, 94.3%id,  0.1%wa,  0.0%hi,  0.2%si,  0.0%st
    Mem:   3234376k total,  2339268k used,   895108k free,   132604k buffers
    Swap:  3227644k total,        0k used,  3227644k free,  1126964k cached
  hrac2:   
    top - 15:48:37 up 33 min,  3 users,  load average: 2.63, 2.40, 2.13
    Tasks: 204 total,   1 running, 203 sleeping,   0 stopped,   0 zombie
    Cpu(s):  0.9%us,  3.3%sy,  0.0%ni, 95.6%id,  0.1%wa,  0.0%hi,  0.2%si,  0.0%st
    Mem:   2641484k total,  1975444k used,   666040k free,   158212k buffers
    Swap:  3227644k total,        0k used,  3227644k free,   993328k cached
 To disable ologgerd permanently, execute:
 # crsctl delete resource ora.crf -init
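As an alternative to watching top interactively, the busiest processes can be listed non-interactively with ps; a small sketch using Linux procps options:

```shell
# top_cpu N - print the N processes with the highest CPU usage,
# handy for spotting ologgerd hogging a core
top_cpu() {
    ps -eo pcpu,pid,comm --sort=-pcpu | head -n "$(( $1 + 1 ))"
}
```

For example, `top_cpu 5`; if ologgerd tops the list, stop the ora.crf resource as shown above.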

 

Fixing a failed GRID installation
Run these commands on all cluster nodes. If the clusterware stack was already partially configured, first deconfigure it as root with: # $GRID_HOME/crs/install/rootcrs.pl -deconfig -force
[grid@grac31 ~]$ rm -rf /u01/app/11203/grid/*
[grid@grac31 ~]$ rm -rf /u01/app/oraInventory/*

 

Install RDBMS and  create RAC database
Log in as the oracle user and verify the account
$ id
  uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),501(vboxsf),506(asmdba),54322(dba)
$ env | grep ORA 
  ORACLE_BASE=/u01/app/oracle
  ORACLE_SID=RACE2
  ORACLE_HOME=/u01/app/oracle/product/11203/racdb
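The group list above matters: oracle needs oinstall as primary group plus dba and asmdba for the installer to proceed. A quick membership probe can be scripted; a sketch:

```shell
# in_group USER GROUP - succeed when USER is a member of GROUP
in_group() {
    id -Gn "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

# e.g. on a RAC node:
# for g in oinstall dba asmdba; do
#     in_group oracle "$g" || echo "oracle is missing group $g"
# done
```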

Verify the system by running cluvfy with: stage -pre dbinst
$ ./bin/cluvfy stage -pre dbinst -n grac1,grac2
Performing pre-checks for database installation 
Checking node reachability...
Node reachability check passed from node "grac1"
Checking user equivalence...
User equivalence check passed for user "oracle"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "grac2:/tmp"
Free disk space check passed for "grac1:/tmp"
Check for multiple users with UID value 54321 passed 
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
...
Check for multiple users with UID value 0 passed 
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Default user file creation mask check passed
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Checking Cluster manager integrity... 
Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of GSD node application (optional)
GSD node application is offline on nodes "grac2,grac1"
Checking existence of ONS node application (optional)
ONS node application check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: grac2,grac1
File "/etc/resolv.conf" is not consistent across nodes
Time zone consistency check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "GRACE2-scan.grid.example.com"...
Verification of SCAN VIP and Listener setup passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
ASM and CRS versions are compatible
Database Clusterware version compatibility passed
Pre-check for database installation was unsuccessful on all the nodes. 
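The only failed check above is PRVF-5636: the DNS server takes longer than 15000 ms to answer for an unreachable node, and cluvfy also flags "/etc/resolv.conf" as inconsistent. Two of the static resolv.conf constraints cluvfy enforces can be re-checked locally; this sketch covers only those static checks, not the DNS response time itself:

```shell
# check_resolv FILE - mimic two of cluvfy's static resolv.conf checks:
# exactly one "search" entry and no "domain"/"search" mix
check_resolv() {
    s=$(grep -c '^search ' "$1")
    d=$(grep -c '^domain ' "$1")
    if [ "$s" -eq 1 ] && [ "$d" -eq 0 ]; then
        echo "resolv.conf looks consistent"
    else
        echo "check resolv.conf: $s search / $d domain entries"
    fi
}
```

Run it against /etc/resolv.conf on every node; for the timeout itself, the BIND server on the DNS VM must answer (even with NXDOMAIN) within the 15 s limit.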

Run cluvfy with:  stage -pre dbcfg
$ ./bin/cluvfy stage -pre dbcfg -n grac1,grac2 -d $ORACLE_HOME
Performing pre-checks for database configuration 
ERROR: 
Unable to determine OSDBA group from Oracle Home "/u01/app/oracle/product/11203/racdb"
Checking node reachability...
Node reachability check passed from node "grac1"
Checking user equivalence...
User equivalence check passed for user "oracle"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
ERROR: 
PRVF-7617 : Node connectivity between "grac1 : 192.168.1.61" and "grac2 : 192.168.1.108" failed
TCP connectivity check failed for subnet "192.168.1.0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check failed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "grac2:/u01/app/oracle/product/11203/racdb,grac2:/tmp"
Free disk space check passed for "grac1:/u01/app/oracle/product/11203/racdb,grac1:/tmp"
Check for multiple users with UID value 54321 passed 
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
...
Package existence check passed for "libaio-devel(x86_64)"
Check for multiple users with UID value 0 passed 
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of GSD node application (optional)
GSD node application is offline on nodes "grac2,grac1"
Checking existence of ONS node application (optional)
ONS node application check passed
Time zone consistency check passed
Pre-check for database configuration was unsuccessful on all the nodes. 

Ignore ERROR: 
   Unable to determine OSDBA group from Oracle Home "/u01/app/oracle/product/11203/racdb"
   -> Oracle software isn't installed yet, so cluvfy can't find $ORACLE_HOME/bin/osdbagrp
    stat("/u01/app/oracle/product/11203/racdb/bin/osdbagrp", 0x7fff2fd6e530) = -1 ENOENT (No such file or directory) 
   Run cluvfy stage -pre dbcfg only after you have installed the software and before you create the database.
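The OSDBA error can be reproduced directly: as the strace output shows, cluvfy simply runs $ORACLE_HOME/bin/osdbagrp, which does not exist before the RDBMS install. A small probe, as a sketch:

```shell
# osdba_group ORACLE_HOME - print the OSDBA group the way cluvfy derives it,
# or explain why it cannot (osdbagrp ships with the RDBMS software)
osdba_group() {
    if [ -x "$1/bin/osdbagrp" ]; then
        "$1/bin/osdbagrp"
    else
        echo "osdbagrp not found under $1 - RDBMS software not installed yet"
    fi
}
```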

Run Installer
As user Root 
  # xhost +
    access control disabled, clients can connect from any host
As user Oracle
  $ xclock      ( Testing X connection )
  $ cd /KITS/Oracle/11.2.0.3/Linux_64/database  ( rdbms staging area ) 
  $ ./runInstaller ( select SERVER class )
     Node Name           : grac1,grac2  
     Storage type        : ASM
     Location            : DATA
     OSDBA group         : asmdba
     Global database name: GRACE2
On grac1 run:  /u01/app/oracle/product/11203/racdb/root.sh
On grac2 run:  /u01/app/oracle/product/11203/racdb/root.sh
Enterprise Manager Database Control URL - (RACE2) :   https://hrac1.de.oracle.com:1158/em

Verify RAC installation
$ my_crs_stat
NAME                           TARGET     STATE           SERVER       STATE_DETAILS   
-------------------------      ---------- ----------      ------------ ------------------
ora.DATA.dg                    ONLINE     ONLINE          grac1         
ora.DATA.dg                    ONLINE     ONLINE          grac2         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac1         
ora.LISTENER.lsnr              ONLINE     ONLINE          grac2         
ora.asm                        ONLINE     ONLINE          grac1        Started 
ora.asm                        ONLINE     ONLINE          grac2        Started 
ora.gsd                        OFFLINE    OFFLINE         grac1         
ora.gsd                        OFFLINE    OFFLINE         grac2         
ora.net1.network               ONLINE     ONLINE          grac1         
ora.net1.network               ONLINE     ONLINE          grac2         
ora.ons                        ONLINE     ONLINE          grac1         
ora.ons                        ONLINE     ONLINE          grac2         
ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE          grac2         
ora.LISTENER_SCAN2.lsnr        ONLINE     ONLINE          grac1         
ora.LISTENER_SCAN3.lsnr        ONLINE     ONLINE          grac1         
ora.cvu                        ONLINE     ONLINE          grac1         
ora.gns                        ONLINE     ONLINE          grac1         
ora.gns.vip                    ONLINE     ONLINE          grac1         
ora.grac1.vip                  ONLINE     ONLINE          grac1         
ora.grac2.vip                  ONLINE     ONLINE          grac2         
ora.grace2.db                  ONLINE     ONLINE          grac1        Open 
ora.grace2.db                  ONLINE     ONLINE          grac2        Open 
ora.oc4j                       ONLINE     ONLINE          grac1         
ora.scan1.vip                  ONLINE     ONLINE          grac2         
ora.scan2.vip                  ONLINE     ONLINE          grac1         
ora.scan3.vip                  ONLINE     ONLINE          grac1     

$ srvctl  status database -d GRACE2
Instance GRACE21 is running on node grac1
Instance GRACE22 is running on node grac2

$GRID_HOME/bin/olsnodes -n
grac1    1
grac2    2
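As a final sanity check, the olsnodes output can be counted to confirm that both nodes registered with the cluster; a sketch that parses "olsnodes -n"-style output (one "name number" pair per line):

```shell
# node_count FILE - count cluster members in "olsnodes -n" output
node_count() {
    awk 'NF >= 2 { n++ } END { print n + 0 }' "$1"
}

# e.g.: $GRID_HOME/bin/olsnodes -n > /tmp/nodes.txt; node_count /tmp/nodes.txt
```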

 
