Sunday 22 May 2016

Remove a Node from an Existing Oracle RAC 11g R2 Cluster




Please see the link below if you want to remove a node from an Oracle RAC cluster.


Remove a Node from an Existing Oracle RAC 11g R2 Cluster


Kindly follow all the steps one by one.

PRCR-1001 : Resource ora.asm does not exist in Oracle 11gR2 RAC


I was trying to stop ASM on node 3 (vsnlmmdb06) but got the error below.

[root@vsnlmmdb06 ~]#  srvctl stop asm -n vsnlmmdb06 -f

PRCR-1001 : Resource ora.asm does not exist

I then checked crsctl status res -t: it shows the list of cluster resources, but the ora.asm resource does not exist.

All the cluster resources are visible except ora.asm, yet the same resource is visible when I executed the command below.

[oracle@vsnlmmdb06 ~]$ crsctl status resource -t -init

--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       vsnlmmdb05               Started             

The -init flag lists the resources managed by the lower (OHASD) stack, which keeps its own ora.asm entry, so the resource missing here appears to be the CRSD-managed ora.asm. After that I tried to add ASM back using the srvctl command on both nodes.


[oracle@vsnlmmdb05 ~]$ srvctl add asm


After it was added successfully, the ora.asm resource is visible, but its status is OFFLINE.

[oracle@vsnlmmdb05 ~]$ crsctl status res -t

--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.asm
               OFFLINE OFFLINE      vsnlmmdb04                                   
               OFFLINE OFFLINE      vsnlmmdb05                   


[oracle@vsnlmmdb05 ~]$ crsctl status resource ora.asm

NAME=ora.asm
TYPE=ora.asm.type
TARGET=OFFLINE, OFFLINE
STATE=OFFLINE, OFFLINE
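
The registration can also be verified with srvctl before starting the resource (a hedged check: the -a flag shows the registered ASM configuration in 11.2, and the exact output depends on the environment).

[oracle@vsnlmmdb05 ~]$ srvctl config asm -a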

I started the ora.asm resource, and it started successfully on both nodes.

[oracle@vsnlmmdb05 ~]$ crsctl start resource ora.asm

CRS-2672: Attempting to start 'ora.asm' on 'vsnlmmdb05'
CRS-2672: Attempting to start 'ora.asm' on 'vsnlmmdb04'
CRS-2676: Start of 'ora.asm' on 'vsnlmmdb04' succeeded
CRS-2676: Start of 'ora.asm' on 'vsnlmmdb05' succeeded

[oracle@vsnlmmdb05 ~]$ crsctl status resource ora.asm

NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE              , ONLINE
STATE=ONLINE on vsnlmmdb04, ONLINE on vsnlmmdb05

Finally, I checked it again.

[oracle@vsnlmmdb05 ~]$ crsctl status res -t

--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.asm
               ONLINE  ONLINE       vsnlmmdb04                                   
               ONLINE  ONLINE       vsnlmmdb05    
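
As a final sanity check, srvctl itself should now also report ASM as running on both nodes (a minimal example; exact output varies by version):

[oracle@vsnlmmdb05 ~]$ srvctl status asm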


Kindly check and update.

Tuesday 3 May 2016

Grid Infrastructure Disk Issues: CHM DB file crfclust.bdb


We had an issue on our live production servers where the Grid Infrastructure disk usage kept increasing.
I came across an issue in 11gR2 RAC where the GI file system (GRID_HOME) was mostly consumed by a single file called crfclust.bdb.
This single file alone was about 132 GB in size.
crfclust.bdb is a Cluster Health Monitor (CHM) file that collects cluster and OS statistics by means of the Cluster Health Monitor service, ora.crf.
-rw-r----- 1 root root   1814122496 May  3 16:32 crfhosts.bdb
-rw-r----- 1 root root 141540589568 May  3 16:32 crfclust.bdb
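
The CHM repository location and size can also be queried with oclumon (a hedged example; these manage subcommands exist in 11.2 CHM, though the output format varies):

[root@vsnlmmdb06 ~]# oclumon manage -get reppath
[root@vsnlmmdb06 ~]# oclumon manage -get repsize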
The steps below were followed to fix the issue by setting the CHM repository size (BDBSIZE) to an adequately lower value:

1. Issue "$GRID_HOME/bin/crsctl stop res ora.crf -init" on all the nodes of the cluster.
2. Locate the config file $GRID_HOME/crf/admin/crf<hostname>.ora.
3. Manually edit the crf<hostname>.ora file on every node of the cluster and
   change the BDBSIZE tag entry: remove the value (set it to blank) or
   set it to a desired value, e.g. 61511. Do not delete the BDBSIZE tag
   itself (see the edit sketch after the file listing below).

[root@vsnlmmdb06 admin]# less crfvsnlmmdb06.ora 

BDBLOC=default
BDBSIZE=67054080
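
An illustrative way to apply step 3 (a sketch only: the sed pattern and the 61511 value follow the note above; back up the file first):

[root@vsnlmmdb06 admin]# cp crfvsnlmmdb06.ora crfvsnlmmdb06.ora.bak
[root@vsnlmmdb06 admin]# sed -i 's/^BDBSIZE=.*/BDBSIZE=61511/' crfvsnlmmdb06.ora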


4. Restart the ora.crf daemon on every node, as shown below.

[root@vsnlmmdb06 admin]# crsctl stop res ora.crf -init

CRS-2673: Attempting to stop 'ora.crf' on 'vsnlmmdb06'
CRS-2677: Stop of 'ora.crf' on 'vsnlmmdb06' succeeded
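
With ora.crf stopped, the BDB files can be removed from the CHM repository directory; they are recreated when ora.crf starts again. The path below is an assumption based on the prompt (with BDBLOC=default the repository sits under $GRID_HOME/crf/db/<hostname>).

[root@vsnlmmdb06 admin]# cd $GRID_HOME/crf/db/vsnlmmdb06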


[root@vsnlmmdb06 vsnlmmdb06]# rm *.bdb

rm: remove regular file `crfalert.bdb'? yes
rm: remove regular file `crfclust.bdb'? yes
rm: remove regular file `crfconn.bdb'? yes
rm: remove regular file `crfcpu.bdb'? yes
rm: remove regular file `crfhosts.bdb'? yes
rm: remove regular file `crfloclts.bdb'? yes
rm: remove regular file `crfts.bdb'? yes
rm: remove regular file `repdhosts.bdb'? yes


[root@vsnlmmdb06 vsnlmmdb06]# crsctl start res ora.crf -init

CRS-2672: Attempting to start 'ora.crf' on 'vsnlmmdb06'
CRS-2676: Start of 'ora.crf' on 'vsnlmmdb06' succeeded


After the resource has started, check the location again; the space used will be reduced.
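
A quick size check confirms the reclaimed space (du usage here is generic; the path is the same assumed repository directory as above):

[root@vsnlmmdb06 vsnlmmdb06]# du -sh $GRID_HOME/crf/db/vsnlmmdb06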