Clusterware status before node removal

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1                                        
               ONLINE  ONLINE       rac2                                        
ora.FRA.dg
               ONLINE  ONLINE       rac1                                        
               ONLINE  ONLINE       rac2                                        
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                        
               ONLINE  ONLINE       rac2                                        
ora.OCRVDISK.dg
               ONLINE  ONLINE       rac1                                        
               ONLINE  ONLINE       rac2                                        
ora.asm
               ONLINE  ONLINE       rac1                     Started            
               ONLINE  ONLINE       rac2                     Started            
ora.gsd
               OFFLINE OFFLINE      rac1                                        
               OFFLINE OFFLINE      rac2                                        
ora.net1.network
               ONLINE  ONLINE       rac1                                        
               ONLINE  ONLINE       rac2                                        
ora.ons
               ONLINE  ONLINE       rac1                                        
               ONLINE  ONLINE       rac2                                        
ora.registry.acfs
               ONLINE  ONLINE       rac1                                        
               ONLINE  ONLINE       rac2                                   
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                                        
ora.cvu
      1        ONLINE  ONLINE       rac1                                        
ora.oc4j
      1        ONLINE  ONLINE       rac1                                        
ora.orcl.db
      1        ONLINE  ONLINE       rac1                     Open               
      2        ONLINE  ONLINE       rac2                     Open               
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                        
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                        
ora.scan1.vip
      1        ONLINE  ONLINE       rac1 

1. Delete the node's database instance
Run the following on a node that will be retained.
(1) Remove the EM configuration
$emctl stop dbconsole
$emca -deconfig dbcontrol db
(2) Remove the service
$srvctl status service -d orcl
$srvctl remove service -d orcl -s orclervice
$srvctl add service -d orcl -s orclrac -r rac1,rac2 -P basic -y automatic -e select -z 5 -w 180
SQL> select inst_id,service_id,name from gv$services;
SQL> begin
  2     dbms_service.stop_service('orclrac');
  3     dbms_service.delete_service('orclrac');
  4  end;
  5  /
(3) Delete the instance with DBCA
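The DBCA step can also be run silently. A minimal sketch, assuming the database is orcl and the instance to drop is orcl2 on node rac2 (all names and the password are illustrative assumptions):

```shell
# Run as the oracle user on a retained node; names and password are assumptions.
dbca -silent -deleteInstance \
     -nodeList rac2 \
     -gdbName orcl \
     -instanceName orcl2 \
     -sysDBAUserName sys \
     -sysDBAPassword oracle
```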

2. Uninstall the node's Database software

(1) Disable and stop the listener on the node being deleted:
$srvctl disable listener -l <listener_name> -n <name_of_node_to_delete>
$srvctl stop listener -l <listener_name> -n <name_of_node_to_delete>
(2) Update the inventory on the node being deleted
On the node being deleted, run the following from $ORACLE_HOME/oui/bin:
$./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={name_of_node_to_delete}" -local
(3) Remove the RAC Database home
On the node being deleted, as the oracle user, run the following from $ORACLE_HOME/deinstall:
$./deinstall -local
(4) Update the inventory on the remaining nodes
On any remaining cluster node, as the oracle user, run the following from $ORACLE_HOME/oui/bin:
$./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={remaining_node_list}"
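Putting section 2 together with concrete values — purely illustrative, assuming rac2 is the node being removed, rac1 is retained, and the listener is named LISTENER:

```shell
# Step (1), on a retained node, as oracle:
srvctl disable listener -l LISTENER -n rac2
srvctl stop listener -l LISTENER -n rac2

# Step (2), on rac2 (the node being removed), as oracle:
cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME \
    "CLUSTER_NODES={rac2}" -local

# Step (3), still on rac2:
cd $ORACLE_HOME/deinstall
./deinstall -local

# Step (4), back on rac1, as oracle:
cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME \
    "CLUSTER_NODES={rac1}"
```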

3. Uninstall the node's Clusterware software

(1) Verify that the $GRID_HOME environment variable is set correctly on all nodes
(2) Check whether the node is pinned
$olsnodes -s -t
If the node is pinned, unpin it as root: #crsctl unpin css -n <node_to_be_deleted>
(3) As root on the node being deleted, run the following from $GRID_HOME/crs/install:
#./rootcrs.pl -deconfig -force
If deleting multiple nodes, run the script on each of them; if deleting all nodes, add the -lastnode option on the last node, which also clears the OCR and the voting disks:
#./rootcrs.pl -deconfig -force -lastnode
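For the multi-node case, the decision of where -lastnode goes can be sketched as a dry run that merely prints the command each node would receive (node names are made up; nothing is executed against the cluster):

```shell
# Print the rootcrs.pl invocation for each node being removed, appending
# -lastnode only on the final node (which also clears OCR/voting disks).
gen_rootcrs_cmds() {
  nodes="$1"
  last=$(echo "$nodes" | awk '{print $NF}')
  for n in $nodes; do
    opts="-deconfig -force"
    [ "$n" = "$last" ] && opts="$opts -lastnode"
    echo "$n: \$GRID_HOME/crs/install/rootcrs.pl $opts"
  done
}

gen_rootcrs_cmds "rac3 rac4"
```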
(4) On any remaining node, as root, run the following from $GRID_HOME/bin to delete the node from the cluster:
#./crsctl delete node -n <node_to_be_deleted>
(5) On the node being deleted, as the grid user, run the following from $GRID_HOME/oui/bin to update that node's local inventory:
$./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local
(6) On the node being deleted, as the grid user, run the following to uninstall the Clusterware software:
$GRID_HOME/deinstall/deinstall -local
(7) On any remaining node, as the grid user, run the following from $GRID_HOME/oui/bin to update the remaining nodes' inventory:
$./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent
(8) On any remaining node, as the grid user, run the following CVU command to verify that the node was deleted successfully:
$cluvfy stage -post nodedel -n node_list [-verbose]
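The remaining_nodes_list in step (7) is simply a comma-separated list of the surviving node names; a small helper sketch to build the CLUSTER_NODES argument (node names are assumptions):

```shell
# Build the CLUSTER_NODES={...} string from a space-separated node list,
# excluding the node that was removed.
remaining_nodes() {
  all="$1"; removed="$2"; out=""
  for n in $all; do
    [ "$n" = "$removed" ] && continue
    if [ -n "$out" ]; then out="$out,$n"; else out="$n"; fi
  done
  echo "CLUSTER_NODES={$out}"
}

remaining_nodes "rac1 rac2 rac3" "rac2"   # prints CLUSTER_NODES={rac1,rac3}
```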

Clusterware status after node removal:

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl status res -t

--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1                                        
ora.FRA.dg
               ONLINE  ONLINE       rac1                                        
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                        
ora.OCRVDISK.dg
               ONLINE  ONLINE       rac1                                        
ora.asm
               ONLINE  ONLINE       rac1                     Started            
ora.gsd
               OFFLINE OFFLINE      rac1                                        
ora.net1.network
               ONLINE  ONLINE       rac1                                        
ora.ons
               ONLINE  ONLINE       rac1                                        
ora.registry.acfs
               ONLINE  ONLINE       rac1                                     
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                                        
ora.cvu
      1        ONLINE  ONLINE       rac1                                        
ora.oc4j
      1        ONLINE  ONLINE       rac1                                        
ora.orcl.db
      1        ONLINE  ONLINE       rac1                     Open               
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                        
ora.scan1.vip
      1        ONLINE  ONLINE       rac1  

1. Prepare the new node server

(1) Install the operating system
(2) Configure the network
(3) Configure the storage
(4) Configure the system
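Step (4) typically includes setting up passwordless SSH (user equivalence) between the new node and every existing node for both the grid and oracle users, since addNode.sh depends on it. A hedged sketch, assuming the new node is rhel3 and the existing nodes are rhel1 and rhel2:

```shell
# Host names are assumptions; repeat this for both the grid and oracle users.
ssh-keygen -t rsa                  # accept defaults, empty passphrase
for h in rhel1 rhel2 rhel3; do
  ssh-copy-id "$h"                 # push our public key to each node
done
for h in rhel1 rhel2 rhel3; do
  ssh "$h" date                    # must return without a password prompt
done
```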

2. Install the Clusterware software on the new node

(1) Make sure $GRID_HOME is set to the correct path, then run the pre-add check:
$cluvfy stage -pre nodeadd -n rhel3 -verbose
(2) As the grid user on an existing cluster node, run the following from $GRID_HOME/oui/bin to install the Clusterware software on the new node:
$./addNode.sh -silent "CLUSTER_NEW_NODES={rhel3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rhel3-vip}"
(3) On the new node, run the $GRID_HOME/root.sh script as root
If root.sh fails, run one of the following to remove the configuration, then fix the cause and rerun root.sh:
#$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
#$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

3. Install the RAC Database software on the new node

$cluvfy stage -pre nodeadd -n rhel3 -verbose
(1) As the oracle user on an existing cluster node, run the following from $ORACLE_HOME/oui/bin to install the Database software on the new node:
$./addNode.sh -silent "CLUSTER_NEW_NODES={rhel3}"
(2) On the new node, run the $ORACLE_HOME/root.sh script as root
(3) Verify with CVU, running the check as both the grid and oracle users:
$cluvfy stage -post nodeadd -n rhel3 -verbose

4. Add a database instance on the new node

On an existing cluster node, as the oracle user, run DBCA interactively, or silently:
$dbca -silent -addInstance -nodeList "rhel3" -gdbName "ractest" -instanceName "ractest3" -sysDBAUserName "sys" -sysDBAPassword "oracle"


Problems encountered:
The pre-add check fails and installation cannot continue even though the prerequisites are actually met; for example, the /u01/11.2.0/grid directory exists, yet the precheck still fails on it. The workaround is to skip the pre-add node checks:
prod02:/u01/11.2.0/grid/oui/bin$export IGNORE_PREADDNODE_CHECKS=Y 
prod02:/u01/11.2.0/grid/oui/bin$./addNode.sh -ignoreSysPrereqs -force "CLUSTER_NEW_NODES={prod01}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={prod01-vip}" 


[root@spxwdb1 /]# /etc/init.d/oracleasm querydisk -d DATA    (check the physical path of the DATA disk)

[root@spxwdb1 /]# /etc/init.d/oracleasm querydisk -p DATA
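A few related ASMLib subcommands that are often useful alongside querydisk (these are standard oracleasm commands):

```shell
/etc/init.d/oracleasm listdisks    # list all labeled ASM disks
/etc/init.d/oracleasm scandisks    # rescan the system for ASM disks
/etc/init.d/oracleasm status       # check whether the ASMLib driver is loaded
```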