Oracle RAC: Installing and Uninstalling ACFS

Table of Contents

ACFS installation and uninstallation:

1. The steps to manually install the ACFS/ADVM module on RAC are as follows:

1. Verify whether the ACFS/ADVM module exists in memory:

2. Reinstall the ACFS/ADVM module as the root user (must be run on both nodes, otherwise the volume cannot be mounted):

3. Verify that the ACFS/ADVM module has been loaded into memory:

4. Create an ASM volume after successfully installing the ACFS/ADVM module:

5. Format the ASM volume into a file system:

① Use asmcmd volinfo -a to determine the volume path (single node execution)

② Format into an ACFS file system (single node execution)

③ Create mount points on all RAC nodes

6. Register the ACFS file system in the Oracle Registry (single node execution):

7. Mount the ACFS file system on the ASM volume (ADVM):

8. Register the ora.registry.acfs service into the cluster (single node execution):

2. The steps to uninstall/delete the ACFS/ADVM module on RAC are as follows:

1. Unmount the ACFS file system:

2. Unregister the ACFS file system in the Oracle Registry (single node execution):

3. Delete the ASM volume (single node execution):

4. Verify that the ACFS/ADVM module still exists in memory:

5. Stop the CRS or OHAS service:

6. Unload the ACFS/ADVM module from memory:

7. Verify again whether the ACFS/ADVM module exists in memory:

8. Uninstall the ACFS/ADVM module:

9. Start the CRS or OHAS service:

10. Check the cluster resource status:

11. Remove the ora.registry.acfs service (single node execution):

12. Check the cluster resource status again:


ACFS installation and uninstallation:

Before configuring ACFS, be sure to check whether the ACFS/ADVM module exists in memory.

Otherwise, the following error is raised when creating an ASM volume:

#Example:
[+ASM1][grid@ceshi1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Wed Mar 23 13:30:51 2022

Copyright (c) 1982, 2013, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup DATA add volume expdp_dir size 1G;
alter diskgroup DATA add volume expdp_dir size 1G
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15477: cannot communicate with the volume driver


SQL>

1. The steps to manually install the ACFS/ADVM module on RAC are as follows:

1. Verify whether the ACFS/ADVM module exists in memory:

In the following example the ACFS/ADVM modules (oracleoks, oracleadvm, oracleacfs) are absent; only the ASMLib driver (oracleasm) is loaded:

[root@ceshi1 ~]# lsmod | grep oracle
oracleasm 46100 1
#The output may also be empty

2. Reinstall the ACFS/ADVM module as the root user (must be run on both nodes, otherwise the volume cannot be mounted):

On CentOS, acfsroot may report that the OS version is not supported. In that case, patch the OS-detection script as follows:

[root@wwdb1 bin]# ./acfsroot install
ACFS-9459: ADVM/ACFS is not supported on this OS version: 'centos-release-7-9.2009.0.el7.centos.x86_64'

[root@wwdb1 bin]# cd /u01/app/11.2.0/grid/lib/
[root@wwdb1 lib]# cp -p osds_acfslib.pm osds_acfslib.pm.bak
[root@wwdb1 lib]# vi osds_acfslib.pm
Locate the following block:
if ((defined($release)) && # Redhat or OEL if defined
      (($release =~ /^redhat-release/) || # straight RH
       ($release =~ /^enterprise-release/) || # Oracle Enterprise Linux
       ($release =~ /^oraclelinux-release/))) # Oracle Linux

and change it to:
 if ((defined($release)) && # Redhat or OEL if defined
      (($release =~ /^redhat-release/) || # straight RH
       ($release =~ /^enterprise-release/) || # Oracle Enterprise Linux
       ($release =~ /^centos-release/) || # CentOS hack
       ($release =~ /^oraclelinux-release/))) # Oracle Linux
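
Instead of editing the file in vi, the same patch can be applied as a one-liner. This is only a sketch, assuming GNU sed and that the oraclelinux-release pattern occurs exactly once in osds_acfslib.pm (-i.bak keeps a backup copy; the indentation of the inserted line is cosmetic):

[root@wwdb1 lib]# sed -i.bak '/oraclelinux-release/i\       ($release =~ /^centos-release/) || # CentOS hack' osds_acfslib.pm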

Then install the ACFS module again:

#1. Enter the $GRID_HOME/bin directory
[root@ceshi1 ~]# cd $GRID_HOME/bin

#2. View the current path
[root@ceshi1 bin]# pwd
/u01/app/11.2.0/grid/bin

#3. View the acfsroot executable file
[root@ceshi1 bin]# ll acfsroot
-rwxr-xr-x 1 root oinstall 1567 Jul 7 2013 acfsroot

#4. Execute ./acfsroot install to install
[root@ceshi1 bin]# ./acfsroot install
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.

#5. Start acfs
[root@ceshi1 bin]# ./acfsload start
ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed

3. Verify that the ACFS/ADVM module has been loaded into memory:

[root@ceshi1 bin]# lsmod | grep oracle
oracleacfs 1981135 0
oracleadvm 233254 0
oracleoks 454412 2 oracleacfs,oracleadvm
#The modules now exist in memory and ASM volumes can be created.

4. Create an ASM volume after successfully installing the ACFS/ADVM module:

#1. Log in to the ASM instance as sysasm under the grid user
#su - grid
[+ASM1][grid@ceshi1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Wed Mar 23 13:54:49 2022

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

#View the status and size of diskgroup:
SQL> set linesize 200
SQL> col name for a30
SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;
NAME                           STATE         TOTAL_MB    FREE_MB
------------------------------ ----------- ---------- ----------
ARCH                           MOUNTED           2048       1823
DATA                           MOUNTED          10240       7148
OCR                            MOUNTED           3072       2146

#Check whether a volume with the same name exists before creating an asm volume
SQL> col VOLUME_NAME for a30
SQL> col VOLUME_DEVICE for a50
SQL> select volume_name, volume_device, size_mb, state from gv$asm_volume;

no rows selected

2. #Create the ASM volume (expdp_dir)
SQL> alter diskgroup DATA add volume expdp_dir size 1G;

Diskgroup altered.
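
The same volume could equally have been created from asmcmd instead of SQL*Plus; a minimal sketch, assuming the 11.2 asmcmd volcreate syntax:

[+ASM1][grid@ceshi1 ~]$ asmcmd volcreate -G DATA -s 1G expdp_dir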

3. #Verify the creation result
#Querying gv$asm_volume shows that both instances report the ASM volume (expdp_dir); the suffix 109 is generated automatically.
#VOLUME_DEVICE is /dev/asm/expdp_dir-109 and the device now exists in the file system
SQL> col VOLUME_NAME for a30
SQL> col VOLUME_DEVICE for a50
SQL> select volume_name, volume_device, size_mb, state from gv$asm_volume;

VOLUME_NAME                    VOLUME_DEVICE                     SIZE_MB STATE
------------------------------ ------------------------------ ---------- --------
EXPDP_DIR                      /dev/asm/expdp_dir-109                1024 ENABLED
EXPDP_DIR                      /dev/asm/expdp_dir-109                1024 ENABLED

All RAC nodes must perform steps 1-3 to install the ACFS drivers; otherwise the ASM volume cannot be created on every node in step 4. A quick cross-node check is sketched below.
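
This is only a sketch, assuming passwordless root ssh between the nodes and the ceshi1/ceshi2 hostnames used in this walkthrough:

#Hypothetical cross-node driver check
for h in ceshi1 ceshi2; do
  echo "== $h =="
  ssh root@$h 'lsmod | grep oracle || echo "ACFS/ADVM modules not loaded"'
done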

Now test creating an ASM volume while node 2 does not yet have the ACFS/ADVM modules installed:

[+ASM1][grid@ceshi1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Wed Mar 23 13:54:49 2022

Copyright (c) 1982, 2013, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

#Check whether a volume with the same name exists before creating an asm volume
SQL> select volume_name, volume_device, size_mb, state from v$asm_volume;

no rows selected

#Create asm volume expdp_dir
SQL> alter diskgroup DATA add volume expdp_dir size 1G;
alter diskgroup DATA add volume expdp_dir size 1G
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15490: not all instances could add/drop the volume


SQL> col VOLUME_NAME for a30
SQL> col VOLUME_DEVICE for a50
SQL> select volume_name, volume_device, size_mb, state from v$asm_volume;

VOLUME_NAME                    VOLUME_DEVICE                     SIZE_MB STATE
------------------------------ ------------------------------ ---------- --------
EXPDP_DIR                      /dev/asm/expdp_dir-109                1024 ENABLED
#v$asm_volume shows that the local instance created the ASM volume (expdp_dir) successfully.
#VOLUME_DEVICE is /dev/asm/expdp_dir-109 and the device exists in the file system
SQL> select volume_name, volume_device, size_mb, state from gv$asm_volume;

VOLUME_NAME                    VOLUME_DEVICE                     SIZE_MB STATE
------------------------------ ------------------------------ ---------- --------
EXPDP_DIR                      /dev/asm/expdp_dir-109                1024 ENABLED
EXPDP_DIR                      ERROR                                 1024 DISABLED
#On the other node the volume shows VOLUME_DEVICE ERROR and state DISABLED
SQL>
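
To pin down which instance holds the failed copy, gv$asm_volume can also be queried with its INST_ID column; a small sketch (the DISABLED row belongs to the node that lacks the drivers):

SQL> select inst_id, volume_name, state from gv$asm_volume order by inst_id;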

#Perform steps 1-3 to install the ACFS/ADVM module on all remaining nodes
#The installation process is omitted here. After it completes, verify that the module exists in memory:
[root@ceshi2 bin]# lsmod | grep oracle
oracleacfs 1981135 0
oracleadvm 233254 0
oracleoks 454412 2 oracleacfs,oracleadvm

#Enable the ASM volume on all nodes; any one of the following works:
1. Use asmcmd: ASMCMD [+] > volenable -G DATA expdp_dir
2. Use sqlplus / as sysasm: SQL> alter diskgroup DATA enable volume expdp_dir;
3. Use asmca from the graphical interface; under the covers it runs the same command: alter diskgroup DATA enable volume expdp_dir;

#Query to verify that the ASM volume has been enabled on all nodes
SQL> select volume_name, volume_device, size_mb, state from gv$asm_volume;

VOLUME_NAME                    VOLUME_DEVICE                     SIZE_MB STATE
------------------------------ ------------------------------ ---------- --------
EXPDP_DIR                      /dev/asm/expdp_dir-109                1024 ENABLED
EXPDP_DIR                      /dev/asm/expdp_dir-109                1024 ENABLED
#The expdp_dir-109 device now exists in the /dev/asm directory on both nodes
SQL>

5. Format the ASM volume into a file system:

① Use asmcmd volinfo -a to determine the volume path (single node execution)
[+ASM2][grid@ceshi2 ~]$ asmcmd volinfo -a
Diskgroup Name: DATA

Volume Name: EXPDP_DIR
Volume Device: /dev/asm/expdp_dir-109
State: ENABLED
Size (MB): 1024
Resize Unit (MB): 32
Redundancy: UNPROT
Stripe Columns: 4
Stripe Width (K): 128
Usage:
Mountpath:
② Format into an ACFS file system (single node execution)
[+ASM2][grid@ceshi2 ~]$ /sbin/mkfs -t acfs /dev/asm/expdp_dir-109
mkfs.acfs: version = 11.2.0.4.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/expdp_dir-109
mkfs.acfs: volume size = 1073741824
mkfs.acfs: Format complete.
③ Create mount points on all RAC nodes
# mkdir -p /expdp

6. Register the ACFS file system in the Oracle Registry (single node execution):

[+ASM1][grid@ceshi1 ~]$ /sbin/acfsutil registry -f -a /dev/asm/expdp_dir-109 /expdp
acfsutil registry: mount point /expdp successfully added to Oracle Registry
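
The registration can be verified by listing the registry contents; running acfsutil registry without arguments displays the registered file systems:

[+ASM1][grid@ceshi1 ~]$ /sbin/acfsutil registry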

7. Mount the ACFS file system on the ASM volume (ADVM):

--node one
#The mount point /expdp must already exist before mounting
[root@ceshi1 bin]# /bin/mount -t acfs /dev/asm/expdp_dir-109 /expdp
#The ACFS file system is now mounted
[root@ceshi1 bin]# df -h
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_root ext4 15G 5.3G 8.7G 38% /
tmpfs tmpfs 1.9G 159M 1.7G 9% /dev/shm
/dev/sda1 ext4 477M 84M 364M 19% /boot
/dev/mapper/vg_root-lv_home ext4 2.0G 1.1G 725M 61% /home
/dev/mapper/vg_root-lv_tmp ext4 976M 1.6M 908M 1% /tmp
/dev/mapper/vg_root-lv_data ext4 26G 21G 3.8G 85% /u01
/dev/mapper/vg_root-lv_var ext4 976M 96M 814M 11% /var
/dev/asm/expdp_dir-109 acfs 1.0G 79M 946M 8% /expdp

--node two
#The mount point /expdp must already exist before mounting
[root@ceshi2 dev]# /bin/mount -t acfs /dev/asm/expdp_dir-109 /expdp
#The ACFS file system is now mounted
[root@ceshi2 dev]# df -h
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_root ext4 15G 2.5G 12G 18% /
tmpfs tmpfs 1.9G 159M 1.7G 9% /dev/shm
/dev/sda1 ext4 477M 84M 364M 19% /boot
/dev/mapper/vg_root-lv_home ext4 2.0G 3.6M 1.8G 1% /home
/dev/mapper/vg_root-lv_tmp ext4 976M 1.6M 908M 1% /tmp
/dev/mapper/vg_root-lv_data ext4 26G 19G 6.4G 75% /u01
/dev/mapper/vg_root-lv_var ext4 976M 99M 811M 11% /var
/dev/asm/expdp_dir-109 acfs 1.0G 79M 946M 8% /expdp
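
To confirm the ACFS mounts on each node without scanning the full df output, the mount table can be filtered by file system type (plain Linux mount, nothing ACFS-specific):

[root@ceshi1 bin]# mount -t acfs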

8. Register the ora.registry.acfs service into the cluster (single node execution):

  1. Symptom: the ACFS file system is not mounted after a RAC node restarts.
  2. Cause: the "ora.registry.acfs" resource (checked with crsctl stat res -t) or the "ora.drivers.acfs" resource (checked with crsctl stat res -t -init) is not configured and does not exist in the CRS stack.
  3. Fix: execute ./acfsroot enable to register the ora.registry.acfs service into the cluster. The acfsroot enable command exists on 11.2.0.3 and later; on earlier versions (11.2.0.1 and 11.2.0.2), use the "acfsroot reregister" command instead.
[root@ceshi1 bin]#./acfsroot enable
ACFS-9376: Adding ADVM/ACFS drivers resource succeeded.
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'ceshi1'
CRS-2676: Start of 'ora.drivers.acfs' on 'ceshi1' succeeded
ACFS-9380: Starting ADVM/ACFS drivers resource succeeded.
ACFS-9368: Adding ACFS registry resource succeeded.
CRS-2672: Attempting to start 'ora.registry.acfs' on 'ceshi1'
CRS-2672: Attempting to start 'ora.registry.acfs' on 'ceshi2'
CRS-2676: Start of 'ora.registry.acfs' on 'ceshi1' succeeded
CRS-2676: Start of 'ora.registry.acfs' on 'ceshi2' succeeded
ACFS-9372: Starting ACFS registry resource succeeded.
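
The newly added resources can also be checked individually (crsctl stat res accepts a resource name):

[root@ceshi1 bin]# ./crsctl stat res ora.registry.acfs -t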

#View the cluster resource status:
[root@ceshi1 bin]# ./crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE ONLINE ceshi1
               ONLINE ONLINE ceshi2
ora.DATA.dg
               ONLINE ONLINE ceshi1
               ONLINE ONLINE ceshi2
ora.LISTENER.lsnr
               ONLINE ONLINE ceshi1
               ONLINE ONLINE ceshi2
ora.OCR.dg
               ONLINE ONLINE ceshi1
               ONLINE ONLINE ceshi2
ora.asm
               ONLINE ONLINE ceshi1 Started
               ONLINE ONLINE ceshi2 Started
ora.gsd
               OFFLINE OFFLINE ceshi1
               OFFLINE OFFLINE ceshi2
ora.net1.network
               ONLINE ONLINE ceshi1
               ONLINE ONLINE ceshi2
ora.ons
               ONLINE ONLINE ceshi1
               ONLINE ONLINE ceshi2
ora.registry.acfs
               ONLINE ONLINE ceshi1
               ONLINE ONLINE ceshi2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1 ONLINE ONLINE ceshi1
ora.ceshi1.vip
      1 ONLINE ONLINE ceshi1
ora.ceshi2.vip
      1 ONLINE ONLINE ceshi2
ora.cvu
      1 ONLINE ONLINE ceshi1
ora.oc4j
      1 ONLINE ONLINE ceshi1
ora.orcl.db
      1 OFFLINE OFFLINE Instance Shutdown
      2 OFFLINE OFFLINE Instance Shutdown
ora.scan1.vip
      1 ONLINE ONLINE ceshi1

2. The steps to uninstall/delete the ACFS/ADVM module on RAC are as follows:

Of the following steps 1-8, all except steps 2 and 3 (which run on a single node) are executed on every RAC node. Note that the examples below were captured on a different cluster (test1/test2), where the volume device is /dev/asm/expdp_dir-5.

1. Unmount the ACFS file system:

# /bin/umount -t acfs <filesystem>

#Examples are as follows:
[root@test1 /u01/app/11.2.0.4/grid/bin]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_root
                       15G 2.3G 12G 17% /
tmpfs 3.8G 158M 3.7G 5% /dev/shm
/dev/sda1 477M 84M 364M 19% /boot
/dev/mapper/vg_root-lv_home
                      2.0G 4.0M 1.8G 1% /home
/dev/mapper/vg_root-lv_tmp
                      2.0G 3.4M 1.8G 1% /tmp
/dev/mapper/vg_root-lv_data
                       26G 15G 11G 59% /u01
/dev/asm/expdp_dir-5 1.0G 79M 946M 8% /expdp
[root@test1 /u01/app/11.2.0.4/grid/bin]# /bin/umount -t acfs /expdp
[root@test1 /u01/app/11.2.0.4/grid/bin]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_root
                       15G 2.3G 12G 17% /
tmpfs 3.8G 158M 3.7G 5% /dev/shm
/dev/sda1 477M 84M 364M 19% /boot
/dev/mapper/vg_root-lv_home
                      2.0G 4.0M 1.8G 1% /home
/dev/mapper/vg_root-lv_tmp
                      2.0G 3.4M 1.8G 1% /tmp
/dev/mapper/vg_root-lv_data
                       26G 15G 11G 59% /u01
[root@test1 /u01/app/11.2.0.4/grid/bin]#
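
If the unmount fails with a "device is busy" error, the processes holding the mount point can be identified first; a small sketch using the standard fuser utility (not part of the Grid stack):

[root@test1 /u01/app/11.2.0.4/grid/bin]# fuser -vm /expdp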

2. Unregister the ACFS file system in the Oracle Registry (single node execution):

[root@test1 /u01/app/11.2.0.4/grid/bin]# /sbin/acfsutil registry -d /dev/asm/expdp_dir-5
acfsutil registry: successfully removed ACFS volume /dev/asm/expdp_dir-5 from Oracle Registry

3. Delete the ASM volume (single node execution):

[+ASM1][grid@test1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Tue Mar 29 17:02:01 2022

Copyright (c) 1982, 2013, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

#Query the current ASM volume status:
SQL> col VOLUME_NAME for a30
SQL> col VOLUME_DEVICE for a50
SQL> select volume_name, volume_device, size_mb, state from gv$asm_volume;

VOLUME_NAME                    VOLUME_DEVICE                     SIZE_MB STATE
------------------------------ ------------------------------ ---------- --------
EXPDP_DIR                      ERROR                                 1024 DISABLED
EXPDP_DIR                      ERROR                                 1024 DISABLED

#Delete asm volume:
SQL> alter diskgroup DATA drop volume expdp_dir;

Diskgroup altered.

#Verify that the asm volume has been deleted
SQL> select volume_name, volume_device, size_mb, state from gv$asm_volume;

no rows selected

4. Verify that the ACFS/ADVM module still exists in memory:

[root@test1 /u01/app/11.2.0.4/grid/bin]# lsmod | grep oracle
oracleacfs 1981135 1
oracleadvm 233254 5
oracleoks 454412 2 oracleacfs,oracleadvm
#The nonzero use counts show the drivers are still referenced, which is why CRS must be stopped before they can be unloaded

5. Stop the CRS or OHAS service:

[root@test1 /u01/app/11.2.0.4/grid/bin]# ./crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'test1'
CRS-2673: Attempting to stop 'ora.crsd' on 'test1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'test1'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'test1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'test1'
CRS-2673: Attempting to stop 'ora.ARCH.dg' on 'test1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'test1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'test1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'test1' succeeded
CRS-2673: Attempting to stop 'ora.test1.vip' on 'test1'
CRS-2677: Stop of 'ora.ARCH.dg' on 'test1' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'test1' succeeded
CRS-2677: Stop of 'ora.test1.vip' on 'test1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'test1' succeeded
CRS-2677: Stop of 'ora.OCR.dg' on 'test1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'test1'
CRS-2677: Stop of 'ora.asm' on 'test1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'test1'
CRS-2677: Stop of 'ora.ons' on 'test1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'test1'
CRS-2677: Stop of 'ora.net1.network' on 'test1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'test1' has completed
CRS-2677: Stop of 'ora.crsd' on 'test1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'test1'
CRS-2673: Attempting to stop 'ora.evmd' on 'test1'
CRS-2673: Attempting to stop 'ora.asm' on 'test1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'test1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'test1'
CRS-2677: Stop of 'ora.ctssd' on 'test1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'test1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'test1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'test1' succeeded
CRS-2677: Stop of 'ora.asm' on 'test1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'test1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'test1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'test1'
CRS-2677: Stop of 'ora.cssd' on 'test1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'test1'
CRS-2677: Stop of 'ora.crf' on 'test1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'test1'
CRS-2677: Stop of 'ora.gipcd' on 'test1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'test1'
CRS-2677: Stop of 'ora.gpnpd' on 'test1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'test1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

6. Unload the ACFS/ADVM module from memory:

[root@test1 /u01/app/11.2.0.4/grid/bin]# ./acfsload stop
[root@test1 /u01/app/11.2.0.4/grid/bin]#
#acfsload stop prints nothing on success

7. Verify again whether the ACFS/ADVM module exists in memory:

[root@test1 /u01/app/11.2.0.4/grid/bin]# lsmod | grep oracle
[root@test1 /u01/app/11.2.0.4/grid/bin]#
#Empty output: the modules have been unloaded

8. Uninstall the ACFS/ADVM module:

[root@test1 /u01/app/11.2.0.4/grid/bin]# ./acfsroot uninstall
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.

9. Start the CRS or OHAS service:

[root@test2 /u01/app/11.2.0.4/grid/bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

10. Check the cluster resource status:

[root@test2 /u01/app/11.2.0.4/grid/bin]# ./crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE ONLINE test1
               ONLINE ONLINE test2
ora.DATA.dg
               ONLINE ONLINE test1
               ONLINE ONLINE test2
ora.LISTENER.lsnr
               ONLINE ONLINE test1
               ONLINE ONLINE test2
ora.OCR.dg
               ONLINE ONLINE test1
               ONLINE ONLINE test2
ora.asm
               ONLINE ONLINE test1 Started
               ONLINE ONLINE test2 Started
ora.gsd
               OFFLINE OFFLINE test1
               OFFLINE OFFLINE test2
ora.net1.network
               ONLINE ONLINE test1
               ONLINE ONLINE test2
ora.ons
               ONLINE ONLINE test1
               ONLINE ONLINE test2
ora.registry.acfs
               ONLINE OFFLINE test1
               ONLINE OFFLINE test2
#ora.registry.acfs can no longer start because the drivers have been uninstalled
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1 ONLINE ONLINE test1
ora.cvu
      1 ONLINE ONLINE test1
ora.oc4j
      1 ONLINE ONLINE test1
ora.scan1.vip
      1 ONLINE ONLINE test1
ora.test1.vip
      1 ONLINE ONLINE test1
ora.test2.vip
      1 ONLINE ONLINE test2
ora.zhuku.db
      1 OFFLINE OFFLINE Instance Shutdown
      2 OFFLINE OFFLINE Instance Shutdown
[root@test2 /u01/app/11.2.0.4/grid/bin]#

11. Remove the ora.registry.acfs service (single node execution):

[root@test1 /u01/app/11.2.0.4/grid/bin]# ./acfsroot disable
ACFS-9374: Stopping ACFS registry resource succeeded.
ACFS-9370: Deleting ACFS registry resource succeeded.
ACFS-9378: Deleting ADVM/ACFS drivers resource succeeded.

12. Check the cluster resource status again:

[root@test2 /u01/app/11.2.0.4/grid/bin]# ./crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE ONLINE test1
               ONLINE ONLINE test2
ora.DATA.dg
               ONLINE ONLINE test1
               ONLINE ONLINE test2
ora.LISTENER.lsnr
               ONLINE ONLINE test1
               ONLINE ONLINE test2
ora.OCR.dg
               ONLINE ONLINE test1
               ONLINE ONLINE test2
ora.asm
               ONLINE ONLINE test1 Started
               ONLINE ONLINE test2 Started
ora.gsd
               OFFLINE OFFLINE test1
               OFFLINE OFFLINE test2
ora.net1.network
               ONLINE ONLINE test1
               ONLINE ONLINE test2
ora.ons
               ONLINE ONLINE test1
               ONLINE ONLINE test2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1 ONLINE ONLINE test1
ora.cvu
      1 ONLINE ONLINE test1
ora.oc4j
      1 ONLINE ONLINE test1
ora.scan1.vip
      1 ONLINE ONLINE test1
ora.test1.vip
      1 ONLINE ONLINE test1
ora.test2.vip
      1 ONLINE ONLINE test2
ora.zhuku.db
      1 OFFLINE OFFLINE Instance Shutdown
      2 OFFLINE OFFLINE Instance Shutdown
#ora.registry.acfs service has been removed

Mostly reproduced from:

https://www.cnblogs.com/junzibuyuantian/p/16079275.html
