6.2. Create the GFS volumes
Because GFS is installed only on the RAC servers, these steps must be performed on one of them. It does not matter which one, but if you are installing Oracle Clusterware and RDBMS, it is recommended that you choose node 1. First, verify that the node can see the logical volumes.
Run:
rac1 $ sudo lvscan
  ACTIVE            '/dev/oradata/datafiles' [48.00 GB] inherit
  ACTIVE            '/dev/redo4/log4' [4.00 GB] inherit
  ACTIVE            '/dev/common/ohome' [5.50 GB] inherit
  ACTIVE            '/dev/redo3/log3' [4.00 GB] inherit
  ACTIVE            '/dev/redo2/log2' [4.00 GB] inherit
  ACTIVE            '/dev/redo1/log1' [4.00 GB] inherit
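If any of the six volumes is missing or is not marked ACTIVE, one possible remedy (assuming the volumes were created with the standard LVM2 tools) is to activate all volume groups and rescan:

rac1 $ sudo vgchange -ay
rac1 $ sudo lvscan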
The command-line utility mkfs.gfs is used to create each volume, because some non-default values are required to create GFS volumes suitable for Oracle.
Run:
rac1 $ mkfs.gfs -h
mkfs.gfs [options] <device>
Options:
  -b <bytes>       Filesystem block size
  -D               Enable debugging code
  -h               Print this help, then exit
  -J <MB>          Size of journals
  -j <num>         Number of journals
  -O               Do not ask for confirmation
  -p <name>        Name of the locking protocol
  -q               Do not print anything
  -r <MB>          Resource Group Size
  -s <blocks>      Journal segment size
  -t <name>        Name of the lock table
  -V               Print program version information, then exit
Option | Value | Notes
---|---|---
-j | 4 | One for each RAC node. GULM lock servers do not run GFS.
-J | 32MB | Oracle maintains the integrity of its filesystem with its own journals (redo logs). All database files are opened O_DIRECT (bypassing the RHEL buffer cache and the need to use GFS journals). Additionally, redo logs are opened O_SYNC.
-p | lock_gulm | The chosen locking protocol.
-t | alpha_cluster:log1 | The cluster name (from /etc/sysconfig/clusters) and the logical volume being initialized.
<device> | /dev/redo1/log1 | The block-mode logical-device name.
Table 6.1. mkfs.gfs Options
Run:
rac1 $ sudo mkfs.gfs -J 32 -j 4 -p lock_gulm -t alpha_cluster:log1 /dev/redo1/log1
This will destroy any data on /dev/redo1/log1.
Are you sure you want to proceed? [y/n] y
Device:                    /dev/redo1/log1
Blocksize:                 4096
Filesystem Size:           1015600
Journals:                  4
Resource Groups:           16
Locking Protocol:          lock_gulm
Lock Table:                alpha_cluster:log1
Syncing...
All Done
Run:
mkfs.gfs -J 32 -j 4 -p lock_gulm -t alpha_cluster:log2 /dev/redo2/log2
mkfs.gfs -J 32 -j 4 -p lock_gulm -t alpha_cluster:log3 /dev/redo3/log3
mkfs.gfs -J 32 -j 4 -p lock_gulm -t alpha_cluster:log4 /dev/redo4/log4
mkfs.gfs -J 32 -j 4 -p lock_gulm -t alpha_cluster:ohome /dev/common/ohome
mkfs.gfs -J 32 -j 4 -p lock_gulm -t alpha_cluster:datafiles /dev/oradata/datafiles
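Because these remaining volumes differ only in their lock-table names and device paths, a short shell loop is an equivalent way to create all five in one pass. This is only a sketch; the -O flag (listed in the mkfs.gfs help above) suppresses the confirmation prompt.

for vol in redo2:log2 redo3:log3 redo4:log4 common:ohome oradata:datafiles; do
    vg=${vol%%:*}; lv=${vol##*:}            # volume group and logical volume
    sudo mkfs.gfs -O -J 32 -j 4 -p lock_gulm \
        -t alpha_cluster:$lv /dev/$vg/$lv   # lock-table name matches the LV name
done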
Next, add the GFS filesystems to /etc/fstab. The _netdev mount option defers mounting until networking and the cluster infrastructure are available:

/dev/common/ohome        /mnt/ohome      gfs  _netdev  0 0
/dev/oradata/datafiles   /mnt/datafiles  gfs  _netdev  0 0
/dev/redo1/log1          /mnt/log1       gfs  _netdev  0 0
/dev/redo2/log2          /mnt/log2       gfs  _netdev  0 0
/dev/redo3/log3          /mnt/log3       gfs  _netdev  0 0
/dev/redo4/log4          /mnt/log4       gfs  _netdev  0 0
The _netdev option is also useful because it ensures the filesystems are unmounted before cluster services shut down. Copy this section of /etc/fstab to the other nodes in the cluster. These volumes are mounted under /mnt, so the corresponding mount directories must be created on every node.
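On each node, one possible way to create the mount points and then mount every GFS entry from /etc/fstab is shown below (a sketch; adjust the directory names if you chose different mount points):

rac1 $ sudo mkdir -p /mnt/ohome /mnt/datafiles /mnt/log1 /mnt/log2 /mnt/log3 /mnt/log4
rac1 $ sudo mount -a -t gfs

Running df on each node should then show the new GFS filesystems: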
Filesystem                     1K-blocks    Used  Available Use% Mounted on
/dev/mapper/redo1-log1           4062624      20    4062604   1% /mnt/log1
/dev/mapper/redo2-log2           4062368      20    4062348   1% /mnt/log2
/dev/mapper/redo3-log3           4062624      20    4062604   1% /mnt/log3
/dev/mapper/redo4-log4           4062368      20    4062348   1% /mnt/log4
/dev/mapper/common-ohome         6159232      20    6159212   1% /mnt/ohome
/dev/mapper/oradata-datafiles   50193856      40   50193816   1% /mnt/datafiles