How to install and run GFS2.

Refer to the cluster project page for the latest information.
http://sources.redhat.com/cluster/

Get source
----------

Get a kernel that has GFS2 and DLM.
git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-2.6.git

Get the 'cluster' cvs tree, instructions at:
http://sources.redhat.com/cluster/

Optionally, get the LVM2 cvs from
cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2

Build and install
-----------------

Compile the kernel with GFS2, DLM, configfs, IPV6 and SCTP enabled.

Build and install from the cluster tree.  Various parts of the tree
aren't updated yet, so just build the minimum bits shown here.

  cd cluster
  ./configure --kernel_src=/path/to/kernel
  cd cluster/cman/lib; make; make install
  cd cluster/ccs; make; make install
  cd cluster/cman; make; make install (*)
  cd cluster/group; make; make install
  cd cluster/fence; make; make install
  cd cluster/dlm; make; make install
  cd cluster/gfs/lock_dlm/daemon; make; make install
  edit INCLUDEPATH in cluster/gfs2/mkfs/Makefile
  cd cluster/gfs2/libgfs2; make; make install
  cd cluster/gfs2/convert; make; make install
  cd cluster/gfs2/fsck; make; make install
  cd cluster/gfs2/mkfs; make; make install
  cd cluster/gfs2/mount; make; make install

(*) this step downloads and builds an openais tarball from
    http://people.redhat.com/pcaulfie/

To build LVM2 & clvm:

  cd LVM2
  ./configure --with-clvmd=cman --with-cluster=shared
  make; make install

Load kernel modules
-------------------

  modprobe gfs2
  modprobe lock_dlm
  modprobe lock_nolock
  modprobe dlm
  modprobe dlm_device

Configuration
-------------

Create /etc/cluster/cluster.conf and copy it to all nodes.

The format and content of cluster.conf has changed little since the
last generation of the software.  See the old example here:
http://sources.redhat.com/cluster/doc/usage.txt

The one change you will need to make is to add nodeids for all nodes
in the cluster; these are now mandatory, e.g. a nodeid="1" attribute
on each clusternode entry (see the example cluster.conf below).

If you already have a cluster.conf file with no nodeids in it, you can
use the 'ccs_tool addnodeids' command to add them.

Example cluster.conf
--------------------

This is a basic cluster.conf file that uses manual fencing.  The node
names should resolve to the address on the network interface you want
to use for cman/dlm communication.
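A minimal sketch of what such a file might look like for a three-node
cluster follows; the cluster name "alpha", the node names node01,
node02 and node03, and the nodeids are placeholders rather than values
taken from this document, and manual fencing is expressed with the
fence_manual agent.

<?xml version="1.0"?>
<cluster name="alpha" config_version="1">

<clusternodes>

<!-- every node gets a mandatory nodeid and a manual fence method -->
<clusternode name="node01" nodeid="1">
  <fence>
    <method name="single">
      <device name="manual" nodename="node01"/>
    </method>
  </fence>
</clusternode>

<clusternode name="node02" nodeid="2">
  <fence>
    <method name="single">
      <device name="manual" nodename="node02"/>
    </method>
  </fence>
</clusternode>

<clusternode name="node03" nodeid="3">
  <fence>
    <method name="single">
      <device name="manual" nodename="node03"/>
    </method>
  </fence>
</clusternode>

</clusternodes>

<fencedevices>
  <fencedevice name="manual" agent="fence_manual"/>
</fencedevices>

</cluster>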
Startup procedure
-----------------

Run these commands on each cluster node; the debug/verbose options
shown in [] can be useful at this stage :)

> mount -t configfs none /sys/kernel/config
> ccsd -X
> cman_tool join [-d]
> groupd [-D]
> fenced [-D]
> fence_tool join
> dlm_controld [-D]
> gfs_controld [-D]
> mkfs -t gfs2 -p lock_dlm -t <clustername>:<fsname> -j <#journals> <blockdev>
> mount -t gfs2 [-v] <blockdev> <mountpoint>
> group_tool ls
  Shows registered groups, similar to what "cman_tool services" did.

Notes:
- <clustername> in mkfs should match the one in cluster.conf.
- <fsname> in mkfs is any name you pick; each fs must have a different name.
- <#journals> in mkfs should be greater than or equal to the number of
  nodes that you want to mount this fs; each node uses a separate journal.
- To avoid unnecessary fencing when starting the cluster, it's best for
  all nodes to join the cluster (complete cman_tool join) before any of
  them do fence_tool join.

Shutdown procedure
------------------

Run these commands on each cluster node:

> umount -t gfs2 [-v] <mountpoint>
> fence_tool leave
> cman_tool leave

Notes:
- You need the util-linux 2.13-pre6 version of umount(8); older versions
  do not call the umount.gfs2 helper.

Converting from GFS1 to GFS2
----------------------------

If you have GFS1 filesystems that you need to convert to GFS2, follow
this procedure (a worked sketch of the whole sequence follows the list):

1. Back up your entire filesystem first.
   e.g. cp /dev/your_vg/lvol0 /your_gfs_backup
2. Run gfs_fsck to ensure filesystem integrity.
   e.g. gfs2_fsck /dev/your_vg/lvol0
3. Make sure the filesystem is not mounted from any node.
   e.g. for i in `grep "...
4. Run gfs2_convert on the filesystem from one of the nodes.
   e.g. gfs2_convert /dev/your_vg/lvol0
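As a concrete illustration (not a prescription), the sequence above
might look like this for a filesystem on /dev/your_vg/lvol0; the device
and backup paths are just the placeholders used in the steps, and the
per-node mount check is only one possible way to satisfy step 3.

  # 1. Raw backup of the device (the target needs enough free space).
  cp /dev/your_vg/lvol0 /your_gfs_backup

  # 2. Check filesystem integrity before converting.
  gfs2_fsck /dev/your_vg/lvol0

  # 3. On every node, confirm the filesystem is not mounted;
  #    this should print nothing anywhere.
  mount | grep gfs

  # 4. Convert in place, from one node only.
  gfs2_convert /dev/your_vg/lvol0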