How to install and run GFS2.

Refer to the cluster project page for the latest information.
http://sources.redhat.com/cluster/

Install
-------

Install a Linux kernel with GFS2, DLM, configfs, IPV6 and SCTP,
2.6.19-rc1 or later.

Install openais.  Get the latest "whitetank" (stable) release from
http://developer.osdl.org/dev/openais/ or check it out from svn:

  svn checkout http://svn.osdl.org/openais
  cd openais/branches/whitetank
  make; make install DESTDIR=/

Install libvolume_id from udev-094 or later, e.g.
http://www.us.kernel.org/pub/linux/utils/kernel/hotplug/udev-094.tar.bz2

  make EXTRAS="extras/volume_id" install

Install the cluster CVS tree from source (the password is "cvs"):

  cvs -d :pserver:cvs@sources.redhat.com:/cvs/cluster login cvs
  cvs -d :pserver:cvs@sources.redhat.com:/cvs/cluster checkout cluster
  cd cluster
  ./configure --kernel_src=/path/to/kernel
  make install

Install LVM2/CLVM (optional; the password is "cvs"):

  cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2 login cvs
  cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2 checkout LVM2
  cd LVM2
  ./configure --with-clvmd=cman --with-cluster=shared
  make; make install

Load kernel modules
-------------------

  modprobe gfs2
  modprobe lock_dlm
  modprobe lock_nolock
  modprobe dlm

Configuration
-------------

Create /etc/cluster/cluster.conf and copy it to all nodes.  The format
and content of cluster.conf have changed little since the last
generation of the software.  See an old example here:
http://sources.redhat.com/cluster/doc/usage.txt

The one change you will need to make is to add a nodeid for every node
in the cluster.  These are now mandatory, e.g.

  <clusternode name="node01" nodeid="1">

If you already have a cluster.conf file with no nodeids in it, you can
use the 'ccs_tool addnodeids' command to add them.

Example cluster.conf
--------------------

This is a basic cluster.conf file that uses manual fencing.  The node
names should resolve to the address on the network interface you want
to use for openais/cman/dlm communication.
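The original example file was lost from this copy of the document; the
sketch below is an illustrative reconstruction for a hypothetical
three-node cluster with manual fencing.  The cluster name "alpha", the
node names node01-node03, and the fence device name "man" are
placeholder values, not requirements.

```xml
<?xml version="1.0"?>
<cluster name="alpha" config_version="1">

<cman>
</cman>

<clusternodes>
<clusternode name="node01" nodeid="1">
        <fence>
        <method name="single">
        <device name="man" nodename="node01"/>
        </method>
        </fence>
</clusternode>

<clusternode name="node02" nodeid="2">
        <fence>
        <method name="single">
        <device name="man" nodename="node02"/>
        </method>
        </fence>
</clusternode>

<clusternode name="node03" nodeid="3">
        <fence>
        <method name="single">
        <device name="man" nodename="node03"/>
        </method>
        </fence>
</clusternode>
</clusternodes>

<fencedevices>
        <fencedevice name="man" agent="fence_manual"/>
</fencedevices>

</cluster>
```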
Startup procedure
-----------------

Run these commands on each cluster node (debug/verbose options are
shown in []):

  > mount -t configfs none /sys/kernel/config
  > ccsd -X
  > cman_tool join [-d]
  > groupd [-D]
  > fenced [-D]
  > fence_tool join
  > dlm_controld [-D]
  > gfs_controld [-D]
  > clvmd (optional)
  > mkfs -t gfs2 -p lock_dlm -t <clustername>:<fsname> -j <#journals> <blockdevice>
  > mount -t gfs2 [-v] <blockdevice> <mountpoint>

Notes:
- <clustername> in mkfs should match the cluster name in cluster.conf.
- <fsname> in mkfs is any name you pick; each fs must have a different
  name.
- <#journals> in mkfs should be greater than or equal to the number of
  nodes that you want to mount this fs; each node uses a separate
  journal.
- To avoid unnecessary fencing when starting the cluster, it's best
  for all nodes to join the cluster (complete cman_tool join) before
  any of them do fence_tool join.
- The cman_tool "status" and "nodes" options show the status and
  members of the cluster.
- The group_tool command shows all local groups, which include the
  fencing group, dlm lockspaces and gfs mounts.
- The "cman" init script can be used to start everything up through
  gfs_controld in the list above.

Shutdown procedure
------------------

Run these commands on each cluster node:

  > umount [-v] <mountpoint>
  > fence_tool leave
  > cman_tool leave

Notes:
- umount(8) from util-linux 2.13-pre6 or later is required; older
  versions do not call the umount.gfs2 helper.

Converting from GFS1 to GFS2
----------------------------

If you have GFS1 filesystems that you need to convert to GFS2, follow
this procedure:

1. Back up your entire filesystem first.
   e.g. cp /dev/your_vg/lvol0 /your_gfs_backup

2. Run fsck to ensure filesystem integrity.
   e.g. gfs2_fsck /dev/your_vg/lvol0

3. Make sure the filesystem is not mounted from any node.
   e.g. for i in `grep "<clusternode name" /etc/cluster/cluster.conf | cut -d '"' -f2`; do ssh $i "mount | grep gfs"; done

4. Run the conversion program from one of the nodes.
   e.g. gfs2_convert /dev/your_vg/lvol0
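Per-node checks like the one in step 3 come up often (checking mounts,
pushing cluster.conf to all nodes, verifying that every node has
completed cman_tool join).  A small sketch of pulling the node list out
of cluster.conf, assuming node entries use the usual
<clusternode name="..."> layout; list_nodes is a hypothetical helper
name, not part of the cluster tools:

```shell
# list_nodes: print the node names declared in a cluster.conf file.
# Assumes each node appears on a line like:
#   <clusternode name="node01" nodeid="1">
# (list_nodes is a helper invented for this sketch.)
list_nodes() {
    grep '<clusternode name' "$1" | cut -d '"' -f2
}

# Example use: check every node for gfs mounts before converting.
# for n in $(list_nodes /etc/cluster/cluster.conf); do
#     ssh "$n" 'mount | grep gfs'
# done
```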