usage.txt
How to install and run GFS2.
Refer to the cluster project page for the latest information.
http://sources.redhat.com/cluster/
Get source
----------
Get a kernel that has GFS2 and DLM.
git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-2.6.git
Check out the 'cluster' CVS tree; instructions are at:
http://sources.redhat.com/cluster/
Optionally, check out the LVM2 CVS tree from:
cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2
Build and install
-----------------
Compile the kernel with GFS2, DLM, configfs, IPv6 and SCTP enabled.
Build and install from cluster tree. Various parts of the tree aren't
updated yet, so just build the minimum bits shown here.
cd cluster
./configure --kernel_src=/path/to/kernel
cd cluster/cman/lib; make; make install
cd cluster/ccs; make; make install
cd cluster/cman; make; make install (*)
cd cluster/group; make; make install
cd cluster/fence; make; make install
cd cluster/dlm; make; make install
cd cluster/gfs/lock_dlm/daemon; make; make install
edit INCLUDEPATH in cluster/gfs2/mkfs/Makefile
cd cluster/gfs2/libgfs2; make; make install
cd cluster/gfs2/convert; make; make install
cd cluster/gfs2/fsck; make; make install
cd cluster/gfs2/mkfs; make; make install
cd cluster/gfs2/mount; make; make install
(*) this step downloads and builds an openais tarball from
http://people.redhat.com/pcaulfie/
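
The per-directory build steps above can be collected into one loop. This is a
dry-run sketch that only prints each command (drop the 'echo' to build for
real); the directory list is exactly the minimum set shown above.

```shell
# Dry-run of the build order listed above; the 'echo' only prints each
# command, drop it to actually build. Remember to edit INCLUDEPATH in
# cluster/gfs2/mkfs/Makefile before the gfs2/mkfs step.
set -e
for d in cman/lib ccs cman group fence dlm gfs/lock_dlm/daemon \
         gfs2/libgfs2 gfs2/convert gfs2/fsck gfs2/mkfs gfs2/mount; do
    echo "cd cluster/$d && make && make install"
done
```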
To build LVM2 & clvm:
cd LVM2
./configure --with-clvmd=cman --with-cluster=shared
make; make install
Load kernel modules
-------------------
modprobe gfs2
modprobe lock_dlm
modprobe lock_nolock
modprobe dlm
modprobe dlm_device
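
The module loads above can be wrapped in a loop that stops at the first
failure. This sketch only prints the commands (drop the 'echo' to load for
real); note that lock_nolock is only needed for single-node, no-cluster
mounts.

```shell
# Dry-run of the module load sequence above; drop 'echo' to load for
# real. set -e stops at the first failure so a missing module is noticed.
set -e
for mod in gfs2 lock_dlm lock_nolock dlm dlm_device; do
    echo "modprobe $mod"
done
```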
Configuration
-------------
Create /etc/cluster/cluster.conf and copy it to all nodes.
The format and content of cluster.conf has changed little since the
last generation of the software. See old example here:
http://sources.redhat.com/cluster/doc/usage.txt
The one change you will need to make is to add a nodeid for every node
in the cluster; nodeids are now mandatory. e.g.:
<clusternode name="node12.mycluster.mycompany.com" votes="1" nodeid="12">
If you already have a cluster.conf file with no nodeids in it, you can
use the 'ccs_tool addnodeids' command to add them.
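
If you are writing cluster.conf by hand, the nodeid attributes can also be
generated with a small loop. The hostnames below are placeholders; substitute
your real node names.

```shell
# Emit <clusternode> entries with sequential, mandatory nodeids for a
# list of hostnames (placeholder names; substitute your own).
i=1
for node in node01 node02 node03; do
    printf '  <clusternode name="%s" votes="1" nodeid="%d">\n' "$node" "$i"
    i=$((i + 1))
done
```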
Example cluster.conf
--------------------
This is a basic cluster.conf file that uses manual fencing. The node
names should resolve to the address on the network interface you want to
use for cman/dlm communication.
<?xml version="1.0"?>
<cluster name="alpha" config_version="1">

  <clusternodes>
    <clusternode name="node01" nodeid="1">
      <fence>
        <method name="single">
          <device name="man" nodename="node01"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node02" nodeid="2">
      <fence>
        <method name="single">
          <device name="man" nodename="node02"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node03" nodeid="3">
      <fence>
        <method name="single">
          <device name="man" nodename="node03"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>

  <fencedevices>
    <fencedevice name="man" agent="fence_manual"/>
  </fencedevices>

</cluster>
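
A quick sanity check before copying the file out: verify that every
<clusternode> line carries a nodeid attribute, using only grep. A copy of the
example entries is checked here for illustration; point CONF at
/etc/cluster/cluster.conf on a real node instead.

```shell
# Sanity-check sketch: count <clusternode> lines missing a nodeid
# attribute (should be 0). Replace the heredoc with your real config.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<clusternode name="node01" nodeid="1">
<clusternode name="node02" nodeid="2">
<clusternode name="node03" nodeid="3">
EOF
missing=$(grep '<clusternode ' "$CONF" | grep -cv 'nodeid=' || true)
echo "clusternodes without nodeid: $missing"
rm -f "$CONF"
```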
Startup procedure
-----------------
Run these commands on each cluster node:
debug/verbose options in [] can be useful at this stage :)
> mount -t configfs none /sys/kernel/config
> ccsd -X
> cman_tool join [-d]
> groupd [-D]
> fenced [-D]
> fence_tool join
> dlm_controld [-D]
> gfs_controld [-D]
> mkfs -t gfs2 -p lock_dlm -t <clustername>:<fsname> -j <#journals> <blockdev>
> mount -t gfs2 [-v] <blockdev> <mountpoint>
> group_tool ls
Shows registered groups, similar to what cman_tool services did.
Notes:
- <clustername> in mkfs should match the one in cluster.conf.
- <fsname> in mkfs is any name you pick, each fs must have a different name.
- <#journals> in mkfs should be greater than or equal to the number of nodes
that you want to mount this fs, each node uses a separate journal.
- To avoid unnecessary fencing when starting the cluster, it's best for
all nodes to join the cluster (complete cman_tool join) before any
of them do fence_tool join.
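
The startup sequence above can be scripted. This is a dry-run sketch that
only prints each command; swap the body of run() for '"$@"' once you have
verified the order on a test node. The device, mountpoint and fs name are
placeholders, and the mkfs line uses the 'alpha' cluster name from the
example cluster.conf.

```shell
# Dry-run of the startup sequence; swap the body of run() to execute for
# real. /dev/sdb1, /mnt/gfs0 and the fs name gfs0 are placeholders.
run() { echo "+ $*"; }
run mount -t configfs none /sys/kernel/config
run ccsd -X
run cman_tool join
run groupd
run fenced
run fence_tool join
run dlm_controld
run gfs_controld
# run mkfs once only, from a single node; 3 journals for 3 nodes
run mkfs -t gfs2 -p lock_dlm -t alpha:gfs0 -j 3 /dev/sdb1
run mount -t gfs2 /dev/sdb1 /mnt/gfs0
```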
Shutdown procedure
------------------
Run these commands on each cluster node:
> umount -t gfs2 [-v] <mountpoint>
> fence_tool leave
> cman_tool leave
Notes:
- You need the umount(8) from util-linux 2.13-pre6 or later; older versions
  do not call the umount.gfs2 helper.
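
To take the whole cluster down, the same three steps can be printed for every
node in one loop. This sketch uses a hypothetical hardcoded node list and
mountpoint; on a real cluster you might pull the names out of cluster.conf as
in the conversion section below.

```shell
# Sketch: print the per-node shutdown commands (dry-run). Node names and
# the mountpoint /mnt/gfs0 are placeholders.
for node in node01 node02 node03; do
    echo "ssh $node 'umount /mnt/gfs0 && fence_tool leave && cman_tool leave'"
done
```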
Converting from GFS1 to GFS2
----------------------------
If you have GFS1 filesystems that you need to convert to GFS2, follow
this procedure:
1. Back up your entire filesystem first.
e.g. cp /dev/your_vg/lvol0 /your_gfs_backup
2. Run gfs_fsck to ensure filesystem integrity.
e.g. gfs_fsck /dev/your_vg/lvol0
3. Make sure the filesystem is not mounted from any node.
e.g. for i in `grep "<clusternode name" /etc/cluster/cluster.conf | cut -d '"' -f2` ; do ssh $i "mount | grep gfs" ; done
4. Make sure you have the latest software versions.
5. Run gfs2_convert <blockdev> from one of the nodes.
e.g. gfs2_convert /dev/your_vg/lvol0
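
The conversion steps can be collected into one guarded script. This dry-run
sketch only prints the commands, using the placeholder device and backup
paths from the steps above; step 3 (fs unmounted everywhere) and step 4
(latest software) must still be verified by hand before dropping the echos.

```shell
# Dry-run of the GFS1 -> GFS2 conversion; drop the echos to run for
# real. DEV and BACKUP are the placeholder paths from the steps above.
set -e
DEV=/dev/your_vg/lvol0
BACKUP=/your_gfs_backup
echo "cp $DEV $BACKUP"      # 1. back up the entire device first
echo "gfs_fsck $DEV"        # 2. check the GFS1 filesystem
# 3./4. verify by hand: fs unmounted on all nodes, software up to date
echo "gfs2_convert $DEV"    # 5. convert in place (irreversible)
```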