How to install and run GFS2.
Refer to the cluster project page for the latest information.
http://sources.redhat.com/cluster/
Install
-------
Install a Linux kernel with GFS2, DLM, configfs, IPv6 and SCTP support,
2.6.19-rc1 or later.
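If you build the kernel yourself, the options involved look roughly like
this (a sketch using 2.6.19-era option names; verify them against your
own tree):
    CONFIG_GFS2_FS=m
    CONFIG_GFS2_FS_LOCKING_NOLOCK=m
    CONFIG_GFS2_FS_LOCKING_DLM=m
    CONFIG_DLM=m
    CONFIG_CONFIGFS_FS=m
    CONFIG_IPV6=m
    CONFIG_IP_SCTP=m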
Install openais
    get the latest "whitetank" (stable) release from
    http://developer.osdl.org/dev/openais/
    or
    svn checkout http://svn.osdl.org/openais
    cd openais/branches/whitetank
    make; make install DESTDIR=/
Install libvolume_id
    from udev-094 or later, e.g.
    http://www.us.kernel.org/pub/linux/utils/kernel/hotplug/udev-094.tar.bz2
    make EXTRAS="extras/volume_id" install
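For example, building from the udev-094 tarball above (a sketch; adjust
the version to match what you download):
    wget http://www.us.kernel.org/pub/linux/utils/kernel/hotplug/udev-094.tar.bz2
    tar xjf udev-094.tar.bz2
    cd udev-094
    make EXTRAS="extras/volume_id" install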
Install the cluster CVS tree from source:
    cvs -d :pserver:cvs@sources.redhat.com:/cvs/cluster login cvs
    cvs -d :pserver:cvs@sources.redhat.com:/cvs/cluster checkout cluster
    (the password is "cvs")
    cd cluster
    ./configure --kernel_src=/path/to/kernel
    make install
Install LVM2/CLVM (optional)
    cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2 login cvs
    cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2 checkout LVM2
    (the password is "cvs")
    cd LVM2
    ./configure --with-clvmd=cman --with-cluster=shared
    make; make install
Load kernel modules
-------------------
modprobe gfs2
modprobe lock_dlm
modprobe lock_nolock
modprobe dlm
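To confirm the modules actually loaded:
    lsmod | egrep 'gfs2|dlm'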
Configuration
-------------
Create /etc/cluster/cluster.conf and copy it to all nodes.
The format and content of cluster.conf has changed little since the
last generation of the software. See old example here:
http://sources.redhat.com/cluster/doc/usage.txt
The one change you will need to make is to add nodeids for all nodes
in the cluster; these are now mandatory, e.g.:
<clusternode name="node12.mycluster.mycompany.com" votes="1" nodeid="12">
If you already have a cluster.conf file with no nodeids in it, then you can
use the 'ccs_tool addnodeids' command to add them.
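One way to copy the file to the other nodes (the node names here are the
hypothetical ones from the example below):
    scp /etc/cluster/cluster.conf node02:/etc/cluster/
    scp /etc/cluster/cluster.conf node03:/etc/cluster/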
Example cluster.conf
--------------------
This is a basic cluster.conf file that uses manual fencing. The node
names should resolve to the address on the network interface you want to
use for openais/cman/dlm communication.
<?xml version="1.0"?>
<cluster name="alpha" config_version="1">
  <clusternodes>
    <clusternode name="node01" nodeid="1">
      <fence>
        <method name="single">
          <device name="man" nodename="node01"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node02" nodeid="2">
      <fence>
        <method name="single">
          <device name="man" nodename="node02"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node03" nodeid="3">
      <fence>
        <method name="single">
          <device name="man" nodename="node03"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="man" agent="fence_manual"/>
  </fencedevices>
</cluster>
Startup procedure
-----------------
Run these commands on each cluster node:
debug/verbose options are in []
> mount -t configfs none /sys/kernel/config
> ccsd -X
> cman_tool join [-d]
> groupd [-D]
> fenced [-D]
> fence_tool join
> dlm_controld [-D]
> gfs_controld [-D]
> clvmd (optional)
> mkfs -t gfs2 -p lock_dlm -t <clustername>:<fsname> -j <#journals> <blockdev>
> mount -t gfs2 [-v] <blockdev> <mountpoint>
Notes:
- <clustername> in mkfs should match the one in cluster.conf (see the
  worked example after these notes).
- <fsname> in mkfs is any name you pick; each fs must have a different name.
- <#journals> in mkfs should be greater than or equal to the number of nodes
  that you want to mount this fs; each node uses a separate journal.
- To avoid unnecessary fencing when starting the cluster, it's best for
all nodes to join the cluster (complete cman_tool join) before any
of them do fence_tool join.
- The cman_tool "status" and "nodes" options show the status and members
of the cluster.
- The group_tool command shows all local groups, which include the
  fencing group, dlm lockspaces and gfs mounts.
- The "cman" init script can be used for starting everything up through
gfs_controld in the list above.
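As a worked example, using the three-node "alpha" cluster above with a
hypothetical device /dev/your_vg/lvol0 and mount point /mnt/gfs01:
> mkfs -t gfs2 -p lock_dlm -t alpha:gfs01 -j 3 /dev/your_vg/lvol0
> mount -t gfs2 /dev/your_vg/lvol0 /mnt/gfs01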
Shutdown procedure
------------------
Run these commands on each cluster node:
> umount [-v] <mountpoint>
> fence_tool leave
> cman_tool leave
Notes:
- umount(8) from util-linux 2.13-pre6 or later is required; older
  versions do not call the umount.gfs2 helper.
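To check which umount(8) you have (util-linux umount prints its version
with -V):
> umount -V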
Converting from GFS1 to GFS2
----------------------------
If you have GFS1 filesystems that you need to convert to GFS2, follow
this procedure:
1. Back up your entire filesystem first.
e.g. cp /dev/your_vg/lvol0 /your_gfs_backup
2. Run fsck to ensure filesystem integrity.
e.g. gfs2_fsck /dev/your_vg/lvol0
3. Make sure the filesystem is not mounted from any node.
e.g. for i in `grep "<clusternode name" /etc/cluster/cluster.conf | cut -d '"' -f2` ; do ssh $i "mount | grep gfs" ; done
4. Make sure you have the latest software versions.
5. Run gfs2_convert <blockdev> from one of the nodes.
e.g. gfs2_convert /dev/your_vg/lvol0