Minimum GFS HowTo
-----------------
The following gfs configuration requires a minimum amount of hardware and
no expensive storage system. It's the cheapest and quickest way to "play"
with gfs.
 ----------            ----------
 |  GNBD  |            |  GNBD  |
 | client |            | client |      <-- these nodes use gfs
 | node2  |            | node3  |
 ----------            ----------
      |                    |
      ----------------------   IP network
                 |
             ----------
             |  GNBD  |
             | server |                <-- this node doesn't use gfs
             | node1  |
             ----------
- Three machines are used, with hostnames node1, node2, and node3.
- node1 has an extra disk, /dev/sda1, to use for gfs.
  (This could be hda1, an lvm LV, or an md device; a sketch of the LV
  option follows this list.)
- node1 will use gnbd to export this disk to node2 and node3.
- node1 cannot use gfs; it only acts as a gnbd server.
  (node1 will /not/ actually be part of the cluster, since it is only
  running the gnbd server.)
- Only node2 and node3 will be in the cluster and use gfs.
  (A two-node cluster is a special case for cman, noted in the config below.)
- There's not much point in using clvm in this setup, so it's left out.
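
(If node1 has no spare partition, one way to provide a device to export is
to carve out a logical volume on node1 with plain, non-clustered lvm; this
is only an illustration, not part of the original procedure, and the device
and volume names below are made up.  Non-clustered lvm is fine here because
only node1 touches the device directly:)

node1> pvcreate /dev/sdb
node1> vgcreate gnbdvg /dev/sdb
node1> lvcreate -L 8G -n gfsdisk gnbdvg

(Then use /dev/gnbdvg/gfsdisk wherever /dev/sda1 appears below.)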
- Download the "cluster" source tree.
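
  (At the time this was written the tree was published via anonymous CVS on
  sources.redhat.com.  One possible checkout is sketched below; the
  repository path is an assumption, so adjust it to wherever your
  distribution actually publishes the source:)

  cvs -d :pserver:cvs@sources.redhat.com:/cvs/cluster login
  cvs -d :pserver:cvs@sources.redhat.com:/cvs/cluster checkout cluster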
- Build and install from the cluster source tree.  (The kernel components
  are not required on node1, which will only need the gnbd_serv program.)
cd cluster
./configure --kernel_src=/path/to/kernel
make; make install
- Create /etc/cluster/cluster.conf on node2 with the following contents:
<?xml version="1.0"?>
<cluster name="gamma" config_version="1">
  <cman two_node="1" expected_votes="1">
  </cman>
  <clusternodes>
    <clusternode name="node2">
      <fence>
        <method name="single">
          <device name="gnbd" ipaddr="node2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node3">
      <fence>
        <method name="single">
          <device name="gnbd" ipaddr="node3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="gnbd" agent="fence_gnbd" servers="node1"/>
  </fencedevices>
</cluster>
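
  (The file is only created on node2 above.  If node3 does not pick the
  configuration up through ccsd on its own, copying it by hand -- an extra
  step, not part of the original procedure -- also works:)

  node2> scp /etc/cluster/cluster.conf node3:/etc/cluster/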
- Load kernel modules on node2 and node3:
node2 and node3> modprobe gnbd
node2 and node3> modprobe gfs
node2 and node3> modprobe lock_dlm
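
(A quick sanity check -- not a required step -- to confirm the modules
actually loaded:)

node2 and node3> lsmod | grep -E 'gnbd|gfs|lock_dlm'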
- Run the following commands:
node1> gnbd_serv -n
node1> gnbd_export -c -d /dev/sda1 -e global_disk
node2 and node3> gnbd_import -n -i node1
node2 and node3> ccsd
node2 and node3> cman_tool join
node2 and node3> fence_tool join
node2> gfs_mkfs -p lock_dlm -t gamma:gfs1 -j 2 /dev/gnbd/global_disk
node2 and node3> mount -t gfs /dev/gnbd/global_disk /mnt
- That's it: you now have a gfs file system mounted on node2 and node3.
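
(Two quick checks, and a teardown sequence for when you are done playing.
This is a suggested sequence rather than part of the original text, and the
gnbd "-R" remove-all flags in particular are an assumption, so verify them
against your gnbd tools:)

node2 and node3> cman_tool nodes
node2 and node3> df -h /mnt

node2 and node3> umount /mnt
node2 and node3> fence_tool leave
node2 and node3> cman_tool leave
node2 and node3> gnbd_import -R
node1> gnbd_export -R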
Appendix A
----------
To use manual fencing instead of gnbd fencing, the cluster.conf file
would look like this:
<?xml version="1.0"?>
<cluster name="gamma" config_version="1">
  <cman two_node="1" expected_votes="1">
  </cman>
  <clusternodes>
    <clusternode name="node2">
      <fence>
        <method name="single">
          <device name="manual" ipaddr="node2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node3">
      <fence>
        <method name="single">
          <device name="manual" ipaddr="node3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="manual" agent="fence_manual"/>
  </fencedevices>
</cluster>
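
Note that manual fencing means a human is in the loop: when a node fails,
the surviving node waits until an operator confirms the failed node has
really been reset or powered off and acknowledges it with fence_ack_manual.
The invocation below (for the case where node3 has failed) is a sketch;
check the usage output of your version:

node2> fence_ack_manual -n node3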
FAQ
---
- Why can't node1 use gfs, too?
You might be able to make it work, but we recommend that you not try.
This software was not intended or designed to allow that kind of usage.
- Isn't node1 a single point of failure?  How do I avoid that?
Yes it is. For the time being, there's no way to avoid that, apart from
not using gnbd, of course. Eventually, there will be a way to avoid this
using cluster mirroring.
- More info is available at:
http://sources.redhat.com/cluster/gnbd/gnbd_usage.txt
http://sources.redhat.com/cluster/doc/usage.txt