diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.xml b/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.xml index 329cfd7178..8f709287dd 100644 --- a/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.xml +++ b/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.xml @@ -1,721 +1,721 @@ %BOOK_ENTITIES; ]> Conversion to Active/Active
Requirements The primary requirement for an Active/Active cluster is that the data required for your services is available, simultaneously, on both machines. Pacemaker makes no requirement on how this is achieved; you could use a SAN if you had one available, but since DRBD supports multiple Primaries, we can use that instead. The only hitch is that we need to use a cluster-aware filesystem (and the one we used earlier with DRBD, ext4, is not one of those). Both OCFS2 and GFS2 are supported; here we will use GFS2, which comes with &DISTRO; &DISTRO_VERSION;.
Install a Cluster Filesystem - GFS2 - The first thing to do is install gfs2-utils on each machine. + The first thing to do is install gfs2-utils and gfs2-cluster on each machine. -[root@pcmk-1 ~]# yum install -y gfs2-utils gfs-pcmk +[root@pcmk-1 ~]# yum install -y gfs2-utils gfs2-cluster gfs-pcmk Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package gfs-pcmk.x86_64 0:3.0.5-2.fc12 set to be updated --> Processing Dependency: libSaCkpt.so.3(OPENAIS_CKPT_B.01.01)(64bit) for package: gfs-pcmk-3.0.5-2.fc12.x86_64 --> Processing Dependency: dlm-pcmk for package: gfs-pcmk-3.0.5-2.fc12.x86_64 --> Processing Dependency: libccs.so.3()(64bit) for package: gfs-pcmk-3.0.5-2.fc12.x86_64 --> Processing Dependency: libdlmcontrol.so.3()(64bit) for package: gfs-pcmk-3.0.5-2.fc12.x86_64 --> Processing Dependency: liblogthread.so.3()(64bit) for package: gfs-pcmk-3.0.5-2.fc12.x86_64 --> Processing Dependency: libSaCkpt.so.3()(64bit) for package: gfs-pcmk-3.0.5-2.fc12.x86_64 ---> Package gfs2-utils.x86_64 0:3.0.5-2.fc12 set to be updated --> Running transaction check ---> Package clusterlib.x86_64 0:3.0.5-2.fc12 set to be updated ---> Package dlm-pcmk.x86_64 0:3.0.5-2.fc12 set to be updated ---> Package openaislib.x86_64 0:1.1.0-1.fc12 set to be updated --> Finished Dependency Resolution Dependencies Resolved ===========================================================================================  Package                Arch               Version                   Repository        Size =========================================================================================== Installing:  gfs-pcmk               x86_64             3.0.5-2.fc12              custom           101 k  gfs2-utils             x86_64             3.0.5-2.fc12              custom           208 k Installing for dependencies:  clusterlib             x86_64             3.0.5-2.fc12              custom            65 k  dlm-pcmk               x86_64             3.0.5-2.fc12              custom            93 k  openaislib             x86_64             1.1.0-1.fc12              fedora            76 k Transaction Summary =========================================================================================== Install       5 Package(s) Upgrade       0 Package(s) Total download size: 541 k Downloading Packages: (1/5): clusterlib-3.0.5-2.fc12.x86_64.rpm                                |  65 kB     00:00 (2/5): dlm-pcmk-3.0.5-2.fc12.x86_64.rpm                                  |  93 kB     00:00 (3/5): gfs-pcmk-3.0.5-2.fc12.x86_64.rpm                                  | 101 kB     00:00 (4/5): gfs2-utils-3.0.5-2.fc12.x86_64.rpm                                | 208 kB     00:00 (5/5): openaislib-1.1.0-1.fc12.x86_64.rpm                                |  76 kB     00:00 ------------------------------------------------------------------------------------------- Total                                                           992 kB/s | 541 kB     00:00 Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Test Succeeded Running Transaction   Installing     : clusterlib-3.0.5-2.fc12.x86_64                                       1/5   Installing     : openaislib-1.1.0-1.fc12.x86_64                                       2/5   Installing     : dlm-pcmk-3.0.5-2.fc12.x86_64                                         3/5   Installing     : gfs-pcmk-3.0.5-2.fc12.x86_64                                         4/5   Installing     : gfs2-utils-3.0.5-2.fc12.x86_64                                
       5/5 Installed:   gfs-pcmk.x86_64 0:3.0.5-2.fc12                    gfs2-utils.x86_64 0:3.0.5-2.fc12 Dependency Installed:   clusterlib.x86_64 0:3.0.5-2.fc12   dlm-pcmk.x86_64 0:3.0.5-2.fc12   openaislib.x86_64 0:1.1.0-1.fc12   Complete! [root@pcmk-1 x86_64]# If this step fails, it is likely that your version/distribution does not ship the "Pacemaker" versions of dlm_controld and/or gfs_controld. Normally these files would be called dlm_controld.pcmk and gfs_controld.pcmk and live in the /usr/sbin directory. If you cannot locate an installation source for these files, you will need to install a package called cman and reconfigure Corosync to use it as outlined in . - When using CMAN, you can skip where dlm-clone and gfs-clone are created, and proceed directly to . + When using CMAN, you can skip where dlm-clone and gfs-clone are created, and proceed directly to after ensuring that gfs2-utils and gfs2-cluster were installed.
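Before proceeding, it may be worth confirming that the Pacemaker-aware control daemons mentioned above are actually present on each node. A quick check, assuming the /usr/sbin paths given above, would be:
[root@pcmk-1 ~]# ls -l /usr/sbin/dlm_controld.pcmk /usr/sbin/gfs_controld.pcmk
If either file is missing, use the CMAN approach referenced above instead.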
Setup Pacemaker-GFS2 Integration GFS2 needs two services to be running. The first is the user-space interface to the kernel’s distributed lock manager (DLM). The DLM is used to co-ordinate which node(s) can access a given file (and when), and it integrates with Pacemaker to obtain node membership information (the list of nodes the cluster considers to be available) and fencing capabilities. The second service is GFS2’s own control daemon, which also integrates with Pacemaker to obtain node membership data.
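Both daemons are managed through the same ocf:pacemaker:controld resource agent used in the next two sections. If you want to check that the agent is installed and see which parameters it accepts, the crm shell can display its metadata (output omitted here; it varies by version):
[root@pcmk-1 ~]# crm ra info ocf:pacemaker:controld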
Add the DLM service The DLM control daemon needs to run on all active cluster nodes, so we will use the shells interactive mode to create a cloned resource. [root@pcmk-1 ~]# crm crm(live)# cib new stack-glue INFO: stack-glue shadow CIB created crm(stack-glue)# configure primitive dlm ocf:pacemaker:controld op monitor interval=120s crm(stack-glue)# configure clone dlm-clone dlm meta interleave=true crm(stack-glue)# configure show xml crm(stack-glue)# configure show node pcmk-1 node pcmk-2 primitive WebData ocf:linbit:drbd \         params drbd_resource="wwwdata" \         op monitor interval="60s" primitive WebFS ocf:heartbeat:Filesystem \         params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="ext4" primitive WebSite ocf:heartbeat:apache \         params configfile="/etc/httpd/conf/httpd.conf" \         op monitor interval="1min" primitive ClusterIP ocf:heartbeat:IPaddr2 \         params ip="192.168.122.101" cidr_netmask="32" \         op monitor interval="30s" primitive dlm ocf:pacemaker:controld \ op monitor interval="120s" ms WebDataClone WebData \         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" clone dlm-clone dlm \ meta interleave="true" location prefer-pcmk-1 WebSite 50: pcmk-1 colocation WebSite-with-WebFS inf: WebSite WebFS colocation fs_on_drbd inf: WebFS WebDataClone:Master colocation website-with-ip inf: WebSite ClusterIP order WebFS-after-WebData inf: WebDataClone:promote WebFS:start order WebSite-after-WebFS inf: WebFS WebSite order apache-after-ip inf: ClusterIP WebSite property $id="cib-bootstrap-options" \         dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \         cluster-infrastructure="openais" \         expected-quorum-votes=”2” \         stonith-enabled="false" \         no-quorum-policy="ignore" rsc_defaults $id="rsc-options" \         resource-stickiness=”100” TODO: Explain the meaning of the interleave option Review the configuration before uploading it to the cluster, quitting the shell and watching the cluster’s response crm(stack-glue)# cib commit stack-glue INFO: commited 'stack-glue' shadow CIB to the cluster crm(stack-glue)# quit bye [root@pcmk-1 ~]# crm_mon ============ Last updated: Thu Sep  3 20:49:54 2009 Stack: openais Current DC: pcmk-2 - partition with quorum Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f 2 Nodes configured, 2 expected votes 5 Resources configured. ============ Online: [ pcmk-1 pcmk-2 ] WebSite (ocf::heartbeat:apache):        Started pcmk-2 Master/Slave Set: WebDataClone         Masters: [ pcmk-1 ]         Slaves: [ pcmk-2 ] ClusterIP        (ocf::heartbeat:IPaddr):        Started pcmk-2 Clone Set: dlm-clone Started: [ pcmk-2 pcmk-1 ] WebFS   (ocf::heartbeat:Filesystem):    Started pcmk-2
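Regarding the interleave option flagged in the TODO above: roughly speaking, interleave=true means that ordering and colocation constraints between this clone and another clone are evaluated per node, so a dependent instance on a given node only waits for the other clone’s instance on that same node rather than for every instance across the cluster. To double-check that the clone really is running on both nodes, something like the following should work (the crm_mon output above already shows the same information):
[root@pcmk-1 ~]# crm_resource --resource dlm-clone --locate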
Add the GFS2 service Once the DLM is active, we can add the GFS2 control daemon. Use the crm shell to create the gfs-control cluster resource: [root@pcmk-1 ~]# crm crm(live)# cib new gfs-glue --force INFO: gfs-glue shadow CIB created crm(gfs-glue)# configure primitive gfs-control ocf:pacemaker:controld params daemon=gfs_controld.pcmk args="-g 0" op monitor interval=120s crm(gfs-glue)# configure clone gfs-clone gfs-control meta interleave=true Now ensure Pacemaker only starts the gfs-control service on nodes that also have a copy of the dlm service (created above) already running crm(gfs-glue)# configure colocation gfs-with-dlm INFINITY: gfs-clone dlm-clone crm(gfs-glue)# configure order start-gfs-after-dlm mandatory: dlm-clone gfs-clone Review the configuration before uploading it to the cluster, quitting the shell and watching the cluster’s response crm(gfs-glue)# configure show node pcmk-1 node pcmk-2 primitive WebData ocf:linbit:drbd \         params drbd_resource="wwwdata" \         op monitor interval="60s" primitive WebFS ocf:heartbeat:Filesystem \         params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="ext4" primitive WebSite ocf:heartbeat:apache \         params configfile="/etc/httpd/conf/httpd.conf" \         op monitor interval="1min" primitive ClusterIP ocf:heartbeat:IPaddr2 \         params ip="192.168.122.101" cidr_netmask="32" \         op monitor interval="30s" primitive dlm ocf:pacemaker:controld \         op monitor interval="120s" primitive gfs-control ocf:pacemaker:controld \ params daemon=”gfs_controld.pcmk” args=”-g 0” \ op monitor interval="120s" ms WebDataClone WebData \         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" clone dlm-clone dlm \         meta interleave="true" clone gfs-clone gfs-control \ meta interleave="true" location prefer-pcmk-1 WebSite 50: pcmk-1 colocation WebSite-with-WebFS inf: WebSite WebFS colocation fs_on_drbd inf: WebFS WebDataClone:Master colocation gfs-with-dlm inf: gfs-clone dlm-clone colocation website-with-ip inf: WebSite ClusterIP order WebFS-after-WebData inf: WebDataClone:promote WebFS:start order WebSite-after-WebFS inf: WebFS WebSite order apache-after-ip inf: ClusterIP WebSite order start-gfs-after-dlm inf: dlm-clone gfs-clone property $id="cib-bootstrap-options" \         dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \         cluster-infrastructure="openais" \         expected-quorum-votes=”2” \         stonith-enabled="false" \         no-quorum-policy="ignore" rsc_defaults $id="rsc-options" \         resource-stickiness=”100” crm(gfs-glue)# cib commit gfs-glue INFO: commited 'gfs-glue' shadow CIB to the cluster crm(gfs-glue)# quit bye [root@pcmk-1 ~]# crm_mon ============ Last updated: Thu Sep  3 20:49:54 2009 Stack: openais Current DC: pcmk-2 - partition with quorum Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f 2 Nodes configured, 2 expected votes 6 Resources configured. ============ Online: [ pcmk-1 pcmk-2 ] WebSite (ocf::heartbeat:apache):        Started pcmk-2 Master/Slave Set: WebDataClone         Masters: [ pcmk-1 ]         Slaves: [ pcmk-2 ] ClusterIP        (ocf::heartbeat:IPaddr):        Started pcmk-2 Clone Set: dlm-clone         Started: [ pcmk-2 pcmk-1 ] Clone Set: gfs-clone Started: [ pcmk-2 pcmk-1 ] WebFS   (ocf::heartbeat:Filesystem):    Started pcmk-1
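If you prefer to verify at the process level that both control daemons are now running, a simple illustrative check on each node is:
[root@pcmk-1 ~]# pgrep -l controld
You should see both the DLM and GFS2 control daemons listed (the names may be truncated).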
Create a GFS2 Filesystem
Preparation Before we do anything to the existing partition, we need to make sure it is unmounted. We do this by telling the cluster to stop the WebFS resource. This will ensure that other resources (in our case, Apache) using WebFS are not only stopped, but stopped in the correct order. [root@pcmk-1 ~]# crm_resource --resource WebFS --set-parameter target-role --meta --parameter-value Stopped [root@pcmk-1 ~]# crm_mon ============ Last updated: Thu Sep  3 15:18:06 2009 Stack: openais Current DC: pcmk-1 - partition with quorum Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f 2 Nodes configured, 2 expected votes 6 Resources configured. ============ Online: [ pcmk-1 pcmk-2 ] Master/Slave Set: WebDataClone         Masters: [ pcmk-1 ]         Slaves: [ pcmk-2 ] ClusterIP        (ocf::heartbeat:IPaddr):        Started pcmk-1 Clone Set: dlm-clone         Started: [ pcmk-2 pcmk-1 ] Clone Set: gfs-clone         Started: [ pcmk-2 pcmk-1 ] Note that both Apache and WebFS have been stopped.
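An equivalent, and arguably more readable, way to achieve the same thing is the crm shell’s resource stop command, which sets the same target-role meta attribute behind the scenes:
[root@pcmk-1 ~]# crm resource stop WebFS
Either form can later be reversed with crm resource start WebFS (or by setting target-role back to Started).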
Create and Populate a GFS2 Partition Now that the cluster stack and integration pieces are running smoothly, we can create a GFS2 partition. This will erase all previous content stored on the DRBD device. Ensure you have a copy of any important data. We need to specify a number of additional parameters when creating a GFS2 partition. First we must use the -p option to specify that we want to use the kernel’s DLM. Next we use -j to indicate that it should reserve enough space for two journals (one per node accessing the filesystem). Lastly, we use -t to specify the lock table name. The format for this field is clustername:fsname. For the fsname, we just need to pick something unique and descriptive, and since we haven’t specified a clustername yet, we will use the default (pcmk). To specify an alternate name for the cluster, locate the service section containing “name: pacemaker” in corosync.conf and insert the following line anywhere inside the block: clustername: myname Do this on each node in the cluster and be sure to restart them before continuing. [root@pcmk-1 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t pcmk:web /dev/drbd1 This will destroy any data on /dev/drbd1. It appears to contain: data Are you sure you want to proceed? [y/n] y Device:                    /dev/drbd1 Blocksize:                 4096 Device Size                1.00 GB (131072 blocks) Filesystem Size:           1.00 GB (131070 blocks) Journals:                  2 Resource Groups:           2 Locking Protocol:          "lock_dlm" Lock Table:                "pcmk:web" UUID:                      6B776F46-177B-BAF8-2C2B-292C0E078613 [root@pcmk-1 ~]# Then (re)populate the new filesystem with data (web pages). For now we’ll create another variation on our home page. [root@pcmk-1 ~]# mount /dev/drbd1 /mnt/ [root@pcmk-1 ~]# cat <<-END >/mnt/index.html <html> <body>My Test Site - GFS2</body> </html> END [root@pcmk-1 ~]# umount /dev/drbd1 [root@pcmk-1 ~]# drbdadm verify wwwdata [root@pcmk-1 ~]#
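As an aside on the -j option: the journal count limits how many nodes can mount the filesystem at once. If you later grow the cluster to a third node, additional journals can be added to the mounted filesystem; assuming it is mounted at /var/www/html at that point, the command would look roughly like:
[root@pcmk-1 ~]# gfs2_jadd -j 1 /var/www/html
This is only a sketch for future reference; nothing further needs to be done for the two-node setup used here.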
Reconfigure the Cluster for GFS2 [root@pcmk-1 ~]# crm crm(live)# cib new GFS2 INFO: GFS2 shadow CIB created crm(GFS2)# configure delete WebFS crm(GFS2)# configure primitive WebFS ocf:heartbeat:Filesystem params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype=”gfs2” Now that we’ve recreated the resource, we also need to recreate all the constraints that used it. This is because the shell will automatically remove any constraints that referenced WebFS. crm(GFS2)# configure colocation WebSite-with-WebFS inf: WebSite WebFS crm(GFS2)# configure colocation fs_on_drbd inf: WebFS WebDataClone:Master crm(GFS2)# configure order WebFS-after-WebData inf: WebDataClone:promote WebFS:start crm(GFS2)# configure order WebSite-after-WebFS inf: WebFS WebSite crm(GFS2)# configure colocation WebFS-with-gfs-control INFINITY: WebFS gfs-clone crm(GFS2)# configure order start-WebFS-after-gfs-control mandatory: gfs-clone WebFS crm(GFS2)# configure show node pcmk-1 node pcmk-2 primitive WebData ocf:linbit:drbd \         params drbd_resource="wwwdata" \         op monitor interval="60s" primitive WebFS ocf:heartbeat:Filesystem \ params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype=”gfs2” primitive WebSite ocf:heartbeat:apache \         params configfile="/etc/httpd/conf/httpd.conf" \         op monitor interval="1min" primitive ClusterIP ocf:heartbeat:IPaddr2 \         params ip="192.168.122.101" cidr_netmask="32" \         op monitor interval="30s" primitive dlm ocf:pacemaker:controld \         op monitor interval="120s" primitive gfs-control ocf:pacemaker:controld \    params daemon=”gfs_controld.pcmk” args=”-g 0” \         op monitor interval="120s" ms WebDataClone WebData \         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" clone dlm-clone dlm \         meta interleave="true" clone gfs-clone gfs-control \         meta interleave="true" colocation WebFS-with-gfs-control inf: WebFS gfs-clone colocation WebSite-with-WebFS inf: WebSite WebFS colocation fs_on_drbd inf: WebFS WebDataClone:Master colocation gfs-with-dlm inf: gfs-clone dlm-clone colocation website-with-ip inf: WebSite ClusterIP order WebFS-after-WebData inf: WebDataClone:promote WebFS:start order WebSite-after-WebFS inf: WebFS WebSite order apache-after-ip inf: ClusterIP WebSite order start-WebFS-after-gfs-control inf: gfs-clone WebFS order start-gfs-after-dlm inf: dlm-clone gfs-clone property $id="cib-bootstrap-options" \         dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \         cluster-infrastructure="openais" \         expected-quorum-votes=”2” \         stonith-enabled="false" \         no-quorum-policy="ignore" rsc_defaults $id="rsc-options" \         resource-stickiness=”100” Review the configuration before uploading it to the cluster, quitting the shell and watching the cluster’s response crm(GFS2)# cib commit GFS2 INFO: commited 'GFS2' shadow CIB to the cluster crm(GFS2)# quit bye [root@pcmk-1 ~]# crm_mon ============ Last updated: Thu Sep  3 20:49:54 2009 Stack: openais Current DC: pcmk-2 - partition with quorum Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f 2 Nodes configured, 2 expected votes 6 Resources configured. 
============ Online: [ pcmk-1 pcmk-2 ] WebSite (ocf::heartbeat:apache):        Started pcmk-2 Master/Slave Set: WebDataClone         Masters: [ pcmk-1 ]         Slaves: [ pcmk-2 ] ClusterIP        (ocf::heartbeat:IPaddr):        Started pcmk-2 Clone Set: dlm-clone         Started: [ pcmk-2 pcmk-1 ] Clone Set: gfs-clone         Started: [ pcmk-2 pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-1
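To convince yourself that the filesystem really is now mounted as GFS2 (rather than ext4), a quick check such as the following can be run on the node where WebFS is started; the exact device path may differ on your system:
[root@pcmk-1 ~]# grep gfs2 /proc/mounts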
Reconfigure Pacemaker for Active/Active Almost everything is in place. Recent versions of DRBD are capable of operating in Primary/Primary mode and the filesystem we’re using is cluster aware. All we need to do now is reconfigure the cluster to take advantage of this. This will involve a number of changes, so we’ll again use interactive mode. [root@pcmk-1 ~]# crm [root@pcmk-1 ~]# cib new active There’s no point making the services active on both locations if we can’t reach them, so lets first clone the IP address. Cloned IPaddr2 resources use an iptables rule to ensure that each request only processed by one of the two clone instances. The additional meta options tell the cluster how many instances of the clone we want (one “request bucket” for each node) and that if all other nodes fail, then the remaining node should hold all of them. Otherwise the requests would be simply discarded. [root@pcmk-1 ~]# configure clone WebIP ClusterIP  \         meta globally-unique=”true” clone-max=”2” clone-node-max=”2” Now we must tell the ClusterIP how to decide which requests are processed by which hosts. To do this we must specify the clusterip_hash parameter. Open the ClusterIP resource [root@pcmk-1 ~]# configure edit  ClusterIP And add the following to the params line clusterip_hash="sourceip" So that the complete definition looks like: primitive ClusterIP ocf:heartbeat:IPaddr2 \         params ip="192.168.122.101" cidr_netmask="32" clusterip_hash="sourceip" \         op monitor interval="30s" Here is the full transcript [root@pcmk-1 ~]# crm crm(live)# cib new active INFO: active shadow CIB created crm(active)# configure clone WebIP ClusterIP  \         meta globally-unique=”true” clone-max=”2” clone-node-max=”2” crm(active)# configure show node pcmk-1 node pcmk-2 primitive WebData ocf:linbit:drbd \         params drbd_resource="wwwdata" \         op monitor interval="60s" primitive WebFS ocf:heartbeat:Filesystem \         params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype=”gfs2” primitive WebSite ocf:heartbeat:apache \         params configfile="/etc/httpd/conf/httpd.conf" \         op monitor interval="1min" primitive ClusterIP ocf:heartbeat:IPaddr2 \         params ip=”192.168.122.101” cidr_netmask=”32” clusterip_hash=”sourceip” \         op monitor interval="30s" primitive dlm ocf:pacemaker:controld \         op monitor interval="120s" primitive gfs-control ocf:pacemaker:controld \    params daemon=”gfs_controld.pcmk” args=”-g 0” \         op monitor interval="120s" ms WebDataClone WebData \         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" clone WebIP ClusterIP \ meta globally-unique=”true” clone-max=”2” clone-node-max=”2” clone dlm-clone dlm \         meta interleave="true" clone gfs-clone gfs-control \         meta interleave="true" colocation WebFS-with-gfs-control inf: WebFS gfs-clone colocation WebSite-with-WebFS inf: WebSite WebFS colocation fs_on_drbd inf: WebFS WebDataClone:Master colocation gfs-with-dlm inf: gfs-clone dlm-clone colocation website-with-ip inf: WebSite WebIP order WebFS-after-WebData inf: WebDataClone:promote WebFS:start order WebSite-after-WebFS inf: WebFS WebSite order apache-after-ip inf: WebIP WebSite order start-WebFS-after-gfs-control inf: gfs-clone WebFS order start-gfs-after-dlm inf: dlm-clone gfs-clone property $id="cib-bootstrap-options" \         dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \         cluster-infrastructure="openais" \         expected-quorum-votes=”2” \         
stonith-enabled="false" \         no-quorum-policy="ignore" rsc_defaults $id="rsc-options" \         resource-stickiness=”100” Notice how any constraints that referenced ClusterIP have been updated to use WebIP instead. This is an additional benefit of using the crm shell. Next we need to convert the filesystem and Apache resources into clones. Again, the shell will automatically update any relevant constraints. crm(active)# configure clone WebFSClone WebFS crm(active)# configure clone WebSiteClone WebSite The last step is to tell the cluster that it is now allowed to promote both instances to be Primary (aka. Master). crm(active)# configure edit WebDataClone Change master-max to 2 crm(active)# configure show node pcmk-1 node pcmk-2 primitive WebData ocf:linbit:drbd \         params drbd_resource="wwwdata" \         op monitor interval="60s" primitive WebFS ocf:heartbeat:Filesystem \         params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype=”gfs2” primitive WebSite ocf:heartbeat:apache \         params configfile="/etc/httpd/conf/httpd.conf" \         op monitor interval="1min" primitive ClusterIP ocf:heartbeat:IPaddr2 \         params ip=”192.168.122.101” cidr_netmask=”32” clusterip_hash=”sourceip” \         op monitor interval="30s" primitive dlm ocf:pacemaker:controld \         op monitor interval="120s" primitive gfs-control ocf:pacemaker:controld \    params daemon=”gfs_controld.pcmk” args=”-g 0” \         op monitor interval="120s" ms WebDataClone WebData \         meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" clone WebFSClone WebFS clone WebIP ClusterIP  \         meta globally-unique=”true” clone-max=”2” clone-node-max=”2” clone WebSiteClone WebSite clone dlm-clone dlm \         meta interleave="true" clone gfs-clone gfs-control \         meta interleave="true" colocation WebFS-with-gfs-control inf: WebFSClone gfs-clone colocation WebSite-with-WebFS inf: WebSiteClone WebFSClone colocation fs_on_drbd inf: WebFSClone WebDataClone:Master colocation gfs-with-dlm inf: gfs-clone dlm-clone colocation website-with-ip inf: WebSiteClone WebIP order WebFS-after-WebData inf: WebDataClone:promote WebFSClone:start order WebSite-after-WebFS inf: WebFSClone WebSiteClone order apache-after-ip inf: WebIP WebSiteClone order start-WebFS-after-gfs-control inf: gfs-clone WebFSClone order start-gfs-after-dlm inf: dlm-clone gfs-clone property $id="cib-bootstrap-options" \         dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \         cluster-infrastructure="openais" \         expected-quorum-votes=”2” \         stonith-enabled="false" \         no-quorum-policy="ignore" rsc_defaults $id="rsc-options" \         resource-stickiness=”100” Review the configuration before uploading it to the cluster, quitting the shell and watching the cluster’s response crm(active)# cib commit active INFO: commited 'active' shadow CIB to the cluster crm(active)# quit bye [root@pcmk-1 ~]# crm_mon ============ Last updated: Thu Sep  3 21:37:27 2009 Stack: openais Current DC: pcmk-2 - partition with quorum Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f 2 Nodes configured, 2 expected votes 6 Resources configured. 
============ Online: [ pcmk-1 pcmk-2 ] Master/Slave Set: WebDataClone         Masters: [ pcmk-1 pcmk-2 ] Clone Set: dlm-clone         Started: [ pcmk-2 pcmk-1 ] Clone Set: gfs-clone         Started: [ pcmk-2 pcmk-1 ] Clone Set: WebIP Started: [ pcmk-1 pcmk-2 ] Clone Set: WebFSClone Started: [ pcmk-1 pcmk-2 ] Clone Set: WebSiteClone Started: [ pcmk-1 pcmk-2 ]
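At this point both nodes should be serving the same content. A rough way to confirm this is to fetch the page through the cluster address a few times from a machine outside the cluster (test-client below is a hypothetical host on the 192.168.122.0/24 network):
[root@test-client ~]# for i in 1 2 3 4; do curl -s http://192.168.122.101/; done
With clusterip_hash set to sourceip, requests from a single client will normally all land on the same node, so the exact distribution you observe depends on how many clients you test from; this is only an illustrative check.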
Testing Recovery TODO: Put one node into standby to demonstrate failover
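A sketch of such a test, reusing the standby/online commands demonstrated in the DRBD chapter (outline only; output not shown):
[root@pcmk-1 ~]# crm node standby
[root@pcmk-1 ~]# crm_mon
[root@pcmk-1 ~]# crm node online
While pcmk-1 is in standby, crm_mon should show all of the clones running only on pcmk-2, and the site should remain reachable via the cluster IP.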
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.xml b/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.xml index 03d974b410..5fc6805de8 100644 --- a/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.xml +++ b/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.xml @@ -1,528 +1,528 @@ %BOOK_ENTITIES; ]> Replicated Storage with DRBD Even if you’re serving up static websites, having to manually synchronize the contents of that website to all the machines in the cluster is not ideal. For dynamic websites, such as a wiki, it’s not even an option. Not everyone can afford network-attached storage, but somehow the data needs to be kept in sync. Enter DRBD, which can be thought of as network-based RAID-1. See http://www.drbd.org/ for more details.
Install the DRBD Packages Since its inclusion in the upstream 2.6.33 kernel, everything needed to use DRBD ships with &DISTRO; &DISTRO_VERSION;. All you need to do is install it: -[root@pcmk-1 ~]# yum install -y drbd-pacemaker +[root@pcmk-1 ~]# yum install -y drbd-pacemaker drbd-udev Loaded plugins: presto, refresh-packagekit Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package drbd-pacemaker.x86_64 0:8.3.7-2.fc13 set to be updated --> Processing Dependency: drbd-utils = 8.3.7-2.fc13 for package: drbd-pacemaker-8.3.7-2.fc13.x86_64 --> Running transaction check ---> Package drbd-utils.x86_64 0:8.3.7-2.fc13 set to be updated --> Finished Dependency Resolution Dependencies Resolved ================================================================================= Package Arch Version Repository Size ================================================================================= Installing: drbd-pacemaker x86_64 8.3.7-2.fc13 fedora 19 k Installing for dependencies: drbd-utils x86_64 8.3.7-2.fc13 fedora 165 k Transaction Summary ================================================================================= Install 2 Package(s) Upgrade 0 Package(s) Total download size: 184 k Installed size: 427 k Downloading Packages: Setting up and reading Presto delta metadata fedora/prestodelta | 1.7 kB 00:00 Processing delta metadata Package(s) data still to download: 184 k (1/2): drbd-pacemaker-8.3.7-2.fc13.x86_64.rpm | 19 kB 00:01 (2/2): drbd-utils-8.3.7-2.fc13.x86_64.rpm | 165 kB 00:02 --------------------------------------------------------------------------------- Total 45 kB/s | 184 kB 00:04 Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Installing : drbd-utils-8.3.7-2.fc13.x86_64 1/2 Installing : drbd-pacemaker-8.3.7-2.fc13.x86_64 2/2 Installed: drbd-pacemaker.x86_64 0:8.3.7-2.fc13 Dependency Installed: drbd-utils.x86_64 0:8.3.7-2.fc13 Complete! [root@pcmk-1 ~]#
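Since the text above relies on the DRBD module shipping with the distribution kernel, it may be worth confirming that the module is actually available before going any further:
[root@pcmk-1 ~]# modinfo drbd | grep -E '^(filename|version)'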
Configure DRBD Before we configure DRBD, we need to set aside some disk for it to use.
Create a Partition for DRBD If you have more than 1Gb free, feel free to use it. For this guide, however, 1Gb is plenty of space for a single html file and sufficient for later holding the GFS2 metadata. [root@pcmk-1 ~]# lvcreate -n drbd-demo -L 1G VolGroup   Logical volume "drbd-demo" created [root@pcmk-1 ~]# lvs   LV        VG       Attr   LSize   Origin Snap%  Move Log Copy%  Convert   drbd-demo VolGroup -wi-a- 1.00G                                         lv_root   VolGroup -wi-ao   7.30G                                         lv_swap   VolGroup -wi-ao 500.00M Repeat this on the second node, making sure to use the same size partition. [root@pcmk-2 ~]# lvs   LV      VG       Attr   LSize   Origin Snap%  Move Log Copy%  Convert   lv_root VolGroup -wi-ao   7.30G                                         lv_swap VolGroup -wi-ao 500.00M                                       [root@pcmk-2 ~]# lvcreate -n drbd-demo -L 1G VolGroup   Logical volume "drbd-demo" created [root@pcmk-2 ~]# lvs   LV        VG       Attr   LSize   Origin Snap%  Move Log Copy%  Convert   drbd-demo VolGroup -wi-a- 1.00G                                         lv_root   VolGroup -wi-ao   7.30G                                         lv_swap   VolGroup -wi-ao 500.00M
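If lvcreate complains about insufficient free space, you can check what the volume group has available (VolGroup is the volume group name used in this guide; substitute your own):
[root@pcmk-1 ~]# vgs VolGroup
The VFree column should show at least 1G before the logical volume is created.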
Write the DRBD Config There is no series of commands for building a DRBD configuration, so simply copy the configuration below to /etc/drbd.conf. Detailed information on the directives used in this configuration (and other alternatives) is available from http://www.drbd.org/users-guide/ch-configure.html Be sure to use the names and addresses of your nodes if they differ from the ones used in this guide. global {   usage-count yes; } common {   protocol C; } resource wwwdata {   meta-disk internal;   device    /dev/drbd1;   syncer {     verify-alg sha1;   }   net {     allow-two-primaries;   }   on pcmk-1 {     disk      /dev/mapper/VolGroup-drbd--demo;     address   192.168.122.101:7789;   }   on pcmk-2 {     disk      /dev/mapper/VolGroup-drbd--demo;     address   192.168.122.102:7789;   } } The allow-two-primaries option tells DRBD to permit both nodes to be promoted to Primary at the same time. It is harmless in the Primary/Secondary setup built in this chapter and becomes necessary later, when the cluster is converted to Active/Active on top of GFS2.
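Once the file is in place on both nodes, you can ask drbdadm to parse it back. If the configuration contains syntax errors this will complain; otherwise it prints the resource as DRBD understands it:
[root@pcmk-1 ~]# drbdadm dump wwwdata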
Initialize and Load DRBD With the configuration in place, we can now perform the DRBD initialization [root@pcmk-1 ~]# drbdadm create-md wwwdata md_offset 12578816 al_offset 12546048 bm_offset 12541952 Found some data  ==> This might destroy existing data! <== Do you want to proceed? [need to type 'yes' to confirm] yes Writing meta data... initializing activity log NOT initialized bitmap New drbd meta data block successfully created. success Now load the DRBD kernel module and confirm that everything is sane [root@pcmk-1 ~]# modprobe drbd [root@pcmk-1 ~]# drbdadm up wwwdata [root@pcmk-1 ~]# cat /proc/drbd version: 8.3.6 (api:88/proto:86-90) GIT-hash: f3606c47cc6fcf6b3f086e425cb34af8b7a81bbf build by root@pcmk-1, 2009-12-08 11:22:57 1: cs:WFConnection ro:Secondary/Unknown ds:Inconsistent/DUnknown C r----     ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:12248 [root@pcmk-1 ~]# Repeat on the second node drbdadm --force create-md wwwdata modprobe drbd drbdadm up wwwdata cat /proc/drbd [root@pcmk-2 ~]# drbdadm --force create-md wwwdata Writing meta data... initializing activity log NOT initialized bitmap New drbd meta data block successfully created. success [root@pcmk-2 ~]# modprobe drbd WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/. [root@pcmk-2 ~]# drbdadm up wwwdata [root@pcmk-2 ~]# cat /proc/drbd version: 8.3.6 (api:88/proto:86-90) GIT-hash: f3606c47cc6fcf6b3f086e425cb34af8b7a81bbf build by root@pcmk-1, 2009-12-08 11:22:57 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----     ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:12248 Now we need to tell DRBD which set of data to use. Since both sides contain garbage, we can run the following on pcmk-1: [root@pcmk-1 ~]# drbdadm -- --overwrite-data-of-peer primary wwwdata [root@pcmk-1 ~]# cat /proc/drbd version: 8.3.6 (api:88/proto:86-90) GIT-hash: f3606c47cc6fcf6b3f086e425cb34af8b7a81bbf build by root@pcmk-1, 2009-12-08 11:22:57  1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----     ns:2184 nr:0 dw:0 dr:2472 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:10064         [=====>..............] sync'ed: 33.4% (10064/12248)K         finish: 0:00:37 speed: 240 (240) K/sec [root@pcmk-1 ~]# cat /proc/drbd version: 8.3.6 (api:88/proto:86-90) GIT-hash: f3606c47cc6fcf6b3f086e425cb34af8b7a81bbf build by root@pcmk-1, 2009-12-08 11:22:57  1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----     ns:12248 nr:0 dw:0 dr:12536 al:0 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0 pcmk-1 is now in the Primary state which allows it to be written to. Which means its a good point at which to create a filesystem and populate it with some data to serve up via our WebSite resource.
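On larger devices the initial synchronization shown above can take considerably longer. To follow its progress, watching /proc/drbd or asking drbdadm for the connection and disk states works well (illustrative commands; either node can be used):
[root@pcmk-1 ~]# watch -n1 cat /proc/drbd
[root@pcmk-1 ~]# drbdadm cstate wwwdata
[root@pcmk-1 ~]# drbdadm dstate wwwdata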
Populate DRBD with Data [root@pcmk-1 ~]# mkfs.ext4 /dev/drbd1 mke2fs 1.41.4 (27-Jan-2009) Filesystem label= OS type: Linux Block size=1024 (log=0) Fragment size=1024 (log=0) 3072 inodes, 12248 blocks 612 blocks (5.00%) reserved for the super user First data block=1 Maximum filesystem blocks=12582912 2 block groups 8192 blocks per group, 8192 fragments per group 1536 inodes per group Superblock backups stored on blocks:         8193 Writing inode tables: done                             Creating journal (1024 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 26 mounts or 180 days, whichever comes first.  Use tune2fs -c or -i to override. Now mount the newly created filesystem so we can create our index file mount /dev/drbd1 /mnt/ cat <<-END >/mnt/index.html <html> <body>My Test Site - drbd</body> </html> END umount /dev/drbd1 [root@pcmk-1 ~]# mount /dev/drbd1 /mnt/ [root@pcmk-1 ~]# cat <<-END >/mnt/index.html > <html> > <body>My Test Site - drbd</body> > </html> > END [root@pcmk-1 ~]# umount /dev/drbd1
Configure the Cluster for DRBD One handy feature of the crm shell is that you can use it in interactive mode to make several changes atomically. First we launch the shell. The prompt will change to indicate you’re in interactive mode. [root@pcmk-1 ~]# crm cib crm(live)# Next we must create a working copy or the current configuration. This is where all our changes will go. The cluster will not see any of them until we say its ok. Notice again how the prompt changes, this time to indicate that we’re no longer looking at the live cluster. cib crm(live)# cib new drbd INFO: drbd shadow CIB created crm(drbd)# Now we can create our DRBD clone and display the revised configuration. crm(drbd)# configure primitive WebData ocf:linbit:drbd params drbd_resource=wwwdata \         op monitor interval=60s crm(drbd)# configure ms WebDataClone WebData meta master-max=1 master-node-max=1 \         clone-max=2 clone-node-max=1 notify=true crm(drbd)# configure show node pcmk-1 node pcmk-2 primitive WebData ocf:linbit:drbd \ params drbd_resource="wwwdata" \ op monitor interval="60s" primitive WebSite ocf:heartbeat:apache \         params configfile="/etc/httpd/conf/httpd.conf" \         op monitor interval="1min" primitive ClusterIP ocf:heartbeat:IPaddr2 \         params ip="192.168.122.101" cidr_netmask="32" \         op monitor interval="30s" ms WebDataClone WebData \ meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" location prefer-pcmk-1 WebSite 50: pcmk-1 colocation website-with-ip inf: WebSite ClusterIP order apache-after-ip inf: ClusterIP WebSite property $id="cib-bootstrap-options" \         dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \         cluster-infrastructure="openais" \         expected-quorum-votes=”2” \         stonith-enabled="false" \         no-quorum-policy="ignore" rsc_defaults $id="rsc-options" \         resource-stickiness=”100” Once we’re happy with the changes, we can tell the cluster to start using them and use crm_mon to check everything is functioning. crm(drbd)# cib commit drbd INFO: commited 'drbd' shadow CIB to the cluster crm(drbd)# quit bye [root@pcmk-1 ~]# crm_mon ============ Last updated: Tue Sep  1 09:37:13 2009 Stack: openais Current DC: pcmk-1 - partition with quorum Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f 2 Nodes configured, 2 expected votes 3 Resources configured. ============ Online: [ pcmk-1 pcmk-2 ] ClusterIP        (ocf::heartbeat:IPaddr):        Started pcmk-1 WebSite (ocf::heartbeat:apache):        Started pcmk-1 Master/Slave Set: WebDataClone Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] Include details on adding a second DRBD resource Now that DRBD is functioning we can configure a Filesystem resource to use it. In addition to the filesystem’s definition, we also need to tell the cluster where it can be located (only on the DRBD Primary) and when it is allowed to start (after the Primary was promoted). Once again we’ll use the shell’s interactive mode [root@pcmk-1 ~]# crm crm(live)# cib new fs INFO: fs shadow CIB created crm(fs)# configure primitive WebFS ocf:heartbeat:Filesystem \         params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="ext4" crm(fs)# configure colocation fs_on_drbd inf: WebFS WebDataClone:Master crm(fs)# configure order WebFS-after-WebData inf: WebDataClone:promote WebFS:start We also need to tell the cluster that Apache needs to run on the same machine as the filesystem and that it must be active before Apache can start. 
crm(fs)# configure colocation WebSite-with-WebFS inf: WebSite WebFS crm(fs)# configure order WebSite-after-WebFS inf: WebFS WebSite Time to review the updated configuration: [root@pcmk-1 ~]# crm configure show node pcmk-1 node pcmk-2 primitive WebData ocf:linbit:drbd \         params drbd_resource="wwwdata" \         op monitor interval="60s" primitive WebFS ocf:heartbeat:Filesystem \         params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="ext4" primitive WebSite ocf:heartbeat:apache \         params configfile="/etc/httpd/conf/httpd.conf" \         op monitor interval="1min" primitive ClusterIP ocf:heartbeat:IPaddr2 \         params ip="192.168.122.101" cidr_netmask="32" \         op monitor interval="30s" ms WebDataClone WebData \         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" location prefer-pcmk-1 WebSite 50: pcmk-1 colocation WebSite-with-WebFS inf: WebSite WebFS colocation fs_on_drbd inf: WebFS WebDataClone:Master colocation website-with-ip inf: WebSite ClusterIP order WebFS-after-WebData inf: WebDataClone:promote WebFS:start order WebSite-after-WebFS inf: WebFS WebSite order apache-after-ip inf: ClusterIP WebSite property $id="cib-bootstrap-options" \         dc-version="1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f" \         cluster-infrastructure="openais" \         expected-quorum-votes=”2” \         stonith-enabled="false" \         no-quorum-policy="ignore" rsc_defaults $id="rsc-options" \         resource-stickiness=”100” After reviewing the new configuration, we again upload it and watch the cluster put it into effect. crm(fs)# cib commit fs INFO: commited 'fs' shadow CIB to the cluster crm(fs)# quit bye [root@pcmk-1 ~]# crm_mon ============ Last updated: Tue Sep  1 10:08:44 2009 Stack: openais Current DC: pcmk-1 - partition with quorum Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f 2 Nodes configured, 2 expected votes 4 Resources configured. ============ Online: [ pcmk-1 pcmk-2 ] ClusterIP        (ocf::heartbeat:IPaddr):        Started pcmk-1 WebSite (ocf::heartbeat:apache): Started pcmk-1 Master/Slave Set: WebDataClone         Masters: [ pcmk-1 ]         Slaves: [ pcmk-2 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-1
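If you want to confirm that the constraints placed the filesystem where intended, you can ask the cluster where WebFS is running and cross-check against the DRBD role on that node (illustrative checks only):
[root@pcmk-1 ~]# crm_resource --resource WebFS --locate
[root@pcmk-1 ~]# drbdadm role wwwdata
The node reported by the first command should be the one where drbdadm reports Primary.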
Testing Migration We could shut down the active node again, but another way to safely simulate recovery is to put the node into what is called “standby mode”. Nodes in this state tell the cluster that they are not allowed to run resources. Any resources found active there will be moved elsewhere. This feature can be particularly useful when updating the resources’ packages. Put the local node into standby mode and observe the cluster move all the resources to the other node. Note also that the node’s status will change to indicate that it can no longer host resources. [root@pcmk-1 ~]# crm node standby [root@pcmk-1 ~]# crm_mon ============ Last updated: Tue Sep  1 10:09:57 2009 Stack: openais Current DC: pcmk-1 - partition with quorum Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f 2 Nodes configured, 2 expected votes 4 Resources configured. ============ Node pcmk-1: standby Online: [ pcmk-2 ] ClusterIP        (ocf::heartbeat:IPaddr):        Started pcmk-2 WebSite (ocf::heartbeat:apache):        Started pcmk-2 Master/Slave Set: WebDataClone         Masters: [ pcmk-2 ]         Stopped: [ WebData:1 ] WebFS   (ocf::heartbeat:Filesystem):    Started pcmk-2 Once we’ve done everything we needed to on pcmk-1 (in this case nothing, we just wanted to see the resources move), we can allow the node to be a full cluster member again. [root@pcmk-1 ~]# crm node online [root@pcmk-1 ~]# crm_mon ============ Last updated: Tue Sep  1 10:13:25 2009 Stack: openais Current DC: pcmk-1 - partition with quorum Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f 2 Nodes configured, 2 expected votes 4 Resources configured. ============ Online: [ pcmk-1 pcmk-2 ] ClusterIP        (ocf::heartbeat:IPaddr):        Started pcmk-2 WebSite (ocf::heartbeat:apache):        Started pcmk-2 Master/Slave Set: WebDataClone         Masters: [ pcmk-2 ]         Slaves: [ pcmk-1 ] WebFS   (ocf::heartbeat:Filesystem):    Started pcmk-2 Notice that our resource stickiness settings prevent the services from migrating back to pcmk-1.
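If you did want the resources back on pcmk-1 despite the stickiness, one option (a sketch; the exact command names vary between crm shell versions) is to temporarily force the web site there and then remove the resulting constraint again:
[root@pcmk-1 ~]# crm resource migrate WebSite pcmk-1
[root@pcmk-1 ~]# crm resource unmigrate WebSite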