diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.txt b/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.txt index 51cebdaa99..fd78f711ef 100644 --- a/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.txt +++ b/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.txt @@ -1,381 +1,381 @@ = Convert Cluster to Active/Active = The primary requirement for an Active/Active cluster is that the data required for your services is available, simultaneously, on both machines. Pacemaker makes no requirement on how this is achieved; you could use a SAN if you had one available, but since DRBD supports multiple Primaries, we can continue to use it here. == Install Cluster Filesystem Software == The only hitch is that we need to use a cluster-aware filesystem. The -one we used earlier with DRBD, ext4, is not one of those. Both OCFS2 +one we used earlier with DRBD, xfs, is not one of those. Both OCFS2 and GFS2 are supported; here, we will use GFS2. On both nodes, install the GFS2 command-line utilities and the Distributed Lock Manager (DLM) required by cluster filesystems: ---- # yum install -y gfs2-utils dlm ---- == Configure the Cluster for the DLM == The DLM needs to run on both nodes, so we'll start by creating a resource for it (using the *ocf:pacemaker:controld* resource script), and clone it: ---- [root@pcmk-1 ~]# pcs cluster cib dlm_cfg [root@pcmk-1 ~]# pcs -f dlm_cfg resource create dlm ocf:pacemaker:controld op monitor interval=60s [root@pcmk-1 ~]# pcs -f dlm_cfg resource clone dlm clone-max=2 clone-node-max=1 [root@pcmk-1 ~]# pcs -f dlm_cfg resource show ClusterIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started Clone Set: dlm-clone [dlm] Stopped: [ pcmk-1 pcmk-2 ] ---- Activate our new configuration, and see how the cluster responds: ---- [root@pcmk-1 ~]# pcs cluster cib-push dlm_cfg CIB updated [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Sat Dec 20 21:53:44 2014 Last change: Sat Dec 20 21:53:40 2014 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 8 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-2 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-2 ipmi-fencing (stonith:fence_ipmilan): Started pcmk-1 Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- [[GFS2_prep]] == Create and Populate GFS2 Filesystem == Before we do anything to the existing partition, we need to make sure it is unmounted. We do this by telling the cluster to stop the WebFS resource. This will ensure that other resources (in our case, Apache) using WebFS are not only stopped, but stopped in the correct order. 
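The stop order comes from the ordering constraints created in the previous chapter. If you want to review them before proceeding, list them with `pcs constraint` (the output below reflects the constraints as configured earlier in this guide; yours may differ if you adjusted them):

----
[root@pcmk-1 ~]# pcs constraint
Location Constraints:
Ordering Constraints:
  start ClusterIP then start WebSite (kind:Mandatory)
  promote WebDataClone then start WebFS (kind:Mandatory)
  start WebFS then start WebSite (kind:Mandatory)
Colocation Constraints:
  WebSite with ClusterIP (score:INFINITY)
  WebFS with WebDataClone (score:INFINITY) (with-rsc-role:Master)
  WebSite with WebFS (score:INFINITY)
----

Now disable WebFS: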
---- [root@pcmk-1 ~]# pcs resource disable WebFS [root@pcmk-1 ~]# pcs resource ClusterIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Stopped Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Stopped Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] ---- You can see that both Apache and WebFS have been stopped, and that *pcmk-2* is the current master for the DRBD device. Now we can create a new GFS2 filesystem on the DRBD device. [WARNING] ========= This will erase all previous content stored on the DRBD device. Ensure you have a copy of any important data. ========= [IMPORTANT] =========== Run the next command on whichever node has the DRBD Primary role. Otherwise, you will receive the message: ----- /dev/drbd1: Read-only file system ----- =========== ----- [root@pcmk-2 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t mycluster:web /dev/drbd1 -It appears to contain an existing filesystem (ext4) +It appears to contain an existing filesystem (xfs) This will destroy any data on /dev/drbd1 Are you sure you want to proceed? [y/n]y Device: /dev/drbd1 Block size: 4096 Device size: 1.00 GB (262127 blocks) -Filesystem size: 1.00 GB (262124 blocks) +Filesystem size: 1.00 GB (262126 blocks) Journals: 2 -Resource groups: 3 +Resource groups: 5 Locking protocol: "lock_dlm" Lock table: "mycluster:web" -UUID: b2b30e6c-8890-33fa-a1ba-3c70edd4b5f0 +UUID: 9a72c488-d8a7-24c9-ceee-add7a8ca52c2 ----- The `mkfs.gfs2` command required a number of additional parameters: * `-p lock_dlm` specifies that we want to use the kernel's DLM. * `-j 2` indicates that the filesystem should reserve enough space for two journals (one for each node that will access the filesystem). * `-t mycluster:web` specifies the lock table name. The format for this field is +pass:[clustername:fsname]+. For +pass:[clustername]+, we need to use the same value we specified originally with `pcs cluster setup --name` (which is also the value of *cluster_name* in +/etc/corosync/corosync.conf+). If you are unsure what your cluster name is, you can look in +/etc/corosync/corosync.conf+ or execute the command `pcs cluster corosync pcmk-1 | grep cluster_name`. Now we can (re-)populate the new filesystem with data (web pages). We'll create yet another variation on our home page. ----- [root@pcmk-2 ~]# mount /dev/drbd1 /mnt [root@pcmk-2 ~]# cat <<-END >/mnt/index.html My Test Site - GFS2 END [root@pcmk-2 ~]# umount /dev/drbd1 [root@pcmk-2 ~]# drbdadm verify wwwdata ----- == Reconfigure the Cluster for GFS2 == With the WebFS resource stopped, let's update the configuration. ---- [root@pcmk-1 ~]# pcs resource show WebFS Resource: WebFS (class=ocf provider=heartbeat type=Filesystem) - Attributes: device=/dev/drbd1 directory=/var/www/html fstype=ext4 - Meta Attrs: target-role=Stopped + Attributes: device=/dev/drbd1 directory=/var/www/html fstype=xfs + Meta Attrs: target-role=Stopped Operations: start interval=0s timeout=60 (WebFS-start-timeout-60) stop interval=0s timeout=60 (WebFS-stop-timeout-60) monitor interval=20 timeout=40 (WebFS-monitor-interval-20) ---- -The fstype option needs to be updated to *gfs2* instead of *ext4*. +The fstype option needs to be updated to *gfs2* instead of *xfs*. 
---- [root@pcmk-1 ~]# pcs resource update WebFS fstype=gfs2 [root@pcmk-1 ~]# pcs resource show WebFS Resource: WebFS (class=ocf provider=heartbeat type=Filesystem) Attributes: device=/dev/drbd1 directory=/var/www/html fstype=gfs2 Meta Attrs: target-role=Stopped Operations: start interval=0s timeout=60 (WebFS-start-timeout-60) stop interval=0s timeout=60 (WebFS-stop-timeout-60) monitor interval=20 timeout=40 (WebFS-monitor-interval-20) ---- GFS2 requires that DLM be running, so we also need to set up new colocation and ordering constraints for it: ---- [root@pcmk-1 ~]# pcs constraint colocation add WebFS with dlm-clone INFINITY [root@pcmk-1 ~]# pcs constraint order dlm-clone then WebFS Adding dlm-clone WebFS (kind: Mandatory) (Options: first-action=start then-action=start) ---- == Clone the IP address == There's no point making the services active on both locations if we can't reach them both, so let's clone the IP address. The *IPaddr2* resource agent has built-in intelligence for when it is configured as a clone. It will utilize a multicast MAC address to have the local switch send the relevant packets to all nodes in the cluster, together with *iptables clusterip* rules on the nodes so that any given packet will be grabbed by exactly one node. This will give us a simple but effective form of load-balancing requests between our two nodes. Let's start a new config, and clone our IP: ---- [root@pcmk-1 ~]# pcs cluster cib loadbalance_cfg [root@pcmk-1 ~]# pcs -f loadbalance_cfg resource clone ClusterIP \ clone-max=2 clone-node-max=2 globally-unique=true ---- * `clone-max=2` tells the resource agent to split packets this many ways. This should equal the number of nodes that can host the IP. * `clone-node-max=2` says that one node can run up to 2 instances of the clone. This should also equal the number of nodes that can host the IP, so that if any node goes down, another node can take over the failed node's "request bucket". Otherwise, requests intended for the failed node would be discarded. * `globally-unique=true` tells the cluster that one clone isn't identical to another (each handles a different "bucket"). This also tells the resource agent to insert *iptables* rules so each host only processes packets in its bucket(s). Notice that when the ClusterIP becomes a clone, the constraints referencing ClusterIP now reference the clone. This is done automatically by pcs. ---- [root@pcmk-1 ~]# pcs -f loadbalance_cfg constraint Location Constraints: Ordering Constraints: start ClusterIP-clone then start WebSite (kind:Mandatory) promote WebDataClone then start WebFS (kind:Mandatory) start WebFS then start WebSite (kind:Mandatory) start dlm-clone then start WebFS (kind:Mandatory) Colocation Constraints: WebSite with ClusterIP-clone (score:INFINITY) WebFS with WebDataClone (score:INFINITY) (with-rsc-role:Master) WebSite with WebFS (score:INFINITY) WebFS with dlm-clone (score:INFINITY) ---- Now we must tell the resource how to decide which requests are processed by which hosts. To do this, we specify the *clusterip_hash* parameter. The value of *sourceip* means that the source IP address of incoming packets will be hashed; each node will process a certain range of hashes. ---- [root@pcmk-1 ~]# pcs -f loadbalance_cfg resource update ClusterIP clusterip_hash=sourceip ---- Load our configuration to the cluster, and see how it responds. 
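Optionally, before pushing, you can ask Pacemaker to validate the queued configuration; `crm_verify` can check a raw CIB file such as the +loadbalance_cfg+ file `pcs` created above (it prints nothing if the configuration is valid):

----
[root@pcmk-1 ~]# crm_verify -x loadbalance_cfg
----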
----- [root@pcmk-1 ~]# pcs cluster cib-push loadbalance_cfg CIB updated [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Sat Dec 20 22:05:48 2014 Last change: Sat Dec 20 22:05:34 2014 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 9 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: WebSite (ocf::heartbeat:apache): Stopped Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 ] Slaves: [ pcmk-2 ] WebFS (ocf::heartbeat:Filesystem): Stopped ipmi-fencing (stonith:fence_ipmilan): Started pcmk-1 Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] Clone Set: ClusterIP-clone [ClusterIP] (unique) ClusterIP:0 (ocf::heartbeat:IPaddr2): Started pcmk-1 ClusterIP:1 (ocf::heartbeat:IPaddr2): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ----- If desired, you can demonstrate that all request buckets are working by using a tool such as `arping` from several source hosts to see which host responds to each. == Clone the Filesystem and Apache Resources == Now that we have a cluster filesystem ready to go, and our nodes can load-balance requests to a shared IP address, we can configure the cluster so both nodes mount the filesystem and respond to web requests. Clone the filesystem and Apache resources in a new configuration. Notice how pcs automatically updates the relevant constraints again. ---- [root@pcmk-1 ~]# pcs cluster cib active_cfg [root@pcmk-1 ~]# pcs -f active_cfg resource clone WebFS [root@pcmk-1 ~]# pcs -f active_cfg resource clone WebSite [root@pcmk-1 ~]# pcs -f active_cfg constraint Location Constraints: Ordering Constraints: start ClusterIP-clone then start WebSite-clone (kind:Mandatory) promote WebDataClone then start WebFS-clone (kind:Mandatory) start WebFS-clone then start WebSite-clone (kind:Mandatory) start dlm-clone then start WebFS-clone (kind:Mandatory) Colocation Constraints: WebSite-clone with ClusterIP-clone (score:INFINITY) WebFS-clone with WebDataClone (score:INFINITY) (with-rsc-role:Master) WebSite-clone with WebFS-clone (score:INFINITY) WebFS-clone with dlm-clone (score:INFINITY) ---- Tell the cluster that it is now allowed to promote both instances to be DRBD Primary (aka. master). ----- [root@pcmk-1 ~]# pcs -f active_cfg resource update WebDataClone master-max=2 ----- Finally, load our configuration to the cluster, and re-enable the WebFS resource (which we disabled earlier). ----- [root@pcmk-1 ~]# pcs cluster cib-push active_cfg CIB updated [root@pcmk-1 ~]# pcs resource enable WebFS ----- After all the processes are started, the status should look similar to this. ----- [root@pcmk-1 ~]# pcs resource Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 pcmk-2 ] Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] Clone Set: ClusterIP-clone [ClusterIP] (unique) ClusterIP:0 (ocf::heartbeat:IPaddr2): Started ClusterIP:1 (ocf::heartbeat:IPaddr2): Started Clone Set: WebFS-clone [WebFS] Started: [ pcmk-1 pcmk-2 ] Clone Set: WebSite-clone [WebSite] Started: [ pcmk-1 pcmk-2 ] ----- == Test Failover == Testing failover is left as an exercise for the reader. For example, you can put one node into standby mode, use `pcs status` to confirm that its ClusterIP clone was moved to the other node, and use `arping` to verify that packets are not being lost from any source host. 
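As a minimal sketch of that exercise (the interface name *eth0* and the third host are assumptions; replace 192.168.122.120 with the ClusterIP address you assigned earlier in this guide):

----
[root@pcmk-1 ~]# pcs cluster standby pcmk-1
[root@pcmk-1 ~]# pcs status
[root@test-host ~]# arping -I eth0 -c 4 192.168.122.120
[root@pcmk-1 ~]# pcs cluster unstandby pcmk-1
----

Run the `arping` from one or more hosts outside the cluster; if the surviving node answers every request, no request bucket was lost during the failover.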
[NOTE] ==== You may find that when a failed node rejoins the cluster, both ClusterIP clones stay on one node, due to the resource stickiness. While this works fine, it effectively eliminates load-balancing and returns the cluster to an active-passive setup again. You can avoid this by disabling stickiness for the IP address resource: ---- [root@pcmk-1 ~]# pcs resource meta ClusterIP resource-stickiness=0 ---- ==== diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt b/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt index 3301683f58..58651e7f5d 100644 --- a/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt +++ b/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt @@ -1,510 +1,508 @@ = Replicate Storage Using DRBD = Even if you're serving up static websites, having to manually synchronize the contents of that website to all the machines in the cluster is not ideal. For dynamic websites, such as a wiki, it's not even an option. Not everyone can afford network-attached storage, but somehow the data needs to be kept in sync. Enter DRBD, which can be thought of as network-based RAID-1. footnote:[See http://www.drbd.org/ for details.] == Install the DRBD Packages == DRBD itself is included in the upstream kernel, footnote:[Since version 2.6.33] but we do need some utilities to use it effectively. On both nodes, run: ---- # yum install -y drbd-pacemaker drbd-udev ---- == Allocate a Disk Volume for DRBD == DRBD will need its own block device on each node. This can be a physical disk partition or logical volume, of whatever size you need for your data. For this document, we will use a 1GiB logical volume, which is more than sufficient for a single HTML file and (later) GFS2 metadata. ---- [root@pcmk-1 ~]# vgdisplay | grep -e Name -e Free VG Name fedora-server_pcmk-1 Free PE / Size 511 / 2.00 GiB [root@pcmk-1 ~]# lvcreate --name drbd-demo --size 1G fedora-server_pcmk-1 Logical volume "drbd-demo" created [root@pcmk-1 ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert drbd-demo fedora-server_pcmk-1 -wi-a----- 1.00g root fedora-server_pcmk-1 -wi-ao---- 5.00g swap fedora-server_pcmk-1 -wi-ao---- 1.00g ---- Repeat for the second node, making sure to use the same size: ---- [root@pcmk-1 ~]# ssh pcmk-2 -- lvcreate --name drbd-demo --size 1G fedora-server_pcmk-2 Logical volume "drbd-demo" created ---- == Configure DRBD == There is no series of commands for building a DRBD configuration, so simply run this on both nodes to use this sample configuration: ---- # cat <<END >/etc/drbd.d/wwwdata.res resource wwwdata { protocol C; meta-disk internal; device /dev/drbd1; syncer { verify-alg sha1; } net { allow-two-primaries; } on pcmk-1 { disk /dev/fedora-server_pcmk-1/drbd-demo; address 192.168.122.101:7789; } on pcmk-2 { disk /dev/fedora-server_pcmk-2/drbd-demo; address 192.168.122.102:7789; } } END ---- [IMPORTANT] ========= Edit the file to use the hostnames, IP addresses and logical volume paths of your nodes if they differ from the ones used in this guide. ========= [NOTE] ======= Detailed information on the directives used in this configuration (and other alternatives) is available at http://www.drbd.org/users-guide/ch-configure.html The *allow-two-primaries* option would not normally be used in an active/passive cluster. We are adding it here for the convenience of changing to an active/active cluster later. ======= == Initialize DRBD == With the configuration in place, we can now get DRBD running.
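Before initializing the device, you can optionally have `drbdadm` parse the new configuration and echo it back; any syntax errors will be reported instead:

----
# drbdadm dump wwwdata
----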
These commands create the local metadata for the DRBD resource, ensure the DRBD kernel module is loaded, and bring up the DRBD resource. Run them on one node: ---- # drbdadm create-md wwwdata initializing activity log NOT initializing bitmap Writing meta data... New drbd meta data block successfully created. # modprobe drbd # drbdadm up wwwdata ---- We can confirm DRBD's status on this node: ---- # cat /proc/drbd version: 8.4.5 (api:1/proto:86-101) srcversion: 153833F4A69E341D3F3E707 1: cs:WFConnection ro:Secondary/Unknown ds:Inconsistent/DUnknown C r----s ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508 ---- Because we have not yet initialized the data, this node's data is marked as *Inconsistent*. Because we have not yet initialized the second node, the local state is *WFConnection* (waiting for connection), and the partner node's status is marked as *Unknown*. Now, repeat the above commands on the second node. This time, when we check the status, it shows: ---- # cat /proc/drbd version: 8.4.5 (api:1/proto:86-101) srcversion: 153833F4A69E341D3F3E707 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----- ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508 ---- You can see the state has changed to *Connected*, meaning the two DRBD nodes are communicating properly, and both nodes are in *Secondary* role with *Inconsistent* data. To make the data consistent, we need to tell DRBD which node should be considered to have the correct data. In this case, since we are creating a new resource, both have garbage, so we'll just pick pcmk-1 and run this command on it: ---- [root@pcmk-1 ~]# drbdadm primary --force wwwdata ---- [NOTE] ====== If you are using an older version of DRBD, the required syntax may be different. See the documentation for your version for how to perform these commands. ====== If we check the status immediately, we'll see something like this: ---- [root@pcmk-1 ~]# cat /proc/drbd version: 8.4.5 (api:1/proto:86-101) srcversion: 153833F4A69E341D3F3E707 1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----- ns:2872 nr:0 dw:0 dr:3784 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1045636 [>....................] sync'ed: 0.4% (1045636/1048508)K finish: 0:10:53 speed: 1,436 (1,436) K/sec ---- We can see that this node has the *Primary* role, the partner node has the *Secondary* role, this node's data is now considered *UpToDate*, the partner node's data is still *Inconsistent*, and a progress bar shows how far along the partner node is in synchronizing the data. After a while, the sync should finish, and you'll see something like: ---- [root@pcmk-1 ~]# cat /proc/drbd version: 8.4.5 (api:1/proto:86-101) srcversion: 153833F4A69E341D3F3E707 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----- ns:1048508 nr:0 dw:0 dr:1049420 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 ---- Both sets of data are now *UpToDate*, and we can proceed to creating and populating a filesystem for our WebSite resource's documents. 
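If you prefer a one-line summary to reading +/proc/drbd+, `drbdadm` can report the connection, role, and disk states directly (the output below reflects the state we just reached; the exact wording can vary between drbd-utils versions):

----
[root@pcmk-1 ~]# drbdadm cstate wwwdata
Connected
[root@pcmk-1 ~]# drbdadm role wwwdata
Primary/Secondary
[root@pcmk-1 ~]# drbdadm dstate wwwdata
UpToDate/UpToDate
----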
== Populate the DRBD Disk == On the node with the primary role (pcmk-1 in this example), create a filesystem on the DRBD device: ---- -[root@pcmk-1 ~]# mkfs.ext4 /dev/drbd1 -mke2fs 1.42.11 (09-Jul-2014) -Creating filesystem with 262127 4k blocks and 65536 inodes -Filesystem UUID: 26879260-9077-4d6d-ad69-7d31d3d8d8d4 -Superblock backups stored on blocks: - 32768, 98304, 163840, 229376 - -Allocating group tables: done -Writing inode tables: done -Creating journal (4096 blocks): done -Writing superblocks and filesystem accounting information: done +[root@pcmk-1 ~]# mkfs.xfs /dev/drbd1 +meta-data=/dev/drbd1 isize=256 agcount=4, agsize=65532 blks + = sectsz=512 attr=2, projid32bit=1 + = crc=0 finobt=0 +data = bsize=4096 blocks=262127, imaxpct=25 + = sunit=0 swidth=0 blks +naming =version 2 bsize=4096 ascii-ci=0 ftype=0 +log =internal log bsize=4096 blocks=853, version=2 + = sectsz=512 sunit=0 blks, lazy-count=1 +realtime =none extsz=4096 blocks=0, rtextents=0 ---- [NOTE] ==== -In this example, we create an ext4 filesystem with no special options. +In this example, we create an xfs filesystem with no special options. In a production environment, you should choose a filesystem type and options that are suitable for your application. ==== Mount the newly created filesystem, populate it with our web document, then unmount it (the cluster will handle mounting and unmounting it later): ---- [root@pcmk-1 ~]# mount /dev/drbd1 /mnt [root@pcmk-1 ~]# cat <<-END >/mnt/index.html My Test Site - DRBD END [root@pcmk-1 ~]# umount /dev/drbd1 ---- == Configure the Cluster for the DRBD device == One handy feature `pcs` has is the ability to queue up several changes into a file and commit those changes atomically. To do this, start by populating the file with the current raw XML config from the CIB. ---- [root@pcmk-1 ~]# pcs cluster cib drbd_cfg ---- Using the `pcs -f` option, make changes to the configuration saved in the +drbd_cfg+ file. These changes will not be seen by the cluster until the +drbd_cfg+ file is pushed into the live cluster's CIB later. Here, we create a cluster resource for the DRBD device, and an additional _clone_ resource to allow the resource to run on both nodes at the same time. ---- [root@pcmk-1 ~]# pcs -f drbd_cfg resource create WebData ocf:linbit:drbd \ drbd_resource=wwwdata op monitor interval=60s [root@pcmk-1 ~]# pcs -f drbd_cfg resource master WebDataClone WebData \ master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 \ notify=true [root@pcmk-1 ~]# pcs -f drbd_cfg resource show ClusterIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started Master/Slave Set: WebDataClone [WebData] Stopped: [ pcmk-1 pcmk-2 ] ---- After you are satisfied with all the changes, you can commit them all at once by pushing the drbd_cfg file into the live CIB. ---- [root@pcmk-1 ~]# pcs cluster cib-push drbd_cfg CIB updated ---- [NOTE] ==== Early versions of `pcs` required `push cib` in place of `cib-push` above. 
==== Let's see what the cluster did with the new configuration: ---- [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 16:39:43 2014 Last change: Wed Dec 17 16:39:30 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 4 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1 WebSite (ocf::heartbeat:apache): Started pcmk-1 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 ] Slaves: [ pcmk-2 ] PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- We can see that *WebDataClone* (our DRBD device) is running as master (DRBD's primary role) on *pcmk-1* and slave (DRBD's secondary role) on *pcmk-2*. [IMPORTANT] ==== The resource agent should load the DRBD module when needed if it's not already loaded. If that does not happen, configure your operating system to load the module at boot time. For &DISTRO; &DISTRO_VERSION;, you would run this on both nodes: ---- # echo drbd >/etc/modules-load.d/drbd.conf ---- ==== == Configure the Cluster for the Filesystem == Now that we have a working DRBD device, we need to mount its filesystem. In addition to defining the filesystem, we also need to tell the cluster where it can be located (only on the DRBD Primary) and when it is allowed to start (after the Primary was promoted). We are going to take a shortcut when creating the resource this time. Instead of explicitly saying we want the *ocf:heartbeat:Filesystem* script, we are only going to ask for *Filesystem*. We can do this because we know there is only one resource script named *Filesystem* available to pacemaker, and that pcs is smart enough to fill in the *ocf:heartbeat:* portion for us correctly in the configuration. If there were multiple *Filesystem* scripts from different OCF providers, we would need to specify the exact one we wanted. Once again, we will queue our changes to a file and then push the new configuration to the cluster as the final step. ---- [root@pcmk-1 ~]# pcs cluster cib fs_cfg [root@pcmk-1 ~]# pcs -f fs_cfg resource create WebFS Filesystem \ - device="/dev/drbd1" directory="/var/www/html" \ - fstype="ext4" + device="/dev/drbd1" directory="/var/www/html" fstype="xfs" [root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add WebFS with WebDataClone INFINITY with-rsc-role=Master [root@pcmk-1 ~]# pcs -f fs_cfg constraint order promote WebDataClone then start WebFS Adding WebDataClone WebFS (kind: Mandatory) (Options: first-action=promote then-action=start) ---- We also need to tell the cluster that Apache needs to run on the same machine as the filesystem and that it must be active before Apache can start. ---- [root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add WebSite with WebFS INFINITY [root@pcmk-1 ~]# pcs -f fs_cfg constraint order WebFS then WebSite Adding WebFS WebSite (kind: Mandatory) (Options: first-action=start then-action=start) ---- Review the updated configuration. 
---- [root@pcmk-1 ~]# pcs -f fs_cfg constraint Location Constraints: Ordering Constraints: start ClusterIP then start WebSite (kind:Mandatory) promote WebDataClone then start WebFS (kind:Mandatory) start WebFS then start WebSite (kind:Mandatory) Colocation Constraints: WebSite with ClusterIP (score:INFINITY) WebFS with WebDataClone (score:INFINITY) (with-rsc-role:Master) WebSite with WebFS (score:INFINITY) ---- ---- [root@pcmk-1 ~]# pcs -f fs_cfg resource show ClusterIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 ] Slaves: [ pcmk-2 ] WebFS (ocf::heartbeat:Filesystem): Stopped ---- After reviewing the new configuration, upload it and watch the cluster put it into effect. ---- [root@pcmk-1 ~]# pcs cluster cib-push fs_cfg [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 17:02:45 2014 Last change: Wed Dec 17 17:02:42 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 5 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1 WebSite (ocf::heartbeat:apache): Started pcmk-1 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 ] Slaves: [ pcmk-2 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-1 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- == Test Cluster Failover == Previously, we used `pcs cluster stop pcmk-1` to stop all cluster services on *pcmk-1*, failing over the cluster resources, but there is another way to safely simulate node failure. We can put the node into _standby mode_. Nodes in this state continue to run corosync and pacemaker but are not allowed to run resources. Any resources found active there will be moved elsewhere. This feature can be particularly useful when performing system administration tasks such as updating packages used by cluster resources. Put the active node into standby mode, and observe the cluster move all the resources to the other node. The node's status will change to indicate that it can no longer host resources. ---- [root@pcmk-1 ~]# pcs cluster standby pcmk-1 [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 17:14:05 2014 Last change: Wed Dec 17 17:14:02 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 5 Resources configured Node pcmk-1 (1): standby Online: [ pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-2 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Stopped: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- Once we've done everything we needed to on pcmk-1 (in this case nothing, we just wanted to see the resources move), we can allow the node to be a full cluster member again. 
---- [root@pcmk-1 ~]# pcs cluster unstandby pcmk-1 [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 17:15:36 2014 Last change: Wed Dec 17 17:15:33 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 5 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-2 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- Notice that *pcmk-1* is back to the *Online* state, and that the cluster resources stay where they are due to our resource stickiness settings configured earlier.
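If you want to confirm the stickiness value that keeps them in place, list the resource defaults (this assumes stickiness was set as a resource default, as done earlier in this guide):

----
[root@pcmk-1 ~]# pcs resource defaults
resource-stickiness: 100
----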