diff --git a/TODO.markdown b/TODO.markdown
index 8d3982d67c..dfe6610ab7 100644
--- a/TODO.markdown
+++ b/TODO.markdown
@@ -1,57 +1,56 @@
# Semi-random collection of tasks we'd like to get done

## Targeted for 1.2
- Need a way to indicate when unfencing operations need to be initiated from the host that is to be unfenced
- Remove all calls to uname() and replace with get_node_name(), which redirects to ${stack}_node_name()

## Targeted for 1.2.x
- Support http://cgit.freedesktop.org/systemd/systemd/commit/?id=96342de68d0d6de71a062d984dafd2a0905ed9fe
- Allow stonith_admin to optionally route fencing requests via the CIB (terminate=true)
- Add corosync to ComponentFail cts test
- Support 'yesterday' and 'thursday' and '24-04' as dates in crm_report
- Allow the N in 'give up after N failed fencing attempts' to be configurable
- Check for uppercase letters in node names, warn if found
- Imply startup-failure-is-fatal from on-fail="restart"
- Show an English version of the config with crm_resource --rules
- Convert cts/CIB.py into a supported Python API for the CIB
- Reduce the amount of stonith-ng logging
- Use dlopen for snmp in crm_mon
- Re-implement no-quorum filter for cib updates?

## Targeted for 1.4
- Support A colocated with (B || C || D)
- Implement a truly atomic version of attrd
- Support rolling average values in attrd
- Support heartbeat with the mcp
- Freeze/Thaw
- Create Pacemaker plugin for snmpd - http://www.net-snmp.org/
- Investigate using a DB as the back-end for the CIB
- Decide whether to fully support or drop failover domains

# Testing
- Convert BandwidthTest CTS test into a Scenario wrapper
- find_operations() is not covered by PE regression tests
- no_quorum_policy==suicide is not covered by PE regression tests
- parse_xml_duration() is not covered by PE regression tests
- phase_of_the_moon() is not covered by PE regression tests
- test_role_expression() is not covered by PE regression tests
- native_parameter() is not covered by PE regression tests
- clone_active() is not covered by PE regression tests
- convert_non_atomic_task() in native.c is not covered by PE regression tests
- group_rsc_colocation_lh() is not covered by PE regression tests
- Test on-fail=standby

# Documentation
- Clusters from Scratch: Mail
- Clusters from Scratch: MySQL
- Document reload in Pacemaker Explained
- Document advanced fencing logic in Pacemaker Explained
- Use ann:defaultValue="..." instead of in the schema more often
- Document in CFS an Appendix detailing re-enabling the firewall
-- Remove ocf:heartbeat part of resource create to demonstrate that the resource is automatically found.
- Document implicit operation creation in CFS once pcs supports it.
- Document use of pcs resource move command in CFS once pcs supports it.
- Make use of --clone option in pcs resource create dlm in CFS once pcs fully supports that option.
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt b/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt
index 3ff3697d17..77cac5f42b 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt
@@ -1,761 +1,769 @@
= Replicated Storage with DRBD =

== Background ==

Even if you're serving up static websites, having to manually
synchronize the contents of that website to all the machines in the
cluster is not ideal. For dynamic websites, such as a wiki, it's not
even an option. Not everyone can afford network-attached storage, but
somehow the data needs to be kept in sync.
Enter DRBD, which can be thought of as network-based RAID-1. See
http://www.drbd.org/ for more details.

== Install the DRBD Packages ==

Since its inclusion in the upstream 2.6.33 kernel, everything needed
to use DRBD has shipped with Fedora since version 13. All you need to
do is install it:

[source,Bash]
.....
# yum install -y drbd-pacemaker drbd-udev
Loaded plugins: langpacks, presto, refresh-packagekit
Resolving Dependencies
--> Running transaction check
---> Package drbd-pacemaker.x86_64 0:8.3.11-5.fc17 will be installed
--> Processing Dependency: drbd-utils = 8.3.11-5.fc17 for package: drbd-pacemaker-8.3.11-5.fc17.x86_64
---> Package drbd-udev.x86_64 0:8.3.11-5.fc17 will be installed
--> Running transaction check
---> Package drbd-utils.x86_64 0:8.3.11-5.fc17 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================
 Package             Arch       Version            Repository            Size
======================================================================================
Installing:
 drbd-pacemaker      x86_64     8.3.11-5.fc17      updates-testing       22 k
 drbd-udev           x86_64     8.3.11-5.fc17      updates-testing      6.4 k
Installing for dependencies:
 drbd-utils          x86_64     8.3.11-5.fc17      updates-testing      183 k

Transaction Summary
======================================================================================
Install  2 Packages (+1 Dependent package)

Total download size: 212 k
Installed size: 473 k
Downloading Packages:
(1/3): drbd-pacemaker-8.3.11-5.fc17.x86_64.rpm             |  22 kB     00:00
(2/3): drbd-udev-8.3.11-5.fc17.x86_64.rpm                  | 6.4 kB     00:00
(3/3): drbd-utils-8.3.11-5.fc17.x86_64.rpm                 | 183 kB     00:00
--------------------------------------------------------------------------------------
Total                                             293 kB/s | 212 kB     00:00
Running Transaction Check
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : drbd-utils-8.3.11-5.fc17.x86_64                          1/3
  Installing : drbd-pacemaker-8.3.11-5.fc17.x86_64                      2/3
  Installing : drbd-udev-8.3.11-5.fc17.x86_64                           3/3
  Verifying  : drbd-pacemaker-8.3.11-5.fc17.x86_64                      1/3
  Verifying  : drbd-udev-8.3.11-5.fc17.x86_64                           2/3
  Verifying  : drbd-utils-8.3.11-5.fc17.x86_64                          3/3

Installed:
  drbd-pacemaker.x86_64 0:8.3.11-5.fc17    drbd-udev.x86_64 0:8.3.11-5.fc17

Dependency Installed:
  drbd-utils.x86_64 0:8.3.11-5.fc17

Complete!
.....

== Configure DRBD ==

Before we configure DRBD, we need to set aside some disk for it to use.

=== Create A Partition for DRBD ===

If you have more than 1GB free, feel free to use it. For this guide,
however, 1GB is plenty of space for a single HTML file and sufficient
for later holding the GFS2 metadata.

[source,Bash]
----
# vgdisplay | grep -e Name -e Free
  VG Name               vg_pcmk1
  Free  PE / Size       31 / 992.00 MiB
# lvs
  LV      VG       Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  lv_root vg_pcmk1 -wi-ao--   8.56g
  lv_swap vg_pcmk1 -wi-ao-- 960.00m
# lvcreate -n drbd-demo -L 1G vg_pcmk1
  Logical volume "drbd-demo" created
# lvs
  LV        VG       Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  drbd-demo vg_pcmk1 -wi-a---   1.00G
  lv_root   vg_pcmk1 -wi-ao--   8.56g
  lv_swap   vg_pcmk1 -wi-ao-- 960.00m
----

Repeat this on the second node; be sure to use the same size of partition.
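Before creating the volume there, you may want to confirm that pcmk-2
has at least as much free space in the matching volume group. A quick
check (a sketch assuming, as in this guide, that the volume group is
also named vg_pcmk1 on the second node):

[source,Bash]
----
## The 'Free PE / Size' lines should report at least 1GB free on each node
# vgdisplay | grep -e Name -e Free
# ssh pcmk-2 -- vgdisplay | grep -e Name -e Free
----

With that confirmed, create the identically-sized volume on pcmk-2: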
[source,Bash]
----
# ssh pcmk-2 -- lvs
  LV      VG       Attr     LSize   Origin Snap%  Move Log Copy%  Convert
  lv_root vg_pcmk1 -wi-ao--   8.56g
  lv_swap vg_pcmk1 -wi-ao-- 960.00m
# ssh pcmk-2 -- lvcreate -n drbd-demo -L 1G vg_pcmk1
  Logical volume "drbd-demo" created
# ssh pcmk-2 -- lvs
  LV        VG       Attr     LSize   Origin Snap%  Move Log Copy%  Convert
  drbd-demo vg_pcmk1 -wi-a---   1.00G
  lv_root   vg_pcmk1 -wi-ao--   8.56g
  lv_swap   vg_pcmk1 -wi-ao-- 960.00m
----

=== Write the DRBD Config ===

There is no series of commands for building a DRBD configuration, so
simply copy the configuration below into /etc/drbd.conf.

Detailed information on the directives used in this configuration (and
other alternatives) is available from
http://www.drbd.org/users-guide/ch-configure.html

[WARNING]
=========
Be sure to use the names and addresses of your nodes if they differ
from the ones used in this guide.
=========

....
global {
  usage-count yes;
}
common {
  protocol C;
}
resource wwwdata {
  meta-disk internal;
  device    /dev/drbd1;
  syncer {
    verify-alg sha1;
  }
  net {
    allow-two-primaries;
  }
  on pcmk-1 {
    disk      /dev/vg_pcmk1/drbd-demo;
    address   192.168.122.101:7789;
  }
  on pcmk-2 {
    disk      /dev/vg_pcmk1/drbd-demo;
    address   192.168.122.102:7789;
  }
}
....

[NOTE]
=======
TODO: Explain the reason for the allow-two-primaries option
=======

=== Initialize and Load DRBD ===

With the configuration in place, we can now perform the DRBD
initialization:

[source,Bash]
----
# drbdadm create-md wwwdata
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
----

Now load the DRBD kernel module and confirm that everything is sane:

[source,Bash]
----
# modprobe drbd
# drbdadm up wwwdata
# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
srcversion: 0D2B62DEDB020A425130935

 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1015740
----

Repeat on the second node:

[source,Bash]
----
# ssh pcmk-2 -- drbdadm --force create-md wwwdata
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
# ssh pcmk-2 -- modprobe drbd
WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
# ssh pcmk-2 -- drbdadm up wwwdata
# ssh pcmk-2 -- cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
srcversion: 0D2B62DEDB020A425130935

 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1015740
----

Now we need to tell DRBD which set of data to use. Since both sides
contain garbage, we can run the following on pcmk-1:

[source,Bash]
----
# drbdadm -- --overwrite-data-of-peer primary wwwdata
# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
srcversion: 0D2B62DEDB020A425130935

 1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:8064 nr:0 dw:0 dr:8728 al:0 bm:0 lo:0 pe:1 ua:0 ap:0 ep:1 wo:f oos:1007804
        [>....................] sync'ed:  0.9% (1007804/1015740)K
        finish: 0:12:35 speed: 1,320 (1,320) K/sec
----

After a while, the sync should finish and you'll see:

[source,Bash]
----
# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
srcversion: 0D2B62DEDB020A425130935

 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:1015740 nr:0 dw:0 dr:1016404 al:0 bm:62 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
----

pcmk-1 is now in the Primary state, which allows it to be written to.
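The initial synchronization can take a while on larger volumes. Rather
than re-running cat by hand while you wait, you can let a standard
utility poll the status for you; a minimal sketch using watch (part of
the procps package, not a DRBD-specific tool):

[source,Bash]
----
## Redisplay the DRBD status every 2 seconds; interrupt with Ctrl-C
## once 'ds:' reports UpToDate/UpToDate on both sides
# watch -n2 cat /proc/drbd
----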
Since pcmk-1 is now writable, this is a good point at which to create
a filesystem and populate it with some data to serve up via our
WebSite resource.

=== Populate DRBD with Data ===

[source,Bash]
----
# mkfs.ext4 /dev/drbd1
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
63488 inodes, 253935 blocks
12696 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=260046848
8 block groups
32768 blocks per group, 32768 fragments per group
7936 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
----

Now mount the newly created filesystem so we can create our index file:

[source,Bash]
----
# mount /dev/drbd1 /mnt/
# cat <<-END >/mnt/index.html
 <html>
  <body>My Test Site - drbd</body>
 </html>
END
# umount /dev/drbd1
----

== Configure the Cluster for DRBD ==

ifdef::pcs[]

One handy feature of pcs is the ability to queue up several changes
in a file and commit those changes atomically. To do this, start by
populating the file with the current raw XML config from the CIB:

[source,Bash]
----
# pcs cluster cib drbd_cfg
----

Now, using the pcs -f option, make changes to the configuration saved
in the drbd_cfg file. The cluster will not see these changes until
the drbd_cfg file is pushed into the live cluster's CIB later on.

[source,Bash]
----
# pcs -f drbd_cfg resource create WebData ocf:linbit:drbd \
         drbd_resource=wwwdata op monitor interval=60s
# pcs -f drbd_cfg resource master WebDataClone WebData \
         master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 \
         notify=true
# pcs -f drbd_cfg resource show
 ClusterIP	(ocf::heartbeat:IPaddr2) Started
 WebSite	(ocf::heartbeat:apache) Started
 Master/Slave Set: WebDataClone [WebData]
     Stopped: [ WebData:0 WebData:1 ]
----

After you are satisfied with all the changes, you can commit them all
at once by pushing the drbd_cfg file into the live CIB.

[source,Bash]
----
# pcs cluster push cib drbd_cfg
CIB updated
#
# pcs status

Last updated: Fri Sep 14 12:19:49 2012
Last change: Fri Sep 14 12:19:13 2012 via cibadmin on pcmk-1
Stack: corosync
Current DC: pcmk-2 (2) - partition with quorum
Version: 1.1.8-1.el7-60a19ed12fdb4d5c6a6b6767f52e5391e447fec0
2 Nodes configured, unknown expected votes
4 Resources configured.

Online: [ pcmk-1 pcmk-2 ]

Full list of resources:

 ClusterIP	(ocf::heartbeat:IPaddr2):	Started pcmk-1
 WebSite	(ocf::heartbeat:apache):	Started pcmk-1
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ pcmk-1 ]
     Slaves: [ pcmk-2 ]
----

endif::[]

ifdef::crm[]

One handy feature of the crm shell is that you can use it in
interactive mode to make several changes atomically.

First we launch the shell. The prompt will change to indicate you're
in interactive mode.

[source,Bash]
----
# crm
crm(live) #
----

Next we must create a working copy of the current configuration. This
is where all our changes will go. The cluster will not see any of
them until we say it's okay. Notice again how the prompt changes,
this time to indicate that we're no longer looking at the live
cluster.

[source,Bash]
----
crm(live) # cib new drbd
INFO: drbd shadow CIB created
crm(drbd) #
----
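Behind the scenes, the shell is driving Pacemaker's crm_shadow
utility (you can see it credited in the "Last change" line of the
status output later in this chapter). If you ever abandon a session
and need to clean up a stale working copy outside the shell, a sketch
along these lines should work; the exact options can vary between
Pacemaker versions, so check crm_shadow --help first:

[source,Bash]
----
## Show which shadow CIB, if any, the current session is attached to
# crm_shadow --which
## Throw away an abandoned working copy named 'drbd' without committing it
# crm_shadow --delete drbd --force
----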
Now we can create our DRBD clone and display the revised configuration.

[source,Bash]
----
crm(drbd) # configure primitive WebData ocf:linbit:drbd params drbd_resource=wwwdata \
        op monitor interval=60s
crm(drbd) # configure ms WebDataClone WebData meta master-max=1 master-node-max=1 \
        clone-max=2 clone-node-max=1 notify=true
crm(drbd) # configure show
node $id="1702537408" pcmk-1
node $id="1719314624" pcmk-2
primitive ClusterIP ocf:heartbeat:IPaddr2 \
    params ip="192.168.122.120" cidr_netmask="32" \
    op monitor interval="30s"
primitive WebData ocf:linbit:drbd \
    params drbd_resource="wwwdata" \
    op monitor interval="60s"
primitive WebSite ocf:heartbeat:apache \
    params configfile="/etc/httpd/conf/httpd.conf" \
    op monitor interval="1min"
ms WebDataClone WebData \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
location prefer-pcmk-1 WebSite 50: pcmk-1
colocation website-with-ip inf: WebSite ClusterIP
order apache-after-ip inf: ClusterIP WebSite
property $id="cib-bootstrap-options" \
    dc-version="1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff" \
    cluster-infrastructure="corosync" \
    stonith-enabled="false" \
    no-quorum-policy="ignore" \
    last-lrm-refresh="1333446866"
rsc_defaults $id="rsc-options" \
    resource-stickiness="100"
op_defaults $id="op-options" \
    timeout="240s"
----

Once we're happy with the changes, we can tell the cluster to start
using them and use crm_mon to check that everything is functioning.

[source,Bash]
----
crm(drbd) # cib commit drbd
INFO: commited 'drbd' shadow CIB to the cluster
crm(drbd) # quit
bye
# crm_mon -1
============
Last updated: Tue Apr  3 13:50:01 2012
Last change: Tue Apr  3 13:49:46 2012 via crm_shadow on pcmk-1
Stack: corosync
Current DC: pcmk-1 (1702537408) - partition with quorum
Version: 1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff
2 Nodes configured, unknown expected votes
4 Resources configured.
============

Online: [ pcmk-1 pcmk-2 ]

 ClusterIP	(ocf::heartbeat:IPaddr2):	Started pcmk-1
 WebSite	(ocf::heartbeat:apache):	Started pcmk-1
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ pcmk-1 ]
     Slaves: [ pcmk-2 ]
----

endif::[]

[NOTE]
=======
TODO: Include details on adding a second DRBD resource
=======

Now that DRBD is functioning, we can configure a Filesystem resource
to use it. In addition to the filesystem's definition, we also need
to tell the cluster where it can be located (only on the DRBD
Primary) and when it is allowed to start (after the Primary has been
promoted).

ifdef::pcs[]
+We are going to take a shortcut when creating the resource this time, though.
+Instead of explicitly saying we want the 'ocf:heartbeat:Filesystem' script, we
+are only going to ask for 'Filesystem'. We can do this because we know there is
+only one resource script named 'Filesystem' available to Pacemaker, and pcs is
+smart enough to fill in the 'ocf:heartbeat' portion for us correctly in the
+configuration. If there were multiple 'Filesystem' scripts from different ocf
+providers, we would need to specify the exact one we wanted to use.
+
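If you want to check that this shortcut is safe on your system, pcs
can list the resource agents it knows about. The filtering argument
shown here is an assumption about your pcs version's syntax; consult
pcs resource help if it is not accepted:

[source,Bash]
----
## List the available resource agents whose name matches 'Filesystem';
## a single line of output means the unprefixed name is unambiguous
# pcs resource list Filesystem
----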
Once again we will queue up our changes to a file and then push the
new configuration to the cluster as the final step.

[source,Bash]
----
# pcs cluster cib fs_cfg
-# pcs -f fs_cfg resource create WebFS ocf:heartbeat:Filesystem \
+# pcs -f fs_cfg resource create WebFS Filesystem \
         device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" \
         fstype="ext4"
# pcs -f fs_cfg constraint colocation add WebFS WebDataClone INFINITY with-rsc-role=Master
# pcs -f fs_cfg constraint order promote WebDataClone then start WebFS
Adding WebDataClone WebFS (kind: Mandatory) (Options: first-action=promote then-action=start)
----

We also need to tell the cluster that Apache needs to run on the same
machine as the filesystem and that it must be active before Apache
can start.

[source,Bash]
----
# pcs -f fs_cfg constraint colocation add WebSite WebFS INFINITY
# pcs -f fs_cfg constraint order WebFS then WebSite
----

Now review the updated configuration.

[source,Bash]
----
# pcs -f fs_cfg constraint
Location Constraints:
Ordering Constraints:
  start ClusterIP then start WebSite
  WebFS then WebSite
  promote WebDataClone then start WebFS
Colocation Constraints:
  WebSite with ClusterIP
  WebFS with WebDataClone (with-rsc-role:Master)
  WebSite with WebFS
#
# pcs -f fs_cfg resource show
 ClusterIP	(ocf::heartbeat:IPaddr2) Started
 WebSite	(ocf::heartbeat:apache) Started
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ pcmk-1 ]
     Slaves: [ pcmk-2 ]
 WebFS	(ocf::heartbeat:Filesystem) Stopped
----

endif::[]

ifdef::crm[]

Once again we'll use the shell's interactive mode:

[source,Bash]
----
# crm
crm(live) # cib new fs
INFO: fs shadow CIB created
crm(fs) # configure primitive WebFS ocf:heartbeat:Filesystem \
    params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="ext4"
crm(fs) # configure colocation fs_on_drbd inf: WebFS WebDataClone:Master
crm(fs) # configure order WebFS-after-WebData inf: WebDataClone:promote WebFS:start
----
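Before adding more to the working copy, it is worth asking the shell
to sanity-check it. crmsh provides a verify subcommand for this (if
your version lacks it, running the standalone crm_verify tool against
the shadow file achieves the same result):

[source,Bash]
----
crm(fs) # configure verify
----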
We also need to tell the cluster that Apache needs to run on the same
machine as the filesystem and that it must be active before Apache
can start.

[source,Bash]
----
crm(fs) # configure colocation WebSite-with-WebFS inf: WebSite WebFS
crm(fs) # configure order WebSite-after-WebFS inf: WebFS WebSite
----

Time to review the updated configuration:

[source,Bash]
----
crm(fs) # configure show
node $id="1702537408" pcmk-1
node $id="1719314624" pcmk-2
primitive ClusterIP ocf:heartbeat:IPaddr2 \
    params ip="192.168.122.120" cidr_netmask="32" \
    op monitor interval="30s"
primitive WebData ocf:linbit:drbd \
    params drbd_resource="wwwdata" \
    op monitor interval="60s"
primitive WebFS ocf:heartbeat:Filesystem \
    params device="/dev/drbd/by-res/wwwdata" directory="/var/www/html" fstype="ext4"
primitive WebSite ocf:heartbeat:apache \
    params configfile="/etc/httpd/conf/httpd.conf" \
    op monitor interval="1min"
ms WebDataClone WebData \
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
location prefer-pcmk-1 WebSite 50: pcmk-1
colocation WebSite-with-WebFS inf: WebSite WebFS
colocation fs_on_drbd inf: WebFS WebDataClone:Master
colocation website-with-ip inf: WebSite ClusterIP
order WebFS-after-WebData inf: WebDataClone:promote WebFS:start
order WebSite-after-WebFS inf: WebFS WebSite
order apache-after-ip inf: ClusterIP WebSite
property $id="cib-bootstrap-options" \
    dc-version="1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff" \
    cluster-infrastructure="corosync" \
    stonith-enabled="false" \
    no-quorum-policy="ignore" \
    last-lrm-refresh="1333446866"
rsc_defaults $id="rsc-options" \
    resource-stickiness="100"
op_defaults $id="op-options" \
    timeout="240s"
----

endif::[]

After reviewing the new configuration, we again upload it and watch
the cluster put it into effect.

ifdef::pcs[]

[source,Bash]
----
# pcs cluster push cib fs_cfg
CIB updated
# pcs status

Last updated: Fri Aug 10 12:47:01 2012
Last change: Fri Aug 10 12:46:55 2012 via cibadmin on pcmk-1
Stack: corosync
Current DC: pcmk-1 (1) - partition with quorum
Version: 1.1.8-1.el7-60a19ed12fdb4d5c6a6b6767f52e5391e447fec0
2 Nodes configured, unknown expected votes
5 Resources configured.

Online: [ pcmk-1 pcmk-2 ]

Full list of resources:

 ClusterIP	(ocf::heartbeat:IPaddr2):	Started pcmk-1
 WebSite	(ocf::heartbeat:apache):	Started pcmk-1
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ pcmk-1 ]
     Slaves: [ pcmk-2 ]
 WebFS	(ocf::heartbeat:Filesystem):	Started pcmk-1
----

endif::[]

ifdef::crm[]

[source,Bash]
----
crm(fs) # cib commit fs
INFO: commited 'fs' shadow CIB to the cluster
crm(fs) # quit
bye
# crm_mon -1
============
Last updated: Tue Apr  3 13:52:21 2012
Last change: Tue Apr  3 13:52:06 2012 via crm_shadow on pcmk-1
Stack: corosync
Current DC: pcmk-1 (1702537408) - partition with quorum
Version: 1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff
2 Nodes configured, unknown expected votes
5 Resources configured.
============

Online: [ pcmk-1 pcmk-2 ]

 ClusterIP	(ocf::heartbeat:IPaddr2):	Started pcmk-1
 WebSite	(ocf::heartbeat:apache):	Started pcmk-1
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ pcmk-1 ]
     Slaves: [ pcmk-2 ]
 WebFS	(ocf::heartbeat:Filesystem):	Started pcmk-1
----

endif::[]

=== Testing Migration ===

We could shut down the active node again, but another way to safely
simulate recovery is to put the node into what is called "standby
mode". Nodes in this state tell the cluster that they are not allowed
to run resources. Any resources found active there will be moved
elsewhere. This feature can be particularly useful when updating the
resources' packages.
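Standby is recorded as an ordinary node attribute in the CIB, which
is why the status output below credits crm_attribute with the change.
If you ever want to inspect the flag directly, rather than through
pcs or crm, a sketch using the core Pacemaker tool:

[source,Bash]
----
## Query the standby attribute for pcmk-1; it reports 'on' while in standby
# crm_attribute --node pcmk-1 --name standby --query
----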
Put the local node into standby mode and observe the cluster move all
the resources to the other node. Note also that the node's status
will change to indicate that it can no longer host resources.

ifdef::pcs[]

[source,Bash]
----
# pcs cluster standby pcmk-1
# pcs status

Last updated: Fri Sep 14 12:41:12 2012
Last change: Fri Sep 14 12:41:08 2012 via crm_attribute on pcmk-1
Stack: corosync
Current DC: pcmk-1 (1) - partition with quorum
Version: 1.1.8-1.el7-60a19ed12fdb4d5c6a6b6767f52e5391e447fec0
2 Nodes configured, unknown expected votes
5 Resources configured.

Node pcmk-1 (1): standby
Online: [ pcmk-2 ]

Full list of resources:

 ClusterIP	(ocf::heartbeat:IPaddr2):	Started pcmk-2
 WebSite	(ocf::heartbeat:apache):	Started pcmk-2
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ pcmk-2 ]
     Stopped: [ WebData:1 ]
 WebFS	(ocf::heartbeat:Filesystem):	Started pcmk-2
----

endif::[]

ifdef::crm[]

[source,Bash]
----
# crm node standby
# crm_mon -1
============
Last updated: Tue Apr  3 13:59:14 2012
Last change: Tue Apr  3 13:52:36 2012 via crm_attribute on pcmk-1
Stack: corosync
Current DC: pcmk-1 (1702537408) - partition with quorum
Version: 1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff
2 Nodes configured, unknown expected votes
5 Resources configured.
============

Node pcmk-1 (1702537408): standby
Online: [ pcmk-2 ]

 ClusterIP	(ocf::heartbeat:IPaddr2):	Started pcmk-2
 WebSite	(ocf::heartbeat:apache):	Started pcmk-2
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ pcmk-2 ]
     Stopped: [ WebData:1 ]
 WebFS	(ocf::heartbeat:Filesystem):	Started pcmk-2
----

endif::[]

Once we've done everything we needed to on pcmk-1 (in this case
nothing, we just wanted to see the resources move), we can allow the
node to be a full cluster member again.

ifdef::pcs[]

[source,Bash]
----
# pcs cluster unstandby pcmk-1
# pcs status

Last updated: Fri Sep 14 12:43:02 2012
Last change: Fri Sep 14 12:42:57 2012 via crm_attribute on pcmk-1
Stack: corosync
Current DC: pcmk-1 (1) - partition with quorum
Version: 1.1.8-1.el7-60a19ed12fdb4d5c6a6b6767f52e5391e447fec0
2 Nodes configured, unknown expected votes
5 Resources configured.

Online: [ pcmk-1 pcmk-2 ]

Full list of resources:

 ClusterIP	(ocf::heartbeat:IPaddr2):	Started pcmk-2
 WebSite	(ocf::heartbeat:apache):	Started pcmk-2
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ pcmk-2 ]
     Slaves: [ pcmk-1 ]
 WebFS	(ocf::heartbeat:Filesystem):	Started pcmk-2
----

endif::[]

ifdef::crm[]

[source,Bash]
----
# crm node online
# crm_mon -1
============
Last updated: Tue Apr  3 14:00:06 2012
Last change: Tue Apr  3 14:00:00 2012 via crm_attribute on pcmk-1
Stack: corosync
Current DC: pcmk-1 (1702537408) - partition with quorum
Version: 1.1.7-2.fc17-ee0730e13d124c3d58f00016c3376a1de5323cff
2 Nodes configured, unknown expected votes
5 Resources configured.
============

Online: [ pcmk-1 pcmk-2 ]

 ClusterIP	(ocf::heartbeat:IPaddr2):	Started pcmk-2
 WebSite	(ocf::heartbeat:apache):	Started pcmk-2
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ pcmk-2 ]
     Slaves: [ pcmk-1 ]
 WebFS	(ocf::heartbeat:Filesystem):	Started pcmk-2
----

endif::[]

Notice that our resource stickiness settings prevent the services
from migrating back to pcmk-1.
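The stickiness in question is the resource-stickiness="100" default
visible in the configure show output earlier. If you want to confirm
the value in effect, it can be read back from the resource defaults;
the pcs subcommand below is a sketch whose exact spelling varies by
pcs release (crm users can simply run configure show as before):

[source,Bash]
----
## Older pcs releases spell this 'pcs resource rsc defaults';
## newer ones accept 'pcs resource defaults'
# pcs resource rsc defaults
----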