diff --git a/doc/Clusters_from_Scratch/en-US/Ap-Configuration.txt b/doc/Clusters_from_Scratch/en-US/Ap-Configuration.txt index ef11ded1a0..e7805e1ac7 100644 --- a/doc/Clusters_from_Scratch/en-US/Ap-Configuration.txt +++ b/doc/Clusters_from_Scratch/en-US/Ap-Configuration.txt @@ -1,468 +1,455 @@ [appendix] == Configuration Recap == === Final Cluster Configuration === ---- [root@pcmk-1 ~]# pcs resource Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 pcmk-2 ] Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] Clone Set: ClusterIP-clone [ClusterIP] (unique) ClusterIP:0 (ocf::heartbeat:IPaddr2): Started ClusterIP:1 (ocf::heartbeat:IPaddr2): Started Clone Set: WebFS-clone [WebFS] Started: [ pcmk-1 pcmk-2 ] Clone Set: WebSite-clone [WebSite] Started: [ pcmk-1 pcmk-2 ] ---- ----- -[root@pcmk-1 ~]# pcs resource defaults -resource-stickiness: 100 ----- - ---- [root@pcmk-1 ~]# pcs resource op defaults timeout: 240s ---- ---- [root@pcmk-1 ~]# pcs stonith impi-fencing (stonith:fence_ipmilan) Started ---- ----- -[root@pcmk-1 ~]# pcs property -Cluster Properties: - cluster-infrastructure: corosync - cluster-name: mycluster - dc-version: 1.1.12-a9c8177 - have-watchdog: false - last-lrm-refresh: 1419129162 - stonith-enabled: true ----- - ---- [root@pcmk-1 ~]# pcs constraint Location Constraints: Ordering Constraints: start ClusterIP-clone then start WebSite-clone (kind:Mandatory) promote WebDataClone then start WebFS-clone (kind:Mandatory) start WebFS-clone then start WebSite-clone (kind:Mandatory) start dlm-clone then start WebFS-clone (kind:Mandatory) Colocation Constraints: WebSite-clone with ClusterIP-clone (score:INFINITY) WebFS-clone with WebDataClone (score:INFINITY) (with-rsc-role:Master) WebSite-clone with WebFS-clone (score:INFINITY) WebFS-clone with dlm-clone (score:INFINITY) ---- ---- [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Mon Dec 22 11:19:17 2014 Last change: Mon Dec 22 11:03:52 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 11 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: impi-fencing (stonith:fence_ipmilan): Started pcmk-1 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 pcmk-2 ] Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] Clone Set: ClusterIP-clone [ClusterIP] (unique) ClusterIP:0 (ocf::heartbeat:IPaddr2): Started pcmk-2 ClusterIP:1 (ocf::heartbeat:IPaddr2): Started pcmk-1 Clone Set: WebFS-clone [WebFS] Started: [ pcmk-1 pcmk-2 ] Clone Set: WebSite-clone [WebSite] Started: [ pcmk-1 pcmk-2 ] PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- ---- [root@pcmk-1 ~]# pcs cluster cib ---- [source,XML] ---- ---- === Node List === ---- [root@pcmk-1 ~]# pcs status nodes Pacemaker Nodes: Online: pcmk-1 pcmk-2 Standby: Offline: ---- === Cluster Options === ---- [root@pcmk-1 ~]# pcs property Cluster Properties: cluster-infrastructure: corosync cluster-name: mycluster dc-version: 1.1.12-a9c8177 have-watchdog: false last-lrm-refresh: 1419129162 stonith-enabled: true ---- The output shows state information automatically obtained about the cluster, including: + * *cluster-infrastructure* - the cluster communications layer in use (heartbeat or corosync) * *cluster-name* - the cluster name chosen by the administrator when the cluster was created * *dc-version* - the version (including upstream source-code hash) of Pacemaker used on the Designated Controller The output also 
shows options set by the administrator that control the way the cluster operates, including: + * *stonith-enabled=true* - whether the cluster is allowed to use STONITH resources === Resources === ==== Default Options ==== ---- [root@pcmk-1 ~]# pcs resource defaults resource-stickiness: 100 ---- This shows cluster option defaults that apply to every resource that does not explicitly set the option itself. Above: + * *resource-stickiness* - Specify the aversion to moving healthy resources to other machines ==== Fencing ==== ---- [root@pcmk-1 ~]# pcs stonith show ipmi-fencing (stonith:fence_ipmilan) Started [root@pcmk-1 ~]# pcs stonith show ipmi-fencing Resource: ipmi-fencing (class=stonith type=fence_ipmilan) Attributes: ipaddr="10.0.0.1" login="testuser" passwd="acd123" pcmk_host_list="pcmk-1 pcmk-2" Operations: monitor interval=60s (fence-monitor-interval-60s) ---- ==== Service Address ==== Users of the services provided by the cluster require an unchanging address with which to access it. Additionally, we cloned the address so it will be active on both nodes. An iptables rule (created as part of the resource agent) is used to ensure that each request only gets processed by one of the two clone instances. The additional meta options tell the cluster that we want two instances of the clone (one "request bucket" for each node) and that if one node fails, then the remaining node should hold both. ---- [root@pcmk-1 ~]# pcs resource show ClusterIP-clone Clone: ClusterIP-clone Meta Attrs: clone-max=2 clone-node-max=2 globally-unique=true Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.122.120 cidr_netmask=32 clusterip_hash=sourceip Operations: start interval=0s timeout=20s (ClusterIP-start-timeout-20s) stop interval=0s timeout=20s (ClusterIP-stop-timeout-20s) monitor interval=30s (ClusterIP-monitor-interval-30s) ---- ==== DRBD - Shared Storage ==== Here, we define the DRBD service and specify which DRBD resource (from /etc/drbd.d/*.res) it should manage. We make it a master/slave resource and, in order to have an active/active setup, allow both instances to be promoted to master at the same time. We also set the notify option so that the cluster will tell DRBD agent when its peer changes state. ---- [root@pcmk-1 ~]# pcs resource show WebDataClone Master: WebDataClone Meta Attrs: master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true Resource: WebData (class=ocf provider=linbit type=drbd) Attributes: drbd_resource=wwwdata Operations: start interval=0s timeout=240 (WebData-start-timeout-240) promote interval=0s timeout=90 (WebData-promote-timeout-90) demote interval=0s timeout=90 (WebData-demote-timeout-90) stop interval=0s timeout=100 (WebData-stop-timeout-100) monitor interval=60s (WebData-monitor-interval-60s) [root@pcmk-1 ~]# pcs constraint ref WebDataClone Resource: WebDataClone colocation-WebFS-WebDataClone-INFINITY order-WebDataClone-WebFS-mandatory ---- ==== Cluster Filesystem ==== The cluster filesystem ensures that files are read and written correctly. We need to specify the block device (provided by DRBD), where we want it mounted and that we are using GFS2. Again, it is a clone because it is intended to be active on both nodes. The additional constraints ensure that it can only be started on nodes with active DLM and DRBD instances. 
---- [root@pcmk-1 ~]# pcs resource show WebFS-clone Clone: WebFS-clone Resource: WebFS (class=ocf provider=heartbeat type=Filesystem) Attributes: device=/dev/drbd1 directory=/var/www/html fstype=gfs2 Operations: start interval=0s timeout=60 (WebFS-start-timeout-60) stop interval=0s timeout=60 (WebFS-stop-timeout-60) monitor interval=20 timeout=40 (WebFS-monitor-interval-20) [root@pcmk-1 ~]# pcs constraint ref WebFS-clone Resource: WebFS-clone colocation-WebFS-WebDataClone-INFINITY colocation-WebSite-WebFS-INFINITY colocation-WebFS-clone-dlm-clone-INFINITY order-WebDataClone-WebFS-mandatory order-WebFS-WebSite-mandatory order-dlm-clone-WebFS-clone-mandatory ---- ==== Apache ==== Lastly, we have the actual service, Apache. We need only tell the cluster where to find its main configuration file and restrict it to running on nodes that have the required filesystem mounted and the IP address active. ---- [root@pcmk-1 ~]# pcs resource show WebSite-clone Clone: WebSite-clone Resource: WebSite (class=ocf provider=heartbeat type=apache) Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://localhost/server-status Operations: start interval=0s timeout=40s (WebSite-start-timeout-40s) stop interval=0s timeout=60s (WebSite-stop-timeout-60s) monitor interval=1min (WebSite-monitor-interval-1min) [root@pcmk-1 ~]# pcs constraint ref WebSite-clone Resource: WebSite-clone colocation-WebSite-ClusterIP-INFINITY colocation-WebSite-WebFS-INFINITY order-ClusterIP-WebSite-mandatory order-WebFS-WebSite-mandatory ---- diff --git a/doc/Clusters_from_Scratch/en-US/Ap-Corosync-Conf.txt b/doc/Clusters_from_Scratch/en-US/Ap-Corosync-Conf.txt index df14dd1d60..87f4042a85 100644 --- a/doc/Clusters_from_Scratch/en-US/Ap-Corosync-Conf.txt +++ b/doc/Clusters_from_Scratch/en-US/Ap-Corosync-Conf.txt @@ -1,33 +1,33 @@ [appendix] - +[[ap-corosync-conf]] == Sample Corosync Configuration == .Sample +corosync.conf+ for two-node cluster created by `pcs`. ..... totem { version: 2 secauth: off cluster_name: mycluster transport: udpu } nodelist { node { ring0_addr: pcmk-1 nodeid: 1 } node { ring0_addr: pcmk-2 nodeid: 2 } } quorum { provider: corosync_votequorum two_node: 1 } logging { to_syslog: yes } ..... diff --git a/doc/Clusters_from_Scratch/en-US/Ap-Reading.txt b/doc/Clusters_from_Scratch/en-US/Ap-Reading.txt index 26d5d7e117..3b9367418d 100644 --- a/doc/Clusters_from_Scratch/en-US/Ap-Reading.txt +++ b/doc/Clusters_from_Scratch/en-US/Ap-Reading.txt @@ -1,12 +1,12 @@ [appendix] == Further Reading == - Project Website http://www.clusterlabs.org/ - SuSE has a comprehensive guide to cluster commands (though using the `crmsh` command-line shell rather than `pcs`) at: - http://www.suse.com/documentation/sle_ha/book_sleha/?page=/documentation/sle_ha/book_sleha/data/book_sleha.html + https://www.suse.com/documentation/sle_ha/book_sleha/data/book_sleha.html - Corosync http://www.corosync.org/ diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.txt b/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.txt index ca980c42fd..51cebdaa99 100644 --- a/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.txt +++ b/doc/Clusters_from_Scratch/en-US/Ch-Active-Active.txt @@ -1,380 +1,381 @@ = Convert Cluster to Active/Active = The primary requirement for an Active/Active cluster is that the data required for your services is available, simultaneously, on both machines. 
Pacemaker makes no requirement on how this is achieved; you could use a SAN if you had one available, but since DRBD supports multiple Primaries, we can continue to use it here. == Install Cluster Filesystem Software == The only hitch is that we need to use a cluster-aware filesystem. The one we used earlier with DRBD, ext4, is not one of those. Both OCFS2 and GFS2 are supported; here, we will use GFS2. On both nodes, install the GFS2 command-line utilities and the Distributed Lock Manager (DLM) required by cluster filesystems: ---- # yum install -y gfs2-utils dlm ---- == Configure the Cluster for the DLM == The DLM needs to run on both nodes, so we'll start by creating a resource for it (using the *ocf:pacemaker:controld* resource script), and clone it: ---- [root@pcmk-1 ~]# pcs cluster cib dlm_cfg [root@pcmk-1 ~]# pcs -f dlm_cfg resource create dlm ocf:pacemaker:controld op monitor interval=60s [root@pcmk-1 ~]# pcs -f dlm_cfg resource clone dlm clone-max=2 clone-node-max=1 [root@pcmk-1 ~]# pcs -f dlm_cfg resource show ClusterIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started Clone Set: dlm-clone [dlm] Stopped: [ pcmk-1 pcmk-2 ] ---- Activate our new configuration, and see how the cluster responds: ---- [root@pcmk-1 ~]# pcs cluster cib-push dlm_cfg CIB updated [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Sat Dec 20 21:53:44 2014 Last change: Sat Dec 20 21:53:40 2014 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 8 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-2 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-2 ipmi-fencing (stonith:fence_ipmilan): Started pcmk-1 Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- [[GFS2_prep]] == Create and Populate GFS2 Filesystem == Before we do anything to the existing partition, we need to make sure it is unmounted. We do this by telling the cluster to stop the WebFS resource. This will ensure that other resources (in our case, Apache) using WebFS are not only stopped, but stopped in the correct order. ---- [root@pcmk-1 ~]# pcs resource disable WebFS [root@pcmk-1 ~]# pcs resource ClusterIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Stopped Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Stopped Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] ---- You can see that both Apache and WebFS have been stopped, and that *pcmk-2* is the current master for the DRBD device. Now we can create a new GFS2 filesystem on the DRBD device. [WARNING] ========= This will erase all previous content stored on the DRBD device. Ensure you have a copy of any important data. ========= [IMPORTANT] =========== Run the next command on whichever node has the DRBD Primary role. 
Otherwise, you will receive the message: ----- /dev/drbd1: Read-only file system ----- =========== ----- [root@pcmk-2 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t mycluster:web /dev/drbd1 It appears to contain an existing filesystem (ext4) This will destroy any data on /dev/drbd1 Are you sure you want to proceed? [y/n]y Device: /dev/drbd1 Block size: 4096 Device size: 1.00 GB (262127 blocks) Filesystem size: 1.00 GB (262124 blocks) Journals: 2 Resource groups: 3 Locking protocol: "lock_dlm" Lock table: "mycluster:web" UUID: b2b30e6c-8890-33fa-a1ba-3c70edd4b5f0 ----- The `mkfs.gfs2` command required a number of additional parameters: * `-p lock_dlm` specifies that we want to use the kernel's DLM. * `-j 2` indicates that the filesystem should reserve enough space for two journals (one for each node that will access the filesystem). * `-t mycluster:web` specifies the lock table name. The format for this field is +pass:[clustername:fsname]+. For +pass:[clustername]+, we need to use the same value we specified originally with `pcs cluster setup --name` (which is also the value of *cluster_name* in +/etc/corosync/corosync.conf+). If you are unsure what your cluster name is, you can look in +/etc/corosync/corosync.conf+ or execute the command `pcs cluster corosync pcmk-1 | grep cluster_name`. Now we can (re-)populate the new filesystem with data (web pages). We'll create yet another variation on our home page. ----- [root@pcmk-2 ~]# mount /dev/drbd1 /mnt [root@pcmk-2 ~]# cat <<-END >/mnt/index.html My Test Site - GFS2 END [root@pcmk-2 ~]# umount /dev/drbd1 [root@pcmk-2 ~]# drbdadm verify wwwdata ----- == Reconfigure the Cluster for GFS2 == With the WebFS resource stopped, let's update the configuration. ---- [root@pcmk-1 ~]# pcs resource show WebFS Resource: WebFS (class=ocf provider=heartbeat type=Filesystem) Attributes: device=/dev/drbd1 directory=/var/www/html fstype=ext4 Meta Attrs: target-role=Stopped Operations: start interval=0s timeout=60 (WebFS-start-timeout-60) stop interval=0s timeout=60 (WebFS-stop-timeout-60) monitor interval=20 timeout=40 (WebFS-monitor-interval-20) ---- The fstype option needs to be updated to *gfs2* instead of *ext4*. ---- [root@pcmk-1 ~]# pcs resource update WebFS fstype=gfs2 [root@pcmk-1 ~]# pcs resource show WebFS Resource: WebFS (class=ocf provider=heartbeat type=Filesystem) Attributes: device=/dev/drbd1 directory=/var/www/html fstype=gfs2 Meta Attrs: target-role=Stopped Operations: start interval=0s timeout=60 (WebFS-start-timeout-60) stop interval=0s timeout=60 (WebFS-stop-timeout-60) monitor interval=20 timeout=40 (WebFS-monitor-interval-20) ---- GFS2 requires that DLM be running, so we also need to set up new colocation and ordering constraints for it: ---- [root@pcmk-1 ~]# pcs constraint colocation add WebFS with dlm-clone INFINITY [root@pcmk-1 ~]# pcs constraint order dlm-clone then WebFS +Adding dlm-clone WebFS (kind: Mandatory) (Options: first-action=start then-action=start) ---- == Clone the IP address == There's no point making the services active on both locations if we can't reach them both, so let's clone the IP address. The *IPaddr2* resource agent has built-in intelligence for when it is configured as a clone. It will utilize a multicast MAC address to have the local switch send the relevant packets to all nodes in the cluster, together with *iptables clusterip* rules on the nodes so that any given packet will be grabbed by exactly one node. This will give us a simple but effective form of load-balancing requests between our two nodes. 
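(As an optional aside that is not part of the original procedure: once the clone is active, you can check the load-balancing mechanism yourself. Each node should show a *CLUSTERIP* rule in its iptables *INPUT* chain; the exact rule text varies with the IPaddr2 version, so treat this only as a sanity check.)

----
[root@pcmk-1 ~]# iptables -L INPUT -n | grep CLUSTERIP
----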
Let's start a new config, and clone our IP: ---- [root@pcmk-1 ~]# pcs cluster cib loadbalance_cfg [root@pcmk-1 ~]# pcs -f loadbalance_cfg resource clone ClusterIP \ clone-max=2 clone-node-max=2 globally-unique=true ---- * `clone-max=2` tells the resource agent to split packets this many ways. This should equal the number of nodes that can host the IP. * `clone-node-max=2` says that one node can run up to 2 instances of the clone. This should also equal the number of nodes that can host the IP, so that if any node goes down, another node can take over the failed node's "request bucket". Otherwise, requests intended for the failed node would be discarded. * `globally-unique=true` tells the cluster that one clone isn't identical to another (each handles a different "bucket"). This also tells the resource agent to insert *iptables* rules so each host only processes packets in its bucket(s). Notice that when the ClusterIP becomes a clone, the constraints referencing ClusterIP now reference the clone. This is done automatically by pcs. ---- [root@pcmk-1 ~]# pcs -f loadbalance_cfg constraint Location Constraints: Ordering Constraints: start ClusterIP-clone then start WebSite (kind:Mandatory) promote WebDataClone then start WebFS (kind:Mandatory) start WebFS then start WebSite (kind:Mandatory) start dlm-clone then start WebFS (kind:Mandatory) Colocation Constraints: WebSite with ClusterIP-clone (score:INFINITY) WebFS with WebDataClone (score:INFINITY) (with-rsc-role:Master) WebSite with WebFS (score:INFINITY) WebFS with dlm-clone (score:INFINITY) ---- Now we must tell the resource how to decide which requests are processed by which hosts. To do this, we specify the *clusterip_hash* parameter. The value of *sourceip* means that the source IP address of incoming packets will be hashed; each node will process a certain range of hashes. ---- [root@pcmk-1 ~]# pcs -f loadbalance_cfg resource update ClusterIP clusterip_hash=sourceip ---- Load our configuration to the cluster, and see how it responds. ----- [root@pcmk-1 ~]# pcs cluster cib-push loadbalance_cfg CIB updated [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Sat Dec 20 22:05:48 2014 Last change: Sat Dec 20 22:05:34 2014 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 9 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: WebSite (ocf::heartbeat:apache): Stopped Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 ] Slaves: [ pcmk-2 ] WebFS (ocf::heartbeat:Filesystem): Stopped ipmi-fencing (stonith:fence_ipmilan): Started pcmk-1 Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] Clone Set: ClusterIP-clone [ClusterIP] (unique) ClusterIP:0 (ocf::heartbeat:IPaddr2): Started pcmk-1 ClusterIP:1 (ocf::heartbeat:IPaddr2): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ----- If desired, you can demonstrate that all request buckets are working by using a tool such as `arping` from several source hosts to see which host responds to each. == Clone the Filesystem and Apache Resources == Now that we have a cluster filesystem ready to go, and our nodes can load-balance requests to a shared IP address, we can configure the cluster so both nodes mount the filesystem and respond to web requests. Clone the filesystem and Apache resources in a new configuration. Notice how pcs automatically updates the relevant constraints again. 
---- [root@pcmk-1 ~]# pcs cluster cib active_cfg [root@pcmk-1 ~]# pcs -f active_cfg resource clone WebFS [root@pcmk-1 ~]# pcs -f active_cfg resource clone WebSite [root@pcmk-1 ~]# pcs -f active_cfg constraint Location Constraints: Ordering Constraints: start ClusterIP-clone then start WebSite-clone (kind:Mandatory) promote WebDataClone then start WebFS-clone (kind:Mandatory) start WebFS-clone then start WebSite-clone (kind:Mandatory) start dlm-clone then start WebFS-clone (kind:Mandatory) Colocation Constraints: WebSite-clone with ClusterIP-clone (score:INFINITY) WebFS-clone with WebDataClone (score:INFINITY) (with-rsc-role:Master) WebSite-clone with WebFS-clone (score:INFINITY) WebFS-clone with dlm-clone (score:INFINITY) ---- Tell the cluster that it is now allowed to promote both instances to be DRBD Primary (aka. master). ----- [root@pcmk-1 ~]# pcs -f active_cfg resource update WebDataClone master-max=2 ----- Finally, load our configuration to the cluster, and re-enable the WebFS resource (which we disabled earlier). ----- [root@pcmk-1 ~]# pcs cluster cib-push active_cfg CIB updated [root@pcmk-1 ~]# pcs resource enable WebFS ----- After all the processes are started, the status should look similar to this. ----- [root@pcmk-1 ~]# pcs resource Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 pcmk-2 ] Clone Set: dlm-clone [dlm] Started: [ pcmk-1 pcmk-2 ] Clone Set: ClusterIP-clone [ClusterIP] (unique) ClusterIP:0 (ocf::heartbeat:IPaddr2): Started ClusterIP:1 (ocf::heartbeat:IPaddr2): Started Clone Set: WebFS-clone [WebFS] Started: [ pcmk-1 pcmk-2 ] Clone Set: WebSite-clone [WebSite] Started: [ pcmk-1 pcmk-2 ] ----- == Test Failover == Testing failover is left as an exercise for the reader. For example, you can put one node into standby mode, use `pcs status` to confirm that its ClusterIP clone was moved to the other node, and use `arping` to verify that packets are not being lost from any source host. [NOTE] ==== You may find that when a failed node rejoins the cluster, both ClusterIP clones stay on one node, due to the resource stickiness. While this works fine, it effectively eliminates load-balancing and returns the cluster to an active-passive setup again. You can avoid this by disabling stickiness for the IP address resource: ---- [root@pcmk-1 ~]# pcs resource meta ClusterIP resource-stickiness=0 ---- ==== diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.txt b/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.txt index 9e82bd8184..b6f3ab6e08 100644 --- a/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.txt +++ b/doc/Clusters_from_Scratch/en-US/Ch-Active-Passive.txt @@ -1,425 +1,426 @@ = Create an Active/Passive Cluster = == Explore the Existing Configuration == When Pacemaker starts up, it automatically records the number and details of the nodes in the cluster, as well as which stack is being used and the version of Pacemaker being used. The first few lines of output should look like this: ---- [root@pcmk-1 ~]# pcs status Cluster name: mycluster WARNING: no stonith devices and stonith-enabled is not false Last updated: Tue Dec 16 16:15:29 2014 Last change: Tue Dec 16 15:49:47 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 0 Resources configured Online: [ pcmk-1 pcmk-2 ] ---- For those who are not afraid of XML, you can see the raw cluster configuration and status by using the `pcs cluster cib` command. 
.The last XML you'll see in this document ====== ---- [root@pcmk-1 ~]# pcs cluster cib ---- [source,XML] ---- ---- ====== Before we make any changes, it's a good idea to check the validity of the configuration. ---- [root@pcmk-1 ~]# crm_verify -L -V error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity Errors found during check: config not valid ---- As you can see, the tool has found some errors. In order to guarantee the safety of your data, footnote:[If the data is corrupt, there is little point in continuing to make it available] the default for STONITH footnote:[A common node fencing mechanism. Used to ensure data integrity by powering off "bad" nodes] in Pacemaker is *enabled*. However, it also knows when no STONITH configuration has been supplied and reports this as a problem (since the cluster would not be able to make progress if a situation requiring node fencing arose). We will disable this feature for now and configure it later. To disable STONITH, set the *stonith-enabled* cluster option to false: ---- [root@pcmk-1 ~]# pcs property set stonith-enabled=false [root@pcmk-1 ~]# crm_verify -L ---- With the new cluster option set, the configuration is now valid. [WARNING] ========= The use of `stonith-enabled=false` is completely inappropriate for a production cluster. It tells the cluster to simply pretend that failed nodes are safely powered off. Some vendors will refuse to support clusters that have STONITH disabled. We disable STONITH here only to defer the discussion of its configuration, which can differ widely from one installation to the next. See <<_what_is_stonith>> for information on why STONITH is important and details on how to configure it. ========= == Add a Resource == Our first resource will be a unique IP address that the cluster can bring up on either node. Regardless of where any cluster service(s) are running, end users need a consistent address to contact them on. Here, I will choose 192.168.122.120 as the floating address, give it the imaginative name ClusterIP and tell the cluster to check whether it is running every 30 seconds. [WARNING] =========== The chosen address must not already be in use on the network. Do not reuse an IP address one of the nodes already has configured. =========== ---- [root@pcmk-1 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \ ip=192.168.122.120 cidr_netmask=32 op monitor interval=30s ---- Another important piece of information here is *ocf:heartbeat:IPaddr2*. This tells Pacemaker three things about the resource you want to add: * The first field (*ocf* in this case) is the standard to which the resource script conforms and where to find it. * The second field (*heartbeat* in this case) is standard-specific; for OCF resources, it tells the cluster which OCF namespace the resource script is in. * The third field (*IPaddr2* in this case) is the name of the resource script. 
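Before creating a resource, you can also ask `pcs` to show an agent's documentation, including the parameters it accepts. (This is a brief aside rather than part of the original walk-through; the output is omitted here because it is lengthy.)

----
[root@pcmk-1 ~]# pcs resource describe ocf:heartbeat:IPaddr2
----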
To obtain a list of the available resource standards (the *ocf* part of *ocf:heartbeat:IPaddr2*), run: ---- [root@pcmk-1 ~]# pcs resource standards ocf lsb service systemd stonith ---- To obtain a list of the available OCF resource providers (the *heartbeat* part of *ocf:heartbeat:IPaddr2*), run: ---- [root@pcmk-1 ~]# pcs resource providers heartbeat pacemaker ---- Finally, if you want to see all the resource agents available for a specific OCF provider (the *IPaddr2* part of *ocf:heartbeat:IPaddr2*), run: ---- [root@pcmk-1 ~]# pcs resource agents ocf:heartbeat AoEtarget AudibleAlarm CTDB ClusterMon Delay Dummy . . (skipping lots of resources to save space) . IPaddr2 . . . tomcat varnish vmware zabbixserver ---- Now, verify that the IP resource has been added, and display the cluster's status to see that it is now active: ---- [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Tue Dec 16 17:44:40 2014 Last change: Tue Dec 16 17:44:26 2014 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 1 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- == Perform a Failover == Since our ultimate goal is high availability, we should test failover of our new resource before moving on. First, find the node on which the IP address is running. ---- [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Tue Dec 16 17:44:40 2014 Last change: Tue Dec 16 17:44:26 2014 Stack: corosync Current DC: pcmk-1 (1) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 1 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1 ---- You can see that the status of the *ClusterIP* resource is *Started* on a particular node (in this example, *pcmk-1*). Shut down Pacemaker and Corosync on that machine to trigger a failover. ---- [root@pcmk-1 ~]# pcs cluster stop pcmk-1 Stopping Cluster... ---- [NOTE] ====== A cluster command such as +pcs cluster stop pass:[nodename]+ can be run from any node in the cluster, not just the affected node. ====== Verify that pacemaker and corosync are no longer running: ---- [root@pcmk-1 ~]# pcs status Error: cluster is not currently running on this node ---- Go to the other node, and check the cluster status. ---- [root@pcmk-2 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 10:30:56 2014 Last change: Tue Dec 16 17:44:26 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 1 Resources configured Online: [ pcmk-2 ] OFFLINE: [ pcmk-1 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- Notice that *pcmk-1* is *OFFLINE* for cluster purposes (its *PCSD* is still *Online*, allowing it to receive `pcs` commands, but it is not participating in the cluster). -Also notice that *ClusterIP* is now running on pcmk-2 -- failover happened +Also notice that *ClusterIP* is now running on *pcmk-2* -- failover happened automatically, and no errors are reported. [IMPORTANT] .Quorum ==== If a cluster splits into two (or more) groups of nodes that can no longer communicate with each other (aka. 
_partitions_), _quorum_ is used to prevent resources from starting on more nodes than desired, which would risk data corruption. A cluster has quorum when more than half of all known nodes are online in the same partition, or for the mathematically inclined, whenever the following equation is true: .... total_nodes < 2 * active_nodes .... For example, if a 5-node cluster split into 3- and 2-node partitions, the 3-node partition would have quorum and could continue serving resources. If a 6-node cluster split into two 3-node partitions, neither partition would have quorum; pacemaker's default behavior in such cases is to stop all resources, in order to prevent data corruption. Two-node clusters are a special case. By the above definition, a two-node cluster would only have quorum when both nodes are running. This would make the creation of a two-node cluster pointless, footnote:[Some would argue that two-node clusters are always pointless, but that is an argument for another time] but corosync has the ability to treat two-node clusters as if only one node is required for quorum. The `pcs cluster setup` command will automatically configure *two_node: 1* in +corosync.conf+, so a two-node cluster will "just work". If you are using a different cluster shell, you will have to configure +corosync.conf+ appropriately yourself. If you are using older versions of corosync, you will have to ignore quorum at the pacemaker level, using `pcs property set no-quorum-policy=ignore` (or the equivalent command if you are using a different cluster shell). ==== Now, simulate node recovery by restarting the cluster stack on *pcmk-1*, and -check the cluster's status. +check the cluster's status. (It may take a little while before the cluster +gets going on the node, but it eventually will look like the below.) ---- [root@pcmk-1 ~]# pcs cluster start pcmk-1 pcmk-1: Starting Cluster... [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 10:50:11 2014 Last change: Tue Dec 16 17:44:26 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 1 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- [NOTE] ====== With older versions of pacemaker, the cluster might move the IP back to its original location (*pcmk-1*). Usually, this is no longer the case. ====== == Prevent Resources from Moving after Recovery == In most circumstances, it is highly desirable to prevent healthy resources from being moved around the cluster. Moving resources almost always requires a period of downtime. For complex services such as databases, this period can be quite long. To address this, Pacemaker has the concept of resource _stickiness_, which controls how strongly a service prefers to stay running where it is. You may like to think of it as the "cost" of any downtime. By default, Pacemaker assumes there is zero cost associated with moving resources and will do so to achieve "optimal" footnote:[Pacemaker's definition of optimal may not always agree with that of a human's. The order in which Pacemaker processes lists of resources and nodes creates implicit preferences in situations where the administrator has not explicitly specified them.] resource placement. 
We can specify a different stickiness for every resource, but it is often sufficient to change the default. ---- [root@pcmk-1 ~]# pcs resource defaults resource-stickiness=100 [root@pcmk-1 ~]# pcs resource defaults resource-stickiness: 100 ---- [NOTE] ====== Older versions of `pcs` required that `rsc` be added after `resource` in the above commands. ====== diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt b/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt index 3491f7d543..486743cae5 100644 --- a/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt +++ b/doc/Clusters_from_Scratch/en-US/Ch-Installation.txt @@ -1,524 +1,516 @@ = Installation = == Install &DISTRO; &DISTRO_VERSION; == Detailed instructions for installing Fedora are available at http://docs.fedoraproject.org/en-US/Fedora/21/html/Installation_Guide/ in a number of languages. The abbreviated version is as follows: Point your browser to https://getfedora.org/, choose a flavor (Server is an appropriate choice), and download the installation image appropriate to your hardware. Burn the installation image to a DVD or USB drive footnote:[http://docs.fedoraproject.org/en-US/Fedora/21/html/Installation_Guide/sect-preparing-boot-media.html] and boot from it, or use the image to boot a virtual machine. After starting the installation, select your language and keyboard layout at the welcome screen. footnote:[http://docs.fedoraproject.org/en-US/Fedora/21/html/Installation_Guide/sect-installation-graphical-mode.html] At this point, you get a chance to tweak the default installation options. In the *NETWORK & HOSTNAME* section you'll want to: - Assign your machine a host name. I happen to control the clusterlabs.org domain name, so I will use pcmk-1.clusterlabs.org here. - Assign a fixed IPv4 address. In this example, I'll use 192.168.122.101. [IMPORTANT] =========== Do not accept the default network settings. Cluster machines should never obtain an IP address via DHCP, because DHCP's periodic address renewal will interfere with corosync. If you miss this step during installation, it can easily be fixed later. You will have to navigate to *system settings* and select *network*. From there, you can select what device to configure. =========== In the *Software Selection* section (try saying that 10 times quickly), leave all *Add-Ons* unchecked so that we see everything that gets installed. We'll install any extra software we need later. [IMPORTANT] =========== By default, Fedora uses LVM for partitioning, which allows us to dynamically change the amount of space allocated to a given partition. However, by default it also allocates all free space to the +/+ (aka. *root*) partition, which cannot be dynamically _reduced_ in size (dynamic increases are fine, by the way). So if you plan on following the DRBD or GFS2 portions of this guide, you should reserve at least 1GiB of space on each machine from which to create a shared volume. To do so, enter the *Installation Destination* section where you are given an opportunity to reduce the size of the *root* partition (after choosing which hard drive you wish to install to). If you want the reserved space to be available within an LVM volume group, be sure to select *Modify...* next to the volume group name and change the *Size policy:* to *Fixed* or *As large as possible*. =========== It is highly recommended to enable NTP on your cluster nodes. Doing so ensures all nodes agree on the current time and makes reading log files significantly easier. 
You can do this in the *DATE & TIME* section. footnote:[http://docs.fedoraproject.org/en-US/Fedora/21/html/Installation_Guide/sect-installation-gui-date-and-time.html] Once you've completed the installation, set a root password as instructed. For the purposes of this document, it is not necessary to create any additional users. After the node reboots, you'll see a (possibly mangled) login prompt on the console. Login using *root* and the password you created earlier. image::images/Console.png["Initial Console",align="center",scaledwidth="65%"] [NOTE] ====== From here on, we're going to be working exclusively from the terminal. ====== == Configure the OS == === Verify Networking === Ensure that the machine has the static IP address you configured earlier. ----- [root@pcmk-1 ~]# ip addr 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 52:54:00:d7:d6:08 brd ff:ff:ff:ff:ff:ff inet 192.168.122.101/24 brd 192.168.122.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::5054:ff:fed7:d608/64 scope link valid_lft forever preferred_lft forever ----- [NOTE] ===== -If you ever need to change the node's IP address from the command line, follow these instructions: +If you ever need to change the node's IP address from the command line, follow +these instructions, replacing *${device}* with the name of your network device: .... [root@pcmk-1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-${device} # manually edit as desired [root@pcmk-1 ~]# nmcli dev disconnect ${device} [root@pcmk-1 ~]# nmcli con reload ${device} [root@pcmk-1 ~]# nmcli con up ${device} .... This makes *NetworkManager* aware that a change was made on the config file. ===== Next, ensure that the routes are as expected: ----- [root@pcmk-1 ~]# ip route default via 192.168.122.1 dev eth0 proto static metric 1024 192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.101 ----- If there is no line beginning with *default via*, then you may need to add a line such as [source,Bash] -GATEWAY=192.168.122.1 +GATEWAY="192.168.122.1" -to +/etc/sysconfig/network+ and restart the network. +to the device configuration using the same process as described above for +changing the IP address. Now, check for connectivity to the outside world. Start small by testing whether we can reach the gateway we configured. ----- [root@pcmk-1 ~]# ping -c 1 192.168.122.1 PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data. 64 bytes from 192.168.122.1: icmp_req=1 ttl=64 time=0.249 ms --- 192.168.122.1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms ----- Now try something external; choose a location you know should be available. ----- [root@pcmk-1 ~]# ping -c 1 www.google.com PING www.l.google.com (173.194.72.106) 56(84) bytes of data. 64 bytes from tf-in-f106.1e100.net (173.194.72.106): icmp_req=1 ttl=41 time=167 ms --- www.l.google.com ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 167.618/167.618/167.618/0.000 ms ----- === Login Remotely === The console isn't a very friendly place to work from, so we will now switch to accessing the machine remotely via SSH where we can use copy and paste, etc. 
From another host, check whether we can see the new host at all: ----- beekhof@f16 ~ # ping -c 1 192.168.122.101 PING 192.168.122.101 (192.168.122.101) 56(84) bytes of data. 64 bytes from 192.168.122.101: icmp_req=1 ttl=64 time=1.01 ms --- 192.168.122.101 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 1.012/1.012/1.012/0.000 ms ----- Next, login as root via SSH. ----- -beekhof@f16 ~ # ssh -l root 192.168.122.11 -root@192.168.122.11's password: -Last login: Fri Mar 30 19:41:19 2012 from 192.168.122.1 +beekhof@f16 ~ # ssh -l root 192.168.122.101 +The authenticity of host '192.168.122.101 (192.168.122.101)' can't be established. +ECDSA key fingerprint is 6e:b7:8f:e2:4c:94:43:54:a8:53:cc:20:0f:29:a4:e0. +Are you sure you want to continue connecting (yes/no)? yes +Warning: Permanently added '192.168.122.101' (ECDSA) to the list of known hosts. +root@192.168.122.101's password: +Last login: Tue Aug 11 13:14:39 2015 [root@pcmk-1 ~]# ----- === Apply Updates === Apply any package updates released since your installation image was created: ---- [root@pcmk-1 ~]# yum update ---- === Disable Security During Testing === To simplify this guide and focus on the aspects directly connected to clustering, we will now disable the machine's firewall and SELinux installation. [WARNING] =========== These actions create significant security issues and should not be performed on machines that will be exposed to the outside world. =========== //// TODO: Create an Appendix that deals with (at least) re-enabling the firewall. //// ---- [root@pcmk-1 ~]# setenforce 0 [root@pcmk-1 ~]# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config [root@pcmk-1 ~]# systemctl disable firewalld.service [root@pcmk-1 ~]# systemctl stop firewalld.service [root@pcmk-1 ~]# iptables --flush ---- [NOTE] =========== If you are using Fedora 17 or earlier or are using the iptables service for your firewall, the commands would be: ---- [root@pcmk-1 ~]# setenforce 0 [root@pcmk-1 ~]# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config [root@pcmk-1 ~]# systemctl disable iptables.service [root@pcmk-1 ~]# rm -f /etc/systemd/system/basic.target.wants/iptables.service [root@pcmk-1 ~]# systemctl stop iptables.service [root@pcmk-1 ~]# iptables --flush ---- =========== === Use Short Node Names === During installation, we filled in the machine's fully qualified domain name (FQDN), which can be rather long when it appears in cluster logs and status output. See for yourself how the machine identifies itself: (((Nodes, short name))) ---- [root@pcmk-1 ~]# uname -n pcmk-1.clusterlabs.org [root@pcmk-1 ~]# dnsdomainname clusterlabs.org ---- (((Nodes, Domain name (Query)))) The output from the second command is fine, but we really don't need the domain name included in the basic host details. To address this, we need to use the `hostnamectl` tool to strip off the domain name. ---- [root@pcmk-1 ~]# hostnamectl set-hostname $(uname -n | sed s/\\..*//) ---- (((Nodes, Domain name (Remove from host name)))) Now check the machine is using the correct names ---- [root@pcmk-1 ~]# uname -n pcmk-1 [root@pcmk-1 ~]# dnsdomainname clusterlabs.org ---- -If it concerns you that the shell prompt has not been updated, simply -log out and back in again. - == Repeat for Second Node == Repeat the Installation steps so far, so that you have two nodes ready to have the cluster software installed. 
For the purposes of this document, the additional node is called pcmk-2 with address 192.168.122.102. == Configure Communication Between Nodes == === Configure Host Name Resolution === Confirm that you can communicate between the two new nodes: ---- [root@pcmk-1 ~]# ping -c 3 192.168.122.102 PING 192.168.122.102 (192.168.122.102) 56(84) bytes of data. 64 bytes from 192.168.122.102: icmp_seq=1 ttl=64 time=0.343 ms 64 bytes from 192.168.122.102: icmp_seq=2 ttl=64 time=0.402 ms 64 bytes from 192.168.122.102: icmp_seq=3 ttl=64 time=0.558 ms --- 192.168.122.102 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.343/0.434/0.558/0.092 ms ---- Now we need to make sure we can communicate with the machines by their name. If you have a DNS server, add additional entries for the two machines. Otherwise, you'll need to add the machines to +/etc/hosts+ on both nodes. Below are the entries for my cluster nodes: ---- [root@pcmk-1 ~]# grep pcmk /etc/hosts 192.168.122.101 pcmk-1.clusterlabs.org pcmk-1 192.168.122.102 pcmk-2.clusterlabs.org pcmk-2 ---- We can now verify the setup by again using ping: ---- [root@pcmk-1 ~]# ping -c 3 pcmk-2 PING pcmk-2.clusterlabs.org (192.168.122.101) 56(84) bytes of data. 64 bytes from pcmk-1.clusterlabs.org (192.168.122.101): icmp_seq=1 ttl=64 time=0.164 ms 64 bytes from pcmk-1.clusterlabs.org (192.168.122.101): icmp_seq=2 ttl=64 time=0.475 ms 64 bytes from pcmk-1.clusterlabs.org (192.168.122.101): icmp_seq=3 ttl=64 time=0.186 ms --- pcmk-2.clusterlabs.org ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2001ms rtt min/avg/max/mdev = 0.164/0.275/0.475/0.141 ms ---- === Configure SSH === SSH is a convenient and secure way to copy files and perform commands remotely. For the purposes of this guide, we will create a key without a password (using the -N option) so that we can perform remote actions without being prompted. (((SSH))) [WARNING] ========= Unprotected SSH keys (those without a password) are not recommended for servers exposed to the outside world. We use them here only to simplify the demo. ========= Create a new key and allow anyone with that key to log in: .Creating and Activating a new SSH Key ---- [root@pcmk-1 ~]# ssh-keygen -t dsa -f ~/.ssh/id_dsa -N "" Generating public/private dsa key pair. Your identification has been saved in /root/.ssh/id_dsa. Your public key has been saved in /root/.ssh/id_dsa.pub. The key fingerprint is: 91:09:5c:82:5a:6a:50:08:4e:b2:0c:62:de:cc:74:44 root@pcmk-1.clusterlabs.org - The key's randomart image is: +--[ DSA 1024]----+ |==.ooEo.. | |X O + .o o | | * A + | | + . | | . S | | | | | | | | | +-----------------+ - [root@pcmk-1 ~]# cp ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys ---- (((Creating and Activating a new SSH Key))) -Install the key on the other node and test that you can now run commands -remotely, without being prompted. - -.Installing the SSH Key on Another Host +Install the key on the other node: ---- [root@pcmk-1 ~]# scp -r ~/.ssh pcmk-2: The authenticity of host 'pcmk-2 (192.168.122.102)' can't be established. -RSA key fingerprint is b1:2b:55:93:f1:d9:52:2b:0f:f2:8a:4e:ae:c6:7c:9a. +ECDSA key fingerprint is a4:f5:b2:34:9d:86:2b:34:a2:87:37:b9:ca:68:52:ec. Are you sure you want to continue connecting (yes/no)? yes -Warning: Permanently added 'pcmk-2,192.168.122.102' (RSA) to the list of known hosts.root@pcmk-2's password: +Warning: Permanently added 'pcmk-2,192.168.122.102' (ECDSA) to the list of known hosts. 
+root@pcmk-2's password: id_dsa.pub 100% 616 0.6KB/s 00:00 id_dsa 100% 672 0.7KB/s 00:00 known_hosts 100% 400 0.4KB/s 00:00 authorized_keys 100% 616 0.6KB/s 00:00 +---- + +Test that you can now run commands remotely, without being prompted: +---- [root@pcmk-1 ~]# ssh pcmk-2 -- uname -n pcmk-2 ---- == Install the Cluster Software == Fire up a shell on both nodes and run the following to install pacemaker, and while we're at it, some command-line tools to make our lives easier: ---- # yum install -y pacemaker pcs psmisc ---- [IMPORTANT] =========== This document will show commands that need to be executed on both nodes with a simple `#` prompt. Be sure to run them on each node individually. =========== [NOTE] =========== -This document uses pcs for cluster management. Other alternatives, -such as crmsh, are available, but their syntax +This document uses `pcs` for cluster management. Other alternatives, +such as `crmsh`, are available, but their syntax will differ from the examples used here. =========== == Configure the Cluster Software == === Enable pcs Daemon === Before the cluster can be configured, the pcs daemon must be started and enabled to start at boot time on each node. This daemon works with the pcs command-line interface to manage synchronizing the corosync configuration across all nodes in the cluster. Start and enable the daemon by issuing the following commands on each node: ---- # systemctl start pcsd.service # systemctl enable pcsd.service +ln -s '/usr/lib/systemd/system/pcsd.service' '/etc/systemd/system/multi-user.target.wants/pcsd.service' ---- The installed packages will create a *hacluster* user with a disabled password. While this is fine for running `pcs` commands locally, the account needs a login password in order to perform such tasks as syncing the corosync configuration, or starting and stopping the cluster on other nodes. This tutorial will make use of such commands, so now we will set a password for the *hacluster* user, using the same password on both nodes: ---- # passwd hacluster -password: +Changing password for user hacluster. +New password: +Retype new password: +passwd: all authentication tokens updated successfully. ---- [NOTE] =========== Alternatively, to script this process or set the password on a -different machine from the one you're logged into, you can use +different machine from the one you're logged into, you can use the `--stdin` option for `passwd`: ---- [root@pcmk-1 ~]# ssh pcmk-2 -- 'echo redhat1 | passwd --stdin hacluster' ---- =========== === Configure Corosync === On either node, use `pcs cluster auth` to authenticate as the *hacluster* user: ---- [root@pcmk-1 ~]# pcs cluster auth pcmk-1 pcmk-2 Username: hacluster -Password: +Password: pcmk-1: Authorized pcmk-2: Authorized ---- [IMPORTANT] =========== The version of pcs shipped with Fedora 21 will bind only to the host's IPv6 address in some circumstances. If you get errors with `pcs cluster auth`, add this line before the first *server.run* line in +/usr/lib/pcsd/ssl.rb+ to bind to IPv4 only: ---- webrick_options[:BindAddress] = '0.0.0.0' ---- And restart pcsd: ---- [root@pcmk-1 ~]# systemctl restart pcsd ---- This is a temporary workaround that will get removed if the pcsd package is later updated. =========== Next, use `pcs cluster setup` to generate and synchronize the corosync configuration: ---- [root@pcmk-1 ~]# pcs cluster setup --name mycluster pcmk-1 pcmk-2 Shutting down pacemaker/corosync services... 
Redirecting to /bin/systemctl stop pacemaker.service Redirecting to /bin/systemctl stop corosync.service Killing any remaining services... Removing all cluster configuration files... pcmk-1: Succeeded pcmk-2: Succeeded ---- If you received an authorization error for either of those commands, make sure you configured the *hacluster* user account on each node with the same password. [NOTE] ====== Early versions of pcs required that `--name` be omitted from the above command. -If using a different cluster shell such as crmsh rather than pcs, you must -manually create a corosync.conf and copy it to all nodes. +If you are not using `pcs` for cluster administration, +follow whatever procedures are appropriate for your tools +to create a corosync.conf and copy it to all nodes. -The pcs command will configure corosync to use UDP unicast transport; if you +The `pcs` command will configure corosync to use UDP unicast transport; if you choose to use multicast instead, choose a multicast address carefully. footnote:[For some subtle issues, see the now-defunct http://web.archive.org/web/20101211210054/http://29west.com/docs/THPM/multicast-address-assignment.html or the more detailed treatment in http://www.cisco.com/c/dam/en/us/support/docs/ip/ip-multicast/ipmlt_wp.pdf[Cisco's Guidelines for Enterprise IP Multicast Address Allocation] paper.] ====== The final /etc/corosync.conf configuration on each node should look -something like the sample in Appendix B, Sample Corosync Configuration. - -[NOTE] -====== -With versions of Corosync before 2.0, Pacemaker could obtain membership and -quorum from a custom Corosync plugin. This plugin also had the capability to -start Pacemaker automatically when Corosync was started. -Neither behavior is possible with Corosync 2.0 and later, as support for -plugins was removed. - -Because Pacemaker made use of the plugin for message routing, a cluster node -using an older Corosync cannot talk to one using Corosync 2.0 or later. -Rolling upgrades between these versions are therefore not possible, and an -alternate strategy -footnote:[http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ap-upgrade.html] -must be used. -====== +something like the sample in <<ap-corosync-conf,Sample Corosync Configuration>>. diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt b/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt index fe47caae42..3301683f58 100644 --- a/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt +++ b/doc/Clusters_from_Scratch/en-US/Ch-Shared-Storage.txt @@ -1,510 +1,510 @@ = Replicate Storage Using DRBD = Even if you're serving up static websites, having to manually synchronize the contents of that website to all the machines in the cluster is not ideal. For dynamic websites, such as a wiki, it's not even an option. Not everyone can afford network-attached storage, but somehow the data needs to be kept in sync. Enter DRBD, which can be thought of as network-based RAID-1. footnote:[See http://www.drbd.org/ for details.] == Install the DRBD Packages == DRBD itself is included in the upstream kernel, footnote:[Since version 2.6.33] but we do need some utilities to use it effectively. On both nodes, run: ---- # yum install -y drbd-pacemaker drbd-udev ---- == Allocate a Disk Volume for DRBD == DRBD will need its own block device on each node. This can be a physical disk partition or logical volume, of whatever size you need for your data. For this document, we will use a 1GiB logical volume, which is more than sufficient for a single HTML file and (later) GFS2 metadata. 
---- [root@pcmk-1 ~]# vgdisplay | grep -e Name -e Free VG Name fedora-server_pcmk-1 Free PE / Size 511 / 2.00 GiB [root@pcmk-1 ~]# lvcreate --name drbd-demo --size 1G fedora-server_pcmk-1 Logical volume "drbd-demo" created [root@pcmk-1 ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert drbd-demo fedora-server_pcmk-1 -wi-a----- 1.00g root fedora-server_pcmk-1 -wi-ao---- 5.00g swap fedora-server_pcmk-1 -wi-ao---- 1.00g ---- -Repeat this on the second node, making sure to use the same size. +Repeat for the second node, making sure to use the same size: ---- [root@pcmk-1 ~]# ssh pcmk-2 -- lvcreate --name drbd-demo --size 1G fedora-server_pcmk-2 Logical volume "drbd-demo" created ---- == Configure DRBD == There is no series of commands for building a DRBD configuration, so simply run this on both nodes to use this sample configuration: ---- # cat <<END >/etc/drbd.d/wwwdata.res resource wwwdata { protocol C; meta-disk internal; device /dev/drbd1; syncer { verify-alg sha1; } net { allow-two-primaries; } on pcmk-1 { disk /dev/fedora-server_pcmk-1/drbd-demo; address 192.168.122.101:7789; } on pcmk-2 { disk /dev/fedora-server_pcmk-2/drbd-demo; address 192.168.122.102:7789; } } END ---- [IMPORTANT] ========= Edit the file to use the hostnames, IP addresses and logical volume paths of your nodes if they differ from the ones used in this guide. ========= [NOTE] ======= Detailed information on the directives used in this configuration (and other alternatives) is available at http://www.drbd.org/users-guide/ch-configure.html The *allow-two-primaries* option would not normally be used in an active/passive cluster. We are adding it here for the convenience of changing to an active/active cluster later. ======= == Initialize DRBD == With the configuration in place, we can now get DRBD running. These commands create the local metadata for the DRBD resource, ensure the DRBD kernel module is loaded, and bring up the DRBD resource. Run them on one node: ---- # drbdadm create-md wwwdata initializing activity log NOT initializing bitmap Writing meta data... New drbd meta data block successfully created. # modprobe drbd # drbdadm up wwwdata ---- We can confirm DRBD's status on this node: ---- # cat /proc/drbd version: 8.4.5 (api:1/proto:86-101) srcversion: 153833F4A69E341D3F3E707 1: cs:WFConnection ro:Secondary/Unknown ds:Inconsistent/DUnknown C r----s ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508 ---- Because we have not yet initialized the data, this node's data is marked as *Inconsistent*. Because we have not yet initialized the second node, the local state is *WFConnection* (waiting for connection), and the partner node's status is marked as *Unknown*. Now, repeat the above commands on the second node. This time, when we check the status, it shows: ---- # cat /proc/drbd version: 8.4.5 (api:1/proto:86-101) srcversion: 153833F4A69E341D3F3E707 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----- ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508 ---- You can see the state has changed to *Connected*, meaning the two DRBD nodes are communicating properly, and both nodes are in *Secondary* role with *Inconsistent* data. To make the data consistent, we need to tell DRBD which node should be considered to have the correct data. 
In this case, since we are creating a new resource, both have garbage, so we'll just pick pcmk-1 and run this command on it: ---- [root@pcmk-1 ~]# drbdadm primary --force wwwdata ---- [NOTE] ====== -In DRBD 8.3 and earlier, the equivalent command is: ----- -[root@pcmk-1 ~]# drbdadm -- --overwrite-data-of-peer primary wwwdata ----- +If you are using an older version of DRBD, the required syntax may be different. +See the documentation for your version for how to perform these commands. ====== If we check the status immediately, we'll see something like this: ---- [root@pcmk-1 ~]# cat /proc/drbd version: 8.4.5 (api:1/proto:86-101) srcversion: 153833F4A69E341D3F3E707 1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----- ns:2872 nr:0 dw:0 dr:3784 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1045636 [>....................] sync'ed: 0.4% (1045636/1048508)K finish: 0:10:53 speed: 1,436 (1,436) K/sec ---- We can see that this node has the *Primary* role, the partner node has the *Secondary* role, this node's data is now considered *UpToDate*, the partner node's data is still *Inconsistent*, and a progress bar shows how far along the partner node is in synchronizing the data. After a while, the sync should finish, and you'll see something like: ---- [root@pcmk-1 ~]# cat /proc/drbd version: 8.4.5 (api:1/proto:86-101) srcversion: 153833F4A69E341D3F3E707 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----- ns:1048508 nr:0 dw:0 dr:1049420 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 ---- Both sets of data are now *UpToDate*, and we can proceed to creating and populating a filesystem for our WebSite resource's documents. == Populate the DRBD Disk == On the node with the primary role (pcmk-1 in this example), create a filesystem on the DRBD device: ---- [root@pcmk-1 ~]# mkfs.ext4 /dev/drbd1 mke2fs 1.42.11 (09-Jul-2014) Creating filesystem with 262127 4k blocks and 65536 inodes Filesystem UUID: 26879260-9077-4d6d-ad69-7d31d3d8d8d4 Superblock backups stored on blocks: 32768, 98304, 163840, 229376 Allocating group tables: done Writing inode tables: done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: done ---- [NOTE] ==== In this example, we create an ext4 filesystem with no special options. In a production environment, you should choose a filesystem type and options that are suitable for your application. ==== Mount the newly created filesystem, populate it with our web document, then unmount it (the cluster will handle mounting and unmounting it later): ---- [root@pcmk-1 ~]# mount /dev/drbd1 /mnt [root@pcmk-1 ~]# cat <<-END >/mnt/index.html <html> <body>My Test Site - DRBD</body> </html> END [root@pcmk-1 ~]# umount /dev/drbd1 ---- == Configure the Cluster for the DRBD device == One handy feature `pcs` has is the ability to queue up several changes into a file and commit those changes atomically. To do this, start by populating the file with the current raw XML config from the CIB. ---- -# pcs cluster cib drbd_cfg +[root@pcmk-1 ~]# pcs cluster cib drbd_cfg ---- Using the `pcs -f` option, make changes to the configuration saved in the +drbd_cfg+ file. These changes will not be seen by the cluster until the +drbd_cfg+ file is pushed into the live cluster's CIB later. Here, we create a cluster resource for the DRBD device, and an additional _clone_ resource to allow the resource to run on both nodes at the same time.
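Before doing so, you can optionally ask `pcs` to describe the *ocf:linbit:drbd* agent, to confirm that it is installed and to see which parameters it accepts. This step is purely informational, and the exact output depends on the version of the agent installed:

----
[root@pcmk-1 ~]# pcs resource describe ocf:linbit:drbd
----

Now create the *WebData* resource and the master/slave set that manages it: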
---- [root@pcmk-1 ~]# pcs -f drbd_cfg resource create WebData ocf:linbit:drbd \ drbd_resource=wwwdata op monitor interval=60s [root@pcmk-1 ~]# pcs -f drbd_cfg resource master WebDataClone WebData \ master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 \ notify=true [root@pcmk-1 ~]# pcs -f drbd_cfg resource show ClusterIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started Master/Slave Set: WebDataClone [WebData] Stopped: [ pcmk-1 pcmk-2 ] ---- After you are satisfied with all the changes, you can commit them all at once by pushing the drbd_cfg file into the live CIB. ---- [root@pcmk-1 ~]# pcs cluster cib-push drbd_cfg CIB updated ---- [NOTE] ==== Early versions of `pcs` required `push cib` in place of `cib-push` above. ==== Let's see what the cluster did with the new configuration: ---- [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 16:39:43 2014 Last change: Wed Dec 17 16:39:30 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 4 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1 WebSite (ocf::heartbeat:apache): Started pcmk-1 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 ] Slaves: [ pcmk-2 ] PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- We can see that *WebDataClone* (our DRBD device) is running as master (DRBD's primary role) on *pcmk-1* and slave (DRBD's secondary role) on *pcmk-2*. [IMPORTANT] ==== The resource agent should load the DRBD module when needed if it's not already loaded. If that does not happen, configure your operating system to load the module at boot time. For &DISTRO; &DISTRO_VERSION;, you would run this on both nodes: ---- # echo drbd >/etc/modules-load.d/drbd.conf ---- ==== == Configure the Cluster for the Filesystem == Now that we have a working DRBD device, we need to mount its filesystem. In addition to defining the filesystem, we also need to tell the cluster where it can be located (only on the DRBD Primary) and when it is allowed to start (after the Primary was promoted). We are going to take a shortcut when creating the resource this time. Instead of explicitly saying we want the *ocf:heartbeat:Filesystem* script, we are only going to ask for *Filesystem*. We can do this because we know there is only one resource script named *Filesystem* available to pacemaker, and that pcs is smart enough to fill in the *ocf:heartbeat:* portion for us correctly in the configuration. If there were multiple *Filesystem* scripts from different OCF providers, we would need to specify the exact one we wanted. Once again, we will queue our changes to a file and then push the new configuration to the cluster as the final step. ---- [root@pcmk-1 ~]# pcs cluster cib fs_cfg [root@pcmk-1 ~]# pcs -f fs_cfg resource create WebFS Filesystem \ device="/dev/drbd1" directory="/var/www/html" \ fstype="ext4" [root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add WebFS with WebDataClone INFINITY with-rsc-role=Master [root@pcmk-1 ~]# pcs -f fs_cfg constraint order promote WebDataClone then start WebFS Adding WebDataClone WebFS (kind: Mandatory) (Options: first-action=promote then-action=start) ---- We also need to tell the cluster that Apache needs to run on the same machine as the filesystem and that it must be active before Apache can start. 
---- [root@pcmk-1 ~]# pcs -f fs_cfg constraint colocation add WebSite with WebFS INFINITY [root@pcmk-1 ~]# pcs -f fs_cfg constraint order WebFS then WebSite Adding WebFS WebSite (kind: Mandatory) (Options: first-action=start then-action=start) ---- Review the updated configuration. ---- [root@pcmk-1 ~]# pcs -f fs_cfg constraint Location Constraints: Ordering Constraints: start ClusterIP then start WebSite (kind:Mandatory) promote WebDataClone then start WebFS (kind:Mandatory) start WebFS then start WebSite (kind:Mandatory) Colocation Constraints: WebSite with ClusterIP (score:INFINITY) WebFS with WebDataClone (score:INFINITY) (with-rsc-role:Master) WebSite with WebFS (score:INFINITY) +---- +---- [root@pcmk-1 ~]# pcs -f fs_cfg resource show ClusterIP (ocf::heartbeat:IPaddr2): Started WebSite (ocf::heartbeat:apache): Started Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 ] Slaves: [ pcmk-2 ] WebFS (ocf::heartbeat:Filesystem): Stopped ---- After reviewing the new configuration, upload it and watch the cluster put it into effect. ---- [root@pcmk-1 ~]# pcs cluster cib-push fs_cfg [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 17:02:45 2014 Last change: Wed Dec 17 17:02:42 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 5 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-1 WebSite (ocf::heartbeat:apache): Started pcmk-1 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-1 ] Slaves: [ pcmk-2 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-1 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- == Test Cluster Failover == Previously, we used `pcs cluster stop pcmk-1` to stop all cluster services on *pcmk-1*, failing over the cluster resources, but there is another way to safely simulate node failure. We can put the node into _standby mode_. Nodes in this state continue to run corosync and pacemaker but are not allowed to run resources. Any resources found active there will be moved elsewhere. This feature can be particularly useful when performing system administration tasks such as updating packages used by cluster resources. Put the active node into standby mode, and observe the cluster move all the resources to the other node. The node's status will change to indicate that it can no longer host resources. ---- [root@pcmk-1 ~]# pcs cluster standby pcmk-1 [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 17:14:05 2014 Last change: Wed Dec 17 17:14:02 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 5 Resources configured Node pcmk-1 (1): standby Online: [ pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-2 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Stopped: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- Once we've done everything we needed to on pcmk-1 (in this case nothing, we just wanted to see the resources move), we can allow the node to be a full cluster member again. 
---- [root@pcmk-1 ~]# pcs cluster unstandby pcmk-1 [root@pcmk-1 ~]# pcs status Cluster name: mycluster Last updated: Wed Dec 17 17:15:36 2014 Last change: Wed Dec 17 17:15:33 2014 Stack: corosync Current DC: pcmk-2 (2) - partition with quorum Version: 1.1.12-a9c8177 2 Nodes configured 5 Resources configured Online: [ pcmk-1 pcmk-2 ] Full list of resources: ClusterIP (ocf::heartbeat:IPaddr2): Started pcmk-2 WebSite (ocf::heartbeat:apache): Started pcmk-2 Master/Slave Set: WebDataClone [WebData] Masters: [ pcmk-2 ] Slaves: [ pcmk-1 ] WebFS (ocf::heartbeat:Filesystem): Started pcmk-2 PCSD Status: pcmk-1: Online pcmk-2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled ---- Notice that *pcmk-1* is back to the *Online* state, and that the cluster resources stay where they are due to our resource stickiness settings configured earlier. diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Stonith.txt b/doc/Clusters_from_Scratch/en-US/Ch-Stonith.txt index 0ad6c2ee2d..43772c3633 100644 --- a/doc/Clusters_from_Scratch/en-US/Ch-Stonith.txt +++ b/doc/Clusters_from_Scratch/en-US/Ch-Stonith.txt @@ -1,140 +1,151 @@ = Configure STONITH = == What is STONITH? == STONITH (Shoot The Other Node In The Head aka. fencing) protects your data from being corrupted by rogue nodes or unintended concurrent access. Just because a node is unresponsive doesn't mean it has stopped accessing your data. The only way to be 100% sure that your data is safe, is to use STONITH to ensure that the node is truly offline before allowing the data to be accessed from another node. STONITH also has a role to play in the event that a clustered service cannot be stopped. In this case, the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service elsewhere. == Choose a STONITH Device == It is crucial that your STONITH device can allow the cluster to differentiate between a node failure and a network failure. -The biggest mistake people make in choosing a STONITH device is to -use a remote power switch (such as many on-board IPMI controllers) that -shares power with the node it controls. In such cases, the cluster -cannot be sure if the node is really offline, or active and suffering -from a network fault. +A common mistake people make when choosing a STONITH device is to use a remote +power switch (such as many on-board IPMI controllers) that shares power with +the node it controls. If the power fails in such a case, the cluster cannot be +sure whether the node is really offline, or active and suffering from a network +fault, so the cluster will stop all resources to avoid a possible split-brain +situation. Likewise, any device that relies on the machine being active (such as -SSH-based "devices" used during testing) are inappropriate. +SSH-based "devices" sometimes used during testing) is inappropriate. == Configure the Cluster for STONITH == -. Configure the STONITH device itself to be able to fence your nodes and accept - fencing requests. - . Install the STONITH agent(s). To see what packages are available, run `yum - search fence-agents fence-virt`. Be sure to install the package(s) on all - cluster nodes. + search fence-`. Be sure to install the package(s) on all cluster nodes. + +. Configure the STONITH device itself to be able to fence your nodes and accept + fencing requests. This includes any necessary configuration on the device and + on the nodes, and any firewall or SELinux changes needed. 
Test the + communication between the device and your nodes. . Find the correct STONITH agent script: `pcs stonith list` . Find the parameters associated with the device: +pcs stonith describe pass:[agent_name]+ . Create a local copy of the CIB: `pcs cluster cib stonith_cfg` . Create the fencing resource: +pcs -f stonith_cfg stonith create pass:[stonith_id stonith_device_type [stonith_device_options]]+ . Enable STONITH in the cluster: `pcs -f stonith_cfg property set stonith-enabled=true` . If the device does not know how to fence nodes based on their uname, you may also need to set the special *pcmk_host_map* parameter. See `man stonithd` for details. . If the device does not support the *list* command, you may also need to set the special *pcmk_host_list* and/or *pcmk_host_check* parameters. See `man stonithd` for details. . If the device does not expect the victim to be specified with the *port* parameter, you may also need to set the special *pcmk_host_argument* parameter. See `man stonithd` for details. . Commit the new configuration: `pcs cluster cib-push stonith_cfg` . Once the STONITH resource is running, test it (you might want to stop the cluster on that machine first): +stonith_admin --reboot pass:[nodename]+ == Example == For this example, assume we have a chassis containing four nodes and an IPMI device active on 10.0.0.1. Following the steps above would go something like this: -Step 1: Configure the IP address, authentication credentials, etc. in the IPMI device itself. +Step 1: Install the *fence-agents-ipmilan* package on both nodes. -Step 2: Install the *fence-agents-ipmilan* package on both nodes. +Step 2: Configure the IP address, authentication credentials, etc. in the IPMI device itself. Step 3: Choose the *fence_ipmilan* STONITH agent. Step 4: Obtain the agent's possible parameters: ---- [root@pcmk-1 ~]# pcs stonith describe fence_ipmilan Stonith options for: fence_ipmilan ipport: TCP/UDP port to use for connection with device inet6_only: Forces agent to use IPv6 addresses only ipaddr (required): IP Address or Hostname passwd_script: Script to retrieve password method: Method to fence (onoff|cycle) inet4_only: Forces agent to use IPv4 addresses only passwd: Login password or passphrase lanplus: Use Lanplus to improve security of connection auth: IPMI Lan Auth type. cipher: Ciphersuite to use (same as ipmitool -C parameter) privlvl: Privilege level on IPMI device action (required): Fencing Action login: Login Name verbose: Verbose mode debug: Write debug information to given file version: Display version information and exit help: Display help and exit power_wait: Wait X seconds after issuing ON/OFF login_timeout: Wait X seconds for cmd prompt after login power_timeout: Test X seconds for status change after ON/OFF delay: Wait X seconds before fencing is started ipmitool_path: Path to ipmitool binary shell_timeout: Wait X seconds for cmd prompt after issuing command retry_on: Count of attempts to retry power on sudo: Use sudo (without password) when calling 3rd party sotfware. stonith-timeout: How long to wait for the STONITH action (reboot, on, off) to complete per a stonith device. priority: The priority of the stonith resource. Devices are tried in order of highest priority to lowest. pcmk_host_map: A mapping of host names to ports numbers for devices that do not support host names. pcmk_host_list: A list of machines controlled by this device (Optional unless pcmk_host_check=static-list). pcmk_host_check: How to determine which machines are controlled by the device. 
---- Step 5: `pcs cluster cib stonith_cfg` Step 6: Here are example parameters for creating our STONITH resource: ---- -# pcs -f stonith_cfg stonith create ipmi-fencing fence_ipmilan \ +[root@pcmk-1 ~]# pcs -f stonith_cfg stonith create ipmi-fencing fence_ipmilan \ pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 login=testuser \ passwd=acd123 op monitor interval=60s -# pcs -f stonith_cfg stonith +[root@pcmk-1 ~]# pcs -f stonith_cfg stonith ipmi-fencing (stonith:fence_ipmilan): Stopped ---- Steps 7-10: Enable STONITH in the cluster: ---- -# pcs -f stonith_cfg property set stonith-enabled=true -# pcs -f stonith_cfg property +[root@pcmk-1 ~]# pcs -f stonith_cfg property set stonith-enabled=true +[root@pcmk-1 ~]# pcs -f stonith_cfg property Cluster Properties: cluster-infrastructure: corosync cluster-name: mycluster dc-version: 1.1.12-a9c8177 have-watchdog: false stonith-enabled: true ---- Step 11: `pcs cluster cib-push stonith_cfg` + +Step 12: Test: +---- +[root@pcmk-1 ~]# pcs cluster stop pcmk-2 +[root@pcmk-1 ~]# stonith_admin --reboot pcmk-2 +---- + +After a successful test, log in to any rebooted nodes, and start the cluster +(with `pcs cluster start`). diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Tools.txt b/doc/Clusters_from_Scratch/en-US/Ch-Tools.txt index f3bcd8c700..1390384ee1 100644 --- a/doc/Clusters_from_Scratch/en-US/Ch-Tools.txt +++ b/doc/Clusters_from_Scratch/en-US/Ch-Tools.txt @@ -1,119 +1,119 @@ = Pacemaker Tools = == Simplify administration using a cluster shell == In the dark past, configuring Pacemaker required the administrator to read and write XML. In true UNIX style, there were also a number of different commands that specialized in different aspects of querying and updating the cluster. All of that has been greatly simplified with the creation of unified command-line shells (and GUIs) that hide all the messy XML scaffolding. These shells take all the individual aspects required for managing and -configuring a cluster, and packs them into one simple-to-use command +configuring a cluster, and pack them into one simple-to-use command line tool. They even allow you to queue up several changes at once and commit them atomically. -There are currently two command-line shells that people use, `pcs` and +Two popular command-line shells are `pcs` and `crmsh`. This edition of Clusters from Scratch is based on `pcs`. [NOTE] =========== The two shells share many concepts but the scope, layout and syntax do differ, so make sure you read the version of this guide that corresponds to the software installed on your system. =========== [IMPORTANT] =========== Since `pcs` has the ability to manage all aspects of the cluster (both corosync and pacemaker), it requires a specific cluster stack to be in use: corosync 2.0 or later with votequorum plus Pacemaker 1.1.8 or later. =========== == Explore pcs == Start by taking some time to familiarize yourself with what `pcs` can do. ---- [root@pcmk-1 ~]# pcs Usage: pcs [-f file] [-h] [commands]... Control and configure pacemaker and corosync.
Options: -h, --help Display usage and exit -f file Perform actions on file instead of active CIB --debug Print all network traffic and external commands run --version Print pcs version information Commands: cluster Configure cluster options and nodes resource Manage cluster resources stonith Configure fence devices constraint Set resource constraints property Set pacemaker properties status View cluster status config Print full cluster configuration ---- As you can see, the different aspects of cluster management are separated into categories: resource, cluster, stonith, property, constraint, and status. To discover the functionality available in each of these categories, one can issue the command +pcs pass:[category] help+. Below is an example of all the options available under the status category. ---- [root@pcmk-1 ~]# pcs status help Usage: pcs status [commands]... View current cluster and resource status Commands: [status] View all information about the cluster and resources resources View current status of cluster resources groups View currently configured groups and their resources cluster View current cluster status corosync View current membership information as seen by corosync nodes [corosync|both|config] View current status of nodes from pacemaker. If 'corosync' is specified, print nodes currently configured in corosync, if 'both' is specified, print nodes from both corosync & pacemaker. If 'config' is specified, print nodes from corosync & pacemaker configuration. pcsd ... Show the current status of pcsd on the specified nodes xml View xml version of status (output from crm_mon -r -1 -X) ---- Additionally, if you are interested in the version and supported cluster stack(s) available with your Pacemaker installation, run: ---- [root@pcmk-1 ~]# pacemakerd --features Pacemaker 1.1.12 (Build: a9c8177) Supporting v3.0.9: generated-manpages agent-manpages ascii-docs publican-docs ncurses libqb-logging libqb-ipc upstart systemd nagios corosync-native atomic-attrd acls ---- [NOTE] ====== If the SNMP and/or email options are not listed, then Pacemaker was not built to support them. This may be by the choice of your distribution, or the required libraries may not have been available. Please contact whoever supplied you with the packages for more details. ====== diff --git a/doc/shared/en-US/pacemaker-intro.txt b/doc/shared/en-US/pacemaker-intro.txt index 6b898c98cc..8e5f451671 100644 --- a/doc/shared/en-US/pacemaker-intro.txt +++ b/doc/shared/en-US/pacemaker-intro.txt @@ -1,169 +1,169 @@ == What Is 'Pacemaker'? == Pacemaker is a 'cluster resource manager', that is, a logic responsible for a life-cycle of deployed software -- indirectly perhaps even whole systems or their interconnections -- under its control within a set of -computers (a.k.a. 'cluster nodes', 'nodes' for short) and driven by +computers (a.k.a. 'nodes') and driven by prescribed rules. It achieves maximum availability for your cluster services (a.k.a. 'resources') by detecting and recovering from node- and resource-level failures by making use of the messaging and membership capabilities provided by your preferred cluster infrastructure (either http://www.corosync.org/[Corosync] or http://linux-ha.org/wiki/Heartbeat[Heartbeat]), and possibly by utilizing other parts of the overall cluster stack. .High Availability Clusters [NOTE] For *the goal of minimal downtime* a term 'high availability' was coined and together with its acronym, 'HA', is well-established in the sector. 
To differentiate this sort of clusters from high performance computing ('HPC') ones, should a context require it (apparently, not the case in this document), using 'HA cluster' is an option. Pacemaker's key features include: * Detection and recovery of node and service-level failures * Storage agnostic, no requirement for shared storage * Resource agnostic, anything that can be scripted can be clustered * Supports 'fencing' (also referred to as the 'STONITH' acronym, <> later on) for ensuring data integrity * Supports large and small clusters * Supports both quorate and resource-driven clusters * Supports practically any redundancy configuration * Automatically replicated configuration that can be updated from any node * Ability to specify cluster-wide service ordering, colocation and anti-colocation * Support for advanced service types ** Clones: for services which need to be active on multiple nodes ** Multi-state: for services with multiple modes (e.g. master/slave, primary/secondary) * Unified, scriptable cluster management tools == Pacemaker Architecture == At the highest level, the cluster is made up of three pieces: * *Non-cluster-aware components*. These pieces include the resources themselves; scripts that start, stop and monitor them; and a local daemon that masks the differences between the different standards these scripts implement. Even though interactions of these resources when run as multiple instances can resemble a distributed system, they still lack the proper HA mechanisms and/or autonomous cluster-wide governance as subsumed in the following item. * *Resource management*. Pacemaker provides the brain that processes and reacts to events regarding the cluster. These events include nodes joining or leaving the cluster; resource events caused by failures, maintenance and scheduled activities; and other administrative actions. Pacemaker will compute the ideal state of the cluster and plot a path to achieve it after any of these events. This may include moving resources, stopping nodes and even forcing them offline with remote power switches. * *Low-level infrastructure*. Projects like 'Corosync', 'CMAN' and 'Heartbeat' provide reliable messaging, membership and quorum information about the cluster. When combined with Corosync, Pacemaker also supports popular open source cluster filesystems.{empty}footnote:[ Even though Pacemaker also supports Heartbeat, the filesystems need to use the stack for messaging and membership, and Corosync seems to be what they're standardizing on. Technically, it would be possible for them to support Heartbeat as well, but there seems little interest in this. ] Due to past standardization within the cluster filesystem community, cluster filesystems make use of a common 'distributed lock manager', which makes use of Corosync for its messaging and membership capabilities (which nodes are up/down) and Pacemaker for fencing services. 
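On a running cluster node, these layers are visible as ordinary processes. As a quick, informal check (a sketch only; process names and installation paths vary between Pacemaker versions and distributions), you could run:

----
# ps axf | grep -e corosync -e pacemaker
----

You should see the corosync process alongside pacemakerd and the daemons it spawns, which are described in the following sections.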
.The Pacemaker Stack image::images/pcmk-stack.png["The Pacemaker stack",width="10cm",height="7.5cm",align="center"] === Internal Components === Pacemaker itself is composed of five key components: * 'Cluster Information Base' ('CIB') * 'Cluster Resource Management daemon' ('CRMd') * 'Local Resource Management daemon' ('LRMd') * 'Policy Engine' ('PEngine' or 'PE') * Fencing daemon ('STONITHd') .Internal Components image::images/pcmk-internals.png["Subsystems of a Pacemaker cluster",align="center",scaledwidth="65%"] The CIB uses XML to represent both the cluster's configuration and current state of all resources in the cluster. The contents of the CIB are automatically kept in sync across the entire cluster and are used by the PEngine to compute the ideal state of the cluster and how it should be achieved. This list of instructions is then fed to the 'Designated Controller' ('DC'). Pacemaker centralizes all cluster decision making by electing one of the CRMd instances to act as a master. Should the elected CRMd process (or the node it is on) fail, a new one is quickly established. The DC carries out the PEngine's instructions in the required order by passing them to either the Local Resource Management daemon (LRMd) or CRMd peers on other nodes via the cluster messaging infrastructure (which in turn passes them on to their LRMd process). The peer nodes all report the results of their operations back to the DC and, based on the expected and actual results, will either execute any actions that needed to wait for the previous one to complete, or abort processing and ask the PEngine to recalculate the ideal cluster state based on the unexpected results. In some cases, it may be necessary to power off nodes in order to protect shared data or complete resource recovery. For this, Pacemaker comes with STONITHd. [[s-intro-stonith]] .STONITH [NOTE] *STONITH* is an acronym for 'Shoot-The-Other-Node-In-The-Head', a recommended practice that misbehaving node is best to be promptly 'fenced' (shut off, cut from shared resources or otherwise immobilized), and is usually implemented with a remote power switch. In Pacemaker, STONITH devices are modeled as resources (and configured in the CIB) to enable them to be easily monitored for failure, however STONITHd takes care of understanding the STONITH topology such that its clients simply request a node be fenced, and it does the rest. == Types of Pacemaker Clusters == Pacemaker makes no assumptions about your environment. This allows it to support practically any http://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations[redundancy configuration] including 'Active/Active', 'Active/Passive', 'N+1', 'N+M', 'N-to-1' and 'N-to-N'. .Active/Passive Redundancy image::images/pcmk-active-passive.png["Active/Passive Redundancy",width="10cm",height="7.5cm",align="center"] Two-node Active/Passive clusters using Pacemaker and 'DRBD' are a cost-effective solution for many High Availability situations. .Shared Failover image::images/pcmk-shared-failover.png["Shared Failover",width="10cm",height="7.5cm",align="center"] By supporting many nodes, Pacemaker can dramatically reduce hardware costs by allowing several active/passive clusters to be combined and share a common backup node. .N to N Redundancy image::images/pcmk-active-active.png["N to N Redundancy",width="10cm",height="7.5cm",align="center"] When shared storage is available, every node can potentially be used for failover. Pacemaker can even run multiple copies of services to spread out the workload.
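As a concrete illustration of that last point (a sketch only, shown here with the `pcs` shell used elsewhere in this document, and assuming a resource named *WebSite* has already been defined), running multiple copies of a service is largely a matter of cloning its resource definition:

----
# pcs resource clone WebSite
----

By default, the cluster will start one copy of the clone on each node; later chapters of this guide walk through a complete active/active configuration step by step.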