diff --git a/doc/Clusters_from_Scratch/en-US/Book_Info.xml b/doc/Clusters_from_Scratch/en-US/Book_Info.xml
index e436c02aac..4eb6943f70 100644
--- a/doc/Clusters_from_Scratch/en-US/Book_Info.xml
+++ b/doc/Clusters_from_Scratch/en-US/Book_Info.xml
@@ -1,67 +1,67 @@
%BOOK_ENTITIES; ]>
Clusters from Scratch
Creating Active/Passive and Active/Active Clusters on Fedora
Pacemaker
1.1
8
- 0
+ 1
The purpose of this document is to provide a start-to-finish guide to building an example active/passive cluster with Pacemaker and show how it can be converted to an active/active one. The example cluster will use:
&DISTRO; &DISTRO_VERSION; as the host operating system
Corosync to provide messaging and membership services,
Pacemaker to perform resource management,
DRBD as a cost-effective alternative to shared storage,
GFS2 as the cluster filesystem (in active/active mode)
Given the graphical nature of the Fedora install process, a number of screenshots are included. However, the guide is primarily composed of commands, the reasons for executing them, and their expected outputs.
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt b/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt
index ca81b217f7..7ed4f808b7 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt
+++ b/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt
@@ -1,164 +1,26 @@
= Read-Me-First =

== The Scope of this Document ==

Computer clusters can be used to provide highly available services or resources. The redundancy of multiple machines is used to guard against failures of many types.

This document will walk through the installation and setup of simple clusters using the &DISTRO; distribution, version &DISTRO_VERSION;.

The clusters described here will use Pacemaker and Corosync to provide resource management and messaging. Required packages and modifications to their configuration files are described, along with the use of the Pacemaker command-line tool for generating the XML used for cluster control.

Pacemaker is a central component and provides the resource management required in these systems. This management includes detecting and recovering from the failure of various nodes, resources and services under its control.

When more in-depth information is required, and for real-world usage, please refer to the http://www.clusterlabs.org/doc/[Pacemaker Explained] manual.

-== What Is Pacemaker? ==
-
-Pacemaker is a cluster resource manager.
-
-It achieves maximum availability for your cluster services (a.k.a. resources) by detecting and recovering from node- and resource-level failures, making use of the messaging and membership capabilities provided by your preferred cluster infrastructure (either http://www.corosync.org/[Corosync] or http://linux-ha.org/wiki/Heartbeat[Heartbeat]).
-
-Pacemaker's key features include:
-
- * Detection and recovery of node- and service-level failures
- * Storage agnostic, no requirement for shared storage
- * Resource agnostic, anything that can be scripted can be clustered
- * Supports fencing (a.k.a. STONITH) for ensuring data integrity
- * Supports large and small clusters
- * Supports both quorate and resource-driven clusters
- * Supports practically any redundancy configuration
- * Automatically replicated configuration that can be updated from any node
- * Ability to specify cluster-wide service ordering, colocation and anti-colocation
- * Support for advanced service types
- ** Clones: for services which need to be active on multiple nodes
- ** Multi-state: for services with multiple modes (e.g. master/slave, primary/secondary)
- * Unified, scriptable cluster management tools.
-
-== Pacemaker Architecture ==
-
-At the highest level, the cluster is made up of three pieces:
-
- * Non-cluster-aware components. These pieces include the resources themselves; scripts that start, stop and monitor them; and a local daemon that masks the differences between the different standards these scripts implement.
-
- * Resource management. Pacemaker provides the brain that processes and reacts to events regarding the cluster. These events include nodes joining or leaving the cluster; resource events caused by failures, maintenance and scheduled activities; and other administrative actions. Pacemaker will compute the ideal state of the cluster and plot a path to achieve it after any of these events. This may include moving resources, stopping nodes and even forcing them offline with remote power switches.
-
- * Low-level infrastructure. Projects like Corosync, CMAN and Heartbeat provide reliable messaging, membership and quorum information about the cluster.
-
-When combined with Corosync, Pacemaker also supports popular open source cluster filesystems.
-footnote:[Even though Pacemaker also supports Heartbeat, the filesystems need to use the stack for messaging and membership, and Corosync seems to be what they're standardizing on. Technically, it would be possible for them to support Heartbeat as well, but there seems little interest in this.]
-
-Due to past standardization within the cluster filesystem community, cluster filesystems use a common distributed lock manager, which relies on Corosync for its messaging and membership capabilities (which nodes are up/down) and on Pacemaker for fencing services.
-
-.The Pacemaker Stack
-image::images/pcmk-stack.png["The Pacemaker stack",width="10cm",height="7.5cm",align="center"]
-
-=== Internal Components ===
-
-Pacemaker itself is composed of five key components:
-
- * Cluster Information Base (CIB)
- * Cluster Resource Management daemon (CRMd)
- * Local Resource Management daemon (LRMd)
- * Policy Engine (PEngine or PE)
- * Fencing daemon (STONITHd)
-
-.Internal Components
-image::images/pcmk-internals.png["Subsystems of a Pacemaker cluster",align="center",scaledwidth="65%"]
-
-The CIB uses XML to represent both the cluster's configuration and the current state of all resources in the cluster. The contents of the CIB are automatically kept in sync across the entire cluster and are used by the PEngine to compute the ideal state of the cluster and how it should be achieved.
-
-This list of instructions is then fed to the Designated Controller (DC). Pacemaker centralizes all cluster decision-making by electing one of the CRMd instances to act as a master. Should the elected CRMd process (or the node it is on) fail, a new one is quickly established.
-
-The DC carries out the PEngine's instructions in the required order by passing them to either the Local Resource Management daemon (LRMd) or CRMd peers on other nodes via the cluster messaging infrastructure (which in turn passes them on to their LRMd process).
-
-The peer nodes all report the results of their operations back to the DC and, based on the expected and actual results, will either execute any actions that needed to wait for the previous one to complete, or abort processing and ask the PEngine to recalculate the ideal cluster state based on the unexpected results.
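By way of illustration, the CIB described above can be inspected directly on a live cluster. This is a minimal sketch, assuming a running Pacemaker/Corosync cluster with the pcs and Pacemaker command-line tools installed; the XML shown is abbreviated and illustrative:

----
# pcs cluster cib | head -n 4    # dumps the live CIB; same data as `cibadmin --query`
<cib crm_feature_set="3.0.9" validate-with="pacemaker-2.0" epoch="9" num_updates="0" ...>
  <configuration>
    <crm_config>
----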
-
-In some cases, it may be necessary to power off nodes in order to protect shared data or complete resource recovery. For this, Pacemaker comes with STONITHd.
-
-STONITH is an acronym for Shoot-The-Other-Node-In-The-Head and is usually implemented with a remote power switch.
-
-In Pacemaker, STONITH devices are modeled as resources (and configured in the CIB) to enable them to be easily monitored for failure; however, STONITHd takes care of understanding the STONITH topology, such that its clients simply request a node be fenced, and it does the rest.
-
-== Types of Pacemaker Clusters ==
-
-Pacemaker makes no assumptions about your environment. This allows it to support practically any http://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations[redundancy configuration] including Active/Active, Active/Passive, N+1, N+M, N-to-1 and N-to-N.
-
-.Active/Passive Redundancy
-image::images/pcmk-active-passive.png["Active/Passive Redundancy",width="10cm",height="7.5cm",align="center"]
-
-Two-node Active/Passive clusters using Pacemaker and DRBD are a cost-effective solution for many High Availability situations.
-
-.Shared Failover
-image::images/pcmk-shared-failover.png["Shared Failover",width="10cm",height="7.5cm",align="center"]
-
-By supporting many nodes, Pacemaker can dramatically reduce hardware costs by allowing several active/passive clusters to be combined and share a common backup node.
-
-.N to N Redundancy
-image::images/pcmk-active-active.png["N to N Redundancy",width="10cm",height="7.5cm",align="center"]
-
-When shared storage is available, every node can potentially be used for failover. Pacemaker can even run multiple copies of services to spread out the workload.
+include::../../shared/en-US/pacemaker-intro.txt[]
diff --git a/doc/Clusters_from_Scratch/en-US/Clusters_from_Scratch.ent b/doc/Clusters_from_Scratch/en-US/Clusters_from_Scratch.ent
index eafd2819e2..5a675ebd55 100644
--- a/doc/Clusters_from_Scratch/en-US/Clusters_from_Scratch.ent
+++ b/doc/Clusters_from_Scratch/en-US/Clusters_from_Scratch.ent
@@ -1,6 +1,6 @@
-
+
diff --git a/doc/Clusters_from_Scratch/en-US/Revision_History.xml b/doc/Clusters_from_Scratch/en-US/Revision_History.xml
index 0df7bbc577..03d367ea73 100644
--- a/doc/Clusters_from_Scratch/en-US/Revision_History.xml
+++ b/doc/Clusters_from_Scratch/en-US/Revision_History.xml
@@ -1,62 +1,68 @@
%BOOK_ENTITIES; ]>
Revision History
1-0
Mon May 17 2010
Andrew Beekhof <andrew@beekhof.net>
Import from Pages.app
2-0
Wed Sep 22 2010
Raoul Scarazzini <rasca@miamammausalinux.org>
Italian translation
3-0
Wed Feb 9 2011
Andrew Beekhof <andrew@beekhof.net>
Updated for Fedora 13
4-0
Wed Oct 5 2011
Andrew Beekhof <andrew@beekhof.net>
Update the GFS2 section to use CMAN
5-0
Fri Feb 10 2012
Andrew Beekhof <andrew@beekhof.net>
Generate docbook content from asciidoc sources
6-0
Tue Jul 3 2012
Andrew Beekhof <andrew@beekhof.net>
Updated for Fedora 17
7-0
Fri Sep 14 2012
David Vossel <dvossel@redhat.com>
Updated for pcs
8-0
Mon Jan 05 2015
Ken Gaillot <kgaillot@redhat.com>
Updated for Fedora 21
+
+8-1
+Thu Jan 08 2015
+Ken Gaillot <kgaillot@redhat.com>
+Minor corrections, plus use include file for intro
+
diff --git a/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt b/doc/shared/en-US/pacemaker-intro.txt
similarity index 85%
copy from doc/Clusters_from_Scratch/en-US/Ch-Intro.txt
copy to doc/shared/en-US/pacemaker-intro.txt
index ca81b217f7..bf432fc26d 100644
--- a/doc/Clusters_from_Scratch/en-US/Ch-Intro.txt
+++ b/doc/shared/en-US/pacemaker-intro.txt
@@ -1,164 +1,141 @@
-= Read-Me-First =
-
-== The Scope of this Document ==
-
-Computer clusters can be used to provide highly available services or resources. The redundancy of multiple machines is used to guard against failures of many types.
-
-This document will walk through the installation and setup of simple clusters using the &DISTRO; distribution, version &DISTRO_VERSION;.
-
-The clusters described here will use Pacemaker and Corosync to provide resource management and messaging. Required packages and modifications to their configuration files are described, along with the use of the Pacemaker command-line tool for generating the XML used for cluster control.
-
-Pacemaker is a central component and provides the resource management required in these systems. This management includes detecting and recovering from the failure of various nodes, resources and services under its control.
-
-When more in-depth information is required, and for real-world usage, please refer to the http://www.clusterlabs.org/doc/[Pacemaker Explained] manual.
== What Is Pacemaker? ==

Pacemaker is a cluster resource manager.

It achieves maximum availability for your cluster services (a.k.a. resources) by detecting and recovering from node- and resource-level failures, making use of the messaging and membership capabilities provided by your preferred cluster infrastructure (either http://www.corosync.org/[Corosync] or http://linux-ha.org/wiki/Heartbeat[Heartbeat]).

Pacemaker's key features include:

 * Detection and recovery of node- and service-level failures
 * Storage agnostic, no requirement for shared storage
 * Resource agnostic, anything that can be scripted can be clustered
 * Supports fencing (a.k.a. STONITH) for ensuring data integrity
 * Supports large and small clusters
 * Supports both quorate and resource-driven clusters
 * Supports practically any redundancy configuration
 * Automatically replicated configuration that can be updated from any node
 * Ability to specify cluster-wide service ordering, colocation and anti-colocation
 * Support for advanced service types
 ** Clones: for services which need to be active on multiple nodes
 ** Multi-state: for services with multiple modes (e.g. master/slave, primary/secondary)
 * Unified, scriptable cluster management tools.

== Pacemaker Architecture ==

At the highest level, the cluster is made up of three pieces:

 * Non-cluster-aware components. These pieces include the resources themselves; scripts that start, stop and monitor them; and a local daemon that masks the differences between the different standards these scripts implement.

 * Resource management. Pacemaker provides the brain that processes and reacts to events regarding the cluster. These events include nodes joining or leaving the cluster; resource events caused by failures, maintenance and scheduled activities; and other administrative actions. Pacemaker will compute the ideal state of the cluster and plot a path to achieve it after any of these events. This may include moving resources, stopping nodes and even forcing them offline with remote power switches.

 * Low-level infrastructure. Projects like Corosync, CMAN and Heartbeat provide reliable messaging, membership and quorum information about the cluster.

When combined with Corosync, Pacemaker also supports popular open source cluster filesystems.
footnote:[Even though Pacemaker also supports Heartbeat, the filesystems need to use the stack for messaging and membership, and Corosync seems to be what they're standardizing on.
Technically, it would be possible for them to support Heartbeat as well, but there seems little interest in this.]

Due to past standardization within the cluster filesystem community, cluster filesystems use a common distributed lock manager, which relies on Corosync for its messaging and membership capabilities (which nodes are up/down) and on Pacemaker for fencing services.

.The Pacemaker Stack
image::images/pcmk-stack.png["The Pacemaker stack",width="10cm",height="7.5cm",align="center"]

=== Internal Components ===

Pacemaker itself is composed of five key components:

 * Cluster Information Base (CIB)
 * Cluster Resource Management daemon (CRMd)
 * Local Resource Management daemon (LRMd)
 * Policy Engine (PEngine or PE)
 * Fencing daemon (STONITHd)

.Internal Components
image::images/pcmk-internals.png["Subsystems of a Pacemaker cluster",align="center",scaledwidth="65%"]

The CIB uses XML to represent both the cluster's configuration and the current state of all resources in the cluster. The contents of the CIB are automatically kept in sync across the entire cluster and are used by the PEngine to compute the ideal state of the cluster and how it should be achieved.

This list of instructions is then fed to the Designated Controller (DC). Pacemaker centralizes all cluster decision-making by electing one of the CRMd instances to act as a master. Should the elected CRMd process (or the node it is on) fail, a new one is quickly established.

The DC carries out the PEngine's instructions in the required order by passing them to either the Local Resource Management daemon (LRMd) or CRMd peers on other nodes via the cluster messaging infrastructure (which in turn passes them on to their LRMd process).

The peer nodes all report the results of their operations back to the DC and, based on the expected and actual results, will either execute any actions that needed to wait for the previous one to complete, or abort processing and ask the PEngine to recalculate the ideal cluster state based on the unexpected results.

In some cases, it may be necessary to power off nodes in order to protect shared data or complete resource recovery. For this, Pacemaker comes with STONITHd.

STONITH is an acronym for Shoot-The-Other-Node-In-The-Head and is usually implemented with a remote power switch.

In Pacemaker, STONITH devices are modeled as resources (and configured in the CIB) to enable them to be easily monitored for failure; however, STONITHd takes care of understanding the STONITH topology, such that its clients simply request a node be fenced, and it does the rest.

== Types of Pacemaker Clusters ==

Pacemaker makes no assumptions about your environment. This allows it to support practically any http://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations[redundancy configuration] including Active/Active, Active/Passive, N+1, N+M, N-to-1 and N-to-N.

.Active/Passive Redundancy
image::images/pcmk-active-passive.png["Active/Passive Redundancy",width="10cm",height="7.5cm",align="center"]

Two-node Active/Passive clusters using Pacemaker and DRBD are a cost-effective solution for many High Availability situations.
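To make the STONITH discussion above concrete, a fencing device is created like any other resource. This is a hedged sketch, assuming the pcs shell and the stock fence_ipmilan agent; the resource name, host list, address and credentials are placeholders:

----
# pcs stonith create ipmi-fencing fence_ipmilan \
      pcmk_host_list="pcmk-1 pcmk-2" ipaddr=10.0.0.1 \
      login=testuser passwd=abc123 op monitor interval=60s
----

Because the device is an ordinary resource in the CIB, the same failure-monitoring machinery described earlier applies to it as well.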
.Shared Failover
image::images/pcmk-shared-failover.png["Shared Failover",width="10cm",height="7.5cm",align="center"]

By supporting many nodes, Pacemaker can dramatically reduce hardware costs by allowing several active/passive clusters to be combined and share a common backup node.

.N to N Redundancy
image::images/pcmk-active-active.png["N to N Redundancy",width="10cm",height="7.5cm",align="center"]

When shared storage is available, every node can potentially be used for failover. Pacemaker can even run multiple copies of services to spread out the workload.
+
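Running "multiple copies of services" is done with clones. A minimal sketch, assuming a pcs-managed cluster; WebCheck is a made-up resource name and ocf:pacemaker:Dummy is the stock testing agent, with the status output abbreviated and illustrative:

----
# pcs resource create WebCheck ocf:pacemaker:Dummy op monitor interval=30s
# pcs resource clone WebCheck     # run one instance on every node
# pcs status resources
 Clone Set: WebCheck-clone [WebCheck]
     Started: [ pcmk-1 pcmk-2 ]
----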